
Why insist on scientific rigour?

Be wary of 'real life' studies, says Professor Edzard Ernst

I could provide dozens of examples of 'alternative' therapies for which we have a whole spectrum of research results, from the flimsy to the rigorous.

The flimsy stuff usually relates to observational studies or case series allegedly depicting the 'real life' situation. Such data typically suggest that the treatment in question works. But they are wide open to bias.

The rigorous trials, on the other hand, are usually tightly controlled, and their results often fail to show an effect. Some people then insist that this (rather typical) scenario represents a contradiction. They say 'the jury is out' because some of the evidence is positive and some is negative.

Consequently, CAM enthusiasts conclude that it is best to discard the rigorous research in favour of the 'real life' studies. After all, at the 'coal face', clinicians are faced with 'real life' patients!

This type of conclusion has always puzzled me. If we look at this scenario analytically, there is no contradiction at all: the more bias we exclude from research, the smaller the apparent effect of the treatment becomes.

This rule of thumb applies to all types of research. Take HRT as an example from mainstream medicine. Only a few years ago, tons of observational data suggested that it protects women from cardiovascular and malignant diseases. Subsequently, large controlled studies emerged which showed the opposite. Do we trust the former, weak evidence or the new, strong evidence? Not a hard question, I would say. Yet in CAM, people disagree; they often seem to prefer potentially biased data over unbiased data.

If the best evidence shows an effect size close to zero, then the treatment is probably not effective. If the 'real life' observational study suggests an effect and the rigorous trial does not, the explanation is usually simple: the observed outcome is not due to the treatment itself but to non-specific effects, confounding or bias. Where, for heaven's sake, is the contradiction?
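The confounding argument can be made concrete with a toy simulation (a minimal sketch in Python; the probabilities are invented purely for illustration). Here an inert treatment looks effective in 'real life' observational data simply because healthier patients are both more likely to choose the therapy and more likely to recover anyway, while randomisation makes the apparent effect vanish.

```python
import random

random.seed(0)
N = 100_000  # hypothetical number of patients

def trial(randomised):
    """Return the difference in recovery rates (treated minus untreated)."""
    treated, untreated = [], []
    for _ in range(N):
        healthy = random.random() < 0.5  # hidden confounder: baseline health
        if randomised:
            gets_treatment = random.random() < 0.5  # coin flip, as in an RCT
        else:
            # In 'real life', healthier patients opt for the therapy more often
            gets_treatment = random.random() < (0.8 if healthy else 0.2)
        # The treatment is inert: recovery depends only on baseline health
        recovered = random.random() < (0.9 if healthy else 0.4)
        (treated if gets_treatment else untreated).append(recovered)
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

print(f"Observational effect: {trial(False):+.2f}")  # roughly +0.30
print(f"Randomised effect:    {trial(True):+.2f}")   # roughly  0.00
```

The 'real life' data show a large benefit; the randomised comparison of the very same inert treatment shows none, because randomisation severs the link between who gets treated and who was going to recover anyway.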

What follows is as straightforward as it has proven to be unacceptable to proponents of CAM: if we want to know what the true cause of an observed outcome is, we should primarily stick to the rigorous stuff and take the 'real life' studies with a pinch of salt – not vice versa.

Professor Edzard Ernst
