Research with a stacked deck

Edzard Ernst explains how the design of some studies undermines their claims about the effectiveness of complementary therapies.

Pragmatic trials have become increasingly popular. Their advantage, according to their proponents, is that they reflect reality: most efficacy studies, they argue, are highly artificial and therefore tell us little about actual practice. Yet some pragmatic trial designs seem quite nonsensical, even unethical, to me.

The study design I have in mind is perhaps best described as the 'A+B versus B' design. A recent example illustrates it: a group of fibromyalgia patients was randomised to receive either homeopathy plus standard care (A+B) or standard care alone (B). Patients in the homeopathy group received up to three hours of consultations with a homeopath and prescriptions of individualised homeopathic remedies, in addition to standard care. Patients in the control group received standard care only.

Imagine for a moment that the homeopathic remedies were, in fact, pure placebos; the three hours of consultations would, in my view, be sufficient to explain the positive result this study generated [1].

That is probably quite clear and fairly undisputed. But why do I claim this study design to be nonsensical and unethical? Because the A+B arm adds something to the same baseline, even if that something is nothing more than attention and placebo effects, A+B can hardly fail to appear at least as good as B. Such a trial therefore has no realistic chance of producing a negative result.

We investigated this particular issue in a recent systematic review [2]. Its results seem to confirm my hypothesis: the A+B versus B design has little or no chance of generating a negative result, even if the experimental group received a placebo. That is, of course, unless one commits a type II error, i.e. fails to identify an effect where one exists, for instance because the sample size was too small or the outcome measure was not sufficiently sensitive.
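The point can be illustrated with a small simulation. This is my sketch, not an analysis from the review: all numbers (a non-specific benefit of 0.5 standard deviations from the extra attention, 50 patients per arm, a two-sided 5% significance threshold) are assumptions chosen for illustration. The 'treatment' itself is given zero specific effect, yet the A+B arm still 'wins' most of the time and essentially never loses.

```python
# Sketch of an A+B vs B trial where the add-on treatment has NO specific
# effect; the A+B arm gains only a non-specific benefit (attention, placebo).
# Effect size, sample size and critical value are illustrative assumptions.
import math
import random
import statistics

def one_trial(rng, n=50, nonspecific=0.5):
    """Return the Welch-style t statistic for one simulated trial."""
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]           # standard care alone
    ab = [rng.gauss(nonspecific, 1.0) for _ in range(n)]  # standard care + add-on
    ma, mb = statistics.mean(ab), statistics.mean(b)
    va, vb = statistics.variance(ab), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / n + vb / n)

def simulate(trials=2000, seed=1):
    """Fraction of trials where A+B is significantly better / worse than B."""
    rng = random.Random(seed)
    ts = [one_trial(rng) for _ in range(trials)]
    crit = 1.98  # approx. two-sided 5% critical value for ~98 df
    positive = sum(t > crit for t in ts) / trials   # A+B 'wins'
    negative = sum(t < -crit for t in ts) / trials  # A+B 'loses': essentially never
    return positive, negative

pos, neg = simulate()
print(f"positive: {pos:.2f}, negative: {neg:.4f}")
```

Under these assumed numbers the positive rate is driven entirely by the non-specific effect, while the rate of negative results stays at essentially zero; the remaining trials are null findings, i.e. type II errors in this framing.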

If that is so, the result of such a study is known before the trial even starts, and the finding will be positive even if the treatment itself has no effect whatsoever. This, I argue, is a fatal design flaw.

To conduct research that is known to be flawed is, by all standards of research ethics I am aware of, unethical. Why? Because wasting money and patients' cooperation in this way is a breach of research ethics.

And what has this to do with CAM? Nothing, except that, in CAM, such trial designs are increasingly popular; I suspect because negative results are increasingly unpopular.

Professor Edzard Ernst
