Posted by: Edzard Ernst, 25 August 2012
The field of alternative medicine (AM) is littered with surveys and other observational studies.1 Typically, a group of patients who have elected to use some form of AM are asked whether they experienced any benefit from it. Usually, the results show that around 70% of patients experience benefit after the administration of this or that form of AM.2 Because their sample size is often large and the setting is that of everyday practice, such surveys are promoted by AM enthusiasts as being somehow more meaningful, important and relevant to 'real life' than randomized clinical trials, which typically are much smaller and more 'artificial'.3
The argument that a particular treatment has stood the 'test of time' is used for a similar purpose. Acupuncture, for instance, has been around for thousands of years. Homeopathy has been used 'successfully' by millions for almost 200 years. If these treatments were ineffective, surely they would not have survived that long. The authors of a recent article on oxygen-ozone therapy (OOT) put it in a nutshell: 'Several million treatments by OOT have been performed worldwide indicating its usefulness.'4
Clinicians practising AM tell us that they witness the effectiveness of AM on a daily basis. Their patients report benefit, and they even pay considerable amounts of money out of their own pockets for the treatment. Surely, they cannot all be mistaken or deluded! Who needs artificial and imperfect scientific studies to test something that is already perfectly obvious? Who needs trials when we already have 'field tests' on millions of patients?3
To many people, particularly journalists and politicians, these arguments seem intuitively convincing. Yet such observational data all have one crucial characteristic in common: they do not allow inference of cause and effect.
Perhaps the patient’s condition would have improved anyway? Perhaps some patients also administered other treatments? Perhaps they were not helped by AM but by an associated placebo effect? Perhaps some clinicians predominantly remember their successes and forget their failures? Perhaps patients who do not get better do not come back for more? Perhaps the ‘test of time’ simply shows that, given the right circumstances, even nonsense can survive?
This is not to say that observations are useless; on the contrary, observational data can be helpful for several purposes (for example, formulating hypotheses or generating information about therapeutic risks).
But for establishing cause and effect, they usually are worthless. After all, the plural of anecdote is 'anecdotes' and not 'data'.
Relying on observations for causal inferences is a mistake which, in my experience, is much more prevalent in AM than in conventional medicine; it is a mistake that hinders progress and puts patients at risk.
Professor Edzard Ernst is professor of complementary medicine at the Peninsula Medical School, University of Exeter
(1) Ernst E. Prevalence surveys: to be taken with a pinch of salt. Complement Ther Clin Pract 2006; 12(4):272-275.
(2) Spence DS, Thompson EA, Barron SJ. Homeopathic treatment for chronic disease: a 6-year, university-hospital outpatient observational study. J Altern Complement Med 2005; 11:793-798.
(3) Ernst E. Classic flaws in clinical CAM research. FACT 2010; 15(3):207-209.
(4) Bocci V, Zanardi I, Borelli E, Travagli V. Reliable and effective oxygen-ozone therapy at a crossroads with ozonated saline infusion and ozone rectal insufflation. J Pharm Pharmacol 2012; 64(4):482-489.