In previous posts I described various kinds of medical evidence, ranked from less to more reliable. This time I'll lay out how researchers collect the best evidence — something called the randomized, controlled trial. Such trials are the gold standard for clinical research in medicine. They are also the most difficult and expensive kind of clinical research to do.
The concept is simple. If, for example, an investigator wants to test whether a particular pill is effective against a certain disease, she identifies a group of patients with the condition, gives half of them the pill and the other half a fake pill, called a placebo, and sees what happens. The key to the validity of the trial is twofold: the assignment of each patient to the drug group or the placebo group must be truly random (this is often done by computer code), and neither the patient nor the investigator may know which patient is in which group until after the trial is over, at which time the code disguising the two groups is unsealed and the data analyzed. The randomized, controlled trial eliminates nearly all the problems you read about in earlier posts, although even randomized trials have occasionally been bedeviled by the discovery afterwards that the control and experimental groups did, in fact, differ in key ways.
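For the curious, the randomization step can be sketched in a few lines of code. This is only a toy illustration of the idea, not actual trial software; the patient IDs, the fixed seed, and the even split are all made up for the example:

```python
import random

def randomize(patients, seed=42):
    """Shuffle the patients and assign half to the drug and half to
    the placebo. The returned 'sealed key' maps each patient to a
    group; in a real trial it would stay hidden until the end."""
    rng = random.Random(seed)
    shuffled = patients[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    sealed_key = {p: ("drug" if i < half else "placebo")
                  for i, p in enumerate(shuffled)}
    return sealed_key

# Ten hypothetical patients, split five and five at random
key = randomize(["pt%02d" % i for i in range(10)])
```

Because the assignment is decided by the shuffle, no one choosing which patients enroll can steer sicker or healthier patients into one group.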
Randomized, controlled trials are the centerpiece of a recent movement in medical practice called evidence-based medicine. The term could, of course, allow doctors to use any sort of evidence, but what people mean by evidence-based medicine is that doctors should be guided, whenever possible, by the results of proper randomized, controlled trials. If no such data are available, then doctors should use a formal, defined process for evaluating whatever evidence there is. They should weigh the evidence much as we are doing now, ranking it from expert opinion, through case series and uncontrolled trials, up to whatever controlled-trial information is available.
There is even a name assigned to this formal evaluative process — meta-analysis. The notion of meta-analysis is that the results of several, disparate studies, which by themselves might be inconclusive, can be pooled together to reach a conclusive, composite answer. Of course one cannot make a silk purse from a sow’s ear; the summation of several bad studies can simply be a single bad study. However, meta-analysis does have the ability to make explicit how we judge the validity of medical research.
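One common way this pooling is done, called a fixed-effect meta-analysis, weights each study's result by the inverse of its variance, so that larger and more precise studies count for more. Here is a minimal sketch of the arithmetic; the three studies and all their numbers are hypothetical:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect
    estimate by the inverse of its variance, then average."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: effect estimates and their variances.
# The third study is the most precise, so it dominates the average.
effect, var = pooled_effect([0.8, 1.2, 1.0], [0.04, 0.09, 0.01])
```

Note that the pooled variance is smaller than any single study's variance — that is the sense in which combining inconclusive studies can yield a conclusive answer, provided the individual studies were sound to begin with.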
If randomized, controlled studies are the gold standard, why are we even discussing any of those less useful methods? Why not just do that kind of research for everything? The answer is two-fold. For one thing, relatively few disorders have been the subject of randomized trials because they are extremely complicated and expensive experiments to plan and carry out. They often take years to map out and execute, followed by another year or more to analyze the data. A second issue is that it is not really possible to devise a randomized, controlled trial to examine many of the medical questions parents — and pediatricians, too — have about their children, even if we had the time and money to do it.
Randomized trials are best suited to testing some kind of therapy or intervention. But the intervention must be one in which neither the researcher nor the subject can tell whether the subject is receiving the experimental treatment or the placebo. This is feasible for a pill, although even then it can prove difficult. For example, one of the trials of the effectiveness of fish oil in reducing the risk of heart disease was complicated by the fact that the fish oil had a distinctive smell and taste, alerting subjects to which group they were in.
For some things it is difficult even to devise a placebo — a surgical procedure, for example. Some questions are so important to answer that patients have undergone (with their informed consent, of course) sham surgery as a way to blind both the subject and the evaluator to which group the patient was in. Considering how difficult it is to set up trials like this, it is understandable why so many things, important things for children's health, have never been studied with a randomized, controlled trial, and very likely never will be. We will just have to decide what to do with data which, although still usable, are intrinsically less reliable. That is unfortunate, but that is the reality.
For myself, I find this notion comforting — it means there still is a place in medical practice for intuition and common sense.