Investors interested in pharma stocks and patients eager to know if an experimental drug works have one thing in common: they devour stories reporting the results of clinical trials, which assess whether a new drug is safe and effective. Now it turns out they have something else in common: they’re not getting the whole story.
Results from fully one third of the clinical trials of five classes of drugs never see the light of day, finds an analysis published in Annals of Internal Medicine. The drugs were anticholesteremics (which lower cholesterol), antidepressants, antipsychotics, proton-pump inhibitors (which reduce gastric acid), and vasodilators (which relax blood-vessel walls to reduce blood pressure).
Such publication bias was supposed to be addressed by the requirement that all clinical trials be registered at clinicaltrials.gov, and the results posted when they come in. But as a British neuroscientist who blogs as Neuroskeptic notes, that is still a step short of requiring publication: “The problem is that clinicaltrials.gov doesn’t appear on PubMed [the online database of biomedical studies and publications], and medical science works on the rule of ‘PubMed or it didn’t happen.’ Someone searching for papers about ‘drug X for disease Y’—which I suspect accounts for the vast majority of clinical paper downloads—will still only get told about the trials that the authors chose to publish.”
The specific pattern of nonpublication is particularly disturbing. The Annals study, led by Florence Bourgeois of Children’s Hospital Boston and Kenneth Mandl of the Children’s Hospital Informatics Program there, finds that of 546 drug trials conducted between 2000 and 2006 (a cutoff chosen to allow several years for results to reach print), only 32 percent of those primarily funded by industry were published within 24 months of completion. That compares with 56 percent of trials funded by nonprofit or nonfederal organizations with no industry money. Combine that with another finding (industry-funded trials reported positive outcomes in 85 percent of their publications, compared with 50 percent for government-funded trials) and a worrisome specter emerges: industry is cherry-picking which of its clinical trials to publish, deep-sixing those that failed to show that a new drug is effective.
We’ve known for years that publication bias has skewed how doctors and patients perceive antidepressants: the perception is that the drugs correct the brain’s neurochemistry and work. But when you analyze not just the cherry-picked published data but the unpublished studies as well, the reality is far from that, as I wrote earlier this year. That the problem also extends to drugs meant to lower cholesterol or to treat serious mental illness, excess gastric acid, or hypertension suggests that many more patients have real reason to wonder about the effectiveness of their medication.
There are two interpretations of why industry-funded trials that get published tend to be more positive than those without industry funding. The benign one is that industry is simply better at doing trials and picking winners, says Mandl. A more worrisome interpretation is that “bias may be creeping in that causes these drugs to appear to have more positive outcomes than they actually do,” he says.
Publication bias does not affect whether a drug is approved; the Food and Drug Administration requires drugmakers to submit all data, published and unpublished, when they seek approval to sell a new drug. The problem comes later, when individual doctors comb through the literature for information on a drug. Selective publication of positive studies means doctors are getting an inaccurate picture of drugs’ effectiveness. And now that we are in an age of evidence-based medicine, there is an even bigger threat. Professional societies base their treatment recommendations and guidelines on published studies, not on results locked in a safe at a drug company or buried on a Web site (since September 2008, clinicaltrials.gov has required that all results, published or not, be submitted), Mandl pointed out to me. “The practice of medicine is heavily dependent on the published literature, which is synthesized into guidelines and recommendations,” he notes. It is the “evidence” of evidence-based medicine.
This is not the first study to find that publication bias is still going strong, but it is the most complete. A 2009 analysis in PLoS Medicine, a peer-reviewed, open-access journal, examined a random sample of 10 percent of registered clinical trials and found that, at least two years after the studies’ completion, fewer than half (311 of 677, or 46 percent) had been published. Again, trials primarily sponsored by industry were less likely to be published than those funded by nonindustry, nongovernment sources: 40 percent vs. 56 percent.
The consequences of selective publication have escalated in an age when millions of people look for health information online. Harris Interactive reported this week that 175 million Americans have gone online this year seeking health information—88 percent of all adults online, and enough to inspire the term “cyberchondriacs.” Few of them have the knowledge to wade through the raw data at clinicaltrials.gov. Instead, they perform searches that return links to published papers, or to Web sites and publications that report on those published papers. The result is that we are too often misled about whether drugs are effective.