Re-examining Data from Clinical Trials Can Yield Different Results
The results of medical clinical trials can be flawed, a new study reported. According to researchers from the Stanford University School of Medicine, roughly one-third of the randomized clinical trials they re-examined reached conclusions that differed from the original findings.
"Randomized trials are at the hub of making decisions about whether these are drugs or devices we want to use on real patients in real life," said study senior author Dr. John Ioannidis, director of the Stanford Prevention Research Center, as reported by Philly.
For this study, the team identified 37 published re-analyses of trials for which raw clinical trial data were available. After reviewing these publications, the team found that 35 percent of them (13 re-analyses) had reached conclusions different from those of the original trials. Nine of these re-analyses concluded that more patients should have been treated, while one found that fewer patients should have been treated. Three suggested that the trials should have used different criteria when deciding which patients would qualify for treatment.
Ioannidis explained that differing results could stem from researchers applying different statistical methods. Researchers conducting follow-up analyses might also draw on new research that was unavailable to the original trial team. Regardless, the fact that data can be interpreted in so many ways is troubling, because it suggests that who can or cannot be treated with certain medications depends greatly on how and when a research team analyzes the raw data.
"There are new insights and different opinions if you have a different group interpret the data, and that represents the degree to which people can analyze data differently," said Dr. Eric Peterson, a cardiologist at Duke University Medical Center, who authored an editorial accompanying this study.
In rare cases, clear errors affected the results. More often, however, the differences between the analyses could be attributed to ordinary human biases rather than misconduct.
"These are not biases that result from fraud. These are biases that result from being human," commented George Prendergast, CEO of the Lankenau Institute for Medical Research in Wynnewood, Pa., and editor-in-chief of Cancer Research. "The brain is a pattern-generating tool, and it tends to want to generate patterns that come from previous experience."
Given these differences, the researchers stressed the importance of sharing raw data so that other researchers can re-examine it. When trials withhold this data, distrust can grow.
"I am very much in favor of data sharing, and believe there should be incentives for independent researchers to conduct these kinds of re-analyses," said Ioannidis, according to the university's press release. "They can be extremely insightful."
The study, "Reanalyses of Randomized Clinical Trial Data," was published in the Journal of the American Medical Association (JAMA).