Another Study Warns That Evidence From Observational Studies Provides Unreliable Results For Therapies

We have previously mentioned the enormous contributions made by John Ioannidis, MD, in the area of understanding the reliability of medical evidence. [Ioannidis, Delfini Blog, Giannakakis] We want to draw your attention to a recent publication dealing with the risks of relying on observational data for cause-and-effect conclusions. [Hemkens] In this study, Hemkens, Ioannidis, and colleagues assessed differences between the mortality effect sizes reported in studies of routinely collected data (RCD) and those reported in subsequent randomized controlled trials (RCTs).

Eligible RCD studies used propensity scores in an effort to address confounding bias. The analysis included only RCD studies conducted before any RCT had been published on the same topic. The authors assessed the risk of bias of both the RCD studies and the RCTs using The Cochrane Collaboration's risk of bias tools, then compared the direction of treatment effects, confidence intervals, and effect sizes (odds ratios) between the RCD studies and the RCTs. Relative odds ratios were calculated across all pairs of RCD studies and trials.
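
For readers unfamiliar with propensity methods, here is a minimal, purely illustrative sketch of propensity-score weighting, the general kind of confounding adjustment these RCD studies relied on. The confounders ("age", "severity"), effect sizes, and data below are invented; this is not the authors' analysis.

```python
# Illustrative sketch of inverse-probability-of-treatment weighting (IPTW)
# using a propensity score. All data are simulated; "age" and "severity"
# are hypothetical confounders that drive treatment assignment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)
X = np.column_stack([age, severity])

# Sicker, older patients are more likely to be treated (confounding).
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 65) + 0.8 * severity)))
treated = rng.random(n) < p_treat

# Propensity score: modeled probability of treatment given confounders.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Weighting by 1/ps (treated) and 1/(1-ps) (untreated) balances the groups
# on the measured confounders -- but only on the measured ones.
w = np.where(treated, 1 / ps, 1 / (1 - ps))
print("unweighted mean severity:",
      severity[treated].mean().round(2), "vs", severity[~treated].mean().round(2))
print("weighted mean severity:  ",
      np.average(severity[treated], weights=w[treated]).round(2), "vs",
      np.average(severity[~treated], weights=w[~treated]).round(2))
```

Note the caveat in the comment: weighting can only balance confounders that were measured, which is exactly why such adjustment may still leave RCD estimates unreliable.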

The authors found that RCD studies systematically and substantially overestimated the mortality benefits of medical treatments compared with subsequent trials investigating the same question. Overall, RCD studies reported mortality estimates that were more favorable by a relative 31% than those of subsequent trials (summary relative odds ratio 1.31; 95% confidence interval 1.03 to 1.65; I² = 0%).
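
To see what a relative odds ratio is doing arithmetically, here is a small sketch with made-up numbers. For the sketch we take the ratio as the RCT odds ratio over the RCD odds ratio, so a value above 1 means the observational study looked more favorable; the paper's exact convention is spelled out by Hemkens et al., and all estimates below are hypothetical.

```python
# Sketch: a relative odds ratio (ROR) comparing an observational (RCD)
# estimate with an RCT estimate, combined on the log scale.
# All numbers are made up; they are not from Hemkens et al.
import math

def relative_odds_ratio(or_rcd, ci_rcd, or_rct, ci_rct, z=1.96):
    """ROR = OR_RCT / OR_RCD with a 95% CI (independent estimates assumed)."""
    # Recover each standard error from the width of its log-scale CI.
    se_rcd = (math.log(ci_rcd[1]) - math.log(ci_rcd[0])) / (2 * z)
    se_rct = (math.log(ci_rct[1]) - math.log(ci_rct[0])) / (2 * z)
    log_ror = math.log(or_rct) - math.log(or_rcd)
    se = math.sqrt(se_rcd**2 + se_rct**2)
    return math.exp(log_ror), (math.exp(log_ror - z * se),
                               math.exp(log_ror + z * se))

# Hypothetical pair: the RCD study (OR 0.60) looks better than the RCT (OR 0.85).
ror, ci = relative_odds_ratio(or_rcd=0.60, ci_rcd=(0.45, 0.80),
                              or_rct=0.85, ci_rct=(0.70, 1.03))
print(f"ROR = {ror:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")  # ~1.42 here
```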

These authors remind us yet again that, if no randomized trials exist, clinicians and other decision-makers should not trust results from observational data drawn from sources such as local or national databases, registries, or cohort and case-control studies.

References
Delfini Blog: https://delfini.org/blog/?p=292

Giannakakis IA, Haidich AB, Contopoulos-Ioannidis DG, Papanikolaou GN, Baltogianni MS, Ioannidis JP. Citation of randomized evidence in support of guidelines of therapeutic and preventive interventions. J Clin Epidemiol. 2002 Jun;55(6):545-55. PubMed PMID: 12063096.

Hemkens LG, Contopoulos-Ioannidis DG, Ioannidis JP. Agreement of treatment effects for mortality from routinely collected data and subsequent randomized trials: meta-epidemiological survey. BMJ. 2016 Feb 8;352:i493. doi: 10.1136/bmj.i493. PubMed PMID: 26858277.

Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):696-701. PubMed PMID: 16060722.


Comparative Study Designs: Claiming Superiority, Equivalence and Non-inferiority—A Few Considerations & Practical Approaches

This is a complex area, and we recommend downloading our freely available 1-page summary to help assess issues with equivalence and non-inferiority trials. A short sampling of the problems in these designs: lack of sufficient evidence confirming the efficacy of the referent treatment (“referent” refers to the comparator treatment); a study not sufficiently similar to the referent study; inappropriate Deltas (the margin established for equivalence or non-inferiority); and significant biases or analysis methods that tend to diminish the effect size and “favor” no difference between groups (e.g., conservative application of ITT analysis, insufficient power), thus pushing the result toward non-inferiority or equivalence.

However, we do want to say a few more things about non-inferiority trials based on some recent questions and readings.

Is it acceptable to claim superiority in a non-inferiority trial? Yes. The Food and Drug Administration (FDA) and the European Medicines Agency (EMA), among others (ourselves included), agree that declaring superiority in a non-inferiority trial is acceptable. What’s more, there is agreement that multiplicity adjustment is not needed when testing first for non-inferiority and then for superiority.

See Delfini Recommended Reading (link below): included there is a nice article by Steve Snapinn. Snapinn even recommends that “…most, if not all, active-controlled clinical trial protocols should define a noninferiority margin and include a noninferiority hypothesis.” We agree. Clinical trials are expensive, take time, carry opportunity costs, and—most importantly—affect the lives of the human subjects who take part in them. Defining the margin up front is a smart procedure that costs nothing, especially as multiplicity adjustment is not needed.

What does matter is having an appropriate population for the superiority analysis. For superiority in studies with dichotomous outcomes, the population should be intention-to-treat (ITT), with an imputation method that does not favor the intervention under study. In studies with time-to-event outcomes, the population should be based on the ITT principle (meaning all randomized patients are analyzed in the group to which they were randomized) with unbiased censoring rules.
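
As a concrete illustration of imputation that cannot favor the intervention, here is a minimal sketch for a dichotomous outcome using worst-case imputation: missing outcomes count as failures in the intervention arm and as successes in the control arm. The trial counts are hypothetical, and this is only one of several acceptable conservative approaches.

```python
# Sketch: worst-case ("conservative") imputation under ITT for a
# dichotomous outcome. Missing outcomes are imputed against the
# intervention, so the imputation cannot inflate its apparent benefit.
# All counts are hypothetical.

def itt_success_rate(n_randomized, n_success, n_missing, is_intervention):
    """Success proportion over ALL randomized patients (ITT), imputing
    missing outcomes as failures for the intervention arm and as
    successes for the control arm."""
    imputed_successes = 0 if is_intervention else n_missing
    return (n_success + imputed_successes) / n_randomized

drug = itt_success_rate(100, 60, 10, is_intervention=True)      # 0.60
control = itt_success_rate(100, 55, 10, is_intervention=False)  # 0.65
print(f"drug {drug:.2f} vs control {control:.2f}")
```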

Confidence intervals (CIs) should be evaluated to determine superiority. Some evaluators seem to suggest that superiority can be declared only if the CI sits wholly above the Delta. Schumi et al. express the opinion that you can declare superiority if the confidence interval for the new treatment is above the line of no difference (i.e., is statistically significant). They state, “The calculated CI does not know whether its purpose is to judge superiority or non-inferiority. If it sits wholly above zero [or 1, depending upon the measure of outcome], then it has shown superiority.” The EMA would seem to agree, and we agree as well. If you wish to take a more conservative approach, one method we recommend is to judge whether the Delta seems clinically reasonable (you should always do this) and, if not, establish your own through clinical judgment. Then determine whether the entire CI meets or exceeds what you deem to be clinically meaningful. To us, this method satisfies both approaches and makes practical and clinical sense.
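
Here is the decision logic above as a small sketch, written for a risk difference where positive values favor the new treatment. Because each claim turns on the CI sitting wholly above some line, only the lower confidence limit matters; the margin and the "clinically meaningful" threshold are hypothetical inputs that must be justified clinically.

```python
# Sketch: classifying a trial result from the lower 95% confidence limit
# of the risk difference (new minus referent; positive favors new).
# delta is the non-inferiority margin (a negative number, e.g., -0.05);
# meaningful is the smallest clinically important benefit (>= 0).

def classify(ci_lower, delta, meaningful=0.0):
    if meaningful > 0 and ci_lower > meaningful:
        return "superior (entire CI exceeds the clinically meaningful difference)"
    if ci_lower > 0:
        return "superior (CI wholly above no difference, per Schumi et al.)"
    if ci_lower > delta:
        return "non-inferior (CI wholly above the margin but includes no difference)"
    return "non-inferiority not shown (CI crosses the margin)"

print(classify(ci_lower=-0.02, delta=-0.05))                   # non-inferior
print(classify(ci_lower=0.01, delta=-0.05))                    # superior
print(classify(ci_lower=0.04, delta=-0.05, meaningful=0.03))   # conservative superiority
```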

Is it acceptable to claim non-inferiority in a trial designed to show superiority? It depends. This area is controversial, with some saying no and some saying it depends. However, there is agreement among those on the “it depends” side that it generally should not be done because of validity issues such as those described above.

References
US Department of Health and Human Services, Food and Drug Administration: Guidance for Industry Non-Inferiority Clinical Trials (DRAFT). 2010. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM202140.pdf

European Agency for the Evaluation of Medicinal Products Committee for Proprietary Medicinal Products (CPMP): Points to Consider on Switching Between Superiority and Non-Inferiority. 2000. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2014556/

Delfini Recommended Reading: http://www.delfini.org/delfiniReading.htm#equivalence


Are Adaptive Trials Ready For Primetime?

It is well known that many patients volunteer for clinical trials because they mistakenly believe that the goal of the trial is to improve outcomes for the volunteers. One type of trial that does attempt to improve outcomes for those who enter the trial late is the adaptive trial. In adaptive trials, investigators change the enrollment and treatment procedures as the study gathers data about treatment efficacy. For example, if a study compares a new drug against a placebo and the drug appears to be working, subjects enrolling later will be more likely to receive it. The idea is that adaptive designs will attract more study volunteers.
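
As a purely illustrative example of how allocation can drift toward the better-performing arm, here is a sketch of one common response-adaptive scheme, Thompson sampling with Beta posteriors. This is a generic textbook approach, not the design of any particular trial discussed in the commentaries below.

```python
# Sketch: response-adaptive randomization via Thompson sampling.
# Each arm keeps a Beta(1 + successes, 1 + failures) posterior on its
# success rate; the next subject goes to the arm with the higher draw.
import random

successes = {"drug": 0, "placebo": 0}
failures = {"drug": 0, "placebo": 0}

def assign_arm():
    draws = {arm: random.betavariate(1 + successes[arm], 1 + failures[arm])
             for arm in successes}
    return max(draws, key=draws.get)

def record_outcome(arm, success):
    (successes if success else failures)[arm] += 1

# Hypothetical truth: the drug works better, so allocation drifts toward it.
true_rate = {"drug": 0.6, "placebo": 0.4}
counts = {"drug": 0, "placebo": 0}
for _ in range(200):
    arm = assign_arm()
    counts[arm] += 1
    record_outcome(arm, random.random() < true_rate[arm])
print(counts)  # typically far more subjects end up on "drug"
```

Note that the same feedback loop that benefits later enrollees is what creates the unblinding and early-versus-late risk concerns discussed next.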

As pointed out in a couple of recent commentaries, however, there are many unanswered questions about this type of trial. A major concern is the unblinding that may occur with this design, with resulting problems in the allocation of patients to groups. Frequent peeks at the data may influence decisions made by monitoring boards, investigators and participants. Another issue is the unknown reproducibility of adaptive trials. Finally, there are ethical questions, such as the greater risk borne by early enrollees compared with later enrollees.

For further information see—

1. van der Graaf R, Roes KC, van Delden JJ. Adaptive trials in clinical research: scientific and ethical issues to consider. JAMA. 2012 Jun 13;307(22):2379-80. PubMed PMID: 22692169.

2. Meurer WJ, Lewis RJ, Berry DA. Adaptive clinical trials: a partial remedy for the therapeutic misconception? JAMA. 2012 Jun 13;307(22):2377-8. PubMed PMID: 22692168.


Advice On Some Quasi-Experimental Alternatives To Randomization

We have found a lot of help over the years in reading the advice and postings of statistician Dr. Steve Simon. Here's an entry in which he discusses some considerations when dealing with quasi-experimental designs. You can sign up for his newsletter to receive it directly. (Note: if you keep reading to the next entry, about how much of practice is estimated to be evidence-based, we suspect that the reported percentage might be inflated if the reviewers were not applying a solid critical appraisal approach.) You can read Steve's advice about quasi-experimental design considerations here:

http://www.pmean.com/news/201201.html#1
