Loss to Follow-up Update

Heads up about an important systematic review, recently published in the BMJ, of the effects of attrition on the outcomes of randomized controlled trials (RCTs).[1]

Background

  • Key Question: Would the outcomes of the trial change significantly if all persons had completed the study, and we had complete information on them?
  • Loss to follow-up in RCTs is important because it can bias study results: if attrition disrupts the balance in key prognostic variables that randomization established between study groups, the groups may differ in ways that would produce different outcomes regardless of treatment. If there is no imbalance between or within the various study subgroups (i.e., the as-randomized groups compared with completers), then loss to follow-up may not present a threat to validity, except in instances in which statistical significance is not reached because of decreased power.

BMJ Study
The aim of this review was to assess the reporting, extent, and handling of loss to follow-up and its potential impact on estimates of treatment effect in RCTs. The investigators evaluated 235 RCTs published from 2005 through 2007 in the five general medical journals with the highest impact factors: Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. All eligible studies reported a significant (P<0.05) primary patient-important outcome.

Methods
The investigators did several sensitivity analyses to evaluate the effect of varying assumptions about the outcomes of participants lost to follow-up on the estimate of effect for the primary outcome. Their analysis strategies were:

  • None of the participants lost to follow-up had the event
  • All the participants lost to follow-up had the event
  • None of those lost to follow-up in the treatment group had the event and all those lost to follow-up in the control group did (best case scenario)
  • All participants lost to follow-up in the treatment group had the event and none of those in the control group did (worst case scenario)
  • More plausible assumptions using various event rates, which the authors call “the event incidence”: the investigators performed sensitivity analyses using what they considered to be plausible ratios of the event rate among dropouts compared to completers, using ratios of 1, 1.5, 2, 3.5 in the intervention group compared to the control group (see Appendix 2 at the link at the end of this post, below the reference). They chose an upper limit of 5 because it represents the highest ratio reported in the literature.
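The scenario-based analyses above can be sketched in code. The trial counts below are hypothetical (not data from the review), and the confidence interval uses the standard normal approximation on the log risk ratio; the event-incidence-ratio analyses extend the same idea by assigning graded event rates to dropouts.

```python
# Sketch of LOST-IT-style sensitivity analyses: assign outcomes to participants
# lost to follow-up under each scenario, then recompute the risk ratio and CI.
# All counts are illustrative, not taken from the review.
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c):
    """Risk ratio with a 95% CI via the normal approximation on log(RR)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical trial: events among completers, completers, and lost, per arm
et, ct, lt = 30, 200, 20   # treatment arm
ec, cc, lc = 50, 200, 20   # control arm

scenarios = {
    "none lost had the event": (et,      ct + lt, ec,      cc + lc),
    "all lost had the event":  (et + lt, ct + lt, ec + lc, cc + lc),
    "best case":               (et,      ct + lt, ec + lc, cc + lc),
    "worst case":              (et + lt, ct + lt, ec,      cc + lc),
}
for name, args in scenarios.items():
    rr, lo, hi = risk_ratio_ci(*args)
    sig = "significant" if (hi < 1 or lo > 1) else "not significant"
    print(f"{name}: RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}) -> {sig}")
```

With these illustrative counts, the result stays significant when no dropouts are assumed to have the event but is wiped out under the worst-case assumption, which is exactly the fragility the review probes.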

Key Findings

  • Of the 235 eligible studies, 31 (13%) did not report whether or not loss to follow-up occurred.
  • In studies reporting the relevant information, the median percentage of participants lost to follow-up was 6% (interquartile range 2-14%).
  • The method by which loss to follow-up was handled was unclear in 37 studies (19%); the most commonly used method was survival analysis (66, 35%).
  • When the investigators varied assumptions about loss to follow-up, results of 19% of trials were no longer significant if they assumed no participants lost to follow-up had the event of interest, 17% if they assumed that all participants lost to follow-up had the event, and 58% if they assumed a worst case scenario (all participants lost to follow-up in the treatment group and none of those in the control group had the event).
  • Under more plausible assumptions, in which the incidence of events in those lost to follow-up relative to those followed up was higher in the intervention group than in the control group, 0% to 33% of trials, depending upon which plausible assumptions were used (see Appendix 2 at the link at the end of this post, below the reference), lost statistically significant differences in important endpoints.

Summary
When plausible assumptions are made about the outcomes of participants lost to follow-up in RCTs, this study reports, up to a third of positive findings lose statistical significance. The authors recommend that authors of individual RCTs and of systematic reviews test their results against various reasonable assumptions (sensitivity analyses). Readers should draw inferences from study results only when those results are robust under all reasonable assumptions.

For more information, see the Delfini white paper on “missingness” at http://www.delfini.org/Delfini_WhitePaper_MissingData.pdf

Reference

1. Akl EA, Briel M, You JJ, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ 2012;344:e2809. doi: 10.1136/bmj.e2809 (published 18 May 2012).

Article is freely available at—

http://www.bmj.com/content/344/bmj.e2809

Supplementary information is available at—

http://www.bmj.com/content/suppl/2012/05/18/bmj.e2809.DC1

For sensitivity analysis results tables, see Appendix 2 at—

http://www.bmj.com/highwire/filestream/585392/field_highwire_adjunct_files/1

Appendicitis 1889 to 2012: What, No Surgery?

All medical students learn about McBurney’s point—that’s the spot, named for McBurney, in the right lower quadrant of the abdomen where classical appendicitis pain finally localizes.[1] If the patient’s history fits the classic history of appendicitis, with vague abdominal pain eventually localizing to McBurney’s point, the norm has been—at least in the U.S.—to take the appendix out. However, as pointed out in a new systematic review done as a meta-analysis, starting in the late 1950s there were reports of success in treating appendicitis with conservative therapy (antibiotics) and good outcomes without resorting to appendectomy.[2]

This systematic review examines our traditions and the lack of conclusive evidence about best practices in managing appendicitis, and suggests that, for many patients, avoiding appendectomy may be a reasonable option. The meta-analysis, of four randomized controlled trials selected from 59 eligible trials and totaling 900 patients, reported a 31% relative risk reduction for complications of appendicitis (perforation, peritonitis, wound infection) with antibiotic treatment compared with appendectomy (risk ratio 0.69, 95% confidence interval 0.54 to 0.89; I2=0%; P=0.004). There were no significant differences between antibiotic treatment and appendectomy for length of hospital stay, efficacy of treatment, or risk of complicated appendicitis.
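A pooled risk ratio like the one reported here typically comes from inverse-variance weighting of the log risk ratios across trials. The sketch below shows that standard fixed-effect calculation, including Cochran's Q and I² for heterogeneity; the four trial estimates are hypothetical, not the review's data.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis of risk ratios.
# Each trial is given as (RR, CI lower, CI upper); the SE of log(RR) is
# backed out of the 95% CI. Trial values below are illustrative only.
import math

def pool_fixed_effect(estimates):
    """Return pooled RR, its 95% CI, and I^2 (%) from per-trial (rr, lo, hi)."""
    logs, weights = [], []
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        logs.append(math.log(rr))
        weights.append(1 / se ** 2)                      # inverse-variance weight
    w_sum = sum(weights)
    pooled = sum(w * l for w, l in zip(weights, logs)) / w_sum
    se_pooled = math.sqrt(1 / w_sum)
    # Cochran's Q and I^2 quantify between-trial heterogeneity
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled),
            i2)

trials = [(0.70, 0.45, 1.09), (0.65, 0.40, 1.06),
          (0.75, 0.50, 1.12), (0.68, 0.42, 1.10)]
rr, lo, hi, i2 = pool_fixed_effect(trials)
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")
```

Note how four individually non-significant trials can yield a significant pooled estimate once their information is combined, which is the point of meta-analysis and also why the validity of the component trials matters so much.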

The biggest problem in this meta-analysis is that the results are based on trials with significant threats to validity. Randomization sequence was computer generated in one trial, by “external randomization” in one trial, by date of birth in one trial and unclear in one trial. Concealment of allocation was by sealed envelopes in two trials and not reported in the other two trials. All trials were unblinded. Withdrawal rates are unclear. Therefore, it is uncertain how much the results of this meta-analysis may have been distorted by bias. In addition, as pointed out by an editorialist, in patients who have persistent problems despite antibiotic treatment, delayed appendectomy might be necessary.[3] Delayed appendectomy has been associated with a high complication rate. Also, if a patient develops an inflammatory phlegmon—a palpable mass at clinical examination or an inflammatory mass or abscess at imaging or at surgical exploration—appendectomy sometimes has to be converted to an ileocecal resection—a much more involved operation. Another important issue with antibiotic treatment is the chance of recurrence. The current meta-analysis found a 20% chance of recurrence of appendicitis after conservative treatment within one year. Of the recurrences, 20% of patients presented with a perforated or gangrenous appendicitis. The editorialist questions whether a failure rate of 20% within one year is acceptable.

These four trials and this meta-analysis suggest that antibiotics may be safe for some patients with uncomplicated appendicitis. If this option is considered, we believe detailed information about the uncertainties regarding benefits and risks should be made known to patients. Details are available at http://www.bmj.com/content/344/bmj.e2156

References

1. Thomas CG Jr. Experiences with Early Operative Interference in Cases of Disease of the Vermiform Appendix by Charles McBurney, M.D., Visiting Surgeon to the Roosevelt Hospital, New York City. Rev Surg. 1969 May-Jun;26(3):153-66. PubMed PMID: 4893208.

2. Varadhan KK, Neal KR, Lobo DN. Safety and efficacy of antibiotics compared with appendicectomy for treatment of uncomplicated acute appendicitis: meta-analysis of randomised controlled trials. BMJ. 2012 Apr 5;344:e2156. doi: 10.1136/bmj.e2156. PubMed PMID: 22491789.

3. BMJ 2012;344:e2546 (Published 5 April 2012).

Empirical Evidence of Attrition Bias in Clinical Trials

The commentary, “Empirical evidence of attrition bias in clinical trials,” by Jüni et al.[1] is a nice review of what has transpired since 1970, when attrition bias received attention in a critical appraisal of a non-valid trial of extracranial bypass surgery for transient ischemic attack.[2] At about the same time, Bradford Hill coined the phrase “intention-to-treat.” He wrote that excluding patient data after “admission to the treated or control group” may affect the validity of clinical trials and that “unless the losses are very few and therefore unimportant, we may inevitably have to keep such patients in the comparison and thus measure the ‘intention-to-treat’ in a given way, rather than the actual treatment.”[3] The next major development was meta-epidemiological research, which assessed trials for associations between methodological quality and effect size and found conflicting results regarding the effect of attrition bias on effect size. However, as the commentary points out, the studies assessing attrition bias were flawed.[4,5,6]

Finally, a breakthrough in understanding the distorting effect of losing subjects after randomization came from two authors evaluating attrition bias in oncology trials.[7] The investigators compared the results of their own analyses, which used individual patient data and invariably followed the intention-to-treat principle, with those done by the original investigators, which often excluded some or many patients. Pooled analyses of trials with patient exclusions reported more beneficial effects of the experimental treatment than analyses based on all or most patients who had been randomized. Tierney and Stewart showed that, in most of the meta-analyses they reviewed that were based only on “included” patients, the results favored the research treatment (P = 0.03). The commentary gives deserved credit to Tierney and Stewart for their tremendous contribution to critical appraisal and is a very nice, short read.

References

1. Jüni P, Egger M. Commentary: Empirical evidence of attrition bias in clinical  trials. Int J Epidemiol. 2005 Feb;34(1):87-8. Epub 2005 Jan 13. Erratum in: Int J Epidemiol. 2006 Dec;35(6):1595. PubMed PMID: 15649954.

2. Fields WS, Maslenikov V, Meyer JS, Hass WK, Remington RD, Macdonald M. Joint study of extracranial arterial occlusion. V. Progress report of prognosis following surgery or nonsurgical treatment for transient cerebral ischemic attacks. PubMed PMID: 5467158.

3. Bradford Hill A. Principles of Medical Statistics, 9th edn. London: The Lancet Limited, 1971.

4. Schulz KF, Chalmers I, Hayes RJ, Altman D. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–12. PMID: 7823387

5. Kjaergard LL, Villumsen J, Gluud C. Reported methodological quality and discrepancies between large and small randomized trials in metaanalyses. Ann Intern Med 2001;135:982–89. PMID 11730399

6. Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002 Jun 12;287(22):2973-82. PubMed PMID: 12052127.

7. Tierney JF, Stewart LA. Investigating patient exclusion bias in meta-analysis. Int J Epidemiol. 2005 Feb;34(1):79-87. Epub 2004 Nov 23. PubMed PMID: 15561753.

Have You Seen PRISMA?

Systematic reviews and meta-analyses are needed to synthesize evidence regarding clinical questions. Unfortunately, the quality of these reviews varies greatly. As part of a movement to improve transparency and the reporting of important details in meta-analyses of randomized controlled trials (RCTs), the QUOROM (quality of reporting of meta-analyses) statement was developed in 1999.[1] In 2009, that guidance was updated and expanded by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers, and the name was changed to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).[2] Although some authors have used PRISMA to improve the reporting of systematic reviews, thereby helping critical appraisers assess the benefits and harms of a healthcare intervention, we (and others) continue to see systematic reviews that include high-risk-of-bias RCTs in their analyses. Critical appraisers might want to be aware of the PRISMA statement.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2714672/?tool=pubmed

1. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896-1900. PMID: 10584742.

2. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009 Jul 21;339:b2700. doi: 10.1136/bmj.b2700. PubMed PMID: 19622552.

A Caution When Evaluating Systematic Reviews and Meta-analyses

We would like to draw critical appraisers’ attention to an infrequent but important problem in some systematic reviews: the accuracy of standardized mean differences. Meta-analysis of trials that used different scales to record outcomes of a similar nature requires transforming the data to a uniform scale, the standardized mean difference (SMD). Gøtzsche and colleagues, in a review of 27 meta-analyses using SMDs, found that a high proportion contained meaningful errors in data extraction and calculation of point estimates.[1] Auditing two trials from each review, they found errors for at least one of the two trials examined in 17 meta-analyses (63%). We recommend that critical appraisers be aware of this issue.
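The transformation itself is simple, which makes the error rate Gøtzsche et al. found all the more striking. A minimal sketch of the SMD (Cohen's d with a pooled standard deviation), using illustrative numbers for two trials that measure the same construct on different scales:

```python
# Standardized mean difference: mean difference divided by the pooled SD,
# so results from different measurement scales become comparable.
# All values below are illustrative, not drawn from any audited review.
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: (treatment mean - control mean) / pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Two hypothetical trials on different scales, same underlying effect
d1 = smd(12.0, 4.0, 50, 14.0, 4.0, 50)    # e.g. a 0-30 symptom scale
d2 = smd(30.0, 10.0, 40, 35.0, 10.0, 40)  # e.g. a 0-100 symptom scale
print(f"trial 1 SMD {d1:.2f}, trial 2 SMD {d2:.2f}")  # both -0.50
```

Both trials yield an SMD of -0.5 despite their different raw units, which is what lets a meta-analysis pool them; a data-extraction slip in a mean, SD, or group size silently shifts the standardized estimate.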

1. Gøtzsche PC, Hróbjartsson A, Maric K, Tendal B. Data extraction errors in meta-analyses that use standardized mean differences. JAMA. 2007 Jul 25;298(4):430-7. Erratum in: JAMA. 2007 Nov 21;298(19):2264. PubMed PMID:17652297.

A Controlled Trial of Sildenafil in Advanced Idiopathic Pulmonary Fibrosis Study (STEP-IPF Study): Evidence-based Student Review

New publication of an evidence-based student review at our California Pharmacist page. Link: http://www.delfini.org/Showcase_Publication_CPhA.htm

Controlling Hypertension and Hypotension Immediately Post Stroke Study (CHHIPS Study): Evidence-based Student Review

New publication of an evidence-based student review at our California Pharmacist page: Controlling Hypertension and Hypotension Immediately Post Stroke Study (CHHIPS Study).