Loss to Follow-up Update

Heads up about an important systematic review, recently published in the BMJ, of the effects of attrition on the outcomes of randomized controlled trials (RCTs).[1]

Background

  • Key Question: Would the outcomes of the trial change significantly if all persons had completed the study, and we had complete information on them?
  • Loss to follow-up in RCTs is important because it can bias study results if it disrupts the balance in key prognostic variables that randomization established between the study groups, a disruption that would otherwise result in different outcomes. If there is no imbalance between and within the study groups (i.e., the groups as randomized compared with completers), then loss to follow-up may not present a threat to validity, except in instances in which statistical significance is not reached because of decreased power.

BMJ Study
The aim of this review was to assess the reporting, extent, and handling of loss to follow-up and its potential impact on estimates of treatment effect in RCTs. The investigators evaluated 235 RCTs published between 2005 and 2007 in the five general medical journals with the highest impact factors: Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. All eligible studies reported a significant (P<0.05) primary patient-important outcome.

Methods
The investigators did several sensitivity analyses to evaluate the effect of varying assumptions about the outcomes of participants lost to follow-up on the estimate of effect for the primary outcome. Their analysis strategies, illustrated in the sketch after this list, were:

  • None of the participants lost to follow-up had the event
  • All the participants lost to follow-up had the event
  • None of those lost to follow-up in the treatment group had the event and all those lost to follow-up in the control group did (best case scenario)
  • All participants lost to follow-up in the treatment group had the event and none of those in the control group did (worst case scenario)
  • More plausible assumptions using various event rates, which the authors call the “event incidence”: the investigators performed sensitivity analyses using what they considered plausible ratios of event rates in the dropouts compared to the completers, with ratios of 1, 1.5, 2, 3, and 5 in the intervention group compared to the control group (see Appendix 2 at the link at the end of this post, below the reference). They chose an upper limit of 5 because it represents the highest ratio reported in the literature.
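
To make these strategies concrete, here is a minimal sketch in Python (our illustration, not the investigators' code) that applies each assumption to a hypothetical two-arm trial and recomputes a simple two-proportion significance test. All counts are invented, and the helper names (two_prop_p, events_in_lost) are ours; the published analyses worked with each trial's own effect measures.

```python
# A minimal sketch of the LOST-IT sensitivity strategies on invented data.
# Uses only the standard library; not the authors' code.
import math

def two_prop_p(e1, n1, e2, n2):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

# Hypothetical trial: events among completers, and numbers lost to follow-up.
n_t, e_t, lost_t = 500, 30, 25   # treatment arm
n_c, e_c, lost_c = 500, 50, 25   # control arm

def events_in_lost(e, n, lost, ratio):
    """Events among those lost, assuming their event incidence is
    `ratio` times that of the participants who were followed up."""
    followed = n - lost
    return min(lost, round(ratio * (e / followed) * lost))

scenarios = {
    "none had event": (0, 0),
    "all had event":  (lost_t, lost_c),
    "best case":      (0, lost_c),
    "worst case":     (lost_t, 0),
}
# "Plausible" assumptions: event incidence among dropouts relative to
# completers of 1, 1.5, 2, 3, and 5 in treatment vs. 1 in control.
for r in (1, 1.5, 2, 3, 5):
    scenarios[f"ratio {r} vs 1"] = (
        events_in_lost(e_t, n_t, lost_t, r),
        events_in_lost(e_c, n_c, lost_c, 1),
    )

print(f"completers only : p = {two_prop_p(e_t, n_t - lost_t, e_c, n_c - lost_c):.4f}")
for name, (x_t, x_c) in scenarios.items():
    print(f"{name:>15} : p = {two_prop_p(e_t + x_t, n_t, e_c + x_c, n_c):.4f}")
```

With these made-up numbers, the completers-only result is significant (p ≈ 0.02) and survives the “none” and best-case assumptions, but loses significance under the “all had event”, worst-case, and higher event-incidence-ratio assumptions: exactly the fragility the review quantifies.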

Key Findings

  • Of the 235 eligible studies, 31 (13%) did not report whether or not loss to follow-up occurred.
  • In studies reporting the relevant information, the median percentage of participants lost to follow-up was 6% (interquartile range 2-14%).
  • The method by which loss to follow-up was handled was unclear in 37 studies (19%); the most commonly used method was survival analysis (66, 35%).
  • When the investigators varied assumptions about loss to follow-up, the results of 19% of trials were no longer significant if they assumed that no participants lost to follow-up had the event of interest, 17% if they assumed that all participants lost to follow-up had the event, and 58% if they assumed a worst-case scenario (all participants lost to follow-up in the treatment group, and none of those in the control group, had the event).
  • Under more plausible assumptions, in which the incidence of events in those lost to follow-up relative to those followed up was higher in the intervention group than in the control group, 0% to 33% of trials (depending upon which plausible assumptions were used; see Appendix 2 at the link at the end of this post, below the reference) lost statistically significant differences in important endpoints.

Summary
When plausible assumptions are made about the outcomes of participants lost to follow-up in RCTs, this study reports that up to a third of positive findings in RCTs lose statistical significance. The authors recommend that investigators reporting individual RCTs and systematic reviews test their results against various reasonable assumptions (sensitivity analyses). Only when the results are robust to all reasonable assumptions should readers draw inferences from them.

For more information, see the Delfini white paper on “missingness” at http://www.delfini.org/Delfini_WhitePaper_MissingData.pdf

Reference

1. Akl EA, Briel M, You JJ, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ 2012;344:e2809. doi: 10.1136/bmj.e2809 (published 18 May 2012).

Article is freely available at—

http://www.bmj.com/content/344/bmj.e2809

Supplementary information is available at—

http://www.bmj.com/content/suppl/2012/05/18/bmj.e2809.DC1

For sensitivity analysis results tables, see Appendix 2 at—

http://www.bmj.com/highwire/filestream/585392/field_highwire_adjunct_files/1


Adjusting for Multiple Comparisons

Frequently, studies report results that are not the primary or secondary outcome measures, sometimes because the finding was not anticipated, is unusual, or is judged important by the authors. How should these findings be assessed? A common belief is that if outcomes are not pre-specified, serious attention to them is not warranted. But is this the case? Kenneth J. Rothman wrote an article in 1990 that we feel is very helpful in such situations.[1]

  • Rothman points out that making statistical adjustments for multiple comparisons is an extension of the logic of significance testing, in which the investigator uses the P-value to estimate the probability of a study demonstrating an effect size as great as or greater than the one found, given that the null hypothesis is true, i.e., that there is truly no difference between the groups being studied (with alpha, frequently set at 5%, as the arbitrary cutoff for statistical significance). Obviously, if the risk of rejecting a truly null hypothesis is 5% for every hypothesis examined, then examining multiple hypotheses will generate a larger number of falsely positive statistically significant findings simply because more hypotheses are examined (the sketch after this list puts numbers on this).
  • Adjusting for multiple comparisons is thought by many to be desirable because it results in a smaller probability of erroneously rejecting the null hypothesis. Rothman argues that this “paying for peeking” at more data by adjusting P-values for multiple comparisons is unnecessary and can be misleading: it amounts to paying a penalty simply for appropriately doing more comparisons, and there is no logical reason (or good evidence) for the statistical adjustment. Rather, the burden is on those who advocate multiple comparison adjustments to show there is a problem requiring a statistical fix.
  • Rothman’s conclusion: it is reasonable to consider each association on its own for the information it conveys; he believes there is no need to adjust P-values for multiple comparisons.
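
To put numbers on the multiplicity point above, here is a small arithmetic sketch (ours, not Rothman's). With k independent tests of true null hypotheses at alpha = 0.05, the chance of at least one false positive, the family-wise error rate, is 1 - (1 - 0.05)^k; the Bonferroni adjustment (alpha / k) buys it back at the cost Rothman questions, namely a stricter threshold for every individual comparison.

```python
# Family-wise error rate for k independent tests of true nulls, with and
# without a Bonferroni adjustment. Illustrative arithmetic only.
ALPHA = 0.05

for k in (1, 5, 10, 20, 50):
    fwer = 1 - (1 - ALPHA) ** k            # P(>=1 false positive), unadjusted
    per_test = ALPHA / k                   # Bonferroni per-test threshold
    fwer_adj = 1 - (1 - per_test) ** k     # back near ALPHA after adjustment
    print(f"k={k:>2}: unadjusted FWER={fwer:.3f}, "
          f"per-test alpha={per_test:.4f}, adjusted FWER={fwer_adj:.3f}")
```

At k = 10, for example, the unadjusted chance of at least one false positive is about 40%, while the Bonferroni-adjusted per-test threshold of 0.005 holds it near 5%.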

Delfini Comment: Reading his paper is a bit difficult, but he makes some good points: we do not really understand what chance is all about, and evaluating study outcomes for validity requires critical appraisal for the assessment of bias and other factors, as well as the use of statistics for evaluating chance effects.

Reference

1. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990 Jan;1(1):43-6. PMID: 2081237.


Some Points About Surrogate Outcomes Courtesy of Steve Simon PhD

Our experience is that most healthcare professionals have difficulty understanding the appropriate place of surrogate outcomes (aka intermediate outcome measures, proxy markers, or surrogate markers). For a very nice, concise round-up of some key points, you can read Steve Simon’s short review. Steve has a PhD in statistics and many years of experience in teaching statistics: http://www.pmean.com/news/201203.html#1


From Richard Lehman’s Blog on Clinical Trial Quality

From Richard Lehman’s Blog, JAMA, 2 May 2012, Vol 307:

“Here the past and present custodians of this site look at the quality of the trials registered between 2007 and 2010. They ‘are dominated by small trials and contain significant heterogeneity in methodological approaches, including reported use of randomization, blinding, and data monitoring committees.’ In other words, these trials are never going to yield clinically dependable data; most of them are futile, and therefore by definition unethical. Something is terribly wrong with the system which governs clinical trials: it is failing to protect patients and failing to generate useful knowledge. Most of what it produces is not evidence, but rubbish. And with no system in place to compel full disclosure of the data, it is often impossible to tell one from the other.”
http://jama.ama-assn.org/content/307/17/1838.abstract

For more from Richard Lehman, go to his Journal Watch reviews at http://www.cebm.net/index.aspx?o=2320


The Problems With P-values

Think you understand p-values? We thought we did too. We were wrong. A huge number of us have been taught incorrectly. Thanks to Dr. Brian Alper, Editor-in-Chief of DynaMed, who brought this to our attention and who, with some other writers, helped us work through the brambles. See our new definitions and explanations of “p-value” and “confidence intervals” in the glossary on our website. We have also added some thinking about “multiplicity testing.” Our tools have been updated to reflect these changes, so you may wish to download fresh copies of your favorite validity tools. See also our recommendation for DynaMed. Go to http://www.delfini.org/delfiniNew.htm and see the entry at 05/10/2012.


Dr. Otis Brawley & Overuse in Healthcare

Everyone will want to listen to Dr. Otis Brawley, Chief Medical Officer of the American Cancer Society, discuss why overuse in healthcare is costing us money, jobs, and other harms. He talks like a real person, not like a professor, and is easy to listen to. Who is at fault for all of our healthcare woes? Watch it and you will see we are all to blame. We need reliable information to make good choices, and very few people are getting it.

https://www.youtube.com/watch?v=LOdDS8rd4-8


“Move More” Packets for Cancer Patients

Macmillan Cancer Support is a London-based organization providing practical, medical, and financial support to cancer patients in Britain. It is on the shortlist for the BMJ Group award for healthcare communication because of its “Move More” packet, a physical activity and cancer information initiative urging patients to become more active during and after cancer treatment. The impetus for this project is the ongoing problem of cancer patients still being told to rest, rather than keep active, during and after treatment. The packs, for patients and caregivers, outline the benefits of gentle activity and suggest ways to introduce activity into their lives. For example, one very popular inclusion was packets of seeds, to encourage people to get outside into their gardens. People loved the seeds and looked forward to seeing the flowers bloom and the veggies grow. For more information, see BMJ 2012;344:e2866, doi: 10.1136/bmj.e2866.


Open Access—One Step Forward

PLoS One, a peer-reviewed online publication, has blazed the trail for open access in that all publication costs are covered by author charges of $1,350 and readers pay nothing. Other publications are expected to follow suit in the coming years. Open access may be assisted by a new bill in Congress, the Federal Research Public Access Act, which would require all federally funded research to be placed online for free access within six months of publication. Although the bill still embargoes access for providers and patients for six months, these developments signal what may be important progress toward full open access to healthcare information.

For further information see BMJ 2012;344:e2937 doi: 10.1136/bmj.e2937 and BMJ 2012;344:e2895 doi: 10.1136/bmj.e2895
