Why People Tend to Overuse Healthcare Interventions

This nice piece in Time Magazine by Maia Szalavitz provides some clues about our major problem of overuse. Ms. Szalavitz documents the convincing power of anecdotes compared with statistics, which are poorly understood by most people. She provides a really nice example of decision support from the Harding Center for Risk Literacy for prostate cancer screening that illustrates graphically how prostate cancer screening is likely to create more harms than benefits. For more information go to:

http://healthland.time.com/2012/05/25/why-people-cling-to-cancer-screening-and-other-questionable-medical-interventions-even-when-they-cause-harm/


Safety Review of Five Biologic Antirheumatic Drugs

An abstract of ours was selected for publication by the European League Against Rheumatism (EULAR) for its Annual European Congress of Rheumatology 2012. We believe that our review provides important safety information for providers and patients. While the evidence has to be considered borderline at best because of study design and methodology issues (much of it is observational, for example), we believe the patterns are highly compelling and consistent and are not likely to be explained by a systematic bias. Therefore, we feel quite confident in the direction of the outcomes. (The link below is sometimes slow to load or needs to be loaded a second time to view; if it “fails to load,” try again.)

Stuart ME, Strite SA, Gandra SA. Systematic Safety Review of Five Biologic Antirheumatic Drugs. Abstract number AB0478; EULAR 2012 Annual European Congress of Rheumatology.


Dr. John Ioannidis on Clinical Trials Issues, Cost and Inappropriate Care

Since 1949, the NIH has provided a biweekly newsletter, the NIH Record, for employees of the National Institutes of Health. Mostly the NIH Record announces upcoming on-campus talks, but it also summarizes some of them. A recent issue summarized a talk on bias in healthcare trials delivered by Dr. John Ioannidis, director of the Stanford Prevention Research Center. Some of his key points are quite thought-provoking and relate to our huge problem of costly and inappropriate care. Here is some food for thought from Dr. Ioannidis:

  • Most statistically significant findings are not real at all—they’re just false positives (a quick calculation after this list shows why)
  • Many of these false positives are revealed when larger studies attempt to replicate the findings of smaller studies
  • One of every four such trials is refuted when a larger trial is conducted
  • Journal editorial policies are responsible for much of this trend: editors want to see research that is novel and will have a large impact on the field, which generally means papers that report very large, statistically significant effects
  • An important safeguard is “repeatability” of positive findings
  • Individuals with a track record of doing high-quality research should be recognized and given priority in publishing
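
The first bullet is easier to appreciate with a quick calculation. Below is a minimal sketch of the arithmetic; the prior probability, alpha, and power values are our own illustrative assumptions, not figures from the talk:

```python
# Sketch: how often is a "statistically significant" finding actually real?
# Illustrative assumptions (not from the talk): alpha = 0.05, power = 0.80,
# and "prior" = the fraction of tested hypotheses that are actually true.

def ppv(prior, alpha=0.05, power=0.80):
    """Probability that a significant result reflects a true effect."""
    true_pos = prior * power           # true hypotheses detected as significant
    false_pos = (1 - prior) * alpha    # false hypotheses crossing p < alpha
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior}: PPV = {ppv(prior):.2f}")
# prior = 0.5: PPV = 0.94
# prior = 0.1: PPV = 0.64
# prior = 0.01: PPV = 0.14
```

When only a small fraction of the hypotheses being tested are true, most significant results are false positives even at conventional alpha and power, which is consistent with the bullets above.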

To read the entire entry go to:

http://nihrecord.od.nih.gov/newsletters/2012/05_11_2012/story2.htm


Loss to Follow-up Update

Heads up about an important systematic review of the effects of attrition on outcomes of randomized controlled trials (RCTs) that was recently published in the BMJ.[1]

Background

  • Key question: Would the outcomes of the trial change significantly if all persons had completed the study and we had complete information on them?
  • Loss to follow-up in RCTs is important because it can bias study results if it disrupts the balance in key prognostic variables that randomization established between the study groups. If there is no imbalance between and within the various study subgroups (i.e., the as-randomized groups compared with completers), then loss to follow-up may not present a threat to validity, except in instances in which statistical significance is not reached because of decreased power. (A toy example of this biasing mechanism follows this list.)
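
Here is a toy numeric sketch, with entirely invented numbers, of that biasing mechanism: a treatment with no true effect looks protective simply because more high-risk patients are lost from the treatment arm.

```python
# Toy example (all numbers invented): two arms of 500 patients, each with
# 100 high-risk patients (50% event risk) and 400 low-risk patients (12.5%),
# so the true event risk is 20% in both arms and the true relative risk is 1.0.

def completer_risk(high, low, lost_high):
    """Observed event risk among completers after losing high-risk patients."""
    events = (high - lost_high) * 0.50 + low * 0.125
    return events / (high - lost_high + low)

control = completer_risk(high=100, low=400, lost_high=0)    # 0.200
treated = completer_risk(high=100, low=400, lost_high=60)   # 0.159
print(f"apparent relative risk: {treated / control:.2f}")   # ~0.80 despite no effect
```

The attrition, not the treatment, creates the apparent 20% risk reduction.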

BMJ Study
The aim of this review was to assess the reporting, extent, and handling of loss to follow-up and its potential impact on the estimates of treatment effect in RCTs. The investigators evaluated 235 RCTs published from 2005 through 2007 in the five general medical journals with the highest impact factors: Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. All eligible studies reported a statistically significant (P<0.05) primary patient-important outcome.

Methods
The investigators did several sensitivity analyses to evaluate the effect of varying assumptions about the outcomes of participants lost to follow-up on the estimate of effect for the primary outcome. Their analysis strategies, sketched in code after this list, were:

  • None of the participants lost to follow-up had the event
  • All the participants lost to follow-up had the event
  • None of those lost to follow-up in the treatment group had the event and all those lost to follow-up in the control group did (best case scenario)
  • All participants lost to follow-up in the treatment group had the event and none of those in the control group did (worst case scenario)
  • More plausible assumptions using various relative event rates, which the authors call “the event incidence”: the investigators performed sensitivity analyses using what they considered to be plausible ratios of event rates in the dropouts compared with the completers, using ratios of 1, 1.5, 2, 3, and 5 in the intervention group relative to the control group (see Appendix 2 at the link at the end of this post, below the reference). They chose an upper limit of 5 because it represents the highest ratio reported in the literature.
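
A minimal sketch of these strategies applied to a single two-arm trial, assuming SciPy for the significance test; none of the counts below come from the review:

```python
# Sketch of the reanalysis strategies applied to one (invented) trial.
from scipy.stats import fisher_exact

def p_value(events_t, total_t, events_c, total_c):
    """Two-sided Fisher exact test on the 2x2 events/no-events table."""
    table = [[events_t, total_t - events_t],
             [events_c, total_c - events_c]]
    return fisher_exact(table)[1]

# Completers (events, followed up) and numbers lost to follow-up, per arm.
et, nt, lost_t = 30, 480, 20   # treatment arm (invented counts)
ec, nc, lost_c = 55, 470, 30   # control arm (invented counts)

scenarios = {
    "none lost had the event": (et,          nt + lost_t, ec,          nc + lost_c),
    "all lost had the event":  (et + lost_t, nt + lost_t, ec + lost_c, nc + lost_c),
    "best case":               (et,          nt + lost_t, ec + lost_c, nc + lost_c),
    "worst case":              (et + lost_t, nt + lost_t, ec,          nc + lost_c),
}
for name, args in scenarios.items():
    print(f"{name:24s} p = {p_value(*args):.4f}")

# "Event incidence" assumptions: dropouts' event rate in the intervention arm
# is a multiple of the completers' rate; control dropouts stay at that rate.
for ratio in (1, 1.5, 2, 3, 5):
    add_t = round(lost_t * (et / nt) * ratio)
    add_c = round(lost_c * (ec / nc))
    p = p_value(et + add_t, nt + lost_t, ec + add_c, nc + lost_c)
    print(f"dropout/completer event-rate ratio {ratio}: p = {p:.4f}")
```

If statistical significance disappears under any scenario a reader considers plausible, the trial’s positive finding should be treated as fragile.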

Key Findings

  • Of the 235 eligible studies, 31 (13%) did not report whether or not loss to follow-up occurred.
  • In studies reporting the relevant information, the median percentage of participants lost to follow-up was 6% (interquartile range 2-14%).
  • The method by which loss to follow-up was handled was unclear in 37 studies (19%); the most commonly used method was survival analysis (66 studies, 35%).
  • When the investigators varied assumptions about loss to follow-up, results of 19% of trials were no longer significant if they assumed no participants lost to follow-up had the event of interest, 17% if they assumed that all participants lost to follow-up had the event, and 58% if they assumed a worst case scenario (all participants lost to follow-up in the treatment group and none of those in the control group had the event).
  • Under more plausible assumptions, in which the incidence of events in those lost to follow-up relative to those followed up was higher in the intervention group than in the control group, 0% to 33% of trials (depending upon which plausible assumptions were used; see Appendix 2 at the link at the end of this post, below the reference) lost statistically significant differences in important endpoints.

Summary
This study reports that, when plausible assumptions are made about the outcomes of participants lost to follow-up in RCTs, up to a third of positive findings lose statistical significance. The authors recommend that authors of individual RCTs and of systematic reviews test their results against various reasonable assumptions (sensitivity analyses). Readers should rely on inferences from study results only when the results are robust across all reasonable assumptions.

For more information, see the Delfini white paper on “missingness” at http://www.delfini.org/Delfini_WhitePaper_MissingData.pdf

Reference

1. Akl EA, Briel M, You JJ, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ 2012;344:e2809. doi: 10.1136/bmj.e2809 (published 18 May 2012). PMID: 22611167.

Article is freely available at—

http://www.bmj.com/content/344/bmj.e2809

Supplementary information is available at—

http://www.bmj.com/content/suppl/2012/05/18/bmj.e2809.DC1

For sensitivity analysis results tables, see Appendix 2 at—

http://www.bmj.com/highwire/filestream/585392/field_highwire_adjunct_files/1


Some Points About Surrogate Outcomes Courtesy of Steve Simon PhD

Our experience is that most healthcare professionals have difficulty understanding the appropriate place of surrogate outcomes (also known as intermediate outcome measures, proxy markers, or surrogate markers). For a very nice, concise round-up of some key points, you can read Steve Simon’s short review. Steve has a PhD in statistics and many years of experience teaching statistics. http://www.pmean.com/news/201203.html#1


The Problems With P-values

Think you understand p-values? We thought we did too. We were wrong; a huge number of us have been taught incorrectly. Thanks to Dr. Brian Alper, Editor-in-Chief of DynaMed, who brought this to our attention and who, with some other writers, helped us work through the brambles. See our new definitions and explanations of “p-value” and “confidence intervals” in the glossary on our website. We have also added some thinking about “multiplicity testing” (a quick illustration follows). Our tools have been updated to reflect these changes, so you may wish to download fresh copies of your favorite validity tools. See also our recommendation for DynaMed. Go to http://www.delfini.org/delfiniNew.htm and see the entry at 05/10/2012.
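
The “multiplicity testing” point lends itself to a quick illustration. A minimal sketch, assuming independent tests of true null hypotheses at alpha = 0.05:

```python
# Sketch: chance of at least one false-positive "p < 0.05" result when
# performing k independent tests with no real effects present.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k   # family-wise error rate
    print(f"{k:2d} comparisons: P(>=1 spurious significant result) = {fwer:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```

With 20 unadjusted comparisons, a lone p < 0.05 is more likely than not to appear even when nothing real is going on, which is why unadjusted multiple testing undermines the usual reading of a p-value.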


Open Access—One Step Forward

PLoS One, a peer-reviewed online publication, has blazed the trail for open access: all publication costs are covered by an author fee of $1,350, and readers pay nothing. Other publications are expected to follow suit in the coming years. Open access may also be assisted by a new bill in Congress, the Federal Research Public Access Act, which would require all federally funded research to be placed online for free access within six months of publication. Although this bill still embargoes access for providers and patients for six months, these developments signal what may be important progress toward full open access to healthcare information.

For further information see BMJ 2012;344:e2937 (doi: 10.1136/bmj.e2937) and BMJ 2012;344:e2895 (doi: 10.1136/bmj.e2895).


Unnecessary or Harmful Tests and Treatments You May Wish To Avoid

The Dartmouth Atlas of Health Care and others have estimated that at least 30% of US healthcare spending is unnecessary. On April 4, 2012, the American Board of Internal Medicine Foundation, along with nine prominent physician groups, released lists of 45 common tests and treatments they say are often unnecessary and may even harm patients. For example, the American Academy of Family Physicians recommended against imaging for low back pain unless red flags are present. Other items on the lists included avoiding antibiotics for most mild to moderate acute sinusitis, screening EKGs (or other cardiac screening) in people without symptoms, DEXA screening for osteoporosis in women younger than 65, and many more. For details go to Kaiser Health News: http://www.kaiserhealthnews.org/Stories/2012/April/04/physicians-unnecessary-treatments.aspx


Have You Seen PRISMA?

Systematic reviews and meta-analyses are needed to synthesize the evidence regarding clinical questions. Unfortunately, the quality of these reviews varies greatly. As part of a movement to improve the transparency and reporting of important details in meta-analyses of randomized controlled trials (RCTs), the QUOROM (Quality of Reporting of Meta-analyses) statement was developed in 1999.[1] In 2009, that guidance was updated and expanded by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers, and the name was changed to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).[2] Although some authors have used PRISMA to improve the reporting of systematic reviews, thereby helping critical appraisers assess the benefits and harms of healthcare interventions, we (and others) continue to see systematic reviews that include high-risk-of-bias RCTs in their analyses. Critical appraisers might want to be aware of the PRISMA statement.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2714672/?tool=pubmed

1. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896-1900. PMID: 10584742.

2. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009 Jul 21;339:b2700. doi: 10.1136/bmj.b2700. PubMed PMID: 19622552.
