Update on Decision Support for Clinicians and Patients

We have written extensively about decision support and provide many examples of decision support materials on our website. An easy way to round them up is to go to our website search window at http://www.delfini.org/index_SiteGoogleSearch.htm and type in the terms “decision support.”

A nice systematic review of the topic—clinical decision support systems (CDSSs)—funded by AHRQ has recently been published in the Annals of Internal Medicine.[1] The aim of the review was to evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation. CDSSs include alerts, reminders, order sets, drug-dose information, care summary dashboards that provide performance feedback on quality indicators, and information and other aids designed to improve clinical decision-making.

Findings: 148 randomized controlled trials were included in the review. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n=25; odds ratio [OR] 1.42, 95% CI [1.27 to 1.58]), ordering clinical studies (n=20; OR 1.72, 95% CI [1.47 to 2.00]), and prescribing therapies (n=46; OR 1.57, 95% CI [1.35 to 1.82]). As would be expected, there was heterogeneity in interventions, populations, settings, and outcomes. The authors conclude that commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse.

Delfini Comment: Although this review focused on decision support systems, the entire realm of decision support for end users is of great importance to all health care decision-makers. Without good decision support, we will all make suboptimal decisions. This area is huge and is worth spending time understanding how to move evidence from a synthesis to decision support. Interested readers might want to look at some examples of wonderful decision support materials created at the Mayo Clinic. The URL is—

http://webpages.charter.net/vmontori/Wiser_Choices_Program_Aids_Site/Welcome.html

Reference

1.  Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012 Jul 3;157(1):29-43. PubMed PMID: 22751758.


Early Termination of Clinical Trials—2012 Update

Several years ago we presented the increasing evidence of problems with early termination of clinical trials for benefit after interim analyses.[1] The bottom line is that results are very likely to be distorted because of chance findings. A useful review of this topic has recently been published.[2] Briefly, this review points out that—

  • Trials stopped early for benefit frequently report results that are not credible; e.g., in one review, relative risk reductions were over 47% in half of the trials and over 70% in a quarter. The apparent overestimates were larger in smaller trials.
  • Stopping trials early for apparent benefit is highly likely to systematically overestimate treatment effects.
  • Large overestimates were common when the total number of events was less than 200.
  • Smaller but important overestimates are likely with 200 to 500 events, and trials with over 500 events are likely to show small overestimates.
  • Stopping rules do not appear to ensure protection against distortion of results.
  • Despite the fact that stopped trials may report chance findings that overestimate true effect sizes—especially when based on a small number of events—positive results receive significant attention and can bias clinical practice, clinical guidelines and subsequent systematic reviews.
  • Trials stopped early reduce opportunities to find potential harms.

The authors provide 3 examples to illustrate the above points where harm is likely to have occurred to patients.

Case 1 is the use of preoperative beta blockers in non-cardiac surgery. In 1999, a clinical trial of bisoprolol in patients with vascular disease having non-cardiac surgery, with a planned sample size of 266, was stopped early after enrolling 112 patients—with 20 events. Two of 59 patients in the bisoprolol group and 18 of 53 in the control group had experienced a composite endpoint event (cardiac death or myocardial infarction). The authors reported a 91% reduction in relative risk for this endpoint, 95% confidence interval (63% to 98%). In 2002, an ACC/AHA clinical practice guideline recommended perioperative use of beta blockers for this population. In 2008, a systematic review and meta-analysis including over 12,000 patients having non-cardiac surgery reported a 35% reduction in the odds of non-fatal myocardial infarction, 95% CI (21% to 46%), a twofold increase in non-fatal strokes, odds ratio 2.1, 95% CI (2.7 to 3.68), and a possible increase in all-cause mortality, odds ratio 1.20, 95% CI (0.95 to 1.51). Despite the results of this good quality systematic review, subsequent guidelines published in 2009 and 2012 continue to recommend beta blockers.

Case 2 is the use of intensive insulin therapy (IIT) in critically ill patients. In 2001, a single-center randomized trial of IIT in critically ill patients with raised serum glucose reported a 42% relative risk reduction in mortality, 95% CI (22% to 62%). The authors used a liberal stopping threshold (P=0.01) and took frequent looks at the data, strategies they said were “designed to allow early termination of the study.” Results were rapidly incorporated into guidelines, e.g., American College of Endocrinology practice guidelines, with recommendations for an upper limit of glucose of ≤8.3 mmol/L. A systematic review published in 2008 summarized the results of subsequent studies, which did not confirm lower mortality with IIT and documented an increased risk of hypoglycemia. A later good quality systematic review confirmed these findings. Nevertheless, some guideline groups continue to advocate limits of ≤8.3 mmol/L. Other guidelines, utilizing the results of more recent studies, recommend a range of 7.8 to 10 mmol/L.

Case 3 is the use of activated protein C in critically ill patients with sepsis. The original 2001 trial of recombinant human activated protein C (rhAPC) was stopped early after the second interim analysis because of an apparent difference in mortality. In 2004, the Surviving Sepsis Campaign, a global initiative to improve sepsis management, recommended use of the drug as part of a “bundle” of interventions in sepsis. A subsequent trial, published in 2005, reinforced previous concerns from studies reporting increased risk of bleeding with rhAPC and raised questions about the apparent mortality reduction in the original study. As of 2007, trials had failed to replicate the favorable results reported in the pivotal Recombinant Human Activated Protein C Worldwide Evaluation in Severe Sepsis (PROWESS) study. Nevertheless, the 2008 iteration of the Surviving Sepsis guidelines and another guideline in 2009 continued to recommend rhAPC. Finally, after further discouraging trial results, Eli Lilly withdrew the drug, drotrecogin alfa (activated) (Xigris), from the market in 2011.

Key points about trials terminated early for benefit:

  • Truncated trials are likely to overestimate benefits.
  • Results should be confirmed in other studies.
  • Maintain a high level of skepticism regarding the findings of trials stopped early for benefit, particularly when those trials are relatively small and replication is limited or absent.
  • Stopping rules do not protect against overestimation of benefits.
  • Stringent criteria for stopping for benefit would include not stopping before approximately 500 events have accumulated.
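
The overestimation described in the points above is easy to demonstrate by simulation. The sketch below is our own illustration (not taken from the review); the trial size, event risks, interim-look schedule, and stopping threshold are all arbitrary assumptions. It repeatedly simulates a two-arm trial with a true relative risk of 0.80, applying a naive, unadjusted significance test at each interim look; among the simulated trials that stop early “for benefit,” the average estimated relative risk comes out well below the true value.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard normal z statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_trial(rng, n_per_arm=1000, control_risk=0.10, true_rr=0.80,
              look_every=100, alpha=0.05):
    """Simulate one two-arm trial with naive interim looks.

    Stops 'for benefit' at the first interim look where the unadjusted
    two-sided p-value is below alpha and the treatment arm is ahead.
    Returns (estimated relative risk, stopped_early).
    """
    treat_risk = control_risk * true_rr
    control_events = treat_events = 0
    for n in range(1, n_per_arm + 1):
        control_events += rng.random() < control_risk
        treat_events += rng.random() < treat_risk
        if n % look_every == 0 and control_events > 0:
            p_c, p_t = control_events / n, treat_events / n
            pooled = (control_events + treat_events) / (2 * n)
            se = math.sqrt(max(2 * pooled * (1 - pooled) / n, 1e-12))
            if p_t < p_c and two_sided_p((p_c - p_t) / se) < alpha:
                return p_t / p_c, n < n_per_arm
    return (treat_events / n_per_arm) / (control_events / n_per_arm), False

rng = random.Random(42)
results = [run_trial(rng) for _ in range(1000)]
all_rrs = [rr for rr, _ in results]
early_rrs = [rr for rr, stopped in results if stopped]

print("true RR: 0.80")
print(f"mean estimated RR, all {len(all_rrs)} trials: "
      f"{sum(all_rrs) / len(all_rrs):.2f}")
print(f"mean estimated RR, {len(early_rrs)} trials stopped early: "
      f"{sum(early_rrs) / len(early_rrs):.2f}")
```

Because only trials that happen, by chance, to show a large apparent effect at an early look get stopped, the stopped-early subset systematically exaggerates the benefit—exactly the selection effect the review describes.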

References

1. http://www.delfini.org/delfiniClick_PrimaryStudies.htm#truncation

2. Guyatt GH, Briel M, Glasziou P, Bassler D, Montori VM. Problems of stopping trials early. BMJ. 2012 Jun 15;344:e3863. doi: 10.1136/bmj.e3863. PMID:22705814.


CONSORT Update of Abstract Guidelines 2012

We have previously described the rationale and details of the CONSORT Statement: Consolidated Standards of Reporting Trials.[1] In brief, CONSORT is an evidence-based checklist of 25 items that need to be addressed in reports of clinical trials in order to provide readers with a clear picture of study quality and of the progress of all participants in the trial, from the time they are randomized until the end of their involvement. The intent is to make the experimental process clear, flawed or not, so that users of the data can more appropriately evaluate the validity of the study and the usefulness of the results. A recent BMJ study has assessed the use of CONSORT guidelines for abstracts in five top journals—JAMA, the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), the Lancet, and the Annals of Internal Medicine.[2]

In this study, the authors checked each journal’s instructions to authors in January 2010 for any reference to the CONSORT for Abstracts guidelines (for example, reference to a publication or link to the relevant section of the CONSORT website). For those journals that mentioned the guidelines in their instructions to authors, they contacted the editor of that journal to ask when the guidance was added, whether the journal enforced the guidelines, and if so, how. They classified journals in three categories: those not mentioning the CONSORT guidelines in their instructions to authors (JAMA and NEJM); those referring to the guidelines in their instructions to authors, but with no specific policy to implement them (BMJ); and those referring to the guidelines in their instructions to authors, with a policy to implement them (Annals of Internal Medicine and the Lancet).

First surprise—JAMA and NEJM don’t even mention CONSORT in their instructions to authors. Second surprise—CONSORT published what evidologists agree to be reasonable abstract requirements in 2008, but only the Annals and the Lancet now instruct authors to follow them. The study was designed to evaluate the inclusion of the nine CONSORT items omitted more than 50% of the time from abstracts (details of the trial design, generation of the allocation sequence, concealment of allocation, details of blinding, number randomized and number analyzed in each group, primary outcome results for each group and its effect size, harms data, and funding source). The primary outcome was the mean number of these nine items—each reported in fewer than 50% of the abstracts published across the five journals in 2006—reported in selected abstracts. Overall, for the primary outcome, publication of the CONSORT guidelines did not lead to a significant increase in the level of the mean number of items reported (increase of 0.3035 of nine items, P=0.16) or in the trend (increase of 0.0193 items per month, P=0.21). There was, however, a significant increase in the level of the mean number of items reported after implementation of the CONSORT guidelines (increase of 0.3882 of five items, P=0.0072) and in the trend (increase of 0.0288 items per month, P=0.0025).

What follows is not really surprising—

  • After publication of the guidelines in January 2008, the authors identified a significant increase in the reporting of key items in the two journals (Annals of Internal Medicine and Lancet) that endorsed the guidelines in their instructions to authors and that had an active editorial policy to implement them. At baseline, in January 2006, the mean number of items reported per abstract was 1.52 of nine items, which increased to 2.56 of nine items during the 25 months before the intervention. In December 2009, 23 months after the publication of the guidelines, the mean number of items reported per abstract for the primary outcome in the Annals of Internal Medicine and the Lancet was 5.41 items, which represented a 53% increase compared with the expected level estimated on the basis of pre-intervention trends.
  • The authors observed no significant difference in the one journal (BMJ) that endorsed the guidelines but did not have an active implementation strategy, and in the two journals (JAMA, NEJM) that did not endorse the guidelines in their instructions to authors.

What this study shows is that improved reporting does not happen without editorial policies that actively implement the CONSORT guidelines. A rather surprising finding for us was that only two of the five top journals included in this study have active implementation policies (e.g., an email to authors at the time of revision requiring revision of the abstract according to CONSORT guidance). We have a long way to go.

More details about CONSORT, including the flow diagram, are available at— http://www.consort-statement.org/

References

1. http://www.delfini.org/delfiniClick_ReportingEvidence.htm#consort

2. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012;344:e4178.


Obtaining Absolute Risk Reduction (ARR) and Number Needed To Treat (NNT) From Relative Risk (RR) and Odds Ratios (OR) Reported in Systematic Reviews

Background
Estimates of effect in meta-analyses can be expressed as either relative effects or absolute effects. Relative risks (aka risk ratios) and odds ratios are relative measures. Absolute risk reduction (aka risk difference) and number-needed-to-treat are absolute measures. When reviewing meta-analyses, readers will almost always see results presented as relative risks or odds ratios. The reason for this is that relative risks are considered to be the most consistent statistic when results are combined from multiple studies; for the same reason, meta-analysts usually avoid performing meta-analyses using absolute differences.

Fortunately we are now seeing more meta-analyses reporting both the relative risks along with ARR and NNT. The key point is that meta-analyses almost always use relative effect measures (relative risk or odds ratio) and then (hopefully) re-express the results using absolute effect measures (ARR or NNT).

You may see the term “assumed control group risk” or “assumed control risk” (ACR). This frequently refers to the risk in a control group or subgroup of patients in a meta-analysis, but it could also refer to the risk in any comparison group (i.e., patients not receiving the study intervention).

The Cochrane Handbook now recommends that meta-analysts provide a summary table for the main outcome and that the table include the following items—

  • The topic, population, intervention and comparison
  • The assumed risk (i.e., the risk in the comparison group) and the corresponding risk (i.e., the risk in those receiving the intervention)
  • Relative effect statistic (RR or OR)

When RR is provided, ARR can easily be calculated. Odds ratios deal with odds, not probabilities, and therefore cannot be converted to ARR with accuracy, because odds express only how many with an outcome compared with how many without, not a proportion of a population. For more on “odds,” see— http://www.delfini.org/page_Glossary.htm#odds

Example 1: Antihypertensive drug therapy compared with control for hypertension in the elderly (60 years or older)

Reference: Musini VM, Tejani AM, Bassett K, Wright JM. Pharmacotherapy for hypertension in the elderly. Cochrane Database Syst Rev. 2009 Oct 7;(4):CD000028. Review. PubMed PMID: 19821263.

Computing ARR and NNT from Relative Risk

When RR is reported in a meta-analysis, determine (this is a judgment) the assumed control risk (ACR)—i.e., the risk in the group being compared to the new intervention—from the control event rate or another data source.

Formula: ARR=100 X ACR X (1-RR)

Calculating the ARR and NNT from the Musini Meta-analysis

  • In the above meta-analysis of 12 RCTs in elderly patients with moderate hypertension, the RR for overall mortality with treatment compared to no treatment over 4.5 years was 0.90.
  • The event rate  (ACR) in the control group was 116 per 1000 or 0.116
  • ARR=100 X 0.116 X (1-0.90)=100 X 0.116 X 0.10=1.16%
  • NNT=100/1.16=87 (rounded up)
  • Interpretation: The risk of death with treatment is 90% of the risk in the control group (in this case, the group of elderly patients not receiving treatment for hypertension), which translates into 1 to 2 fewer deaths per 100 patients treated over 4.5 years. In other words, you would need to treat 87 elderly hypertensive people at moderate risk with antihypertensives for 4.5 years to prevent one death.
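
The calculation above can be captured in a few lines of code. This is a minimal sketch using the formula ARR=100 X ACR X (1-RR); the helper function name is our own, not from the Cochrane review.

```python
import math

def arr_nnt_from_rr(rr, acr):
    """ARR (as a percentage) and NNT from a relative risk (RR)
    and an assumed control risk (ACR)."""
    arr = 100 * acr * (1 - rr)   # ARR = 100 x ACR x (1 - RR)
    nnt = math.ceil(100 / arr)   # round up to a whole patient
    return arr, nnt

# Figures from the Musini meta-analysis: RR 0.90, control event rate 116/1000
arr, nnt = arr_nnt_from_rr(rr=0.90, acr=0.116)
print(f"ARR = {arr:.2f}%, NNT = {nnt}")  # ARR = 1.16%, NNT = 87
```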

Computing ARR and NNT from Odds Ratios

In some older meta-analyses you may not be given the assumed (ACR) risk.

Example 2: Oncology Agent

Assume a meta-analysis on an oncology agent reports an estimate of effect (mortality) as an OR of 0.8 over 3 years for a new drug. In order to do the calculation, an ACR is required.  Hopefully this information will be provided in the study. If not, the reader will have to obtain the assumed control group risk (ACR) from other studies or another source. Let’s assume that the control risk in this example is 0.3.

Formula for converting OR to ARR: first convert the OR and the ACR into the corresponding risk in the intervention group, then take the difference—

  • Corresponding risk=(OR X ACR) / (1-ACR+OR X ACR)
  • In this example: corresponding risk=(0.8 X 0.3) / (1-0.3+0.8 X 0.3)=0.24/0.94=0.255 (rounded)
  • ARR=100 X (ACR-corresponding risk)=100 X (0.3-0.255)
  • ARR=4.5% (rounded)
  • Thus the ARR is approximately 4.5% over 3 years.
  • The NNT to benefit one patient over 3 years is 100/4.5, or 23 (rounded up).
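
The same conversion can be written as a short code sketch (our own illustration; the function name is hypothetical). It follows the Cochrane Handbook approach of converting the odds ratio and assumed control risk into a corresponding risk in the intervention group and then taking the difference:

```python
import math

def arr_nnt_from_or(odds_ratio, acr):
    """ARR (as a percentage) and NNT from an odds ratio and an
    assumed control risk, via the corresponding intervention-group risk."""
    corresponding_risk = (odds_ratio * acr) / (1 - acr + odds_ratio * acr)
    arr = 100 * (acr - corresponding_risk)
    nnt = math.ceil(100 / arr)   # round up to a whole patient
    return arr, nnt

# Oncology example: OR 0.8 over 3 years, assumed control risk 0.3
arr, nnt = arr_nnt_from_or(odds_ratio=0.8, acr=0.3)
print(f"ARR = {arr:.1f}%, NNT = {nnt}")  # ARR = 4.5%, NNT = 23
```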

Because of the limitations of odds ratios described above, it should be noted that when outcomes occur commonly (e.g., >5%), odds ratios interpreted as if they were relative risks will overestimate the effect of a treatment.
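
A quick numeric illustration of that caveat (the risks here are our own arbitrary example): with common outcomes, the odds ratio lands further from 1 than the relative risk, so reading it as a relative risk overstates the benefit.

```python
# Arbitrary illustrative risks: 40% in control, 30% with treatment.
control_risk, treat_risk = 0.40, 0.30

rr = treat_risk / control_risk  # relative risk
odds_ratio = (treat_risk / (1 - treat_risk)) / (control_risk / (1 - control_risk))

# The OR (0.64) suggests a larger effect than the RR (0.75) actually delivers.
print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}")  # RR = 0.75, OR = 0.64
```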

For more information see The Cochrane Handbook, Part 2, Chapter 12.5.4 available at http://www.cochrane-handbook.org/
