Sounding the Alarm (Again) in Oncology

Five years ago Fojo and Grady sounded the alarm about the value of many new oncology drugs [1]. They raised the following issues and challenged oncologists and others to address them:

  • There is a great deal of uncertainty and confusion about what constitutes a benefit in cancer therapy; and,
  • How much should cost factor into these deliberations?

The authors review a number of oncology drug studies reporting increased overall survival (OS) ranging from a median of a few days to a few months, with total new drug costs ranging from $15,000 to more than $90,000. In some cases there is no increase in OS, only in progression-free survival (PFS), a weaker outcome measure because it is prone to tumor assessment bias and is frequently assessed in studies of short duration. Adverse events associated with the new drugs are many and include higher rates of febrile neutropenia, infusion-related reactions, diarrhea, skin toxicity, infections, hypertension and other adverse events.

Fojo and Grady point out that—

“Many Americans would likely not regard a 1.2-month survival advantage as ‘significant’ progress, the much revered P value notwithstanding. But would an individual patient agree? Although we lack the answer to this question, we would suggest that the death of a mother of four at age 37 years would be no less painful were it to occur at age 37 years and 1 month, nor would the passing of a 67-year-old who planned to travel after retiring be any less difficult for the spouse were it to have occurred 1 month later.”

In a recent article [2] (thanks to Dr. Richard Lehman for drawing our attention to this article in his wonderful BMJ blog) Fojo and colleagues again point out that—

  • Cancer is the number one cause of mortality worldwide, and cancer cases are projected to rise by 75% over the next 2 decades.
  • Of the 71 therapies for solid tumors receiving FDA approval from 2002 to 2014, only 30 (42%) met the American Society of Clinical Oncology Cancer Research Committee’s “low hurdle” criteria for clinically meaningful improvement. Further, the authors tallied results from all the studies and reported very modest collective median gains of 2.5 months for PFS and 2.1 months for OS. Numerous surveys have indicated that patients expect much more.
  • Expensive therapies are stifling progress by (1) encouraging enormous expenditures of time, money, and resources on marginal therapeutic indications; and, (2) promoting a me-too mentality that is stifling innovation and creativity.

The last bullet needs a little explaining. The authors provide a number of examples of “safe bets” and argue that revenue from such safe and profitable therapies rather than true need has been a driving force for new oncology drugs. The problem is compounded by regulations—e.g., rules which require Medicare to reimburse patients for any drug used in an “anti-cancer chemotherapeutic regimen”—regardless of its incremental benefit over other drugs—as long as the use is “for a medically accepted indication” (commonly interpreted as “approved by the FDA”). This provides guaranteed revenues for me-too drugs irrespective of their marginal benefits. The authors also point out that when prices for drugs of proven efficacy fall below a certain threshold, suppliers often stop producing the drug, causing severe shortages.

What can be done? The authors acknowledge several times in their commentary that the spiraling cost of cancer therapies has no single villain; academia, professional societies, scientific journals, practicing oncologists, regulators, patient advocacy groups and the biopharmaceutical industry—all bear some responsibility. [We would add to this list physicians, P&T committees and any others who are engaged in treatment decisions for patients. Patients are not on this list (yet) because they are unlikely to really know the evidence.] As in many other situations where many are responsible, the end result is often that “no one” takes responsibility. Fojo et al. close by making several suggestions, among which are—

  1. Academicians must avoid participating in the development of marginal therapies;
  2. Professional societies and scientific journals must raise their standards and not spotlight marginal outcomes;
  3. All of us must also insist on transparency and the sharing of all published data in a timely and enforceable manner;
  4. Actual gains of benefit must be emphasized—not hazard ratios or other measures that force readers to work hard to determine actual outcomes and benefits and risks;
  5. We need cooperative groups with adequate resources to provide leadership to ensure that trials are designed to deliver meaningful outcomes;
  6. We must find a way to avoid paying premium prices for marginal benefits; and,
  7. We must find a way [federal support?] to secure altruistic investment capital.

Delfini Comment
While the authors do not make a suggestion for specific responsibilities or actions on the part of the FDA, they do make a recommendation that an independent entity might create uniform measures of benefits for each FDA-approved drug—e.g., quality-adjusted life-years. We think the FDA could go a long way in improving this situation.

And so, as pointed out by Fojo et al., only small gains have been made in OS over the past 12 years, while the costs of oncology drugs have skyrocketed. To make matters even worse than portrayed by Fojo et al., many of the oncology drug studies we see have major threats to validity (e.g., selection bias, lack of blinding and other performance biases, attrition and assessment bias), raising the question, “Does the approximate 2-month gain in median OS represent an overestimate?” Since bias tends to favor the new intervention in clinical trials, the PFS and OS gains reported in many recent oncology trials may be exaggerated or even absent, or harms may outweigh benefits. On the other hand, if a study is valid, some patients may choose to accept a new therapy: a median is the midpoint in a range of results, and an individual patient may achieve better results than the median indicates. The important thing is that patients are given information on benefits and harms in a way that allows them to have a reasonable understanding of all the issues and to make the choices that are right for them.

Resources & References


  1. The URL for Dr. Lehman’s Blog is—
  2. The URL for his original blog entry about this article is—


  1. Fojo T, Grady C. How much is life worth: cetuximab, non-small cell lung cancer, and the $440 billion question. J Natl Cancer Inst. 2009 Aug 5;101(15):1044-8. Epub 2009 Jun 29. PMID: 19564563
  2. Fojo T, Mailankody S, Lo A. Unintended Consequences of Expensive Cancer Therapeutics-The Pursuit of Marginal Indications and a Me-Too Mentality That Stifles Innovation and Creativity: The John Conley Lecture. JAMA Otolaryngol Head Neck Surg. 2014 Jul 28. doi: 10.1001/jamaoto.2014.1570. [Epub ahead of print] PubMed PMID: 25068501.

American College of Cardiology/American Heart Association Guidelines: Numbers-Needed-to-Treat (NNTs) for Statin Treatment in Primary Prevention of Cardiovascular Disease (CVD)

Following publication of the November 2013 American College of Cardiology/American Heart Association (ACC/AHA) guideline [1], concern was expressed that, in the area of primary prevention for CVD, the 10 year guideline estimates of risk were overestimated [2]. Furthermore, the ACC/AHA criteria could result in more than 45 million middle-aged Americans without cardiovascular disease being recommended for consideration of statin therapy.

While the amount of risk overestimation is still being debated, Alper and Drabkin of DynaMed have created very nice decision support based on their evaluation of the most current and reliable systematic reviews available for estimating the effects of statins in individuals with various 10-year risks [3].

The risk estimates below, which provide the NNTs over 5 years for the use of statins by individual risk, will prove quite useful for individual decision-making. More detailed information regarding the evidence for statins in preventing CVD events is available on the DynaMed website [4].

For a person with an estimated 7.5% 10-year risk, the 5-year NNT was 108 for CVD events, 186 for MI, and 606 for stroke. At 15% 10-year risk, 5-year NNTs were 54 for CVD events, 94 for MI, 204 for stroke, and 334 for overall mortality. At 20% 10-year risk, 5-year NNTs were 40 for CVD events, 70 for MI, 228 for stroke, and 250 for overall mortality.
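The arithmetic behind such NNTs is straightforward: the NNT is the reciprocal of the absolute risk reduction (ARR), and the ARR can be estimated by multiplying the baseline risk by the relative risk reduction. The sketch below is an illustration only—it assumes a 5-year risk of roughly half the 10-year figure and a hypothetical 25% relative risk reduction for CVD events; the actual DynaMed estimates are derived from systematic review data.

```python
def nnt(baseline_risk, relative_risk_reduction):
    """Number needed to treat = 1 / absolute risk reduction (ARR).

    ARR is estimated here as baseline risk x relative risk reduction.
    """
    arr = baseline_risk * relative_risk_reduction
    return 1 / arr

# Hypothetical illustration: a 15% 10-year risk taken as ~7.5% over
# 5 years (assumed linear), with an assumed 25% relative risk
# reduction for CVD events:
print(round(nnt(0.075, 0.25)))  # close to the published 5-year NNT of 54
```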

1. Stone NJ, Robinson J, Lichtenstein AH et al. 2013 ACC/AHA Guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2013. [Epub ahead of print] [PMID: 24239923]

2. Ridker PM, Cook NR. Statins: new American guidelines for prevention of cardiovascular disease. Lancet. 2013 Nov 30;382(9907):1762-5. doi: 10.1016/S0140-6736(13)62388-0. Epub 2013 Nov 20. PubMed PMID: 24268611.

3. Click on the Comments Tab here:

4. Search “statins” at the link below:



Involving Patients in Their Care Decisions and JAMA Editorial: The New Cholesterol and Blood Pressure Guidelines: Perspective on the Path Forward

Krumholz HM. The New Cholesterol and Blood Pressure Guidelines: Perspective on the Path Forward. JAMA. 2014 Mar 29. doi: 10.1001/jama.2014.2634. [Epub ahead of print] PubMed PMID: 24682222.

Here is an excellent editorial that highlights the importance of patient decision-making.  We thank the wonderful Dr. Richard Lehman, MA, BM, BCh, Oxford, & Blogger, BMJ Journal Watch, for bringing this to our attention. [Note: Richard’s wonderful weekly review of medical journals—informative, inspiring and oh so droll—is here.]

We have often observed that evidence can be a neutralizing force. This editorial highlights for us that this means involving the patient in a meaningful way and finding ways to support decisions based on patients’ personal requirements. These personal “patient requirements” include health care needs and wants and a recognition of individual circumstances, values and preferences.

To achieve this, we believe that patients should receive the same information as clinicians: what alternatives are available; a quantified assessment of the potential benefits and harms of each, along with the strength of the evidence; and the potential consequences of various choices, including effects on things like vitality and cost.

Decisions may differ between patients, and physicians may make incorrect assumptions about what matters most to patients; the literature offers many examples, such as the citations below.

O’Connor A. Using patient decision aids to promote evidence-based decision making. ACP J Club. 2001 Jul-Aug;135(1):A11-2. PubMed PMID: 11471526.

O’Connor AM, Wennberg JE, Legare F, Llewellyn-Thomas HA, Moulton BW, Sepucha KR, et al. Toward the ‘tipping point’: decision aids and informed patient choice. Health Affairs 2007;26(3):716-25.

Rothwell PM. External validity of randomised controlled trials: “to whom do the results of this trial apply?”. Lancet. 2005 Jan 1-7;365(9453):82-93. PubMed PMID: 15639683.

Stacey D, Bennett CL, Barry MJ, Col NF, Eden KB, Holmes-Rovner M, Llewellyn-Thomas H, Lyddiatt A, Légaré F, Thomson R. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011 Oct 5;(10):CD001431. Review. PubMed PMID: 21975733.

Wennberg JE, O’Connor AM, Collins ED, Weinstein JN. Extending the P4P agenda, part 1: how Medicare can improve patient decision making and reduce unnecessary care. Health Aff (Millwood). 2007 Nov-Dec;26(6):1564-74. PubMed PMID: 17978377.


Estimating Relative Risk Reduction from Odds Ratios

Odds are hard to work with because they are the likelihood of an event occurring compared to not occurring—e.g., odds of two to one mean that likelihood of an event occurring is twice that of not occurring. Contrast this with probability which is simply the likelihood of an event occurring.

An odds ratio (OR) is a point estimate used in case-control studies to quantify the mathematical relationship between an exposure and a health outcome. Odds must be used in case-control studies because the investigator arbitrarily sets the number of cases and controls; probability cannot be determined because the disease rates in the study population cannot be known. The odds that a case was exposed to a certain variable are divided by the odds that a control was exposed to that same variable.

Odds are often used in other types of studies as well, such as meta-analysis, because of various properties of odds which make them easy to use mathematically. However, increasingly authors are discouraged from computing odds ratios in secondary studies because of the difficulty translating what this actually means in terms of size of benefits or harms to patients.

Readers frequently attempt to deal with this by converting the odds ratio into relative risk reduction by thinking of the odds ratio as similar to relative risk. Relative risk reduction (RRR) is computed from relative risk (RR) by simply subtracting the relative risk from one and expressing that outcome as a percentage (1-RR).

Some experts advise readers that this is safe to do if the prevalence of the event is low. While it is true that odds and probabilities of outcomes are usually similar when the event rate is low, we recommend, when possible, calculating both the odds ratio reduction and the relative risk reduction in order to compare them and determine whether the difference is clinically meaningful. Determining whether something is clinically meaningful is a judgment, so whether a conversion of OR to RRR is misleading depends in part upon that judgment.

a = group 1, outcome occurred
b = group 1, outcome did not occur
c = group 2, outcome occurred
d = group 2, outcome did not occur

OR = (a/b) / (c/d)
Estimated RRR from OR (odds ratio reduction) = 1 - OR

RR = (a / group 1 n) / (c / group 2 n)
RRR = 1 - RR
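As a minimal sketch (using made-up numbers), these formulas can be computed directly from the four cells of the 2 by 2 table. Comparing a low-event-rate scenario with a higher-event-rate scenario shows how the odds ratio reduction and the RRR diverge as event rates rise:

```python
def two_by_two(a, b, c, d):
    """OR, RR and their reductions from a 2 by 2 table.

    a/b = group 1 with/without the outcome; c/d = group 2 likewise.
    """
    odds_ratio = (a / b) / (c / d)
    relative_risk = (a / (a + b)) / (c / (c + d))
    return odds_ratio, 1 - odds_ratio, relative_risk, 1 - relative_risk

# Low event rate: odds ratio reduction (~50.5%) and RRR (50%) nearly agree
print(two_by_two(10, 990, 20, 980))
# Higher event rate: odds ratio reduction (62.5%) overstates the RRR (50%)
print(two_by_two(200, 800, 400, 600))
```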




Why Statements About Confidence Intervals Often Result in Confusion Rather Than Confidence

A recent paper by McCormack reminds us that authors may mislead readers by making unwarranted “all-or-none” statements and that readers should be mindful of this and carefully examine confidence intervals.

When examining results of a valid study, confidence intervals (CIs) provide much more information than p-values. The results are statistically significant if a confidence interval does not touch the line of no difference (zero in the case of measures of outcomes expressed as percentages such as absolute risk reduction and relative risk reduction and 1 in the case of ratios such as relative risk and odds ratios). However, in addition to providing information about statistical significance, confidence intervals also provide a plausible range for possibly true results within a margin of chance (5 percent in the case of a 95% CI). While the actual calculated outcome (i.e., the point estimate) is “the most likely to be true” result within the confidence interval, having this range enables readers to judge, in their opinion, if statistically significant results are clinically meaningful.

However, as McCormack points out, authors frequently do not provide useful interpretation of the confidence intervals, and authors at times report different conclusions from similar data. McCormack presents several cases that illustrate this problem, and this paper is worth reading.

As an illustration, assume two hypothetical studies report very similar results. In the first study of drug A versus drug B, the relative risk for mortality was 0.9, 95% CI (0.80 to 1.05). The authors might state that there was no difference in mortality between the two drugs because the difference is not statistically significant. However, the upper confidence limit is close to the line of no difference, and the confidence interval tells us that a difference might have been found if more people had been studied, so that statement is misleading. A better statement for the first study would include the confidence intervals and a neutral interpretation of what the results for mortality might mean. Example—

“The relative risk for overall mortality with drug A compared to drug B was 0.9, 95% CI (0.80 to 1.05). The confidence intervals tell us that Drug A may reduce mortality by up to a relative 20% (i.e., the relative risk reduction), but may also increase mortality, compared to Drug B, by approximately 5%.”

In a second study with similar populations and interventions, the relative risk for mortality might be 0.93, 95% CI (0.83 to 0.99). In this case, some authors might state, “Drug A reduces mortality.” A better statement for this second hypothetical study would ensure that the reader knows that the upper confidence limit is close to the line of no difference and, therefore, the result is close to non-significance. Example—

“Although the mortality difference is statistically significant, the confidence interval indicates that the relative risk reduction may be as great as 17% but may be as small as 1%.”

The Bottom Line

  1. Remember that p-values refer only to statistical significance and confidence intervals are needed to evaluate clinical significance.
  2. Watch out for statements containing the words “no difference” in the reporting of study results. A finding of no statistically significant difference may be a product of too few people studied (or insufficient time).
  3. Watch out for statements implying meaningful differences between groups when one of the confidence intervals approaches the line of no difference.
  4. None of this means anything unless the study is valid. Remember that bias tends to favor the intervention under study.

If authors do not provide you with confidence intervals, you may be able to compute them yourself, if they have supplied you with sufficient data, using an online confidence interval calculator. For our favorites, search “confidence intervals” at our web links page:
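When event counts are reported, a 95% CI for a relative risk can also be sketched by hand using the standard log-RR method (a normal approximation on the log scale). The counts below are made up purely for illustration:

```python
import math

def rr_with_ci(a, n1, c, n2, z=1.96):
    """Relative risk and 95% CI via the log-RR standard error.

    a of n1 had the event in group 1; c of n2 in group 2.
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

rr, lo, hi = rr_with_ci(90, 1000, 100, 1000)  # hypothetical counts
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")
# This hypothetical interval crosses 1, so the result is not
# statistically significant, yet it remains compatible with a
# sizeable relative risk reduction.
```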


McCormack J, Vandermeer B, Allan GM. How confidence intervals become confusion intervals. BMC Med Res Methodol. 2013 Oct 31;13(1):134. [Epub ahead of print] PubMed PMID: 24172248.


When Is a Measure of Outcomes Like a Coupon for a Diamond Necklace?

For those of you who struggle with the fundamental difference between absolute risk reduction (ARR) versus relative risk reduction (RRR) and their counterparts, absolute and relative risk increase (ARI/RRI), we have always explained that only knowing the RRR or the RRI without other quantitative information about the frequency of events is akin to knowing that a store is having a half-off sale—but when you walk in, you find that they aren’t posting the actual price!  And so your question is 50 percent off of what???

You should ask the same question whenever you are provided with a relative measure (and if you aren’t told whether the measure is relative or absolute, you may be safer assuming that it is relative). Below is a link to a great short cartoon that turns the lens a little differently and might help.

However, we will add that, in our opinion, ARR alone isn’t fully informative either, nor is its kin, the number-needed-to-treat or NNT (and for ARI, the number-needed-to-harm or NNH). A 5 percent reduction in risk may be perceived very differently when “10 people out of a hundred benefit with one intervention compared to 5 with placebo” than in a different scenario in which “95 people out of a hundred benefit with one intervention as compared to 90 with placebo.” As a patient, I might be less likely to want to expose myself to side effects if it is highly likely I am going to improve without treatment, for example. Providing this full information—for critically appraised studies that are deemed to be valid—may best provide patients with information that helps them make choices based on their own needs and requirements, including their values and preferences.

We think that anyone involved in health care decision-making—including the patient—is best helped by knowing the event rates for each of the groups studied—i.e., the numerators and denominators for the outcome of interest by group which comprise the 4 numbers that make up the 2 by 2 table which is used to calculate many statistics.
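The two scenarios above can be worked through numerically. In this sketch the rates are the proportions benefiting in each group, so the same 5-point absolute difference (and the same NNT of 20) corresponds to very different relative differences:

```python
def benefit_measures(treatment_rate, comparator_rate):
    """Absolute difference, relative difference, and NNT from two
    benefit rates (proportions benefiting in each group)."""
    abs_diff = treatment_rate - comparator_rate
    rel_diff = abs_diff / comparator_rate
    nnt = 1 / abs_diff
    return abs_diff, rel_diff, nnt

# 10 vs 5 per hundred benefit: relative difference is 100%
print(benefit_measures(0.10, 0.05))
# 95 vs 90 per hundred benefit: relative difference is only ~5.6%,
# yet the absolute difference and the NNT are the same as above
print(benefit_measures(0.95, 0.90))
```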

Isn’t it great when learning can be fun too? Enjoy!


Quickly Finding Reliable Evidence

Good clinical recommendations for diagnostic and therapeutic interventions incorporate evidence from reliable published research. Several online evidence-based textbooks are available to assist clinicians in making healthcare decisions. Large time lags in updating are a common problem for medical textbooks; online textbooks offer a solution to these delays.

For readers who plan to create decision support, we strongly recommend DynaMed [full disclosure: we are on the editorial board in an unpaid capacity, though a few years ago we did receive a small gift]. DynaMed is a point-of-care evidence-based medical information database created by Brian S. Alper MD, MSPH, FAAFP. It continues to grow beyond its current 30,000+ clinical topics, which are updated frequently. DynaMed monitors the content of more than 500 medical journals and systematic evidence review databases. Each item is thoroughly reviewed for clinical relevance and scientific reliability. DynaMed has been compared with several products, including in a new review by McMaster University. The DynaMed website is

McMaster University maintains the Premium Literature Service (PLUS) database, a continuously updated, searchable database of primary studies and systematic reviews. Each article from over 120 high-quality clinical journals and evidence summary services is appraised by research staff for methodological quality, and articles that pass basic criteria are assessed by practicing clinicians in the corresponding discipline. Clinical ratings are based on 7-point scales, where clinical relevance ranges from 1 (“not relevant”) to 7 (“directly and highly relevant”), and newsworthiness ranges from 1 (“not of direct clinical interest”) to 7 (“useful information, most practitioners in my discipline definitely don’t know this”).

Investigators from McMaster evaluated four evidence-based textbooks—UpToDate, PIER, DynaMed and Best Practice [Jeffery 12].  For each they determined the proportion of 200 topics which had subsequent articles in PLUS with findings different from those reported in the topics. They also evaluated the number of topics available in each evidence-based textbook compared with the topic coverage in the PLUS database, and the recency of updates for these publications.  A topic was in need of an update if there was at least one newer article in PLUS that provided information that differed from the topic’s recommendations in the textbook.


The proportion of topics with potential for updates was significantly lower for DynaMed than for the other three textbooks, which had statistically similar values. For DynaMed topics, updates occurred on average 170 days prior to the study, while the other textbooks averaged from 427 to 488 days. Of all the evidence-based textbooks, DynaMed missed the fewest articles reporting benefit or no effect when the direction of findings (beneficial, harmful, no effect) was investigated. The proportion of topics for which there were 1 or more recently published articles found in PLUS with evidence that differed from the textbooks’ treatment recommendations was 23% (95% CI 17 to 29%) for DynaMed, 52% (95% CI 45 to 59%) for UpToDate, 55% (95% CI 48 to 61%) for PIER, and 60% (95% CI 53 to 66%) for Best Practice (χ²₃ = 65.3, P < .001). The time since the last update for each textbook averaged from 170 days (range 131 to 209) for DynaMed to 488 days (range 423 to 554) for PIER (P < .001 across all textbooks).


Healthcare topic coverage varied substantially among leading evidence-informed electronic textbooks, and generally a high proportion of the 200 common topics had potentially out-of-date conclusions and information missing from 1 or more recently published studies. PIER had the least topic coverage, while UpToDate, DynaMed, and Best Practice covered similar, larger numbers of topics. DynaMed’s timeline for updating was the quickest, and it had by far the fewest articles in need of updating, indicating that quality was not sacrificed for speed.

Note: All textbooks have access to the PLUS database to facilitate updates, and also use other sources for updates such as clinical practice guidelines.


The proportion of topics with potentially outdated treatment recommendations in on-line evidence-based textbooks varies substantially.


Jeffery R, Navarro T, Lokker C, Haynes RB, Wilczynski NL, Farjou G. How current are leading evidence-based medical textbooks? An analytic survey of four online textbooks. J Med Internet Res. 2012 Dec 10;14(6):e175. doi: 10.2196/jmir.2105. PubMed PMID: 23220465.




Delfini Treatment Messaging Scripts™ Update

Delfini Messaging Scripts are scripts for scripts. Years ago we were asked by a consultancy pharmacy to come up with a method for creating concise evidence-based statements about various therapies. That’s how we came up with our ideas for Messaging Scripts, which are targeted treatment messaging and decision support tools for specific clinical topics. Since working with that group, we have created a template and some sample scripts, which have been favorably received wherever we have shown them. The template is available at the link below, along with several samples. Samples recently updated: ACE Inhibitors, Alendronate, Sciatica (Low Back Pain), Statins (two scripts) and Venous Thromboembolism (VTE) Prevention in Total Hip and Total Knee Replacement.


The Elephant is The Evidence—Epidural Steroids: Edited & Updated 1/7/2013

Epidural steroids are commonly used to treat sciatica (pinched spinal nerve) or low back pain.  As of January 7, 2013 at least 40 deaths have been linked to fungal meningitis thought to be caused by contaminated epidural steroids, and 664 cases in 19 states have been identified with a clinical picture consistent with fungal infection [CDC]. Interim data show that all infected patients received injection with preservative-free methylprednisolone acetate (80mg/ml) prepared by New England Compounding Center, located in Framingham, MA. On October 3, 2012, the compounding center ceased all production and initiated recall of all methylprednisolone acetate and other drug products prepared for intrathecal administration.

Thousands of patients receive epidural steroids without significant side effects or problems every week. In this case, patients received steroids that were mixed by a “compounding pharmacy,” and contamination of the medication appears to have occurred during manufacture. But let’s consider other patients who received epidural steroids from uncontaminated vials. How much risk and benefit are there with epidural steroids? The real issue is their effectiveness. Yes, there are risks with epidural steroids beyond contamination—e.g., a type of headache that occurs when the dura (the sac around the spinal cord) is punctured and fluid leaks out. This causes a pressure change in the central nervous system and a headache. Bleeding is also a risk. But people with severe pain from sciatica are frequently willing to take those risks if benefits are likely. In fact, however, for many patients who receive epidural steroids the likelihood of benefit is very low. For example, patients with bone problems (spinal stenosis) rather than lumbar disc disease are less likely to benefit, as are patients who have had a long history of sciatica.

We don’t know how many of these patients were unlikely to benefit from the epidural steroids, but if the infected patients had been advised about the unproven benefits of epidural steroids in certain cases and the known risks, some patients may have chosen to avoid the injections and might be alive today. This is an example of the importance of good information as the basis for decision-making. Basing decisions on poor quality or incomplete information and intervening with unproven—yet potentially risky—treatments puts millions of people at risk every week.

Let’s look at the evidence. Recently, a fairly large, well-conducted RCT published in the British Medical Journal (BMJ) reported that there is no meaningful benefit from epidural steroid injections in patients who have had long-term sciatica (26 to 57 weeks) [Iversen]. As pointed out in an editorial, epidural steroids have been used for more than 50 years to treat low back pain and sciatica and are the most common intervention in pain clinics throughout the world [Cohen]. And yet, despite their widespread use, their efficacy for the treatment of chronic sciatica remains unproven. (We should add here that lacking good evidence of benefit often does not mean a treatment does not work.) Iversen et al conclude that, “Caudal epidural steroid or saline injections are not recommended for chronic lumbar radiculopathy” [Iversen].

Of more than 30 controlled studies evaluating epidural steroid injections, approximately half report some benefit. Systematic reviews also report conflicting results. Reasons for these discrepancies include differences in study quality, treatments, comparisons, co-interventions, study duration and patient selection. Results appear to be better for people with short-term sciatica, but epidural steroids should not be considered curative. In this situation, it is very important that patients understand this fuzzy benefit-to-risk ratio. For many who are completely informed, the decision will be to avoid the risk.

With this recent problem of fungal meningitis from epidural steroids, it is important for patients to be informed about the world of uncertainty that surrounds risk, especially when science tells us that the evidence for benefit is not strong. Since health care professionals frequently act as the eyes of the patient, we must seriously consider, for every intervention we offer, whether benefits clearly outweigh potential harms—and we must help patients understand details regarding the risks and benefits and be supportive when patients are “on the fence” about having a procedure. Remember Vioxx, arthroscopic lavage, vertebroplasty, encainide and flecainide, Darvon and countless other promising new drugs and other interventions? They seemed promising, but harms outweighed benefits for many patients.


1. accessed 12/10/12

2.  Cohen SP. Epidural steroid injections for low back pain. BMJ. 2011 Sep 13;343:d5310. doi: 10.1136/bmj.d5310. PubMed PMID: 21914757.

3.  Iversen T, Solberg TK, Romner B, et al.   Effect of caudal epidural steroid or saline injection  in chronic lumbar radiculopathy: multicentre, blinded, randomised controlled trial. BMJ. 2011 Sep 13;343:d5278. doi: 10.1136/bmj.d5278. PubMed PMID: 21914755.



Divulging Information to Patients With Poor Prognoses

We have seen several instances where our colleagues’ families were given very little prognostic information by their physicians in situations where important decisions involving benefits versus harms, quality of life and other end-of-life issues had to be made. In these cases, when a clinician in the family presented the evidence and prognostic information, decisions were altered.

We were happy to see a review of this topic by Mack and Smith in a recent issue of the Journal of Clinical Oncology [1]. In a nutshell the authors point out that—

  • Evidence consistently shows that healthcare professionals are hesitant to divulge prognostic information due to several underlying misconceptions. Examples of misconceptions—
    • Prognostic information will make patients depressed
    • It will take away hope
    • We can’t be sure of the patient’s prognosis anyway
    • Discussions about prognosis are uncomfortable
  • Many patients are denied discussion about code status, advance medical directives, or even hospice until there are no more treatments to give and little time left for the patient
  • Many patients lose important time with their families and spend more time in the hospital and in intensive care units than they would have if prognostic information had been provided and different decisions had been made.

Patients and families want the prognostic information they need to make decisions that are right for them. This, together with the lack of evidence that discussing prognosis causes depression, shortens life, or takes away hope, and the huge problem of unnecessary interventions at the end of life, creates a strong argument for honest communication about poor prognoses.


1. Mack JW, Smith TJ. Reasons why physicians do not have discussions about poor prognosis, why it matters, and what can be improved. J Clin Oncol. 2012 Aug 1;30(22):2715-7. Epub 2012 Jul 2. PubMed PMID: 22753911.

