Genomic Medicine Leaps Forward—More Drugs Targeting More Cancers



We, like others, have been watching to see how genetic information will improve health outcomes (genomic medicine). Recently we encountered two pieces worth reading. The first describes the NCI Molecular Analysis for Therapy Choice (MATCH) program, which will conduct small phase II trials enrolling adults with advanced solid tumors and lymphomas whose tumors are no longer responding to standard therapy and have begun to grow. Subjects will receive drugs targeting specific genetic abnormalities common across cancers. What is unique is that DNA sequencing will be used to identify individuals whose tumors, of various types, have specific genetic abnormalities that may respond to selected targeted drugs. Study arms ("baskets") are created by genetic abnormality rather than by cancer type, and multiple drugs can be studied. Details are available on the NCI website.

The second piece, “A Faster Way to Try Many Drugs on Many Cancers” by Gina Kolata, published in the New York Times, provides examples of clinical trials with basket designs, often referred to as “basket trials” because patients are grouped by genetic abnormality rather than by cancer type.
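The grouping logic behind a basket design can be sketched in a few lines. The patients, cancer types, and mutations below are entirely hypothetical; the point is only that arms are formed by abnormality, not by tumor site.

```python
from collections import defaultdict

# Hypothetical patients: (patient_id, cancer_type, genetic_abnormality)
patients = [
    ("P1", "lung",     "BRAF V600E"),
    ("P2", "melanoma", "BRAF V600E"),
    ("P3", "colon",    "HER2 amplification"),
    ("P4", "gastric",  "HER2 amplification"),
    ("P5", "lung",     "BRAF V600E"),
]

# A basket trial assigns patients to study arms by genetic abnormality,
# so one "basket" mixes several cancer types.
baskets = defaultdict(list)
for pid, cancer, mutation in patients:
    baskets[mutation].append((pid, cancer))

for mutation, members in baskets.items():
    print(mutation, "->", members)
```

Note that each basket deliberately spans tumor sites, which is what lets a single targeted drug be evaluated across cancers.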

Delfini Comment
These trials will rely on surrogate markers (progression-free survival and response rates), but may be useful if effect sizes are large. Investigators are interested in these trials because they can be done rapidly and are not constrained by many of the requirements of RCTs. You can quickly get the idea of basket trial designs by looking at the NCI program description above and the FDA materials. The FDA appears to be supportive of these initiatives and has created a PowerPoint slide deck with additional information about basket trials, including specific cancers and drugs.


Network Meta-analyses—More Complex Than Traditional Meta-analyses



Meta-analyses are important tools for synthesizing evidence from relevant studies. One limitation of traditional meta-analyses is that they can compare only 2 treatments at a time in what is often termed pairwise or direct comparisons. An extension of traditional meta-analysis is the “network meta-analysis” which has been increasingly used—especially with the rise of the comparative effectiveness movement—as a method of assessing the comparative effects of more than two alternative interventions for the same condition that have not been studied in head-to-head trials.

A network meta-analysis synthesizes direct and indirect evidence over an entire network of interventions, including interventions that have not been directly compared in clinical trials but share a common comparator.

For example: a clinical trial reports that, for a given condition, intervention A results in better outcomes than intervention B. Another trial reports that intervention B is better than intervention C. A network meta-analysis may then conclude, based on this indirect evidence, that intervention A results in better outcomes than intervention C.
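The arithmetic behind such an indirect comparison can be sketched with the commonly used adjusted indirect comparison (Bucher) method: on an additive scale such as the log odds ratio, the two direct effects add, and so do their variances. The effect sizes below are hypothetical.

```python
import math

# Hypothetical direct estimates on the log odds ratio scale:
# A vs B from one trial, B vs C from another.
log_or_ab, se_ab = -0.40, 0.15   # A better than B
log_or_bc, se_bc = -0.30, 0.20   # B better than C

# Bucher adjusted indirect comparison: effects add on the log scale,
# and the variances (squared standard errors) add as well.
log_or_ac = log_or_ab + log_or_bc
se_ac = math.sqrt(se_ab**2 + se_bc**2)

# 95% confidence interval for the indirect A-vs-C estimate.
lo = log_or_ac - 1.96 * se_ac
hi = log_or_ac + 1.96 * se_ac
print(f"indirect log OR (A vs C): {log_or_ac:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because the variances add, the indirect standard error is always larger than either direct one, which is one statistical reason indirect evidence is weaker than direct evidence.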

Network meta-analyses, also known as “multiple-treatments meta-analyses” or “mixed-treatment comparisons meta-analyses” include both direct and indirect evidence. When both direct and indirect comparisons are used to estimate treatment effects, the comparison is referred to as a “mixed comparison.” The indirect evidence in network meta-analyses is derived from statistical inference which requires many assumptions and modeling. Therefore, critical appraisal of network meta-analyses is more complex than appraisal of traditional meta-analyses.

In all meta-analyses, clinical and methodological differences between studies are likely to be present. Investigators should include only valid trials, and they should provide sufficient detail so that readers can assess the quality of the meta-analysis. These details include important variables such as PICOTS (population, intervention, comparator, outcomes, timing and study setting) and heterogeneity in any important study performance items or other contextual issues, such as important biases, unique care experiences, adherence rates, etc. In addition, the effect sizes in direct comparisons should be compared to the effect sizes in indirect comparisons, since indirect comparisons require statistical adjustments. Inconsistency between direct and indirect comparisons may be due to chance, bias or heterogeneity. Remember, in direct comparisons the data come from the same trial; indirect comparisons utilize data from separate randomized controlled trials, which may vary in both clinical and methodological details.

Estimates of effect in a direct comparison trial may be lower than estimates of effect derived from indirect comparisons. Therefore, evidence from direct comparisons should be weighted more heavily than evidence from indirect comparisons in network meta-analyses. The combination of direct and indirect evidence in mixed treatment comparisons may be more likely to result in distorted estimates of effect size if there is inconsistency between effect sizes of direct and indirect comparisons.

Network meta-analyses usually rank treatments according to the probability of each being the best treatment. Readers should be aware that these rankings may be misleading: differences between treatments may be quite small, and the rankings may be inaccurate if the quality of the meta-analysis is not high.

Delfini Comment
Network meta-analyses do provide more information about the relative effectiveness of interventions. At this time, we remain a bit cautious about the quality of many network meta-analyses because of the need for statistical adjustments. It should be emphasized that, as of this writing, methodological research has not established a preferred method for conducting network meta-analyses, assessing them for validity or assigning them an evidence grade.

Li T, Puhan MA, Vedula SS, Singh S, Dickersin K; Ad Hoc Network Meta-analysis Methods Meeting Working Group. Network meta-analysis-highly attractive but more methodological research is needed. BMC Med. 2011 Jun 27;9:79. doi: 10.1186/1741-7015-9-79. PubMed PMID: 21707969.

Salanti G, Del Giovane C, Chaimani A, Caldwell DM, Higgins JP. Evaluating the quality of evidence from a network meta-analysis. PLoS One. 2014 Jul 3;9(7):e99682. doi: 10.1371/journal.pone.0099682. eCollection 2014. PubMed PMID: 24992266.


American College of Cardiology/American Heart Association Guidelines: Numbers-Needed-to-Treat (NNTs) for Statin Treatment in Primary Prevention of Cardiovascular Disease (CVD)



Following publication of the November 2013 American College of Cardiology/American Heart Association (ACC/AHA) guideline [1], concern was expressed that, in the area of primary prevention of CVD, the guideline's 10-year risk estimates were overestimates [2]. Furthermore, the ACC/AHA criteria could result in more than 45 million middle-aged Americans without cardiovascular disease being recommended for consideration of statin therapy.

While the amount of risk overestimation is still being debated, Alper and Drabkin of DynaMed have created very useful decision support based on their evaluation of the most current and reliable systematic reviews available for estimating the effects of statins in individuals with various 10-year risks [3].

The risk estimates below, which provide NNTs over 5 years for the use of statins by individual risk level, will prove quite useful for individual decision-making. More detailed information regarding the evidence on statins in preventing CVD events is available on the DynaMed website [4].

  • At an estimated 7.5% 10-year risk, the 5-year NNT was 108 for CVD events, 186 for MI, and 606 for stroke.
  • At 15% 10-year risk, 5-year NNTs were 54 for CVD events, 94 for MI, 204 for stroke, and 334 for overall mortality.
  • At 20% 10-year risk, 5-year NNTs were 40 for CVD events, 70 for MI, 228 for stroke, and 250 for overall mortality.
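Recall that the NNT is simply the reciprocal of the absolute risk reduction (ARR), so each reported NNT implies an absolute benefit. A minimal sketch, using the 5-year CVD-event NNTs reported above:

```python
# NNT is the reciprocal of the absolute risk reduction (ARR), and vice versa.
def nnt(arr):
    return 1.0 / arr

def arr_from_nnt(n):
    return 1.0 / n

# Reported 5-year NNTs for CVD events, keyed by 10-year baseline risk.
reported = {0.075: 108, 0.15: 54, 0.20: 40}

for risk, n in reported.items():
    # e.g., NNT 108 implies an absolute risk reduction of about 0.9% over 5 years.
    print(f"10-yr risk {risk:.1%}: NNT {n} -> implied 5-yr ARR {arr_from_nnt(n):.2%}")
```

Working backward like this is a quick check on whether a reported NNT corresponds to a clinically meaningful absolute benefit for a given patient.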

1. Stone NJ, Robinson J, Lichtenstein AH et al. 2013 ACC/AHA Guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2013. [Epub ahead of print] [PMID: 24239923]

2. Ridker PM, Cook NR. Statins: new American guidelines for prevention of cardiovascular disease. Lancet. 2013 Nov 30;382(9907):1762-5. doi: 10.1016/S0140-6736(13)62388-0. Epub 2013 Nov 20. PubMed PMID: 24268611.

3. Click on the Comments tab on the DynaMed website.

4. Search “statins” on the DynaMed website.





What are Patient-Years?

A participant at one of our recent conferences asked a good question—“What are patient-years?”

“Person-years” is a statistic for expressing incidence rates—the number of events divided by the total person-time at risk. In many studies the length of exposure to the treatment differs between subjects, and the patient-year statistic is one way of dealing with this issue.

Events per patient-year is the number of incident cases divided by the amount of person-time at risk. When all subjects are followed for the same period, multiply the number of patients in the group by the years they are in the study to obtain the patient-years (the denominator), then divide the number of events (the numerator) by that denominator.

  • Example: 100 patients are followed for 2 years. In this case, there are 200 patient-years of follow-up.
  • If there were 8 myocardial infarctions in the group, the rate would be 8 MIs per 200 patient-years, or 4 MIs per 100 patient-years.

The rate can be expressed in various ways, e.g., per 100, 1,000, 100,000, or 1 million patient-years. In some cases, authors report the average follow-up period as the mean and others use the median, which may result in some variation in results between studies.

Another example: assume a study follows four subjects for 1, 2, 3 and 4 years respectively, with one event occurring at 1 year and one at 4 years and none in between. This can be expressed as 2 events per 10 person-years (1+2+3+4=10), an event rate of 0.2 per person-year.
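Both examples above reduce to the same computation: sum each subject's follow-up time, then divide events by that total. A minimal sketch:

```python
# Person-time: sum each subject's follow-up; rate = events / person-years.
def event_rate_per_100_py(events, followup_years):
    person_years = sum(followup_years)
    return 100.0 * events / person_years

# First example: 100 patients each followed for 2 years = 200 patient-years;
# 8 MIs -> 4 per 100 patient-years.
rate1 = event_rate_per_100_py(8, [2] * 100)

# Varying follow-up: four subjects observed for 1, 2, 3 and 4 years
# (10 person-years); 2 events -> 0.2 per person-year = 20 per 100 patient-years.
rate2 = event_rate_per_100_py(2, [1, 2, 3, 4])

print(rate1, rate2)
```

The same function handles uniform and varying follow-up, which is precisely why person-time denominators are used when exposure lengths differ.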

An important issue is that frequently the timeframe for observation in studies reporting patient-years does not match the timeframe stated in the study. Brian Alper of Dynamed explains it this way: “If I observed a million people for 5 minutes each and nobody died, any conclusion about mortality over 1 year would be meaningless. This problem occurs whether or not we translate our outcome into a patient-years measure. The key in critical appraisal is to catch the discrepancy between timeframe of observation and timeframe of conclusion and not let the use of ‘patient-years’ mistranslate between the two or represent an inappropriate extrapolation.”[1]


1. Personal communication 9/3/13 with Brian S. Alper, MD, MSPH, FAAFP, Editor-in-Chief, DynaMed, Medical Director, EBSCO Information Services.


Review of Bias In Diabetes Randomized Controlled Trials



Healthcare professionals must evaluate the internal validity of randomized controlled trials (RCTs) as a first step in the process of considering the application of clinical findings (results) for particular patients. Bias has been repeatedly shown to increase the likelihood of distorted study results, frequently favoring the intervention.

Readers may be interested in a new systematic review of diabetes RCTs. Risk of bias (low, unclear, or high) was assessed in 142 trials using the Cochrane Risk of Bias Tool. Overall, 69 trials (49%) had at least one of seven domains at high risk of bias. Inadequate reporting frequently hampered the risk of bias assessment: the method of producing the allocation sequence was unclear in 82 trials (58%), and allocation concealment was unclear in 78 trials (55%). There were no significant reductions over time in the proportion of studies at high risk of bias, nor in the adequacy of reporting of risk of bias domains. The authors conclude that these trials have serious limitations that put the findings in question and therefore inhibit evidence-based quality improvement (QI). There is a need to limit the potential for bias when conducting QI trials and to improve the quality of reporting of QI trials so that stakeholders have adequate evidence for implementation. The entire study is freely available.

Ivers NM, Tricco AC, Taljaard M, Halperin I, Turner L, Moher D, Grimshaw JM. Quality improvement needed in quality improvement randomised trials: systematic review of interventions to improve care in diabetes. BMJ Open. 2013 Apr 9;3(4). doi:pii: e002727. 10.1136/bmjopen-2013-002727. Print 2013. PubMed PMID: 23576000.





Reliable Clinical Guidelines—Great Idea, Not-Such-A-Great Reality

Although clinical guideline recommendations about managing a given condition may differ, guidelines are, in general, considered to be important sources for individual clinical decision-making, protocol development, order sets, performance measures and insurance coverage. The Institute of Medicine [IOM] has created important recommendations that guideline developers should pay attention to—

  1. Transparency;
  2. Management of conflict of interest;
  3. Guideline development group composition;
  4. How the evidence review is used to inform clinical recommendations;
  5. Establishing evidence foundations for making strength-of-recommendation ratings;
  6. Clear articulation of recommendations;
  7. External review; and
  8. Updating.

Investigators recently evaluated 114 randomly chosen guidelines against a selection of the IOM standards and found poor adherence [Kung 12]. The overall median number of IOM standards satisfied was only 8 of 18 (44.4%). The group also found that subspecialty societies tended to satisfy fewer IOM methodological standards. This suggests there has been no change in guideline quality over the past decade and a half, since an earlier study found similar results [Shaneyfelt 99]. This finding is likely to leave end-users uncertain as to how best to incorporate clinical guidelines into clinical practice and care improvements. Further, Kung's study found that few guideline groups included information scientists (individuals skilled in critical appraisal of the evidence to determine the reliability of results), and even fewer included patients or patient representatives.

An editorialist suggests that currently there are 5 things we need [Ransohoff]. We need:

1. An agreed-upon transparent, trustworthy process for developing ways to evaluate clinical guidelines and their recommendations.

2. A reliable method to express the degree of adherence to each IOM or other agreed-upon standard and a method for creating a composite measure of adherence.

From these two steps, we must create a “total trustworthiness score” which reflects adherence to all standards.

3. To accept that our current processes for developing trustworthy measures are a work in progress. Therefore, stakeholders must actively participate in accomplishing these five tasks.

4. To identify an institutional home that can sustain the process of developing measures of trustworthiness.

5. To develop a marketplace for trustworthy guidelines. Ratings should be displayed alongside each recommendation.

At this time, we have to agree with Shaneyfelt, who wrote an accompanying commentary to Kung's study [Shaneyfelt 12]:

What will the next decade of guideline development be like? I am not optimistic that much will improve. No one seems interested in curtailing the out-of-control guideline industry. Guideline developers seem set in their ways. I agree with the IOM that the Agency for Healthcare Research and Quality (AHRQ) should require guidelines to indicate their adherence to development standards. I think a necessary next step is for the AHRQ to certify guidelines that meet these standards and allow only certified guidelines to be published in the National Guidelines Clearinghouse. Currently, readers cannot rely on the fact that a guideline is published in the National Guidelines Clearinghouse as evidence of its trustworthiness, as demonstrated by Kung et al. I hope efforts by the Guidelines International Network are successful, but until then, in guidelines we cannot trust.


1. IOM: Graham R, Mancher M, Wolman DM,  et al; Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Board on Health Care Services.  Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press; 2011

2. Kung J, Miller RR, Mackowiak PA. Failure of Clinical Practice Guidelines to Meet Institute of Medicine Standards: Two More Decades of Little, If Any, Progress. Arch Intern Med. 2012 Oct 22:1-6. doi: 10.1001/2013.jamainternmed.56. [Epub ahead of print] PubMed PMID: 23089902.

3.  Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA. 2013 Jan 9;309(2):139-40. doi: 10.1001/jama.2012.156703. PubMed PMID: 23299601.

4. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA. 1999 May 26;281(20):1900-5. PubMed PMID: 10349893.

5. Shaneyfelt T. In Guidelines We Cannot Trust: Comment on “Failure of Clinical Practice Guidelines to Meet Institute of Medicine Standards”. Arch Intern Med. 2012 Oct 22:1-2. doi: 10.1001/2013.jamainternmed.335. [Epub ahead of print] PubMed PMID: 23089851.


Interesting Comparative Effectiveness Research (CER) Case Study: “Real World Data” Hypothetical Migraine Case and Lack of PCORI Endorsement



In the October issue of Health Affairs, the journal’s editorial team created a fictional set of clinical trials and observational studies to see what various stakeholders would say about comparative effectiveness evidence of two migraine drugs.[1]

The hypothetical set-up is this:

The newest drug, Hemikrane, is FDA approved and has recently come on the market. In clinical trials it was reported to reduce both the frequency and the severity of migraine headaches, and it is taken once a week. The FDA approved Hemikrane based on two randomized, double-blind, controlled clinical trials, each of which had three arms.

  • In one arm, patients who experienced multiple migraine episodes each month took Hemikrane weekly.
  • In another arm, a comparable group of patients received a different migraine drug, Cephalal, a drug which was reported to be effective in earlier, valid studies. It is taken daily.
  • In a third arm, another equivalent group of patients received placebos.

Each study was powered to detect a difference between Hemikrane and placebo, if one existed, and to determine whether Hemikrane was at least as effective as Cephalal. Each of the two randomized studies enrolled approximately 2,000 patients and lasted six months. They excluded patients with uncontrolled high blood pressure, diabetes, heart disease, or kidney dysfunction. The patients received their care at a number of academic centers and clinical trial sites. All patients submitted daily diaries recording their migraine symptoms and any side effects.

Hypothetical Case Study Findings: The trials reported that the patients who took Hemikrane had a clinically significant reduction in the frequency, severity, and duration of headaches compared to placebo, but not to Cephalal.

The trials were not designed to evaluate the comparative safety of the drugs, but there were no safety signals from the Hemikrane patients, although a small number of patients on the drug experienced nausea.

Although the above studies reported efficacy of Hemikrane in a controlled environment with highly selected patients, they did not assess patient experience in a real-world setting. Does once weekly dosing improve adherence in the real world? The monthly cost of Hemikrane to insurers is $200, whereas Cephalal costs insurers $150 per month. (In this hypothetical example, the authors assume that copayments paid by patients are the same for all of these drugs.)

A major philanthropic organization with an interest in advancing treatments for migraine sufferers funded a collaboration among researchers at Harvard; a regional health insurance company, Trident Health; and, Hemikrane’s manufacturer, Aesculapion. The insurance company, Trident Health, provided access to a database of five million people, which included information on medication use, doctor visits, emergency department evaluations and hospitalizations. Using these records, the study identified a cohort of patients with migraine who made frequent visits to doctors or hospital emergency departments. The study compared information about patients receiving Hemikrane with two comparison groups: a group of patients who received the daily prophylactic regimen with Cephalal, and a group of patients receiving no prophylactic therapy.

The investigators attempted to confirm the original randomized trial results by assessing the frequency with which all patients in the study had migraine headaches. Because the database did not contain a diary of daily symptoms, which had been collected in the trials, the researchers substituted as a proxy the amount of medications such as codeine and sumatriptan (Imitrex) that patients had used each month for treatment of acute migraines. The group receiving Hemikrane had lower use of these symptom-oriented medications than those on Cephalal or on no prophylaxis and had fewer emergency department visits than those taking Cephalal or on no prophylaxis.

Although the medication costs were higher for patients taking Hemikrane because of its higher monthly drug cost, the overall episode-of-care costs were lower than for the comparison group taking Cephalal. As hypothesized, the medication adherence was higher in the once-weekly Hemikrane patients than in the daily Cephalal patients (80 percent and 50 percent, respectively, using the metric of medication possession ratio, which is the number of days of medication dispensed as a percentage of 365 days).
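The medication possession ratio used above is a simple adherence metric: days of medication dispensed as a fraction of the observation window. A minimal sketch, with day counts chosen to reproduce the hypothetical 80% and 50% figures:

```python
# Medication possession ratio (MPR): days of medication dispensed
# divided by the days in the observation window (here, 365).
def mpr(days_dispensed, window_days=365):
    return days_dispensed / window_days

# Hypothetical dispensing totals matching the case study's adherence figures.
hemikrane_mpr = mpr(292)    # 292 / 365 = 0.80
cephalal_mpr = mpr(182.5)   # 182.5 / 365 = 0.50

print(f"Hemikrane MPR {hemikrane_mpr:.0%}, Cephalal MPR {cephalal_mpr:.0%}")
```

Note that MPR is a proxy computed from pharmacy records; it measures possession of medication, not whether patients actually took it.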

The investigators were concerned that the above findings might be due to the unique characteristics of Trident Health’s population of covered patients, regional practice patterns, copayment designs for medications, and/or the study’s analytic approach. They also worried that the results could be confounded by differences in the patients receiving Hemikrane, Cephalal, or no prophylaxis. One possibility, for example, was that patients who experienced the worst migraines might be more inclined to take or be encouraged by their doctors to take the new drug, Hemikrane, since they had failed all previously available therapies. In that case, the results for a truly matched group of patients might have shown even more pronounced benefit for Hemikrane.

To see if the findings could be replicated, the investigators contacted the pharmacy benefit management company, BestScripts, that worked with Trident Health, and asked for access to additional data. A research protocol was developed before any data were examined. Statistical adjustments were made to balance the three groups to be studied as well as possible—those taking Hemikrane, those taking Cephalal, and those on no prophylaxis—using a propensity score method, which included age, sex, number of previous migraine emergency department visits, type and extent of prior medication use, and selected comorbidities to estimate the probability of a person's being in one of the three groups.
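The core of a propensity score method is fitting a model of treatment assignment from covariates and using the fitted probabilities to balance groups. The following is a minimal sketch, not the study's actual model: two groups instead of three, two hypothetical covariates, and a hand-rolled logistic regression, with treatment assignment deliberately skewed toward sicker patients to mimic confounding by indication.

```python
import math
import random

random.seed(0)

# Hypothetical standardized covariates: age and prior migraine ED visits.
# Sicker patients are made more likely to receive the new drug.
data = []
for _ in range(500):
    age = random.gauss(0, 1)
    ed_visits = random.gauss(0, 1)
    p_treat = 1 / (1 + math.exp(-(0.8 * ed_visits + 0.3 * age)))
    treated = 1 if random.random() < p_treat else 0
    data.append(([1.0, age, ed_visits], treated))  # leading 1.0 = intercept

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Fit logistic regression P(treated | covariates) by gradient ascent on the
# log-likelihood; each subject's fitted probability is their propensity score.
w = [0.0, 0.0, 0.0]
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for x, t in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j in range(3):
            grad[j] += (t - p) * x[j]
    w = [wi + 0.05 * g / len(data) for wi, g in zip(w, grad)]

scores = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data]
n_treated = sum(t for _, t in data)
mean_t = sum(s for s, (_, t) in zip(scores, data) if t) / n_treated
mean_c = sum(s for s, (_, t) in zip(scores, data) if not t) / (len(data) - n_treated)
print(f"mean propensity score: treated {mean_t:.2f}, untreated {mean_c:.2f}")
```

The gap between the two group means illustrates the point in the Delfini Comment below: the scores can only adjust for covariates that were measured and modeled, so unmeasured differences between groups remain a source of bias.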

The pharmacy benefit manager, BestScripts, had access to data covering more than fifty million lives. The findings in this second, much larger, database corroborated the earlier assessment. The once-weekly prophylactic therapy with Hemikrane clearly reduced the use of medications such as codeine to relieve symptoms, as well as emergency department visits compared to the daily prophylaxis and no prophylaxis groups. Similarly, the Hemikrane group had significantly better medication adherence than the Cephalal group. In addition, BestScripts had data from a subset of employers that collected work loss information about their employees. These data showed that patients on Hemikrane were out of work for fewer days each month than patients taking Cephalal.

In a commentary, Joe Selby, executive director of the Patient-Centered Outcomes Research Institute (PCORI), and colleagues provided a list of problems with these real world studies including threats to validity. They conclude that these hypothetical studies would be unlikely to have been funded or communicated by PCORI.[2]

Below are several of the problems identified by Selby et al.

  • Selection Bias
    • Patients and clinicians may have tried the more familiar, less costly Cephalal first and switched to Hemikrane only if Cephalal failed to relieve symptoms, making the Hemikrane patients a group, who on average, would be more difficult to treat.
    • Those patients who continued using Cephalal may be a selected group who tolerate the treatment well and perceived a benefit.
    • Even if the investigators had conducted the study with only new users, it is plausible that patients prescribed Hemikrane could differ from those prescribed Cephalal. They may be of higher socioeconomic status, have better insurance coverage with lower copayments, have different physicians, or differ in other ways that could affect outcomes.
  • Performance biases or other differences between groups are possible.
  • Details of any between-group differences found in these exploratory analyses should have been presented.

Delfini Comment

These two articles are worth reading if you are interested in the difficult area of evaluating observational studies and including them in comparative effectiveness research (CER). We would add that to know if drugs really work, valid RCTs are almost always needed. In this case we don’t know if the studies were valid, because we don’t have enough information about the risk of selection, performance, attrition and assessment bias and other potential methodological problems in the studies. Database studies and other observational studies are likely to have differences in populations, interventions, comparisons, time treated and clinical settings (e.g., prognostic variables of subjects, dosing, co-interventions, other patient choices, bias from lack of blinding) and adjusting for all of these variables and more requires many assumptions. Propensity scores do not reliably adjust for differences. Thus, the risk of bias in the evidence base is unclear.

This case illustrates the difficulty of making coverage decisions for new drugs that may offer advantages for some patients: several studies report benefit compared to placebo, but we already have established treatment agents with safety records. In addition, new drugs frequently are found to cause adverse events over time.

Observational data is frequently very valuable. It can be useful in identifying populations for further study, evaluating the implementation of interventions, generating hypotheses, and identifying current condition scenarios (e.g., who, what, where in QI project work; variation, etc.). It is also useful in providing safety signals and for creating economic projections (e.g., balance sheets, models). In this hypothetical set of studies, however, we have only gray zone evidence about efficacy from both RCTs and observational studies and almost no information about safety.

Much of the October issue of Health Affairs is taken up with other readers’ comments. Those of you interested in the problems with real world data in CER activities will enjoy reading how others reacted to these hypothetical drug studies.


1. Dentzer S; the Editorial Team of Health Affairs. Communicating About Comparative Effectiveness Research: A Health Affairs Symposium On The Issues. Health Aff (Millwood). 2012 Oct;31(10):2183-2187. PubMed PMID: 23048094.

2. Selby JV, Fleurence R, Lauer M, Schneeweiss S. Reviewing Hypothetical Migraine Studies Using Funding Criteria From The Patient-Centered Outcomes Research Institute. Health Aff (Millwood). 2012 Oct;31(10):2193-2199. PubMed PMID: 23048096.




The Elephant is The Evidence—Epidural Steroids: Edited & Updated 1/7/2013

Epidural steroids are commonly used to treat sciatica (pinched spinal nerve) or low back pain.  As of January 7, 2013 at least 40 deaths have been linked to fungal meningitis thought to be caused by contaminated epidural steroids, and 664 cases in 19 states have been identified with a clinical picture consistent with fungal infection [CDC]. Interim data show that all infected patients received injection with preservative-free methylprednisolone acetate (80mg/ml) prepared by New England Compounding Center, located in Framingham, MA. On October 3, 2012, the compounding center ceased all production and initiated recall of all methylprednisolone acetate and other drug products prepared for intrathecal administration.

Thousands of patients receive epidural steroids without significant side effects or problems every week. In this case, patients received steroids mixed by a “compounding pharmacy,” and contamination of the medication appears to have occurred during manufacture. But consider the patients who received epidural steroids from uncontaminated vials: how much benefit and risk do epidural steroids carry? The real issue is their effectiveness. Yes, there are risks with epidural steroids beyond contamination—e.g., a type of headache that occurs when the dura (the sac around the spinal cord) is punctured and fluid leaks out, causing a pressure change in the central nervous system; bleeding is also a risk. But people with severe pain from sciatica are frequently willing to take those risks if benefits are likely. In fact, for many patients who receive epidural steroids the likelihood of benefit is very low. For example, patients with bone problems (spinal stenosis) rather than lumbar disc disease are less likely to benefit, as are patients with a long history of sciatica.

We don’t know how many of these patients were not likely to benefit from the epidural steroids, but if the infected patients had been advised about the unproven benefits of epidural steroids in certain cases and the known risks, some patients may have chosen to avoid the injections and possibly be alive today.  This is an example of the importance of good information as the basis for decision-making. Basing decisions on poor quality or incomplete information and intervening with unproven—yet potentially risky treatments puts millions of people at risk every week.

Let’s look at the evidence. Recently a fairly large, well-conducted RCT published in the British Medical Journal (BMJ) reported no meaningful benefit from epidural steroid injections in patients who have had long-term sciatica (26 to 57 weeks) [Iversen]. As pointed out in an editorial, epidural steroids have been used for more than 50 years to treat low back pain and sciatica and are the most common intervention in pain clinics throughout the world [Cohen]. And yet, despite their widespread use, their efficacy for the treatment of chronic sciatica remains unproven. (We should add here that lacking good evidence of benefit does not necessarily mean a treatment does not work.) Iversen et al conclude that, “Caudal epidural steroid or saline injections are not recommended for chronic lumbar radiculopathy” [Iversen].

Of more than 30 controlled studies evaluating epidural steroid injections, approximately half report some benefit. Systematic reviews also report conflicting results. Reasons for these discrepancies include differences in study quality, treatments, comparisons, co-interventions, study duration and patient selection. Results appear to be better for people with short-term sciatica, but even then epidural steroids should not be considered curative. In this situation, it is very important that patients understand this fuzzy benefit-to-risk ratio. For many who are fully informed, the decision will be to avoid the risk.

With this recent problem of fungal meningitis from epidural steroids, it is important for patients to be informed about the uncertainty that surrounds risk, especially when science tells us that the evidence for benefit is not strong. Since health care professionals frequently act as the eyes of the patient, we must seriously consider, for every intervention we offer, whether benefits clearly outweigh potential harms. We must also help patients understand the details of the risks and benefits and be supportive when patients are “on the fence” about having a procedure. Remember Vioxx, arthroscopic lavage, vertebroplasty, encainide and flecainide, Darvon and countless other promising new drugs and interventions? They seemed promising, but harms outweighed benefits for many patients.


1. accessed 12/10/12

2. Cohen SP. Epidural steroid injections for low back pain. BMJ. 2011 Sep 13;343:d5310. doi: 10.1136/bmj.d5310. PubMed PMID: 21914757.

3. Iversen T, Solberg TK, Romner B, et al. Effect of caudal epidural steroid or saline injection in chronic lumbar radiculopathy: multicentre, blinded, randomised controlled trial. BMJ. 2011 Sep 13;343:d5278. doi: 10.1136/bmj.d5278. PubMed PMID: 21914755.



A Performance Measure for Overuse? The Loosening Of Tight Control In Diabetes

Performance measures for tighter glycemic control appeared following the DCCT trial (type 1 diabetes) in 1993 and the UKPDS trial (type 2 diabetes) in 1998.[1],[2] About seven years ago, groups began recommending glycohemoglobin concentrations of less than 7%, even though clear evidence of improved net outcomes was lacking.[3]

Now, in an editorial in the online version of Archives of Internal Medicine entitled “The Other Side of Quality Improvement in Diabetes for Seniors: A Proposal for an Overtreatment Glycemic Measure,” Pogach and Aron nicely summarize this journey into overuse of hypoglycemic agents, with the result that harms probably outweigh benefits, at least for some diabetics.[4]

The authors review the ACCORD, ADVANCE and VADT trials and remind readers that tight glycemic control did not yield cardiovascular benefits in these trials and that severe hypoglycemia occurred in the intensive treatment groups of all three. Of particular concern, ACCORD was terminated early because of increased mortality in the intensive glycemic treatment group. These trials appear to have heightened concern about the risks of severe hypoglycemia in elderly patients and patients with existing cardiovascular disease. In response, the National Committee for Quality Assurance Healthcare Effectiveness Data and Information Set (HEDIS) modified its glycohemoglobin goal to less than 7% for persons younger than 65 years without cardiovascular disease or end-stage diabetes complications and established a new, more relaxed goal of less than 8% for persons 65 to 74 years of age.

Kirsh and Aron took this a step further in 2011 and proposed a glycohemoglobin concentration of less than 7.0% as a threshold measure of potential overtreatment of hyperglycemia in persons older than 65 years who are at high risk for hypoglycemia. They point out that the risk of hypoglycemia could be assessed using electronic medical record data on prescriptions for insulin and/or sulfonylurea medications, together with information on comorbidities such as chronic kidney disease, cognitive impairment or dementia, or other neurologic conditions that may interfere with a successful response to a hypoglycemic event.[5]
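To make the shape of such a measure concrete, here is a minimal sketch of how an overtreatment flag along these lines might be computed from EMR data. All field names, drug lists, and comorbidity labels are illustrative assumptions, not an actual EMR schema or a validated version of the Kirsh/Aron proposal.

```python
# Hypothetical sketch of an overtreatment screen in the spirit of the
# Kirsh/Aron proposal. Thresholds, field names, and code lists are
# illustrative assumptions only.

# Agents that raise hypoglycemia risk (insulin and sulfonylureas);
# the specific drug names here are examples, not a complete list.
HYPOGLYCEMIA_RISK_DRUGS = {"insulin", "glyburide", "glipizide", "glimepiride"}

# Comorbidities that may impair a successful response to hypoglycemia.
HIGH_RISK_COMORBIDITIES = {"chronic kidney disease", "dementia",
                           "cognitive impairment"}

def flag_potential_overtreatment(patient):
    """Return True when the record suggests possible glycemic overtreatment:
    age over 65, most recent glycohemoglobin below 7.0%, a prescription for
    insulin or a sulfonylurea, and at least one high-risk comorbidity."""
    if patient["age"] <= 65:
        return False
    if patient["a1c_percent"] >= 7.0:
        return False
    on_risky_drug = any(d in HYPOGLYCEMIA_RISK_DRUGS
                        for d in patient["medications"])
    has_comorbidity = any(c in HIGH_RISK_COMORBIDITIES
                          for c in patient["comorbidities"])
    return on_risky_drug and has_comorbidity

# Example record (fabricated for illustration).
example = {
    "age": 78,
    "a1c_percent": 6.4,
    "medications": ["insulin", "lisinopril"],
    "comorbidities": ["chronic kidney disease"],
}
print(flag_potential_overtreatment(example))  # True for this record
```

In practice, such a rule would be a screen for chart review rather than an automatic judgment, since EMR problem lists and medication data are often incomplete.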

This commentary is worth reading and thinking about. We agree that the time has come to take action to prevent overtreatment in diabetes.

[1] The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. The Diabetes Control and Complications Trial Research Group. N Engl J Med. 1993 Sep 30;329(14):977-86. PubMed PMID: 8366922.

[2] Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). UK Prospective Diabetes Study (UKPDS) Group. Lancet. 1998 Sep 12;352(9131):854-65. Erratum in: Lancet 1998 Nov 7;352(9139):1558. PubMed PMID: 9742977.

[3] Pogach L, Aron DC. Sudden acceleration of diabetes quality measures. JAMA. 2011 Feb 16;305(7):709-10. PubMed PMID: 21325188.

[4] Pogach L, Aron DC. The other side of quality improvement in diabetes for seniors: a proposal for an overtreatment glycemic measure. Arch Intern Med. Published online September 10, 2012. doi:10.1001/archinternmed.2012.4392.

[5] Kirsh SR, Aron DC. Choosing targets for glycaemia, blood pressure and low-density lipoprotein cholesterol in elderly individuals with diabetes mellitus. Drugs Aging. 2011 Dec 1;28(12):945-60. doi: 10.2165/11594750-000000000-00000. PubMed PMID: 22117094.



Best Care at Lower Cost: The Path to Continuously Learning Health Care in America

“If home building were like health care, carpenters, electricians, and plumbers each would work with different blueprints, with very little coordination.”

“If airline travel were like health care, each pilot would be free to design his or her own preflight safety check, or not to perform one at all.”

The Institute of Medicine (IOM) has just released its latest “state of our health care” report, which is well worth reading.[1] We have a long way to go before we have a true health system. The report, released September 6, 2012, concludes that our dysfunctional health care system wastes about $750 billion each year. Much of the waste stems from inefficiencies and administrative duplication, but $210 billion is due to unnecessary services (e.g., overuse and unnecessary choice of higher cost services) and $55 billion is wasted on missed primary, secondary and tertiary prevention opportunities.

Here are just a few of the interesting points and recommendations the 18 authors make:

  • The volume of the biomedical and clinical knowledge base has expanded rapidly, with research publications rising from more than 200,000 a year in 1970 to more than 750,000 in 2010;
  • We can achieve striking improvements in safety, quality, reliability, and value through the use of systematic evidence-based process improvement methods;
  • We need digital platforms supporting real-time access to knowledge;
  • We need to engage empowered patients;
  • We need full transparency in all we do;
  • We need improved decision support; improved patient-centered care through tools that deliver reliable, current clinical knowledge to the point of care; and organizations’ support for, and adoption of, incentives that encourage the use of these tools.

The pre-publication issue of this IOM report is currently available free of charge at this URL.[2]

[1] Smith M, Cassell G, Ferguson B, Jones C, Redberg R; Institute of Medicine of the National Academies. Best care at lower cost: the path to continuously learning health care in America. SEP-06.aspx.
