Sounding the Alarm (Again) in Oncology

Five years ago, Fojo and Grady sounded the alarm about the value of many new oncology drugs [1]. They raised the following issues and challenged oncologists and others to get involved in addressing them:

  • There is a great deal of uncertainty and confusion about what constitutes a benefit in cancer therapy; and,
  • How much should cost factor into these deliberations?

The authors review a number of oncology drug studies reporting increased overall survival (OS) ranging from a median of a few days to a few months, with total new drug costs ranging from $15,000 to more than $90,000. In some cases there is no increase in OS, only in progression-free survival (PFS), a weaker outcome measure because it is prone to tumor-assessment bias and is frequently assessed in studies of short duration. Adverse events associated with the new drugs are many and include higher rates of febrile neutropenia, infusion-related reactions, diarrhea, skin toxicity, infections and hypertension, among others.

Fojo and Grady point out that—

“Many Americans would likely not regard a 1.2-month survival advantage as ‘significant’ progress, the much revered P value notwithstanding. But would an individual patient agree? Although we lack the answer to this question, we would suggest that the death of a mother of four at age 37 years would be no less painful were it to occur at age 37 years and 1 month, nor would the passing of a 67-year-old who planned to travel after retiring be any less difficult for the spouse were it to have occurred 1 month later.”

In a recent article [2] (thanks to Dr. Richard Lehman for drawing our attention to this article in his wonderful BMJ blog) Fojo and colleagues again point out that—

  • Cancer is the number one cause of mortality worldwide, and cancer cases are projected to rise by 75% over the next 2 decades.
  • Of the 71 therapies for solid tumors receiving FDA approval from 2002 to 2014, only 30 (42%) met the American Society of Clinical Oncology Cancer Research Committee’s “low hurdle” criteria for clinically meaningful improvement. Further, the authors tallied results from all the studies and reported very modest collective median gains of 2.5 months for PFS and 2.1 months for OS. Numerous surveys have indicated that patients expect much more.
  • Expensive therapies are stifling progress by (1) encouraging enormous expenditures of time, money, and resources on marginal therapeutic indications; and, (2) promoting a me-too mentality that is stifling innovation and creativity.

The last bullet needs a little explaining. The authors provide a number of examples of “safe bets” and argue that revenue from such safe and profitable therapies, rather than true need, has been a driving force for new oncology drugs. The problem is compounded by regulations—e.g., rules that require Medicare to cover any drug used in an “anti-cancer chemotherapeutic regimen”—regardless of its incremental benefit over other drugs—as long as the use is “for a medically accepted indication” (commonly interpreted as “approved by the FDA”). This provides guaranteed revenues for me-too drugs irrespective of their marginal benefits. The authors also point out that when prices for drugs of proven efficacy fall below a certain threshold, suppliers often stop producing them, causing severe shortages.

What can be done? The authors acknowledge several times in their commentary that the spiraling cost of cancer therapies has no single villain; academia, professional societies, scientific journals, practicing oncologists, regulators, patient advocacy groups and the biopharmaceutical industry all bear some responsibility. [We would add to this list physicians, P&T committees and any others who are engaged in treatment decisions for patients. Patients are not on this list (yet) because they are unlikely to really know the evidence.] As in many other situations in which many are responsible, the end result is often that “no one” takes responsibility. Fojo et al. close by making several suggestions, among which are—

  1. Academicians must avoid participating in the development of marginal therapies;
  2. Professional societies and scientific journals must raise their standards and not spotlight marginal outcomes;
  3. All of us must also insist on transparency and the sharing of all published data in a timely and enforceable manner;
  4. Actual gains in benefit must be emphasized—not hazard ratios or other measures that force readers to work hard to determine actual outcomes, benefits and risks;
  5. We need cooperative groups with adequate resources to provide leadership to ensure that trials are designed to deliver meaningful outcomes;
  6. We must find a way to avoid paying premium prices for marginal benefits; and,
  7. We must find a way [federal support?] to secure altruistic investment capital.

Delfini Comment
While the authors do not suggest specific responsibilities or actions for the FDA, they do recommend that an independent entity create uniform measures of benefit for each FDA-approved drug—e.g., quality-adjusted life-years. We think the FDA could go a long way toward improving this situation.

And so, as pointed out by Fojo et al., only small gains have been made in OS over the past 12 years, and the costs of oncology drugs have skyrocketed. To make matters even worse than portrayed by Fojo et al., many of the oncology drug studies we see have major threats to validity (e.g., selection bias, lack of blinding and other performance biases, attrition and assessment bias), raising the question, “Does the approximate 2-month gain in median OS represent an overestimate?” Since bias tends to favor the new intervention in clinical trials, the PFS and OS gains reported in many recent oncology trials may be exaggerated or even absent, or harms may outweigh benefits. On the other hand, if a study is valid, some patients may choose to accept a new therapy because a median is the midpoint of a range of results and an individual patient may achieve better results than the median indicates. The important thing is that patients are given information on benefits and harms in a way that allows them to have a reasonable understanding of all the issues and make the choices that are right for them.

Resources & References

Resources

  1. The URL for Dr. Lehman’s Blog is—
    http://blogs.bmj.com/bmj/category/richard-lehmans-weekly-review-of-medical-journals/
  2. The URL for his original blog entry about this article is—
    http://blogs.bmj.com/bmj/2014/11/24/richard-lehmans-journal-review-24-november-2014/

References

  1. Fojo T, Grady C. How much is life worth: cetuximab, non-small cell lung cancer, and the $440 billion question. J Natl Cancer Inst. 2009 Aug 5;101(15):1044-8. Epub 2009 Jun 29. PMID: 19564563
  2. Fojo T, Mailankody S, Lo A. Unintended consequences of expensive cancer therapeutics—the pursuit of marginal indications and a me-too mentality that stifles innovation and creativity: the John Conley Lecture. JAMA Otolaryngol Head Neck Surg. 2014 Jul 28. doi: 10.1001/jamaoto.2014.1570. [Epub ahead of print] PMID: 25068501.

5 “A”s of Evidence-based Medicine & PICOTS: Using “Population, Intervention, Comparison, Outcomes, Timing, Setting” (PICOTS) In Evidence-Based Quality Improvement Work

Much of what we do when answering key clinical questions can be summarized using the 5 “A” EBM Framework—Ask, Acquire, Appraise, Apply and “A”s Again.[1] Key clinical questions create the focus for the work and, once formed, drive the work or project. In other words, the 5 “A”s form a scaffolding for us to use in doing EB quality improvement work of many types.

When healthcare professionals look to the medical literature for answers to various clinical questions or when planning comparative reviews, they frequently utilize checklists that employ the mnemonics PICO (population, intervention, comparison, outcome)[2], PICOTS (same as PICO with the addition of timing and setting) or, less frequently, PICOT-SD (which also includes study design).[3] PICOTS (patient population, intervention, comparison, outcomes, timing and setting) is a checklist that can remind us of important considerations in all of the 5 “A” areas.

PICOTS in Forming Key Clinical Questions and Searching

PICOTS is a useful framework for constructing key questions, but it should be applied thoughtfully, because at times not all PICOTS elements are needed to construct a useful clinical question. For example, if I am interested in the evidence regarding prevention of venous thromboembolism (VTE) in hip replacement surgery, I would want to include the population and study design and perhaps key outcomes, but I would not want to limit the question to any specific interventions in case there are useful interventions of which I am not aware. So the question might be, “What is the evidence that thromboembolism or deep vein thrombosis (DVT) prophylaxis with various agents reduces mortality and clinically significant morbidity in hip replacement surgery?” In this case, I was somewhat specific about P (the patient population—which frequently is the condition of interest—in this case, patients undergoing hip replacement surgery), less specific about O (mortality and morbidities) and not specific about I and C.

I could be even more specific about P if I specified patients at average risk for VTE or only patients at increased risk. If I were interested in the evidence about the effect of glycemic control on important outcomes in type 2 diabetes, I might pose the question as, “What is the effect of tight glycemic control on various outcomes?” and search on the terms “type 2 diabetes” AND “tight glycemic control,” which would not limit the search by outcome and so would retrieve studies reporting outcomes of which I was unaware.

Learners are frequently taught to use PICO when developing search strategies. (When actually conducting a search, we use “condition” rather than “population” because the condition is more likely to activate the MeSH headings in PubMed, which produce a search that includes key synonyms.) As illustrated above, the PICO elements chosen for the search should frequently be limited to P (the patient population or condition) and I so as to capture all outcomes that have been studied. Therefore, it is important to remember that many of your searches are best done using only one or two elements together with study design (SD) limits, such as limiting to clinical trials, in order to increase the sensitivity of your search. A sketch of what such a search might look like when run programmatically appears below.
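To make the idea concrete, here is a minimal sketch, not part of the original post, of how a two-element PubMed search with a study design limit could be run programmatically. It assumes the Biopython package (Bio.Entrez); the e-mail address and the query terms for the hip replacement/VTE example above are illustrative placeholders, not a validated search strategy.

    # Minimal sketch (assumes Biopython is installed: pip install biopython).
    # Runs a two-element PubMed search, condition AND intervention area,
    # limited to clinical trials, mirroring the "one or two elements plus
    # study design limits" advice in the text.
    from Bio import Entrez

    Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact e-mail

    query = (
        '"arthroplasty, replacement, hip"[MeSH Terms] '
        'AND "venous thromboembolism"[MeSH Terms] '
        'AND "clinical trial"[Publication Type]'
    )

    # Ask PubMed for matching record IDs (retmax caps how many IDs are returned).
    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    handle.close()

    print("Records found:", record["Count"])
    print("First PMIDs:", record["IdList"])

Because the query names only the condition and the intervention area (plus a publication-type limit), it stays broad with respect to outcomes, which is the point of keeping the search to one or two PICO elements.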

PICOTS in Assessing Studies for Validity and Synthesizing Evidence

When critically appraising studies for reliability or synthesizing evidence from multiple studies, PICOTS reminds us of the areas where heterogeneity is likely to be found. PICOTS is also useful in comparing the relevance of the evidence to our population of interest (external validity) and in creating decision support for various target groups.

PICOTS in Documenting Work

Transparency can be made easier by using PICOTS when documenting our work. You will notice that many tables found in systematic reviews and meta-analyses include PICOTS elements.

References

1. Modified by Delfini Group, LLC (www.delfini.org) from Leung GM. Evidence-based practice revisited. Asia Pac J Public Health. 2001;13(2):116-21. Review. PubMed PMID: 12597509.

2. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123:A12–3.

3. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(12)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality. April 2012. Chapters available at: www.effectivehealthcare.ahrq.gov.

 
