Canadian Knowledge Translation Website

The Knowledge Translation (KT) Clearinghouse is a useful website for EBM information and tools. It is funded by the Canadian Institutes of Health Research (CIHR), and its goal is to improve the quality of care by developing, implementing and evaluating strategies that bridge the knowledge-to-practice gap and by researching the most effective ways to translate knowledge into action. Now added to Delfini web links.

http://ktclearinghouse.ca/


Proton Beam Therapy For Prostate Cancer

As of this writing, there is insufficient evidence to conclude that proton beam therapy is more effective in treating prostate cancer than conventional radiation therapy, and there is no evidence of significant differences between the two in total serious adverse events. Readers may be interested in a recent article in which the investigators point out that patients diagnosed with prostate cancer who live in areas where proton beam therapy is readily available are more likely to be treated with this new technology than with conventional radiation therapy. The cost of treating prostate cancer with proton beam therapy can exceed $50,000 per patient, twice the cost of radiation therapy. Increasingly, we are seeing new technologies with staggering costs. In prostate cancer, for example, as we write this, proton centers are being built all over the country at a cost of up to $200 million.

Reference

Aaronson DS, Odisho AY, Hills N, Cress R, Carroll PR, Dudley RA, Cooperberg MR. Proton beam therapy and treatment for localized prostate cancer: if you build it, they will come. Arch Intern Med. 2012 Feb 13;172(3):280-3. PubMed PMID:22332166.


Centrum—Spinning the Vitamins?

Scott K. Aberegg, MD, MPH, has written an amusing and interesting blog post about a recently published randomized controlled trial (RCT) on vitamins and cancer outcomes.[1] In it, he critiques the Physicians’ Health Study II[2] and points out the following:

  • Aberegg wonders why, in a trial of 14,000 people, you would adjust for baseline variables.
  • The lay press reported a statistically significant 8% reduction in cancer among subjects taking Centrum multivitamins; the unadjusted (crude log-rank) p-value, however, was 0.05—not statistically significant.
  • The adjusted p-value of 0.04 was for the hazard ratio, which means the 8% was a relative risk reduction.
  • His own calculations reveal an absolute risk reduction of 1.2%, and a simple sensitivity analysis—adding 5 and then 10 cancers to the placebo group—moves the p-value to 0.0768 and 0.0967, demonstrating that small changes in event counts have a big impact on the p-value.
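The spirit of Aberegg's sensitivity check is easy to reproduce. The sketch below uses a simple pooled two-proportion z-test rather than the log-rank test the trial used, and the event counts are round figures we invented for illustration, not the trial's data; it shows how shifting a handful of events moves a p-value that sits near 0.05 (here the extra events go to the multivitamin arm to shrink the gap).

```python
import math

def two_prop_p(e1, n1, e2, n2):
    """Two-sided p-value from a pooled two-proportion z-test.

    e1/n1: events and sample size in group 1 (e.g., multivitamin)
    e2/n2: events and sample size in group 2 (e.g., placebo)
    """
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Illustrative counts only (about 7,000 men per arm)
n = 7000
vitamin_events, placebo_events = 1260, 1345

arr = placebo_events / n - vitamin_events / n          # absolute risk reduction
rrr = 1 - (vitamin_events / n) / (placebo_events / n)  # relative risk reduction
print(f"ARR = {arr:.3%}, RRR = {rrr:.1%}")
print(f"baseline p: {two_prop_p(vitamin_events, n, placebo_events, n):.4f}")

# Sensitivity analysis: add a few cancers to the smaller-event arm
# and watch a borderline p-value drift away from significance.
for extra in (5, 10):
    p = two_prop_p(vitamin_events + extra, n, placebo_events, n)
    print(f"p with {extra} extra cancers in the vitamin arm: {p:.4f}")
```

The point is not the exact numbers but the fragility: a handful of reclassified events out of thousands is enough to flip a "significant" finding.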

He concludes that, “…without spin, we see that multivitamins (and other supplements) create both expensive urine and expensive studies – and both just go right down the drain.”

A reminder that, had the results indeed been clinically meaningful, the next step would have been a critical appraisal to determine whether the study was valid.

Reference

[1] http://medicalevidence.blogspot.com/2012/10/a-centrum-day-keeps-cancer-at-bay.html accessed 10/25/12.

[2] Gaziano JM et al. Multivitamins in the Prevention of Cancer in Men: The Physicians’ Health Study II Randomized Controlled Trial. JAMA. 2012;308(18). doi:10.1001/jama.2012.14641.


5 “A”s of Evidence-based Medicine & PICOTS: Using “Population, Intervention, Comparison, Outcomes, Timing, Setting” (PICOTS) In Evidence-Based Quality Improvement Work

Much of what we do when answering key clinical questions can be summarized using the 5 “A” EBM Framework—Ask, Acquire, Appraise, Apply and “A”s Again.[1] Key clinical questions create the focus for the work and, once created, drive the work or project. In other words, the 5 “A”s form a scaffolding for us to use in doing EB quality improvement work of many types.

When healthcare professionals look to the medical literature for answers to clinical questions, or when planning comparative reviews, they frequently use checklists built on the mnemonics PICO (population, intervention, comparison, outcome)[2]; PICOTS (the same, with the addition of timing and setting); or, less frequently, PICOT-SD (which also includes study design).[3] PICOTS (patient population, intervention, comparison, outcomes, timing and setting) is a checklist that can remind us of important considerations in all of the 5 “A” areas.

PICOTS in Forming Key Clinical Questions and Searching

PICOTS is a useful framework for constructing key questions, but it should be applied thoughtfully, because not all PICOTS elements are needed for every useful clinical question. For example, if I am interested in the evidence regarding prevention of venous thromboembolism in hip replacement surgery, I would want to include the population and study design, and perhaps key outcomes, but I would not want to limit the question to any specific interventions in case there are useful interventions of which I am not aware. So the question might be, “What is the evidence that thromboembolism or deep vein thrombosis (DVT) prophylaxis with various agents reduces mortality and clinically significant morbidity in hip replacement surgery?” In this case, I was somewhat specific about P (the patient population—frequently the condition of interest—here, patients undergoing hip replacement surgery), less specific about O (mortality and morbidities), and not specific about I and C.

I could be even more specific about P by specifying patients at average risk for VTE or only patients at increased risk. If I were interested in the evidence about the effect of glycemic control on important outcomes in type 2 diabetes, I might pose the question as, “What is the effect of tight glycemic control on various outcomes?” and search on the terms “type 2 diabetes” AND “tight glycemic control,” which would not limit the search to outcomes I already knew about.

Learners are frequently taught to use PICO when developing search strategies. (When actually conducting a search, we use “condition” rather than “population,” because the condition is more likely to activate the MeSH headings in PubMed, which produces a search with key synonyms.) As illustrated above, the PICO elements chosen for the search should frequently be limited to P (the patient population or condition) and I, so as to capture all outcomes that have been studied. It is therefore important to remember that many of your searches are best done using only one or two elements, plus SD limits such as clinical trials, in order to increase the sensitivity of your search.
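The "one or two elements only" advice can be captured in a small helper. This is an illustrative sketch of our own (the function name and structure are not a Delfini tool); the `[Publication Type]` tag is standard PubMed field syntax for limiting to clinical trials.

```python
def build_pubmed_query(condition, intervention=None, outcome=None,
                       clinical_trials_only=True):
    """Build a deliberately sparse PubMed query: constrain only on the
    PICOTS elements you truly need, to keep the search sensitive."""
    parts = [f'"{condition}"']           # the condition, not "population"
    if intervention:
        parts.append(f'"{intervention}"')
    if outcome:                          # often best left unspecified
        parts.append(f'"{outcome}"')
    query = " AND ".join(parts)
    if clinical_trials_only:             # a study-design (SD) limit
        query += ' AND "clinical trial"[Publication Type]'
    return query

# The glycemic-control example from the text: only P and I are specified.
print(build_pubmed_query("type 2 diabetes", "tight glycemic control"))
```

Leaving O out of the query means studies of any reported outcome are retrieved, which is exactly the behavior argued for above.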

PICOTS in Assessing Studies for Validity and Synthesizing Evidence

When critically appraising studies for reliability or synthesizing evidence from multiple studies, PICOTS reminds us of the areas where heterogeneity is likely to be found. PICOTS is also useful in comparing the relevance of the evidence to our population of interest (external validity) and in creating decision support for various target groups.

PICOTS in Documenting Work

Transparency can be made easier by using PICOTS when documenting our work. You will notice that many tables found in systematic reviews and meta-analyses include PICOTS elements.

References

1. Modified by Delfini Group, LLC (www.delfini.org) from Leung GM. Evidence-based practice revisited. Asia Pac J Public Health. 2001;13(2):116-21. Review. PubMed PMID: 12597509.

2. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123:A12–3.

3. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(12)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality. April 2012. Chapters available at: www.effectivehealthcare.ahrq.gov.


Institute of Medicine CEO Checklist for High-Value Healthcare

In June 2012 the Institute of Medicine (IOM) published a checklist for healthcare CEOs as a way of encouraging further efforts toward simultaneously reducing costs and eliminating waste.[1] EBMers will find the case studies of great interest. Many of the success stories contain two key ingredients—reliable information to improve decision-making and successful implementation. The full report is available at—
http://www.iom.edu/Global/Perspectives/2012/CEOChecklist.aspx.

Foundational Elements

  • Governance priority—visible and determined leadership by CEO and board
  • Culture of continuous improvement—commitment to ongoing, real-time learning

Infrastructure Fundamentals

  • Information technology (IT) best practices—automated, reliable information to and from the point of care
  • Evidence protocols—effective, efficient, and consistent care
  • Resource utilization—optimized use of personnel, physical space, and other resources

Care Delivery Priorities

  • Integrated care—right care, right setting, right providers, right teamwork
  • Shared decision making—patient-clinician collaboration on care plans
  • Targeted services—tailored community and clinic interventions for resource-intensive patients

Reliability and Feedback

  • Embedded safeguards—supports and prompts to reduce injury and infection
  • Internal transparency—visible progress in performance, outcomes, and costs

References

1. Cosgrove D, Fisher M, Gabow P, et al. A CEO Checklist for High-Value Health Care. Discussion paper. Washington, DC: Institute of Medicine; 2012. http://www.iom.edu/Global/Perspectives/2012/CEOChecklist.aspx (accessed 08/13/2012).


CONSORT Update of Abstract Guidelines 2012

We have previously described the rationale and details of the CONSORT Statement: Consolidated Standards of Reporting Trials.[1] In brief, CONSORT is an evidence-based checklist of 25 items that need to be addressed in reports of clinical trials in order to give readers a clear picture of study quality and of the progress of all participants in the trial, from the time they are randomized until the end of their involvement. The intent is to make the experimental process clear, flawed or not, so that users can more appropriately evaluate the validity of the study and the usefulness of the results. A recent BMJ study assessed the use of the CONSORT for Abstracts guidelines in five top journals—JAMA, the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), the Lancet and the Annals of Internal Medicine.[2]

In this study, the authors checked each journal’s instructions to authors in January 2010 for any reference to the CONSORT for Abstracts guidelines (for example, reference to a publication or link to the relevant section of the CONSORT website). For those journals that mentioned the guidelines in their instructions to authors, they contacted the editor of that journal to ask when the guidance was added, whether the journal enforced the guidelines, and if so, how. They classified journals in three categories: those not mentioning the CONSORT guidelines in their instructions to authors (JAMA and NEJM); those referring to the guidelines in their instructions to authors, but with no specific policy to implement them (BMJ); and those referring to the guidelines in their instructions to authors, with a policy to implement them (Annals of Internal Medicine and the Lancet).

First surprise—JAMA and NEJM don’t even mention CONSORT in their instructions to authors. Second surprise—CONSORT published what evidologists agree are reasonable abstract requirements in 2008, but only the Annals and the Lancet now instruct authors to follow them. The study evaluated the reporting of the nine CONSORT items omitted more than 50% of the time from abstracts published across the five journals in 2006 (details of the trial design, generation of the allocation sequence, concealment of allocation, details of blinding, number randomized and number analyzed in each group, primary outcome results for each group and their effect size, harms data and funding source). The primary outcome was the mean number of these nine items reported per abstract. Overall, publication of the CONSORT guidelines alone did not lead to a significant increase in either the level of the mean number of items reported (increase of 0.3035 of nine items, P=0.16) or the trend (increase of 0.0193 items per month, P=0.21). There was, however, a significant increase in both the level of the mean number of items reported after active implementation of the guidelines (increase of 0.3882 of five items, P=0.0072) and the trend (increase of 0.0288 items per month, P=0.0025).

What follows is not really surprising—

  • After publication of the guidelines in January 2008, the authors identified a significant increase in the reporting of key items in the two journals (Annals of Internal Medicine and the Lancet) that endorsed the guidelines in their instructions to authors and had an active editorial policy to implement them. At baseline, in January 2006, the mean number of items reported per abstract was 1.52 of nine items, which increased to 2.56 of nine items during the 25 months before the intervention. In December 2009, 23 months after publication of the guidelines, the mean number of items reported per abstract for the primary outcome in the Annals of Internal Medicine and the Lancet was 5.41 items, which represented a 53% increase compared with the level expected on the basis of pre-intervention trends.
  • The authors observed no significant difference in the one journal (BMJ) that endorsed the guidelines but had no active implementation strategy, or in the two journals (JAMA, NEJM) that did not endorse the guidelines in their instructions to authors.
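The "level" and "trend" changes quoted above come from a segmented (interrupted time series) regression. The sketch below is a minimal stdlib-only illustration on made-up, noiseless monthly data; it is not the authors' model, which used the real journal data and more careful statistics.

```python
def fit_segmented(y, t0):
    """Least-squares fit of y[t] = b0 + b1*t + b2*post + b3*(t-t0)*post,
    where post = 1 for t >= t0.  b2 is the level change at the
    intervention; b3 is the change in slope (trend)."""
    n, k = len(y), 4
    X = [[1.0, float(t), float(t >= t0), float(t - t0) if t >= t0 else 0.0]
         for t in range(n)]
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination
    # with partial pivoting.
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    v = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b  # [baseline, pre-trend/month, level change, trend change/month]

# Invented series: 48 months of "mean items per abstract," intervention at
# month 24, with a jump of 0.4 items and an extra 0.03 items/month afterwards.
t0 = 24
y = [1.5 + 0.04 * t + (0.4 + 0.03 * (t - t0) if t >= t0 else 0.0)
     for t in range(48)]
b0, b1, b2, b3 = fit_segmented(y, t0)
print(f"level change = {b2:.3f} items, trend change = {b3:.3f} items/month")
```

Separating the level jump from the slope change is what lets the BMJ authors attribute improvement to active implementation rather than to a pre-existing upward trend.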

What this study shows is that, without active editorial implementation policies (i.e., requiring the use of the CONSORT guidelines), improved reporting does not happen. A rather surprising finding for us was that only two of the five top journals included in this study have active implementation policies (e.g., an email to authors at the time of revision requiring revision of the abstract according to CONSORT guidance). We have a long way to go.

More details about CONSORT, including the flow diagram, are available at— http://www.consort-statement.org/

References

1. http://www.delfini.org/delfiniClick_ReportingEvidence.htm#consort

2. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012;344:e4178.
