When Is a Measure of Outcomes Like a Coupon for a Diamond Necklace?

For those of you who struggle with the fundamental difference between absolute risk reduction (ARR) and relative risk reduction (RRR), along with their counterparts, absolute and relative risk increase (ARI/RRI), we have always explained that knowing only the RRR or the RRI, without other quantitative information about the frequency of events, is akin to knowing that a store is having a half-off sale, only to walk in and find that they aren't posting the actual prices. So your question becomes: 50 percent off of what?

The same question should greet you whenever you are provided with a relative measure (and if you aren't told whether a measure is relative or absolute, it is safer to assume that it is relative). Below is a link to a great short cartoon that turns the lens a little differently and might help.

However, we will add that, in our opinion, ARR alone isn't fully informative either, nor is its kin, the number needed to treat (NNT) or, for ARI, the number needed to harm (NNH). An absolute difference of 5 percentage points may be perceived very differently when "10 people out of a hundred benefit with one intervention compared to 5 with placebo" than when "95 people out of a hundred benefit with one intervention as compared to 90 with placebo." As a patient, I might be less willing to expose myself to side effects if I am highly likely to improve without treatment, for example. Providing this full information (for critically appraised studies that are deemed valid) may best equip patients to make choices based on their own needs and requirements, including their values and preferences.

We think that anyone involved in health care decision-making, including the patient, is best helped by knowing the event rates for each of the groups studied: that is, the numerators and denominators for the outcome of interest by group, which are the four numbers that make up the 2-by-2 table used to calculate many statistics.
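
To make the arithmetic concrete, here is a minimal sketch in Python (our illustration, not from the cartoon) that computes the absolute difference, the relative difference and the NNT from those four numbers; the two scenarios are the ones described above.

    def risk_measures(events_rx, n_rx, events_ctl, n_ctl):
        """Absolute difference, relative difference and NNT from a 2-by-2 table."""
        rate_rx = events_rx / n_rx      # event rate, intervention group
        rate_ctl = events_ctl / n_ctl   # event rate, comparator group
        abs_diff = abs(rate_rx - rate_ctl)   # ARR (or ARI)
        rel_diff = abs_diff / rate_ctl       # RRR (or RRI), relative to comparator
        nnt = 1 / abs_diff                   # NNT (or NNH)
        return abs_diff, rel_diff, nnt

    # Scenario 1: 10 of 100 benefit with treatment vs. 5 of 100 with placebo.
    # Scenario 2: 95 of 100 benefit with treatment vs. 90 of 100 with placebo.
    for rx, ctl in [(10, 5), (95, 90)]:
        abs_d, rel_d, nnt = risk_measures(rx, 100, ctl, 100)
        print(f"{rx} vs {ctl} per 100: absolute {abs_d:.0%}, "
              f"relative {rel_d:.0%}, NNT {nnt:.0f}")

Both scenarios share the same 5-percentage-point absolute difference and an NNT of 20, yet the relative difference is 100% in the first and only about 6% in the second, which is why neither measure alone tells the whole story.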

Isn't it great when learning can be fun too? Enjoy!

http://www.ibtimes.com/articles/347476/20120531/relative-risk-absolute-comic-health-medical-reporting.htm

California Pharmacist Journal: Student Evidence Review of NAVIGATOR Study

Klevens A, Stuart ME, Strite SA. NAVIGATOR (Effect of Nateglinide on the Incidence of Diabetes and Cardiovascular Events; PMID: 20228402) Study Evidence Review. California Pharmacist 2012; Vol. LIX, No. 4, Fall 2012. Available at our California Pharmacist journal page.

California Pharmacist Journal: Student Evidence Review of SATURN Study

Salman G, Stuart ME, Strite SA. The Study of Coronary Atheroma by Intravascular Ultrasound: Effect of Rosuvastatin versus Atorvastatin (SATURN) Study Evidence Review. California Pharmacist 2012; Vol. LIX, No. 3, Summer 2012. Available at our California Pharmacist journal page.

Delfini Thoughts on Attrition Bias

Significant attrition, whether due to loss of patients to follow-up, discontinuation or some other reason, is a reality of many clinical trials. And, of course, the key question in any study is whether attrition significantly distorted the study results. We've spent a lot of time researching the evidence-on-the-evidence and have found that many researchers, biostatisticians and others struggle with this area; there appears to be no clear agreement in the clinical research community about how best to address these issues. There is also inconsistent evidence on the effects of attrition on study results.

We, therefore, believe that studies should be evaluated on a case-by-case basis and doing so often requires sleuthing and sifting through clues along with critically thinking through the unique circumstances of the study.

The key question is, “Given that attrition has occurred, are the study results likely to be true?” It is important to look at the contextual elements of the study, which may include information about population characteristics, potential effects of the intervention and comparator, the outcomes studied (and whether patterns emerge), timing and setting. It is also important to examine the reasons for discontinuation and loss to follow-up, and to look at what data are missing and why, in order to assess the likely impact on results.
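
One common way to gauge that likely impact, offered here as our own illustration rather than a method prescribed by any particular study, is a best-case/worst-case sensitivity analysis: recompute the results with all missing participants imputed first as having the outcome and then as not, and see whether the conclusion survives at the extremes. A minimal sketch with hypothetical counts:

    def rate(events, n):
        return events / n

    # Hypothetical trial, 200 randomized per arm; "events" are bad outcomes
    # counted among completers, with some patients lost to follow-up.
    events_rx, lost_rx = 30, 20    # intervention arm
    events_ctl, lost_ctl = 50, 10  # comparator arm
    n = 200                        # randomized per arm

    # Best case for the intervention: no lost intervention patient had an
    # event and every lost comparator patient did.
    best = rate(events_rx, n) - rate(events_ctl + lost_ctl, n)
    # Worst case for the intervention: the reverse assumptions.
    worst = rate(events_rx + lost_rx, n) - rate(events_ctl, n)

    print(f"risk difference could range from {best:+.1%} to {worst:+.1%}")

If the direction of effect holds even at the unfavorable extreme, attrition is unlikely to have distorted the qualitative conclusion; if it flips or vanishes, as in the worst case here (0.0%), the result deserves exactly the case-by-case scrutiny described above.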

Attrition may or may not affect study outcomes depending, in part, upon the reasons for withdrawals and upon censoring rules and the effects of applying those rules. Differential attrition, however, should be examined especially closely. Unintended differences between groups are more likely when patients have not been allocated to their groups in a concealed fashion, when groups are not balanced at the outset of the study, when the study is not effectively blinded, or when an effect of the treatment itself has caused the attrition.

One piece of the puzzle, at times, may be whether prognostic characteristics remained balanced. Authors could help us all out tremendously by assessing the comparability of baseline characteristics between those randomized and those analyzed. However, an imbalance may be an important clue, too, because it might be informative about efficacy or side effects of the agent under study.

In general, we think it is important to attempt to answer the following questions:

Examining the contextual elements of a given study—

  • What could explain the results if it is not the case that the reported findings are true?
  • What conditions would have to be present for an opposing set of results (equivalence or inferiority) to be true instead of the study findings?
  • Were those conditions met?
  • If these conditions were not met, is there any reason to believe that the estimate of effect (size of the difference) between groups is not likely to be true?

Reliable Clinical Guidelines—Great Idea, Not-Such-A-Great Reality

Although clinical guideline recommendations about managing a given condition may differ, guidelines are, in general, considered to be important sources for individual clinical decision-making, protocol development, order sets, performance measures and insurance coverage. The Institute of Medicine (IOM) has created important standards that guideline developers should pay attention to:

  1. Transparency;
  2. Management of conflict of interest;
  3. Guideline development group composition;
  4. How the evidence review is used to inform clinical recommendations;
  5. Establishing evidence foundations for making strength-of-recommendation ratings;
  6. Clear articulation of recommendations;
  7. External review; and
  8. Updating.

Investigators recently evaluated 114 randomly chosen guidelines against a selection of the IOM standards and found poor adherence [Kung 12]. The overall median number of IOM standards satisfied was only 8 of 18 (44.4%). The group also found that subspecialty societies tended to satisfy fewer IOM methodological standards. This suggests that there has been no change in guideline quality over the past decade and a half, since an earlier study found similar results [Shaneyfelt 99]. This finding, of course, is likely to leave end-users uncertain as to how best to incorporate clinical guidelines into clinical practice and care improvements. Further, Kung's study found that few guideline groups included information scientists (individuals skilled in critical appraisal of the evidence to determine the reliability of results) and even fewer included patients or patient representatives.

An editorialist suggests that there are currently 5 things we need [Ransohoff 13]. We need:

1. An agreed-upon transparent, trustworthy process for developing ways to evaluate clinical guidelines and their recommendations.

2. A reliable method to express the degree of adherence to each IOM or other agreed-upon standard and a method for creating a composite measure of adherence.

From these two steps, we must create a “total trustworthiness score” that reflects adherence to all standards.

3. To accept that our current processes for developing trustworthy measures are a work in progress. Therefore, stakeholders must actively participate in accomplishing these 5 tasks.

4. To identify an institutional home that can sustain the process of developing measures of trustworthiness.

5. To develop a marketplace for trustworthy guidelines. Ratings should be displayed alongside each recommendation.

At this time, we have to agree with Shaneyfelt, who wrote an accompanying commentary to Kung’s study [Shaneyfelt 12]:

What will the next decade of guideline development be like? I am not optimistic that much will improve. No one seems interested in curtailing the out-of-control guideline industry. Guideline developers seem set in their ways. I agree with the IOM that the Agency for Healthcare Research and Quality (AHRQ) should require guidelines to indicate their adherence to development standards. I think a necessary next step is for the AHRQ to certify guidelines that meet these standards and allow only certified guidelines to be published in the National Guidelines Clearinghouse. Currently, readers cannot rely on the fact that a guideline is published in the National Guidelines Clearinghouse as evidence of its trustworthiness, as demonstrated by Kung et al. I hope efforts by the Guidelines International Network are successful, but until then, in guidelines we cannot trust.

References

1. IOM: Graham R, Mancher M, Wolman DM, et al; Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Board on Health Care Services. Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press; 2011. http://www.nap.edu/catalog.php?record_id=13058

2. Kung J, Miller RR, Mackowiak PA. Failure of Clinical Practice Guidelines to Meet Institute of Medicine Standards: Two More Decades of Little, If Any, Progress. Arch Intern Med. 2012 Oct 22:1-6. doi: 10.1001/2013.jamainternmed.56. [Epub ahead of print] PubMed PMID: 23089902.

3. Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA. 2013 Jan 9;309(2):139-40. doi: 10.1001/jama.2012.156703. PubMed PMID: 23299601.

4. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA. 1999 May 26;281(20):1900-5. PubMed PMID: 10349893.

5. Shaneyfelt T. In Guidelines We Cannot Trust: Comment on “Failure of Clinical Practice Guidelines to Meet Institute of Medicine Standards”. Arch Intern Med. 2012 Oct 22:1-2. doi: 10.1001/2013.jamainternmed.335. [Epub ahead of print] PubMed PMID: 23089851.

Quickly Finding Reliable Evidence

Good clinical recommendations for diagnostic and therapeutic interventions incorporate reliable published research evidence. Several online evidence-based textbooks are available to assist clinicians in making healthcare decisions. Long time lags in updating are a common problem for medical textbooks; online textbooks offer a solution to these delays.

For readers who plan to create decision support, we strongly recommend DynaMed [full disclosure: we are on the editorial board in an unpaid capacity, though a few years ago we did receive a small gift]. DynaMed is a point-of-care, evidence-based medical information database created by Brian S. Alper, MD, MSPH, FAAFP. It currently covers more than 30,000 clinical topics, which are updated frequently, and it continues to grow. DynaMed monitors the content of more than 500 medical journals and systematic evidence review databases, and each item is thoroughly reviewed for clinical relevance and scientific reliability. DynaMed has been compared with several products, including in a new review by McMaster University. The DynaMed website is https://dynamed.ebscohost.com/.

McMaster University maintains the Premium Literature Service (PLUS) database, a continuously updated, searchable database of primary studies and systematic reviews. Each article from over 120 high-quality clinical journals and evidence summary services is appraised by research staff for methodological quality, and articles that pass basic criteria are assessed by practicing clinicians in the corresponding discipline. Clinical ratings are based on 7-point scales, where clinical relevance ranges from 1 (“not relevant”) to 7 (“directly and highly relevant”) and newsworthiness ranges from 1 (“not of direct clinical interest”) to 7 (“useful information, most practitioners in my discipline definitely don’t know this”).

Investigators from McMaster evaluated four evidence-based textbooks: UpToDate, PIER, DynaMed and Best Practice [Jeffery 12]. For each, they determined the proportion of 200 topics that had subsequent articles in PLUS with findings different from those reported in the topics. They also evaluated the number of topics available in each textbook compared with the topic coverage in the PLUS database, and the recency of updates for these publications. A topic was deemed in need of an update if there was at least one newer article in PLUS providing information that differed from the topic's recommendations in the textbook.

Results

The proportion of topics with potential for updates was significantly lower for DynaMed than for the other three textbooks, which had statistically similar values. For DynaMed topics, updates had occurred on average 170 days before the study, while the other textbooks averaged from 427 to 488 days. When the direction of findings (beneficial, harmful, no effect) was investigated, DynaMed missed the fewest articles reporting benefit or no effect. The proportion of topics for which there were 1 or more recently published articles in PLUS with evidence that differed from the textbook’s treatment recommendations was 23% (95% CI 17 to 29%) for DynaMed, 52% (95% CI 45 to 59%) for UpToDate, 55% (95% CI 48 to 61%) for PIER, and 60% (95% CI 53 to 66%) for Best Practice (χ²(3) = 65.3, P < .001). The time since the last update averaged from 170 days (range 131 to 209) for DynaMed to 488 days (range 423 to 554) for PIER (P < .001 across all textbooks).
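
As a quick check on the arithmetic, the reported intervals are consistent with a simple normal-approximation (Wald) confidence interval for a binomial proportion; the sketch below reflects our assumption about the calculation, which the paper does not spell out here.

    import math

    def wald_ci(p, n, z=1.96):
        """95% normal-approximation (Wald) CI for a proportion."""
        se = math.sqrt(p * (1 - p) / n)
        return p - z * se, p + z * se

    # DynaMed: 23% of 200 topics flagged as potentially needing updates.
    lo, hi = wald_ci(0.23, 200)
    print(f"95% CI: {lo:.0%} to {hi:.0%}")  # about 17% to 29%, as reported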

Summary

Healthcare topic coverage varied substantially across these leading evidence-informed electronic textbooks, and a high proportion of the 200 common topics had potentially out-of-date conclusions and information missing from one or more recently published studies. PIER had the least topic coverage, while UpToDate, DynaMed and Best Practice covered similar, larger numbers of topics. DynaMed's updating timeline was the quickest, and it had by far the fewest articles in need of updating, indicating that quality was not sacrificed for speed.

Note: All textbooks have access to the PLUS database to facilitate updates, and also use other sources for updates such as clinical practice guidelines.

Conclusion

The proportion of topics with potentially outdated treatment recommendations in on-line evidence-based textbooks varies substantially.

Reference

Jeffery R, Navarro T, Lokker C, Haynes RB, Wilczynski NL, Farjou G. How current are leading evidence-based medical textbooks? An analytic survey of four online textbooks. J Med Internet Res. 2012 Dec 10;14(6):e175. doi: 10.2196/jmir.2105. PubMed PMID: 23220465.

Delfini Treatment Messaging Scripts™ Update

Delfini Messaging Scripts are scripts for scripts. Years ago we were asked by a consultant pharmacy group to come up with a method for creating concise evidence-based statements about various therapies. That's how we came up with our ideas for Messaging Scripts: targeted treatment messaging and decision support tools for specific clinical topics. Since working with that group, we have created a template and some sample scripts, which have been favorably received wherever we have shown them. The template is available at the link below, along with several samples. Samples recently updated: ACE Inhibitors, Alendronate, Sciatica (Low Back Pain), Statins (two scripts) and Venous Thromboembolism (VTE) Prevention in Total Hip and Total Knee Replacement.

http://www.delfini.org/page_SamePage_RxMessagingScripts.htm

Canadian Knowledge Translation Website

The Knowledge Translation (KT) Clearinghouse is a useful website for EBM information and tools. It is funded by the Canadian Institutes of Health Research (CIHR), with the goal of improving the quality of care by developing, implementing and evaluating strategies that bridge the knowledge-to-practice gap, and by researching the most effective ways to translate knowledge into action. Now added to our Delfini web links.

http://ktclearinghouse.ca/

Proton Beam Therapy For Prostate Cancer

As of this writing, there is insufficient evidence to conclude that proton beam therapy is more effective in treating prostate cancer than conventional radiation therapy, and there is no evidence of significant differences between the two in total serious adverse events. Readers may be interested in a recent article in which the investigators point out that patients diagnosed with prostate cancer who live in areas where proton beam therapy is readily available are more likely to be treated with this new technology than with conventional radiation therapy. The cost of treating prostate cancer with proton beam therapy can exceed $50,000 per patient, which is twice the cost of radiation therapy. Increasingly, we are seeing new technologies with staggering costs; in prostate cancer, for example, as we write this, proton centers are being built all over the country at a cost of up to $200 million.

Reference

Aaronson DS, Odisho AY, Hills N, Cress R, Carroll PR, Dudley RA, Cooperberg MR. Proton beam therapy and treatment for localized prostate cancer: if you build it, they will come. Arch Intern Med. 2012 Feb 13;172(3):280-3. PubMed PMID: 22332166.

Critical Appraisal Matters

Mike and I make it a practice to study the evidence on the evidence.  Doing effective critical appraisal to evaluate the validity and clinical usefulness of studies makes a difference.  This page on our website may be our most important one and we have now added a 1-page fact sheet for downloading: http://www.delfini.org/delfiniFactsCriticalAppraisal.htm
