Comparative Effectiveness Research (CER), “Big Data” & Causality


For a number of years now, we have been concerned that the CER movement and the growing love affair with “big data” will lead to many erroneous conclusions about cause and effect.  We were pleased to see the following posts from Austin Frakt, an editor-in-chief of The Incidental Economist (“Contemplating health care with a focus on research, an eye on reform”):

Ten impressions of big data: Claims, aspirations, hardly any causal inference

http://theincidentaleconomist.com/wordpress/ten-impressions-of-big-data-claims-aspirations-hardly-any-causal-inference/

+

Five more big data quotes: The ambitions and challenges

http://theincidentaleconomist.com/wordpress/five-more-big-data-quotes/


Comparison of Risk of Bias Ratings in Clinical Trials—Journal Publications Versus Clinical Study Reports


Many critical appraisers assess bias using tools such as the Cochrane risk of bias tool (Higgins 11) or tools freely available from us (http://www.delfini.org/delfiniTools.htm). Internal validity is assessed by evaluating important items such as generation of the randomization sequence, concealment of allocation, blinding, attrition and assessment of results.
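To make these items concrete, here is a minimal Python sketch of our own (purely illustrative; the domain names follow the items listed above, and the ratings shown are hypothetical, not from the Cochrane tool, the Delfini tools, or any actual trial) showing one way to record domain-level judgments for a trial from its journal publication and from its CSR and to see which judgments change:

```python
# Hypothetical illustration: record domain-level risk of bias judgments for one
# trial, once from the journal publication and once from the CSR, and list the
# domains whose rating changed. Ratings use the usual "low"/"unclear"/"high" scheme.

DOMAINS = [
    "sequence_generation",
    "allocation_concealment",
    "blinding",
    "attrition",
    "assessment_of_results",
]

def compare_ratings(publication: dict, csr: dict) -> dict:
    """Return the domains whose rating changed when the CSR was consulted."""
    return {
        domain: (publication[domain], csr[domain])
        for domain in DOMAINS
        if publication[domain] != csr[domain]
    }

# Example ratings (made up for illustration only):
publication_ratings = {
    "sequence_generation": "low",
    "allocation_concealment": "unclear",
    "blinding": "low",
    "attrition": "unclear",
    "assessment_of_results": "low",
}
csr_ratings = {
    "sequence_generation": "low",
    "allocation_concealment": "high",
    "blinding": "high",
    "attrition": "high",
    "assessment_of_results": "low",
}

print(compare_ratings(publication_ratings, csr_ratings))
# {'allocation_concealment': ('unclear', 'high'), 'blinding': ('low', 'high'),
#  'attrition': ('unclear', 'high')}
```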

Jefferson et al. recently compared the risk of bias in 14 oseltamivir trials using information from previous assessments based on the study publications and the newly acquired, more extensive clinical study reports (CSRs) obtained from the European Medicines Agency (EMA) and the manufacturer, Roche.

Key findings include the following:

  • Evaluations using the more complete information in the CSRs resulted in no change in the number of previous “high” risk of bias assessments.
  • However, over half (55%, 34/62) of the previous “low” risk of bias ratings were reclassified as “high.”
  • Most of the previous “unclear” risk of bias ratings (67%, 28/42) were changed to “high” risk of bias ratings when CSRs were available.

The authors note that risk of bias tools are important because they facilitate the critical appraisal of medical evidence. They also call for greater availability of CSRs as the basic unit for critical appraisal.

Delfini Comment

We believe that both sponsors and researchers need to provide more study detail so that critical appraisers can provide more precise ratings of risk of bias. Study publications frequently lack information needed by critical appraisers.

We agree that CSRs should be made available so that critical appraisers can use them to improve their assessments of clinical trials.  However, our experience has been the opposite of the authors’.  When companies have invited us to work with them to assess the reliability of their studies and have made CSRs available to us, we have frequently found important information not otherwise available in the study publication.  When this happens, studies that would otherwise have been rated as at higher risk of bias have often turned out to be at low risk of bias and of high quality.

References

1. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA; Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011 Oct 18;343:d5928. doi: 10.1136/bmj.d5928. PubMed PMID: 22008217.

2. Jefferson T, Jones MA, Doshi P, Del Mar CB, Hama R, Thompson MJ, Onakpoya I, Heneghan CJ. Risk of bias in industry-funded oseltamivir trials: comparison of core reports versus full clinical study reports. BMJ Open. 2014 Sep 30;4(9):e005253. doi: 10.1136/bmjopen-2014-005253. PubMed PMID: 25270852.


Medical Literature Searching Update


We’ve updated our searching tips.  You can download our Searching the Medical Literature Tool, along with other freely available tools, from our library of Tools & Educational Materials by Delfini:

http://www.delfini.org/delfiniTools.htm

1. Quick Way To Find Drug Information On The FDA Site

If you are looking for information about a specific drug (e.g., a drug recently approved by the FDA), it may be faster to use Google to find the information you want. Type “FDA [drug name]”.
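As a small illustration of this tip, the sketch below (ours, not an FDA or Google tool) simply assembles the suggested “FDA [drug name]” query as a Google search URL; the drug name used is just an example:

```python
# Minimal sketch: build a Google search URL for the suggested "FDA [drug name]"
# query. The drug name here is only an example.
from urllib.parse import urlencode

def fda_google_search_url(drug_name: str) -> str:
    query = f"FDA {drug_name}"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(fda_google_search_url("oseltamivir"))
# https://www.google.com/search?q=FDA+oseltamivir
```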

2. Also see Searching With Symbols in the tool.

 


Quickly Finding Reliable Evidence


Good clinical recommendations for various diagnostic and therapeutic interventions incorporate evidence from reliable published research. Several online evidence-based textbooks are available to assist clinicians in making healthcare decisions. Long time lags in updating are a common problem for medical textbooks; online textbooks offer a solution to these delays.

For readers who plan to create decision support, we strongly recommend DynaMed [full disclosure: we are on the editorial board in an unpaid capacity, though a few years ago we did receive a small gift]. DynaMed is a point-of-care, evidence-based medical information database created by Brian S. Alper MD, MSPH, FAAFP. It continues to grow and currently covers more than 30,000 clinical topics that are updated frequently. DynaMed monitors the content of more than 500 medical journals and systematic evidence review databases, and each item is thoroughly reviewed for clinical relevance and scientific reliability. DynaMed has been compared with several similar products, including in a recent review from McMaster University (discussed below). The DynaMed website is https://dynamed.ebscohost.com/.

McMaster University maintains the Premium Literature Service (PLUS) database, a continuously updated, searchable database of primary studies and systematic reviews. Each article from more than 120 high-quality clinical journals and evidence summary services is appraised by research staff for methodological quality, and articles that pass basic criteria are assessed by practicing clinicians in the corresponding discipline.  Clinical ratings use 7-point scales: clinical relevance ranges from 1 (“not relevant”) to 7 (“directly and highly relevant”), and newsworthiness ranges from 1 (“not of direct clinical interest”) to 7 (“useful information, most practitioners in my discipline definitely don’t know this”).

Investigators from McMaster evaluated four evidence-based textbooks: UpToDate, PIER, DynaMed and Best Practice [Jeffery 12].  For each textbook they determined the proportion of 200 topics that had subsequent articles in PLUS with findings different from those reported in the topic. They also compared the number of topics available in each textbook with the topic coverage of the PLUS database, and they assessed how recently each textbook had been updated.  A topic was considered in need of an update if at least one newer article in PLUS provided information that differed from the topic’s recommendations in the textbook.
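To make the PLUS rating scales and the updating criterion concrete, here is a minimal Python sketch of our own; the class and field names are hypothetical illustrations, not the actual PLUS or study data structures:

```python
# Minimal sketch (our own illustration): model a PLUS article's 7-point ratings
# and the "potentially needs update" criterion described above.
from dataclasses import dataclass
from typing import List

@dataclass
class PlusArticle:
    relevance: int            # 1 = "not relevant" ... 7 = "directly and highly relevant"
    newsworthiness: int       # 1 = "not of direct clinical interest" ... 7 = "most practitioners don't know this"
    differs_from_topic: bool  # newer findings differ from the textbook topic's recommendations

def topic_needs_update(newer_articles: List[PlusArticle]) -> bool:
    """A topic is potentially out of date if at least one newer PLUS article
    reports findings that differ from the topic's recommendations."""
    return any(a.differs_from_topic for a in newer_articles)

# Hypothetical example:
articles = [
    PlusArticle(relevance=6, newsworthiness=5, differs_from_topic=False),
    PlusArticle(relevance=7, newsworthiness=6, differs_from_topic=True),
]
print(topic_needs_update(articles))  # True
```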

Results

The proportion of topics with potential for updates was significantly lower for DynaMed than for the other three textbooks, which had statistically similar values. DynaMed topics had been updated an average of 170 days prior to the study, while the other textbooks averaged 427 to 488 days. Of all the textbooks, DynaMed missed the fewest articles reporting benefit or no effect when the direction of findings (beneficial, harmful, no effect) was examined. The proportion of topics with one or more recently published articles in PLUS reporting evidence that differed from the textbook’s treatment recommendations was 23% (95% CI 17 to 29%) for DynaMed, 52% (95% CI 45 to 59%) for UpToDate, 55% (95% CI 48 to 61%) for PIER, and 60% (95% CI 53 to 66%) for Best Practice (χ² = 65.3, 3 df, P<.001). The time since the last update averaged 170 days (range 131 to 209) for DynaMed and up to 488 days (range 423 to 554) for PIER (P<.001 across all textbooks).
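For readers who like to check the arithmetic, the sketch below roughly reproduces the reported figures from counts back-calculated out of the 200 topics per textbook; the counts, the normal-approximation confidence intervals, and the resulting chi-square value are our approximations, not the authors’ exact analysis:

```python
# Minimal sketch (our own check): approximate the reported proportions, 95% CIs
# and chi-square statistic from counts back-calculated out of 200 topics each.
from math import sqrt

counts = {"DynaMed": 46, "UpToDate": 104, "PIER": 110, "Best Practice": 120}
n = 200  # topics evaluated per textbook

for name, k in counts.items():
    p = k / n
    se = sqrt(p * (1 - p) / n)           # normal-approximation (Wald) interval
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"{name}: {p:.0%} (95% CI {lo:.0%} to {hi:.0%})")

# Chi-square test of independence on the 4x2 table (needs update vs. not),
# 3 degrees of freedom; the result lands near the reported 65.3 (the small
# difference comes from rounding in the back-calculated counts).
col_updates = sum(counts.values())   # topics flagged across the four textbooks
col_no = 4 * n - col_updates         # topics not flagged
chi2 = 0.0
for k in counts.values():
    for observed, col_total in ((k, col_updates), (n - k, col_no)):
        expected = n * col_total / (4 * n)  # row total x column total / grand total
        chi2 += (observed - expected) ** 2 / expected
print(f"chi-square (3 df) ≈ {chi2:.1f}")
```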

Summary

Healthcare topic coverage varied substantially across these leading evidence-based electronic textbooks, and a high proportion of the 200 common topics had potentially out-of-date conclusions and missing information from one or more recently published studies. PIER had the least topic coverage, while UpToDate, DynaMed, and Best Practice covered similar, larger numbers of topics. DynaMed had the quickest updating timeline and by far the fewest topics in need of updating, indicating that quality was not sacrificed for speed.

Note: All textbooks have access to the PLUS database to facilitate updates, and also use other sources for updates such as clinical practice guidelines.

Conclusion

The proportion of topics with potentially outdated treatment recommendations in on-line evidence-based textbooks varies substantially.

Reference

Jeffery R, Navarro T, Lokker C, Haynes RB, Wilczynski NL, Farjou G. How current are leading evidence-based medical textbooks? An analytic survey of four online textbooks. J Med Internet Res. 2012 Dec 10;14(6):e175. doi: 10.2196/jmir.2105. PubMed PMID: 23220465.

 

 


Canadian Knowledge Translation Website


The Knowledge Translation (KT) Clearinghouse is a useful website for EBM information and tools. It is funded by the Canadian Institutes of Health Research (CIHR), with the goal of improving the quality of care by developing, implementing and evaluating strategies that bridge the knowledge-to-practice gap, and by researching the most effective ways to translate knowledge into action. We have now added it to the Delfini web links.

http://ktclearinghouse.ca/
