Evidence-based Medicine: DelfiniClick™

A Cool Click for Evidence-based Medicine (EBM) and Evidence-based Practice (EBP) Commentaries & Health Care Quality Improvement Nibblets

The EBM Information Quest: Is it true? Is it useful? Is it usable?™

 

Validity Detectives: Michael E. Stuart, MD, President & Medical Director; Sheri Ann Strite, Managing Director & Principal

Evidence & Quality Improvement Commentaries

 


Volume — Use of Evidence: Reporting the Evidence

Newest
02/26/2014: Estimating Relative Risk Reduction from Odds Ratios

Contents

Go to DelfiniClick™ for all volumes.

The CONSORT Statement: Consolidated Standards of Reporting Trials—CONSORT: Update 2010
06/28/2010

CONSORT comprises a 25-item checklist and flow diagram to help improve the quality of reports of randomized controlled trials. It provides guidance for reporting all randomized controlled trials, but focuses on the most common design type—individually randomized, two-group, parallel trials. It offers a standard way for researchers to report trials. The checklist was created in 1996, updated in 2001, and now (June 2010) has been updated again. In this update, the authors request more explicit information about concealment of allocation and blinding, and they replaced mention of “intention to treat” analysis, a widely misused term, with a more explicit request for information about retaining participants in their originally assigned groups.

CONSORT specifies items, selected based on evidence, that need to be addressed in the report. Its recommended flow diagram provides readers with a clear picture of the progress of all participants in the trial, from the time they are randomized until the end of their involvement. The intent is to make the experimental process clear, flawed or not, so that users of the data can more appropriately evaluate its validity for their purposes. More details, including a copy of the flow diagram, are available at http://www.consort-statement.org/

CONSORT Update of Abstract Guidelines 2012
07/10/2012

We have previously described the rationale and details of the CONSORT Statement: Consolidated Standards of Reporting Trials.[1] In brief, CONSORT is a checklist, based on evidence, of 25 items that need to be addressed in reports of clinical trials in order to provide readers with a clear picture of study quality and of the progress of all participants in the trial, from the time they are randomized until the end of their involvement. The intent is to make the experimental process clear, flawed or not, so that users of the data can more appropriately evaluate the validity of the study and the usefulness of the results. A recent BMJ study assessed the use of CONSORT guidelines for abstracts in five top journals—JAMA, New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), Lancet and the Annals of Internal Medicine.[2]

In this study, the authors checked each journal’s instructions to authors in January 2010 for any reference to the CONSORT for Abstracts guidelines (for example, reference to a publication or link to the relevant section of the CONSORT website). For those journals that mentioned the guidelines in their instructions to authors, they contacted the editor of that journal to ask when the guidance was added, whether the journal enforced the guidelines, and if so, how. They classified journals in three categories: those not mentioning the CONSORT guidelines in their instructions to authors (JAMA and NEJM); those referring to the guidelines in their instructions to authors, but with no specific policy to implement them (BMJ); and those referring to the guidelines in their instructions to authors, with a policy to implement them (Annals of Internal Medicine and the Lancet).

First surprise—JAMA and NEJM don’t even mention CONSORT in their instructions to authors. Second surprise—CONSORT published what evidologists agree to be reasonable abstract requirements in 2008, but only the Annals and Lancet now instruct authors to follow them. The study evaluated the inclusion of the nine CONSORT items omitted more than 50% of the time from abstracts (details of the trial design, generation of the allocation sequence, concealment of allocation, details of blinding, number randomized and number analyzed in each group, primary outcome results for each group and the effect size, harms data and funding source). The primary outcome was the mean number of CONSORT items reported in selected abstracts, among the nine items reported in fewer than 50% of the abstracts published across the five journals in 2006. Overall, for the primary outcome, publication of the CONSORT guidelines did not lead to a significant increase in the level of the mean number of items reported (increase of 0.3035 of nine items, P=0.16) or in the trend (increase of 0.0193 items per month, P=0.21). There was a significant increase in the level of the mean number of items reported after the implementation of the CONSORT guidelines (increase of 0.3882 of five items, P=0.0072) and in the trend (increase of 0.0288 items per month, P=0.0025).

What follows is not really surprising—

  • After publication of the guidelines in January 2008, the authors identified a significant increase in the reporting of key items in the two journals (Annals of Internal Medicine and Lancet) that endorsed the guidelines in their instructions to authors and that had an active editorial policy to implement them. At baseline, in January 2006, the mean number of items reported per abstract was 1.52 of nine items, which increased to 2.56 of nine items during the 25 months before the intervention. In December 2009, 23 months after the publication of the guidelines, the mean number of items reported per abstract for the primary outcome in the Annals of Internal Medicine and the Lancet was 5.41 items, which represented a 53% increase compared with the expected level estimated on the basis of pre-intervention trends.
  • The authors observed no significant difference in the one journal (BMJ) that endorsed the guidelines but did not have an active implementation strategy, or in the two journals (JAMA, NEJM) that did not endorse the guidelines in their instructions to authors.

What this study shows is that, without actively implemented editorial policies (i.e., requiring the use of CONSORT guidelines), improved reporting does not happen. A rather surprising finding for us was that only two of the five top journals included in this study have active implementation policies (e.g., an email to authors at time of revision requiring revision of the abstract according to CONSORT guidance). We have a long way to go.

More details about CONSORT, including a copy of the flow diagram, are available at http://www.consort-statement.org/

References

1. http://www.delfini.org/delfiniClick_ReportingEvidence.htm#consort

2. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012;344:e4178.

Estimating Relative Risk Reduction from Odds Ratios
02/26/2014

Odds are hard to work with because they are the likelihood of an event occurring compared to not occurring—e.g., odds of two to one mean that the likelihood of an event occurring is twice that of it not occurring. Contrast this with probability, which is simply the likelihood of an event occurring.
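
The two are easy to interconvert: odds = probability / (1 - probability), and probability = odds / (1 + odds). Odds of two to one (2.0) correspond to a probability of 2/3, or about 67%.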

An odds ratio (OR) is a point estimate used in case-control studies which attempts to quantify a mathematical relationship between an exposure and a health outcome. Odds must be used in case-control studies because the investigator arbitrarily sets the size of the case and control groups; probability cannot be determined because the true disease rates in the study population cannot be known. The odds that a case was exposed to a certain variable are divided by the odds that a control was exposed to that same variable.

Odds are often used in other types of studies as well, such as meta-analyses, because various mathematical properties of odds make them easy to work with. However, authors are increasingly discouraged from computing odds ratios in secondary studies because of the difficulty of translating what an odds ratio actually means in terms of the size of benefits or harms to patients.

Readers frequently attempt to deal with this by converting the odds ratio into a relative risk reduction, treating the odds ratio as if it were a relative risk. Relative risk reduction (RRR) is computed from relative risk (RR) by subtracting the relative risk from one and expressing the result as a percentage (RRR = 1 - RR).

Some experts advise readers that this is safe to do if the prevalence of the event is low. While it is true that odds and probabilities of outcomes are usually similar when the event rate is low, we recommend, when possible, calculating both the odds ratio reduction and the relative risk reduction in order to compare them and determine whether the difference is clinically meaningful. Determining whether something is clinically meaningful is a judgment, and therefore whether a conversion of OR to RRR is distorted depends in part upon that judgment.

a = group 1, outcome occurred
b = group 1, outcome did not occur
c = group 2, outcome occurred
d = group 2, outcome did not occur

OR = (a/b) / (c/d)
Estimated RRR from OR (odds ratio reduction) = 1 - OR

RR = (a / group 1 n) / (c / group 2 n)
RRR = 1 - RR
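
Here is a minimal sketch of these calculations in Python, using hypothetical counts chosen only for illustration:

  # Hypothetical 2x2 table: 100 patients per group
  a, b = 10, 90   # group 1: outcome occurred / did not occur
  c, d = 20, 80   # group 2: outcome occurred / did not occur
  n1, n2 = a + b, c + d

  odds_ratio = (a / b) / (c / d)        # (10/90) / (20/80) = 0.444
  relative_risk = (a / n1) / (c / n2)   # (10/100) / (20/100) = 0.500

  print(f"1 - OR = {1 - odds_ratio:.1%}")    # 55.6% ("RRR" estimated from the OR)
  print(f"1 - RR = {1 - relative_risk:.1%}") # 50.0% (actual RRR)

Even with event rates of 10% and 20%, the two estimates differ (55.6% vs. 50%); the gap narrows as events become rarer, which is why the conversion is considered safer at low prevalence.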

Pointers on Number-Needed-to-Treat
09/04/2011

Recently someone asked us to provide some pointers on number-needed-to-treat (NNT). NNT is the number of people who need to be treated for one person to benefit. Related measures are number-needed-to-harm (NNH), number-needed-to-screen (NNS), number-needed-to-prevent (NNP), etc. Because patients may misinterpret a high NNT as being preferable, we favor giving patients results information as risk with and without treatment, absolute risk reduction, and natural frequencies such as 5 out of 100. That said, here are some tips on NNT.

  • You should only calculate NNT for studies that pass a rigorous critical appraisal as being reliable—otherwise it is misleading, as NNT implies benefit, and we can only reasonably assume benefit in a valid trial. NNT calculations should only be applied to findings from RCTs or all-or-none observations, which are likely to be reliable, as generally only these study types should be used for determining cause and effect for efficacy. NNH can be applied to observational studies for safety issues.
  • NNT can be used for any endpoint that is dichotomous. If variables are not reported as dichotomous (such as continuous variables), sometimes you can make choices to dichotomize them (defining success or failure, for example) and apply NNT that way.
  • NNT and NNH (etc.) are reciprocals of ARR and ARI. To compute them, convert the ARR or ARI from a percentage to a proportion and take the reciprocal; equivalently, divide the percentage into 100. With an ARR of 5%, 5 goes into 100 twenty times, so the NNT would be 20 (see the sketch after this list).
  • The numbers should always be rounded up to whole numbers.
  • They should always be expressed with the time period associated with them in the study.
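
A minimal sketch of the arithmetic, using hypothetical event rates:

  import math

  # Hypothetical event rates: 15% with control, 10% with treatment
  arr_percent = 15 - 10                # ARR = 5 percentage points
  nnt = math.ceil(100 / arr_percent)   # divide the ARR into 100, rounding up

  print(f"ARR = {arr_percent}%, NNT = {nnt}")   # ARR = 5%, NNT = 20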

Improving Results Reporting in Clinical Trials: Case Study—Time-to-Event Analysis and Hazard Ratio Reporting Advice
03/20/2012

We frequently see clinical trial abstracts—especially those using time-to-event analyses—that are not well understood by readers. Here is a fictional example for illustrative purposes:

In a 3-year randomized controlled trial (RCT) of drug A versus placebo in women with advanced breast cancer, the investigators presented their abstract results in terms of relative risk reduction for death (19%) along with the hazard ratio (hazard ratio = 0.76, 95% confidence interval [CI] 0.56 to 0.94, P = 0.04). They also stated that, “This reduction represented a 5-month improvement in median survival (24 months in the drug A group vs. 19 months in the placebo group).” Following this information, the authors stated that the three-year survival probability was 29% in the drug A group versus 21% in the placebo group.

Many readers do not understand hazard ratios and will conclude that a 5-month improvement in median survival is not clinically meaningful. We believe it would have been more useful to present the mortality information (which authors frequently present in the results section, but which is not easily found by many readers).

A much more meaningful abstract statement would go something like this: “After 3 years, the overall mortality was 59% in the drug A group compared with 68% in the placebo group, which represents an absolute risk reduction (ARR) of 9%, P=0.04, number needed to treat (NNT) 11.” This information is much more impressive and much more easily understood than a 5-month increase in median survival, and it uses statistics familiar to clinicians.
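
The arithmetic behind that rewritten statement, using the fictional figures from the example:

  placebo_mortality = 0.68   # fictional 3-year mortality, placebo group
  drug_a_mortality = 0.59    # fictional 3-year mortality, drug A group

  arr = placebo_mortality - drug_a_mortality   # 0.09, i.e., 9 percentage points
  nnt = 1 / arr                                # about 11 patients treated for 3 years

  print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")   # ARR = 9%, NNT = 11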

Some Points About Surrogate Outcomes Courtesy of Steve Simon PhD
05/17/2012

Our experience is that most healthcare professionals have difficulty understanding the appropriate place of surrogate outcomes (aka intermediate outcome measures, proxy markers, or intermediate or surrogate markers, etc.). For a very nice, concise round-up of some key points, you can read Steve Simon’s short review. Steve has a PhD in statistics and many years of experience in teaching statistics. http://www.pmean.com/news/201203.html#1

Untrustable P-values & Abstracts

One of the first things we teach our EBM learners is that although abstracts can be useful for getting a sense of what an article is about and can at times be used to exclude studies from further review, abstracts cannot reliably be used to determine if a study is valid.

Validity must be determined by examining the methods of the study (assuming it is the right study type). A little-known problem with abstracts is that the information provided in the abstract cannot be documented in the body of the paper up to 68% of the time in some of the top-tier medical journals [Pitkin RM, et al. Accuracy of Data in Abstracts of Published Research Articles. JAMA. 1999;281:1110-1111. PMID: 10188662—reviewing JAMA, NEJM, the Lancet, the Annals of Internal Medicine, BMJ and the Canadian Medical Association Journal]. In this DelfiniClick we report another problem with abstracts—the problem of bias.

Peter C. Gøtzsche, in a BMJ article (Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ 2006;333:231-234; PMID: 16854948), reviews previous publications reporting biased reporting of results and conclusions, and he presents additional evidence of bias in reporting P values.

We do not have the expertise to evaluate all the points made in his paper; however, we present his comments and findings here for you to evaluate and draw your own conclusions. Although we believe the assumptions upon which Gøtzsche bases his conclusions can be challenged, the following should be of interest to anyone interested in critical appraisal of the medical literature.

Gøtzsche’s Comments

  • Significant results in abstracts should generally be disbelieved
  • Ongoing research has shown that more than 200 statistical tests are sometimes specified in trial protocols. If you compare a treatment with itself—that is, the null hypothesis of no difference is known to be true—the chance that one or more of 200 tests will be statistically significant at the 5% level is 99.996% if we assume the tests are independent (see the check after this list)
  • Thus, the investigators or sponsor can be fairly confident that “something interesting will turn up.”
  • Due allowance for multiple testing is rarely made, and it is generally not possible to discern reliably between primary and secondary outcomes
  • Recent studies that compared protocols with trial reports have shown selective publication of outcomes, depending on the obtained P values, and that at least one primary outcome was changed, introduced, or omitted in 62% of the trials.
  • The scope for bias is also large in observational studies. Many studies are underpowered and do not give any power calculations.
  • Furthermore, a survey found that 92% of articles adjusted for confounders and reported a median of seven confounders, but most did not specify whether they were pre-declared.
  • Fourteen per cent of these articles reported more than 100 effect estimates, and subgroup analyses appeared in 57% of studies and were generally believed.
  • The preponderance of significant results could be reduced if the following actions were taken.
    • First, if we need a conventional significance level at all, which is doubtful, it should be set at P < 0.001
    • Second, analysis of data and writing of manuscripts should be done blind, hiding the nature of the interventions, exposures, or disease status, as applicable, until all authors have approved the two versions of the text
    • Third, journal editors should scrutinize abstracts more closely and demand that research protocols and raw data—both for randomized trials and for observational studies—be submitted with the manuscript.
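
A one-line check of the multiple-testing arithmetic in the second bullet above, assuming 200 independent tests at the 5% level:

  # Chance that at least one of 200 independent tests is "significant"
  # at the 5% level when no true difference exists:
  print(f"{1 - 0.95 ** 200:.3%}")   # 99.996%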

In short, yet another reminder to read the methods section of papers and not rely on results or conclusions presented in abstracts.

Gøtzsche’s Findings in Brief

  • The first result in the abstract was statistically significant in 70% of the trials, 84% of cohort studies and 84% of case-control studies. Although many of these results were derived from subgroup or secondary analyses, or biased selection of results, they were presented without reservations in 98% of the trials
  • The distribution of P values in the studies he reviewed in the interval 0.04 to 0.06 was extremely skewed
  • The number of P values in the interval 0.05 <= P < 0.06 would be expected to be similar to the number in the interval 0.04 <= P < 0.05, but he found five in the first interval compared with 46 in the second, which is highly unlikely to occur (P < 0.0001) if researchers are unbiased when they analyze and report their data (see the check after this list).
  • The distribution of P values between 0.04 and 0.06 was even more extreme for the observational studies he reviewed
    • Nine cohort studies and eight case-control studies gave P values in this interval, but in all 17 cases P values were presented as < 0.05
  • One of the nine cohort studies and two of the eight case-control studies gave a confidence interval where one of the borders was touching one; in all three studies, this was interpreted as a positive finding, although in one this seemed to be the only positive result out of six time periods the authors had reported.
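
A rough check of how unlikely that 46-to-5 split is: if each of the 51 P values between 0.04 and 0.06 were equally likely to fall in either interval (a simplifying assumption for illustration), the count in one interval would follow a binomial distribution with p = 0.5:

  from math import comb

  n, k = 51, 5   # 51 P values in total; only 5 fell in the 0.05-0.06 interval
  p_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
  print(f"P(5 or fewer in one interval) = {p_tail:.1e}")   # about 1.2e-09, far below 0.0001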

The Cost of Being in a Hurry: Reading Only Abstracts May Mislead You
06/01/2009

Imagine that you are participating in a Pharmacy & Therapeutics Committee meeting. The committee has pharmacist support, but lacks pharmacist staffers who are dedicated to systematically searching for studies and then critically appraising them for validity and clinical usefulness — therefore, committee decisions are usually made through opinions of committee members who refer to studies they wish to emphasize.

On the agenda is tiotropium, indicated for the long-term, once-daily, maintenance treatment of bronchospasm associated with chronic obstructive pulmonary disease (COPD), including chronic bronchitis and emphysema.

At the meeting, one of your colleagues opens an issue of the New England Journal of Medicine he pulled in preparation for the meeting and quotes their findings reported in the abstract that, “At 4 years and 30 days, tiotropium was associated with a reduction in the risks of exacerbations, related hospitalizations, and respiratory failure.”[1] The committee approves adding the agent to the formulary on the basis of this information.

Wayne Flicker, MD, Internist and Geriatrician at Healthcare Partners, points out that abstracts can be misleading, even in "good journals." A closer read of the study and a quick view of the results tables show that what the authors actually REPORT in the body of the text is the time to first hospitalization. They do NOT report a decreased number of hospitalizations, hospital bed-days or number of people hospitalized.

And what can we learn from this? First, it is another reminder that frequently information in an abstract cannot be verified in the body of a text. It may, in fact, be totally contradictory.[2] Second, if the results of a study seem to be worthwhile, it is worth checking the body of the text to verify claims found in the abstract. Third, if those claims can be verified and they still seem important enough to change practice, it is vitally important to critically appraise the source for validity.

A final reminder about looking at results: most published scientific studies are not valid (valid being defined as probably “true”), even those published in journals with the best reputations. Delfini estimates, from our experience, that only 10% or less of scientific studies are valid and clinically useful, even in the best medical journals. Others have estimated that less than 5 percent, or even as little as 1 percent, of the literature is valid and clinically useful.[3], [4], [5] Therefore, we do not consider the results of studies until AFTER we have determined that a study is valid.

Delfini thanks Dr. Wayne Flicker for his insightful contribution.


[1] Tashkin DP, Celli B, Senn S, Burkhart D, Kesten S, Menjoge S, Decramer M; UPLIFT Study Investigators. A 4-year trial of tiotropium in chronic obstructive pulmonary disease. N Engl J Med. 2008 Oct 9;359(15):1543-54. Epub 2008 Oct 5. PMID: 18836213

[2] Pitkin RM, Branagan MA, Burmeister LF. Accuracy of Data in Abstracts of Published Research Articles. JAMA. 1999; 281: 1110-1111. [PMID 10188662]

[3] Ebell M. An introduction to information mastery, July 15, 1998. http://www.poems.msu.edu/InfoMastery/default.htm. Accessed December 21, 2007.

[4] David Eddy (personal communication)

[5] The Institute of Medicine concluded that it was plausible that only 4 percent of interventions used in health care have strong evidence to support them. Field MJ, Lohr KN, eds. Guidelines for Clinical Practice: From Development to Use. Washington, DC: National Academies Press; 1992.

Help With Understanding "Patient Years"
09/11/2013

What are Patient-Years?
A participant at one of our recent conferences asked a good question—“What are patient-years?”

“Person-years” (or “patient-years”) is a way of expressing incidence rates: the number of events is divided by the total time participants were at risk, summed across all participants. In many studies, the length of exposure to the treatment differs between subjects, and the patient-year statistic is one way of dealing with this issue.

Events per patient-year(s) is the number of incident cases divided by the amount of person-time at risk. When all patients are followed for the same period, the denominator can be calculated by multiplying the number of patients in the group by the number of years those patients are in the study; then divide the number of events (numerator) by that denominator.

  • Example: 100 patients are followed for 2 years. In this case, there are 200 patient-years of follow-up.
  • If there were 8 myocardial infarctions in the group, the rate would be 8 MIs per 200 patient years or 4 MIs per 100 patient-years.

The rate can be expressed in various ways, e.g., per 100, 1,000, 100,000, or 1 million patient-years. In some cases, authors report the average follow-up period as the mean and others use the median, which may result in some variation in results between studies.

Another example: assume a study with four patients followed for 1, 2, 3, and 4 years respectively, with one event occurring at 1 year and one at 4 years, but no events at years 2 and 3. This information can be expressed as 2 events per 10 (1+2+3+4=10) patient-years, or an event rate of 0.2 per patient-year.
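
A minimal sketch of both calculations above:

  # Example 1: 100 patients each followed for 2 years, 8 MIs observed
  patient_years = 100 * 2                                # 200 patient-years
  rate_per_100 = 8 / patient_years * 100
  print(f"{rate_per_100:g} MIs per 100 patient-years")   # 4 MIs per 100 patient-years

  # Example 2: unequal follow-up; each patient contributes their own time at risk
  follow_up_years = [1, 2, 3, 4]   # four patients; events in the 1-year and 4-year patients
  rate = 2 / sum(follow_up_years)
  print(f"{rate} events per patient-year")               # 0.2 events per patient-year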

An important issue is that frequently the timeframe of observation in studies reporting patient-years does not match the timeframe stated in the study’s conclusions. Brian Alper of DynaMed explains it this way: “If I observed a million people for 5 minutes each and nobody died, any conclusion about mortality over 1 year would be meaningless. This problem occurs whether or not we translate our outcome into a patient-years measure. The key in critical appraisal is to catch the discrepancy between timeframe of observation and timeframe of conclusion and not let the use of ‘patient-years’ mistranslate between the two or represent an inappropriate extrapolation.”[1]

References

1. Personal communication 9/3/13 with Brian S. Alper, MD, MSPH, FAAFP, Editor-in-Chief, DynaMed, Medical Director, EBSCO Information Services.

CONSORT Statement on Harms

One of the main reasons for using valid, relevant evidence in health care is to more accurately predict outcomes from various interventions and thus be equipped to make informed choices. The area of harms has always been problematic: the terminology used in the literature varies greatly, adverse events are frequently rare, and they are often detected by observational means long after a drug or intervention has become standard of care. Searching for and finding adverse events may also require a separate search after finding quality evidence regarding benefit.


CONSORT (Consolidated Standards of Reporting Trials) is a checklist aimed at standardizing published reports of RCTs, but the checklist contained only one item dealing with harms. Now the CONSORT group is adding a number of items dealing with harms to the checklist.


Ioannidis JP, Evans SJ, Gøtzsche PC, et al., for the CONSORT Group, have published in the Annals of Internal Medicine an article titled “Better Reporting of Harms in Randomized Trials: An Extension of the CONSORT Statement” (16 November 2004; Volume 141; Issue 10; Pages 781-788). The group made 10 new recommendations about reporting harms-related issues (e.g., listing adverse events with definitions, stating in the title and abstract that the study collected data about harms), along with examples to highlight specific aspects of proper reporting.


The 2001 CONSORT Statement (without this update) is available at http://www.consort-statement.org. Hopefully the new items dealing with harms will help authors improve their reporting and help users find harms-related data.

When there is No Evidence: One Perspective...

The Delfini definition of evidence-based medicine is simply the use of the scientific method and application of valid and useful science to inform health care provision, practice, evaluation and decisions. To do this, you look first to the evidence and then work to assess its scientific quality, usefulness and application. Failing useful evidence, you then have to make choices based on an assortment of what we refer to as “other triangulation issues,” which include patient preferences, community standards, legal considerations, publicity, and so forth.

Aron Sousa, MD, from the Department of Medicine at Michigan State University writes:

“I have a question about the very bottom of the evidence hierarchy. Most of my work as an educator and clinician deals with issues at the top of the evidence hierarchy, but of late I have become involved in a clinical area with no high level and little low level clinical evidence. I am an internist who has begun to care for adult patients who were born with ambiguous genitalia (intersex conditions). Most of these people underwent (and many children still undergo) surgeries designed to "normalize" the appearance of their genitals (we are not talking about urinary, sexual, or reproductive function). In terms of the available evidence, the intellectual basis of the surgeries (children with abnormal genitals become abnormal adults) is based on a fraudulent case study (John-Joan), there is no evidence of a need for these surgeries, there are a series of poorly done case series of short-term surgical outcomes, and there is a whole host of expert opinions and published MGSATs (multiple guys sitting around together). When pressed for justification, surgeons (and parents) tend to fall back to fears of future schoolyard and locker room bullying and harassment.

In general I'd say that you have to do the best you can with the evidence you have, but here is the thing. The adult patient reports of their treatment are horrific and impressive in their volume and consistency. Multiple scholars and reporters have looked for patients happy with their treatment and not found one -- not one, not even one who is happy but not willing to go public. In truth finding such a patient is a bit hard to do since a successfully treated patient would have been lied to and would not know of their condition. (There are clearly ethical problems as well.) Independent patient report does not make most hierarchies of evidence but in the Internet era is one of the most prevalent data reports we have.

In this situation there are patient opinions on the value of surgery that are nearly unanimous but uncontrolled and self selecting vs. experts with little intellectual or ethical standing. How can EBM help me deal with this? No fair punting and suggesting I get better data."

Our reaction is this:

We would consider the reports from patients to be "evidence" as well -- and of "uncertain" quality as is the "evidence" from the experts and for all the excellent reasons Dr. Sousa has raised.

"How EBM can help" is simply to say that you strive to see if valid and useful scientific information can reduce your uncertainty. At this point, with the available information, the medical literature cannot provide us with a clear answer.

After trying to round up everything that might be germane to the issue and understanding the quality of that evidence, in a situation such as this we would suggest involving the patient as a real partner.

The Delfini model for patient decision-making gives suggested approaches: when lack of helpful evidence leaves one uncertain, we believe it is a matter of sharing that information and assorted facts with the patient, then engaging with them to determine what mode of decision-making they desire.

http://www.delfini.org/page_SamePage_PatDM.htm#dm

Dr. Sousa writes back:
"Thanks very much for this. While I find uncertainty a motivating factor to seek better data and more understanding, my surgical colleagues appear to view uncertainty as something that can be cut out with a scalpel. The issue of risk data gets at the very heart of our problem...without evidence of need, we do not need therapy. The painful retort "absence of evidence is not evidence of absence" loses sight of the fact the burden of proof should fall on the therapy and not on the patient.

As you clearly realize, shared decision making is the only reasonable model for helping these patients.

Thanks very much for your insights. Aron"

TREND: Reporting Standards for Non-randomized Studies

In an article entitled “Evidence-Based Public Health: Moving Beyond Randomized Trials,” Cesar G. Victora, MD, PhD, Jean-Pierre Habicht, MD, PhD, and Jennifer Bryce, EdD describe the evidence-based movement in public health practices.

Victora CG, Habicht, JP, Bryce J “Evidence-Based Public Health: Moving Beyond Randomized Trials” Am J Public Health. 2004 Mar;94(3):400-5.

http://www.ncbi.nlm.nih.gov/pubmed/14998803?dopt=Abstract

The authors argue that there is an urgent need to develop evaluation standards and protocols for the use of non-randomized studies in circumstances where RCTs are not appropriate, or where strong plausibility support for causal inference can be provided by reporting intermediate steps along a causal pathway.

For example, one study reported that 1-year-old children in Brazil attending 14 health centers randomized to a health care training program had significantly greater weight gain over 6 months than children attending 14 matched clinics with standard care.

Victora et al. acknowledge the limited internal validity of the study, but believe the study would be less convincing if the authors had not demonstrated that—

  • It was possible to train a large number of health care workers,
  • Trained workers performed better,
  • Mothers were receptive and understood the messages,
  • Mothers in the intervention group changed their breastfeeding behavior, and
  • Children in the intervention group had better growth rates.

In a commentary, Des Jarlais DC, Lyles C, Crepaz N, and the TREND Group present the initial version of the Transparent Reporting of Evaluations with Non-randomized Designs (TREND) statement, a checklist for reporting behavioral and public health interventions using non-randomized designs (Am J Public Health. 2004 Mar;94(3):361-6).

The TREND checklist will be of interest to everyone reading the behavioral and public health literature. The initial version of the TREND checklist is summarized at:

http://www.ajph.org/cgi/content/abstract/94/3/361

Poorly Written Papers

Horacio Plotkin, assistant professor of paediatrics and orthopaedics at the University of Nebraska Medical Center, Omaha, has written a spoof on how to get your paper rejected. However, in our line of work we see a lot of what he describes actually get published! Here's what not to do...

Plotkin H. How to Get Your Paper Rejected. BMJ 2004;329:1469 (18 December), doi:10.1136/bmj.329.7480.1469

Media Heyday: Aspirin and (Potentially) Reduced Risk of Breast Cancer
We have repeatedly seen clinicians and patients make therapeutic decisions based on observational data. The hormone replacement therapy (HRT) story is the classic case. Using HRT in women with coronary artery disease became usual care based on case-control and cohort studies that reported benefit. Years later randomized controlled trials (RCTs) showed there were more harms than benefits with HRT and no cardiac protection.

It is interesting to look at some of the language in the media when a “breakthrough” publication appears. Below are some quotes from various newspapers regarding the association of aspirin (ASA) and a decreased risk of breast cancer in the JAMA case-control study (Terry MB, Gammon MD, Zhang FF, et al. Association of frequency and duration of aspirin use and hormone receptor status with breast cancer risk. JAMA. 2004;291:2433-2440. PMID: 15161893).

The ASA-breast cancer study demonstrates how —

  • Unproven interventions get attention—and then are likely to be used in medical practice
  • Unproven interventions then become part of “usual care”—and at times standards of care—before valid evidence of benefit has been presented.

Health-AFP

  • “Women who regularly take aspirin appear to have a reduced risk of breast cancer, a study in the May 26 issue of the Journal of the American Medical Association found.”
  • “Other studies already had shown a link between aspirin consumption and reducing breast cancer risk but this was the first to show a link between the medicine and reducing the breast cancer risk in women with hormone-receptor-positive cancers.”

Health-Associated Press

  • “An effective weapon against many women's most feared disease might be as close as their medicine cabinets, according to new research linking aspirin with a reduced risk of breast cancer.”
  • "It's a landmark study," said Dr. Sheryl Gabram, a breast specialist at Loyola University Medical Center in suburban Chicago who was not involved in the study.
  • “…The results are tantalizing and make biological sense, the researchers and other doctors said.”

Los Angeles Times

  • “An aspirin a day…may protect women against breast cancer, especially those who have gone through menopause.”
  • “The study also found that daily aspirin use reduced by 32 percent the incidence of tumors fueled by estrogen, which accounted for 70 percent to 75 percent of all breast cancers…” [A correct statement would be that the study was associated with a reduced incidence.]
  • “In an accompanying editorial, Dr. Raymond N. DuBois of Vanderbilt University in Nashville said that despite emerging evidence supporting aspirin's potential, it was too soon to recommend it for breast cancer prevention because doctors didn't know the optimal dose or regimen.”

And the headlines themselves can be very misleading. While some articles responsibly include something in their headers that indicates this is still a question, others blatantly indicate a cause/effect relationship.
Here’s the title of a National Public Radio news audio—Study: “Aspirin Cuts Breast Cancer Risk”—despite the use of “may” in the body of the text. And the AFP title of their article is “Aspirin can reduce breast cancer risk: study”—despite their use of the word “appears” in the article itself. And from Reuters Health Information—“Hormones Affect Aspirin’s Anti-breast Cancer Effect.” And the headline at KRON 4—the Bay Area’s news station, voted California’s best TV website by the Associated Press—announces “Aspirin Reduces Breast Cancer Risk.” So now we know.

To be fair, most of the newspaper articles point out that the study is not definitive, but without further explanation of confounding, most lay (and professional) readers will assume that phrases such as “linked to” and “appear to have a decreased risk” are statements of cause and effect.

What we might do to help—
If we could get media writers to understand confounding, perhaps they could add something like this:

“It is important to point out that this type of study cannot show cause and effect. When people choose to take a treatment (aspirin in this case) and the researchers compare the incidence of breast cancer to people who do not choose to take aspirin, the results are very likely to be “confounded” by another factor. The biggest problem in studies of this type is that the group taking aspirin differs from the group not taking aspirin. Women who choose to take aspirin may take better care of themselves in many ways—diet, optimal weight, not smoking, good exercise, etc. They may have genetic or other differences from those who chose not to take aspirin. All of the potential differences can never be known, so 'adjusting' for these factors statistically (as is done in this type of study) will never be sufficient to eliminate potential confounders.

What should be done? Only a different type of study can tell us if aspirin truly results in a reduced incidence of breast cancer. Women would have to be blindly 'randomized' to each group in order to distribute the unknown differences (confounders) equally between the aspirin and non-aspirin groups, and then a valid study would have to be conducted with no other differences between the groups. Only then can we isolate the intervention (aspirin or placebo) and know that, if a difference is found, that the difference is truly due to aspirin and not some other factor (one of the many confounders).”

Also, we think it is important to point out harms. In the case of aspirin, the following would be responsible reporting:

"Before taking aspirin, patients should be aware of the fact that taking aspirin daily carries risks such as stomach problems and bleeding. For example, over 5 years of taking aspirin, the risk of developing a major problem with bleeding is about 1 in 500." (Ref: Sanmuganathan PS, et al. Aspirin for primary prevention of coronary heart disease: safety and absolute benefit related to coronary risk derived from meta-analysis of randomised trials. Heart 2001;85:265-71.)


© 2002-2020 Delfini Group, LLC. All Rights Reserved Worldwide.
Use of this website implies your agreement to our Notices.
