Estimating Relative Risk Reduction from Odds Ratios


Odds are hard to work with because they express the likelihood of an event occurring relative to it not occurring—e.g., odds of two to one mean that the likelihood of the event occurring is twice that of it not occurring. Contrast this with probability, which is simply the likelihood of an event occurring.

An odds ratio (OR) is a point estimate used in case-control studies which attempts to quantify the mathematical relationship between an exposure and a health outcome. Odds must be used in case-control studies because the investigator fixes the ratio of cases to controls; probability cannot be determined because the disease rates in the study population cannot be known. The odds that a case was exposed to a certain variable are divided by the odds that a control was exposed to that same variable.

Odds are often used in other types of studies as well, such as meta-analyses, because various properties of odds make them easy to work with mathematically. However, authors are increasingly discouraged from computing odds ratios in secondary studies because of the difficulty of translating what they actually mean in terms of the size of benefits or harms to patients.

Readers frequently attempt to deal with this by converting the odds ratio into a relative risk reduction, treating the odds ratio as if it were a relative risk. Relative risk reduction (RRR) is computed from relative risk (RR) by simply subtracting the relative risk from one and expressing the result as a percentage (RRR = 1 - RR).

Some experts advise readers that this is safe to do if the prevalence of the event is low. While it is true that odds and probabilities of outcomes are usually similar when the event rate is low, we recommend, when possible, calculating both the odds ratio reduction and the relative risk reduction in order to compare them and determine whether the difference is clinically meaningful. Determining whether something is clinically meaningful is a judgment, so whether a conversion of OR to RRR is distorted depends in part upon that judgment.

a = group 1 outcome occurred
b = group 1 outcome did not occur
c = group 2 outcome occurred
d = group 2 outcome did not occur

OR = (a/b) / (c/d)
Estimated RRR from OR (odds ratio reduction) = 1 - OR

RR = (a / group 1 n) / (c / group 2 n)
RRR = 1 - RR
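
To make the arithmetic concrete, below is a minimal Python sketch using hypothetical counts (chosen only for illustration, not from any study) that computes both estimates from the same 2x2 table so they can be compared.

    # Hypothetical 2x2 table (illustration only)
    # group 1 = treatment, group 2 = control
    a, b = 30, 270   # treatment: outcome occurred / did not occur
    c, d = 60, 240   # control:   outcome occurred / did not occur

    odds_ratio = (a / b) / (c / d)                 # OR = (a/b)/(c/d)
    or_reduction = 1 - odds_ratio                  # estimated RRR from the OR

    relative_risk = (a / (a + b)) / (c / (c + d))  # RR = (a/n1)/(c/n2)
    rrr = 1 - relative_risk                        # RRR = 1 - RR

    print(f"1 - OR = {or_reduction:.1%}")  # 55.6%
    print(f"1 - RR = {rrr:.1%}")           # 50.0%

With event rates of 10% and 20%, the odds-based estimate (about 56%) already overstates the true relative risk reduction (50%), and the gap widens as events become more common.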


Attrition Bias Update 01/14/2014: Missing Data Points: Difference or No Difference — Does it Matter?

A colleague recently wrote to ask us more about attrition bias. We shared with him that the short answer is that there is less conclusive research on attrition bias than on other key biases. Attrition does not necessarily mean that attrition bias is present and distorting statistically significant results. Attrition may simply result in a smaller sample size which, depending upon how small the remaining population is, may make results more prone to chance findings due to outliers or to falsely non-significant findings due to lack of power.

If randomization successfully results in balanced groups; if blinding is successful, including concealed allocation of patients to their study groups; if adherence is high; if protocol deviations are balanced and low; if co-interventions are balanced; if censoring rules are unbiased; and if there are no differences between the groups except for the interventions studied, then it may be reasonable to conclude that attrition bias is not present even if attrition rates are large. Balanced baseline comparisons between completers provide further support for such a conclusion, as does comparability in reasons for discontinuation, especially if many categories are reported.

On the other hand, other biases may result in attrition bias. For example, imagine a comparison of an active agent to a placebo in a situation in which blinding is not successful. A physician who knows a patient is on placebo might encourage that patient to drop out of the study, resulting in biased attrition that, in sufficient numbers, could distort the results from what they would otherwise have been.
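
A toy simulation can illustrate the mechanism. The sketch below uses entirely hypothetical numbers and assumes that, with failed blinding, a placebo patient's chance of being steered out of the study rises with his or her underlying risk; a completers-only analysis then understates the placebo arm's true event rate.

    import random

    random.seed(1)
    N = 20000  # simulated placebo patients

    def placebo_event_rate(biased_attrition):
        """Event rate among completers of a simulated placebo arm."""
        events = completers = 0
        for _ in range(N):
            risk = random.uniform(0.05, 0.35)  # true event risk (mean 0.20)
            # Biased attrition: dropout probability tracks the patient's risk.
            if biased_attrition and random.random() < 2 * risk:
                continue  # dropped out; excluded from the analysis
            completers += 1
            if random.random() < risk:
                events += 1
        return events / completers

    print(f"unbiased follow-up: {placebo_event_rate(False):.1%}")  # ~20%
    print(f"biased attrition:   {placebo_event_rate(True):.1%}")   # ~17-18%

Because the dropouts are disproportionately the higher-risk patients, the remaining placebo completers look healthier than the randomized placebo group actually was, and any comparison against the active arm is distorted accordingly.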


Demo: Critical Appraisal of a Randomized Controlled Trial


We recently had a great opportunity to listen to a live demonstration of the critical appraisal of a randomized controlled trial, conducted by Dr. Brian Alper, Founder of DynaMed and Vice President of EBM Research and Development, Quality & Standards at EBSCO Information Services.

Dr. Alper is extremely knowledgeable about critical appraisal and does an outstanding job clearly describing key issues concerning his selected study for review. We are fortunate to have permission to share the recorded webinar with you.

“Learn How to Critically Appraise a Randomized Trial with Brian S. Alper, MD, MSPH, FAAFP”

Below are details of how to access the study that was used in the demo and how to access the webinar itself.

The Study
The study used for the demonstration is Primary Prevention of Cardiovascular Disease with a Mediterranean Diet. Full citation is here—

Estruch R, Ros E, Salas-Salvadó J, Covas MI, Corella D, Arós F, Gómez-Gracia E, Ruiz-Gutiérrez V, Fiol M, Lapetra J, Lamuela-Raventos RM, Serra-Majem L, Pintó X, Basora J, Muñoz MA, Sorlí JV, Martínez JA, Martínez-González MA; PREDIMED Study Investigators. Primary prevention of cardiovascular disease with a Mediterranean diet. N Engl J Med. 2013 Apr 4;368(14):1279-90. doi: 10.1056/NEJMoa1200303. Epub 2013 Feb 25. PubMed PMID: 23432189.

Access to the study for the critical appraisal demo is available here:

http://www.ncbi.nlm.nih.gov/pubmed/?term=N+Engl+J+Med+2013%3B+368%3A1279-1290

The Webinar: 1 Hour

For those of you who have the ability to play WebEx files or can download the software to do so, the webinar can be accessed here—

https://ebsco.webex.com/ebsco/lsr.php?AT=pb&SP=TC&rID=22616757&rKey=f7e98d3414abc8ca&act=pb

Important: It takes about 60 seconds before the webinar starts. (Be sure your sound is on.)

More Chances to Learn about Critical Appraisal

There is a wealth of freely available information to help you learn and accomplish critical appraisal tasks as well as other evidence-based quality improvement activities. Our website is www.delfini.org. We also have a little book available for purchase; it is getting rave reviews and is now being used to train medical and pharmacy residents and in medical, pharmacy and nursing schools.

Delfini Evidence-based Practice Series Guide Book

Basics for Evaluating Medical Research Studies: A Simplified Approach (And Why Your Patients Need You to Know This)

Find our book at—http://www.delfinigrouppublishing.com/ or on our website at www.delfini.org (see Books).

Delfini Recommends DynaMed™

We highly recommend DynaMed. We urge readers to be aware that there is variation in all medical information sources. However, as unpaid members of the DynaMed editorial board, we have the opportunity to participate in establishing review criteria and to get a closer look into methods, staff skills, review outcomes, etc., and we think that DynaMed is a great resource. Depending upon our clinical question and project, DynaMed is often our starting point.

About DynaMed™ from the DynaMed Website

DynaMed™ is a clinical reference tool created by physicians for physicians and other health care professionals for use at the point-of-care. With clinically-organized summaries for more than 3,200 topics, DynaMed provides the latest content and resources with validity, relevance and convenience, making DynaMed an indispensable resource for answering most clinical questions during practice.

DynaMed is updated daily: its editors monitor the content of over 500 medical journals, and each article is evaluated for clinical relevance and scientific validity. The new evidence is then integrated with existing content, and overall conclusions are changed as appropriate, representing a synthesis of the best available evidence. Through this process of Systematic Literature Surveillance, the best available evidence determines the content of DynaMed.

Who Uses DynaMed

DynaMed is used in hospitals, medical schools, residency programs, group practices and by individual clinicians supporting physicians, physician assistants, nurses, nurse practitioners, pharmacists, physical therapists, medical researchers, students, teachers and numerous other health care professionals at the point-of-care.

https://dynamed.ebscohost.com/


Why Statements About Confidence Intervals Often Result in Confusion Rather Than Confidence


A recent paper by McCormack reminds us that authors may mislead readers by making unwarranted “all-or-none” statements and that readers should be mindful of this and carefully examine confidence intervals.

When examining the results of a valid study, confidence intervals (CIs) provide much more information than p-values. Results are statistically significant if the confidence interval does not touch the line of no difference: zero in the case of outcome measures expressed as percentages, such as absolute risk reduction and relative risk reduction, and 1 in the case of ratios, such as relative risk and odds ratios. However, in addition to conveying statistical significance, confidence intervals also provide a plausible range for the true result within a margin of chance (5 percent in the case of a 95% CI). While the actual calculated outcome (i.e., the point estimate) is “the most likely to be true” result within the confidence interval, having this range enables readers to judge, in their opinion, whether statistically significant results are clinically meaningful.

However, as McCormack points out, authors frequently do not provide useful interpretation of the confidence intervals, and authors at times report different conclusions from similar data. McCormack presents several cases that illustrate this problem, and this paper is worth reading.

As an illustration, assume two hypothetical studies report very similar results. In the first study of drug A versus drug B, the relative risk for mortality was 0.9, 95% CI (0.80 to 1.05). The authors might state that there was no difference in mortality between the two drugs because the difference is not statistically significant. However, the confidence interval only barely crosses the line of no difference, which tells us that a difference might well have been found if more people had been studied, so that statement is misleading. A better statement for the first study would include the confidence interval and a neutral interpretation of what the results for mortality might mean. Example—

“The relative risk for overall mortality with drug A compared to drug B was 0.9, 95% CI (0.80 to 1.05). The confidence interval tells us that drug A may reduce mortality by up to a relative 20% (i.e., the relative risk reduction), but may also increase mortality by approximately a relative 5%.”

In a second study with similar populations and interventions, the relative risk for mortality might be 0.93, 95% CI (0.83 to 0.99). In this case, some authors might state, “Drug A reduces mortality.” A better statement for this second hypothetical study would ensure that the reader knows that the upper confidence limit is close to the line of no difference and, therefore, close to non-significance. Example—

“Although the mortality difference is statistically significant, the confidence interval indicates that the relative risk reduction may be as great as 17% but may be as small as 1%.”

The Bottom Line

  1. Remember that p-values refer only to statistical significance and confidence intervals are needed to evaluate clinical significance.
  2. Watch out for statements containing the words “no difference” in the reporting of study results. A finding of no statistically significant difference may be a product of too few people studied (or insufficient time).
  3. Watch out for statements implying meaningful differences between groups when one of the confidence intervals approaches the line of no difference.
  4. None of this means anything unless the study is valid. Remember that bias tends to favor the intervention under study.

If authors do not provide you with confidence intervals, you may be able to compute them yourself, if they have supplied you with sufficient data, using an online confidence interval calculator. For our favorites, search “confidence intervals” at our web links page: http://www.delfini.org/delfiniWebSources.htm
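
As a sketch of what such a calculator does under the hood, here is the widely used log-transform (Katz) method for a relative risk confidence interval in Python. The 2x2 counts are hypothetical, picked to land near the second study's numbers.

    import math

    # Hypothetical counts (illustration only)
    a, n1 = 900, 10000    # events / total, drug A arm
    c, n2 = 1000, 10000   # events / total, drug B arm

    rr = (a / n1) / (c / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of ln(RR), Katz method
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

    print(f"RR = {rr:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")  # 0.90 (0.83 to 0.98)
    print(f"RRR between {1 - hi:.0%} and {1 - lo:.0%}")     # between 2% and 17%

Here the result is statistically significant, but the lower bound of benefit is only about 2%, which is exactly the kind of detail a bare p-value hides.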

Reference

McCormack J, Vandermeer B, Allan GM. How confidence intervals become confusion intervals. BMC Med Res Methodol. 2013 Oct 31;13(1):134. [Epub ahead of print] PubMed PMID: 24172248.


What are Patient-Years?

A participant at one of our recent conferences asked a good question—“What are patient-years?”

“Person-years” is a statistic for expressing incidence rates—the number of events divided by the total person-time at risk. In many studies, the length of exposure to the treatment differs from subject to subject, and the patient-year statistic is one way of dealing with this issue.

The rate of events per patient-year is the number of incident cases divided by the amount of person-time at risk. When all patients are followed for the same period, the denominator can be calculated by multiplying the number of patients in the group by the number of years they are in the study; the number of events (the numerator) is then divided by that denominator.

  • Example: 100 patients are followed for 2 years. In this case, there are 200 patient-years of follow-up.
  • If there were 8 myocardial infarctions in the group, the rate would be 8 MIs per 200 patient years or 4 MIs per 100 patient-years.

The rate can be expressed in various ways, e.g., per 100, 1,000, 100,000, or 1 million patient-years. In some cases, authors report the average follow-up period as the mean and others use the median, which may result in some variation in results between studies.

Another example: assume a study of four patients followed for 1, 2, 3 and 4 years, respectively, in which one event occurs at 1 year (in the patient followed for 1 year) and one at 4 years, with no events in the other two patients. This information can be expressed as 2 events per 10 person-years (1+2+3+4=10), an event rate of 0.2 per person-year.
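
A minimal Python sketch of this bookkeeping, using the four hypothetical patients from the example above:

    # (years of follow-up, event occurred?) for each patient
    patients = [(1, True), (2, False), (3, False), (4, True)]

    person_years = sum(years for years, _ in patients)         # 1+2+3+4 = 10
    events = sum(1 for _, had_event in patients if had_event)  # 2

    rate = events / person_years
    print(f"{events} events / {person_years} person-years")
    print(f"= {rate:.1f} per person-year = {rate * 100:.0f} per 100 person-years")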

An important issue is that frequently the timeframe for observation in studies reporting patient-years does not match the timeframe stated in the study. Brian Alper of Dynamed explains it this way: “If I observed a million people for 5 minutes each and nobody died, any conclusion about mortality over 1 year would be meaningless. This problem occurs whether or not we translate our outcome into a patient-years measure. The key in critical appraisal is to catch the discrepancy between timeframe of observation and timeframe of conclusion and not let the use of ‘patient-years’ mistranslate between the two or represent an inappropriate extrapolation.”[1]

References

1. Personal communication 9/3/13 with Brian S. Alper, MD, MSPH, FAAFP, Editor-in-Chief, DynaMed, Medical Director, EBSCO Information Services.


Can Clinical Guidelines be Trusted?


In a recent BMJ article, “Why we can’t trust clinical guidelines,” Jeanne Lenzer raises a number of concerns regarding clinical guidelines [1]. She begins by summarizing the conflict between 1990 guidelines recommending steroids for acute spinal injury and 2013 clinical recommendations against using steroids in acute spinal injury. She then asks, “Why do processes intended to prevent or reduce bias fail?”

Her proposed answers to this question include the following—

  • Many doctors follow guidelines, even if not convinced about the recommendations, because they fear professional censure and possible harm to their careers.
    • Supporting this, she cites a poll of over 1000 neurosurgeons which showed that—
      • Only 11% believed the treatment was safe and effective.
      • Only 6% thought it should be a standard of care.
      • Yet when asked if they would continue prescribing the treatment, 60% said that they would. Many cited a fear of malpractice if they failed to follow “a standard of care.” (Note: the standard of care changed in March 2013 when the Congress of Neurological Surgeons stated there was no high quality evidence to support the recommendation.)
  • Clinical guideline chairs and participants frequently have financial conflicts.
    • The Cochrane reviewer for the 1990 guideline she references had strong ties to industry.

Delfini Comment

  • Fear-based Decision-making by Physicians

We believe this is a reality. In our work with administrative law judges, we have been told that if you “run with the pack,” you had better be right, and if you “run outside the pack,” you really had better be right. And what happens in court is not necessarily true or just. The solution is better recommendations constructed from individualized, thoughtful decisions based on valid, critically appraised evidence found to be clinically useful, patient preferences and other factors. The important starting place is effective critical appraisal of the evidence.

  • Financial Conflicts of Interest & Industry Influence

It is certainly true that money can sway decisions, whether it comes from industry support or the potential for income. However, we think that most doctors want to do their best for patients and try to make decisions or provide recommendations with the patient’s best interest in mind. Therefore, we think this latter issue may be more complex and strongly affected in both instances by the large number of physicians and others involved in health care decision-making who 1) do not understand that many research studies are not valid, or are not reported in sufficient detail to tell; and 2) lack the skills to differentiate reliable studies from those which may not be reliable.

When it comes to industry support, one of the variables traveling with money is greater exposure to information, through data or through contacts with experts supporting that manufacturer’s products. We suspect that industry influence may be due less to financial incentives than to this exposure coupled with a lack of critical appraisal understanding. Accordingly, we wrote a Letter to the Editor describing our theory that the major problem of low quality guidelines might stem from physicians’ and others’ lack of competency in evaluating the quality of the evidence. Our response is reproduced here.

Delfini BMJ Rapid Response [2]:

We (Delfini) believe that we have some unique insight into how ties to industry may result in advocacy for a particular intervention due to our extensive experience training health care professionals and students in critical appraisal of the medical literature. We think it is very possible that the outcomes Lenzer describes are less due to financial influence than are due to lack of knowledge. The vast majority of physicians and other health care professionals do not have even rudimentary skills in identifying science that is at high to medium risk of bias or understand when results may have a high likelihood of being due to chance. Having ties to industry would likely result in greater exposure to science supporting a particular intervention.

Without the ability to evaluate the quality of the science, we think it is likely that individuals would be swayed and/or convinced by that science. The remedy for this and for other problems with the quality of clinical guidelines is ensuring that all guideline development members have basic critical appraisal skills and there is enough transparency in guidelines so that appraisal of a guideline and the studies utilized can easily be accomplished.

References

1. Lenzer J. Why we can’t trust clinical guidelines. BMJ 2013; 346:f3830

2. Strite SA, Stuart M. BMJ Rapid Response: Why we can’t trust clinical guidelines. BMJ 2013;346:f3830; http://www.bmj.com/content/346/bmj.f3830/rr/651876


Webinar: “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities”


On Monday, May 20, 2013, we presented a webinar on “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities” for the member organizations of the Alliance of Community Health Plans (ACHP).

The 80-minute discussion addressed four topic areas, each of which has unique critical appraisal challenges. The webinar’s goals were to discuss issues that arise when conducting quality improvement efforts using real-world data, such as data from claims, surveys and observational studies, and other published healthcare evidence.

Key pitfalls were cherry-picked for these four mini-seminars—

  • Pitfalls to avoid when using real-world data, dealing with heterogeneity, confounding-by-indication and causality.
  • Key issues in evaluating oncology studies — outcome issues and focus on how to address large attrition rates.
  • Important issues when conducting comparative safety reviews — assessing patterns through use of RCTs, systematic reviews, observational studies and registries.
  • Key issues in evaluating studies employing Kaplan-Meier estimates — time-to-event basics with attention to the important problem of censoring.

A recording of the webinar is available at—

https://achp.webex.com/achp/lsr.php?AT=pb&SP=TC&rID=45261732&rKey=1475c8c3abed8061&act=pb


Review of Endocrinology Guidelines


Decision-makers frequently rely on the body of pertinent research when making clinical management decisions. The goal is to critically appraise and synthesize the evidence before making recommendations, developing protocols or making other decisions, and serious attention is paid to the validity of the primary studies to determine their reliability before accepting them into the review. Brito and colleagues have described the rigor of systematic reviews (SRs) cited from 2006 until January 2012 in support of the clinical practice guidelines put forth by the Endocrine Society, using the Assessment of Multiple Systematic Reviews (AMSTAR) tool [1].

The authors included 69 of 2,817 studies. These 69 SRs had a mean AMSTAR score of 6.4 (standard deviation 2.5) out of a maximum of 11, with scores improving over time. Thirty-five percent of the included SRs were of low quality (methodological AMSTAR score of 1 or 2 out of 5) and were cited in 24 different recommendations. These low quality SRs were the main evidentiary support for five recommendations, of which only one acknowledged the quality of the SRs.

The authors conclude that few recommendations in the field of endocrinology are supported by reliable SRs, that the quality of the endocrinology SRs is suboptimal, and that this is currently not being addressed by guideline developers. SRs should reliably represent the body of relevant evidence. The authors urge authors and journal editors to pay attention to bias and adequate reporting.

Delfini note: Once again we see a review of guideline work which suggests using caution in accepting clinical recommendations without critical appraisal of the evidence and knowing the strength of the evidence supporting clinical recommendations.

1. Brito JP, Tsapas A, Griebeler ML, Wang Z, Prutsky GJ, Domecq JP, Murad MH, Montori VM. Systematic reviews supporting practice guideline recommendations lack protection against bias. J Clin Epidemiol. 2013 Jun;66(6):633-8. doi: 10.1016/j.jclinepi.2013.01.008. Epub 2013 Mar 16. PubMed PMID: 23510557.


Critical Appraisal Tool for Clinical Guidelines & Other Secondary Sources


Everything citing medical science should be appraised for validity and clinical usefulness. That includes clinical guidelines and other secondary sources. Our tool for evaluating these resources—the Delfini QI Project Appraisal Tool—has been updated and is available in the Delfini Tools & Educational Library at www.delfini.org. For quick access to the PDF version, go to—

http://www.delfini.org/delfiniNew.htm


California Pharmacist Journal: Student Evidence Review of NAVIGATOR Study


Klevens A, Stuart ME, Strite SA. NAVIGATOR (Effect of nateglinide on the incidence of diabetes and cardiovascular events; PMID 20228402) Study Evidence Review. California Pharmacist. Vol. LIX, No. 4. Fall 2012. Available at our California Pharmacist journal page.
