Demo: Critical Appraisal of a Randomized Controlled Trial

We recently had a great opportunity to listen to a live demonstration of a critical appraisal of a randomized controlled trial conducted by Dr. Brian Alper, Founder of DynaMed; Vice President of EBM Research and Development, Quality & Standards at EBSCO Information Services.

Dr. Alper is extremely knowledgeable about critical appraisal and does an outstanding job clearly describing key issues concerning his selected study for review. We are fortunate to have permission to share the recorded webinar with you.

“Learn How to Critically Appraise a Randomized Trial with Brian S. Alper, MD, MSPH, FAAFP”

Below are details of how to access the study that was used in the demo and how to access the webinar itself.

The Study
The study used for the demonstration is Primary Prevention of Cardiovascular Disease with a Mediterranean Diet. The full citation is here—

Estruch R, Ros E, Salas-Salvadó J, Covas MI, Corella D, Arós F, Gómez-Gracia E, Ruiz-Gutiérrez V, Fiol M, Lapetra J, Lamuela-Raventos RM, Serra-Majem L, Pintó X, Basora J, Muñoz MA, Sorlí JV, Martínez JA, Martínez-González MA; PREDIMED Study Investigators. Primary prevention of cardiovascular disease with a Mediterranean diet. N Engl J Med. 2013 Apr 4;368(14):1279-90. doi: 10.1056/NEJMoa1200303. Epub 2013 Feb 25. PubMed PMID: 23432189.

Access to the study for the critical appraisal demo is available here:

http://www.ncbi.nlm.nih.gov/pubmed/?term=N+Engl+J+Med+2013%3B+368%3A1279-1290

The Webinar: 1 Hour

For those of you who have the ability to play WebEx files or can download the software to do so, the webinar can be accessed here—

https://ebsco.webex.com/ebsco/lsr.php?AT=pb&SP=TC&rID=22616757&rKey=f7e98d3414abc8ca&act=pb

Important: It takes about 60 seconds before the webinar starts. (Be sure your sound is on.)

More Chances to Learn about Critical Appraisal

There is a wealth of freely available information to help you both learn and accomplish critical appraisal tasks as well as other evidence-based quality improvement activities. Our website is www.delfini.org. We also have a little book available for purchase that is getting rave reviews; it is now being used to train medical and pharmacy residents and is in use in medical, pharmacy and nursing schools.

Delfini Evidence-based Practice Series Guide Book

Basics for Evaluating Medical Research Studies: A Simplified Approach (And Why Your Patients Need You to Know This)

Find our book at—http://www.delfinigrouppublishing.com/ or on our website at www.delfini.org (see Books).

Delfini Recommends DynaMed™

We highly recommend DynaMed, although we urge readers to be aware that there is variation in all medical information sources. As (unpaid) members of the DynaMed editorial board, we have the opportunity to participate in establishing review criteria as well as to get a closer look at methods, staff skills, review outcomes, etc., and we think that DynaMed is a great resource. Depending upon our clinical question and project, DynaMed is often our starting point.

About DynaMed™ from the DynaMed Website

DynaMed™ is a clinical reference tool created by physicians for physicians and other health care professionals for use at the point-of-care. With clinically-organized summaries for more than 3,200 topics, DynaMed provides the latest content and resources with validity, relevance and convenience, making DynaMed an indispensable resource for answering most clinical questions during practice.

DynaMed is updated daily; its editors monitor the content of over 500 medical journals. Each article is evaluated for clinical relevance and scientific validity. The new evidence is then integrated with existing content, and overall conclusions are changed as appropriate, representing a synthesis of the best available evidence. Through this process of Systematic Literature Surveillance, the best available evidence determines the content of DynaMed.

Who Uses DynaMed

DynaMed is used in hospitals, medical schools, residency programs, group practices and by individual clinicians supporting physicians, physician assistants, nurses, nurse practitioners, pharmacists, physical therapists, medical researchers, students, teachers and numerous other health care professionals at the point-of-care.

https://dynamed.ebscohost.com/

 


Time-related Biases Including Immortality Bias

We were recently asked about the term “immortality bias.” The easiest way to explain immortality bias is to start with an example. Imagine a study of hospitalized COPD patients undertaken to assess the impact of drug A, an inhaled corticosteroid preparation, on survival. In our first example, people are randomized either to receive a prescription for drug A post-discharge or not to receive a prescription. If someone in group A dies prior to filling their prescription, they should be analyzed as randomized and, therefore, counted as a death in the drug A group even though they were never actually exposed to drug A.

Let’s imagine that drug A confers no survival advantage and that mortality for this population is 10 percent.  In a study population of 1,000 patients in each group, we would expect 100 deaths in each group. Let us say that 10 people in the drug A group died before they could receive their medication. If we did not analyze the unexposed people who died in group A as randomized, that would be 90 drug A deaths as compared to 100 comparison group deaths—making it falsely appear that drug A resulted in a survival advantage.
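
The arithmetic above can be sketched in a few lines of Python. The numbers are the hypothetical ones from our example, not data from any real trial:

```python
# Hypothetical example: 1,000 patients per group, a true mortality of 10%
# in both groups (drug A confers no survival benefit), and 10 deaths in
# the drug A group occurring before any exposure to the drug.
n_per_group = 1000
true_mortality = 0.10
deaths_drug_a = int(n_per_group * true_mortality)    # 100 deaths
deaths_control = int(n_per_group * true_mortality)   # 100 deaths
deaths_before_exposure = 10

# Intention-to-treat (as randomized): count every death in the assigned group.
itt_rate_a = deaths_drug_a / n_per_group             # 0.10
itt_rate_control = deaths_control / n_per_group      # 0.10

# Biased "as exposed" analysis: drop the unexposed deaths from group A.
biased_rate_a = (deaths_drug_a - deaths_before_exposure) / n_per_group  # 0.09

print(f"ITT:    drug A {itt_rate_a:.0%} vs control {itt_rate_control:.0%}")
print(f"Biased: drug A {biased_rate_a:.0%} vs control {itt_rate_control:.0%}")
```

Analyzed as randomized, the groups are identical; dropping the 10 unexposed deaths manufactures a spurious one-percentage-point survival advantage for drug A.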

If drug A actually works, the time that patients are not exposed to the drug works a little against the intervention (oh, yes, and do people actually take their drug?), but as bias tends to favor the intervention, this probably evens up the playing field a bit—there is a reason why we talk about “closeness to truth” and “estimates of effect.”

“Immortality bias” is a risk in studies when there is a time period (the “immortal” time or, when the outcome is something other than death, the “immune” time) in which patients in one group cannot experience an event. To illustrate this, let us compare the randomized controlled trial (RCT) just described to a retrospective cohort study of the same question, setting aside the myriad other biases that can plague observational studies, such as the potential for confounding through choice of treatment. In the observational study, we have to pick a time to start observing patients, and because grouping for analysis is no longer decided by randomization, we have to make a choice about that too.

For our example, let us say we start the clock on recording outcomes (death) at the date of discharge. Patients are then grouped for analysis by whether or not they filled a prescription for drug A within 90 days of discharge. Because “being alive” is a requirement for picking up a prescription, but not for belonging to the comparison group, the drug A group potentially receives a “survival advantage” if this bias isn’t taken into account in some way in the analysis.

In other words, by design, no deaths can occur in the drug A group prior to picking up a prescription. However, in the comparison group, death never gets an opportunity to “take a holiday,” as it were. If you die before getting a prescription, you are automatically counted in the comparison group. If you live and pick up your prescription, you are automatically counted in the drug A group. So the outcome of “being alive” is a prerequisite to being in the drug A group. Therefore, all deaths of people not filling a prescription that occur prior to that 90-day window get counted in the comparison group. This is yet another example of how groups being different, or being treated differently in ways other than what is being studied, can bias outcomes.
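
A small simulation can make this concrete. The sketch below uses entirely made-up numbers (the mean survival time, the share of patients who intend to fill a prescription, and the 90-day window are our assumptions): drug A has no effect at all, yet the group defined by filling a prescription shows lower mortality simply because being alive is a prerequisite for filling.

```python
import random

random.seed(42)

N = 100_000          # discharged patients in our hypothetical cohort
FOLLOW_UP = 365      # days of follow-up after discharge
FILL_WINDOW = 90     # grouping rule: filled a drug A prescription within 90 days

filled_deaths = filled_n = unfilled_deaths = unfilled_n = 0

for _ in range(N):
    # Drug A has NO effect: every patient draws from the same survival curve.
    death_day = random.expovariate(1 / 2000)    # time to death, mean ~2,000 days
    would_fill = random.random() < 0.5          # half intend to fill a prescription
    fill_day = random.uniform(0, FILL_WINDOW)   # when they would fill it

    # Being alive is a prerequisite for filling -- the source of the bias.
    fills = would_fill and fill_day < death_day

    died = death_day <= FOLLOW_UP
    if fills:
        filled_n += 1
        filled_deaths += died
    else:
        unfilled_n += 1
        unfilled_deaths += died

print(f"Drug A (filled) mortality:       {filled_deaths / filled_n:.1%}")
print(f"Comparison (unfilled) mortality: {unfilled_deaths / unfilled_n:.1%}")
```

Everyone who dies before their intended fill date lands in the comparison group, so the “drug A” group is selectively stocked with survivors even though the drug does nothing.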

Many readers will recognize the similarity between immortality bias and lead time bias. Lead time bias occurs when earlier detection of a disease, because of screening, makes it appear that the screening has conferred a survival advantage—when, in fact, the “greater length of time survived” is really an artifact resulting from the additional time counted between disease identification and when it would have been found if no screening had taken place.
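
A toy calculation shows how the lead-time artifact arises; the ages below are invented for illustration only:

```python
# Hypothetical patient: the disease would become clinically apparent at age
# 65, and the patient dies at age 70 regardless of when the disease is found.
age_at_death = 70
age_clinical_diagnosis = 65
age_screen_diagnosis = 60   # screening finds the disease 5 years earlier

survival_without_screening = age_at_death - age_clinical_diagnosis  # 5 years
survival_with_screening = age_at_death - age_screen_diagnosis       # 10 years

lead_time = age_clinical_diagnosis - age_screen_diagnosis  # 5 years
# The apparent 5-year "gain" is entirely lead time: death still occurs at 70.
assert survival_with_screening - survival_without_screening == lead_time
```

The patient appears to survive twice as long with screening, yet dies at exactly the same age; the extra “survival” is just earlier counting.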

Another instance where a time-dependent bias can occur is in oncology studies when intermediate markers (e.g., tumor recurrence) are assessed at the end of follow-up segments using Kaplan-Meier methodology. Recurrence may have occurred in some subjects at the beginning of the time segment rather than at the end of a time segment.
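
A brief sketch, with invented recurrence times and an assumed 180-day assessment schedule, shows how scheduled assessments push recorded event times later than the true ones:

```python
import math
import random

random.seed(0)

ASSESS_EVERY = 180   # assumed days between scheduled tumor assessments
N = 10_000

true_times, recorded_times = [], []
for _ in range(N):
    t = random.expovariate(1 / 400)   # true recurrence time, mean ~400 days
    # Recurrence is only observed at the next scheduled assessment,
    # so the recorded time is rounded UP to the end of the time segment.
    recorded = math.ceil(t / ASSESS_EVERY) * ASSESS_EVERY
    true_times.append(t)
    recorded_times.append(recorded)

mean_true = sum(true_times) / N
mean_recorded = sum(recorded_times) / N
print(f"Mean true recurrence time:     {mean_true:.0f} days")
print(f"Mean recorded recurrence time: {mean_recorded:.0f} days")
```

Every recorded time is at least as late as the true time, so recurrence-free survival is systematically overstated; the distortion is worse when assessment intervals are long or differ between study arms.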

It is always good to ask whether, over the course of the study, the passage of time could have affected any outcomes.

Other Examples —

  • Might the population under study have significantly changed during the course of the trial?
  • Might the time period of the study affect study results (e.g., studying an allergy medication, but not during allergy season)?
  • Could awareness of adverse events affect future reporting of adverse events?
  • Could test timing or a gap in testing result in misleading outcomes (e.g., in studies comparing one test to another, might discrepancies have arisen in test results if patients’ status changed in between applying the two tests)?

All of these time-dependent biases can distort study results.


Can Clinical Guidelines be Trusted?

In a recent BMJ article, “Why we can’t trust clinical guidelines,” Jeanne Lenzer raises a number of concerns regarding clinical guidelines [1]. She begins by summarizing the conflict between 1990 guidelines recommending steroids for acute spinal injury and 2013 clinical recommendations against using steroids in acute spinal injury. She then asks, “Why do processes intended to prevent or reduce bias fail?”

Her proposed answers to this question include the following—

  • Many doctors follow guidelines, even if not convinced about the recommendations, because they fear professional censure and possible harm to their careers.
    • Supporting this, she cites a poll of over 1000 neurosurgeons which showed that—
      • Only 11% believed the treatment was safe and effective.
      • Only 6% thought it should be a standard of care.
      • Yet when asked if they would continue prescribing the treatment, 60% said that they would. Many cited a fear of malpractice if they failed to follow “a standard of care.” (Note: the standard of care changed in March 2013 when the Congress of Neurological Surgeons stated there was no high quality evidence to support the recommendation.)
  • Clinical guideline chairs and participants frequently have financial conflicts.
    • The Cochrane reviewer for the 1990 guideline she references had strong ties to industry.

Delfini Comment

  • Fear-based Decision-making by Physicians

We believe this is a reality. In our work with administrative law judges, we have been told that if you “run with the pack,” you better be right, and if you “run outside the pack,” you really better be right. And what happens in court is not necessarily true or just. The solution is better recommendations constructed from individualized, thoughtful decisions based on valid, critically appraised evidence found to be clinically useful, along with patient preferences and other factors. The important starting place is effective critical appraisal of the evidence.

  • Financial Conflicts of Interest & Industry Influence

It is certainly true that money can sway decisions, be it coming from industry support or potential for income. However, we think that most doctors want to do their best for patients and try to make decisions or provide recommendations with the patient’s best interest in mind. Therefore, we think this latter issue may be more complex and strongly affected in both instances by the large number of physicians and others involved in health care decision-making who 1) do not understand that many research studies are not valid or reported sufficiently to tell; and, 2) lack the skills to be able to differentiate reliable studies from those which may not be reliable.

When it comes to industry support, one of the variables traveling with money is greater exposure to information through data or contacts with experts supporting that manufacturer’s products. We suspect that industry influence may be due less to financial incentives than to this exposure coupled with a lack of critical appraisal understanding. As such, we wrote a Letter to the Editor describing our theory that the major problem of low-quality guidelines might stem from physicians’ and others’ lack of competency in evaluating the quality of the evidence. Our response is reproduced here.

Delfini BMJ Rapid Response [2]:

We (Delfini) believe that we have some unique insight into how ties to industry may result in advocacy for a particular intervention, due to our extensive experience training health care professionals and students in critical appraisal of the medical literature. We think it is very possible that the outcomes Lenzer describes are due less to financial influence than to lack of knowledge. The vast majority of physicians and other health care professionals do not have even rudimentary skills in identifying science that is at high to medium risk of bias, or in understanding when results may have a high likelihood of being due to chance. Having ties to industry would likely result in greater exposure to science supporting a particular intervention.

Without the ability to evaluate the quality of the science, we think it is likely that individuals would be swayed and/or convinced by that science. The remedy for this and for other problems with the quality of clinical guidelines is ensuring that all guideline development members have basic critical appraisal skills and that guidelines are transparent enough that appraisal of a guideline and the studies it relies on can easily be accomplished.

References

1. Lenzer J. Why we can’t trust clinical guidelines. BMJ 2013; 346:f3830

2. Strite SA, Stuart M. BMJ Rapid Response: Why we can’t trust clinical guidelines. BMJ 2013;346:f3830; http://www.bmj.com/content/346/bmj.f3830/rr/651876


Webinar: “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities”

On Monday, May 20, 2013, we presented a webinar on “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities” for the member organizations of the Alliance of Community Health Plans (ACHP).

The 80-minute discussion addressed four topic areas, each with unique critical appraisal challenges. The webinar’s goal was to discuss issues that arise when conducting quality improvement efforts using real-world data, such as data from claims, surveys and observational studies, and other published healthcare evidence.

Key pitfalls were cherry-picked for these four mini-seminars—

  • Pitfalls to avoid when using real-world data — dealing with heterogeneity, confounding by indication and causality.
  • Key issues in evaluating oncology studies — outcome issues and focus on how to address large attrition rates.
  • Important issues when conducting comparative safety reviews — assessing patterns through use of RCTs, systematic reviews, observational studies and registries.
  • Key issues in evaluating studies employing Kaplan-Meier estimates — time-to-event basics with attention to the important problem of censoring.

A recording of the webinar is available at—

https://achp.webex.com/achp/lsr.php?AT=pb&SP=TC&rID=45261732&rKey=1475c8c3abed8061&act=pb


Review of Endocrinology Guidelines

Decision-makers frequently rely on the body of pertinent research when making clinical management decisions. The goal is to critically appraise and synthesize the evidence before making recommendations, developing protocols and making other decisions. Serious attention is paid to the validity of the primary studies to determine reliability before accepting them into the review. Brito and colleagues have described the rigor of systematic reviews (SRs) cited from 2006 until January 2012 in support of the clinical practice guidelines put forth by the Endocrine Society, using the Assessment of Multiple Systematic Reviews (AMSTAR) tool [1].

The authors included 69 of 2,817 studies. These 69 SRs had a mean AMSTAR score of 6.4 (standard deviation 2.5) out of a maximum score of 11, with scores improving over time. Thirty-five percent of the included SRs were of low quality (methodological AMSTAR score of 1 or 2 out of 5) and were cited in 24 different recommendations. These low-quality SRs were the main evidentiary support for five recommendations, of which only one acknowledged the quality of the SRs.

The authors conclude that few recommendations in the field of endocrinology are supported by reliable SRs, that the quality of endocrinology SRs is suboptimal, and that this is currently not being addressed by guideline developers. SRs should reliably represent the body of relevant evidence. They urge authors and journal editors to pay attention to bias and adequate reporting.

Delfini note: Once again we see a review of guideline work which suggests using caution in accepting clinical recommendations without critical appraisal of the evidence and knowing the strength of the evidence supporting clinical recommendations.

1. Brito JP, Tsapas A, Griebeler ML, Wang Z, Prutsky GJ, Domecq JP, Murad MH, Montori VM. Systematic reviews supporting practice guideline recommendations lack protection against bias. J Clin Epidemiol. 2013 Jun;66(6):633-8. doi: 10.1016/j.jclinepi.2013.01.008. Epub 2013 Mar 16. PubMed PMID: 23510557.


Review of Bias In Diabetes Randomized Controlled Trials

Healthcare professionals must evaluate the internal validity of randomized controlled trials (RCTs) as a first step in the process of considering the application of clinical findings (results) for particular patients. Bias has been repeatedly shown to increase the likelihood of distorted study results, frequently favoring the intervention.

Readers may be interested in a new systematic review of diabetes RCTs. Risk of bias (low, unclear or high) was assessed in 142 trials using the Cochrane Risk of Bias Tool. Overall, 69 trials (49%) had at least one of seven domains at high risk of bias. Inadequate reporting frequently hampered the risk of bias assessment: the method of producing the allocation sequence was unclear in 82 trials (58%), and allocation concealment was unclear in 78 trials (55%). There were no significant reductions over time in the proportion of studies at high risk of bias, nor improvements in the adequacy of reporting of risk of bias domains. The authors conclude that these trials have serious limitations that put the findings in question and therefore inhibit evidence-based quality improvement (QI). There is a need to limit the potential for bias when conducting QI trials and to improve the quality of reporting of QI trials so that stakeholders have adequate evidence for implementation. The full study is freely available at—

http://bmjopen.bmj.com/content/3/4/e002727.long

Ivers NM, Tricco AC, Taljaard M, Halperin I, Turner L, Moher D, Grimshaw JM. Quality improvement needed in quality improvement randomised trials: systematic review of interventions to improve care in diabetes. BMJ Open. 2013 Apr 9;3(4):e002727. doi: 10.1136/bmjopen-2013-002727. PubMed PMID: 23576000.

 


California Pharmacist Journal: Student Evidence Review of NAVIGATOR Study

Klevens A, Stuart ME, Strite SA. NAVIGATOR (Effect of nateglinide on the incidence of diabetes and cardiovascular events; PMID 20228402) Study Evidence Review. California Pharmacist 2012. Vol. LIX, No. 4. Fall 2012. Available at our California Pharmacist journal page.


California Pharmacist Journal: Student Evidence Review of SATURN Study

Salman G, Stuart ME, Strite SA. The Study of Coronary Atheroma by Intravascular Ultrasound: Effect of Rosuvastatin versus Atorvastatin (SATURN) Study Evidence Review. California Pharmacist 2012. Vol. LIX, No. 3. Summer 2012. Available at our California Pharmacist journal page.


Our Current Thinking About Attrition Bias

Significant attrition, whether due to loss of patients, discontinuation or some other reason, is a reality of many clinical trials. And, of course, the key question in any study is whether attrition significantly distorted the study results. We’ve spent a lot of time researching the evidence-on-the-evidence and have found that many researchers, biostatisticians and others struggle with this area—there appears to be no clear agreement in the clinical research community about how best to address these issues. There is also inconsistent evidence on the effects of attrition on study results.

We, therefore, believe that studies should be evaluated on a case-by-case basis and doing so often requires sleuthing and sifting through clues along with critically thinking through the unique circumstances of the study.

The key question is, “Given that attrition has occurred, are the study results likely to be true?” It is important to look at the contextual elements of the study. These contextual elements may include information about the population characteristics, potential effects of the intervention and comparator, the outcomes studied and whether patterns emerge, timing and setting. It is also important to look at the reasons for discontinuation and loss-to-follow up and to look at what data is missing and why to assess likely impact on results.

Attrition may or may not impact study outcomes depending, in part, upon the reasons for withdrawals, censoring rules and the resulting effects of applying those rules. However, differential attrition should be examined especially closely. Unintended differences between groups are more likely when patients were not allocated to their groups in a concealed fashion, when groups were not balanced at the onset of the study, when the study was not effectively blinded, or when an effect of the treatment itself caused the attrition.

One piece of the puzzle, at times, may be whether prognostic characteristics remained balanced. Authors could help us all out tremendously by assessing the comparability of baseline characteristics at randomization with those of the patients actually analyzed. However, an imbalance may be an important clue too, because it might be informative about efficacy or side effects of the agent under study.

In general, we think it is important to attempt to answer the following questions:

Examining the contextual elements of a given study—

  • What could explain the results if it is not the case that the reported findings are true?
  • What conditions would have to be present for an opposing set of results (equivalence or inferiority) to be true instead of the study findings?
  • Were those conditions met?
  • If these conditions were not met, is there any reason to believe that the estimate of effect (size of the difference) between groups is not likely to be true?
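
One practical way to pressure-test a study against attrition is a best-case/worst-case sensitivity analysis: assume all missing patients in one arm failed and all in the other responded, and see whether the conclusion survives. The sketch below uses invented counts purely for illustration:

```python
# Hypothetical trial: 100 patients randomized per arm; 20 lost to follow-up
# in the treatment arm and 5 in the control arm. Among completers, responders
# are 48/80 (treatment) and 50/95 (control).
def rate(events, n):
    return events / n

obs_treat, n_treat, lost_treat = 48, 80, 20
obs_ctrl, n_ctrl, lost_ctrl = 50, 95, 5

# Complete-case analysis simply ignores the missing patients.
complete_case = rate(obs_treat, n_treat) - rate(obs_ctrl, n_ctrl)

# Worst case for treatment: all missing treated patients fail,
# all missing control patients respond.
worst = (rate(obs_treat, n_treat + lost_treat)
         - rate(obs_ctrl + lost_ctrl, n_ctrl + lost_ctrl))

# Best case for treatment: the reverse assumption.
best = (rate(obs_treat + lost_treat, n_treat + lost_treat)
        - rate(obs_ctrl, n_ctrl + lost_ctrl))

print(f"Complete-case difference: {complete_case:+.1%}")
print(f"Worst/best-case bounds:   {worst:+.1%} to {best:+.1%}")
```

If the plausible bounds straddle no difference, as they do with these numbers, the complete-case result is not robust to the missing data and should be treated with caution.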

Critical Appraisal Matters

Mike and I make it a practice to study the evidence on the evidence.  Doing effective critical appraisal to evaluate the validity and clinical usefulness of studies makes a difference.  This page on our website may be our most important one and we have now added a 1-page fact sheet for downloading: http://www.delfini.org/delfiniFactsCriticalAppraisal.htm
