Time-related Biases Including Immortality Bias

We were recently asked about the term “immortality bias.” The easiest way to explain immortality bias is to start with an example. Imagine a study of hospitalized COPD patients undertaken to assess the impact of drug A, an inhaled corticosteroid preparation, on survival. In our first example, people are randomized either to receive a prescription for drug A post-discharge or not to receive a prescription. If someone in the drug A group dies prior to filling their prescription, they should be analyzed as randomized and, therefore, counted as a death in the drug A group even though they were never actually exposed to drug A.

Let’s imagine that drug A confers no survival advantage and that mortality for this population is 10 percent.  In a study population of 1,000 patients in each group, we would expect 100 deaths in each group. Let us say that 10 people in the drug A group died before they could receive their medication. If we did not analyze the unexposed people who died in group A as randomized, that would be 90 drug A deaths as compared to 100 comparison group deaths—making it falsely appear that drug A resulted in a survival advantage.
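To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical numbers above (1,000 patients per group, 10 percent mortality, no true effect of drug A):

```python
# Hypothetical numbers from the example above: 1,000 patients per group,
# 10 percent mortality, no true survival benefit from drug A, and 10 drug A
# patients who died before they could fill their prescription.
n_per_group = 1000
deaths_drug_a = 100           # deaths among those randomized to drug A
deaths_comparison = 100       # deaths in the comparison group
deaths_before_exposure = 10   # drug A deaths occurring before any exposure

# Analyzed as randomized (correct): both groups show 10 percent mortality.
risk_drug_a = deaths_drug_a / n_per_group                    # 0.10
risk_comparison = deaths_comparison / n_per_group            # 0.10

# Incorrectly dropping the unexposed deaths from the drug A count makes
# drug A falsely appear to confer a survival advantage (90 vs. 100 deaths).
risk_misclassified = (deaths_drug_a - deaths_before_exposure) / n_per_group  # 0.09

print(risk_drug_a, risk_comparison, risk_misclassified)
```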

If drug A actually works, the time during which patients are not exposed to the drug works a little against the intervention (and do people actually take their drug?), but because bias tends to favor the intervention, this probably evens up the playing field a bit—there is a reason why we talk about “closeness to truth” and “estimates of effect.”

“Immortality bias” is a risk in studies when there is a time period (the “immortal” time, or the “immune” time when the outcome is something other than survival) during which patients in one group cannot experience the event.  To illustrate this, setting aside the myriad other biases that can plague observational studies (such as confounding through choice of treatment), let us compare the randomized controlled trial (RCT) just described with a retrospective cohort study of the same question. In the observational study, we have to choose when to start observing patients, and, because grouping is no longer determined by randomization, we also have to choose how patients are grouped for analysis.

For our example, let us say we are going to start the clock on recording outcomes (death) at the date of discharge. Patients are then grouped for analysis by whether or not they filled a prescription for drug A within 90 days of discharge.  Because “being alive” is a requirement for picking up a prescription, but not for being counted in the comparison group, the drug A group potentially receives a “survival advantage” unless this bias is accounted for in some way in the analysis.

In other words, by design, no deaths can occur in the drug A group prior to picking up a prescription.  In the comparison group, however, death never gets an opportunity to “take a holiday,” as it were.  If you die before getting a prescription, you are automatically counted in the comparison group.  If you live and pick up your prescription, you are automatically counted in the drug A group.  So the outcome of “being alive” is a prerequisite for being in the drug A group, and all deaths among people who did not fill a prescription during that 90-day window get counted in the comparison group.  This is yet another example of how groups that are different, or that are treated differently in ways other than what is being studied, can bias outcomes.
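For illustration, here is a small, hypothetical simulation of this cohort design in Python. It assumes drug A has no effect at all, yet grouping patients by whether they filled a prescription still produces an apparent survival advantage for drug A:

```python
import random

random.seed(0)

# Hypothetical simulation: drug A has no effect on survival. Ten percent of
# patients die at a uniformly random day during a year of follow-up, and
# 80 percent of patients intend to fill a prescription for drug A at a
# uniformly random day within 90 days of discharge.
n = 100_000
deaths = {"drug A": 0, "comparison": 0}
totals = {"drug A": 0, "comparison": 0}

for _ in range(n):
    death_day = random.uniform(0, 365) if random.random() < 0.10 else None
    fill_day = random.uniform(0, 90) if random.random() < 0.80 else None

    # You must be alive on the fill day to pick up the prescription, so
    # anyone who dies first automatically lands in the comparison group.
    filled = fill_day is not None and (death_day is None or death_day > fill_day)
    group = "drug A" if filled else "comparison"

    totals[group] += 1
    deaths[group] += death_day is not None

for group in totals:
    print(group, round(deaths[group] / totals[group], 3))
# Prints roughly 9% mortality for drug A and roughly 14% for the comparison
# group, even though drug A does nothing.
```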

Many readers will recognize the similarity between immortality bias and lead time bias. Lead time bias occurs when earlier detection of a disease, because of screening, makes it appear that the screening has conferred a survival advantage—when, in fact, the “greater length of time survived” is really an artifact resulting from the additional time counted between disease identification and when it would have been found if no screening had taken place.
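A tiny worked example of lead time bias, using hypothetical years chosen only for illustration:

```python
# Hypothetical timeline: the disease is found by screening at year 0, would
# have been found clinically at year 2, and the patient dies at year 5 in
# either case. Screening adds two years of "survival after diagnosis"
# without postponing death at all.
death_year = 5
screen_detection_year = 0
clinical_detection_year = 2

survival_with_screening = death_year - screen_detection_year       # 5 years
survival_without_screening = death_year - clinical_detection_year  # 3 years
lead_time = clinical_detection_year - screen_detection_year        # 2 years

print(survival_with_screening, survival_without_screening, lead_time)
```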

Another instance in which a time-dependent bias can occur is in oncology studies when intermediate markers (e.g., tumor recurrence) are assessed only at the end of follow-up segments using Kaplan-Meier methodology. Recurrence may actually have occurred in some subjects at the beginning of a time segment rather than at the end, yet it is recorded as of the assessment point.
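As a small, hypothetical illustration of how interval assessment shifts recorded event times:

```python
# Hypothetical example: recurrence is only assessed at scheduled visits
# every 90 days, so an event that actually occurred on day 5 is recorded
# at day 90, overstating recurrence-free time in a Kaplan-Meier analysis.
assessment_interval_days = 90
actual_event_day = 5

recorded_event_day = ((actual_event_day // assessment_interval_days) + 1) * assessment_interval_days
print(recorded_event_day)  # 90
```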

It is always good to ask whether, in the course of the study, the passing of time could have had an impact on any outcomes.

Other Examples —

  • Might the population under study have significantly changed during the course of the trial?
  • Might the time period of the study affect study results (e.g., studying an allergy medication, but not during allergy season)?
  • Could awareness of adverse events affect future reporting of adverse events?
  • Could test timing or a gap in testing result in misleading outcomes (e.g., in studies comparing one test to another, might discrepancies have arisen in test results if patients’ status changed in between applying the two tests)?

All of these time-dependent biases can distort study results.


Webinar: “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities”


On Monday, May 20, 2013, we presented a webinar on “Using Real-World Data & Published Evidence in Pharmacy Quality Improvement Activities” for the member organizations of the Alliance of Community Health Plans (ACHP).

The 80-minute discussion addressed four topic areas, each of which poses unique critical appraisal challenges. The webinar’s goal was to discuss issues that arise when conducting quality improvement efforts using real-world data (such as data from claims, surveys and observational studies) and other published healthcare evidence.

Key pitfalls were cherry-picked for these four mini-seminars—

  • Pitfalls to avoid when using real-world data, dealing with heterogeneity, confounding-by-indication and causality.
  • Key issues in evaluating oncology studies — outcome issues and focus on how to address large attrition rates.
  • Important issues when conducting comparative safety reviews — assessing patterns through use of RCTs, systematic reviews, observational studies and registries.
  • Key issues in evaluating studies employing Kaplan-Meier estimates — time-to-event basics with attention to the important problem of censoring.

A recording of the webinar is available at—

https://achp.webex.com/achp/lsr.php?AT=pb&SP=TC&rID=45261732&rKey=1475c8c3abed8061&act=pb


Review of Endocrinology Guidelines


Decision-makers frequently rely on the body of pertinent research when making decisions regarding clinical management. The goal is to critically appraise and synthesize the evidence before making recommendations, developing protocols and making other decisions. Serious attention is paid to the validity of the primary studies to determine their reliability before accepting them into the review.  Brito and colleagues used the Assessment of Multiple Systematic Reviews (AMSTAR) tool to describe the rigor of systematic reviews (SRs) cited from 2006 through January 2012 in support of the clinical practice guidelines put forth by the Endocrine Society [1].

The authors included 69 of 2,817 studies. These 69 SRs had a mean AMSTAR score of 6.4 (standard deviation, 2.5) out of a maximum score of 11, with scores improving over time. Thirty-five percent of the included SRs were of low quality (methodological AMSTAR score of 1 or 2 of 5) and were cited in 24 different recommendations. These low-quality SRs were the main evidentiary support for five recommendations, of which only one acknowledged the quality of the SRs.

The authors conclude that few recommendations in the field of endocrinology are supported by reliable SRs and that the quality of the endocrinology SRs is suboptimal and is currently not being addressed by guideline developers. SRs should reliably represent the body of relevant evidence.  The authors urge SR authors and journal editors to pay attention to bias and adequate reporting.

Delfini note: Once again we see a review of guideline work that suggests using caution in accepting clinical recommendations without critically appraising the evidence and knowing the strength of the evidence supporting those recommendations.

1. Brito JP, Tsapas A, Griebeler ML, Wang Z, Prutsky GJ, Domecq JP, Murad MH, Montori VM. Systematic reviews supporting practice guideline recommendations lack protection against bias. J Clin Epidemiol. 2013 Jun;66(6):633-8. doi: 10.1016/j.jclinepi.2013.01.008. Epub 2013 Mar 16. PubMed PMID: 23510557.


Review of Bias In Diabetes Randomized Controlled Trials


Healthcare professionals must evaluate the internal validity of randomized controlled trials (RCTs) as a first step in the process of considering the application of clinical findings (results) for particular patients. Bias has been repeatedly shown to increase the likelihood of distorted study results, frequently favoring the intervention.

Readers may be interested in a new systematic review of diabetes RCTs. Risk of bias (low, unclear or high) was assessed in 142 trials using the Cochrane Risk of Bias Tool.  Overall, 69 trials (49%) had at least one of seven domains at high risk of bias. Inadequate reporting frequently hampered the risk of bias assessment: the method of producing the allocation sequence was unclear in 82 trials (58%), and allocation concealment was unclear in 78 trials (55%). There were no significant reductions over time in the proportion of studies at high risk of bias or in the adequacy of reporting of risk of bias domains. The authors conclude that these trials have serious limitations that put the findings in question and therefore inhibit evidence-based quality improvement (QI). There is a need to limit the potential for bias when conducting QI trials and to improve the quality of reporting of QI trials so that stakeholders have adequate evidence for implementation. The full study is freely available at—

http://bmjopen.bmj.com/content/3/4/e002727.long

Ivers NM, Tricco AC, Taljaard M, Halperin I, Turner L, Moher D, Grimshaw JM. Quality improvement needed in quality improvement randomised trials: systematic review of interventions to improve care in diabetes. BMJ Open. 2013 Apr 9;3(4). doi:pii: e002727. 10.1136/bmjopen-2013-002727. Print 2013. PubMed PMID: 23576000.

 


Our Current Thinking About Attrition Bias


Significant attrition, whether due to loss of patients to follow-up, discontinuation or some other reason, is a reality of many clinical trials. And, of course, the key question in any study is whether attrition significantly distorted the study results. We’ve spent a lot of time researching the evidence-on-the-evidence and have found that many researchers, biostatisticians and others struggle with this area—there appears to be no clear agreement in the clinical research community about how best to address these issues. There is also inconsistent evidence on the effects of attrition on study results.

We, therefore, believe that studies should be evaluated on a case-by-case basis and doing so often requires sleuthing and sifting through clues along with critically thinking through the unique circumstances of the study.

The key question is, “Given that attrition has occurred, are the study results likely to be true?” It is important to look at the contextual elements of the study. These contextual elements may include information about the population characteristics, the potential effects of the intervention and comparator, the outcomes studied and whether patterns emerge, timing and setting. It is also important to look at the reasons for discontinuation and loss to follow-up, and at what data are missing and why, to assess the likely impact on results.

Attrition may or may not affect study outcomes depending, in part, upon the reasons for withdrawals, the censoring rules and the resulting effects of applying those rules. However, differential attrition should be looked at especially closely. Unintended differences between groups are more likely to arise when patients have not been allocated to their groups in a concealed (blinded) fashion, when groups are not balanced at the onset of the study, when the study is not effectively blinded, or when an effect of the treatment itself has caused the attrition.

One piece of the puzzle, at times, may be whether prognostic characteristics remained balanced. Authors could help us all out tremendously by assessing the comparability of baseline characteristics between those randomized and those analyzed. However, an imbalance may be an important clue too, because it might be informative about efficacy or side effects of the agent under study.

In general, we think it is important to attempt to answer the following questions:

Examining the contextual elements of a given study—

  • What could explain the results if it is not the case that the reported findings are true?
  • What conditions would have to be present for an opposing set of results (equivalence or inferiority) to be true instead of the study findings?
  • Were those conditions met?
  • If these conditions were not met, is there any reason to believe that the estimate of effect (size of the difference) between groups is not likely to be true?

Interesting Comparative Effectiveness Research (CER) Case Study: “Real World Data” Hypothetical Migraine Case and Lack of PCORI Endorsement


In the October issue of Health Affairs, the journal’s editorial team created a fictional set of clinical trials and observational studies to see what various stakeholders would say about comparative effectiveness evidence of two migraine drugs.[1]

The hypothetical set-up is this:

The newest drug, Hemikrane, is FDA approved and has recently come on the market. It was reported in clinical trials to reduce both the frequency and the severity of migraine headaches. Hemikrane is taken once a week. The FDA approved Hemikrane based on two randomized, double-blind, controlled clinical trials, each of which had three arms.

  • In one arm, patients who experienced multiple migraine episodes each month took Hemikrane weekly.
  • In another arm, a comparable group of patients received a different migraine drug, Cephalal, a drug which was reported to be effective in earlier, valid studies. It is taken daily.
  • In a third arm, another equivalent group of patients received placebos.

The studies were powered to find a difference between Hemikrane and placebo, if there was one, and to determine whether Hemikrane was at least as effective as Cephalal. Each of the two randomized studies enrolled approximately 2,000 patients and lasted six months. They excluded patients with uncontrolled high blood pressure, diabetes, heart disease, or kidney dysfunction. The patients received their care in a number of academic centers and clinical trial sites. All patients submitted daily diaries, recording their migraine symptoms and any side effects.

Hypothetical Case Study Findings: The trials reported that the patients who took Hemikrane had a clinically significant reduction in the frequency, severity, and duration of headaches compared to placebo, but not to Cephalal.

The trials were not designed to evaluate the comparative safety of the drugs, but there were no safety signals from the Hemikrane patients, although a small number of patients on the drug experienced nausea.

Although the above studies reported efficacy of Hemikrane in a controlled environment with highly selected patients, they did not assess patient experience in a real-world setting. Does once weekly dosing improve adherence in the real world? The monthly cost of Hemikrane to insurers is $200, whereas Cephalal costs insurers $150 per month. (In this hypothetical example, the authors assume that copayments paid by patients are the same for all of these drugs.)

A major philanthropic organization with an interest in advancing treatments for migraine sufferers funded a collaboration among researchers at Harvard; a regional health insurance company, Trident Health; and Hemikrane’s manufacturer, Aesculapion. The insurance company, Trident Health, provided access to a database of five million people, which included information on medication use, doctor visits, emergency department evaluations and hospitalizations. Using these records, the study identified a cohort of patients with migraine who made frequent visits to doctors or hospital emergency departments. The study compared information about patients receiving Hemikrane with two comparison groups: a group of patients who received the daily prophylactic regimen with Cephalal, and a group of patients receiving no prophylactic therapy.

The investigators attempted to confirm the original randomized trial results by assessing the frequency with which all patients in the study had migraine headaches. Because the database did not contain a diary of daily symptoms, which had been collected in the trials, the researchers substituted as a proxy the amount of medications such as codeine and sumatriptan (Imitrex) that patients had used each month for treatment of acute migraines. The group receiving Hemikrane had lower use of these symptom-oriented medications than those on Cephalal or on no prophylaxis and had fewer emergency department visits than those taking Cephalal or on no prophylaxis.

Although the medication costs were higher for patients taking Hemikrane because of its higher monthly drug cost, the overall episode-of-care costs were lower than for the comparison group taking Cephalal. As hypothesized, the medication adherence was higher in the once-weekly Hemikrane patients than in the daily Cephalal patients (80 percent and 50 percent, respectively, using the metric of medication possession ratio, which is the number of days of medication dispensed as a percentage of 365 days).
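For readers unfamiliar with the metric, here is the medication possession ratio calculation as defined above, with hypothetical day counts chosen to roughly match the reported 80 percent and 50 percent figures:

```python
# Medication possession ratio (MPR) as defined above: days of medication
# dispensed as a percentage of 365 days. Day counts below are hypothetical.
def medication_possession_ratio(days_dispensed: int, period_days: int = 365) -> float:
    return 100.0 * days_dispensed / period_days

print(round(medication_possession_ratio(292)))  # ~80 (once-weekly Hemikrane patients)
print(round(medication_possession_ratio(183)))  # ~50 (daily Cephalal patients)
```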

The investigators were concerned that the above findings might be due to the unique characteristics of Trident Health’s population of covered patients, regional practice patterns, copayment designs for medications, and/or the study’s analytic approach. They also worried that the results could be confounded by differences in the patients receiving Hemikrane, Cephalal, or no prophylaxis. One possibility, for example, was that patients who experienced the worst migraines might be more inclined to take or be encouraged by their doctors to take the new drug, Hemikrane, since they had failed all previously available therapies. In that case, the results for a truly matched group of patients might have shown even more pronounced benefit for Hemikrane.

To see if the findings could be replicated, the investigators contacted the pharmacy benefit management company, BestScripts, that worked with Trident Health, and asked for access to additional data. A research protocol was developed before any data were examined. Statistical adjustments were also made to balance the three groups of patients to be studied as well as possible—those taking Hemikrane, those taking Cephalal, and those not on prophylaxis—using a propensity score method, which included age, sex, number of previous migraine emergency department visits, type and extent of prior medication use and selected comorbidities to estimate the probability of a person’s being in one of the three groups.
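As a rough sketch of the propensity score step described above (not the investigators’ actual code): the column names, the DataFrame `df` and its `group` column are illustrative assumptions, and covariates are assumed to be numeric or already encoded.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Covariates named in the case description; the exact column names, the
# DataFrame `df` and its `group` column ("Hemikrane", "Cephalal", "none")
# are illustrative assumptions.
COVARIATES = ["age", "sex", "prior_ed_visits", "prior_medication_use", "comorbidity_count"]

def estimate_propensity_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a multinomial logistic regression and return, for each patient,
    the estimated probability of being in each of the three groups."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[COVARIATES], df["group"])
    scores = model.predict_proba(df[COVARIATES])
    # These scores are then used to match or weight patients so that the
    # three groups are as comparable as possible on the listed covariates.
    return pd.DataFrame(scores, columns=model.classes_, index=df.index)
```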

The pharmacy benefit manager, BestScripts, had access to data covering more than fifty million lives. The findings in this second, much larger, database corroborated the earlier assessment. The once-weekly prophylactic therapy with Hemikrane clearly reduced the use of medications such as codeine to relieve symptoms, as well as emergency department visits compared to the daily prophylaxis and no prophylaxis groups. Similarly, the Hemikrane group had significantly better medication adherence than the Cephalal group. In addition, BestScripts had data from a subset of employers that collected work loss information about their employees. These data showed that patients on Hemikrane were out of work for fewer days each month than patients taking Cephalal.

In a commentary, Joe Selby, executive director of the Patient-Centered Outcomes Research Institute (PCORI), and colleagues provided a list of problems with these real world studies including threats to validity. They conclude that these hypothetical studies would be unlikely to have been funded or communicated by PCORI.[2]

Below are several of the problems identified by Selby et al.

  • Selection Bias
    • Patients and clinicians may have tried the more familiar, less costly Cephalal first and switched to Hemikrane only if Cephalal failed to relieve symptoms, making the Hemikrane patients a group who, on average, would be more difficult to treat.
    • Those patients who continued using Cephalal may be a selected group who tolerate the treatment well and perceived a benefit.
    • Even if the investigators had conducted the study with only new users, it is plausible that patients prescribed Hemikrane could differ from those prescribed Cephalal. They may be of higher socioeconomic status, have better insurance coverage with lower copayments, have different physicians, or differ in other ways that could affect outcomes.
  • Performance biases or other differences between groups are possible.
  • Details of any between-group differences found in these exploratory analyses should have been presented.

Delfini Comment

These two articles are worth reading if you are interested in the difficult area of evaluating observational studies and including them in comparative effectiveness research (CER). We would add that to know if drugs really work, valid RCTs are almost always needed. In this case we don’t know if the studies were valid, because we don’t have enough information about the risk of selection, performance, attrition and assessment bias and other potential methodological problems in the studies. Database studies and other observational studies are likely to have differences in populations, interventions, comparisons, time treated and clinical settings (e.g., prognostic variables of subjects, dosing, co-interventions, other patient choices, bias from lack of blinding) and adjusting for all of these variables and more requires many assumptions. Propensity scores do not reliably adjust for differences. Thus, the risk of bias in the evidence base is unclear.

This case illustrates the difficulty of making coverage decisions for new drugs that have some potential advantages for some patients when several studies report benefit compared to placebo but we already have established treatment agents with safety records. In addition, new drugs frequently are found to cause adverse events over time.

Observational data is frequently very valuable. It can be useful in identifying populations for further study, evaluating the implementation of interventions, generating hypotheses, and identifying current condition scenarios (e.g., who, what, where in QI project work; variation, etc.). It is also useful in providing safety signals and for creating economic projections (e.g., balance sheets, models). In this hypothetical set of studies, however, we have only gray zone evidence about efficacy from both RCTs and observational studies and almost no information about safety.

Much of the October issue of Health Affairs is taken up with other readers’ comments. Those of you interested in the problems with real world data in CER activities will enjoy reading how others reacted to these hypothetical drug studies.

References

1. Dentzer S; the Editorial Team of Health Affairs. Communicating About Comparative Effectiveness Research: A Health Affairs Symposium On The Issues. Health Aff (Millwood). 2012 Oct;31(10):2183-2187. PubMed PMID: 23048094.

2. Selby JV, Fleurence R, Lauer M, Schneeweiss S. Reviewing Hypothetical Migraine Studies Using Funding Criteria From The Patient-Centered Outcomes Research Institute. Health Aff (Millwood). 2012 Oct;31(10):2193-2199. PubMed PMID: 23048096.


Critical Appraisal Matters


Most of us know that there is much variation in healthcare that is not explained by patient preference, differences in disease incidence or resource availability. We think that many healthcare quality problems with overuse, underuse, misuse, waste, patient harms and more stem from a broad lack of understanding by healthcare decision-makers about what constitutes solid clinical research.

We think it’s worth visiting (or revisiting) our webpage on “Why Critical Appraisal Matters.”

http://www.delfini.org/delfiniFactsCriticalAppraisal.htm


Loss to Follow-up Update

Heads up about an important systematic review of the effects of attrition on outcomes of randomized controlled trials (RCTs) that was recently published in the BMJ.[1]

Background

  • Key Question: Would the outcomes of the trial change significantly if all persons had completed the study, and we had complete information on them?
  • Loss to follow-up in RCTs is important because it can bias study results if the balance in key prognostic variables that was established through randomization is disrupted in a way that would result in different outcomes.  If there is no imbalance between and within the various study groups (i.e., the as-randomized groups compared with completers), then loss to follow-up may not present a threat to validity, except in instances in which statistical significance is not reached because of decreased power.

BMJ Study
The aim of this review was to assess the reporting, extent and handling of loss to follow-up and its potential impact on the estimates of treatment effect in RCTs. The investigators evaluated 235 RCTs published from 2005 through 2007 in the five general medical journals with the highest impact factors: Annals of Internal Medicine, BMJ, JAMA, Lancet, and New England Journal of Medicine. All eligible studies reported a significant (P<0.05) primary patient-important outcome.

Methods
The investigators did several sensitivity analyses to evaluate the effect of varying assumptions about the outcomes of participants lost to follow-up on the estimate of effect for the primary outcome (a computational sketch follows the list below).  Their analysis strategies were—

  • None of the participants lost to follow-up had the event
  • All the participants lost to follow-up had the event
  • None of those lost to follow-up in the treatment group had the event and all those lost to follow-up in the control group did (best case scenario)
  • All participants lost to follow-up in the treatment group had the event and none of those in the control group did (worst case scenario)
  • More plausible assumptions using various event rates, which the authors call “the event incidence”: The investigators performed sensitivity analyses using what they considered plausible ratios of event rates in the dropouts compared to the completers (ratios of 1, 1.5, 2 and 3.5 in the intervention group relative to the control group; see Appendix 2 at the link at the end of this post, below the reference). They chose an upper limit of 5 because it represents the highest ratio reported in the literature.
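Below is a minimal sketch of how such sensitivity analyses can be run on simple two-arm event counts. The function names and the trial numbers are hypothetical, and the “plausible ratio” is operationalized here as intervention-arm dropouts having `ratio` times the completers’ event rate while control-arm dropouts have the same rate as control completers (one reasonable reading of the approach described above):

```python
def risk_with_imputed_dropouts(events, completers, lost, dropout_event_rate):
    """Recompute an arm's event risk after assuming an event rate for
    participants lost to follow-up."""
    return (events + dropout_event_rate * lost) / (completers + lost)

def relative_risk_under_ratio(ev_t, comp_t, lost_t, ev_c, comp_c, lost_c, ratio):
    """Assume intervention-arm dropouts have `ratio` times the event rate of
    intervention completers, while control-arm dropouts have the same rate
    as control completers; return the resulting relative risk."""
    rate_t = ev_t / comp_t
    rate_c = ev_c / comp_c
    risk_t = risk_with_imputed_dropouts(ev_t, comp_t, lost_t, min(ratio * rate_t, 1.0))
    risk_c = risk_with_imputed_dropouts(ev_c, comp_c, lost_c, rate_c)
    return risk_t / risk_c

# Hypothetical trial: 30/200 events with 20 lost in the intervention arm,
# 45/200 events with 15 lost in the control arm.
for ratio in (1, 1.5, 2, 3.5, 5):
    print(ratio, round(relative_risk_under_ratio(30, 200, 20, 45, 200, 15, ratio), 2))
# As the assumed ratio grows, the apparent benefit shrinks toward the null.
```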

Key Findings

  • Of the 235 eligible studies, 31 (13%) did not report whether or not loss to follow-up occurred.
  • In studies reporting the relevant information, the median percentage of participants lost to follow-up was 6% (interquartile range 2-14%).
  • The method by which loss to follow-up was handled was unclear in 37 studies (19%); the most commonly used method was survival analysis (66, 35%).
  • When the investigators varied assumptions about loss to follow-up, results of 19% of trials were no longer significant if they assumed no participants lost to follow-up had the event of interest, 17% if they assumed that all participants lost to follow-up had the event, and 58% if they assumed a worst case scenario (all participants lost to follow-up in the treatment group and none of those in the control group had the event).
  • Under more plausible assumptions, in which the incidence of events in those lost to follow-up relative to those followed-up was higher in the intervention than control group, 0% to 33% of trials—depending upon which plausible assumptions were used (see Appendix 2 at the link at the end of this post below the reference)— lost statistically significant differences in important endpoints.

Summary
When plausible assumptions are made about the outcomes of participants lost to follow-up in RCTs, this study reports that up to a third of positive findings in RCTs lose statistical significance. The authors recommend that authors of individual RCTs and of systematic reviews test their results against various reasonable assumptions (sensitivity analyses). Only when the results are robust under all reasonable assumptions should readers draw inferences from those study results.

For more information, see the Delfini white paper on “missingness” at http://www.delfini.org/Delfini_WhitePaper_MissingData.pdf

Reference

1. Akl EA, Briel M, You JJ, et al. Potential impact on estimated treatment effects of information lost to follow-up in randomised controlled trials (LOST-IT): systematic review. BMJ 2012;344:e2809. doi: 10.1136/bmj.e2809 (Published 18 May 2012). PMID: 19519891.

Article is freely available at—

http://www.bmj.com/content/344/bmj.e2809

Supplementary information is available at—

http://www.bmj.com/content/suppl/2012/05/18/bmj.e2809.DC1

For sensitivity analysis results tables, see Appendix 2 at—

http://www.bmj.com/highwire/filestream/585392/field_highwire_adjunct_files/1


Empirical Evidence of Attrition Bias in Clinical Trials


The commentary, “Empirical evidence of attrition bias in clinical trials,” by Jüni et al [1] is a nice review of what has transpired since 1970, when attrition bias received attention in a critical appraisal of a non-valid trial of extracranial bypass surgery for transient ischemic attack.[2] At about the same time, Bradford Hill coined the phrase “intention-to-treat.”  He wrote that excluding patient data after “admission to the treated or control group” may affect the validity of clinical trials and that “unless the losses are very few and therefore unimportant, we may inevitably have to keep such patients in the comparison and thus measure the ‘intention-to-treat’ in a given way, rather than the actual treatment.”[3] The next major development was meta-epidemiological research, which assessed trials for associations between methodological quality and effect size and found conflicting results in terms of the effect of attrition bias on effect size.  However, as the commentary points out, the studies assessing attrition bias were flawed.[4,5,6]

Finally, a breakthrough in understanding the distorting effect of loss of subjects following randomization came from two authors evaluating attrition bias in oncology trials.[7] The investigators compared the results of their own analyses, which utilized individual patient data and invariably followed the intention-to-treat principle, with those done by the original investigators, which often excluded some or many patients. The results showed that pooled analyses of trials with patient exclusions reported more beneficial effects of the experimental treatment than analyses based on all or most patients who had been randomized. Tierney and Stewart showed that, in most of the meta-analyses they reviewed that were based on only “included” patients, the results favored the research treatment (P = 0.03). The commentary gives deserved credit to Tierney and Stewart for their tremendous contribution to critical appraisal and is a very nice, short read.

References

1. Jüni P, Egger M. Commentary: Empirical evidence of attrition bias in clinical  trials. Int J Epidemiol. 2005 Feb;34(1):87-8. Epub 2005 Jan 13. Erratum in: Int J Epidemiol. 2006 Dec;35(6):1595. PubMed PMID: 15649954.

2. Fields WS, Maslenikov V, Meyer JS, Hass WK, Remington RD, Macdonald M. Joint study of extracranial arterial occlusion. V. Progress report of prognosis following surgery or nonsurgical treatment for transient cerebral ischemic attacks. PubMed PMID: 5467158.

3. Bradford Hill A. Principles of Medical Statistics, 9th edn. London: The Lancet Limited, 1971.

4. Schulz KF, Chalmers I, Hayes RJ, Altman D. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–12. PMID: 7823387

5. Kjaergard LL, Villumsen J, Gluud C. Reported methodological quality and discrepancies between large and small randomized trials in metaanalyses. Ann Intern Med 2001;135:982–89. PMID 11730399

6. Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002 Jun 12;287(22):2973-82. PubMed PMID: 12052127.

7. Tierney JF, Stewart LA. Investigating patient exclusion bias in meta-analysis. Int J Epidemiol. 2005 Feb;34(1):79-87. Epub 2004 Nov 23. PubMed PMID: 15561753.
