Divulging Information to Patients With Poor Prognoses

We have seen several instances in which our colleagues’ families were given very little prognostic information by their physicians in situations where important decisions involving benefits versus harms, quality of life and other end-of-life choices had to be made. In each case, when a clinician in the family presented the evidence and prognostic information, decisions were altered.

We were happy to see a review of this topic by Mack and Smith in a recent issue of the Journal of Clinical Oncology.[1] In a nutshell, the authors point out that—

  • Evidence consistently shows that healthcare professionals are hesitant to divulge prognostic information due to several underlying misconceptions. Examples of misconceptions—
    • Prognostic information will make patients depressed
    • It will take away hope
    • We can’t be sure of the patient’s prognosis anyway
    • Discussions about prognosis are uncomfortable
  • Many patients are denied discussion about code status, advance medical directives, or even hospice until there are no more treatments to give and little time is left for the patient
  • Many patients lose important time with their families and spend more time in the hospital and in intensive care units than they would have if prognostic information had been provided and different decisions had been made.

Patients and families want prognostic information, which they need to make decisions that are right for them. This, together with the lack of evidence that discussing prognosis causes depression, shortens life or takes away hope, and the huge problem of unnecessary interventions at the end of life, creates a strong argument for honest communication about poor prognoses.

Reference

1. Mack JW, Smith TJ. Reasons why physicians do not have discussions about poor prognosis, why it matters, and what can be improved. J Clin Oncol. 2012 Aug 1;30(22):2715-7. Epub 2012 Jul 2. PubMed PMID: 22753911.

 


Update on Decision Support for Clinicians and Patients

We have written extensively about decision support and provide many examples of decision support materials on our website. An easy way to round them up is to go to our website search window at http://www.delfini.org/index_SiteGoogleSearch.htm and type in the terms “decision support.”

A nice systematic review of clinical decision support systems (CDSSs), funded by AHRQ, has recently been published in the Annals of Internal Medicine.[1] The aim of the review was to evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation. CDSSs include alerts, reminders, order sets, drug-dose information, care summary dashboards that provide performance feedback on quality indicators, and information and other aids designed to improve clinical decision-making.

Findings: 148 randomized controlled trials were included in the review. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n=25; odds ratio [OR] 1.42, 95% CI [1.27 to 1.58]), ordering clinical studies (n=20; OR 1.72, 95% CI [1.47 to 2.00]), and prescribing therapies (n=46; OR 1.57, 95% CI [1.35 to 1.82]). There was heterogeneity in interventions, populations, settings and outcomes, as would be expected. The authors conclude that commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but that evidence for clinical, economic, workload and efficiency outcomes remains sparse.

Delfini Comment: Although this review focused on decision support systems, the entire realm of decision support for end users is of great importance to all health care decision-makers. Without good decision support, we will all make suboptimal decisions. This area is huge, and it is worth spending time learning how to move evidence from a synthesis into decision support. Interested readers might want to look at some examples of wonderful decision support materials created at the Mayo Clinic. The URL is—

http://webpages.charter.net/vmontori/Wiser_Choices_Program_Aids_Site/Welcome.html

Reference

1.  Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012 Jul 3;157(1):29-43. PubMed PMID: 22751758.


Early Termination of Clinical Trials—2012 Update

Several years ago we presented the increasing evidence of problems with early termination of clinical trials for benefit after interim analyses.[1] The bottom line is that the results of trials stopped early for benefit are very likely to be distorted by chance findings. A useful review of this topic has recently been published.[2] Briefly, this review points out that—

  • Trials stopped early for benefit frequently report results that are not credible; in one review, half of such trials reported relative risk reductions of more than 47%, and a quarter reported reductions of more than 70%. The apparent overestimates were larger in smaller trials.
  • Stopping trials early for apparent benefit is highly likely to systematically overestimate treatment effects.
  • Large overestimates were common when the total number of events was less than 200.
  • Smaller but important overestimates are likely with 200 to 500 events, and trials with over 500 events are likely to show small overestimates.
  • Stopping rules do not appear to ensure protection against distortion of results.
  • Despite the fact that stopped trials may report chance findings that overestimate true effect sizes—especially when based on a small number of events—positive results receive significant attention and can bias clinical practice, clinical guidelines and subsequent systematic reviews.
  • Trials stopped early reduce opportunities to find potential harms.

The authors provide three examples illustrating the above points, in which harm to patients is likely to have occurred.

Case 1 is the use of preoperative beta blockers in non-cardiac surgery. In 1999, a clinical trial of bisoprolol in patients with vascular disease having non-cardiac surgery, with a planned sample size of 266, was stopped early after enrolling 112 patients, with only 20 events. Two of 59 patients in the bisoprolol group and 18 of 53 in the control group had experienced a composite endpoint event (cardiac death or myocardial infarction). The authors reported a 91% reduction in relative risk for this endpoint, 95% confidence interval (63% to 98%). In 2002, an ACC/AHA clinical practice guideline recommended perioperative use of beta blockers for this population. In 2008, a systematic review and meta-analysis including over 12,000 patients having non-cardiac surgery reported a 35% reduction in the odds of non-fatal myocardial infarction, 95% CI (21% to 46%), a twofold increase in non-fatal strokes, odds ratio 2.1, 95% CI (1.27 to 3.68), and a possible increase in all-cause mortality, odds ratio 1.20, 95% CI (0.95 to 1.51). Despite the results of this good-quality systematic review, subsequent guidelines published in 2009 and 2012 continued to recommend beta blockers.

Case 2 is the use of intensive insulin therapy (IIT) in critically ill patients. In 2001, a single-center randomized trial of IIT in critically ill patients with raised serum glucose reported a 42% relative risk reduction in mortality, 95% CI (22% to 62%). The authors used a liberal stopping threshold (P=0.01) and took frequent looks at the data, strategies they said were “designed to allow early termination of the study.” Results were rapidly incorporated into guidelines, e.g., American College of Endocrinology practice guidelines, with recommendations for an upper glucose limit of 8.3 mmol/L or less. A systematic review published in 2008 summarized the results of subsequent studies, which did not confirm lower mortality with IIT and documented an increased risk of hypoglycemia. A later good-quality systematic review confirmed these findings. Nevertheless, some guideline groups continue to advocate limits of 8.3 mmol/L or less. Other guidelines, using the results of more recent studies, recommend a range of 7.8 to 10 mmol/L.

Case 3 is the use of activated protein C in critically ill patients with sepsis. The original 2001 trial of recombinant human activated protein C (rhAPC) was stopped early after the second interim analysis because of an apparent difference in mortality. In 2004, the Surviving Sepsis Campaign, a global initiative to improve management, recommended use of the drug as part of a “bundle” of interventions in sepsis. A subsequent trial, published in 2005, reinforced previous concerns from studies reporting increased risk of bleeding with rhAPC and raised questions about the apparent mortality reduction in the original study. As of 2007, trials had failed to replicate the favorable results reported in the pivotal Recombinant Human Activated Protein C Worldwide Evaluation in Severe Sepsis (PROWESS) study. Nevertheless, the 2008 iteration of the Surviving Sepsis guidelines and another guideline in 2009 continued to recommend rhAPC. Finally, after further discouraging trial results, Eli Lilly withdrew the drug, drotrecogin alfa (activated) (Xigris), from the market in 2011.

Key points about trials terminated early for benefit (a short simulation sketch follows this list):

  • Truncated trials are likely to overestimate benefits.
  • Results should be confirmed in other studies.
  • Maintain a high level of skepticism regarding the findings of trials stopped early for benefit, particularly when those trials are relatively small and replication is limited or absent.
  • Stopping rules do not protect against overestimation of benefits.
  • Stringent criteria for stopping for benefit would include not stopping before approximately 500 events have accumulated.
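
To see how this distortion can arise from chance alone, here is a minimal simulation sketch in Python. It is our illustration, not an analysis from the cited review, and the trial parameters (true RR 0.85, 20% control event rate, interim looks every 50 events, a naive stopping threshold of roughly P<0.01) are hypothetical:

import math
import random

def simulate_trial(true_rr=0.85, control_rate=0.20, max_events=1000, look_every=50):
    """Accrue patients 1:1; test at each interim look; return (stopped_early, estimated_rr)."""
    events, n = [0, 0], [0, 0]  # [control, treatment]
    rates = [control_rate, control_rate * true_rr]
    next_look = look_every
    while sum(events) < max_events:
        for arm in (0, 1):
            n[arm] += 1
            if random.random() < rates[arm]:
                events[arm] += 1
        if sum(events) >= next_look and min(events) > 0:
            next_look += look_every
            est_rr = (events[1] / n[1]) / (events[0] / n[0])
            se = math.sqrt(1 / events[0] - 1 / n[0] + 1 / events[1] - 1 / n[1])
            if math.log(est_rr) / se < -2.58:  # roughly P < 0.01 favoring treatment
                return True, est_rr            # trial stopped early "for benefit"
    return False, (events[1] / n[1]) / (events[0] / n[0])

random.seed(1)
results = [simulate_trial() for _ in range(2000)]
stopped = [rr for early, rr in results if early]
print(f"stopped early for benefit: {len(stopped)} of 2000")
print(f"mean RR among stopped trials: {sum(stopped) / len(stopped):.2f} (true RR = 0.85)")

The subset of trials that stops early reports an average effect noticeably larger (an RR further below 1) than the truth, echoing the review’s point that the fewer the events at stopping, the bigger the exaggeration.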

References

1. http://www.delfini.org/delfiniClick_PrimaryStudies.htm#truncation

2. Guyatt GH, Briel M, Glasziou P, Bassler D, Montori VM. Problems of stopping trials early. BMJ. 2012 Jun 15;344:e3863. doi: 10.1136/bmj.e3863. PMID:22705814.


Obtaining Absolute Risk Reduction (ARR) and Number Needed To Treat (NNT) From Relative Risk (RR) and Odds Ratios (OR) Reported in Systematic Reviews

Background
Estimates of effect in meta-analyses can be expressed as either relative effects or absolute effects. Relative risks (aka risk ratios) and odds ratios are relative measures. Absolute risk reduction (aka risk difference) and number-needed-to-treat are absolute measures. When reviewing meta-analyses, readers will almost always see results presented as relative risks or odds ratios. The reason for this is that relative measures are considered to be the most consistent statistics when study results are combined from multiple studies; meta-analysts usually avoid performing meta-analyses using absolute differences for this reason.

Fortunately, we are now seeing more meta-analyses reporting relative risks along with ARR and NNT. The key point is that meta-analyses almost always use relative effect measures (relative risk or odds ratio) and then (hopefully) re-express the results using absolute effect measures (ARR or NNT).

You may see the term “assumed control group risk” or “assumed control risk” (ACR). This frequently refers to the risk in the control group or in a subgroup of patients in a meta-analysis, but it could also refer to the risk in any group being compared to an intervention group (i.e., patients not receiving the study intervention).

The Cochrane Handbook now recommends that meta-analysts provide a summary table for the main outcome and that the table include the following items—

  • The topic, population, intervention and comparison
  • The assumed risk (i.e., in the comparison group) and the corresponding risk (i.e., in those receiving the intervention)
  • Relative effect statistic (RR or OR)

When RR is provided, ARR can easily be calculated. Odds ratios deal with odds, not probabilities, and an OR cannot simply be read as if it were an RR: odds compare how many people have an event with how many do not, rather than how many have an event out of a whole population. (Given an assumed control risk, however, an OR can be converted to an ARR, as shown below.) For more on “odds,” see— http://www.delfini.org/page_Glossary.htm#odds

Example 1: Antihypertensive drug therapy compared to control for hypertension in the elderly (60 years or older)

Reference: Musini VM, Tejani AM, Bassett K, Wright JM. Pharmacotherapy for hypertension in the elderly. Cochrane Database Syst Rev. 2009 Oct 7;(4):CD000028. Review. PubMed PMID: 19821263.

  • Computing ARR and NNT from Relative Risk
    When RR is reported in a meta-analysis, determine (this is a judgment) the assumed control risk (ACR)—i.e., the risk in the group being compared to the new intervention—from the control event rate or other data/source
  • Formula: ARR=100 X ACR X (1-RR)

Calculating the ARR and NNT from the Musini Meta-analysis

  • In the above meta-analysis of 12 RCTs in elderly patients with moderate hypertension, the RR for overall mortality with treatment compared to no treatment over 4.5 years was 0.90.
  • The event rate (ACR) in the control group was 116 per 1000, or 0.116
  • ARR=100 X 0.116 X (1-0.90)=100 X 0.116 X 0.10=1.16%
  • NNT=100/1.16=87 (rounded up)
  • Interpretation: The risk of death with treatment is 90% of the risk in the control group (in this case, the group of elderly patients not receiving treatment for hypertension), which translates into 1 to 2 fewer deaths per 100 treated patients over 4.5 years. In other words, you would need to treat 87 elderly hypertensive people at moderate risk with antihypertensives for 4.5 years to prevent one death. (A code sketch of this arithmetic follows.)
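
For those who like to check this arithmetic in code, here is the same calculation as a small Python sketch (the function names are ours):

import math

def arr_from_rr(acr, rr):
    """Absolute risk reduction (in percent) from an assumed control risk and a relative risk."""
    return 100 * acr * (1 - rr)

def nnt(arr_percent):
    """Number needed to treat; round up to the next whole patient."""
    return math.ceil(100 / arr_percent)

arr = arr_from_rr(acr=0.116, rr=0.90)  # Musini example: ACR 116/1000, RR 0.90
print(f"ARR = {arr:.2f}%")             # 1.16%
print(f"NNT = {nnt(arr)}")             # 87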

Computing ARR and NNT from Odds Ratios

In some older meta-analyses you may not be given the assumed control risk (ACR).

Example 2: Oncology Agent

Assume a meta-analysis on an oncology agent reports an estimate of effect (mortality) as an OR of 0.8 over 3 years for a new drug. In order to do the calculation, an ACR is required.  Hopefully this information will be provided in the study. If not, the reader will have to obtain the assumed control group risk (ACR) from other studies or another source. Let’s assume that the control risk in this example is 0.3.

Formula for converting OR to ARR (convert the OR to the corresponding risk in the intervention group, then subtract that risk from the ACR):

  • Intervention group risk=(OR X ACR)/(1-ACR+OR X ACR)
  • ARR=100 X (ACR-intervention group risk)
  • In this example:
  • Intervention group risk=(0.8 X 0.3)/(1-0.3+0.8 X 0.3)=0.24/0.94=0.255
  • ARR=100 X (0.3-0.255)
  • ARR=4.5% (approximately)
  • Thus the ARR is approximately 4.5% over 3 years.
  • The NNT to benefit one patient over 3 years is 100/4.5, which rounded up is 23. (See the sketch below.)
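
Here is the same conversion as a Python sketch, with the odds-to-risk steps made explicit (again, the function names are ours):

import math

def risk_to_odds(p):
    return p / (1 - p)

def odds_to_risk(o):
    return o / (1 + o)

def arr_from_or(acr, odds_ratio):
    """ARR (in percent): apply the OR on the odds scale, then convert back to a risk."""
    intervention_risk = odds_to_risk(odds_ratio * risk_to_odds(acr))
    return 100 * (acr - intervention_risk)

arr = arr_from_or(acr=0.3, odds_ratio=0.8)  # oncology example above
print(f"ARR = {arr:.1f}%")                  # 4.5%
print(f"NNT = {math.ceil(100 / arr)}")      # 23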

As noted above, because odds ratios deal with odds rather than probabilities, an odds ratio interpreted as if it were a relative risk will overestimate the effect of a treatment when outcomes occur commonly (e.g., in more than about 5% of patients).
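
A quick numeric illustration of that caveat (our example, reusing the conversion above): with the same OR of 0.8, the implied relative risk drifts from 0.80 toward 1 as the control risk rises, so reading an OR as if it were an RR increasingly overstates the treatment effect.

def implied_rr(acr, odds_ratio):
    """The relative risk implied by an odds ratio at a given assumed control risk."""
    control_odds = acr / (1 - acr)
    intervention_risk = (odds_ratio * control_odds) / (1 + odds_ratio * control_odds)
    return intervention_risk / acr

for acr in (0.01, 0.05, 0.30, 0.50):
    print(f"ACR {acr:.2f}: OR 0.8 implies RR {implied_rr(acr, 0.8):.2f}")
# ACR 0.01 -> RR 0.80; 0.05 -> 0.81; 0.30 -> 0.85; 0.50 -> 0.89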

For more information see The Cochrane Handbook, Part 2, Chapter 12.5.4 available at http://www.cochrane-handbook.org/


Are Adaptive Trials Ready For Primetime?

It is well known that many patients volunteer for clinical trials because they mistakenly believe that the goal of the trial is to improve outcomes for the volunteers (the so-called therapeutic misconception). A type of trial that does attempt to improve outcomes for those who enter the trial late is the adaptive trial. In adaptive trials, investigators change the enrollment and treatment procedures as the study gathers data about treatment efficacy. For example, if a study compares a new drug against a placebo and the drug appears to be working, subjects enrolling later will be more likely to receive it (a minimal sketch of this idea appears after this paragraph). The idea is that adaptive designs will attract more study volunteers.
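
As a rough sketch of how “later enrollees are more likely to receive the better-performing arm” can be implemented, here is one common approach, Thompson-sampling-style response-adaptive randomization, in Python. This is our illustration of the general idea, with made-up response rates; it is not a design taken from the commentaries cited below:

import random

def choose_arm(successes, failures):
    """Draw from each arm's Beta posterior; assign the next patient to the larger draw."""
    draws = [random.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return draws.index(max(draws))

random.seed(0)
true_response = [0.30, 0.45]  # hypothetical response rates: placebo, new drug
successes, failures = [0, 0], [0, 0]
assigned = [0, 0]
for _ in range(400):          # 400 sequentially enrolled patients
    arm = choose_arm(successes, failures)
    assigned[arm] += 1
    if random.random() < true_response[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
print(f"patients assigned (placebo, drug): {assigned}")  # allocation tilts toward the drug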

As pointed out in a couple of recent commentaries, however, there are many unanswered questions about this type of trial. A major concern is unblinding, which may occur with this design and lead to problems with allocation of patients to groups. Frequent peeks at the data may influence decisions made by monitoring boards, investigators and participants. Another issue is whether the results of adaptive trials can be replicated. Finally, there are ethical questions, such as greater risk for early enrollees compared with later enrollees.

For further information see—

1. van der Graaf R, Roes KC, van Delden JJ. Adaptive trials in clinical research: scientific and ethical issues to consider. JAMA. 2012 Jun 13;307(22):2379-80. PubMed PMID: 22692169.

2. Meurer WJ, Lewis RJ, Berry DA. Adaptive clinical trials: a partial remedy for the therapeutic misconception? JAMA. 2012 Jun 13;307(22):2377-8. PubMed PMID: 22692168.


Adjusting for Multiple Comparisons

Studies frequently report results for outcomes that were not the pre-specified primary or secondary outcome measures, sometimes because the finding was not anticipated, is unusual, or is judged to be important by the authors. How should these findings be assessed? A common belief is that if outcomes are not pre-specified, serious attention to them is not warranted. But is this the case? Kenneth J. Rothman wrote an article in 1990 that we feel is very helpful in such situations.[1]

  • Rothman points out that making statistical adjustments for multiple comparisons is similar to the problem of statistical significance testing, where the investigator uses the P-value to estimate the probability of a study demonstrating an effect size as great or greater than the one found, given that the null hypothesis is true (i.e., that there is truly no difference between the groups being studied), with alpha, frequently set at 5%, as the arbitrary cutoff for statistical significance. Obviously, if the risk of rejecting a truly null hypothesis is 5% for every hypothesis examined, then examining multiple hypotheses will generate a larger number of falsely positive statistically significant findings simply because more hypotheses are examined (see the short sketch after this list).
  • Adjusting for multiple comparisons is thought by many to be desirable because it will result in a smaller probability of erroneously rejecting the null hypothesis. Rothman argues this “pay for peeking” at more data by adjusting P-values with multiple comparisons is unnecessary and can be misleading. Adjusting for multiple comparisons might be paying a penalty for simply appropriately doing more comparisons, and there is no logical reason (or good evidence) for doing statistical adjusting. Rather, the burden is on those who advocate for multiple comparison adjustments to show there is a problem requiring a statistical fix.
  • Rothman’s conclusion: It is reasonable to consider each association on its own for the information it conveys; he believes there is no need to adjust P-values for multiple comparisons.
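
To make the multiplicity arithmetic concrete, here is a short sketch (our illustration): if each test of a truly null hypothesis is run at alpha=0.05, the chance of at least one falsely “significant” finding grows quickly with the number of independent comparisons.

alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} comparisons: P(at least one false positive) = {p_any_false_positive:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64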

Delfini Comment: Reading his paper is a bit difficult, but he makes some good points about our not really understanding what chance is all about, and that evaluating study outcomes for validity requires critical appraisal for the assessment of bias and other factors, as well as the use of statistics for evaluating chance effects.

Reference

1. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990 Jan;1(1):43-6. PubMed PMID: 2081237.

 


From Richard Lehman’s Blog on Clinical Trial Quality

From Richard Lehman’s blog, commenting on JAMA, 2 May 2012, Vol 307:

“Here the past and present custodians of this site look at the quality of the trials registered between 2007 and 2010. They ‘are dominated by small trials and contain significant heterogeneity in methodological approaches, including reported use of randomization, blinding, and data monitoring committees.’ In other words, these trials are never going to yield clinically dependable data; most of them are futile, and therefore by definition unethical. Something is terribly wrong with the system which governs clinical trials: it is failing to protect patients and failing to generate useful knowledge. Most of what it produces is not evidence, but rubbish. And with no system in place to compel full disclosure of the data, it is often impossible to tell one from the other.”
http://jama.ama-assn.org/content/307/17/1838.abstract

For more from Richard Lehman, go to Journal Watch at http://www.cebm.net/index.aspx?o=2320


Dr. Otis Brawley &amp; Overuse in Healthcare

Everyone will want to listen to Dr. Otis Brawley, Chief Medical Officer of the American Cancer Society, discuss why overuse in healthcare is costing us money and jobs and causing other harms. He talks like a real person, not like a professor, and is easy to listen to. Who is at fault for all of our healthcare woes? Watch it and you will see that we are all to blame. We need reliable information to make good choices, and very few people are getting it.

https://www.youtube.com/watch?v=LOdDS8rd4-8


“Move More” Packets for Cancer Patients

Macmillan Cancer Support is a London-based organization providing practical, medical and financial support to cancer patients in Britain. It is on the shortlist for the BMJ Group award for healthcare communication because of its “Move More” packet, a physical activity and cancer information initiative urging patients to become more active during and after cancer treatment. The impetus for this project is the ongoing problem of cancer patients still being told to rest, rather than keep active, during and after treatment. The packs, for patients and caregivers, outline the benefits of gentle activity and suggest ways to introduce activity into their lives. For example, one very popular inclusion was a packet of seeds, to encourage people to get outside into their gardens. People loved the seeds and looked forward to seeing the flowers bloom and the veggies grow. For more information see BMJ 2012;344:e2866 doi: 10.1136/bmj.e2866


Comparative Effectiveness Research (CER) Warning—Using Observational Studies to Draw Conclusions About Effectiveness May Give You The Wrong Answer
Case Study: Losartan

This past week we saw five CER studies—all observational. Can we trust the results of these studies? The following is a case study that helps answer that question:

Numerous clinical trials have reported decreased mortality in heart failure patients treated with angiotensin II receptor blockers (ARBs), but no head-to-head randomized trials have compared individual ARBs. In 2007, an administrative database study comparing various ARBs concluded that, “elderly patients with heart failure who were prescribed losartan had worse survival rates compared with those prescribed other commonly used ARBs.”[1] This study used hospital discharge data and information from physician claims and pharmacy databases to construct an observational study. The information on prescriptions included type of drug, dose category, frequency and duration. The authors used several methods to estimate adherence.

Unadjusted mortality for users of each ARB was calculated using Kaplan-Meier curves. To account for differences in follow-up and to control for differences in patient characteristics, a multivariable Cox proportional hazards model was used.
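
For readers curious what this analytic setup looks like in practice, here is a minimal sketch using the Python lifelines package. The data frame, column names and numbers are hypothetical, our illustration only, not the study authors’ code:

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical toy data: follow-up time, death indicator, drug group, one covariate
df = pd.DataFrame({
    "years":    [1.2, 0.4, 3.1, 2.0, 0.9, 4.5, 2.7, 1.8],
    "died":     [1,   1,   0,   1,   0,   0,   1,   0],
    "losartan": [1,   1,   0,   0,   1,   0,   1,   0],  # 1 = losartan, 0 = other ARB
    "age":      [78,  81,  74,  69,  85,  72,  80,  77],
})

# Unadjusted mortality by drug: Kaplan-Meier curves
kmf = KaplanMeierFitter()
for drug, group in df.groupby("losartan"):
    kmf.fit(group["years"], group["died"], label=f"losartan={drug}")
    print(kmf.survival_function_)

# Adjusted comparison: multivariable Cox proportional hazards model
# (small ridge penalty because this toy dataset is tiny)
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # adjusted hazard ratio for losartan, controlling for age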

The main outcome was time to all-cause death in patients with heart failure who were prescribed losartan, valsartan, irbesartan, candesartan or telmisartan. Losartan was the most frequently prescribed ARB (61% of patients). Other ARBs included irbesartan (14%), valsartan (13%), candesartan (10%) and telmisartan (2%). In this scenario, losartan loses. Using losartan as the reference, adjusted hazard ratios (HRs) for mortality among the 6876 patients were 0.63 (95% confidence interval [CI] 0.51 to 0.79) for patients who filled a prescription for valsartan, 0.65 (95% CI 0.53 to 0.79) for irbesartan, and 0.71 (95% CI 0.57 to 0.90) for candesartan. Compared with losartan, adjusted HR for patients prescribed telmisartan was 0.92 (95% CI 0.55 to 1.54). Being at or above the target dose was a predictor of survival (adjusted HR 0.72, 95% CI 0.63 to 0.83).

The authors of this observational study point out that head-to-head comparisons are unlikely to be undertaken in trial settings because of the enormous size and expense that such comparative trials of survival would entail. They state that their results represent the best available evidence that some ARBs may be more effective in increasing the survival rate than others and that their results should be useful to guide clinicians in their choice of drugs to treat patients with heart failure.

In 2011, a retrospective analysis of the Swedish Heart Failure Registry reported a survival benefit of candesartan over losartan in patients with heart failure (HF) at 1 and 5 years.[2] Survival by ARB agent was analyzed by Kaplan-Meier estimates and predictors of survival were determined by univariate and multivariate proportional hazard regression models, with and without adjustment for propensity scores and interactions. Stratified analyses and quantification of residual confounding analyses were also performed. In this scenario, losartan loses again. One-year survival was 90% (95% confidence interval [CI] 89% to 91%) for patients receiving candesartan and 83% (95% CI 81% to 84%) for patients receiving losartan, and 5-year survival was 61% (95% CI 54% to 68%) and 44% (95% CI 41% to 48%), respectively (log-rank P<.001). In multivariate analysis with adjustment for propensity scores, the hazard ratio for mortality for losartan compared with candesartan was 1.43 (95% CI 1.23 to 1.65, P<.001). The results persisted in stratified analyses.

But wait!

In March 2012, a nationwide Danish registry-based cohort study, linking individual-level information on patients aged 45 years and older, reported all-cause mortality in users of losartan and candesartan.[3] Cox proportional hazards regression was used to compare outcomes. In 4,397 users of losartan, 1,212 deaths occurred during 11,347 person-years of follow-up (unadjusted incidence rate [IR]/100 person-years, 10.7; 95% CI 10.1 to 11.3) compared with 330 deaths during 3,675 person-years among 2,082 users of candesartan (unadjusted IR/100 person-years, 9.0; 95% CI 8.1 to 10.0). Compared with candesartan, losartan was not associated with increased all-cause mortality (adjusted hazard ratio [HR] 1.10; 95% CI 0.9 to 1.25) or cardiovascular mortality (adjusted HR 1.14; 95% CI 0.96 to 1.36). Compared with high-dose candesartan (16-32 mg), low-dose (12.5 mg) and medium-dose (50 mg) losartan were associated with increased mortality (HR 2.79; 95% CI 2.19 to 3.55 and HR 1.39; 95% CI 1.11 to 1.73, respectively), but high-dose losartan (100 mg) was associated with similar risk (HR 0.71; 95% CI 0.50 to 1.00).

Another small cohort study found no difference in all-cause mortality between 4 different ARBs, including candesartan and losartan.[4] Can we tell who is the winner and who is the loser? It is impossible to know. Different results are likely to be due to different populations (different co-morbidities/prognostic variables), dosages of ARBs, co-interventions, analytic methods, etc. Svanström et al point out that, unlike the study by Eklind-Cervenka, they were able to include a wide range of comorbidities (including noncardiovascular disease), co-medications and health status markers in order to better account for baseline treatment group differences with respect to frailty and general health. As an alternative explanation they state that, given that their findings stem from observational data, their results could be due to unmeasured confounding because of frailty (e.g., patients with frailty and advanced heart failure tolerating only low doses of losartan and because of the severity of heart failure being more likely to die than patients who tolerate high candesartan doses). The higher average relative dose among candesartan users may have led to an overestimation of the overall comparative effectiveness of candesartan.

Our position is that, without randomization, investigators cannot be sure that their adjustments (e.g., use of propensity scoring and modeling) will eliminate selection bias. Adjustments can only account for the factors that can be measured, that have been measured and only as well as the instruments can measure them. Other problems in observational studies include drug dosages and other care experiences which cannot be reliably adjusted (performance and assessment bias).

Get ready for more observational studies claiming to show comparative differences between interventions. But remember, even the best observational studies may have only about a 20% chance of telling you the truth.[5]

References

1. Hudson M, Humphries K, Tu JV, Behlouli H, Sheppard R, Pilote L. Angiotensin II receptor blockers for the treatment of heart failure: a class effect? Pharmacotherapy. 2007 Apr;27(4):526-34. PubMed PMID: 17381379.

2. Eklind-Cervenka M, Benson L, Dahlström U, Edner M, Rosenqvist M, Lund LH. Association of candesartan vs losartan with all-cause mortality in patients with heart failure. JAMA. 2011 Jan 12;305(2):175-82. PubMed PMID: 21224459.

3. Svanström H, Pasternak B, Hviid A. Association of treatment with losartan vs candesartan and mortality among patients with heart failure. JAMA. 2012 Apr 11;307(14):1506-12. PubMed PMID: 22496265.

4. Desai RJ, Ashton CM, Deswal A, et al. Comparative effectiveness of individual angiotensin receptor blockers on risk of mortality in patients with chronic heart failure [published online ahead of print July 22, 2011]. Pharmacoepidemiol Drug Saf. doi: 10.1002/pds.2175.

5. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005 Aug;2(8):e124. PubMed PMID: 16060722.

 
