How Do We Heal Medicine: TED Talk by Atul Gawande


Mike and I liked this 20-minute talk. There are lessons here about how medical practice has changed, about problem-solving, systems thinking, implementation and about cowboys! Complexity requires group success, Gawande tells us, and making systems work is the great task of our generation (although we would amend that and say it is the work of us all).

http://www.ted.com/talks/atul_gawande_how_do_we_heal_medicine.html



Comparative Effectiveness Research (CER) Warning—Using Observational Studies to Draw Conclusions About Effectiveness May Give You The Wrong Answer
Case Study: Losartan

This past week we saw five CER studies—all observational. Can we trust the results of these studies? The following is a case study that helps answer that question:

Numerous clinical trials have reported decreased mortality in heart failure patients treated with ARBs, but no head-to-head randomized trials have compared individual ARBs. In 2007, an administrative database study comparing various ARBs concluded that, “elderly patients with heart failure who were prescribed losartan had worse survival rates compared with those prescribed other commonly used ARBs.”[1] This study used hospital discharge data and information from physician claims and pharmacy databases to construct an observational study. The information on prescriptions included type of drug, dose category, frequency and duration. The authors used several methods to estimate adherence.

Unadjusted mortality for users of each ARB was calculated by using Kaplan-Meier curves. To account for differences in follow-up and to control for differences among patient characteristics, a multivariable Cox proportional hazards model was used.
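The unadjusted step is simple to illustrate. Below is a minimal pure-Python sketch of the Kaplan-Meier product-limit estimator using made-up follow-up data (all numbers hypothetical, not from the study); the study's adjusted comparisons layered a multivariable Cox model on top of estimates like these.

```python
# Minimal Kaplan-Meier sketch. Times are hypothetical follow-up months;
# event=1 means death observed, event=0 means censored.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct time where a death occurred."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1 - deaths / at_risk      # product-limit step
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # remove deaths and censored
    return curve

times = [2, 3, 3, 5, 8, 8, 12, 12]   # hypothetical follow-up times
events = [1, 1, 0, 1, 1, 1, 0, 0]    # 1 = death, 0 = censored
print([(t, round(s, 3)) for t, s in kaplan_meier(times, events)])
```

Censored patients leave the risk set without contributing a death, which is exactly the bookkeeping that naive "percent dead" calculations get wrong when follow-up differs between drugs.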

The main outcome was time to all-cause death in patients with heart failure who were prescribed losartan, valsartan, irbesartan, candesartan or telmisartan. Losartan was the most frequently prescribed ARB (61% of patients). Other ARBs included irbesartan (14%), valsartan (13%), candesartan (10%) and telmisartan (2%). In this scenario, losartan loses. Using losartan as the reference, adjusted hazard ratios (HRs) for mortality among the 6876 patients were 0.63 (95% confidence interval [CI] 0.51 to 0.79) for patients who filled a prescription for valsartan, 0.65 (95% CI 0.53 to 0.79) for irbesartan, and 0.71 (95% CI 0.57 to 0.90) for candesartan. Compared with losartan, adjusted HR for patients prescribed telmisartan was 0.92 (95% CI 0.55 to 1.54). Being at or above the target dose was a predictor of survival (adjusted HR 0.72, 95% CI 0.63 to 0.83).
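When reading results like these, a quick plausibility check is possible: the standard error of a log hazard ratio can be back-calculated from its confidence interval, assuming the interval is symmetric on the log scale (a common construction, though an assumption on our part here). Using the valsartan-vs-losartan figures above:

```python
import math

# Back-calculate the log-scale standard error and a two-sided z-test from a
# reported HR and 95% CI (figures from the text: HR 0.63, 95% CI 0.51 to 0.79).
# Assumes the CI is symmetric on the log scale.
def hr_z_and_p(hr, lo, hi):
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log(HR)
    z = math.log(hr) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approx.
    return z, p

z, p = hr_z_and_p(0.63, 0.51, 0.79)
print(round(z, 1), p < 0.001)  # -4.1 True
```

The arithmetic confirms the reported difference is far from chance under the model — but, as the rest of this post argues, statistical precision says nothing about whether the model removed the selection bias.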

The authors of this observational study point out that head-to-head comparisons are unlikely to be undertaken in trial settings because of the enormous size and expense that such comparative trials of survival would entail. They state that their results represent the best available evidence that some ARBs may be more effective in increasing the survival rate than others and that their results should be useful to guide clinicians in their choice of drugs to treat patients with heart failure.

In 2011, a retrospective analysis of the Swedish Heart Failure Registry reported a survival benefit of candesartan over losartan in patients with heart failure (HF) at 1 and 5 years.[2] Survival by ARB agent was analyzed by Kaplan-Meier estimates and predictors of survival were determined by univariate and multivariate proportional hazard regression models, with and without adjustment for propensity scores and interactions. Stratified analyses and quantification of residual confounding analyses were also performed. In this scenario, losartan loses again. One-year survival was 90% (95% confidence interval [CI] 89% to 91%) for patients receiving candesartan and 83% (95% CI 81% to 84%) for patients receiving losartan, and 5-year survival was 61% (95% CI 54% to 68%) and 44% (95% CI 41% to 48%), respectively (log-rank P<.001). In multivariate analysis with adjustment for propensity scores, the hazard ratio for mortality for losartan compared with candesartan was 1.43 (95% CI 1.23 to 1.65, P<.001). The results persisted in stratified analyses.

But wait!

In March 2012, a nationwide Danish registry–based cohort study, linking individual-level information on patients aged 45 years and older, reported all-cause mortality in users of losartan and candesartan.[3] Cox proportional hazards regression was used to compare outcomes. In 4,397 users of losartan, 1,212 deaths occurred during 11,347 person-years of follow-up (unadjusted incidence rate [IR]/100 person-years, 10.7; 95% CI 10.1 to 11.3) compared with 330 deaths during 3,675 person-years among 2,082 users of candesartan (unadjusted IR/100 person-years, 9.0; 95% CI 8.1 to 10.0). Compared with candesartan, losartan was not associated with increased all-cause mortality (adjusted hazard ratio [HR] 1.10; 95% CI 0.9 to 1.25) or cardiovascular mortality (adjusted HR 1.14; 95% CI 0.96 to 1.36). Compared with high-dose candesartan (16-32 mg), low-dose (12.5 mg) and medium-dose (50 mg) losartan were associated with increased mortality (HR 2.79; 95% CI 2.19 to 3.55 and HR 1.39; 95% CI 1.11 to 1.73, respectively), but high-dose losartan (100 mg) carried similar risk (HR 0.71; 95% CI 0.50 to 1.00).
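The unadjusted incidence rates quoted above can be reproduced directly from the reported counts — deaths per 100 person-years is just deaths divided by person-years, times 100:

```python
# Reproduce the Danish study's unadjusted incidence rates from its reported counts.
def ir_per_100py(deaths, person_years):
    return 100 * deaths / person_years

print(round(ir_per_100py(1212, 11347), 1))  # losartan: 10.7
print(round(ir_per_100py(330, 3675), 1))    # candesartan: 9.0
```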

Another small cohort study found no difference in all-cause mortality between 4 different ARBs, including candesartan and losartan.[4] Can we tell who is the winner and who is the loser? It is impossible to know. Different results are likely due to different populations (different co-morbidities/prognostic variables), dosages of ARBs, co-interventions, analytic methods, etc. Svanström et al point out that, unlike the study by Eklind-Cervenka, they were able to include a wide range of comorbidities (including noncardiovascular disease), co-medications and health status markers in order to better account for baseline treatment group differences in frailty and general health. As an alternative explanation they note that, because their findings stem from observational data, the results could reflect unmeasured confounding by frailty (e.g., frail patients with advanced heart failure may tolerate only low doses of losartan and, because of the severity of their disease, be more likely to die than patients who tolerate high doses of candesartan). The higher average relative dose among candesartan users may have led to an overestimation of the overall comparative effectiveness of candesartan.

Our position is that, without randomization, investigators cannot be sure that their adjustments (e.g., use of propensity scoring and modeling) will eliminate selection bias. Adjustments can account only for factors that can be measured, that have been measured, and only as well as the instruments can measure them. Other problems in observational studies include differences in drug dosages and other aspects of care that cannot be reliably adjusted for (performance and assessment bias).

Get ready for more observational studies claiming to show comparative differences between interventions. But remember, even the best observational studies may have only about a 20% chance of telling you the truth.[5]

References

1. Hudson M, Humphries K, Tu JV, Behlouli H, Sheppard R, Pilote L. Angiotensin II receptor blockers for the treatment of heart failure: a class effect? Pharmacotherapy. 2007 Apr;27(4):526-34. PubMed PMID: 17381379.

2. Eklind-Cervenka M, Benson L, Dahlström U, Edner M, Rosenqvist M, Lund LH. Association of candesartan vs losartan with all-cause mortality in patients with heart failure. JAMA. 2011 Jan 12;305(2):175-82. PubMed PMID: 21224459.

3. Svanström H, Pasternak B, Hviid A. Association of treatment with losartan vs candesartan and mortality among patients with heart failure. JAMA. 2012 Apr 11;307(14):1506-12. PubMed PMID: 22496265.

4. Desai RJ, Ashton CM, Deswal A, et al. Comparative effectiveness of individual angiotensin receptor blockers on risk of mortality in patients with chronic heart failure [published online ahead of print July 22, 2011]. Pharmacoepidemiol Drug Saf. doi: 10.1002/pds.2175.

5. Ioannidis JPA. Why Most Published Research Findings are False. PLoS Med 2005; 2(8):696-701. PMID: 16060722


Appendicitis 1889 to 2012: What, No Surgery?


All medical students learn about McBurney’s point—that’s the spot, named for McBurney, in the right lower quadrant of the abdomen where classical appendicitis pain finally localizes.[1] If the patient’s history fits the classic history of appendicitis, with vague abdominal pain eventually localizing to McBurney’s point, the norm has been—at least in the U.S.—to take the appendix out. However, as pointed out in a new systematic review done as a meta-analysis, starting in the late 1950s there were reports of success in treating appendicitis with conservative therapy (antibiotics), with good outcomes and without resorting to appendectomy.[2]

This systematic review examines our traditions and the lack of conclusive evidence about best practices in managing appendicitis, and suggests that, for many patients, avoiding appendectomy may be a reasonable option. The meta-analysis, of four randomized controlled trials selected from 59 eligible trials with a total of 900 patients, reported a 31% relative risk reduction for complications of appendicitis (perforation, peritonitis, wound infection) with antibiotic treatment compared with appendectomy (risk ratio 0.69; 95% confidence interval 0.54 to 0.89; I2=0%; P=0.004). There were no significant differences between antibiotic treatment and appendectomy for length of hospital stay, efficacy of treatment, or risk of complicated appendicitis.
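The relationship between the quoted risk ratio and the 31% relative risk reduction is simple arithmetic. The 2×2 counts below are hypothetical, chosen only to yield a risk ratio near the reported 0.69:

```python
# Risk ratio and relative risk reduction from a 2x2 table. The counts here are
# hypothetical, not the review's data; they merely illustrate the arithmetic.
def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    return (events_tx / n_tx) / (events_ctrl / n_ctrl)

rr = risk_ratio(54, 450, 78, 450)   # hypothetical complication counts
print(round(rr, 2), f"RRR = {1 - rr:.0%}")  # 0.69 RRR = 31%
```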

The biggest problem in this meta-analysis is that the results are based on trials with significant threats to validity. Randomization sequence was computer generated in one trial, by “external randomization” in one trial, by date of birth in one trial and unclear in one trial. Concealment of allocation was by sealed envelopes in two trials and not reported in the other two trials. All trials were unblinded. Withdrawal rates are unclear. Therefore, it is uncertain how much the results of this meta-analysis may have been distorted by bias. In addition, as pointed out by an editorialist, in patients who have persistent problems despite antibiotic treatment, delayed appendectomy might be necessary.[3] Delayed appendectomy has been associated with a high complication rate. Also, if a patient develops an inflammatory phlegmon—a palpable mass at clinical examination or an inflammatory mass or abscess at imaging or at surgical exploration—appendectomy sometimes has to be converted to an ileocecal resection—a much more involved operation. Another important issue with antibiotic treatment is the chance of recurrence. The current meta-analysis found a 20% chance of recurrence of appendicitis after conservative treatment within one year. Of the recurrences, 20% of patients presented with a perforated or gangrenous appendicitis. The editorialist questions whether a failure rate of 20% within one year is acceptable.

These four trials and this meta-analysis suggest that antibiotics may be safe for some patients with uncomplicated appendicitis. If this option is considered, we believe detailed information about the uncertainties regarding benefits and risks should be made known to patients. Details are available at http://www.bmj.com/content/344/bmj.e2156

References

1. Thomas CG Jr. Experiences with Early Operative Interference in Cases of Disease of the Vermiform Appendix by Charles McBurney, M.D., Visiting Surgeon to the Roosevelt Hospital, New York City. Rev Surg. 1969 May-Jun;26(3):153-66. PubMed PMID: 4893208.

2. Varadhan KK, Neal KR, Lobo DN. Safety and efficacy of antibiotics compared with appendicectomy for treatment of uncomplicated acute appendicitis: meta-analysis of randomised controlled trials. BMJ. 2012 Apr 5;344:e2156. doi: 10.1136/bmj.e2156. PubMed PMID: 22491789.

3. BMJ 2012;344:e2546 (Published 5 April 2012).


Empirical Evidence of Attrition Bias in Clinical Trials


The commentary, “Empirical evidence of attrition bias in clinical trials,” by Jüni et al[1] is a nice review of what has transpired since 1970, when attrition bias received attention in a critical appraisal of a non-valid trial of extracranial bypass surgery for transient ischemic attack.[2] At about the same time, Bradford Hill coined the phrase “intention-to-treat.” He wrote that excluding patient data after “admission to the treated or control group” may affect the validity of clinical trials and that “unless the losses are very few and therefore unimportant, we may inevitably have to keep such patients in the comparison and thus measure the ‘intention-to-treat’ in a given way, rather than the actual treatment.”[3] The next major development was meta-epidemiological research, which assessed trials for associations between methodological quality and effect size and found conflicting results regarding the effect of attrition bias on effect size. However, as the commentary points out, the studies assessing attrition bias were flawed.[4,5,6]

Finally, a breakthrough in understanding the distorting effect of losing subjects after randomization came from two authors evaluating attrition bias in oncology trials.[7] The investigators compared the results of their own analyses, which utilized individual patient data and invariably followed the intention-to-treat principle, with those done by the original investigators, which often excluded some or many patients. The results showed that pooled analyses of trials with patient exclusions reported more beneficial effects of the experimental treatment than analyses based on all or most patients who had been randomized. Tierney and Stewart showed that, in most meta-analyses they reviewed based only on “included” patients, the results favored the research treatment (P = 0.03). The commentary gives deserved credit to Tierney and Stewart for their tremendous contribution to critical appraisal and is a very nice, short read.
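A toy example shows why exclusions after randomization matter. Suppose (hypothetically) that dropouts in the treatment arm are sicker patients who would have failed on treatment; excluding them from the denominator flatters the treatment:

```python
# Toy intention-to-treat illustration. All numbers are hypothetical.
# 100 patients randomized to treatment, 60 successes observed,
# 20 dropped out after randomization (assumed treatment failures).
randomized, successes, dropouts = 100, 60, 20

itt = successes / randomized                         # ITT: everyone randomized counts
per_protocol = successes / (randomized - dropouts)   # excludes the dropouts

print(itt, per_protocol)  # 0.6 0.75 -- the exclusion inflates apparent efficacy
```

This is exactly the pattern Tierney and Stewart documented empirically: analyses based only on "included" patients tended to favor the research treatment.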

References

1. Jüni P, Egger M. Commentary: Empirical evidence of attrition bias in clinical trials. Int J Epidemiol. 2005 Feb;34(1):87-8. Epub 2005 Jan 13. Erratum in: Int J Epidemiol. 2006 Dec;35(6):1595. PubMed PMID: 15649954.

2. Fields WS, Maslenikov V, Meyer JS, Hass WK, Remington RD, Macdonald M. Joint study of extracranial arterial occlusion. V. Progress report of prognosis following surgery or nonsurgical treatment for transient cerebral ischemic attacks. PubMed PMID: 5467158.

3. Bradford Hill A. Principles of Medical Statistics, 9th edn. London: The Lancet Limited, 1971.

4. Schulz KF, Chalmers I, Hayes RJ, Altman D. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–12. PMID: 7823387

5. Kjaergard LL, Villumsen J, Gluud C. Reported methodological quality and discrepancies between large and small randomized trials in metaanalyses. Ann Intern Med 2001;135:982–89. PMID 11730399

6. Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA. 2002 Jun 12;287(22):2973-82. PubMed PMID: 12052127.

7. Tierney JF, Stewart LA. Investigating patient exclusion bias in meta-analysis. Int J Epidemiol. 2005 Feb;34(1):79-87. Epub 2004 Nov 23. PubMed PMID: 15561753.


Unnecessary or Harmful Tests and Treatments You May Wish To Avoid


The Dartmouth Atlas of Health Care and others have estimated that at least 30% of US healthcare spending is unnecessary. On April 4, 2012, the American Board of Internal Medicine, along with nine prominent physician groups, released lists of 45 common tests and treatments they say are often unnecessary and may even harm patients. For example, the American Board of Family Practice recommended against imaging for low back pain unless red flags are present. Other items on the lists included avoiding antibiotics for most acute mild to moderate sinusitis symptoms, screening EKGs (or other cardiac screenings) in people without symptoms, DEXA screening for osteoporosis in women younger than 65, and many more. For details go to Kaiser Health News—http://www.kaiserhealthnews.org/Stories/2012/April/04/physicians-unnecessary-treatments.aspx


Have You Seen PRISMA?


Systematic reviews and meta-analyses are needed to synthesize evidence regarding clinical questions. Unfortunately, the quality of these reviews varies greatly. As part of a movement to improve the transparency and reporting of important details in meta-analyses of randomized controlled trials (RCTs), the QUOROM (quality of reporting of meta-analysis) statement was developed in 1999.[1] In 2009, that guidance was updated and expanded by a group of 29 review authors, methodologists, clinicians, medical editors, and consumers, and the name was changed to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).[2] Although some authors have used PRISMA to improve the reporting of systematic reviews, thereby helping critical appraisers assess the benefits and harms of a healthcare intervention, we (and others) continue to see systematic reviews that include RCTs at high risk of bias in their analyses. Critical appraisers might want to be aware of the PRISMA statement.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2714672/?tool=pubmed

1. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896-1900. PMID: 10584742.

2. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009 Jul 21;339:b2700. doi: 10.1136/bmj.b2700. PubMed PMID: 19622552.


A Caution When Evaluating Systematic Reviews and Meta-analyses


We would like to draw critical appraisers’ attention to an infrequent but important problem encountered in some systematic reviews—the accuracy of standardized mean differences. Meta-analysis of trials that have used different scales to record outcomes of a similar nature requires data transformation to a uniform scale, the standardized mean difference (SMD). Gøtzsche and colleagues, in a review of 27 meta-analyses utilizing SMDs, found that a high proportion of meta-analyses based on SMDs contained meaningful errors in data extraction and calculation of point estimates.[1] Gøtzsche et al. audited two trials from each review and found that, in 17 meta-analyses (63%), there were errors for at least 1 of the 2 trials examined. We recommend that critical appraisers be aware of this issue.
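For reference, the SMD under audit here is, in its simplest form, the difference in means divided by a pooled standard deviation (Cohen's d). A minimal sketch with hypothetical numbers:

```python
import math

# Standardized mean difference (Cohen's d with a pooled SD).
# All inputs below are hypothetical, for illustration only.
def smd(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Two hypothetical trials measuring pain on different scales yield
# comparable effect sizes once standardized:
print(round(smd(42.0, 10.0, 50, 47.0, 10.0, 50), 2))  # -0.5 (0-100 scale)
print(round(smd(3.2, 1.5, 40, 4.0, 1.7, 40), 2))      # (0-10 scale)
```

Small slips at any step (wrong SD, wrong n, sign errors) propagate directly into the pooled estimate, which is presumably why Gøtzsche et al. found errors so often.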

1. Gøtzsche PC, Hróbjartsson A, Maric K, Tendal B. Data extraction errors in meta-analyses that use standardized mean differences. JAMA. 2007 Jul 25;298(4):430-7. Erratum in: JAMA. 2007 Nov 21;298(19):2264. PubMed PMID:17652297.
