Archive for January, 2011

Do Drugs Save Money? A Plausibility Test

In a previous post, I commented on a study that examined the correlation between medication compliance and use of other healthcare services over three years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis.  The study found a strong association between greater medication adherence (defined as a medication possession ratio of 0.80 or higher) and lower utilization of other medical services, primarily hospitalizations.  Considering all healthcare costs, not just disease-related, they reported an ROI of 3:1 for medication adherence in dyslipidemia and 10:1 for hypertension.
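For readers less familiar with the adherence metric, the sketch below shows one common way a medication possession ratio can be computed from pharmacy claims; the function name and fill records are hypothetical, while the 0.80 cutoff is the adherence threshold used in the study.

```python
# Minimal sketch of a medication possession ratio (MPR) calculation.
# The fill records are hypothetical; the 0.80 cutoff is the adherence
# threshold used in the study discussed above.

def medication_possession_ratio(days_supplied, period_days):
    """MPR = total days of medication supplied / days in the measurement period."""
    return min(sum(days_supplied) / period_days, 1.0)  # cap at 100% possession

# Example: three 90-day fills and one 30-day fill over a 365-day year
mpr = medication_possession_ratio([90, 90, 90, 30], period_days=365)
print(f"MPR = {mpr:.2f}, adherent = {mpr >= 0.80}")  # MPR = 0.82, adherent = True
```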

While my previous post highlighted a key methodological concern about the study, one of the most powerful tools for quickly stress-testing a study’s findings is to conduct a plausibility test to see if the results match up with what other rigorous research would suggest.   For this plausibility comparison, I selected a meta-analysis of data from 90,056 individuals in 14 randomized trials of statins.  There are other meta-analyses and randomized controlled trials that I could have chosen, which would have led to similar conclusions.

The adherence study included all patients with dyslipidemia as evidenced by a diagnosis in the medical claims.  To be conservative, I selected patients from the meta-analysis who were taking statins for secondary prevention and looked at 5-year effectiveness, as shown in the table below.  Given the absolute risk reductions observed for hospitalizations for MI, revascularization, and stroke, the estimated number of hospital days avoided, averaged across all patients, was 0.33.  In contrast, the adherence study reported an average of 1.18 fewer hospital days for adherent patients versus non-adherent patients with dyslipidemia.

It is not plausible that the nearly 4-fold greater hospital reduction reported in the adherence study (1.18 versus 0.33 days) was due to greater medication compliance.  The implausibility is compounded by the fact that I included more severe patients, followed them for a much longer period of time, and examined the full effect of statins versus placebo rather than the effect of differences in adherence, all of which inflate my estimate of hospital days avoided.

Patients with Previous MI or CAD (5-year event rates)

| Event             | Statin | No Statin | Absolute Reduction | Length of Stay (days) | Hospital Days Avoided |
|-------------------|--------|-----------|--------------------|-----------------------|-----------------------|
| MI                | 12.00% | 15.00%    | 3.00%              | 5                     | 0.15                  |
| Revascularization | 11.00% | 13.70%    | 2.70%              | 5                     | 0.14                  |
| Stroke            | 3.20%  | 4.00%     | 0.80%              | 5                     | 0.04                  |
| Total hospital days avoided |  |  |  |  | 0.33 |
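A back-of-the-envelope version of this plausibility check is sketched below; the absolute risk reductions and the assumed 5-day length of stay come straight from the table above, and the 1.18-day figure is the effect claimed in the adherence study.

```python
# Back-of-the-envelope plausibility check using the figures in the table above.
# Hospital days avoided per patient = absolute risk reduction x length of stay,
# summed over the statin-preventable events.

events = {  # event: (absolute risk reduction over 5 years, assumed length of stay in days)
    "MI": (0.030, 5),
    "Revascularization": (0.027, 5),
    "Stroke": (0.008, 5),
}

days_avoided = sum(arr * los for arr, los in events.values())
claimed_effect = 1.18  # fewer hospital days reported for adherent patients in the study

print(f"Hospital days avoided per patient with statin therapy: {days_avoided:.2f}")  # ~0.33
print(f"Adherence study's claim is {claimed_effect / days_avoided:.1f} times larger")  # ~3.6x
```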

Plausibility tests are quick and powerful and can be used to test the ROI claims from disease management vendors, medication compliance programs, and many other healthcare services.  Recognizing the need for such tools and plan sponsors’ limited time to examine vendors’ savings claims, we designed plausibility calculators specific to disease management and value-based insurance to help plan sponsors discern fact from fiction.  They are free so the next time you are listening to a vendor’s sales pitch, you can do a quick reality check.


Off-Label Use in Medicare Part D Protected Classes

Headlines recently reported on a study finding significant overuse of atypical antipsychotics in the U.S. population.  Using the IMS Health National Disease and Therapeutic Index, the authors reported that 60% of prescriptions written for atypical antipsychotics had no FDA-approved indication, with 90% of off-label uses lacking moderate or good evidence of effectiveness.  Among seniors, the rate of off-label use was even higher at 75%.  Given the adverse event profile for atypical antipsychotics, the lack of an efficacy advantage in recent comparative effectiveness studies, and their considerably greater expense, the high rate of off-label use of atypical antipsychotics is a real concern.

While atypical antipsychotics were once an “untouchable” therapy class, this study provides further impetus for commercial plan sponsors to manage their use more actively to improve safety and cost-effectiveness.  These findings also stand in stark contrast to Medicare Part D’s continued inclusion of antipsychotics as a protected class, which requires plans to include “all or substantially all” drugs on the formulary.  Looking more broadly at the CNS-related protected classes, which also include antidepressants and anticonvulsants, the magnitude of off-label use is equally alarming.  Consider the following data points:

Antidepressants

  • 75% of use off-label in Medicaid in 2001 (Chen)
  • 61% of use off-label or unexplained in 2002 in MarketScan data (Larson)

Anticonvulsants

  • 84% of second generation use off-label in large MCO in 2005 (Patel)
  • 80% of use off-label in Medicaid in 2001 (Chen)

Antipsychotics

  • 60% of use off-label in VA in 2007 (Leslie)
  • 64% of use off-label in Medicaid in 2001 (Chen)
  • 86% of use off-label in nursing homes in 2004 (Kamble)

Ironically, these three protected classes under Part D have some of the highest rates of off-label use of all commonly used classes and have specifically been identified as classes of concern with a need for further research on the clinical and economic consequences of off-label use.

Yes, off-label use stimulates innovation when practiced on a smaller scale, but it also brings with it safety and cost-effectiveness concerns.  The protected classes under Part D were mandated to ensure that the most vulnerable Medicaid recipients transferring into Medicare would retain coverage for the same medications, but the time has come to revisit this policy.  Section 176 of the Medicare Improvements for Patients and Providers Act of 2008 appears to have opened the door for this reconsideration, as CMS establishes formal criteria for classes of concern.


Disease Management: Why Doesn’t It Save Money?

The honeymoon for disease management (DM) has clearly passed.  Health plans and employers are frustrated with the lack of value provided by their DM vendors, but they often struggle to understand why their DM program isn’t working as intended or what to do to correct it.  The combination of frustration and confusion has resulted in a wave of market experimentation fueled by vendors who claim to have discovered the elusive secret ingredient that makes DM work.

Published today in the American Journal of Managed Care is a paper I wrote to help plan sponsors better understand what is really known about savings from DM, why telephone-based DM does not save money, and what it takes to generate real savings.  This work is based on my market experience, an extensive review of the literature in DM and related disciplines, and a detailed assessment of what is known about cost-saving healthcare services and treatments.

The answer to why DM does not provide short-term savings lies partly in the many cost-effectiveness assessments of treatments for chronic disease conducted over the last 30 years.  Cohen and Neumann reported that less than 20 percent of preventive measures or treatments for chronic conditions are cost-saving, even over a 30-year time horizon.  Kahn found that aggressive implementation of nationally recommended medical activities would increase costs over a 30-year period for all activities except smoking cessation.  Looking more specifically at the individual components of DM programs, the evidence is equally compelling.  At the level of the individual program activity, cost savings have not been shown for the treatment of these chronic conditions, with heart failure being the exception (see the table below).  Accordingly, one should not expect cost savings for the program as a whole unless one believes DM can improve the outcomes of patients with chronic disease independently of the key clinical goals for each disease.

Cost-Effectiveness of Common Disease Management Activities/Goals

| Disease/Condition | Treatment/Service | Cost-Saving? |
|-------------------|-------------------|--------------|
| Diabetes | A1C < 7% | No |
|          | LDL cholesterol < 100 mg/dl | No |
|          | Blood pressure < 130/80 mmHg | No |
|          | Feet examination | No |
|          | Retinal screening | No |
|          | Microalbuminuria screening | No |
| CAD      | Antihyperlipidemics (LDL < 100 mg/dl) | No |
|          | Beta blocker | No |
|          | Lifestyle changes | No/Unknown |
| Asthma   | Inhaled anti-inflammatory use | No |
|          | Asthma education on symptom monitoring and/or trigger avoidance | Unknown |
| HF       | ACE inhibitor use | Yes |
|          | Beta blocker use | Yes |
|          | Structured remote monitoring (weight, blood pressure, etc.) | Yes |
|          | Daily exercise | No/Unknown |

While the cost-effectiveness literature does not bode well for the future of telephone-based DM as currently designed, it does suggest that better targeting of treatment activities and patients may provide an opportunity for cost savings in some disease states.  Although a targeted approach is both intuitive and supported by the evidence, it has two practical problems that will prevent it from being widely adopted in the marketplace.

  • First, a more targeted approach is not a revenue-optimizing model for DM vendors.  For example, only about 5% of asthmatics would be targeted for intervention if the criterion were real cost savings, hardly a desirable approach for DM vendors whose revenue model is based on volume due to their large fixed cost structure.
  • A second barrier to adoption is the marketability of the more realistic return-on-investment (ROI).  An employer is unlikely to select a vendor offering an ROI of 1.85:1 over a vendor with an inflated ROI (one that also covers a much larger group of patients since it is not targeted).  The same problem holds true for health plans—even though they often understand the methodological problems of the vendors’ ROIs, ultimately they too must market their program to employers, who sometimes believe all ROIs are created equal.

To learn more about proven strategies for short-term savings, take a look at the full published article.



Prescription Copay Waivers Plus Educational Mailings for Asthma Improve Compliance—Causality or Correlation?

In recent weeks, the market has been inundated with evaluations of value-based insurance design (VBID) so it’s no surprise that I was asked to comment on a recent study of VBID published in American Health and Drug Benefits.

For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications as well as three educational mailings were compared to asthma patients who chose not to participate in the program.  The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the treatment group compared with the control group (54% versus 44%).

The critical question surrounding this study’s validity is the comparability of the control and intervention groups.  Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than patients who do not enroll.  As evidence of the differences in motivation between the two groups, 99% of patients in the intervention group were enrolled in a traditional disease management program versus 25% in the control group.  A second sign of selection bias is that 74% of intervention patients were using an inhaled steroid prior to program enrollment versus 64% of control group subjects.  Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll.  That difference is likely explained in part by selection bias and in part by the copay waivers, but it is not possible to determine from the data how much each component contributed.

Viewing these results in light of other studies of VBID provides further evidence that the control group was flawed.  VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class.  Yet this study reported a 10 percentage point improvement, a 2.5- to 5-fold greater effect than previous studies.  Advocates will likely argue that this larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.

To the authors’ credit, they acknowledge the study’s key limitation and the fact that a stronger study would have compared this employer’s asthma patients (both enrolled and not enrolled in the program) to another employer’s population with comparable clinical and sociodemographic characteristics.  Finding a comparable group can be challenging, but it is not impossible, and it provides a much stronger study design for making a causal interpretation.  I would expect this more robust type of comparison to show a compliance improvement anywhere from zero to 4 percentage points, given what has been seen to date in other research.

The study also examined asthma-related pharmacy and medical costs.  After controlling for covariates and baseline differences in costs, the intervention group had a lower (but not statistically significant) adjusted monthly medical cost at 12 months of follow-up compared with the control group ($18 vs. $23, respectively; p = .067).  Asthma-related pharmacy costs were higher ($89 vs. $53, respectively; p < .001).  Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76).  However, the authors never reported total asthma-related costs as I just did, and the study abstract only mentions all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.
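For completeness, the arithmetic behind those totals can be checked directly; the monthly cost components below are the figures reported in the study.

```python
# Monthly asthma-related costs per member, as reported in the study.
intervention = {"medical": 18, "pharmacy": 89}
control = {"medical": 23, "pharmacy": 53}

total_intervention = sum(intervention.values())  # $107
total_control = sum(control.values())            # $76

print(f"Intervention: ${total_intervention}/month  Control: ${total_control}/month")
print(f"Intervention costs ${total_intervention - total_control} more per member per month")
```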

The use of non-participants as a control group, while known to be a weak research design, sometimes occurs because of its convenience and employer preference, but it simply precludes any causal conclusions.  The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or “spin” as others have called it.  It is a questionable practice to watch for, as I have seen it used in other pharmaceutical policy literature, such as step therapy evaluations, in the absence of any plausible explanation.


Do Drugs Save Money? Check The Assumptions

Econometrics has finally discovered the hidden truth about pharmaceuticals—they save money, a lot of money.  A recent study found that compliance with pharmaceuticals demonstrated an 8.4:1 ROI for congestive heart failure, 10.1:1 for hypertension, 6.7:1 for diabetes, and 3.1:1 for dyslipidemia.  This is certainly a surprising result given the contradictory findings from other credible research on the cost-effectiveness of pharmaceuticals.  However, it’s likely to capture some headlines given its intuitive appeal to many, so I took a closer look.

This study, published by researchers at one of the large PBMs, examined the correlation between medication compliance and use of other healthcare services over several years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis.  It is well known in the published literature that making causal conclusions from the cross-sectional examination of medication compliance against total medical spend is highly problematic due to the Healthy Adherer Effect, the tendency of people who are adherent to their medications to also engage in other healthy behaviors, such as exercising regularly and eating a healthy diet.  The reason is that it is difficult, if not impossible, to control for differences in patient behavior that cannot be measured in claims data.  In this particular study, the Healthy Adherer Effect was likely compounded by assigning a compliance score of zero to patients having a diagnosis but no prescription claim for the condition.

The study authors stated that they overcame the systematic selection bias created by the Healthy Adherer Effect through the use of an econometric technique called fixed-effect regression—basically an alternative form of the more commonly used ordinary least squares (OLS) regression.  As is often the case with econometric models, to believe in the superiority of fixed-effect modeling, you have to accept a key underlying assumption: that confounders, such as healthy eating and exercise, do not vary over time in conjunction with medication compliance.  As the authors acknowledge in the references, “fixed-effects modeling does not allow for the control of confounders that vary over time. Thus, for example, if patients who become adherent simultaneously start exercising regularly (assuming that both of these behavioral changes reduce health services use and spending), the estimated impact of adherence would remain biased.”  The authors make no case for why this assumption would hold true, and it hardly seems plausible given what is already known on the subject.

While such fixed-effects estimators may be an improvement on basic cross-sectional methods, they are still quite limited when it comes to uncovering a true causal effect when the confounders vary over time; and, like OLS, they will tend to overestimate the causal effect of pharmaceuticals on medical spending in the presence of the Healthy Adherer Effect.  Some evidence to that effect lies in the manuscript’s appendix—for most of the conditions, the difference between the estimate from the OLS regression, which the authors acknowledge does not address the selection bias problem, and the fixed-effect model was minimal (e.g., a marginal effects estimate for inpatient days in heart failure of 5.731 for OLS vs. 5.715 for fixed effects).
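To make the argument concrete, here is a small, entirely hypothetical simulation of the scenario the authors concede: an unmeasured behavior (exercise, in this example) that changes over time along with adherence.  All effect sizes are invented for illustration; the point is simply that demeaning within patients (the fixed-effects “within” transformation) removes stable patient traits but not the time-varying confounder, so both estimators overstate the savings from adherence.

```python
# Hypothetical simulation: a time-varying confounder (exercise) that moves with
# adherence biases the fixed-effects estimate as well as OLS.  All numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_periods = 2000, 6

patient_effect = rng.normal(0, 2, n_patients)[:, None]        # stable, unobserved patient traits
exercise = rng.normal(0, 1, (n_patients, n_periods))          # unmeasured and time-varying
adherence = (exercise + rng.normal(0, 1, (n_patients, n_periods)) > 0).astype(float)

true_adherence_effect = -0.5   # modest true reduction in spending from adherence
exercise_effect = -2.0         # larger reduction from the unmeasured healthy behavior
spending = (patient_effect
            + true_adherence_effect * adherence
            + exercise_effect * exercise
            + rng.normal(0, 1, (n_patients, n_periods)))

def slope(y, d):
    """Univariate regression slope of y on d."""
    d, y = d.ravel(), y.ravel()
    return np.cov(d, y, bias=True)[0, 1] / np.var(d)

# Pooled OLS: ignores both patient traits and the confounder.
ols_estimate = slope(spending, adherence)

# Fixed effects: demeaning within patient removes stable traits only.
fe_estimate = slope(spending - spending.mean(axis=1, keepdims=True),
                    adherence - adherence.mean(axis=1, keepdims=True))

print(f"True adherence effect:  {true_adherence_effect}")
print(f"Pooled OLS estimate:    {ols_estimate:.2f}")   # much more negative than the truth
print(f"Fixed-effects estimate: {fe_estimate:.2f}")    # still much more negative than the truth
```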

However, setting aside the more technical discussion, one of the most informative and practical ways to test the strength of these results is to conduct a plausibility test of the ROI, which basically means to compare these findings against credible randomized controlled trials of medication efficacy.  I’ll show you those results in an upcoming post.  Of course, none of this discussion is meant to minimize the importance of improved medication compliance.  It continues to be a critically important gap to address but employers and other plan sponsors should be provided realistic expectations about the economic value of improved compliance.
