Posts Tagged Medication compliance

When Are Prescription Copay Waivers a Good Idea?

I recently received a question asking for which drug class or population/disease state, if any, I would recommend a zero-dollar member cost share.  This is a great question, and there are some situations in which I would recommend a zero-dollar copay.

First, let me briefly review why I do not recommend $0 copays as a standard part of the benefit design, even for classes such as diabetes and cardiovascular disease.

  • The primary reason is that most patients do not stop taking medication because of cost, particularly in a commercially insured population (see figure for common reasons for non-adherence).  Accordingly, copays are being waived without any possibility of benefit for the vast majority of patients.  This is evidenced by the small improvements in compliance seen after implementing a copay waiver program, averaging 2-4 percentage points and representing only 1-2 weeks of additional therapy (a quick back-of-the-envelope check follows this list).  Targeting copay waivers to patients who are at high risk for adverse events and for whom cost is a true barrier would be an ideal approach, but it raises HR and equity issues that are not easily resolved.
  • The second consideration is the potential for fraud.  I have heard from frustrated employers who implemented zero-dollar copays for chronic conditions only to find that employees began sharing medications with family and friends who have the same condition.  Quantity limits can partially, but not entirely, control this problem.  I have not studied this phenomenon personally, so I cannot speak to its magnitude.
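To put the adherence numbers above in perspective, here is a minimal back-of-the-envelope sketch, using round hypothetical figures rather than data from any specific program, of what a 2-4 percentage point MPR improvement means in days of therapy:

```python
# Back-of-the-envelope check with hypothetical round numbers: an MPR gain of
# 2-4 percentage points over a one-year measurement period works out to
# roughly 1-2 weeks of additional therapy per member.
DAYS_IN_PERIOD = 365

for mpr_gain in (0.02, 0.04):
    extra_days = mpr_gain * DAYS_IN_PERIOD
    print(f"MPR gain of {mpr_gain:.0%} ~ {extra_days:.0f} extra days of therapy "
          f"({extra_days / 7:.1f} weeks)")
```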

I would, however, recommend a zero-dollar copay program under certain conditions.  First, zero-dollar generic copays are a great tool for promoting lower-cost therapeutic alternatives among patients currently using brand medications.  I would consider them for therapy classes in which you have step therapy in place, as the two programs are complementary: step therapy promotes generic use among new users, and the $0 generic copay is a carrot strategy for promoting generics among current brand users.  The $0 generic copay helps grab the member’s attention and provides an extra incentive for making the switch.  I would not make the copay waiver indefinite, however; six months of free generics is sufficient.  Based on my experience and rigorous evaluations, these programs have a solid ROI, and the patient saves money too, of course.

Second, a population for which I MIGHT consider a $0 generic copay is seniors taking hypertension, cholesterol, and other cardiovascular medications.  The rate of adverse cardiovascular events (absent treatment) is much higher in the senior population than in the commercial population; so IF price elasticity is at least as high as we see with commercial members, there is the potential to materially reduce the rate of adverse cardiovascular events and achieve net savings from reduced hospitalizations.  The key to this decision is determining the actual price elasticity of demand within your senior population.  Because little contemporary public data is available on price elasticity among seniors, individual vendors will have to assess the elasticity within their own data.  Once identified, a simple analytic tool, like the VBID calculator, can be used to estimate the potential reduction in hospitalizations and medical spend.  Of course, implementing copay waivers in the Medicare Part D program is a greater administrative challenge than in a commercial or retiree plan, which is a relevant consideration.
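To make the arithmetic concrete, here is a minimal sketch of the kind of calculation such a tool performs.  All inputs below are hypothetical placeholders, not outputs of the VBID calculator itself; substitute the adherence, elasticity, event-rate, and cost figures estimated from your own senior population.

```python
# Rough sketch of the arithmetic a VBID-style calculator performs.
# Every input is a hypothetical placeholder.
members            = 10_000
baseline_adherence = 0.55    # share of members adherent before the waiver
copay_change_pct   = -1.00   # copay cut from its current level to $0 (-100%)
price_elasticity   = -0.04   # % change in adherence per 1% change in copay
event_rate_gap     = 0.03    # extra annual CV hospitalization risk, non-adherent vs. adherent
cost_per_admission = 15_000  # average cost of a CV hospitalization ($)

adherence_gain  = baseline_adherence * price_elasticity * copay_change_pct
newly_adherent  = members * adherence_gain
events_avoided  = newly_adherent * event_rate_gap
medical_savings = events_avoided * cost_per_admission

print(f"Adherence gain: {adherence_gain:.1%}")
print(f"Hospitalizations avoided: {events_avoided:.0f} per year")
print(f"Medical savings: ${medical_savings:,.0f} per year")
```

Whether the result is net savings then depends on comparing the medical savings against the cost of waiving the copays, which is why the elasticity estimate is the pivotal input.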

A third population for which I would PILOT a $0 generic program is patients at HIGH risk for adverse cardiovascular events who have NOT initiated pharmacotherapy. The classic example is the patient with a recent myocardial infarction who has not started a statin and/or beta-blocker.  While cost is not likely to be the reason for non-initiation for most patients, it might be a useful short-term incentive, when combined with the right intervention, for encouraging use.  I would pilot first because the program simply may not be effective, and there is a risk that a zero price could actually deter use if patients read it as a signal of low quality.  Another growing challenge is that many patients will appear to be non-users because of $4 generic programs, which do not always result in a claim being submitted.

For those of you looking for more information on copay waivers, see the recent Fairman editorial in JMCP and a paper Steve Melnick and I published last year on the potential financial savings from copay waivers.


Persistency Is The Best Measure of Medication Adherence

In an earlier post, I discussed that while medication possession ratios (MPRs) tend to dominate the marketplace reporting tools for medication adherence programs, MPR is not, in fact, the best measure of medication adherence.  The reason is that the results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist.  Most importantly, MPRs allow for little to no clinical interpretation.

Of all the measurement options that exist, I find persistency to be the most useful.  Persistency is a dichotomous yes/no measure indicating whether a patient’s length of therapy meets or exceeds a certain threshold.  For example, it is common for published studies to report the percent of newly treated patients who were persistent with their medication one year after starting therapy.  There are two key reasons why I prefer persistency over other adherence measures:

1.  It measures the most significant medication adherence problem, i.e., whether patients stop taking their medication altogether.  While small gaps in therapy are not desirable, they are far less prevalent and less clinically impactful than discontinuing treatment altogether.  Studies have shown that between 40 and 60% of newly treated patients have stopped taking their chronic medications after one year.  The figure below, from a BMJ study of antihypertensive users, reports both persistency over time and non-adherence due to poor execution of the dosing regimen.  As the authors state, “non-execution of the dosing regimen created a shortfall in drug intake that is an order of magnitude smaller than the shortfall created by early discontinuation/short persistence.”  Discontinuation has to be the top priority, and organizations will manage what they measure, so measure persistency.

[Figure: persistency over time and non-adherence due to poor execution of the dosing regimen, from the BMJ study of antihypertensive users]

2.  The clinical interpretation is unequivocal.  As I discussed in the previous post, improvements in MPR are very difficult to interpret in terms of their clinical impact because of the lack of data on the relationship between differences in MPR, or gaps in therapy, and clinical outcomes.  However, when a patient discontinues a chronic medication altogether, the clinical impact is far clearer (as long as the patient was an appropriate candidate for the medication to begin with—a topic for another day).  The negative clinical and economic impact of discontinuation can be forecasted for a population based on published randomized trials.

Another advantage of a persistency measure is that it is less susceptible to errors in the days supply figures provided by the pharmacist, because the analysis typically allows a gap of 30 or more days in therapy before labeling someone as non-persistent.  That said, persistency is still vulnerable, albeit less than MPR, to differences in methodological approaches.  The two key decisions in a persistency analysis are 1) how long to follow patients; and 2) what gap in therapy will be considered non-persistence (e.g., a 30-day, 60-day, or 90-day gap).  Obviously, the longer you follow patients, the lower the persistency rate will be; and the larger the gap in therapy required to be labeled non-persistent, the higher the persistency rate will be.  Accordingly, if comparing vendors, make sure they use the same follow-up length and gap criteria.  In addition, persistency rates for new versus ongoing users look dramatically different, so you should always ask that the two groups be reported separately to prevent changes in the mix of members from artificially influencing your results.
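To illustrate how those two decisions enter the calculation, here is a minimal sketch of a persistency flag computed from one patient’s pharmacy claims.  The function name, fill dates, and thresholds are hypothetical, not any vendor’s actual algorithm.

```python
from datetime import date, timedelta

def is_persistent(fills, follow_up_days=365, allowed_gap_days=60):
    """fills: list of (fill_date, days_supply) tuples for one patient's chronic
    medication, beginning with the index fill. Returns True if the patient has
    no gap in therapy longer than `allowed_gap_days` before the end of the
    follow-up window."""
    fills = sorted(fills)
    index_date = fills[0][0]
    end_of_follow_up = index_date + timedelta(days=follow_up_days)
    covered_through = index_date
    for fill_date, days_supply in fills:
        if fill_date > covered_through + timedelta(days=allowed_gap_days):
            return False                                  # gap exceeded the threshold
        covered_through = max(covered_through, fill_date + timedelta(days=days_supply))
        if covered_through >= end_of_follow_up:
            return True                                   # supply reaches the end of follow-up
    # No more fills: persistent only if the last supply (plus the allowed gap)
    # reaches the end of the follow-up window.
    return covered_through + timedelta(days=allowed_gap_days) >= end_of_follow_up

# Hypothetical claims history: four 90-day fills, each refilled a bit late
fills = [(date(2011, 1, 1), 90), (date(2011, 5, 15), 90),
         (date(2011, 9, 20), 90), (date(2012, 1, 5), 90)]

print(is_persistent(fills, allowed_gap_days=60))  # True
print(is_persistent(fills, allowed_gap_days=30))  # False: the ~44-day spring gap exceeds 30 days
```

The same claims history flips from persistent to non-persistent when the gap rule tightens from 60 to 30 days, which is exactly why the gap criterion must be held constant when comparing vendors.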

Perhaps the biggest reason persistency is not reported as frequently as MPR is because it requires member-level analysis over time, controlling for eligibility.  This involves more complex analytics and processing time for large groups of patients, but given the reasons I’ve outlined above, it is worth the effort.


What is the best measure of medication adherence?

Over the last year, one of the most common questions I have received is how to best measure medication compliance or adherence for routine program reporting.  Given the slowed growth in utilization and the need to differentiate, it seems that most PBMs and health plans have placed a renewed emphasis on improving medication adherence.  

While medication possession ratios (MPRs), or versions of MPR, tend to dominate the reporting tools, MPR is not, in fact, the best measure of medication adherence.  Why?  The results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist, and MPRs allow for little clinical interpretation.  Each of these issues is discussed briefly below.

Methodological Choices 

  • First, as background, MPR is calculated as the sum of the days supply for all claims during a defined period of time divided by the number of days elapsed during the period.  MPRs can change significantly based on how the denominator is calculated.  In a previously published example in JMCP, a patient’s MPR was 0.75 when the denominator was based on the time between the first and last fill, but only 0.53 when the denominator was the entire time period.  The reason is that in the first approach, the MPR is affected solely by gaps between fills; when the entire calendar period is used, the MPR is affected by both gaps and treatment discontinuation (see the sketch after this list).
  • Second, MPRs defined over longer fixed time periods will, by definition, be lower due to decreases in persistency over time, so you cannot do a head-to-head comparison of a vendor who reports MPRs on a quarterly basis against another vendor who reports MPRs on an annual basis.
  • Third, MPRs are highly sensitive to the population included.  If the report includes both new and ongoing users, an influx of new patients into the program will artificially lower the MPR when it is based on a fixed time period as the denominator, because new users have lower persistency rates than ongoing users.
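Here is a minimal sketch of the denominator issue using a hypothetical claims history (the fills and resulting values are illustrative, not the JMCP patient): the same six fills produce a noticeably higher MPR when the denominator runs only from the first fill to the end of the last supply than when it covers the full measurement period.

```python
# Illustrative MPR calculation with a hypothetical claims history (not the
# JMCP patient): the same six 30-day fills yield very different MPRs
# depending on the denominator.
fill_days   = [0, 40, 85, 130, 175, 220]   # day of each fill, counted from the first fill
days_supply = 30                           # each fill is a 30-day supply
period_days = 365                          # fixed measurement period (e.g., one calendar year)

total_supply  = days_supply * len(fill_days)                 # 180 days of drug dispensed
first_to_last = fill_days[-1] + days_supply - fill_days[0]   # 250 days: first fill through end of last supply

mpr_between_fills = total_supply / first_to_last   # ~0.72: reflects only the gaps between fills
mpr_full_period   = total_supply / period_days     # ~0.49: also reflects discontinuation after the last fill

print(f"MPR, first-to-last-fill denominator: {mpr_between_fills:.2f}")
print(f"MPR, full-period denominator:        {mpr_full_period:.2f}")
```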


Quality of days supply

MPRs rely on the accuracy of the days supply figure provided by the pharmacist.  In the case of inhalers, injectables, and liquids, these figures are notoriously unreliable, so reporting an MPR is simply not appropriate for many medications.  For oral pills, the problem is less significant but comes into play when different drug dosages have price parity and/or pill-splitting is common.

Little clinical interpretation

The most significant limitation of the MPR is the inability to assess the clinical meaning of an observed improvement.  When programs claim to improve MPR by 3-5 percentage points, it is simply unknown what clinical impact, if any, will result from this increase in MPR.  Research examining the relationship between changes in compliance and clinical outcomes is sorely lacking.  While researchers have historically used an MPR of 80% or better as the benchmark for good adherence, it is well known that this is a somewhat arbitrary cut-off, driven more by precedent than by clinical rationale.

Is there a better alternative to MPR?  Yes, and I’ll share some thoughts on this alternative later this week.


JMCP Publishes Editorial on VBID: What Do We Really Know?

Published today in the Journal of Managed Care Pharmacy (JMCP) is an editorial on value-based insurance design, written by Kathleen Fairman and Fred Curtiss.  In the article titled “What Do We Really Know About VBID? Quality of the Evidence and Ethical Considerations for Health Plan Sponsors”, Fairman and Curtiss review the recently published studies of copay waivers and discuss the implications for payers, both private and public.

The paper notes that an estimated 20% of plan sponsors use copay waivers, while plan deductibles are actually trending higher, not lower.  While the reasons for the still-limited uptake have not been formally studied, the authors point to market data suggesting the “potential for short-term increases in utilization and cost” with “uncertain” health benefits and the possibility of “unintended incentives” that could reduce generic drug utilization if brand drug copayments are reduced too much.

The editorial also provides an extensive review of the challenges associated with defining low-value services, which, under the original intent of value-based benefits, are those for which copays should be raised rather than lowered.

In their review of the recent literature, the authors discuss five major weaknesses of the studies published to date, including:

  • No information about generic utilization (the concern being that copay waivers for brands may discourage use of lower cost, clinically appropriate generic alternatives)
  • No information about payer cost, despite claims in some recent studies of a positive ROI from copay waivers
  • Problems in study design and/or reporting (e.g., inability to control for the Healthy Adherer effect)
  • Lack of randomized trials
  • Lack of plausibility in cost-benefit analysis (e.g.,  a VBID copayment reduction would have to increase the effectiveness of statin treatment by approximately 41%-49% in secondary prevention  and 68%-79% in primary prevention—despite increasing medication adherence by only about 4%-6%, a clear implausibility.)

For future research, the authors warn plan sponsors to watch out for: 1) isolated significant findings; 2) causal linkages (or lack thereof); 3) reporting of total cost rather than health plan or sponsor cost; and 4) overextension of results from one study to other populations and benefit designs.

Fairman and Curtiss conclude that “Because VBID has been associated with only minimal medication adherence increases documented only in observational research, and because no health or medical utilization outcomes (e.g., ER or hospital use) have yet been reported for VBID programs, the evidence is insufficient to support expanding its use at the present time.”

VBID is a topic that I have written about extensively, both in the blog and in the published literature, and I found this review both thorough and insightful.  For those of you trying to keep pace with the research in this space as well as the ongoing ethical and practical challenges associated with VBID, this paper is a great resource.


Do Drugs Save Money? A Plausibility Test

In a previous post, I commented on a study that examined the correlation between medication compliance and use of other healthcare services over three years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis.  The study found a strong association between greater medication adherence (defined as a medication possession ratio of 0.80 or higher) and lower utilization of other medical services, primarily hospitalizations.  Considering all healthcare costs, not just disease-related, they reported an ROI of 3:1 for medication adherence in dyslipidemia and 10:1 for hypertension.

While my previous post highlighted a key methodological concern about the study, one of the most powerful tools for quickly stress-testing a study’s findings is to conduct a plausibility test to see if the results match up with what other rigorous research would suggest.   For this plausibility comparison, I selected a meta-analysis of data from 90,056 individuals in 14 randomized trials of statins.  There are other meta-analyses and randomized controlled trials that I could have chosen, which would have led to similar conclusions.

The adherence study included all patients with dyslipidemia, as evidenced by a diagnosis in the medical claims.  To be conservative, I selected patients from the meta-analysis who were taking statins for secondary prevention and looked at 5-year effectiveness, as shown in the table below.  Given the absolute risk reductions observed for hospitalizations for MI, revascularization, and stroke, the estimated number of hospital days avoided, averaged across all patients, was 0.33.  In contrast, the adherence study reported an average of 1.18 fewer hospital days for adherent versus non-adherent patients with dyslipidemia.

It is not plausible that the nearly 4-fold greater reduction in hospital days reported in the adherence study (1.18 versus 0.33 days) was due to greater medication compliance.  The implausibility is compounded by the fact that I included more severe patients, followed them for a much longer period of time, and examined the full effect of statins versus placebo rather than the effect of differences in adherence, all of which inflate the estimate of hospital days avoided.

Patients with Previous MI or CAD (5-year outcomes)

Event                Statin   No Statin   Absolute Reduction   Length of Stay (days)   Hospital Days Avoided
MI                   12.0%    15.0%       3.0%                 5                       0.15
Revascularization    11.0%    13.7%       2.7%                 5                       0.14
Stroke               3.2%     4.0%        0.8%                 5                       0.04
Total hospital days avoided                                                            0.33
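For readers who want to reproduce the arithmetic, here is a minimal sketch; the event rates come straight from the table above, and the 5-day length of stay is the assumption already built into it.

```python
# Reproducing the table: hospital days avoided per patient over 5 years
# = absolute risk reduction x average length of stay, summed across event types.
events = {  # name: (statin rate, no-statin rate, average length of stay in days)
    "MI":                (0.120, 0.150, 5),
    "Revascularization": (0.110, 0.137, 5),
    "Stroke":            (0.032, 0.040, 5),
}

total_days_avoided = 0.0
for name, (statin, no_statin, los) in events.items():
    days_avoided = (no_statin - statin) * los
    total_days_avoided += days_avoided
    print(f"{name}: {days_avoided:.2f} hospital days avoided")

print(f"Total: {total_days_avoided:.2f} hospital days avoided per patient")  # ~0.33
```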

Plausibility tests are quick and powerful and can be used to test the ROI claims of disease management vendors, medication compliance programs, and many other healthcare services.  Recognizing the need for such tools and plan sponsors’ limited time to examine vendors’ savings claims, we designed plausibility calculators specific to disease management and value-based insurance design to help plan sponsors discern fact from fiction.  They are free, so the next time you are listening to a vendor’s sales pitch, you can do a quick reality check.


Disease Management: Why Doesn’t It Save Money?

The honeymoon for disease management (DM) has clearly passed.  Health plans and employers are frustrated with the lack of value provided by their DM vendors, but they often struggle to understand why their DM program isn’t working as intended or what to do to correct it.  The combination of frustration and confusion has resulted in a wave of market experimentation fueled by vendors who claim to have discovered the elusive secret ingredient that makes DM work.

Published today in the American Journal of Managed Care is a paper I wrote to help plan sponsors better understand what is really known about savings from DM, why telephone-based DM does not save money, and what it takes to generate real savings.  This work is based on my market experiences, an extensive review of the literature in DM and other related disciplines, and a detailed assessment of what is known about cost-saving healthcare services and treatments.

The answer to why DM does not provide short-term savings lies partly in the myriad cost-effectiveness assessments of treatments for chronic disease conducted over the last 30 years. Cohen and Neumann reported that less than 20 percent of preventive measures or treatments for chronic conditions are cost-saving, even over a 30-year time horizon.  Kahn found that aggressive implementation of nationally recommended medical activities would increase costs over a 30-year period for all activities except smoking cessation.  Looking more specifically at the individual components of DM programs, the evidence is equally compelling: at the level of the individual program activity, cost savings have not been shown for treatment of these chronic conditions, with the exception of heart failure.  Accordingly, one should not expect cost savings for the program as a whole unless one believes that DM can independently improve the outcomes of patients with chronic disease without affecting the key clinical goals for each disease.

Cost-Effectiveness of Common Disease Management Activities/Goals

Disease/Condition   Treatment/Service                                                  Cost-Savings
Diabetes            A1C < 7%                                                           No
                    LDL cholesterol < 100 mg/dL                                        No
                    Blood pressure < 130/80 mm Hg                                      No
                    Feet examination                                                   No
                    Retinal screening                                                  No
                    Microalbuminuria screening                                         No
CAD                 Antihyperlipidemics (LDL < 100 mg/dL)                              No
                    Beta blocker                                                       No
                    Lifestyle changes                                                  No/Unknown
Asthma              Inhaled anti-inflammatory use                                      No
                    Asthma education on symptom monitoring and/or trigger avoidance    Unknown
HF                  ACE inhibitor use                                                  Yes
                    Beta blocker use                                                   Yes
                    Structured remote monitoring (weight, blood pressure, etc.)        Yes
                    Daily exercise                                                     No/Unknown

While the cost-effectiveness literature does not bode well for the future of telephone-based DM as currently designed, it does suggest that better targeting of treatment activities and patients may provide an opportunity for cost savings in some disease states.  Although a targeted approach is both intuitive and supported by the evidence, it has two practical problems that will prevent it from being widely adopted in the marketplace.

  • First, a more targeted approach is not a revenue-optimizing model for DM vendors.  For example, only about 5% of asthmatics would be targeted for intervention if the criterion were real cost savings, hardly a desirable approach for DM vendors whose revenue model is based on volume because of their large fixed cost structure.
  • A second barrier to adoption is the marketability of the more realistic return on investment (ROI).  An employer is unlikely to select a vendor offering a 1.85:1 ROI over a vendor with an inflated ROI (that also covers a much larger group of patients, since it is not targeted).  The same problem holds true for health plans: even though they often understand the methodological problems in vendors’ ROIs, ultimately they too must market their program to employers, who sometimes believe all ROIs are created equal.

To learn more about proven strategies for short-term savings, take a look at the full published article.



Prescription Copay Waivers Plus Educational Mailings for Asthma Improve Compliance—Causality or Correlation?

In recent weeks, the market has been inundated with evaluations of value-based insurance design (VBID) so it’s no surprise that I was asked to comment on a recent study of VBID published in American Health and Drug Benefits.

For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications as well as 3 educational mailings were compared with asthma patients who chose not to participate in the program.  The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the treatment group compared with the control group (54% versus 44%).

The critical question surrounding this study’s validity is the comparability of the control and intervention groups.  Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than patients who do not enroll.  As evidence of the differences in motivation between the two groups, 99% of patients in the intervention group versus 25% in the control group were enrolled in a traditional disease management program.  A second sign of selection bias is that 74% of intervention patients were using an inhaled steroid prior to program enrollment, versus 64% of control group subjects.  Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll.  That difference is likely explained in part by selection bias and in part by the copay waivers, but it is not possible to determine from the data how much each component contributed.

Viewing these results in light of other studies of VBID provides further evidence of the flawed control group.  VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class.  Yet this study reported a 10 percentage point improvement, a 2.5- to 5-fold greater effect than previous studies.  Advocates will likely argue that this larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.

To the authors’ credit, they acknowledge the study’s key limitation and the fact that a stronger study would have compared this employer’s asthma patients (both enrolled and not enrolled in the program) with another employer population having comparable clinical and sociodemographic characteristics.  Finding a comparable group can be challenging but not impossible, and it provides a much stronger study design for making a causal interpretation.  I would expect this more robust type of comparison to show a compliance improvement of anywhere from zero to 4 percentage points, given what has been seen to date in other research.

The study also examined asthma-related pharmacy and medical costs.  After controlling for covariates and baseline differences in costs, the intervention group had a lower (but not statistically significant) adjusted monthly medical cost at 12 months of follow-up compared with the control group ($18 vs. $23, respectively; p = .067).  Asthma-related pharmacy costs were higher ($89 vs. $53, respectively; p < .001).  Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76).  However, the authors never reported total asthma-related costs as I just did, and the study abstract mentions only all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.

The use of non-participants as a control group, while known to be a weak research design, sometimes occurs because of its convenience and employer preference, but it simply precludes any causal conclusions.  The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or “spin” as others have called it.  It is a questionable practice to watch for, as I have seen it used elsewhere in the pharmaceutical policy literature, such as in step therapy evaluations, in the absence of any plausible explanation.


Do Drugs Save Money? Check The Assumptions

Econometrics has finally discovered the hidden truth about pharmaceuticals: they save money, a lot of money.  A recent study found that compliance with pharmaceuticals produced an 8.4:1 ROI for congestive heart failure, 10.1:1 for hypertension, 6.7:1 for diabetes, and 3.1:1 for dyslipidemia.  This is certainly a surprising result given the contradictory findings from other credible research on the cost-effectiveness of pharmaceuticals.  However, it is likely to capture some headlines given its intuitive appeal, so I took a closer look.

This study, published by researchers at one of the large PBMs, examined the correlation between medication compliance and use of other healthcare services over several years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis.  It is well known in the published literature that making causal conclusions from a cross-sectional examination of medication compliance against total medical spend is highly problematic due to the Healthy Adherer Effect, the tendency of people who are adherent to their medications to also engage in other healthy behaviors, such as exercising regularly and eating a healthy diet.  The reason is that it is difficult, if not impossible, to control for differences in patient behavior that cannot be measured in claims data.  In this particular study, the Healthy Adherer Effect was likely compounded by assigning a compliance score of zero to patients who had a diagnosis but no prescription claim for the condition.

The study authors stated that they overcame the systematic selection bias created by the Healthy Adherer Effect through the use of an econometric technique called fixed-effects regression, essentially an alternative to the more commonly used ordinary least squares (OLS) regression.  As is often the case with econometric models, to believe in the superiority of fixed-effects modeling, you have to accept a key underlying assumption: that confounders, such as healthy eating and exercise, do not vary over time in conjunction with medication compliance.  As the authors acknowledge in the references, “fixed-effects modeling does not allow for the control of confounders that vary over time. Thus, for example, if patients who become adherent simultaneously start exercising regularly (assuming that both of these behavioral changes reduce health services use and spending), the estimated impact of adherence would remain biased.”  The authors make no case for why this assumption would hold true, and it hardly seems plausible given what is already known on the subject.

While such fixed-effects estimators may be an improvement on basic cross-sectional methods, they are still quite limited when it comes to uncovering a true causal effect when the confounders vary over time; and, like OLS, they will tend to overestimate the causal effect of pharmaceuticals on medical spending in the presence of the Healthy Adherer Effect.  Some evidence to that effect lies in the manuscript’s appendix: for most of the conditions, the difference between the OLS estimate, which the authors acknowledge does not address the selection bias problem, and the fixed-effects estimate was minimal (e.g., a marginal effect on inpatient days for heart failure of 5.731 for OLS versus 5.715 for fixed effects).
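To see why the time-varying confounder matters, here is a minimal simulation sketch, with entirely made-up data and parameters rather than anything from the study: within-patient demeaning (the fixed-effects transformation) removes the stable healthy-adherer trait, but an unmeasured behavior that changes along with adherence still biases the estimate.

```python
# Minimal simulation (made-up data and parameters) of the fixed-effects
# assumption: within-patient demeaning removes a stable "healthy adherer"
# trait, but a confounder that changes along with adherence (here, exercise)
# still biases the estimated effect of adherence on spending.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_periods = 5_000, 4
true_effect = -1_000          # true causal effect of adherence on annual spend ($)

healthy_trait = rng.normal(size=(n_patients, 1))   # time-invariant trait
# Healthier patients are more likely to be adherent in any given period
adherent = (rng.random((n_patients, n_periods)) < 0.4 + 0.2 * (healthy_trait > 0)).astype(float)
# Time-varying confounder: some patients start exercising when they become adherent
exercise = adherent * (rng.random((n_patients, n_periods)) < 0.5)

spend = (10_000
         + true_effect * adherent
         - 2_000 * healthy_trait    # stable healthy-adherer effect
         - 1_500 * exercise         # unmeasured time-varying confounder
         + rng.normal(0, 500, size=(n_patients, n_periods)))

def ols_slope(y, x):
    """Slope from a univariate OLS regression of y on x."""
    return np.polyfit(x.ravel(), y.ravel(), 1)[0]

pooled_ols = ols_slope(spend, adherent)   # biased by the trait AND by exercise
fixed_effects = ols_slope(spend - spend.mean(axis=1, keepdims=True),
                          adherent - adherent.mean(axis=1, keepdims=True))  # trait removed, exercise bias remains

print(f"True effect:            {true_effect:,.0f}")
print(f"Pooled OLS estimate:    {pooled_ols:,.0f}")      # overstates the savings
print(f"Fixed-effects estimate: {fixed_effects:,.0f}")   # closer, but still overstates them
```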

However, setting aside the more technical discussion, one of the most informative and practical ways to test the strength of these results is to conduct a plausibility test of the ROI, which basically means to compare these findings against credible randomized controlled trials of medication efficacy.  I’ll show you those results in an upcoming post.  Of course, none of this discussion is meant to minimize the importance of improved medication compliance.  It continues to be a critically important gap to address but employers and other plan sponsors should be provided realistic expectations about the economic value of improved compliance.
