I recently received a question about which drug class or population/disease state, if any, would lead me to recommend a zero dollar member cost share. This is a great question, and there are indeed some situations in which I would recommend use of a zero dollar copay.
First, let me briefly review why I do not recommend $0 copays as a standard part of the benefit design, even for classes such as diabetes and cardiovascular disease.
- The primary reason is that most patients do not stop taking medication because of cost, particularly in a commercially insured population (see figure for common reasons for non-adherence). Accordingly, copays are waived without any possibility of benefit for the vast majority of patients. This is evidenced by the small improvements in compliance seen after implementing a copay waiver program, which average 2-4 percentage points and represent 1-2 weeks of additional therapy. Targeting copay waivers to patients who are at high risk for adverse events and for whom cost truly is a barrier would be an ideal approach, but it raises HR and equity issues that are not easily resolved.
- The second consideration to keep in mind is the potential for fraud. I have heard from frustrated employers who implemented zero dollar copays for chronic conditions only to find that employees began sharing medications with family and friends with the same condition. Quantity limits can help control this potential problem partially but not entirely. I have not studied this phenomenon personally so I cannot speak to the magnitude of the problem.
I would, however, recommend use of a zero dollar copay program under certain conditions. First, zero dollar generic copays are a great tool for promoting use of lower cost therapeutic alternatives among patients currently using brand medications. I would consider them for therapy classes for which you have step therapy in place, as the two programs are complementary: step therapy promotes generic use among new users, while the $0 generic copay program is a carrot strategy for promoting generics among current brand users. The $0 generic copay helps grab the member’s attention and provides an extra incentive for making the switch. I would not make the copay waiver indefinite, however; six months of free generics is sufficient. Based on my experience and rigorous evaluations, these programs have a solid ROI, and the patient saves money too, of course.
Second, a population for which I MIGHT consider a $0 generic copay is seniors taking hypertension, cholesterol, and other cardiovascular medications. The rate of adverse cardiovascular events (absent treatment) is much higher in the senior population than in the commercial population; so IF price elasticity is at least as high as we see with commercial members, there is the potential to materially reduce the rate of adverse cardiovascular events and to achieve net savings from reduced hospitalizations. The key to this decision is determining the actual price elasticity of demand within your senior population. Because little contemporary public data is available on price elasticity among seniors, individual vendors will have to assess the elasticity within their own data. Once identified, a simple analytic tool, like the VBID calculator, can be used to estimate the potential reduction in hospitalizations and medical spend. Of course, implementing copay waivers in the Medicare Part D program is a greater administrative challenge than in a commercial or retiree plan, which is a relevant consideration.
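To make the net-savings question concrete, here is a minimal sketch of the kind of arithmetic a tool like the VBID calculator automates. Every input value, and the function itself, is a hypothetical placeholder for illustration; substitute your own population's adherence lift (derived from your measured price elasticity), event rates, and costs.

```python
# Back-of-the-envelope model of net savings from a copay waiver.
# All numbers below are hypothetical placeholders, not data from
# any real population or from the VBID calculator itself.

def projected_savings(members, baseline_event_rate, adherence_lift,
                      relative_risk_reduction, cost_per_event,
                      waived_copay_cost):
    """Estimate annual net savings from a copay waiver.

    adherence_lift: absolute increase in the share of members adherent
        (e.g., 0.03 for a 3-percentage-point gain).
    relative_risk_reduction: event-rate reduction for a newly adherent
        member (taken from trial data).
    waived_copay_cost: annual copay revenue forgone per member, since
        the waiver applies to everyone, not just marginal adherers.
    """
    events_avoided = (members * adherence_lift *
                      baseline_event_rate * relative_risk_reduction)
    medical_savings = events_avoided * cost_per_event
    program_cost = members * waived_copay_cost
    return medical_savings - program_cost

# Hypothetical senior population: 10,000 members, 8% annual event rate,
# 3-point adherence gain, 25% relative risk reduction, $15,000 per
# hospitalization, $120/year in waived copays per member.
net = projected_savings(10_000, 0.08, 0.03, 0.25, 15_000, 120)
print(f"Projected net savings: ${net:,.0f}")
```

Note how the result hinges on the adherence lift: because the waiver is paid for every member but only helps the marginal adherers, a small lift can easily leave the program net negative, which is exactly why measuring your population's actual price elasticity matters.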
A third population for which I would PILOT a $0 generic program is patients at HIGH risk for adverse cardiovascular events who have NOT initiated pharmacotherapy. The classic example is the patient with a recent myocardial infarction who has not initiated a statin and/or beta-blocker. While cost is not likely to be the reason for non-initiation for most patients, it might be a useful short-term incentive, when combined with the right intervention, for encouraging use. I say pilot first because it simply may not be effective, and there is a risk that a zero price could actually deter use if non-initiating patients read price as an indicator of quality. Another growing challenge is that many patients will appear as non-users because of $4 generic programs, which do not always result in a claim being submitted.
For those of you looking for more information on copay waivers, see the recent Fairman editorial in JMCP and a paper Steve Melnick and I published last year on the potential financial savings from copay waivers.
In an earlier post, I discussed that while medication possession ratios (MPRs) tend to dominate the marketplace reporting tools for medication adherence programs, MPR is not, in fact, the best measure of medication adherence. Reason being, the results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist. Most importantly, MPRs allow for little to no clinical interpretation.
Of all the measurement options that exist, I find persistency to be the most useful. Persistency is a dichotomous yes/no measure based on length of therapy: it tells whether a patient’s length of therapy meets or exceeds a certain threshold. For example, it is common for published studies to report the percent of newly treated patients who were persistent with their medication one year after starting therapy. There are two key reasons why I prefer persistency over other adherence measures:
1. It measures the most significant medication adherence problem, i.e., whether patients stop taking their medication altogether. While small gaps in therapy are not desirable, they are far less prevalent and far less clinically impactful than discontinuing treatment altogether. Studies have shown that between 40 and 60% of newly treated patients stop taking their chronic medications within one year. The figure below, from a BMJ study of antihypertensive users, reports both persistency over time and non-adherence due to poor execution of the dosing regimen. As the authors state, “non-execution of the dosing regimen created a shortfall in drug intake that is an order of magnitude smaller than the shortfall created by early discontinuation/short persistence.” Discontinuation has to be the top priority, and organizations will manage what they measure, so measure persistency.
2. The clinical interpretation is unequivocal. As I discussed in the previous post, improvements in MPR are very difficult to interpret in terms of their clinical impact due to the lack of data on the relationship between differences in MPR or gaps and clinical outcomes. However, when a patient discontinues their chronic medication altogether, the clinical impact is far clearer (as long as the patient was an appropriate candidate for the medication to begin with—a topic for another day). The negative clinical and economic impact of discontinuation can be forecasted for a population based on published randomized trials.
Another advantage of a persistency measure is that it is less susceptible to errors in the days supply figures provided by the pharmacist, because the analysis typically allows a gap of 30 or more days in therapy before labeling someone non-persistent. That said, persistency is still vulnerable, albeit less so than MPR, to differences in methodological approach. The two key decisions in a persistency analysis are 1) how long to follow patients; and 2) what gap in therapy will be considered non-persistence (e.g., a 30-day, 60-day, or 90-day gap). Obviously, the longer you follow patients, the lower the persistency rate will be; and the larger the gap required to be labeled non-persistent, the higher the persistency rate will be. Accordingly, if comparing vendors, make sure they use the same follow-up length and gap criteria. In addition, persistency rates for new versus ongoing users look dramatically different, so you should always ask that the two groups be reported separately to prevent changes in the mix of members from artificially influencing your results.
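The two methodological decisions above can be illustrated with a minimal persistency check in Python. The claims layout, field names, and threshold values are assumptions for the sketch; a real analysis would also control for eligibility and separate new from ongoing users.

```python
# Illustrative persistency check from pharmacy claims.
# Data layout and thresholds are hypothetical for the sketch.
from datetime import date, timedelta

def is_persistent(fills, follow_up_days=365, allowable_gap=60):
    """fills: list of (fill_date, days_supply) tuples for one patient,
    sorted by fill_date. Returns True if no gap in therapy exceeds
    `allowable_gap` days over the follow-up period."""
    start = fills[0][0]
    end_of_follow_up = start + timedelta(days=follow_up_days)
    covered_through = start
    for fill_date, days_supply in fills:
        if fill_date > covered_through + timedelta(days=allowable_gap):
            return False  # gap exceeded before this refill arrived
        covered_through = max(covered_through,
                              fill_date + timedelta(days=days_supply))
    # also check the gap between the last covered day and follow-up end
    return covered_through + timedelta(days=allowable_gap) >= end_of_follow_up

# Hypothetical patient: three fills over six months of follow-up.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 5), 30), (date(2024, 3, 10), 90)]
print(is_persistent(fills, follow_up_days=180, allowable_gap=60))  # True
```

Tightening `allowable_gap` from 60 to 30 days, or extending `follow_up_days`, flips some patients from persistent to non-persistent, which is precisely why vendor comparisons require identical follow-up and gap criteria.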
Perhaps the biggest reason persistency is not reported as frequently as MPR is because it requires member-level analysis over time, controlling for eligibility. This involves more complex analytics and processing time for large groups of patients, but given the reasons I’ve outlined above, it is worth the effort.
Over the last year, one of the most common questions I have received is how to best measure medication compliance or adherence for routine program reporting. Given the slowed growth in utilization and the need to differentiate, it seems that most PBMs and health plans have placed a renewed emphasis on improving medication adherence.
While medication possession ratios (MPRs) or versions of MPR tend to dominate the reporting tools, MPR is not, in fact, the best measure of medication adherence. Why? The results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist; and MPRs allow for little clinical interpretation. Each of these issues is discussed briefly below.
- First as background, MPR is calculated as the sum of the days supply for all claims during a defined period of time divided by the number of days elapsed during the period. MPRs can change significantly based on how the denominator is calculated. In a previously published example in JMCP, a patient’s MPR when the denominator was based on the time between the first and last fill was 0.75; but when the denominator was the entire time period, the MPR was only 0.53. Reason being, in the first approach, the MPR is affected solely by gaps between fills. When the entire calendar period is used, the MPR is affected both by gaps and treatment discontinuation.
- MPRs defined over longer fixed time periods will, by definition, be lower due to decreases in persistency over time, so you cannot do a head-to-head comparison of a vendor who reports MPRs on a quarterly basis to another vendor who reports MPR on an annual basis.
- Third, MPRs are highly sensitive to the population included. If the report includes both new and ongoing users, an influx of new patients into the program will artificially lower the MPR when it is based on a fixed time period as the denominator. Reason being, new users have lower persistency rates than ongoing users.
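The denominator sensitivity described above can be demonstrated with a small sketch. The fill pattern below is hypothetical, chosen only to reproduce the direction of the JMCP example (a higher MPR with a first-to-last-fill denominator than with a full-period denominator), not its exact 0.75 and 0.53 figures.

```python
# Sketch of how the MPR denominator choice changes the result.
# The fill pattern is hypothetical, for illustration only.
from datetime import date

def mpr(fills, period_start, period_end, denominator="period"):
    """fills: list of (fill_date, days_supply), sorted by date.
    denominator='period' uses the full calendar period;
    'first_to_last' uses first fill through last fill plus its supply."""
    total_supply = sum(ds for _, ds in fills)
    if denominator == "first_to_last":
        days = (fills[-1][0] - fills[0][0]).days + fills[-1][1]
    else:
        days = (period_end - period_start).days
    return total_supply / days

# Six 30-day fills, each 40 days apart, then discontinuation mid-year.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 10), 30),
         (date(2024, 3, 21), 30), (date(2024, 4, 30), 30),
         (date(2024, 6, 9), 30), (date(2024, 7, 19), 30)]
start, end = date(2024, 1, 1), date(2024, 12, 31)

print(round(mpr(fills, start, end, "first_to_last"), 2))  # 0.78
print(round(mpr(fills, start, end), 2))                   # 0.49
```

Same claims, two very different MPRs: the first-to-last denominator sees only the between-fill gaps, while the full-period denominator also captures the mid-year discontinuation.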
Quality of days supply
MPRs rely on the accuracy of the days supply figure provided by the pharmacist. In the case of inhalers, injectables and liquids, these figures are notoriously unreliable, so reporting an MPR is simply not appropriate for many medications. For oral pills, the problem is less significant but comes into play when different drug dosages have price parity and/or pill-splitting is common.
Little clinical interpretation
The most significant limitation of the MPR is the inability to assess the clinical meaning of an observed improvement. When programs claim to improve MPR by 3-5 percentage points, it is simply unknown what clinical impact, if any, will be seen from this increase in MPR. Research examining the relationship between changes in compliance and clinical outcomes is sorely lacking. While researchers have historically used an MPR of 80% or better as the benchmark for good adherence, it is well known that this is a somewhat arbitrary cut-off, driven more by precedent than by clinical rationale.
Is there a better alternative to MPR? Yes, and I’ll share some thoughts on this alternative later this week.
Published today in the Journal of Managed Care Pharmacy (JMCP) is an editorial on value-based insurance design, written by Kathleen Fairman and Fred Curtiss. In the article titled “What Do We Really Know About VBID? Quality of the Evidence and Ethical Considerations for Health Plan Sponsors”, Fairman and Curtiss review the recently published studies of copay waivers and discuss the implications for payers, both private and public.
The paper notes that uptake of copay waivers is estimated at about 20% of plan sponsors, while plan deductibles are actually trending higher, not lower. Although the reasons for this limited uptake have not been formally studied, the authors point to market data suggesting the “potential for short-term increases in utilization and cost” with “uncertain” health benefits and the possibility of “unintended incentives” that could reduce generic drug utilization if brand drug copayments are reduced too much.
The editorial also provides an extensive review of the challenges associated with defining low-value services, which under the original intent of value-based benefits, are those for which copays should be raised, rather than lowered.
In their review of the recent literature, the authors discuss five major weaknesses of the studies published to date, including:
- No information about generic utilization (the concern being that copay waivers for brands may discourage use of lower cost, clinically appropriate generic alternatives)
- No information about payer cost, despite claiming a positive ROI from copay waivers in some recent studies
- Problems in study design and/or reporting (e.g., inability to control for the Healthy Adherer effect)
- Lack of randomized trials
- Lack of plausibility in cost-benefit analysis (e.g., a VBID copayment reduction would have to increase the effectiveness of statin treatment by approximately 41%-49% in secondary prevention and 68%-79% in primary prevention—despite increasing medication adherence by only about 4%-6%, a clear implausibility.)
For future research, the authors warn plan sponsors to watch out for: 1) isolated significant findings; 2) causal linkages (or lack thereof); 3) reporting of total cost rather than health plan or sponsor cost; and 4) overextension of results from one study to other populations and benefit designs.
Fairman and Curtiss conclude that “Because VBID has been associated with only minimal medication adherence increases documented only in observational research, and because no health or medical utilization outcomes (e.g., ER or hospital use) have yet been reported for VBID programs, the evidence is insufficient to support expanding its use at the present time.”
VBID is a topic that I have written about extensively, both in the blog and in the published literature, and I found this review both thorough and insightful. For those of you trying to keep pace with the research in this space as well as the ongoing ethical and practical challenges associated with VBID, this paper is a great resource.
In a previous post, I commented on a study that examined the correlation between medication compliance and use of other healthcare services over three years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis. The study found a strong association between greater medication adherence (defined as a medication possession ratio of 0.80 or higher) and lower utilization of other medical services, primarily hospitalizations. Considering all healthcare costs, not just disease-related, they reported an ROI of 3:1 for medication adherence in dyslipidemia and 10:1 for hypertension.
While my previous post highlighted a key methodological concern about the study, one of the most powerful tools for quickly stress-testing a study’s findings is to conduct a plausibility test to see if the results match up with what other rigorous research would suggest. For this plausibility comparison, I selected a meta-analysis of data from 90,056 individuals in 14 randomized trials of statins. There are other meta-analyses and randomized controlled trials that I could have chosen, which would have led to similar conclusions.
The adherence study included all patients with dyslipidemia as evidenced by a diagnosis in the medical claims. To be conservative, I selected patients from the meta-analysis who were taking statins for secondary prevention and looked at 5-year effectiveness, as shown in the table below. Given the absolute risk reductions observed for hospitalizations for MI, revascularization, and strokes, the estimated number of hospital days avoided across all the patients was 0.33. In contrast, the adherence study reported an average of 1.18 fewer hospital days for adherent patients versus non-adherent patients with dyslipidemia.
It is not plausible that the nearly 4-fold greater hospital reduction reported in the adherence study (1.18 versus 0.33 days) was due to greater medication compliance. The implausibility is compounded by the fact that I included more severe patients, followed them for a much longer period of time, and examined the full effect of statins versus placebo rather than the effect of differences in adherence, all of which inflated our hospital days avoided.
Patients with Previous MI or CAD

| Event | Statin | No Statin | Absolute Reduction | Length of Stay | Hospital days avoided |
| --- | --- | --- | --- | --- | --- |
| Total hospital days avoided | | | | | 0.33 |
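The arithmetic behind this plausibility test is simple: for each event type, multiply the absolute risk reduction by the average length of stay, then sum across event types. The per-event figures below are hypothetical placeholders, not the meta-analysis's actual rows, chosen only to illustrate how a total on the order of 0.33 days arises.

```python
# Plausibility-test arithmetic: hospital days avoided per patient =
# sum over event types of (absolute risk reduction x avg length of stay).
# All per-event figures below are hypothetical placeholders.

events = {
    # event: (absolute risk reduction over follow-up, avg length of stay, days)
    "MI hospitalization": (0.030, 5),
    "Revascularization": (0.025, 4),
    "Stroke": (0.016, 5),
}

total_days_avoided = sum(arr * los for arr, los in events.values())
print(round(total_days_avoided, 2))  # 0.33
```

With the benchmark in hand, the comparison is a one-liner: the adherence study's claimed 1.18 days avoided is roughly 3.6 times the trial-derived figure, which is the gap the plausibility test flags.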
Plausibility tests are quick and powerful and can be used to test the ROI claims from disease management vendors, medication compliance programs, and many other healthcare services. Recognizing the need for such tools and plan sponsors’ limited time to examine vendors’ savings claims, we designed plausibility calculators specific to disease management and value-based insurance to help plan sponsors discern fact from fiction. They are free so the next time you are listening to a vendor’s sales pitch, you can do a quick reality check.
The honeymoon for disease management (DM) has clearly passed. Health plans and employers are frustrated with the lack of value provided by their DM vendors, but they often struggle to understand why their DM program isn’t working as intended or what to do to correct it. The combination of frustration and confusion has resulted in a wave of market experimentation fueled by vendors who claim to have discovered the elusive secret ingredient that makes DM work.
Published today in the American Journal of Managed Care is a paper I wrote to help plan sponsors better understand what is really known about savings from DM, why telephone-based DM does not save money and what it takes to generate real savings. This work is based on my market experiences, an extensive review of the literature, both in DM and other related disciplines, and a detailed assessment of what is known about cost-saving healthcare services and treatments.
The answer to why DM does not provide short-term savings lies partly in the myriad cost-effectiveness assessments conducted on treatments for chronic disease over the last 30 years. Cohen and Neumann reported that less than 20 percent of preventive measures or treatments for chronic conditions are cost-saving, even over a 30-year time horizon. Kahn found that aggressive implementation of nationally recommended medical activities would increase costs over a 30-year period for all activities except smoking cessation. Looking more specifically at the individual components of DM programs, the evidence is equally compelling. At the level of the individual program activity, cost savings have not been shown for treatment of these chronic conditions, the exception being heart failure. Accordingly, one should not expect cost savings from the program as a whole unless one believes that DM can improve the outcomes of patients with chronic disease through some mechanism other than these key clinical goals.
Cost-Effectiveness of Common Disease Management Activities/Goals

| Condition | Activity/Goal | Cost-Saving? |
| --- | --- | --- |
| Diabetes | A1C < 7% | No |
| Diabetes | LDL cholesterol < 100 mg/dl | No |
| Diabetes | Blood pressure < 130/80 mmHg | No |
| CAD | Antihyperlipidemics (LDL < 100 mg/dl) | No |
| Asthma | Inhaled anti-inflammatory use | No |
| Asthma | Education on symptom monitoring and/or trigger avoidance | Unknown |
| HF | ACE inhibitor use | Yes |
| HF | Beta blocker use | Yes |
| HF | Structured remote monitoring (weight, blood pressure, etc.) | Yes |
While the cost-effectiveness literature does not bode well for the future of telephone-based DM as currently designed, cost-effectiveness research suggests that better targeting of treatment activities and patients may provide opportunity for cost-savings for some disease states. Although a targeted approach is both intuitive and supported by the evidence, it has two practical problems that will prevent it from being widely adopted in the marketplace.
- First, a more targeted approach is not a revenue-optimizing model for DM vendors. For example, only about 5% of asthmatics would be targeted for intervention if the criteria were real cost-savings, hardly a desirable approach for DM vendors whose revenue model is based on volume due to their large fixed cost structure.
- A second barrier to adoption is the marketability of the more realistic return-on-investment (ROI). An employer is unlikely to select a vendor that is offering a 1.85 ROI over a vendor with an inflated ROI (that also includes a much larger group of patients since it is not targeted). The same problem holds true for health plans—even though they often understand the methodological problems of the vendors’ ROIs, ultimately they too must market their program to the employers who sometimes believe all ROIs are created equal.
To learn more about proven strategies for short-term savings, take a look at the full published article.
Prescription Copay Waivers Plus Educational Mailings for Asthma Improve Compliance—Causality or Correlation?
In recent weeks, the market has been inundated with evaluations of value-based insurance design (VBID) so it’s no surprise that I was asked to comment on a recent study of VBID published in American Health and Drug Benefits.
For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications as well as 3 educational mailings were compared with asthma patients who chose not to participate. The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the treatment group versus the control group (54% vs. 44%).
The critical question surrounding this study’s validity is the comparability of the control and intervention group. Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than are patients who do not enroll. As evidence of the differences in motivation between the two groups, 99% of patients in the intervention group and 25% in the control group were enrolled in a traditional disease management program. A second point of evidence of the presence of selection bias is that 74% of intervention patients were using an inhaled steroid versus 64% of control group subjects prior to program enrollment. Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll. That difference is likely explained in part by selection bias and in part, by the copay waivers; but it is not possible to determine from the data how much each component contributed.
Viewing these results in light of other studies of VBID provides further evidence of a flawed control group. VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class. Yet this study reported a 10 percentage point improvement, a 2.5 to 5-fold greater effect than previous studies. Advocates will likely argue that this larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.
To the authors’ credit, they acknowledge the study’s key limitation and the fact that a stronger study would have compared this employer’s asthma patients (both enrolled and not enrolled in the program) to another employer population having comparable clinical and sociodemographics. Finding a comparable group can be challenging but not impossible and provides a much stronger study design for making a causal interpretation. I would expect this more robust type of comparison to show a compliance improvement anywhere from zero to 4 percentage points given what has been seen to date in other research.
The study also examined asthma-related pharmacy and medical costs. After controlling for covariates and baseline differences in costs, the intervention group had a lower (but not statistically significant) adjusted monthly medical cost at 12 months of follow-up compared with the control group ($18 vs. $23; p = .067). Asthma-related pharmacy costs were higher ($89 vs. $53; p < .001). Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76). However, the authors never reported total asthma-related costs as I just did; and the study abstract only mentions all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.
The use of non-participants as a control group, while known to be a weak research design, sometimes occurs due to its convenience and employer preference, but it simply prohibits making any causal conclusions. The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or “spin” as others have called it. It is a questionable practice to watch for, as I have seen it used in other pharmaceutical policy literature, such as step therapy evaluations, in the absence of any plausible explanation.