Archive for March, 2011
I recently received a question asking for which drug class or population/disease state, if any, I would recommend a zero dollar member cost share. This is a great question, and there actually are some situations in which I would recommend use of a zero dollar copay.
First, let me briefly review why I do not recommend $0 copays as a standard part of the benefit design, even for classes such as diabetes and cardiovascular disease.
- The primary reason is that most patients do not stop taking medication because of cost, particularly in a commercially insured population (see figure for common reasons for non-adherence). Accordingly, copays are being waived without any possibility of benefit for the vast majority of patients. This is evidenced in the small improvements in compliance seen after implementing a copay waiver program—averaging 2-4 percentage points and representing 1-2 weeks of additional therapy. Targeting copay waivers to patients who are at high risk for adverse events and for whom cost is a true barrier would be an ideal approach, but it raises HR and equity issues that are not easily resolved.
- The second consideration to keep in mind is the potential for fraud. I have heard from frustrated employers who implemented zero dollar copays for chronic conditions only to find that employees began sharing medications with family and friends with the same condition. Quantity limits can partially, but not entirely, control this problem. I have not studied this phenomenon personally, so I cannot speak to its magnitude.
I would, however, recommend use of a zero dollar copay program under certain conditions. First, zero dollar generic copays are a great tool for promoting use of lower cost, therapeutic alternatives among patients currently using brand medications. I would consider them for therapy classes in which you have step therapy in place, as the two programs are complementary. Step therapy promotes generic use among new users, while the $0 generic copay program is a carrot strategy for promoting generics among current brand users. The $0 generic copay helps grab the member’s attention and provides a little extra incentive for making the switch. I would not make the copay waiver indefinite, however; six months of free generics is sufficient. Based on my experience and rigorous evaluations, these programs have a solid ROI; and, of course, the patient saves money too.
Second, a population for which I MIGHT consider a $0 generic copay is seniors taking hypertension, cholesterol, and other cardiovascular medications. The rate of adverse cardiovascular events (absent treatment) is much higher in the senior population than in the commercial population; so IF price elasticity is at least as high as we see with commercial members, there is the potential to materially reduce the rate of adverse cardiovascular events and to achieve a net savings from reduced hospitalizations. The key to this decision is determining the actual price elasticity of demand within your senior population. As little contemporary public data is available on price elasticity in seniors, individual vendors will have to assess the elasticity within their own data. Once identified, a simple analytic tool, like the VBID calculator, can be used to determine the potential reduction in hospitalizations and medical spend. Of course, implementing copay waivers in the Medicare Part D program is a greater administrative challenge than in a commercial or retiree plan, which is a relevant consideration.
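To make the mechanics concrete, the kind of arithmetic a tool like the VBID calculator performs can be sketched in a few lines. Every input below is a hypothetical placeholder, not a benchmark: you would substitute your own senior population's baseline event rate, the adherence lift implied by your measured price elasticity, and your plan's cost figures.

```python
# Hedged sketch of a VBID-calculator-style estimate. All input values are
# hypothetical placeholders; substitute figures measured in your own population.

def vbid_estimate(members, baseline_event_rate, adherence_lift,
                  relative_risk_reduction, cost_per_event, waived_copay_cost):
    """Estimate hospitalizations avoided and net impact of a copay waiver.

    adherence_lift: share of members newly adherent because of the waiver
    relative_risk_reduction: event-rate reduction conferred by the therapy
    """
    newly_adherent = members * adherence_lift
    events_avoided = newly_adherent * baseline_event_rate * relative_risk_reduction
    medical_savings = events_avoided * cost_per_event
    waiver_cost = members * waived_copay_cost
    return events_avoided, medical_savings - waiver_cost

# Hypothetical senior subgroup with a high baseline event rate, and an
# adherence lift above the 2-4 points typical commercially (the "IF" above).
events, net = vbid_estimate(
    members=10_000,
    baseline_event_rate=0.15,        # annual CV event rate absent treatment
    adherence_lift=0.10,             # assumed elasticity higher than commercial
    relative_risk_reduction=0.30,    # e.g., a statin-class effect size
    cost_per_event=30_000,           # average hospitalization cost
    waived_copay_cost=120,           # $10/month generic copay waived
)
print(f"{events:.0f} events avoided, net impact ${net:,.0f}")
```

With these placeholder inputs the waiver produces a modest net savings, but small changes to the elasticity or event-rate assumptions flip the sign, which is exactly why measuring elasticity in your own data comes first.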
A third population for which I would PILOT a $0 generic program is patients at HIGH risk for adverse cardiovascular events who have NOT initiated pharmacotherapy. The classic example is the patient with a recent myocardial infarction who has not initiated a statin and/or beta-blocker. While cost is not likely to be the reason for non-initiation for most patients, it might be a useful short-term incentive, when combined with the right intervention, for encouraging use. I say pilot first because it simply may not be effective, and there is the risk that a zero price could actually deter use if non-initiating patients interpret price as a signal of quality. Another growing challenge is that many patients will appear as non-users because of $4 generic programs, which do not always result in a claim being submitted.
For those of you looking for more information on copay waivers, see the recent Fairman editorial in JMCP and a paper Steve Melnick and I published last year on the potential financial savings from copay waivers.
While there has been no shortage of press about the recent FDA approval of Makena™, made by K-V Pharmaceutical Company, I think the recently published letter in the New England Journal of Medicine about the pharmacoeconomics of the drug is quite useful.
Separate from the business and ethical questions of the $30,000 plus price tag for Makena (a drug that previously cost around $300 as a generic), one could argue that the pharmacoeconomics of the drug at the new price may still render the medication a cost-effective therapy. A physician at Aetna examined just this question, evaluating the pharmacoeconomics of Makena based on the likely rate of treatment, the efficacy of the drug versus placebo, and the subsequent costs avoided due to treatment.
Based on published literature, Aetna estimated that about 139,000 women are candidates for Makena, of which about 22% (30,500) are likely to have a recurrent preterm birth absent medication. With treatment, 33% (10,000) of these preterm births could be prevented, saving $334 million in direct medical costs and $519 million in indirect medical costs. The indirect medical costs include maternal care, special education, early intervention costs, etc. The cost of treating the 139,000 women, at $29,000 per course of treatment (price before distribution mark-ups), would be $4.0 billion. Accordingly, including both direct and indirect medical costs, Makena will cost $8 for every $1 saved, a strongly negative ROI.
However, one could argue that demonstration of net cost-savings should not be the criterion for coverage; rather, payers should examine the drug’s cost-effectiveness, measured as cost per life-year saved (LYS). To meet the threshold of $100,000 per LYS (inflation-adjusted from the more commonly used $50,000 per quality-adjusted life-year), the drug would need to save nearly 35,000 life-years, or prevent 435 deaths, assuming an average lifespan of 80 years. Previous studies suggest this is unlikely, as the drug has shown a small (and not statistically significant) effect on reducing mortality.
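The letter's back-of-the-envelope arithmetic is easy to reproduce. The inputs below are the rounded figures quoted above, so the outputs differ slightly from the published numbers; this is an illustration of the calculation, not a restatement of Aetna's exact model.

```python
# Rounded inputs from the NEJM letter as quoted above; outputs will differ
# slightly from the published figures because of rounding.

candidates = 139_000          # women eligible for Makena per year
recurrence_rate = 0.22        # recurrent preterm birth rate absent treatment
efficacy = 0.33               # relative reduction in preterm births
cost_per_course = 29_000      # price before distribution mark-ups

preterm_births = candidates * recurrence_rate       # roughly 30,500
births_prevented = preterm_births * efficacy        # roughly 10,000
treatment_cost = candidates * cost_per_course       # roughly $4.0 billion
savings = 334e6 + 519e6                             # direct + indirect costs avoided

print(f"Preterm births prevented: {births_prevented:,.0f}")
print(f"Treatment cost: ${treatment_cost / 1e9:.2f}B vs savings ${savings / 1e9:.2f}B")
print(f"Cost per preterm birth prevented: ${treatment_cost / births_prevented:,.0f}")
```

At these inputs the drug costs roughly $400,000 per preterm birth prevented, which frames the targeting discussion that follows.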
Another coverage option is to identify opportunities to further target use of the drug to patients at the greatest risk for preterm birth. However, even with an ability to target only patients with a 100% risk of preterm birth (highly implausible), the drug would still cost nearly double the amount it saves in direct and indirect medical costs given its effectiveness. Of course, these figures represent back-of-the-envelope pharmacoeconomic calculations for Makena; but absent very significant unmeasured benefits, the analysis highlights the difficulty this drug will have in demonstrating a favorable pharmacoeconomic profile.
In an earlier post, I discussed that while medication possession ratios (MPRs) tend to dominate the marketplace reporting tools for medication adherence programs, MPR is not, in fact, the best measure of medication adherence. Reason being, the results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist. Most importantly, MPRs allow for little to no clinical interpretation.
Of all the measurement options that exist, I find persistency to be the most useful. Persistency, which is a dichotomous yes/no measure that is based on the length of therapy, tells whether a patient’s length of therapy meets or exceeds a certain threshold. For example, it is common for published studies to report the percent of newly treated patients who were persistent with their medication one year after starting therapy. There are two key reasons why I prefer persistency over other adherence measures:
1. It measures the most significant medication adherence problem, i.e., whether patients stop taking their medication altogether. While small gaps in therapy are not desirable, they are far less prevalent and less clinically impactful than discontinuing treatment altogether. Studies have shown that between 40% and 60% of newly treated patients stop taking their chronic medications within one year. The figure below, from a BMJ study of antihypertensive users, reports both persistency over time and non-adherence due to poor execution of the dosing regimen. As the authors state, “non-execution of the dosing regimen created a shortfall in drug intake that is an order of magnitude smaller than the shortfall created by early discontinuation/short persistence.” Discontinuation has to be the top priority, and organizations will manage what they measure–so measure persistency.
2. The clinical interpretation is unequivocal. As I discussed in the previous post, improvements in MPR are very difficult to interpret in terms of their clinical impact due to the lack of data on the relationship between differences in MPR or gaps and clinical outcomes. However, when a patient discontinues their chronic medication altogether, the clinical impact is far clearer (as long as the patient was an appropriate candidate for the medication to begin with—a topic for another day). The negative clinical and economic impact of discontinuation can be forecasted for a population based on published randomized trials.
Another advantage of a persistency measure is that it is less susceptible to errors in the days supply figures provided by the pharmacist, because the analysis typically allows a gap of 30 or more days in therapy before labeling someone non-persistent. That said, persistency is still vulnerable, albeit less so than MPR, to differences in methodological approach. The two key decisions in a persistency analysis are 1) how long to follow patients; and 2) what gap in therapy will be considered non-persistence (e.g., a 30-day, 60-day, or 90-day gap). Obviously, the longer you follow patients, the lower the persistency rate will be; and the larger the gap required to be labeled non-persistent, the higher the persistency rate will be. Accordingly, if comparing vendors, make sure you have the same follow-up length and gap criteria. In addition, persistency rates for new versus ongoing users look dramatically different, so you should always ask that the two groups be reported separately to prevent changes in the mix of members from artificially influencing your results.
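Those two methodological decisions can be made concrete in code. Below is a minimal sketch of a claims-based persistency check; the data layout and the default 30-day gap and one-year follow-up are illustrative choices, not a standard.

```python
from datetime import date, timedelta

def is_persistent(fills, follow_up_days=365, allowed_gap=30):
    """Dichotomous persistency: did the patient stay on therapy through the
    follow-up window without any gap exceeding allowed_gap days?

    fills: list of (fill_date, days_supply) tuples, sorted by fill_date.
    """
    start = fills[0][0]
    end_of_follow_up = start + timedelta(days=follow_up_days)
    covered_through = start
    for fill_date, days_supply in fills:
        if fill_date > covered_through + timedelta(days=allowed_gap):
            return False  # refill came too late: non-persistent
        covered_through = max(covered_through,
                              fill_date + timedelta(days=days_supply))
        if covered_through >= end_of_follow_up:
            return True
    # Supply ran out before the window closed; allow one final grace gap.
    return covered_through + timedelta(days=allowed_gap) >= end_of_follow_up

# Hypothetical patient: six tidy 30-day fills, then discontinuation.
fills = [(date(2011, 1, 1) + timedelta(days=30 * i), 30) for i in range(6)]
print(is_persistent(fills))                  # non-persistent at one year
print(is_persistent(fills, allowed_gap=90))  # still non-persistent
```

Shorten `follow_up_days` or stretch `allowed_gap` far enough and the same patient flips to persistent, which is why vendor comparisons require identical follow-up and gap criteria.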
Perhaps the biggest reason persistency is not reported as frequently as MPR is because it requires member-level analysis over time, controlling for eligibility. This involves more complex analytics and processing time for large groups of patients, but given the reasons I’ve outlined above, it is worth the effort.
Over the last year, one of the most common questions I have received is how to best measure medication compliance or adherence for routine program reporting. Given the slowed growth in utilization and the need to differentiate, it seems that most PBMs and health plans have placed a renewed emphasis on improving medication adherence.
While medication possession ratios (MPRs) or versions of MPR tend to dominate the reporting tools, MPR is not, in fact, the best measure of medication adherence. Why? The results of an MPR analysis depend heavily upon the methodological choices made in defining MPR and the quality of the days supply figures provided by the pharmacist; and MPRs allow for little clinical interpretation. Each of these issues is discussed briefly below.
- First, as background: MPR is calculated as the sum of the days supply for all claims during a defined period of time divided by the number of days elapsed during the period. MPRs can change significantly based on how the denominator is calculated. In a previously published example in JMCP, a patient’s MPR was 0.75 when the denominator was based on the time between the first and last fill, but only 0.53 when the denominator was the entire time period. Reason being, in the first approach, the MPR is affected solely by gaps between fills. When the entire calendar period is used, the MPR is affected both by gaps and by treatment discontinuation.
- Second, MPRs defined over longer fixed time periods will, by definition, be lower due to decreases in persistency over time, so you cannot do a head-to-head comparison of a vendor who reports MPRs on a quarterly basis to another vendor who reports MPR on an annual basis.
- Third, MPRs are highly sensitive to the population included. If the report includes both new and ongoing users, an influx of new patients into the program will artificially lower the MPR when it is based on a fixed time period as the denominator. Reason being, new users have lower persistency rates than ongoing users.
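The denominator sensitivity in the first bullet is easy to demonstrate. The patient below is hypothetical (not the JMCP example), but the pattern is the same: a denominator that runs from first fill to last supply hides discontinuation, while a fixed measurement period exposes it.

```python
# Hypothetical patient: three 30-day fills (days 0, 40, 80), then stops.
fills = [(0, 30), (40, 30), (80, 30)]    # (fill_day, days_supply)
period_days = 180                        # fixed measurement period

total_supply = sum(supply for _, supply in fills)

# Denominator 1: first fill through the end of the last fill's supply.
last_fill_day, last_supply = fills[-1]
interval_days = (last_fill_day + last_supply) - fills[0][0]

mpr_interval = total_supply / interval_days   # 90 / 110, looks adherent
mpr_fixed = total_supply / period_days        # 90 / 180, reveals discontinuation

print(f"Interval-based MPR: {mpr_interval:.2f}")
print(f"Fixed-period MPR:   {mpr_fixed:.2f}")
```

The same fills yield roughly 0.82 or 0.50 depending solely on the denominator choice, which is why vendor MPRs are not comparable without knowing the methodology.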
Quality of days supply
MPRs rely on the accuracy of the days supply figure provided by the pharmacist. In the case of inhalers, injectables and liquids, these figures are notoriously unreliable so the reporting of an MPR is simply not appropriate for many medications. For oral pills, the problem is less significant but comes into play when different drugs dosages have price parity and/or pill-splitting is common.
Little clinical interpretation
The most significant limitation of the MPR is the inability to assess the clinical meaning of an observed improvement. When programs claim to improve MPR by 3-5 percentage points, it is simply unknown what clinical impact, if any, will be seen from this increase in MPR. Research examining the relationship between changes in compliance and clinical outcomes is sorely lacking. While researchers have historically used an MPR of 80% or better as the benchmark for good adherence, it is well known that this is a somewhat arbitrary cut-off, driven more by precedent than by clinical rationale.
Is there a better alternative to MPR? Yes, and I’ll share some thoughts on this alternative later this week.
The concept behind drug trend reports, now standard fare in the PBM industry, was pioneered by Barrett Toan, former CEO of Express Scripts. Toan believed that the old adage–“you cannot manage what you cannot measure”–applied to pharmacy benefit management. While common knowledge today, at the time, the industry did not understand the basics of how price, utilization, and drug mix contributed to drug trend overall, let alone by therapy class. Hence, the Drug Trend Report (DTR) was born. First published in 1997, the DTR has contributed greatly to the market’s understanding of drivers of drug trend and how to better manage wasteful drug expenditures. The value was seen immediately in the marketplace, as evidenced by the press coverage, the use by policymakers in D.C., and how quickly the other major PBMs followed suit.
Now that drug trend season is upon us, it is a good time to discuss appropriate and potentially inappropriate uses of these reports. In recent years, some organizations have attempted to use book-of-business drug trends as evidence of PBM effectiveness in managing drug trend. Such comparisons are simply not valid for a couple of reasons. First, the magnitude of drug trend across PBMs tends to be more similar than different in any given year; and as the figure below shows, underlying market trends (e.g., new product launches, generic availability) are, by far, the strongest driver of trends from one year to the next. Second, varying methodologies and client mix (e.g., health plan vs. employer, age) in any given year significantly impact drug trend, making direct comparisons highly problematic.
Prescription Drug Trend: 1996 to 2008
The second area of potential misinterpretation relates to the meaning of the drug trend number. As drug trends have slowed in recent years to single digits, plan sponsors have certainly felt some reprieve. However, this sense of relief can become problematic when a reduction in drug trend is confused with, or viewed to be the same as, a reduction in drug spend. Reason being, drug trend only examines the change in drug spend over time and provides no assessment of the appropriateness of the underlying drug spend.
Accordingly, a review of actual drug spend can paint a very different picture of current pharmacy benefit performance than does single digit drug trend. Anticonvulsants are a classic case in point. Drug trend fell nearly 30% in 2009 due to the availability of generics, but prevalence of use grew nearly 5%. Of course, the challenge is that much of the growing use in this class represents off-label use that is not scientifically supported, exceeding 70% in most studies.
Keeping these two caveats in mind, Drug Trend Reports are a useful reference tool in understanding the complexities of prescription trend drivers and cost management tools.
I finally had a chance to read the study on mail and community pharmacy preferences published in the Journal of the American Pharmacists Association, an article which has been the focus of much discussion over the last several weeks. The study, published by CVS Caremark, examined members’ choice of channel in the first 4 months after being converted from an incentivized or mandatory mail program to the company’s Maintenance Choice program, which offers an equal financial incentive for mail and 90-day retail.
The study was well-done methodologically and the conclusions did not over-interpret the data. What has been surprising to me is that the study is being discussed by some stakeholders as evidence that members, when equally incentivized, will choose mail over community pharmacy. This conclusion is not supported by the study, was not a conclusion made by the authors, and in fact, a detailed review of the study suggests the opposite conclusion.
The basis for this inference seems to be the finding that 56% of patients who were new to therapy (i.e., had never filled that particular medication at either a mail or community pharmacy) chose mail service for their prescription, a slight majority. However, if you read the study details, 76% of those patients had previously used mail for OTHER medications. Accordingly, when results are examined by whether or not the member had previously used mail for any medication, only 32% of patients with NO prior mail use chose mail for the new prescription and 63% of those with prior mail use chose mail for their new prescription. In other words, community pharmacy was the preferred choice for the majority of patients who had not previously used mail for any prescriptions.
The data for ongoing users can be broken down the same way, showing that 21% of patients with NO prior mail use chose mail for their refill and 75% of those with any prior mail use chose mail for their refill. Again, when you examine data points for both new and ongoing users, community pharmacy appears to have a slight advantage in terms of percent choosing. Furthermore, the authors note that the most important predictor of selecting mail service pharmacy was recent use of mail for another medication. Specifically, the odds ratio for selecting community pharmacy was 3.77 for community pharmacy users compared to prior mail users.
Also note that this study did not examine whether the members’ spouse had previously used mail, which I have found in previous research to be a strong predictor of mail use. Reason being, much of the initial paperwork is already in place and there is a familiarity with the mail service, reducing the barriers to use. Inclusion of this covariate would likely increase the odds ratio for prior mail use.
All that said, the authors’ conclusion, which focused on the diversity of preferences rather than on which channel had an inherent preference advantage, was on point—“Patient behavior indicates that certain patients prefer to access prescription medications via mail service and others through community pharmacy.” Bottom line: if your PBM offers competitive 90-day retail rates, take advantage of them, with the caveat that you will need to make sure your formulary and generic promotion programs remain effectively in place.
Published today in the Journal of Managed Care Pharmacy (JMCP) is an editorial on value-based insurance design, written by Kathleen Fairman and Fred Curtiss. In the article, titled “What Do We Really Know About VBID? Quality of the Evidence and Ethical Considerations for Health Plan Sponsors,” Fairman and Curtiss review the recently published studies of copay waivers and discuss the implications for payers, both private and public.
The paper notes that use of copay waivers is estimated at about 20% of plan sponsors, while plan deductibles are actually trending higher, not lower. While the reasons for the still-limited uptake have not been formally studied, the authors point to market data suggesting the “potential for short-term increases in utilization and cost” with “uncertain” health benefits and the possibility of “unintended incentives” that could reduce generic drug utilization if brand drug copayments are reduced too much.
The editorial also provides an extensive review of the challenges associated with defining low-value services, which under the original intent of value-based benefits, are those for which copays should be raised, rather than lowered.
In their review of the recent literature, the authors discuss five major weaknesses of the studies published to date, including:
- No information about generic utilization (the concern being that copay waivers for brands may discourage use of lower cost, clinically appropriate generic alternatives)
- No information about payer cost, despite claiming a positive ROI from copay waivers in some recent studies
- Problems in study design and/or reporting (e.g., inability to control for the Healthy Adherer effect)
- Lack of randomized trials
- Lack of plausibility in cost-benefit analysis (e.g., a VBID copayment reduction would have to increase the effectiveness of statin treatment by approximately 41%-49% in secondary prevention and 68%-79% in primary prevention—despite increasing medication adherence by only about 4%-6%, a clear implausibility.)
For future research, the authors warn plan sponsors to watch out for: 1) isolated significant findings; 2) causal linkages (or lack thereof); 3) reporting of total cost rather than health plan or sponsor cost; and 4) overextension of results from one study to other populations and benefit designs.
Fairman and Curtiss conclude that “Because VBID has been associated with only minimal medication adherence increases documented only in observational research, and because no health or medical utilization outcomes (e.g., ER or hospital use) have yet been reported for VBID programs, the evidence is insufficient to support expanding its use at the present time.”
VBID is a topic that I have written about extensively, both in the blog and in the published literature, and I found this review both thorough and insightful. For those of you trying to keep pace with the research in this space as well as the ongoing ethical and practical challenges associated with VBID, this paper is a great resource.