Archive for category Methodology

Do Cost-Effectiveness Models Need a Reality Check?

In a thoughtful commentary published in the British Medical Journal, clinical researchers from Europe question the claims of cost-effectiveness made for many commonly used pharmacological treatments. The authors argue that "…although there are claims that important preventive drugs such as statins, antihypertensives, and bisphosphonates are cost effective, there are no valid data on the effectiveness, and particularly the cost effectiveness, in usual clinical care. Despite this dearth of data, the majority of clinical guidelines and recommendations for preventive drugs rest on these claims."

The authors cite a 2009 study, which examined a cost-effectiveness model of selective cyclo-oxygenase-2 (COX-2) inhibitors, as evidence of the weak external validity of cost-effectiveness claims. The COX-2 evaluation, which was based on a clinical trial, found that the cost of avoiding one adverse gastrointestinal event by switching patients from conventional non-steroidal anti-inflammatory drugs (NSAIDs) to COX-2 inhibitors was approximately $20,000. In contrast, when the same analysis was conducted using the UK's General Practice Research Database, which captures patients' medical records in routine care, the cost of preventing one bleed was over $100,000.

These findings are similar to work that colleagues and I conducted several years ago on the COX-2s. While the original U.S. cost-effectiveness model reported a cost per year of life saved (YLS) of about $19,000 for COX-2s compared with non-selective NSAIDs, our revised model, which was based on actual practice, found a cost per YLS of $107,000.
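To see how quickly these ratios move, consider a minimal sketch of the underlying arithmetic: cost per event avoided is incremental cost divided by absolute risk reduction, so a smaller realized risk reduction in routine care inflates the ratio sharply. The inputs below are hypothetical placeholders chosen only to reproduce the order-of-magnitude shift, not the actual figures from either model.

```python
# Illustrative sketch only: hypothetical inputs, not the published models' figures.

def cost_per_event_avoided(incremental_cost, event_rate_comparator, event_rate_treatment):
    """Cost to avoid one adverse event = incremental cost per patient
    divided by the absolute risk reduction."""
    absolute_risk_reduction = event_rate_comparator - event_rate_treatment
    return incremental_cost / absolute_risk_reduction

# Trial-like assumption: a large risk reduction in a selected, adherent population.
print(round(cost_per_event_avoided(300, 0.030, 0.015)))  # 300 / 0.015 = $20,000 per event avoided

# Routine-care assumption: same drug cost, much smaller realized risk reduction.
print(round(cost_per_event_avoided(300, 0.018, 0.015)))  # 300 / 0.003 = $100,000 per event avoided
```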

In a different study, we examined the external validity of a cost-effectiveness model of treatment options for eradication of H. pylori. The original decision-analytic model found that the lowest cost per effectively treated patient was for the dual combination of a PPI and clarithromycin ($980), whereas we found that the lowest cost was for the triple combination of bismuth, metronidazole, and tetracycline, at $852. Why the disconnect? In the original H. pylori model, the authors had made assumptions about medication compliance and the cost of recurrence that simply did not hold up in the real world.
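The mechanism is easy to see in a back-of-the-envelope sketch. With invented regimen costs, eradication rates, and compliance figures (none of them the published inputs), the ranking of regimens flips once trial-level compliance is replaced with a more realistic figure:

```python
# Hypothetical numbers chosen to illustrate the mechanism, not the published inputs.

def cost_per_effectively_treated(regimen_cost, eradication_rate, compliance, failure_cost):
    """Expected cost per patient divided by the share effectively treated.
    Failed eradication incurs downstream costs (retreatment, recurrence)."""
    effective_rate = eradication_rate * compliance      # cure rate realized in practice
    expected_cost = regimen_cost + (1 - effective_rate) * failure_cost
    return expected_cost / effective_rate

# A dual regimen under trial-like compliance looks cheapest per effectively treated patient...
print(round(cost_per_effectively_treated(600, 0.90, 0.95, 400)))   # ~770
# ...but under real-world compliance the ratio climbs...
print(round(cost_per_effectively_treated(600, 0.90, 0.75, 400)))   # ~1081
# ...and a cheaper regimen can come out ahead despite a lower nominal cure rate.
print(round(cost_per_effectively_treated(350, 0.88, 0.80, 400)))   # ~665
```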

In the case of the COX-2s, the recent commentary concluded that the published cost-effectiveness analyses of COX-2 inhibitors neither had external validity nor represented the patients treated in clinical practice. The authors emphasized that external validity should be an explicit requirement for cost-effectiveness analyses that are used to guide treatment policies and practices. At least one academic modeler vehemently disagrees with this requirement, arguing that "it is wrong to insist that models be 'validated' by events that have not yet occurred; after all, the modeler cannot anticipate advances in technology, or changes in human behavior or biology. All that can be expected is that the model reflects the current state of knowledge in a reasonable way, and that it is free of logical errors."

It is true that right when a drug comes to market, the only available data will likely be from the original clinical trials used to seek FDA approval, and the modeler will be forced to make numerous assumptions about compliance, costs, concomitant medication use, etc.  The problem is that the extent to which these assumptions are made without bias is unclear.  Research has shown that sponsorship by the pharmaceutical industry affects the results of economic models.  In a review published in 2010, researchers found that 95 percent of studies sponsored by pharmaceutical manufacturers reported favorable conclusions compared to only 50 percent of nonsponsored studies.  While it could be argued that this reflects a publication bias, the validation studies that I have described above suggest otherwise.  In each of these cases, there were key assumptions that drove model outcomes which, from a plan sponsor perspective, we found highly questionable at the time the model was first published.

Surprisingly, the issue of model validity receives relatively little attention given the central role these models play in the field of pharmacoeconomics, for example in the AMCP dossier process. The commentary authors argue that real-world comparative studies are the key to producing cost-effectiveness models that possess external validity. That will certainly help the quality of models developed post-FDA approval. For models used at the time a drug is launched, however, I expect that plan sponsors will ultimately have to develop their own models to ensure systematic bias is removed.


Persistency Is The Best Measure of Medication Adherence

In an earlier post, I discussed that while medication possession ratios (MPRs) tend to dominate the marketplace reporting tools for medication adherence programs, MPR is not, in fact, the best measure of medication adherence. The reason: the results of an MPR analysis depend heavily on the methodological choices made in defining MPR and on the quality of the days-supply figure provided by the pharmacist. Most importantly, MPRs allow for little to no clinical interpretation.

Of all the measurement options that exist, I find persistency to be the most useful. Persistency is a dichotomous (yes/no) measure based on length of therapy: it tells whether a patient's length of therapy meets or exceeds a certain threshold. For example, it is common for published studies to report the percent of newly treated patients who were still persistent with their medication one year after starting therapy. There are two key reasons why I prefer persistency over other adherence measures:

1.  It measures the most significant medication adherence problem, i.e., whether patients stop taking their medication altogether. While small gaps in therapy are not desirable, they are far less prevalent and far less clinically impactful than discontinuing treatment altogether. Studies have shown that between 40 and 60% of newly treated patients have stopped taking their chronic medications after one year. The figure below, from a BMJ study of antihypertensive users, reports both persistency over time and non-adherence due to poor execution of the dosing regimen. As the authors state, "non-execution of the dosing regimen created a shortfall in drug intake that is an order of magnitude smaller than the shortfall created by early discontinuation/short persistence." Discontinuation has to be the top priority, and organizations will manage what they measure, so measure persistency.

[Figure: persistency over time versus non-execution of the dosing regimen among antihypertensive users (BMJ)]

2.  The clinical interpretation is unequivocal. As I discussed in the previous post, improvements in MPR are very difficult to interpret in terms of clinical impact because there is little data on the relationship between differences in MPR, or gaps in therapy, and clinical outcomes. However, when a patient discontinues a chronic medication altogether, the clinical impact is far clearer (assuming the patient was an appropriate candidate for the medication to begin with, a topic for another day). The negative clinical and economic impact of discontinuation can be forecast for a population based on published randomized trials.

Another advantage of a persistency measure is that it is less susceptible to errors in the days-supply figures provided by the pharmacist, because the analysis typically allows a gap of 30 or more days in therapy before labeling someone non-persistent. That said, persistency is still vulnerable, albeit less so than MPR, to differences in methodological approach. The two key decisions in a persistency analysis are 1) how long to follow patients and 2) what gap in therapy will be considered non-persistence (e.g., a 30-day, 60-day, or 90-day gap). Obviously, the longer you follow patients, the lower the persistency rate will be; and the larger the gap required before a patient is labeled non-persistent, the higher the persistency rate will be. Accordingly, if you are comparing vendors, make sure they use the same follow-up length and gap criteria. In addition, persistency rates for new versus ongoing users look dramatically different, so you should always ask that the two groups be reported separately to prevent changes in the mix of members from artificially influencing your results.
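As a sketch of what such an analysis involves, the fragment below classifies a single patient as persistent or not from pharmacy-claims data; the two methodological decisions above appear as the two parameters. Field names, dates, and thresholds are illustrative, not a vendor's actual algorithm.

```python
from datetime import date, timedelta

def is_persistent(fills, followup_days=365, allowable_gap=60):
    """fills: list of (fill_date, days_supply) tuples, sorted by fill_date.
    Returns False if any gap in drug coverage within the follow-up window
    exceeds allowable_gap, including a final gap running to end of follow-up."""
    if not fills:
        return False
    start = fills[0][0]
    end_of_followup = start + timedelta(days=followup_days)
    covered_until = start
    for fill_date, days_supply in fills:
        if fill_date > end_of_followup:
            break
        if (fill_date - covered_until).days > allowable_gap:
            return False                      # gap between fills exceeded the threshold
        covered_until = max(covered_until, fill_date + timedelta(days=days_supply))
    # Coverage (plus the gap allowance) must reach the end of the follow-up window.
    return (end_of_followup - covered_until).days <= allowable_gap

# Three 30-day fills, then nothing: non-persistent over a 180-day follow-up.
fills = [(date(2011, 1, 1), 30), (date(2011, 2, 5), 30), (date(2011, 3, 20), 30)]
print(is_persistent(fills, followup_days=180, allowable_gap=60))   # False
```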

Perhaps the biggest reason persistency is not reported as frequently as MPR is that it requires member-level analysis over time, controlling for eligibility. This involves more complex analytics and longer processing time for large groups of patients, but given the reasons I've outlined above, it is worth the effort.



What is the best measure of medication adherence?

Over the last year, one of the most common questions I have received is how best to measure medication compliance or adherence for routine program reporting. Given the slowed growth in utilization and the need to differentiate, it seems that most PBMs and health plans have placed a renewed emphasis on improving medication adherence.

While medication possession ratios (MPRs), or versions of MPR, tend to dominate the reporting tools, MPR is not, in fact, the best measure of medication adherence. Why? The results of an MPR analysis depend heavily on the methodological choices made in defining MPR and on the quality of the days-supply figure provided by the pharmacist, and MPRs allow for little clinical interpretation. Each of these issues is discussed briefly below.

Methodological Choices 

  • First, as background: MPR is calculated as the sum of the days supply for all claims during a defined period of time divided by the number of days elapsed during the period. MPRs can change significantly based on how the denominator is calculated (see the sketch after this list). In a previously published example in JMCP, a patient's MPR was 0.75 when the denominator was the time between the first and last fill, but only 0.53 when the denominator was the entire time period. The reason: in the first approach, the MPR is affected solely by gaps between fills; when the entire calendar period is used, the MPR is affected both by gaps and by treatment discontinuation.
  • Second, MPRs defined over longer fixed time periods will, by definition, be lower due to decreases in persistency over time, so you cannot do a head-to-head comparison of a vendor who reports MPRs on a quarterly basis against another vendor who reports MPR on an annual basis.
  • Third, MPRs are highly sensitive to the population included. If the report includes both new and ongoing users, an influx of new patients into the program will artificially lower the MPR when it is based on a fixed time period as the denominator, because new users have lower persistency rates than ongoing users.
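Here is a minimal sketch of the two denominator choices from the first bullet, using invented fill dates and days supply (the contrast mirrors the JMCP example rather than reproducing its exact data):

```python
from datetime import date

# Invented fills: six 30-day fills, then discontinuation for the rest of the year.
fills = [(date(2011, 1, 1), 30), (date(2011, 2, 15), 30), (date(2011, 4, 1), 30),
         (date(2011, 5, 15), 30), (date(2011, 7, 1), 30), (date(2011, 8, 15), 30)]
total_supply = sum(days for _, days in fills)   # 180 days of medication

# Denominator 1: first fill through the end of the last fill's supply (captures gaps only).
span = (fills[-1][0] - fills[0][0]).days + fills[-1][1]
print(round(total_supply / span, 2))    # ~0.70: penalized only for gaps between fills

# Denominator 2: the full 365-day observation window (captures gaps AND discontinuation).
print(round(total_supply / 365, 2))     # ~0.49: discontinuation drags the ratio down
```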


Quality of days supply

MPRs rely on the accuracy of the days-supply figure provided by the pharmacist. In the case of inhalers, injectables, and liquids, these figures are notoriously unreliable, so reporting an MPR is simply not appropriate for many medications. For oral pills, the problem is less significant but comes into play when different drug dosages have price parity and/or pill-splitting is common.

Little clinical interpretation

The most significant limitation of the MPR is the inability to assess the clinical meaning of an observed improvement. When programs claim to improve MPR by 3-5 percentage points, it is simply unknown what clinical impact, if any, will result from this increase. Research examining the relationship between changes in compliance and clinical outcomes is sorely lacking. While researchers have historically used an MPR of 80% or better as the benchmark for good adherence, it is well known that this is a somewhat arbitrary cut-off, driven more by precedent than by clinical rationale.

Is there a better alternative to MPR?  Yes, and I’ll share some thoughts on this alternative later this week.



The Evidence Base for Step Therapy for Pharmaceuticals

While step therapy has been around for a decade or more, it still represents some of the lowest-hanging fruit for plan sponsors who want to improve the value received from their pharmacy benefit with minimal member disruption. Today, more than 50 therapy classes present an opportunity for step therapy, including several specialty medication classes.

When I recently returned to pharmaceutical policy work, I was surprised to see how few evaluations of step therapy had been conducted in the last decade, particularly considering step therapy’s high degree of popularity among plan sponsors in both private and public settings.  I was also surprised to see that most of the evaluations of step therapy that had actually examined clinical and economic outcomes were funded by pharmaceutical manufacturers rather than managed care organizations that can provide thought leadership on this benefit tool.

In a paper published today in the Journal of Managed Care Pharmacy, I review the literature on step therapy and highlight important areas for future research. Clearly, evaluations of step therapy are needed for numerous therapy classes of clinical significance, such as statins and specialty medications. This is important because one would fully expect patient response and the clinical implications of patient choices to vary by therapy class and by the underlying indication. The lack of evaluations of step therapy in the Medicare Part D population is a particularly notable gap.

Most of the research to date has focused on the drug cost savings of step therapy, a necessary condition for step therapy's uptake but certainly not the only outcome of interest. While savings from step therapy are widely known within healthcare organizations, a better understanding of the clinical profile of patients who receive prior authorization for brand medications, or who receive no medication following a step edit, is an important area of inquiry, as it has the potential to affect not only economic outcomes but also clinical outcomes and member satisfaction. Perhaps the second most notable gap is the lack of evaluation of alternative program designs that are growing in popularity, such as removal of grandfathering and integration of medical claims into real-time step edits.

 In the paper, I also discuss some of the key methodological concerns with the publications to date and highlight examples of potential bias.  Based on this review, here are a few methodological tactics you should watch for when considering step therapy evaluations:

  • Reporting of non-significant findings as if they were statistically significant
  • Evaluation of all-cause medical expenses rather than disease-related expenses (all-cause medical costs are highly variable and have a greater chance of showing random differences across groups for reasons that have nothing to do with the program being evaluated)
  • Inclusion of patients who were unaffected by the program in the calculation of drug savings, which will reduce the magnitude of apparent savings (see the sketch after this list)
  • Examination of a small subpopulation of patients affected with extrapolations to the entire program
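To see how the third tactic works, here is a toy calculation with invented figures showing how averaging savings over patients the program never touched understates the per-patient effect:

```python
# Toy illustration with invented figures: diluting savings across unaffected patients.
affected = 1_000               # patients who actually hit the step edit
unaffected = 9_000             # patients in the class who never faced the edit
savings_per_affected = 300.0   # drug savings realized per affected patient

total_savings = affected * savings_per_affected
per_affected_patient = total_savings / affected                  # $300
per_enrolled_patient = total_savings / (affected + unaffected)   # $30

print(per_affected_patient, per_enrolled_patient)   # 300.0 vs 30.0: a tenfold understatement
```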

As I point out in the paper, the popularity of step therapy among commercial, Medicaid, and Medicare plans is no doubt due to the wide availability of generic alternatives that offer significant savings, the strong clinical evidence that typically underlies these programs, and their ability to affect only new users, thereby minimizing member disruption. It is important that evaluations of step therapy keep pace with its growing use in order to optimize program design and patient outcomes.

