Archive for category Publication Bias

Do Cost-Effectiveness Models Need a Reality Check?

In a thoughtful commentary published in the British Medical Journal, clinical researchers from Europe question the claims of cost-effectiveness made for many commonly used pharmacological treatments. The authors argue that “…although there are claims that important preventive drugs such as statins, antihypertensives, and bisphosphonates are cost effective, there are no valid data on the effectiveness, and particularly the cost effectiveness, in usual clinical care. Despite this dearth of data, the majority of clinical guidelines and recommendations for preventive drugs rest on these claims.”

The authors cite a 2009 study, which examined a cost-effectiveness model of selective cyclo-oxygenase-2 (COX-2) inhibitors, as evidence of the weak external validity of cost-effectiveness claims. The COX-2 evaluation, which was based on a clinical trial, found that the cost of avoiding one adverse gastrointestinal event by switching patients from conventional non-steroidal anti-inflammatory drugs to COX-2 inhibitors was approximately $20,000. In contrast, when the same analysis was conducted using the UK’s General Practice Research Database, which contains patients’ medical records from routine care, the cost of preventing one bleed was over $100,000.

These findings are similar to work that colleagues and I conducted several years ago on the COX-2s. While the original U.S. cost-effectiveness model reported a cost per year of life saved (YLS) of about $19,000 for COX-2s compared with non-selective NSAIDs, our revised model, which was based on actual practice, found a cost per YLS of $107,000.
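
To make the mechanics concrete, here is a minimal sketch of the cost-effectiveness arithmetic behind these comparisons. All of the inputs are hypothetical, chosen only to produce ratios of roughly the magnitude reported above; they are not the actual inputs of either COX-2 model.

```python
# Illustrative cost-effectiveness ratio: incremental cost divided by the
# incremental health outcome (e.g., years of life saved). All numbers are
# hypothetical placeholders, not inputs from the published COX-2 models.

def cost_per_outcome(incremental_cost: float, outcomes_gained: float) -> float:
    return incremental_cost / outcomes_gained

# Trial-based scenario: high adherence and high-risk patients make the
# drug look efficient.
trial_icer = cost_per_outcome(incremental_cost=2_000_000, outcomes_gained=105)

# Real-world scenario: the same spend buys fewer outcomes once adherence
# and baseline risk reflect routine care.
real_world_icer = cost_per_outcome(incremental_cost=2_000_000, outcomes_gained=18.7)

print(f"Trial-based cost per YLS: ${trial_icer:,.0f}")      # roughly $19,000
print(f"Real-world cost per YLS:  ${real_world_icer:,.0f}")  # roughly $107,000
```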

In a different study, we examined the external validity of a cost-effectiveness model of treatment options for eradication of H. pylori. The original decision-analytic model found that the lowest cost per effectively treated patient was for the dual combination of a PPI and clarithromycin ($980), whereas we found that the lowest cost per effectively treated patient was for the triple combination of bismuth, metronidazole, and tetracycline, at a cost of $852. Why the disconnect? In the original H. pylori model, the authors had made assumptions about medication compliance and the cost of recurrence that simply did not hold up in the real world.
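
Decision-analytic models of this kind are, at bottom, expected-cost calculations, which makes it easy to see how a compliance assumption can flip a regimen ranking. The sketch below is purely illustrative: the function, rates, and costs are hypothetical stand-ins, not the published model’s structure or inputs.

```python
# Hypothetical sketch of a cost-per-effectively-treated-patient model.
# All costs and rates below are invented for illustration.

def cost_per_cure(regimen_cost: float, efficacy: float, compliance: float,
                  recurrence_rate: float, recurrence_cost: float) -> float:
    """Expected cost per durable eradication, where treatment failures
    incur recurrence-related costs with some probability."""
    cure_rate = efficacy * compliance
    expected_cost = regimen_cost + (1 - cure_rate) * recurrence_rate * recurrence_cost
    return expected_cost / cure_rate

# Under the model's optimistic compliance assumptions, the dual regimen wins...
dual_model = cost_per_cure(800, efficacy=0.90, compliance=0.95,
                           recurrence_rate=0.6, recurrence_cost=1_500)
triple_model = cost_per_cure(750, efficacy=0.90, compliance=0.80,
                             recurrence_rate=0.6, recurrence_cost=1_500)

# ...but if real-world compliance with the dual regimen falls short of the
# assumption, the ranking reverses.
dual_real = cost_per_cure(800, efficacy=0.90, compliance=0.65,
                          recurrence_rate=0.6, recurrence_cost=1_500)
triple_real = cost_per_cure(750, efficacy=0.90, compliance=0.75,
                            recurrence_rate=0.6, recurrence_cost=1_500)

print(f"Model assumptions: dual ${dual_model:,.0f} vs triple ${triple_model:,.0f}")
print(f"Real-world inputs: dual ${dual_real:,.0f} vs triple ${triple_real:,.0f}")
```

The point is not the specific numbers but the sensitivity: the regimen ranking turns entirely on the compliance and recurrence inputs.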

In the case of the COX-2s, the commentary concluded that the published cost-effectiveness analyses of COX-2 inhibitors neither had external validity nor represented the patients treated in clinical practice. The authors emphasized that external validity should be an explicit requirement for cost-effectiveness analyses that are used to guide treatment policies and practices. At least one academic modeler vehemently disagrees with the requirement of external validity, arguing that “it is wrong to insist that models be ‘validated’ by events that have not yet occurred; after all, the modeler cannot anticipate advances in technology, or changes in human behavior or biology. All that can be expected is that the model reflects the current state of knowledge in a reasonable way, and that it is free of logical errors.”

It is true that right when a drug comes to market, the only available data will likely be from the original clinical trials used to seek FDA approval, and the modeler will be forced to make numerous assumptions about compliance, costs, concomitant medication use, and so on. The problem is that the extent to which these assumptions are made without bias is unclear. Research has shown that sponsorship by the pharmaceutical industry affects the results of economic models. In a review published in 2010, researchers found that 95 percent of studies sponsored by pharmaceutical manufacturers reported favorable conclusions, compared with only 50 percent of nonsponsored studies. While it could be argued that this reflects a publication bias, the validation studies described above suggest otherwise. In each of these cases, there were key assumptions driving model outcomes that, from a plan sponsor perspective, we found highly questionable at the time the model was first published.

Surprisingly, the issue of model validity receives relatively little attention given the central role these models play in pharmacoeconomics, for example in the AMCP dossier process. The commentary’s authors argue that real-world comparative studies are the key to producing cost-effectiveness models that possess external validity. That will certainly help the quality of models built after FDA approval. For models used at the time a drug is launched, however, I ultimately expect that plan sponsors will have to develop their own models to ensure systematic bias is removed.


Prescription Copay Waivers Plus Educational Mailings for Asthma Improve Compliance—Causality or Correlation?

In recent weeks, the market has been inundated with evaluations of value-based insurance design (VBID), so it’s no surprise that I was asked to comment on a recent study of VBID published in American Health and Drug Benefits.

For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications as well as three educational mailings were compared with asthma patients who chose not to participate in the program. The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the intervention group than for the control group (54% versus 44%).
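
For readers less familiar with the measure, MPR is simply the share of the observation period covered by dispensed days’ supply. A minimal sketch, using hypothetical fills:

```python
from datetime import date

# Medication possession ratio: total days' supply dispensed divided by the
# number of days in the observation window (commonly capped at 1.0).
def mpr(days_supplied: list[int], period_start: date, period_end: date) -> float:
    period_days = (period_end - period_start).days + 1
    return min(sum(days_supplied), period_days) / period_days

# Hypothetical patient with four 30-day controller fills over a year:
fills = [30, 30, 30, 30]
print(f"MPR = {mpr(fills, date(2010, 1, 1), date(2010, 12, 31)):.0%}")  # 33%
```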

The critical question surrounding this study’s validity is the comparability of the control and intervention groups. Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than patients who do not enroll. As evidence of the difference in motivation between the two groups, 99% of patients in the intervention group versus 25% in the control group were enrolled in a traditional disease management program. A second piece of evidence of selection bias is that 74% of intervention patients were using an inhaled steroid prior to program enrollment, versus 64% of control subjects. Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll. That difference is likely explained partly by selection bias and partly by the copay waivers, but it is not possible to determine from the data how much each component contributed.

Viewing these results in light of other studies of VBID provides further evidence of the flawed control group. VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class. Yet this study reported a 10 percentage point improvement, a 2.5- to 5-fold greater effect than previous studies have found. Advocates will likely argue that the larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.

To the authors’ credit, they acknowledge the study’s key limitation: a stronger study would have compared this employer’s asthma patients (both enrolled and not enrolled in the program) with another employer population having comparable clinical and sociodemographic characteristics. Finding a comparable group can be challenging, but it is not impossible, and it provides a much stronger design for making a causal interpretation. Given what has been seen to date in other research, I would expect this more robust type of comparison to show a compliance improvement of anywhere from zero to 4 percentage points.

The study also examined asthma-related pharmacy and medical costs. After controlling for covariates and baseline differences in costs, the intervention group had a lower (but not statistically significant) adjusted monthly medical cost at 12 months of follow-up compared with the control group ($18 vs. $23; p = .067). Asthma-related pharmacy costs were higher for the intervention group ($89 vs. $53; p < .001). Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76). However, the authors never reported total asthma-related costs as I just did, and the study abstract mentions only all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.
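
The total is easy to reproduce from the reported component figures:

```python
# Monthly asthma-related costs per member, as reported in the study.
intervention = {"medical": 18, "pharmacy": 89}
control = {"medical": 23, "pharmacy": 53}

print(f"Intervention total: ${sum(intervention.values())}")  # $107
print(f"Control total:      ${sum(control.values())}")       # $76
```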

The use of non-participants as a control group, while known to be a weak research design, sometimes occurs because of its convenience and employer preference, but it simply prohibits making any causal conclusions. The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or “spin” as others have called it. It is a questionable practice to watch for; I have seen it used elsewhere in the pharmaceutical policy literature, such as in step therapy evaluations, in the absence of any plausible explanation.


Do Drugs Save Money? Check The Assumptions

Econometrics has finally discovered the hidden truth about pharmaceuticals: they save money, a lot of money. A recent study found that compliance with pharmaceuticals demonstrated an 8.4:1 return on investment (ROI) for congestive heart failure, 10.1:1 for hypertension, 6.7:1 for diabetes, and 3.1:1 for dyslipidemia. This is certainly a surprising result given the contradictory findings from other credible research on the cost-effectiveness of pharmaceuticals. However, it is likely to capture some headlines given its intuitive appeal, so I took a closer look.

This study, published by researchers at one of the large PBMs, examined the correlation between medication compliance and the use of other healthcare services over several years for a large population of commercially insured members with a diabetes or cardiovascular-related diagnosis. It is well known in the published literature that drawing causal conclusions from a cross-sectional examination of medication compliance against total medical spend is highly problematic because of the Healthy Adherer Effect: the tendency of people who are adherent to their medications to also engage in other healthy behaviors, such as exercising regularly and eating a healthy diet. The reason is that it is difficult, if not impossible, to control for differences in patient behavior that cannot be measured in claims data. In this particular study, the Healthy Adherer Effect was likely compounded by assigning a compliance score of zero to patients having a diagnosis but no prescription claim for the condition.

The study authors stated that they overcame the systematic selection bias created by the Healthy Adherer Effect through an econometric technique called fixed-effects regression, essentially an alternative to the more commonly used ordinary least squares (OLS) regression. As is often the case with econometric models, to believe in the superiority of fixed-effects modeling, you have to believe a key assumption the model makes: that confounders, such as healthy eating and exercise, do not vary over time in conjunction with medication compliance. As the authors acknowledge in the references, “fixed-effects modeling does not allow for the control of confounders that vary over time. Thus, for example, if patients who become adherent simultaneously start exercising regularly (assuming that both of these behavioral changes reduce health services use and spending), the estimated impact of adherence would remain biased.” The authors make no case for why this assumption would hold, and it hardly seems plausible given what is already known on the subject.
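
To see why the estimate stays biased under exactly the scenario the authors describe, consider a minimal simulation sketch. The dollar effects and probabilities below are entirely hypothetical; the point is that a within (fixed-effects) estimator, which can only remove confounding that is constant within a patient, absorbs a time-varying exercise effect into the adherence coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_periods = 5_000, 8

# True effects (hypothetical): adherence saves $500/period, exercise $800/period.
BETA_ADHERENCE, GAMMA_EXERCISE = -500.0, -800.0

# Half the patients become adherent at a random period; 80% of those
# simultaneously start exercising: the time-varying confounder.
t = np.arange(n_periods)
switch_period = rng.integers(1, n_periods, n_patients)
ever_adherent = rng.random(n_patients) < 0.5
adherence = (t >= switch_period[:, None]) & ever_adherent[:, None]
exercise = adherence & (rng.random(n_patients) < 0.8)[:, None]

# Patient fixed effect plus noise; spending in dollars per period.
alpha = rng.normal(4_000, 1_000, n_patients)[:, None]
spend = (alpha + BETA_ADHERENCE * adherence + GAMMA_EXERCISE * exercise
         + rng.normal(0, 500, (n_patients, n_periods)))

# Fixed-effects (within) estimator: demean within patient and regress,
# leaving exercise out of the model, as a claims-only analysis must.
y = spend - spend.mean(axis=1, keepdims=True)
x = adherence - adherence.mean(axis=1, keepdims=True)
beta_fe = (x * y).sum() / (x ** 2).sum()

print(f"True adherence effect:  {BETA_ADHERENCE:+.0f}")
print(f"Fixed-effects estimate: {beta_fe:+.0f}")  # far more negative than -500
```

In expectation the estimate lands near the blended adherence-plus-exercise effect (about -1,140 under these made-up inputs) rather than the true -500, which is precisely the bias the quoted passage concedes.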

While such fixed-effects estimators may be an improvement on basic cross-sectional methods, they are still quite limited when it comes to uncovering a true causal effect when the confounders vary over time; and, like OLS, they will tend to overestimate the causal effect of pharmaceuticals on medical spending in the presence of the Healthy Adherer Effect. Some evidence to that effect lies in the manuscript’s appendix: for most of the conditions, the difference between the OLS estimate, which the authors acknowledge does not address the selection bias problem, and the fixed-effects estimate was minimal (e.g., a marginal-effects estimate for inpatient days in heart failure of 5.731 for OLS versus 5.715 for fixed effects).

Setting aside the more technical discussion, however, one of the most informative and practical ways to test the strength of these results is to conduct a plausibility test of the ROI, which means comparing these findings against credible randomized controlled trials of medication efficacy. I’ll show you those results in an upcoming post. Of course, none of this discussion is meant to minimize the importance of improved medication compliance. It continues to be a critically important gap to address, but employers and other plan sponsors should be given realistic expectations about the economic value of improved compliance.
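
As a preview of the mechanics only (every input below is a hypothetical placeholder, not a figure from the study or from any specific trial), the test translates the claimed ROI into the medical savings it implies per patient, then asks whether trial-demonstrated efficacy could plausibly generate savings of that size.

```python
# Plausibility test sketch: what medical savings does a claimed ROI imply,
# and what could trial efficacy plausibly deliver? All inputs hypothetical.

annual_drug_spend = 1_200   # per adherent patient, per year
claimed_roi = 10.1          # assumed to mean savings per dollar of drug spend
implied_savings = claimed_roi * annual_drug_spend

# Trial-anchored benchmark: absolute risk reduction for a costly event,
# multiplied by the cost of that event (both numbers invented).
absolute_risk_reduction = 0.02   # 2 percentage points per patient-year
cost_per_event = 25_000
trial_based_savings = absolute_risk_reduction * cost_per_event

print(f"Implied medical savings: ${implied_savings:,.0f} per patient per year")
print(f"Trial-anchored savings:  ${trial_based_savings:,.0f} per patient per year")
```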


Pharmaceutical Outcomes Unveiled

The study and application of health outcomes research to the management of pharmaceuticals is a messy business. Research tools range from the large randomized controlled trial to the small, self-reported health outcomes study. Considerable uncertainty still exists about the best methodology for many areas of inquiry, and commercial interests and publication bias run rampant. While pharmaceutical manufacturers are the most studied offenders, health care vendors of all kinds are potential violators; and of course, bias is not limited to those with commercial interests.

A study published earlier this year in JAMA once again brought attention to the extent of the bias problem, with headlines reporting “Science for Sale.” Reviewing top medical journals, the researchers examined over 600 studies from 2006 that had reported statistically non-significant primary outcomes. They then conducted a detailed analysis of 72 of those they considered to be of the highest quality, all of them randomized controlled trials. Upon detailed review, they found that 50 percent of the articles misrepresented the data in the study conclusions, leaving the impression that the treatments were effective even though the primary results indicated otherwise. This “spin,” which the study authors define as specific reporting strategies, whatever their motive, to highlight that the experimental treatment is beneficial despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results, also appeared in nearly 60 percent of the conclusions in the study abstracts.

If the conclusions in 50 percent of the studies published in top medical journals are spun toward favorable findings, what is the magnitude of distortion in health outcomes research related to pharmaceuticals, where the methodological choices are greater, the standards less well defined, the chance of study registration prior to initiation far lower, and the quality of peer review often suspect? The issue of bias only compounds decision-makers’ challenge in reviewing and applying the rapidly growing body of health outcomes research to make well-informed decisions about their pharmacy benefits. Recognizing this challenge, in the months ahead our plan is to laud those who use the right evaluation methods for health outcomes assessment, to call out those who do not, and to provide simple tools that help decision-makers build their knowledge and see through the rhetoric. Finally, by adding another voice on the side of rigorous analysis, we hope the truth about what works and what doesn’t will continue to crowd out studies that are merely repackaged marketing.
