In recent weeks, the market has been inundated with evaluations of value-based insurance design (VBID), so it’s no surprise that I was asked to comment on a recent study of VBID published in American Health and Drug Benefits.
For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications, as well as three educational mailings, were compared with asthma patients who chose not to participate. The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the intervention group relative to the control group (54% versus 44%).
The critical question surrounding this study’s validity is the comparability of the intervention and control groups. Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than patients who do not. As evidence of the difference in motivation between the two groups, 99% of patients in the intervention group were enrolled in a traditional disease management program, compared with 25% in the control group. A second indication of selection bias is that, prior to program enrollment, 74% of intervention patients were using an inhaled steroid versus 64% of control group subjects. Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll. That difference is likely explained in part by selection bias and in part by the copay waivers, but it is not possible to determine from the data how much each component contributed.
Viewing these results in light of other VBID studies lends further support to the conclusion that the control group was flawed. VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class. Yet this study reported a 10 percentage point improvement, a 2.5- to 5-fold greater effect than previous studies. Advocates will likely argue that this larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.
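For transparency, the fold comparison above is simple arithmetic on the figures just cited; the sketch below uses only those numbers, and the variable names are my own:

```python
# Fold difference between this study's reported MPR gain and the
# 2-4 percentage point range seen in prior VBID evaluations.
reported_gain = 10            # percentage points, this study
prior_low, prior_high = 2, 4  # percentage points, prior studies

min_fold = reported_gain / prior_high  # vs. the strongest prior effect
max_fold = reported_gain / prior_low   # vs. the weakest prior effect
print(f"{min_fold}- to {max_fold}-fold greater effect")  # 2.5- to 5.0-fold
```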
To the authors’ credit, they acknowledge the study’s key limitation: a stronger study would have compared this employer’s asthma patients (both enrolled and not enrolled in the program) to another employer population with comparable clinical and sociodemographic characteristics. Finding a comparable group can be challenging but is not impossible, and it provides a much stronger study design for making a causal interpretation. Given what has been seen to date in other research, I would expect this more robust type of comparison to show a compliance improvement anywhere from zero to 4 percentage points.
The study also examined asthma-related pharmacy and medical costs. After controlling for covariates and baseline differences in costs, the intervention group had a lower (though not statistically significant) adjusted monthly asthma-related medical cost at 12 months of follow-up compared with the control group ($18 vs. $23, respectively; p = .067). Asthma-related pharmacy costs, however, were higher ($89 vs. $53, respectively; p < .001). Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76). However, the authors never reported total asthma-related costs as I just did, and the study abstract mentions only all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.
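The cost totals above can be verified with a short calculation; the dollar figures are the study’s adjusted monthly costs as quoted here, and the variable names are my own:

```python
# Summing the study's reported adjusted monthly asthma-related costs
# (medical + pharmacy) for each group.
intervention = {"medical": 18, "pharmacy": 89}  # dollars per month
control = {"medical": 23, "pharmacy": 53}       # dollars per month

total_intervention = sum(intervention.values())  # 107
total_control = sum(control.values())            # 76

print(f"Intervention total: ${total_intervention}/month")
print(f"Control total: ${total_control}/month")
print(f"Intervention exceeds control by ${total_intervention - total_control}/month")
```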
The use of non-participants as a control group, while known to be a weak research design, sometimes occurs because of its convenience and employer preference, but it simply prohibits making any causal conclusions. The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or “spin” as others have called it. It is a questionable practice to watch for, as I have seen it used elsewhere in the pharmaceutical policy literature, such as in step therapy evaluations, in the absence of any plausible explanation.