The Centers for Medicare & Medicaid Services (CMS) on Friday proposed a rule to amend the methodology for the U.S. Department of Health and Human Services’ risk adjustment data validation (HHS-RADV) program. The technical changes, CMS said, will provide states and payers in the Affordable Care Act market with a more stable and predictable regulatory framework, promote program integrity, and increase competition. In this article, RISE looks at the proposed changes and asks J. Gabriel McGlamery, J.D., senior HCR policy consultant for Florida Blue Center for Health Policy and a member of RISE’s Risk Adjustment Policy Committee, to weigh in.

The risk adjustment program aims to reduce incentives for payers to “cherry-pick” healthy, low-risk individuals by compensating insurers who have sicker enrollees and therefore have higher medical costs. The program transfers funds from plans with relatively low-risk enrollees to plans that have higher-risk members. The formula spreads the financial risk across the markets and allows insurers to compete with one another based on price, efficiency, and service quality.

Risk adjustment state transfers are calculated separately for the individual non-catastrophic, catastrophic, and small group market risk pools within a state.
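To make the transfer concept concrete, here is a minimal sketch in Python of a budget-neutral transfer among plans in one hypothetical risk pool. The plan names, risk scores, and premium figure are invented, and this is not the actual HHS state payment transfer formula, which also accounts for factors such as actuarial value, allowable rating factors, and geography.

```python
# Toy illustration of a budget-neutral risk adjustment transfer -- hypothetical
# figures only, NOT the actual HHS state payment transfer formula.
plans = {
    "Plan A": 0.85,   # hypothetical plan-level risk score (healthier enrollees)
    "Plan B": 1.20,   # hypothetical plan-level risk score (sicker enrollees)
    "Plan C": 0.95,
}
statewide_avg_premium = 450.0  # hypothetical dollars per member per month

market_avg_risk = sum(plans.values()) / len(plans)

# Plans below the market-average risk pay in; plans above it receive funds.
transfers = {
    name: round((risk - market_avg_risk) * statewide_avg_premium, 2)
    for name, risk in plans.items()
}
print(transfers)                          # positive = receives, negative = pays
print(round(sum(transfers.values()), 2))  # nets to 0: transfers are budget neutral
```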

The HHS-RADV program validates the accuracy of the data payers submit, which is used to calculate the amount of money transferred among insurers based on the risks of the individuals they enroll. The process is meant to ensure that risk adjustment transfers reflect verifiable risk differences among payers, rather than risk score calculations based on poor data quality.

The proposed rule issued on Friday would amend the methodology in the HHS-RADV program as follows:

Error-rate calculation

CMS wants to tweak the HHS-RADV error estimation methodology it uses to determine adjustments to payers’ previously calculated risk adjustment risk scores and transfers. Currently, the program only adjusts issuers whose HCC validation failure rate makes them an outlier, which can include issuers with high error rates as well as issuers with low or negative error rates. In both cases the calculation is based on the validation of diagnosis codes associated with members selected for audit. To account for expected variation, HHS-RADV groups HCCs by net failure rate to reflect the validation difficulty of the underlying diagnoses.
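The sketch below shows the outlier idea in simplified form, using hypothetical issuer-level failure rates within one failure-rate group. The real HHS-RADV error estimation uses weighted means and confidence intervals rather than this simple standard-deviation rule; the numbers and the 1.0-standard-deviation band are illustrative assumptions.

```python
import statistics

# Hypothetical failure rates for issuers in one HCC failure-rate group.
# A negative rate means the audit validated more HCCs than the issuer
# originally submitted on its EDGE server data.
failure_rates = {
    "Issuer 1": 0.12,
    "Issuer 2": 0.15,
    "Issuer 3": 0.14,
    "Issuer 4": 0.35,   # validates far fewer HCCs than its peers
    "Issuer 5": -0.05,  # "finds" more HCCs in audit than it reported
}

mean = statistics.mean(failure_rates.values())
stdev = statistics.stdev(failure_rates.values())
band = 1.0 * stdev  # arbitrary illustrative band, not the HHS-RADV threshold

# Only issuers whose failure rate falls outside the peer-group band are
# treated as outliers and receive a risk score adjustment.
for issuer, rate in failure_rates.items():
    status = "OUTLIER" if abs(rate - mean) > band else "ok"
    print(issuer, f"failure_rate={rate:+.2f}", status)
```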

For 2019 HHS-RADV and beyond, CMS wants to:

Modify the way HHS-RADV groups medical conditions that fall within the same hierarchical condition category (HCC) coefficient estimation groups in risk adjustment when determining failure rates for those “Super HCCs.” The change, CMS said in a fact sheet, would better account for the difficulty of categorizing certain conditions and, therefore, refine how the error rate calculation measures risk differences within and between condition groupings.
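A minimal sketch of the grouping idea follows, using made-up HCC labels, a hypothetical mapping to coefficient estimation groups, and invented audit counts: HCCs that share an estimation group are rolled up into one “Super HCC” before a single failure rate is computed.

```python
from collections import defaultdict

# Hypothetical HCC -> coefficient estimation group mapping (illustrative only).
coefficient_group = {
    "HCC 19": "Diabetes group",
    "HCC 20": "Diabetes group",
    "HCC 21": "Diabetes group",
    "HCC 88": "Heart group",
    "HCC 89": "Heart group",
}

# (hcc, recorded, validated) counts for one issuer's audit sample -- invented.
audit_counts = [("HCC 19", 40, 35), ("HCC 20", 10, 9), ("HCC 21", 5, 5),
                ("HCC 88", 30, 24), ("HCC 89", 20, 18)]

recorded = defaultdict(int)
validated = defaultdict(int)
for hcc, rec, val in audit_counts:
    super_hcc = coefficient_group[hcc]   # roll the HCC up into its Super HCC
    recorded[super_hcc] += rec
    validated[super_hcc] += val

# One failure rate per Super HCC, rather than per individual HCC.
for super_hcc in recorded:
    failure_rate = 1 - validated[super_hcc] / recorded[super_hcc]
    print(super_hcc, round(failure_rate, 3))
```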

CMS also proposes changes to reduce the magnitude of risk score adjustments for payers close to the threshold used to determine whether a payer is an outlier. Currently, payers whose failure rates are not significantly different from those of payers just inside the threshold may see significant changes to their risk scores and transfers, creating a “payment cliff” for issuers just outside the threshold. Scaling the size of the risk score adjustment is intended to mitigate this effect.
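The short sketch below illustrates the cliff and one way of smoothing it, under assumed numbers: with a hard cutoff, an issuer just past the threshold receives a full adjustment while one just inside receives none; adjusting only the portion of the deviation beyond the cutoff shrinks that jump. The cutoff value and the sliding-scale rule are hypothetical, not the method in the proposed rule.

```python
OUTLIER_CUTOFF = 0.20  # hypothetical failure-rate deviation that marks an outlier

def cliff_adjustment(deviation: float) -> float:
    """All-or-nothing: full adjustment as soon as the cutoff is crossed."""
    return deviation if abs(deviation) > OUTLIER_CUTOFF else 0.0

def sliding_scale(deviation: float) -> float:
    """Adjust only the portion of the deviation beyond the cutoff."""
    if abs(deviation) <= OUTLIER_CUTOFF:
        return 0.0
    sign = 1.0 if deviation > 0 else -1.0
    return sign * (abs(deviation) - OUTLIER_CUTOFF)

for dev in (0.19, 0.21):  # two issuers with nearly identical failure rates
    print(dev, cliff_adjustment(dev), round(sliding_scale(dev), 2))
# cliff:   0.19 -> 0.0 vs 0.21 -> 0.21 (a sharp jump in outcomes)
# sliding: 0.19 -> 0.0 vs 0.21 -> 0.01 (a small, proportional change)
```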

Finally, CMS wants to modify the error rate calculation in cases where a negative error rate outlier payer also has a negative failure rate. Error rate outliers can be either positive or negative: positive error rates reflect a higher failure rate, and negative error rates reflect a lower failure rate. However, low failure rates are not always due to more accurate data submission, according to CMS. A lower failure rate can also result from an insurer failing to identify conditions that should have been reported when it submitted its claims data. This proposal would encourage issuers to accurately capture diagnoses submitted for risk adjustment by limiting how much they might benefit from finding previously overlooked HCCs during RADV. The proposed rule would refine the error rate calculation to mitigate the impact of adjustments that result from negative error rates driven by these newly found conditions.
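As a simplified sketch of that constraint, the hypothetical rule below caps the benefit when a negative-error-rate outlier also has a negative failure rate (that is, when the audit “found” more HCCs than the issuer originally submitted). The zero floor is one possible illustration, not the specific formula in the proposed rule.

```python
def constrained_error_rate(error_rate: float, failure_rate: float) -> float:
    """Hypothetical constraint: limit the upward adjustment an issuer can
    gain when its negative error rate is driven by HCCs found only in audit."""
    if error_rate < 0 and failure_rate < 0:
        return 0.0  # zero is one possible floor; purely illustrative
    return error_rate

print(constrained_error_rate(-0.08, -0.05))  # 0.0  -> no windfall from audit finds
print(constrained_error_rate(-0.08, 0.03))   # -0.08 -> adjustment still applies
```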

CMS said the changes would strengthen the integrity of the program by reducing possible incentives for payers to underreport diagnoses during the initial risk adjustment data submission in order to achieve greater financial benefits from HHS-RADV later.

“These changes will also promote fairness by ensuring that issuers are not penalized in HHS-RADV when a difference in diagnosis for an enrollee has no effect on risk, as well as by ensuring that issuers that receive adjustments are receiving adjustments in proportion to the errors identified through HHS-RADV,” CMS said.

Application of HHS-RADV results

CMS also wants to apply the HHS-RADV results to adjust the risk scores and transfer amounts for the benefit year being audited.

Currently, HHS-RADV generally applies a prospective approach for adjustments to risk adjustment transfers, which means HHS-RADV results are used to adjust the subsequent benefit year risk scores and transfers. For example, 2017 benefit year HHS-RADV results are generally used to adjust 2018 benefit year transfer amounts.
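A minimal sketch of the timing difference, using hypothetical transfer figures and adjustment factor, is below: the prospective approach applies 2017 audit results to the 2018 transfer, while the proposed concurrent approach would apply them to 2017 itself.

```python
# Hypothetical benefit-year transfer amounts and RADV adjustment factor.
transfers = {2017: 1_000_000, 2018: 1_200_000}
radv_adjustment_factor = 0.98  # assumed result of 2017 benefit year HHS-RADV

prospective = {**transfers, 2018: round(transfers[2018] * radv_adjustment_factor)}
concurrent = {**transfers, 2017: round(transfers[2017] * radv_adjustment_factor)}

print(prospective)  # current approach: 2017 RADV results adjust the 2018 transfer
print(concurrent)   # proposed approach: 2017 RADV results adjust 2017 itself
```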

The proposal is meant to address concerns that CMS makes adjustments to risk scores based on HHS-RADV error rates calculated from prior year data, even though a payer’s risk profile, enrollment, or market participation can change substantially from benefit year to benefit year. CMS said it would be a fairer process because it avoids situations in which a payer that newly enters a state market risk pool is subject to HHS-RADV adjustments from a benefit year in which it did not offer plans.

Reaction to the proposal

Gabriel McGlamery, J.D., senior HCR policy consultant for Florida Blue Center for Health Policy and a member of RISE’s Risk Adjustment Policy Committee, said the changes are for the better and are, hopefully, the start of a longer-term effort to address fundamental problems within the program.

“Treating HCCs with constrained coefficients as a single HCC for the purpose of failure rate calculation is a simple, easily justifiable fix. Calling this set ‘Super HCCs’ seems silly but is refreshingly clear for a program with groupings of groupings of groupings,” he said.

In addition, he said, it is difficult to disagree with changes to modify the error rate calculation for payers with negative error rates that also have negative failure rates. “I would love to get a better idea of why certain issuers are negative outliers in certain failure rate groupings. Issuers might see RADV as a cheap alternative to the supplemental file or it might have to do with the complicated effects of HCC hierarchies on HCC failure rate grouping, which is only partially addressed by adding ‘Super HCCs’. Either way, this improves RADV. It either reduces perverse incentives or constrains the effects of sample bias and HCC hierarchies on HCC failure rates,” he said.

The changes are a step in the right direction but still mean that for the next few years, the industry will see RADV adjustments that are, at best, random, and at worst, driven by issuer risk and driving perverse incentives, according to McGlamery.

Ultimately, he would like to see the Center for Consumer Information and Insurance Oversight (CCIIO) shift the policy direction of RADV so that it improves the accuracy of risk adjustment transfers, rather than the current policy rationale of penalizing payers for provider coding.

“COVID-19 should drive home the problems with a system that punishes coding errors and ignores claims costs,” he said. “2019 RADV will have regional document retrieval problems and 2020 may have regional problems where staffing issues affected documentation. Even if the effects are small, they could easily overwhelm the impact of actual bad actions by issuers. We appreciate that CMS is delaying 2019, but a system that does not differentiate between a pandemic and fraud is not a good system. I know how difficult rule making is, especially with a program as large and complicated as RADV. As long as CMS sees this as the beginning and not the end of a long term incremental process, I am impressed and really appreciate the work they are putting in.”