Earlier this month the Centers for Medicare & Medicaid Services (CMS) released a white paper that outlined concerns with the Risk Adjustment Data Validation (RADV) program. The agency reviewed the current processes for sampling, outlier detection, error rate calculation, and the application of results to risk adjustment transfers, and wants feedback to help shape future RADV policy. Comments are due on Jan. 6, 2020.

The 120-page white paper indicates that the U.S. Department of Health and Human Services (HHS) and CMS are considering potential modifications to four aspects of the program. RISE turned to Avalere’s Sean Creighton and Chad Brooker for a summary of the changes under consideration.

Enrollee sampling
Under the current method, most issuers have a sample size of 200 enrollees. But some issuers want a larger sample size to improve accuracy and potentially decrease the impact of a single enrollee’s results on their Hierarchical Condition Category (HCC) group failure rates. Others want a smaller size to reduce the administrative and financial burden associated with retrieving medical records. To refine the sample size, CMS is considering several options:

  • Increase the sample size for issuers that are outliers in at least one HCC group or that fail to meet the 10 percent precision target in all HCC groups.
  • Use national average commercial RADV error rates instead of proxy data from MA-RADV to calculate the sample size, a change that would increase the sample size for 330 issuers (64 percent), all with 4,000 or more enrollees, and reduce it for 31 issuers (6 percent).
  • Adopt sampling options and measures to reduce the burden on issuers with small populations that may not have enough enrollees for a sample size of 200. Under consideration: establish an issuer-specific sample size equal to the sum of all their enrollees with HCCs plus a sample of those without HCCs that satisfies the Neyman allocation formula. Under this scenario, the average sample size for these issuers would be 86 enrollees. Another alternative: exempt these issuers from HHS-RADV.
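The Neyman allocation mentioned above splits a sample across strata in proportion to each stratum's size times its variability. As a rough illustration only, with all stratum sizes and standard deviations below hypothetical rather than CMS figures, a minimal sketch:

```python
# Sketch of Neyman allocation: stratum sample sizes proportional to
# N_h * S_h (stratum size times stratum standard deviation).
# All numbers below are hypothetical, not drawn from the white paper.

def neyman_allocation(total_n, sizes, std_devs):
    """Split total_n across strata in proportion to size * std. dev."""
    weights = [n * s for n, s in zip(sizes, std_devs)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# Two hypothetical strata of enrollees without HCCs:
print(neyman_allocation(60, [1200, 300], [0.15, 0.40]))  # -> [36, 24]
```

The larger but less variable stratum still receives more of the sample here; a stratum with high risk-score variability gets more than its population share alone would suggest.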

Outlier detection
The white paper considers options to modify the process to more precisely identify true outliers. Several changes are under consideration, but CMS seems to favor the use of a binomial distribution method, which would result in wider intervals for issuers with a low number of individuals with HCCs and narrower intervals for issuers with a high count. However, CMS said it intends to delve deeper into the risk-score-based methodology to address the impact of HCC hierarchies on outlier detection. Other possible changes mentioned in the report indicate CMS could

  • Create different confidence intervals for high- and low-HCC count issuers
  • Calculate issuer-specific confidence intervals around each issuer’s failure rate estimated for each HCC group
  • Use McNemar’s test methodology to highlight situations where an issuer’s risk score represented equal frequencies of found and non-validated HCCs for that issuer, rather than being based on a ratio of found and non-validated HCCs nationally
  • Account for how atypical an issuer is with coding, based on assessing multiple years of EDGE and RA data
  • Use machine learning to identify issuers with a pattern of failure rates that differs from others, rather than identify issuers whose overall failure rates are different
  • Assign HCCs to groups by whole hierarchies or create groups according to the difference in total risk score between EDGE and audit data for the HCCs in each hierarchy
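To illustrate why the binomial distribution method yields wider intervals for issuers with few individuals with HCCs, here is a minimal sketch using the common normal-approximation confidence interval for a proportion. The white paper does not specify the exact formula, so the method and all counts below are assumptions for illustration:

```python
import math

def binomial_ci(failures, n, z=1.96):
    """95% normal-approximation confidence interval for a failure rate.
    (A common binomial method; the exact form CMS would use is not
    specified in the white paper.)"""
    p = failures / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

# Same 20 percent failure rate, very different HCC counts (hypothetical):
low_count = binomial_ci(10, 50)      # few individuals with HCCs
high_count = binomial_ci(200, 1000)  # many individuals with HCCs
# The low-count interval is wider, so a small issuer with the same
# failure rate is flagged as an outlier less readily.
```

The interval width shrinks with the square root of the count, which is the property driving the distinction between high- and low-count issuers.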

Error rate calculation
The white paper considers alternatives to the current methodology that determines an outlier issuer’s risk score adjustment by calculating the difference between the issuer’s HCC group failure rate and the weighted mean group failure rates from the national metrics. CMS is considering options to address cases where the outlier issuer may have a failure rate that is only slightly outside of the acceptable range of variation, as well as cases where an outlier issuer has a negative failure rate. Under consideration:

  • Calculate the magnitude of error based on the confidence interval rather than the mean score
  • Adjust RA for positive error rate outliers (i.e., those with more error than other issuers in that HCC group in that market)
  • Create a sliding scale to reduce transfers
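A rough sketch of the first option above, with entirely hypothetical rates: the current method adjusts an outlier's risk score by the gap from the national weighted mean failure rate, while the alternative would measure the gap from the confidence-interval bound instead, softening the cliff for issuers only slightly outside the acceptable range:

```python
def risk_score_adjustment(issuer_rate, national_mean, ci_upper,
                          use_ci_bound=False):
    """Sketch of the error-rate calculation (values hypothetical).
    Current method: adjustment equals the gap from the national
    weighted mean. Alternative under consideration: measure the gap
    from the confidence-interval bound instead."""
    if issuer_rate <= ci_upper:
        return 0.0  # within the acceptable range: no adjustment
    baseline = ci_upper if use_ci_bound else national_mean
    return issuer_rate - baseline

# An issuer barely outside the acceptable range (numbers hypothetical):
current = risk_score_adjustment(0.31, 0.20, 0.30)            # gap from the mean
alternative = risk_score_adjustment(0.31, 0.20, 0.30, True)  # gap from the CI bound
```

Under the alternative, the issuer just past the threshold receives a much smaller adjustment, which is the concern the sliding-scale option also targets.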

Application of HHS-RADV results
CMS indicated it is considering a change to the application of HHS-RADV results to better reflect the actuarial risk of the benefit year being audited. Currently it uses an issuer’s RADV results to adjust RA transfers in the year after the year validated. CMS listed three options to make this transition if it were to finalize and implement the policy for the 2021 benefit year:

  • Calculate an average value between the current and previous benefit years’ RADV error rates and apply this average error rate to current year scores and transfers
  • Calculate previous year RADV adjustments to current year RA transfers, and take the difference between each of these values and the unadjusted current benefit year risk adjustment transfers to arrive at a total RADV modification
  • Apply previous benefit year RADV risk score adjustments to the current year plan liability risk score (PLRS), then apply current year RADV risk score adjustments to the adjusted current year PLRS, and use the final adjusted PLRS in current year RA transfers
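The first option above, blending two benefit years' error rates, can be sketched as follows. The rates and the simple proportional transfer adjustment are hypothetical simplifications; the actual RA transfer formula is considerably more involved:

```python
def blended_error_rate(prior_rate, current_rate):
    """Option 1 sketch: average the prior and current benefit years'
    RADV error rates (rates below are hypothetical)."""
    return (prior_rate + current_rate) / 2

def adjusted_transfer(transfer, error_rate):
    """Apply the blended error rate as a simple proportional
    reduction -- an assumption for illustration only."""
    return transfer * (1 - error_rate)

rate = blended_error_rate(0.04, 0.06)   # blends two years of results
new_transfer = adjusted_transfer(100.0, rate)
```

Blending damps year-to-year swings, so a single bad audit year moves an issuer's transfers by only half as much as under a single-year application.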

Comments needed by Jan. 6
CMS wants feedback on the policy by Monday, Jan. 6. Send comments to CCIIOACARADataValidation@cms.hhs.gov with the subject line “December 2019 HHS-RADV White Paper.”