As the shift to both value-based care and risk-based contracting continues, health plans’ reimbursements – and overall financial performance – are increasingly tied to risk adjustment, forcing them to find ways to improve the efficiency and ROI of their risk adjustment programs.

Fortunately, the market has responded with high-tech solutions designed specifically to address the risk adjustment pain points of both payers and providers. Solutions utilizing natural language processing (NLP) can read the unstructured patient data in medical records to help organizations better identify risk, close gaps, and improve both care quality and financial performance. There’s no doubt that NLP-enabled technology is becoming a powerful tool for driving risk adjustment success.  

Boosting Risk Adjustment Performance and Quality with NLP 

Risk adjustment challenges abound. The explosion in the amount of patient data being collected has crashed headlong into the reality of limited resources. The dearth of skilled coders, plus the traditionally inefficient and costly manual chart review process, means many payers struggle to review charts for 100% of their members. This inability to review all charts often means the highest-risk members get missed, impacting both care quality and revenue. With NLP-enabled coding technology, health plans can quickly and easily risk-stratify their members to prioritize and target the highest-risk members, focusing on those with the most missed conditions and the highest number of claimed codes lacking documentation. This NLP-assisted risk stratification presents a significant value-add, allowing coders to focus on the right members first and then work their way down the queue. 
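As an illustrative sketch only (not Talix's actual implementation), the prioritization described above can be thought of as sorting the coder work queue by documentation-gap counts. The member fields and equal weighting here are hypothetical; a production system would factor in HCC weights, revenue impact, and more:

```python
from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    suspected_missed_conditions: int  # conditions NLP surfaced but not yet claimed
    undocumented_claimed_codes: int   # claimed codes lacking chart documentation

def prioritize(members):
    """Order the coder work queue so the highest-gap members come first.

    Hypothetical scoring: both gap types weighted equally.
    """
    return sorted(
        members,
        key=lambda m: m.suspected_missed_conditions + m.undocumented_claimed_codes,
        reverse=True,
    )

queue = prioritize([
    Member("A100", 1, 0),
    Member("B200", 4, 3),
    Member("C300", 2, 1),
])
print([m.member_id for m in queue])  # member with the most gaps first
```

Coders then work the queue top-down, so the members most likely to affect care quality and revenue are reviewed first.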

In addition to work-queue prioritization, NLP technology increases both coder productivity and quality. NLP does the first-pass review, providing coders with a set of diagnosis codes to review and reducing the amount of data they must initially enter. Coders are freed up to spend more time on QA, ensuring the NLP picked up the right codes and looking for more low-hanging fruit to analyze. Coders can perform at the top of their certification, rather than getting bogged down in the coding weeds due to the sheer volume of data they must analyze and review. Increasing first-pass accuracy also reduces the need for organizations to deploy the multiple levels of QA review usually needed to ensure codes are accurate and none have been missed. NLP, particularly highly accurate NLP, reduces this need for inefficient and expensive redundant passes by helping coders get it right the first time.   
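One simplified way to picture the first-pass workflow is a confidence-based triage: high-confidence NLP suggestions go straight to coder QA, while lower-confidence hits are routed to full manual review. The function, threshold, and code names below are assumptions for illustration, not a description of any specific product:

```python
def first_pass_suggestions(nlp_output, confidence_threshold=0.8):
    """Split NLP code suggestions into a QA queue and a manual-review queue.

    nlp_output -- list of (code, confidence) pairs from a hypothetical NLP engine
    """
    qa_queue, manual_queue = [], []
    for code, confidence in nlp_output:
        if confidence >= confidence_threshold:
            qa_queue.append(code)       # coder confirms the suggestion
        else:
            manual_queue.append(code)   # coder reviews the chart in full
    return qa_queue, manual_queue

suggested, needs_review = first_pass_suggestions(
    [("HCC18", 0.95), ("HCC85", 0.62), ("HCC96", 0.88)]
)
print(suggested, needs_review)  # ['HCC18', 'HCC96'] ['HCC85']
```

The point of the triage is the one made above: coders spend their time confirming and QA-ing suggestions rather than entering every code from scratch.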

Improve Provider Collaboration & Reduce Abrasion 

For many providers, taking on risk is still a relatively new process, requiring them to learn a way of coding that's very different from what they're used to with fee-for-service. NLP technology, particularly when paired with high-quality analytics and reporting dashboards, can help payers track provider coding patterns so they can identify areas of improvement and deliver more effective Clinical Documentation Improvement (CDI) initiatives to help providers close coding gaps at the source.  

NLP can also play a part in reducing provider abrasion, which in turn can go a long way toward improving physician collaboration. The patient data retrieval process is often extremely inefficient and burdensome; provider office staff must spend precious time scanning and faxing patient charts or copying and sending them through the mail. Often, payers must send their staff to the provider's office, interrupting their day to collect the data. NLP technology can streamline the data retrieval process through improved chase targeting and automated data extraction, removing or significantly decreasing chart chase woes and saving time, money, and frustration on both sides.  

Delivering Accuracy, Efficiency and Control 

Organizations shopping for an NLP-enabled solution should keep two things in mind: accuracy and NLP development experience. Accuracy has two components: recall, the percentage of HCC condition codes the NLP system automatically finds out of all the codes it should have found (based on an initial review by a coder or QA specialist), and precision, the percentage of the codes the system picks up that are actually correct.  
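The two measures can be computed directly by comparing the NLP-suggested codes against the codes a coder review established as the ground truth. The code sets below are made-up examples:

```python
def recall_precision(found, expected):
    """Compute recall and precision for NLP-suggested HCC codes.

    found    -- set of codes the NLP system surfaced
    expected -- set of codes a coder/QA review says should be found
    """
    true_positives = len(found & expected)
    recall = true_positives / len(expected) if expected else 0.0
    precision = true_positives / len(found) if found else 0.0
    return recall, precision

nlp_codes = {"HCC18", "HCC85", "HCC96", "HCC111"}   # 4 suggested, 1 wrong
coder_codes = {"HCC18", "HCC85", "HCC96", "HCC19", "HCC40"}  # 5 expected
r, p = recall_precision(nlp_codes, coder_codes)
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.60 precision=0.75
```

Note the trade-off the next paragraph alludes to: a system can boost recall by suggesting more codes, but each incorrect suggestion drags precision down and creates QA work.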

Striking a balance between recall and precision is important, and doing so requires years of NLP development and patient data experience. Talix has spent 16 years building out our NLP for healthcare, including the last five years (and counting) dedicated to building NLP specifically for risk adjustment. Our NLP-enabled coding solution is integrated directly into the coding workflow to reduce clicks and speed up chart reviews. As a result, the same number of coders can process a higher number of charts while maintaining, or even increasing, accuracy. Right now, health plans across the country are benefiting from our deep patient data experience and using our solution to take control of their risk adjustment processes, reduce costs, and improve quality and risk adjustment ROI. 


For more information on how Talix's NLP-enabled solution can help you improve your risk adjustment results, contact us at 


Shahyan Currimbhoy, SVP of Product Management & Engineering, Talix 

As SVP of product management and engineering at Talix, Shahyan is responsible for leading product innovation, engineering and medical informatics for the company. He is a seasoned healthcare veteran with more than 12 years of experience driving product strategy and growth for startups, turnarounds and Fortune 500 companies. Prior to joining Talix, Shahyan held various product management leadership roles at Healthline, Caradigm, Microsoft and Siemens Healthcare. He holds a bachelor’s degree in Computer Science from Cornell University and a master’s degree in Software Engineering from Carnegie Mellon University.