
Using LLMs / GenAI to Improve Healthcare Communications

The US healthcare system is notoriously complex. One of the ways this complexity shows up is in claims – what gets paid, by whom, when, for what, and at what level. While every claim is legally required to come with an explanation of benefits, these documents are often opaque – so much so that call centers regularly provide an explanation of the explanation of benefits! Given how good LLMs and Generative AI are at answering questions and chatting with humans, claims explanations are a hot topic for AI.

There are two main problems. The first, of course, is hallucination. How can you be sure your AI is not hallucinating when it explains a claim to your member? What if it SAYS you decided for a reason that’s actually illegal? Even if you didn’t decide that way, you could end up in serious regulatory trouble. The second problem is that even a rational, coherent explanation might not match what your claims system actually does – and that kind of inconsistency is a problem in its own right.

The solution, as is so often the case, requires mixing technologies in a decision-centric way.

The decision here is claims adjudication – should this claim be paid, and in what amount? It needs to be automated in a way that is transparent and manageable, so the business knows it is correct, and it needs to be able to explain each decision it makes. Using a decision platform to automate the decision and a decision model to specify it ensures transparency and business manageability while allowing the creation of a structured decision log that shows how each element of the decision was made.
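To make this concrete, here is a minimal sketch of what one entry in such a structured decision log might look like. The field names and values are illustrative assumptions, not the schema of any particular decision platform:

```python
# A minimal, hypothetical decision log entry for one adjudicated claim.
# Each sub-decision records the rule that fired and the reason it gives,
# so the "why" of the overall decision can be reconstructed later.
claim_decision_log = {
    "claim_id": "CLM-2024-001847",          # illustrative identifier
    "decision": "claims_adjudication",
    "outcome": {"payable": True, "amount": 412.50},
    "sub_decisions": [
        {
            "decision": "eligibility_check",
            "outcome": "eligible",
            "rule_id": "ELIG-017",           # hypothetical rule reference
            "reason": "Member coverage was active on the date of service",
        },
        {
            "decision": "benefit_determination",
            "outcome": "covered_at_80_percent",
            "rule_id": "BEN-203",
            "reason": "In-network outpatient service; deductible already met",
        },
    ],
}
```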

This log explains to the business why the system made the decision it did. It can also be used to explain the “why” to members. A subset of the log – the top-level decisions, if you like – is selected based on criteria set by the business owners. An LLM or GenAI model can then be trained, or simply prompted, to take this structured information and turn it into a human-readable explanation. You get the chattiness and readability of AI but the solidly compliant decisioning of business rules.
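One way this hand-off might look in practice, assuming the log structure sketched above and a generic chat-completion API (the OpenAI Python client is used purely as an example; the model name, prompt, and helper names are assumptions):

```python
import json

from openai import OpenAI  # any chat-completion client would work here

client = OpenAI()

def top_level_subset(log: dict, approved: set[str]) -> dict:
    """Keep only the sub-decisions the business owners have approved
    for member-facing explanations."""
    return {
        "outcome": log["outcome"],
        "sub_decisions": [
            d for d in log["sub_decisions"] if d["decision"] in approved
        ],
    }

def explain_decision(log_subset: dict) -> str:
    """Turn a business-approved slice of the decision log into plain English.

    The model is instructed to rephrase only the facts already in the log,
    keeping the explanation anchored to the actual adjudication decision."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You explain healthcare claim decisions to members in "
                    "plain, friendly language. Use ONLY the facts in the "
                    "decision log provided. Never speculate about reasons "
                    "that are not listed."
                ),
            },
            {"role": "user", "content": json.dumps(log_subset)},
        ],
    )
    return response.choices[0].message.content

# Usage: explain only the approved top-level decisions to the member,
# using the claim_decision_log sketched earlier.
subset = top_level_subset(
    claim_decision_log, {"eligibility_check", "benefit_determination"}
)
print(explain_decision(subset))
```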

For bonus points, you can expose a “skinny” version of the claims decision that lets your chatbots ask members a few questions and then say “if you’ve answered my questions accurately, that would be covered because XXX and YYY”. You’re upleveling the interaction from raw data to a conversation while reusing the same decision-making for cross-channel consistency.
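A sketch of that pattern, with the endpoint, payload shape, and response fields all hypothetical; the point is that the chatbot calls the same decision service that adjudicates real claims:

```python
import requests  # calling the decision service over a hypothetical REST API

def coverage_preview(answers: dict) -> str:
    """Ask the 'skinny' claims decision for a what-if verdict.

    `answers` holds the member's replies to the chatbot's questions, e.g.
    {"in_network": True, "service_type": "outpatient"}. The URL and the
    response fields below are assumptions for illustration only."""
    resp = requests.post(
        "https://decisions.example.com/claims/coverage-preview",  # hypothetical
        json=answers,
        timeout=10,
    )
    result = resp.json()
    if result["payable"]:
        reasons = " and ".join(d["reason"] for d in result["sub_decisions"])
        return (
            "If you've answered my questions accurately, that would be "
            f"covered because {reasons}."
        )
    return "Based on your answers, that would not be covered."
```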

Contact us today if you’d like to discuss how we can help.