Anytime a new technological advancement makes its way into an industry, there can be a temptation to anoint that shiny new toy as an antidote to all of an industry’s ills. AI in healthcare is a great example. As the technology has continued to advance, it has been adopted for use cases in drug development, care coordination, and reimbursement, to name a few. There are a great number of legitimate use cases for AI in healthcare, where the technology is far and away better than any currently available alternative.
However, AI—as it stands today—excels only at certain tasks, like understanding large swaths of data and making judgments based on well-defined rules. Other situations, particularly those where added context is essential to making the right decision, are not well suited for AI. Let’s explore some examples.
Denying Claims and Care
Whether for a claim or for care, denials are complex decisions, and too important to be handled by AI on its own. When denying a claim or care, there is an obvious moral imperative to do so with the utmost caution, and based on AI’s capabilities today, that necessitates human input.
Beyond the morality element, health plans put themselves at risk when they rely too heavily on AI to make denial decisions. Plans can face, and are facing, lawsuits for using AI improperly to deny claims, with litigation accusing plans of failing to meet the minimum requirements for physician review because AI was used instead.
Relying on Past Decisions
Trusting AI to make decisions based solely on how it made a previous decision has an obvious flaw: one wrong decision from the past will live on to influence others. Plus, because policy rules that inform AI are often distributed across systems or imperfectly codified by humans, AI systems can end up adopting, and then perpetuating, an inexact understanding of these policies. To avoid this, organizations need to create a single source of policy truth, so that AI can reference and learn from a reliable dataset.
Building on Legacy Systems
As a relatively new technology, AI brings a sense of possibility, and many health plan data science teams are eager to tap into that possibility quickly by leveraging AI tools already built into existing enterprise platforms. The trouble is that healthcare claims processes are extremely complex, and enterprise platforms often do not understand the intricacies. Slapping AI on top of these legacy platforms as a one-size-fits-all solution (one that does not account for all of the various factors impacting claim adjudication) ends up causing confusion and inaccuracy, rather than creating more efficient processes.
Leaning on Old Data
One of the biggest benefits of AI is that it gets better at orchestrating tasks as it learns, but that learning can only take place if there is a consistent feedback loop that helps AI understand what it’s done wrong so that it can adjust accordingly. That feedback must not only be constant, it must be based on clean, accurate data. After all, AI is only as good as the data it learns from.
When AI in Healthcare IS Beneficial
The use of AI in a sector where the outputs are as consequential as healthcare certainly requires caution, but that does not mean there are not use cases where AI makes sense.
For one, there is no shortage of data in healthcare (consider that one person’s medical record could be thousands of pages), and the patterns within that data can tell us a lot about diagnosing disease, adjudicating claims correctly, and more. This is where AI excels: looking for patterns and suggesting actions based on those patterns that human reviewers can run with.
Another area where AI excels is in cataloging and ingesting policies and rules that govern how claims are paid. Generative AI (GenAI) can be used to transform this policy content from various formats into machine-readable code that can be applied consistently across all patient claims. GenAI can also be used to summarize information and display it in an easy-to-read format for a human to review.
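To make this concrete, here is a minimal sketch of what "machine-readable" policy rules might look like once extracted from policy documents. The rule schema, field names, and the example rule itself are hypothetical illustrations (not any real plan's format), and the GenAI extraction step is assumed to have already happened; the point is simply that codified rules can then be applied uniformly to every claim.

```python
# Hypothetical schema: a policy statement such as "Procedure 97110 is
# limited to 4 units per visit" might be codified (e.g., by a GenAI
# extraction step) as a structured rule like the one below.
OPERATORS = {
    "<=": lambda a, b: a <= b,
    ">=": lambda a, b: a >= b,
    "==": lambda a, b: a == b,
}

rules = [
    {"id": "PT-UNITS", "field": "units", "op": "<=", "value": 4,
     "applies_to": {"procedure_code": "97110"}},
]

def evaluate(claim, rules):
    """Return the ids of the rules the claim violates; empty means it passes."""
    violations = []
    for rule in rules:
        # Only apply the rule if the claim matches its scope (e.g., procedure code).
        scope = rule["applies_to"]
        if all(claim.get(k) == v for k, v in scope.items()):
            if not OPERATORS[rule["op"]](claim[rule["field"]], rule["value"]):
                violations.append(rule["id"])
    return violations

print(evaluate({"procedure_code": "97110", "units": 6}, rules))  # ['PT-UNITS']
print(evaluate({"procedure_code": "97110", "units": 3}, rules))  # []
```

Because every claim is checked against the same codified rules, the policy is applied consistently, and a human reviewer can inspect exactly which rule triggered a flag.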
The key thread through all of these use cases is that AI is being used as a co-pilot for the humans who oversee it, not left to run the show on its own. As long as organizations keep that idea in mind as they implement AI, they will be positioned to succeed in this era in which healthcare is being transformed by AI.