Data, Agents, and the Need for Determinism
AI is forcing everyone to examine their business. AI and machine learning have shone a spotlight on organizations that specialize in using data analytics and information processing to enhance decision making. We're no different at Qlarant. We use many techniques to help our customers discover insights and make decisions. Some techniques are new; some are old with new names. Our teams are always pushing the envelope in analysis, data processing, and computing. Recently, during a solutioning session, a fierce debate broke out over what AI is, what an AI agent is, and where AI is most effective in health care fraud, waste, and abuse (FWA).
What is Artificial Intelligence?
For those new to RIViR Reads, a quick refresher: AI is any program or system that mimics human behavior. When you think about it, AI is a large, encompassing umbrella term that spans everything from simple if-then-else switching on the low end to ChatGPT and similar products on the upper tier. Underneath AI sits machine learning, a collection of statistical and mathematical techniques that make predictions and discover patterns. You may have heard neural networks come up in conversation; they're an advanced form of machine learning. When you build large neural networks, the technique is called deep learning.
Many of these techniques are probabilistic. This means a model using them predicts the answers that are most likely, or probably, correct. You know, a 90, 93, or 95 percent chance of being right. If you have enough data, information, or experience, you can be right most of the time. That's why machine learning models need lots of data.
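To make "probably correct" concrete, here is a minimal, hypothetical sketch in Python. The feature names and weights are invented for illustration, not taken from any real model: the point is that a probabilistic model doesn't answer yes or no, it returns a probability, and it's up to the surrounding system to decide what to do with it.

```python
import math

def predict_proba(features):
    # Stand-in for a trained model: a weighted sum of features squashed
    # into the 0-1 range by the logistic function. Real models learn
    # their weights from lots of data; these are made up.
    score = (2.0 * features["billing_anomaly"]
             + 1.5 * features["claim_volume"]
             - 2.5)
    return 1.0 / (1.0 + math.exp(-score))

claim = {"billing_anomaly": 1.0, "claim_volume": 1.0}
print(f"chance this claim is fraudulent: {predict_proba(claim):.0%}")  # 73%
```

A 73 percent chance means the model will be wrong on similar claims roughly one time in four, which is exactly why the remaining percentage matters so much later in this piece.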
Where does ChatGPT fit in? Your favorite text completion or chatbot uses some form of generative AI, GenAI for short, and predicts a word sequence based on your inputs. If you close your text messages with "thank you and have a nice day" enough times, apps like iMessage can finish your personal closing as soon as you type the first few words.
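The same completion idea can be sketched with a toy frequency model (the message history here is invented): the closing you've used most often is the one the model most confidently predicts.

```python
from collections import Counter

# Invented message history: the sign-off used most often "wins".
history = ["thank you and have a nice day"] * 5 + ["thank you so much"]

def complete(prefix):
    # Pick the most frequent past message that starts with the prefix;
    # if nothing matches, leave the text as typed.
    candidates = Counter(m for m in history if m.startswith(prefix))
    return candidates.most_common(1)[0][0] if candidates else prefix

print(complete("thank you"))  # thank you and have a nice day
```

Real GenAI systems do this over token probabilities learned by billions of parameters rather than a simple frequency count, but the principle is the same: predict the most likely continuation.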
What happens on the other 10, 7, or 5 percent of occasions? That's when ChatGPT "hallucinates" and makes something up. Or iMessage won't predict anything at all.
What Are Agents?

Software agents have been around for a long time. An agent, by my definition, is any software system that automates a task. Agents can do one thing or many things. Robotic Process Automation (RPA) has also been around for a long time; it automates business processes by orchestrating software tasks. A system using RPA may use agents to execute those tasks. AI agents use LLMs or trained models to automate tasks and control flow.
Are you ready for another buzzword? The more a software system incorporates agents to do its work, the more agentic it is.
An AI agent may use carefully prompted LLMs to verify that a beneficiary has entered correct information on a benefit form. That agent may decide to continue processing an application if its information is complete, or reject the application if more information is needed. Other AI agents perform tasks using machine learning models or trained deep neural networks and use those outputs to control process flow. For instance, a fraud model trained with deep learning may take multiple inputs from medical claims and compute a score that an AI agent uses to decide what happens next.
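Here is a hypothetical sketch of that control flow. The field names, thresholds, and stand-in model below are all illustrative inventions, not our actual system: a deterministic rules check runs first, and a model score then decides the next step.

```python
REQUIRED_FIELDS = {"name", "member_id", "provider", "service_date"}

def fraud_score(application):
    # Stand-in for a trained fraud model; a real one would consume
    # many claim features and return a learned probability.
    return 0.92 if application["provider"] == "example-bad-actor" else 0.10

def route_application(application):
    # Rules-based gate: deterministic completeness check first.
    missing = REQUIRED_FIELDS - application.keys()
    if missing:
        return "reject: missing " + ", ".join(sorted(missing))
    # Model-based gate: the probabilistic score controls the flow.
    if fraud_score(application) >= 0.80:
        return "refer to investigator"
    return "continue processing"

app = {"name": "J. Doe", "member_id": "B123",
       "provider": "acme-clinic", "service_date": "2024-01-15"}
print(route_application(app))  # continue processing
```

Note the ordering: the cheap, deterministic check runs before the model is ever consulted, and a high score routes to a person rather than triggering an automatic denial.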
But here’s where things get dicey. In our world, we must get things right. At Qlarant, we strive to ONLY do what’s right.
Arguing for A Combination of Techniques
The rush toward AI can be reckless when the remaining 5 percent matters. False positives negatively impact lives on both sides of a health care fraud case. Our industry can produce better results by combining techniques. Rules-based models were the foundation of our industry before predictive analytics came on the scene.
Systems that apply full-coverage, rules-based models for straightforward compliance, machine learning for fraud discovery and classification, and AI agents for more humanistic analysis and investigation are easier to predict. A combination of techniques brings determinism to our work and reduces our exposure when scenarios fall beyond AI's probabilities. With clever orchestration and human-in-the-loop policies, AI+HI (artificial intelligence plus human intelligence) can significantly reduce false positives, improve efficiency, and deliver benefits faster to those who deserve them.
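That layering can be sketched as a single triage function (the thresholds and outcome labels are illustrative assumptions): rules settle the clear-cut compliance cases deterministically, the model flags likely fraud, and anything in the gray zone escalates to a person.

```python
def triage(rule_violations, model_score):
    # 1. Full-coverage rules: deterministic, auditable outcomes.
    if rule_violations:
        return "deny: " + "; ".join(rule_violations)
    # 2. Confident model flag: routed to a human investigator,
    #    never auto-denied, to keep false positives in check.
    if model_score >= 0.95:
        return "queue for investigator"
    # 3. Gray zone: human-in-the-loop review.
    if model_score >= 0.60:
        return "queue for human review"
    # 4. Everything else proceeds, delivering benefits faster.
    return "approve"

print(triage([], 0.12))  # approve
```

The design choice worth noticing is that the model never issues a denial on its own: deterministic rules and human reviewers own the adverse outcomes, while the model only prioritizes attention.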
AI has forced us to look hard at our processes, but that doesn't mean surrendering everything to AI. It requires a new way of thinking: understanding policy, understanding technical and model limitations, and understanding the human impact.
AI isn't going anywhere, and its uses will only proliferate. Teams that pursue responsible AI integration will be FWA leaders, and real people stand to benefit.