A Better-Late-Than-Never AI Primer
The hot topic of GenAI continues to make headlines, both in the mainstream and in the Medicaid program integrity world. RIViR Reads readers reached out with great questions about copilots, but many were also interested in AI basics. From a frequent reader: “You just kind of jumped into ChatGPT. What really makes this stuff work?”
I need to apologize. We’ve written a lot about AI over the past year and a half, but apparently I made too many assumptions about people’s AI knowledge. We take our public service responsibilities seriously at RIViR Reads, so today we’re offering you a short primer on AI technology.
What Is AI, Again?
Now that AI is built into every conceivable device on the planet, I was asked, “What is and isn’t AI?” AI is broad, very broad. AI is any software technology that mimics human decision making, and it encompasses everything from simple if-then-else programs to the latest and greatest version of ChatGPT. Machine learning is a subset under the AI umbrella, where statistical analyses and predictive analytics use sophisticated mathematics to find patterns in data and make predictions from it. Deep learning is the realm of sophisticated neural networks used for applications such as computer vision, customer recommendations, and more. GenAI is a broad category of technologies that combines machine learning, deep learning, and other learning techniques. That combination is the basis of the transformer technology used in most GenAI products today. GenAI can be used to generate images from text, music from text, videos from text, and even text from text. It can also summarize and interpret input data to generate new information.
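To make that spectrum concrete, here is a minimal sketch contrasting its two ends: a hand-coded if-then-else rule versus a rule learned from patterns in data. Every number in it is invented purely for illustration.

```python
# A toy illustration of the AI spectrum described above.
# Both functions "mimic human decision making" — one with a
# hand-written rule, one with a threshold learned from data.
# All amounts here are invented for illustration.

from statistics import mean, stdev

def rule_based_flag(claim_amount: float) -> bool:
    """The if-then-else end of the spectrum: a fixed, hand-coded rule."""
    if claim_amount > 10_000:
        return True  # flag for review
    return False

def learn_threshold(historical_amounts: list[float]) -> float:
    """The machine-learning end: derive the rule from patterns in data.
    Here, anything more than two standard deviations above the mean."""
    return mean(historical_amounts) + 2 * stdev(historical_amounts)

history = [120.0, 95.0, 310.0, 150.0, 87.0, 4_500.0, 210.0]
learned = learn_threshold(history)
print(rule_based_flag(12_000))  # True — the hand-coded rule fires
print(4_500.0 > learned)        # the data-driven rule can disagree
```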
ChatGPT can generate a paragraph of text because it is pre-trained on virtually all the text scraped from the Internet, and it uses that training to recognize patterns in the text you prompt it with. Why does it seem so unique and lifelike? ChatGPT has been trained on a multitude of human creations. Sprinkle in some random numbers, and ChatGPT can remix what it was trained on to generate something fresh. Transformers work the same way to generate new music, images, and video. Well, why does it make stuff up sometimes? ChatGPT is designed to generate grammatically sound text. Any text, not necessarily truthful information. When ChatGPT can’t find support in its training data, it will still randomize output to generate something that seems plausible. This is getting better, however, because ChatGPT can now look up information in real time and learn from that information in context to summarize it.
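Those “random numbers” are, in a real transformer, controlled sampling over the model’s predicted next-word probabilities. Here is a minimal sketch of temperature sampling; the tiny vocabulary and scores are invented for illustration.

```python
# Minimal sketch of how "sprinkling in random numbers" works:
# the model scores every candidate next token, and we sample
# from those scores. Higher temperature = more randomness.
# The vocabulary and scores below are invented.

import math
import random

vocab = ["claim", "denied", "approved", "pending"]
logits = [2.0, 1.0, 0.5, -1.0]  # model's raw scores for the next token

def sample_next_token(logits, temperature=1.0):
    # Softmax with temperature turns raw scores into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(logits, temperature=0.2))  # nearly always "claim"
print(sample_next_token(logits, temperature=1.5))  # more surprising picks
```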
Why Does AI Use So Much Electricity?
Deep learning and transformer models use algorithms that can take months to train, even with hundreds of computers working simultaneously. The computers used to train these models rely on specialized chips called GPUs (graphics processing units) that run the advanced math functions needed for training. GPUs are power hogs, drawing hundreds of watts each for their computations. One of the most advanced AI chips, the Nvidia H200, uses up to 700 watts when it’s operating at full load. Many AI servers are configured with 4 H100 or H200 chips and burn 2.8 kW of power when training, on GPU power alone. Similar computers handle your ChatGPT requests, and OpenAI reports getting 2.5 billion requests a day for ChatGPT.
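The arithmetic behind that 2.8 kW figure is straightforward. Here is a back-of-the-envelope sketch; the server count and training duration are invented assumptions purely for illustration.

```python
# Back-of-the-envelope power math for the figures above.
# GPU wattage and GPUs-per-server come from the article; the
# 100-server, 30-day training run is an invented assumption.

GPU_WATTS = 700       # Nvidia H200, at full load
GPUS_PER_SERVER = 4

server_kw = GPU_WATTS * GPUS_PER_SERVER / 1000
print(f"Server GPU draw: {server_kw} kW")      # 2.8 kW

servers, days = 100, 30                        # hypothetical training run
kwh = server_kw * servers * 24 * days
print(f"Energy for one run: {kwh:,.0f} kWh")   # 201,600 kWh
```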
What Is Prompting and Prompt Engineering?
When you ask ChatGPT to perform a task, that’s called a prompt. You’re prompting ChatGPT to do something, just like you prompted me for information about AI. Same deal. Prompt engineering is the structuring of instructions and information, formatted so the AI properly understands your request and successfully executes it. Prompt engineering really is engineering, and it’s still important for anything beyond simple summarization. For example, if you wanted ChatGPT to analyze and sort medical claims, you would need to provide it with a list of claims, explain exactly what is in the claims, explain the fields in the claims, provide instructions for analysis, and finally tell the AI what kind of output you’re looking for. The organization, ordering, and writing of clear instructions to accomplish all of this is engineering. There’s art and science to it. In the early days of GPT-3, prompt engineering jobs were offering $180k and up. AIs have gotten better at interpreting language; however, prompt engineering is still needed for more sophisticated tasks.
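Here is a minimal sketch of what that claims-analysis prompt might look like when assembled in code. The claim fields, sample values, and instructions are all invented for illustration; a real prompt would use your actual claim schema.

```python
# A minimal sketch of prompt engineering for claims analysis.
# Field names, sample claims, and instructions are invented.

import json

claims = [
    {"claim_id": "C-001", "cpt_code": "99214", "billed": 285.00},
    {"claim_id": "C-002", "cpt_code": "99215", "billed": 3150.00},
]

prompt = f"""You are assisting with medical claims analysis.

DATA
Each claim has these fields:
- claim_id: unique identifier
- cpt_code: the procedure billed
- billed: dollar amount billed

Claims (JSON):
{json.dumps(claims, indent=2)}

INSTRUCTIONS
1. Compare each billed amount to typical amounts for its CPT code.
2. Sort the claims from most to least unusual.

OUTPUT
Return a JSON list of claim_ids in sorted order, with a one-sentence
reason for each. Return JSON only, no extra text.
"""
print(prompt)
```

Notice the structure: data first, then field definitions, then instructions, then the required output format. That ordering is the “engineering” part.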
How Can We (Program Integrity) Really Use This Technology? I Mean Really Use It.
Here’s a real answer for a real question. Humans can eyeball summarized information and make decisions on it confidently. Humans aren’t particularly good at sifting through large amounts of information, remembering discovered patterns, and recalling where specific pieces of information can be found in a mountain of material. Machines do that well, and AI-enabled machines can do it very well.
At Qlarant, we’ve identified key aspects of the FWA claims analysis, record review, and investigative processes where human intelligence can get bogged down by the magnitude of data and pages of information involved in medical fraud. Our 50 years of experience gives us a unique perspective on carefully applying GenAI technology. Financial analysis summarization can bring more attention to specific transactional behaviors. Medical records summarization can speed up the medical record review process, saving reviewers hours. New visualization technologies can create dynamic reports highlighting troublesome schemes. And case summarization can help overtaxed MFCUs and AGs pinpoint cases with the highest probable collections.
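As one concrete example, here is a minimal sketch of what medical record summarization could look like, using the OpenAI Python client. This is not Qlarant’s implementation; the model choice, prompt wording, and sample record are assumptions for illustration only.

```python
# An illustrative sketch of record summarization — NOT an actual
# product implementation. Model name, prompt, and the sample
# record excerpt are all assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

record_text = "Patient seen 2024-03-02 for follow-up ..."  # excerpt

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize medical records for a fraud reviewer. "
                    "List dates of service, procedures billed, and any "
                    "inconsistencies. Be concise and cite page numbers "
                    "when present."},
        {"role": "user", "content": record_text},
    ],
)
print(response.choices[0].message.content)
```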
This isn’t a pipe dream, and this isn’t hype. It’s here today. TENEX and ClaimsVue Health tooling can be integrated into existing workflows to turbocharge program integrity operations while doing more with less.

