AI-Generated Fraud to Keep an Eye On in 2026
Happy New Year! Can I still say Happy New Year since it’s the first 2026 installment of RIViR Reads? I hope so. A great way to start any new year is to peer into the constantly changing future and prepare for what’s to come. Toward the end of 2025, fraud professionals witnessed the first sophisticated threats to financial systems from fraudsters wielding increasingly powerful, publicly available AI tools. Healthcare program integrity professionals attending NAMPI (National Association for Medicaid Program Integrity) and NHCAA (National Health Care Anti-Fraud Association) conferences bore witness to fraud schemes where software tools were used to create fake claims and fake beneficiaries. With advanced GenAI tools readily available for $11 per month, here are some things to keep an eye on in 2026.
Increasing Use of Deepfakes
Beneficiary fraud isn’t limited to identity theft and falsifying medical needs. The growing use of robocalling technology and deepfaked voice calls is a rising cause of beneficiaries sharing sensitive information with fraudsters. Deepfakes are AI-generated recordings that convincingly imitate real people. Starting from just a snippet of a victim’s voice or a short video clip, GenAI technology can create believable renditions of that person’s speech or likeness. Fraudsters can use deepfakes to pressure victims into applying for benefits they don’t need or purchasing insurance or other products.
Fake Credentialing
Credential fraud is one of the fastest-growing fraud schemes in the banking industry. According to the Federal Reserve, synthetic identities cost banks $6 billion a year. Synthetic identities are built on fake credentials: falsified IDs, birth certificates, college diplomas, and other documents that GenAI tools optimized for image generation can produce with frightening accuracy. These tools have even been shown to render creases, signs of age, and other imperfections that let a document pass at-a-glance inspection. In 2021, $22.5 million in false claims were billed to Medicare and Medicaid by uncredentialed providers presenting false credentials. Fraudsters can use fake credentialing to create new provider, beneficiary, and even payer identities.
Proliferation of Large Language Model Fraud
Large language models (LLMs) are a powerful option in the fraudster’s toolkit. LLMs are designed to generate text from text, and these tools can do far more than write convincing fake emails. LLMs can be trained on real medical records, accurate diagnoses, and provider documentation, and fraudsters can use such fine-tuned LLMs to generate false medical records and diagnoses with high believability. Furthermore, LLMs can generate text that avoids many of the telltale signs of forged medical records, such as misspellings and incorrect locations or names. The prompting capability of many LLMs allows fraudsters to supply pertinent, yet false, facts that the LLM weaves into new documents modeled on the real thing.
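There’s a flip side to all that polish: text that is too clean can itself be a signal. As a purely illustrative sketch (the features, function names, and threshold below are all hypothetical, not a production detector), a simple stylometric screen in Python might flag records whose writing is suspiciously uniform for human review:

```python
import re
import statistics

def stylometry_features(text: str) -> dict:
    """Compute simple stylometric signals for a document."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    sentence_lengths = [len(s.split()) for s in sentences]
    return {
        # Vocabulary diversity: LLM output tends to be unusually uniform.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # "Burstiness": humans vary sentence length more than LLMs do.
        "sentence_length_stdev": (
            statistics.pstdev(sentence_lengths)
            if len(sentence_lengths) > 1 else 0.0
        ),
    }

def flag_too_clean(text: str, min_stdev: float = 2.0) -> bool:
    """Flag documents whose sentence lengths are suspiciously uniform.

    The 2.0 threshold is a placeholder; a real value would be
    calibrated against a corpus of known-genuine records.
    """
    return stylometry_features(text)["sentence_length_stdev"] < min_stdev
```

Heuristics like these are weak on their own and will misfire on templated but legitimate documentation. They only earn their keep as one signal among many, which is exactly the layered idea in the next section.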
Using AI to Fight AI
AI tools may be cheap and easily scaled, but all is not lost. Traditional tooling may not be enough to thwart GenAI fraud, but new tools, and more importantly new techniques, are gaining ground. Layered approaches, collaboration, and a nuanced brand of fighting fire with fire can beat the GenAI-enabled fraudster. We’ll go deeper into these ideas in our next installment of RIViR Reads.
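In the meantime, here’s a small taste of what fighting fire with fire can look like: a minimal sketch, assuming scikit-learn and synthetic stand-in data (the per-claim features and every number below are invented for illustration). An unsupervised anomaly detector learns what normal claims look like and surfaces outliers for human review:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-claim features: billed amount, number of procedure
# codes, days from service to submission, and provider claim volume.
rng = np.random.default_rng(0)
normal_claims = rng.normal(loc=[250.0, 3.0, 14.0, 40.0],
                           scale=[75.0, 1.0, 5.0, 10.0],
                           size=(1000, 4))

# Unsupervised: the model learns "normal" without labeled fraud, which
# matters when GenAI-enabled schemes are too new to have labels.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_claims)

# Five fabricated outliers standing in for suspicious submissions.
odd_claims = rng.normal(loc=[2500.0, 12.0, 1.0, 400.0],
                        scale=[100.0, 1.0, 0.5, 20.0],
                        size=(5, 4))
print(model.decision_function(odd_claims))  # lower = more anomalous
```

The score alone shouldn’t deny a claim. In a layered approach it routes the claim to an investigator, alongside document forensics, stylometric signals like the earlier sketch, and cross-payer collaboration.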


