Getting Started with AI, Part 2
Hi Dear Reader! In our previous installment of RIViR Reads we focused on the setup for building a successful AI program. Today, we’re focusing on what it takes to build a successful practice around AI technology.
Solutions Are More Than Models
According to an August 2025 MIT report, 95% of AI projects fail. In our last RIViR Reads, we identified how ambiguous requirements are a common cause of failure for most AI projects. The second cause of failure is overreliance on the AI to be an end all be all solution.
AI technologies are exceedingly powerful. However, there are use cases for software and people-enabled processes. Specifically in terms of GenAI, the technology is a powerful companion when there is a need for searching documentation, automatic summarization, and context-based verification (the process of verifying that documentation or information products conform to standards and policies). GenAI’s capabilities in this area make AI attractive for compliance, governance, and program integrity organizations.
When designing AI-enabled here are key questions you should ask:
- What happens before data is input to the AI?
- What happens after data is generated by the AI?
The idea of garbage-in/garbage-out has never been more critical than now. An AI solution’s success is entirely dependent on the quality and provenance of information being fed into it. Understanding what happens before data is entered into the AI helps your team develop guardrails that can prevent models from hallucinating or giving up its secrets.
On the garbage-out side, identifying how information produced by the AI is handled helps build safeguards and security controls preventing data leakage.
Answers to these questions will enlighten your team. To protect users and intellectual property, some degree of software solutioning around the AI can be used for input and output validation. Additionally, most business processes require degrees of logic, and differing workflows depending on the decisions an AI may make. Thoughtfully crafted user interfaces and workflow control systems can make AI-enabled solutions more robust than presenting a prompt on the screen. Many AI projects failed because users were presented by a textbox nudging the user to ask questions, versus a solution that guides users into specific action to get the mission accomplished.
Adversarial Testing
QA is compounded in the AI systems world. Human creativity is adept at coming up with crazy prompts, realish looking input data, and mimicking an AI. All generative systems should be put through a thorough battery of adversarial tests. Prompts can literally be anything. In addition to uncovering potentially hallucinatory outputs, adversarial testing should also include prompting to discover how an AI may divulge its secrets. Secrets can be the model’s underlying traditional cyber secrets, pre-trained data, and the model’s system instructions.
Machine learning models also have their weaknesses. Computer vision models use deep neural networks and machine learning algorithms to train models to classify images. CV models can be fed with an image like a jpeg and report if it is a person, dog, or school bus. Computer vision models are susceptible to corrupted images that may include minor blemishes and pixel-level modifications that human viewers ignore.
Continuous Monitoring and Improvement
Continuously monitoring your AI solution brings your design, development, and deployment all together. Our security teams have raised continuous monitoring into everyone’s consciousness to help protect our systems. Now, a broad monitoring apparatus keeps your AI solution running well and adaptable over time.
AI models are trained on data from the past. Users access AI systems in the present. Model drift occurs when a model’s pre-trained data is so out-of-date the model underperforms and gradually drifts from delivering good results. A solid monitoring system can be used to analyze your model’s outputs against a baseline to determine if it should be retrained with updated data.
Be careful. It’s not enough to just monitor. An AI-enabled solution’s obsolescence is easier to spot and is more impactful than traditional software systems. Many systems featuring outdated looking buttons and shades of green are still performing…well. Incorrect output from a GenAI system will never get better. The underlying system must be retrained or replaced depending on how our users utilize the system.
See Me at PSC
Thanks for reading Part 2 of Getting Started with AI. At Qlarant, we’re bringing many technologies to program integrity world and it’s in our DNA to share our knowledge and discoveries during this exciting time.
If you’re in West Virginia, I’ll be at the Professional Service Council annual conference this year. AI systems, risks, and playbooks are topics I’ll be actively engaged in. If you’re at the conference be on the lookout for me.

