Getting Started with AI, Part 1

posted on March 31, 2026 by Will Mapp, III


 
AI is everywhere. After kicking 2026 off with webinars, blogs, and seminars on AI, I considered making this month’s blog about new AI technologies, especially since Google announced TurboQuant, a new compression technology that will speed up AI model training. However, I was recently on a sales call when the customer asked, “How do we get started?” That inspired me to look through the archives, where I discovered we had never talked about how to start building a successful AI program.

So, here we go.

Settle on an Objective and Know the Outcome

When it comes to technical programs, most organizations want to jump in headfirst and start building. AI is, at its core, an IT program. However, AI differs from other IT initiatives in what it demands of data. AI systems’ ability to predict outcomes, generate new information, and make decisions requires access to highly sensitive data at volumes not seen before.

Any organization starting an AI program should first decide on the expected outcomes of deploying AI systems, justify the costs against the benefits, and be prepared to develop a new relationship with its data and its risk.

Most organizations consider AI systems in order to do one of two things: improve human decision-making or increase human productivity (often both). All the fancy analytical predictions and GenAI magic boil down to these two use cases.

So…

Take Time to Understand Risks

Building a production AI system carries more risk than other IT systems. According to ISACA, the single biggest risk to AI system implementations is the data ingested into the system. That data is susceptible to multiple attacks that can degrade system performance. For instance, data poisoning is an attack in which attackers corrupt the source data; AI trained on corrupted data will deliver inaccurate results. This kind of attack happens before the training data is ever delivered to developers.
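Because poisoning happens upstream, one simple defense is to verify that the dataset you received is byte-for-byte the dataset that was published. A minimal sketch using checksums (the file name and recorded digest below are hypothetical examples, not from any real dataset):

```python
import hashlib

# Digests recorded when the dataset was published, kept separately
# from the data itself (values here are hypothetical examples).
EXPECTED_HASHES = {
    "train_data.csv": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_dataset(path: str, expected_hex: str) -> bool:
    """Return True if the file's SHA-256 digest matches the recorded value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large training files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

A check like this only detects tampering in transit; it does not protect against data that was poisoned at the source, which is where governance and vetting of data providers come in.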

AI systems are often trained on data collected from human interactions. For example, consumer purchase information, patient medical information, and student report cards can all be used to train new AI systems. Regulatory and compliance breaches can be costly, bringing not only fines but also civil penalties for mishandling data.

We mitigate risks like these by creating AI governance frameworks and operating steering committees staffed with leaders who understand the business and can devise strategies that protect people’s rights and their data.

Use Smart (and Proven) Design Principles

Proven design principles provide a solid foundation for new models.

AI systems are heavily dependent on models.

New models are deployed every week, and existing models receive smarter upgrades seemingly in real time. Vendor lock-in is real, and AI hasn’t eliminated that risk. AI systems built on fluid design patterns can switch models when necessary.
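One common way to keep a design fluid is to put a thin interface between the application and any single vendor's model, so the rest of the system never depends on one SDK. A minimal sketch in Python (the vendor classes are hypothetical placeholders, not real SDKs):

```python
from typing import Protocol

class TextModel(Protocol):
    """The only interface application code depends on."""
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    """Hypothetical adapter wrapping one vendor's API."""
    def complete(self, prompt: str) -> str:
        return f"[vendor A] response to: {prompt}"

class VendorBModel:
    """Hypothetical adapter for an alternative vendor."""
    def complete(self, prompt: str) -> str:
        return f"[vendor B] response to: {prompt}"

def summarize(model: TextModel, text: str) -> str:
    # Application logic is written against TextModel, so swapping
    # vendors becomes a one-line configuration change.
    return model.complete(f"Summarize: {text}")
```

With this pattern, replacing a model means writing one new adapter rather than rewriting every call site.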

Thanks to the proliferation of AI models, the industry has responded with data sheets and model cards that give system builders the information they need to choose the right model. A model card gives usage details about an AI model. It contains what you would expect: an overview, version information, and intended usage. It also provides extended details about the training data and data sources used in the model, how the data was cleaned and scrubbed of sensitive information, and performance metrics. Model cards may also include ethics and bias mitigation strategies. Those details are especially important for designers working in the education, financial, and medical fields, where AI systems carry a higher risk of inadvertently perpetuating biases.
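In practice a model card is a structured document. The sketch below shows the kinds of fields described above as a plain Python dictionary, along with a simple gate a governance process might apply; the field names and values are illustrative, not a formal standard:

```python
# Illustrative model card structure (field names are examples,
# not a formal standard).
model_card = {
    "name": "example-classifier",
    "version": "1.2.0",
    "overview": "Classifies support tickets by topic.",
    "intended_use": "Internal triage only; not for automated decisions about people.",
    "training_data": {
        "sources": ["internal ticket archive (2022-2024)"],
        "cleaning": "PII removed; duplicates dropped",
    },
    "performance": {"accuracy": 0.91, "f1": 0.88},
    "ethics_and_bias": "Evaluated for disparate error rates across customer regions.",
}

def required_fields_present(card: dict) -> bool:
    """Check the card covers the basics before a model is approved for use."""
    required = {"name", "version", "intended_use", "training_data", "performance"}
    return required <= card.keys()
```

A check like this can sit in a governance pipeline so that models without adequate documentation never reach production.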

To Be Continued

In part 2, we’ll learn about system building, maintenance, and operations to achieve the performance we desire. AI presents many new and exciting opportunities to deliver for our customers. AI also invites a handful of new challenges. Taking time upfront to understand and plan will help you build safe and successful AI systems.

See Me at PSC

I’ll be at the Professional Services Council annual conference this year, actively engaged on topics like AI systems, risks, and playbooks. If you’re at the conference, be on the lookout for me.

 

about the author

As Chief Technology Officer, Will Mapp keeps a constant eye on the future and ensures Qlarant is at the forefront of the latest and emerging technologies. See all posts from Will Mapp, III.
