Good Decisions: Episode 3

The EU AI Act and Minimum Viable Governance

In this episode, Dave Trier discusses the minimum viable governance capabilities required to safeguard your enterprise from AI risks.

Register for the series.

Navigating the complexities of AI governance and risk management can be daunting, which is why we're here to provide practical guidance and actionable insights to jump-start your AI governance journey. In this episode, Dave Trier will help you gain a deeper understanding of the minimum viable governance capabilities required to safeguard your enterprise from AI risks, and he provides a roadmap for implementing these critical capabilities within your organization. Whether you are at the early stages of your AI journey or looking to enhance your existing governance framework, this webinar is your opportunity to gain the knowledge and resources needed to make informed decisions and drive positive change. Watch this virtual event and learn how to shape the future of AI governance and risk management at your company!

Download the slide deck.

For a deeper dive on the topic, download the whitepaper: Minimum Viable Governance: Must-Have Capabilities to Protect Enterprises from AI Risks and Prepare for AI Regulations, including the EU AI Act

Transcript

Thank you for joining us for Episode 3 of the Good Decisions monthly webinar. Today's topic is the EU AI Act, which was approved about two weeks ago, and Minimum Viable Governance (MVG). These are the must-have capabilities to help protect your enterprise from AI risks. I’ll kick things off, and then Dave Trier, our subject matter expert on AI governance, will share his thoughts on MVG and its importance.

Our agenda is straightforward:

  • We’ll discuss AI risks that are frequently in the news and in our conversations with Fortune 500 executives.
  • We’ll cover the EU AI Act, breaking it down into key points and a timeline for when it’s expected to take effect.
  • Then we’ll dive into MVG, outlining the most important steps organizations must take to safeguard against AI risks.
  • Finally, we’ll conclude with a Q&A session. Feel free to submit questions in the chat at any point, and we’ll address as many as we can during the discussion.

Let’s get started.

Significance of AI in Business
Dave and I are fortunate to talk to AI leaders at Fortune 500 companies, and we've noticed two things. First, AI is becoming a key investment area: FactSet reported that more than one-third of S&P 500 companies mentioned AI on their earnings calls, a sign of its growing significance in business operations.

Second, we're seeing increasing risks related to AI, including financial, brand, intellectual property, and regulatory issues. These risks are making headlines regularly. The question isn't if AI will go wrong, but when. The challenge is how to prepare and protect your organization while still embracing these powerful innovations.

Overview of the EU AI Act
One of the major concerns is regulatory compliance. On March 13, the European Parliament approved the EU AI Act. Dave and I have spent a lot of time reading the 459-page document, so here’s what you need to know in a nutshell. Although it’s early days, with many steps still to follow, here are five key points:

  1. Purpose: The Act aims to promote responsible innovation while preventing worst-case scenarios like discrimination or social scoring. It’s designed to keep humans in control of AI.
  2. Scope: Similar to GDPR, the EU AI Act is extra-territorial. If your business operates in the EU or if AI outputs are used in the EU, this regulation applies to you.
  3. Impact: The Act regulates the entire AI lifecycle, from development to maintenance and compliance, affecting both employees and customers.
  4. Penalties: Fines for non-compliance can reach up to 7% of a company's global annual revenue, nearly double the 4% maximum under GDPR. This highlights the importance of preparing now to avoid the significant costs and stress that may come with this regulation.
  5. Risk-based approach: The Act categorizes AI systems into three risk levels—unacceptable, high-risk, and minimal-risk. It places the highest regulatory burden on high-risk systems, such as those used in healthcare and banking.

Timeline of the EU AI Act
Let’s talk about the timeline. The Act was approved on March 13. It’s expected to go through a final review and endorsement in June, with enforcement starting around July. Here are the key milestones:

  • Six months post-enforcement: Prohibitions will be placed on unacceptable AI systems, such as those used for social scoring.
  • One year post-enforcement (around July 2025): Obligations and penalties for providers of general-purpose AI models will begin to apply.
  • Two years post-enforcement: Obligations for high-risk systems will take effect.

Now, the big question is: how can your organization prepare?

Accountability for AI Risks
Before I hand it over to Dave, let’s talk about accountability. Who in your organization is responsible for managing AI risks? Let’s do a quick poll. Please take a moment to share who you believe holds accountability for AI risks within your C-suite.

Alright, Dave, over to you.

Minimum Viable Governance (MVG)
Thank you, Jay. As Jay mentioned, AI offers massive opportunities, but it also introduces risks to brand reputation and regulatory compliance, particularly with new regulations like the EU AI Act. So, how can organizations embrace AI while safeguarding themselves? That’s where Minimum Viable Governance (MVG) comes in.

Visibility and Inventory
The first element of MVG is visibility. You can’t govern what you can’t see, and it’s difficult to manage AI effectively without knowing where it’s being used across your organization. The EU AI Act emphasizes the need for an inventory of all AI systems, categorized by risk level. This inventory should include details like system scope, limitations, technical documentation, and transparency disclosures.

One of the challenges in building this inventory is the variety of AI technologies—internally developed AI, vendor products, and embedded AI systems. Each requires careful tracking and management. The good news is that the same basic principles apply across different regulations: you need a governance inventory to track AI usage, risks, and accountability.
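To make this concrete, here is a minimal sketch of what a single governance inventory record might capture. The field names and labels are illustrative assumptions, not ModelOp's schema or the EU AI Act's exact terminology, but they cover the details discussed above: scope, limitations, documentation, transparency, risk level, and where the AI came from.

```python
# Minimal sketch of one governance inventory record. Field names and labels
# are illustrative assumptions, not a specific product's schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MINIMAL = "minimal"

class Origin(Enum):
    INTERNAL = "internally developed"
    VENDOR = "vendor product"
    EMBEDDED = "embedded in third-party software"

@dataclass
class AIInventoryRecord:
    system_name: str
    owner: str                                    # accountable business owner
    origin: Origin                                # internal, vendor, or embedded AI
    risk_level: RiskLevel                         # risk categorization
    intended_scope: str                           # approved use cases
    known_limitations: List[str] = field(default_factory=list)
    technical_docs: List[str] = field(default_factory=list)  # links to documentation
    transparency_notes: str = ""                  # disclosures made to affected users

# Example: a vendor chatbot used in customer support
record = AIInventoryRecord(
    system_name="support-chatbot",
    owner="Head of Customer Service",
    origin=Origin.VENDOR,
    risk_level=RiskLevel.MINIMAL,
    intended_scope="Answering product FAQs for EU and US customers",
    known_limitations=["Not approved for billing or legal advice"],
)
print(record.system_name, record.risk_level.value)
```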

Rolling Out an MVG Approach
To help organizations manage this, we recommend a dynamic inventory approach. Instead of manually tracking every AI system, which can be slow and costly, we suggest automating the process, constantly updating the inventory as AI systems evolve. With simple integrations, you can streamline the entire process while ensuring compliance.
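Here is a hypothetical sketch of what that dynamic approach can look like in practice: a small integration hook that creates or refreshes an inventory entry whenever a deployment pipeline or model registry reports a change, so the inventory never goes stale. The function and field names are made up for illustration and are not any particular product's API.

```python
# Hypothetical sketch of a dynamic inventory: a small hook upserts a record
# whenever an AI system changes, so the inventory stays current automatically.
# The in-memory dict stands in for a real governance database.
from datetime import datetime, timezone

inventory = {}

def upsert_ai_system(system_name: str, **attributes) -> dict:
    """Create or refresh an inventory entry when a pipeline or registry event fires."""
    entry = inventory.setdefault(system_name, {"system_name": system_name})
    entry.update(attributes)
    entry["last_updated"] = datetime.now(timezone.utc).isoformat()
    return entry

# Called from a CI/CD or model-registry webhook after each deployment
upsert_ai_system(
    "credit-scoring-model",
    version="2.4.1",
    risk_level="high",
    status="in production",
)
print(inventory["credit-scoring-model"]["last_updated"])
```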

Here’s an example from our own product. We’ve helped many large enterprises implement scalable AI governance solutions without slowing innovation. Starting with visibility into all AI use cases, risk tiering, and system status, you can gradually build a robust governance framework.

Challenges in Using Different AI Models
One of the common challenges organizations face is managing a combination of internally developed, vendor-provided, and embedded AI models. Whether you're using ChatGPT, an internally developed Python model, or off-the-shelf software with embedded AI, how do you ensure that the same principles and controls apply across the board?

This is where we see some consistent themes across regulations. Transparency is key—not just transparency in AI use, but also in the data, technical assets, and documentation associated with each model. Comprehensive testing is often overlooked, especially with vendor and embedded models. We'll dive deeper into why that's important.

Need for Automation in AI Governance
Light controls can be implemented early on to ensure compliance not only with the EU AI Act but also with regulations across the U.S. and globally. One pitfall we often see is organizations relying on spreadsheets to track AI models, trusting employees to follow the proper processes manually. That approach can work for a handful of models, but it doesn't scale.

Systematic Enforcement and Automation
Given the pervasive use of AI across marketing, back-office operations, and supply chains, a more systematic approach is needed. This is where automation becomes critical. Automation ensures that regardless of the AI technology or regulation involved, all steps in the governance process are enforced. Vendors like ModelOp have invested significant time and resources into developing automated solutions that help organizations implement best practices efficiently.
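As a simple illustration of what systematic enforcement means, consider a deployment gate that refuses to promote an AI system until the controls required for its risk tier are complete. This is a generic sketch with assumed control names, not a description of any specific vendor's implementation.

```python
# Generic sketch of a deployment gate: block an AI system unless every control
# required for its risk tier is complete. Control names are assumptions.
REQUIRED_CONTROLS = {
    "minimal": ["inventory_entry", "owner_assigned"],
    "high": ["inventory_entry", "owner_assigned", "bias_test_passed",
             "technical_documentation", "human_oversight_plan"],
}

def deployment_gate(system: dict) -> None:
    """Raise an error if any control required for the system's risk tier is missing."""
    required = REQUIRED_CONTROLS.get(system["risk_level"], REQUIRED_CONTROLS["high"])
    missing = [name for name in required if not system.get(name)]
    if missing:
        raise RuntimeError(
            f"{system['name']} blocked from deployment; missing controls: {missing}"
        )

try:
    deployment_gate({
        "name": "resume-screening-model",
        "risk_level": "high",
        "inventory_entry": True,
        "owner_assigned": True,
        "technical_documentation": True,
        "human_oversight_plan": True,
        "bias_test_passed": False,   # the gate stops the deployment here
    })
except RuntimeError as gate_error:
    print(gate_error)
```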

Transition to Comprehensive Control and Monitoring
By leveraging automation, you can move from a scenario where you're unsure about where AI models are used or if they comply with regulations, to one where you have full visibility into open items, risks, and necessary controls. This ensures that compliance is maintained across the organization and across all technologies.

Executive-Level Perspective on AI Governance
At the executive level, two key questions always arise: "How are we using AI?" and "How are we safeguarding the organization?" Reporting plays a critical role in ensuring that business, financial, regulatory, and risk perspectives are all addressed. The third element of Minimum Viable Governance (MVG) is focused on reporting.

Reporting Requirements in the EU AI Act
The EU AI Act outlines specific reporting requirements, particularly for high-risk systems. These systems require ongoing monitoring of operations, including post-market monitoring. This applies to both developers of generative AI and users of generative AI technologies. Continuous monitoring is needed to ensure that AI systems deliver expected results and remain robust, stable, and ethically sound.

Ongoing Monitoring and Central Registration
Another requirement in the EU AI Act is the development of a central registration system for all high-risk AI systems. Even if you're not an AI developer, using AI for healthcare decisions or hiring decisions could classify your system as high-risk, requiring registration and ongoing monitoring.

Challenges in Reporting AI Usage and Risks
Reporting challenges stem from the rapid changes in technology and the evolving regulatory landscape. Common themes in reporting across regulations include maintaining an inventory of AI systems at the executive level, identifying high-priority risks, and regularly updating performance metrics and usage statistics. High-risk AI systems, in particular, require consistent reporting.

Automated Reporting and Top-Level KPIs
A common pitfall is attempting manual reporting across teams, which often results in missed AI usage and inconsistent data. Most importantly, it can leave unknown risks unaddressed. However, if you implement the first two elements of MVG—inventory and light controls—reporting becomes much easier. Automation can handle testing and provide the necessary reports, including key performance indicators (KPIs) and key risk indicators (KRIs).

Executives must align on the top-level KPIs and KRIs needed to manage AI usage effectively across the organization, enforcing policy through a top-down approach.
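If the inventory and light controls are in place, those top-level KPIs and KRIs can be rolled up automatically rather than assembled by hand. Here is a minimal sketch; the metric names are illustrative assumptions about what an executive dashboard might track.

```python
# Minimal sketch of automated executive reporting: roll the inventory up into
# top-level KPIs and KRIs. Metric names are illustrative assumptions.
from collections import Counter

def executive_summary(inventory: list) -> dict:
    by_risk = Counter(system["risk_level"] for system in inventory)
    overdue = [s["name"] for s in inventory if s.get("monitoring_overdue")]
    return {
        "total_ai_systems": len(inventory),        # KPI: overall AI footprint
        "systems_by_risk_tier": dict(by_risk),     # KPI: where the risk sits
        "high_risk_monitoring_overdue": overdue,   # KRI: open exposure to escalate
    }

systems = [
    {"name": "support-chatbot", "risk_level": "minimal"},
    {"name": "credit-scoring-model", "risk_level": "high", "monitoring_overdue": True},
]
print(executive_summary(systems))
```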

Use of Built-In Automation for Performance Testing
Here’s a simplified example of how we’ve helped customers implement automation. Built-in automation allows for regular performance testing, fairness testing, and robustness assessments. These can be run automatically, based on thresholds set by your organization's risk tolerance. The results can be consistently reported to the executive level, providing a clear understanding of AI usage, risks, and mitigation efforts.
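Conceptually, threshold-based monitoring can be as simple as comparing each scheduled test run against limits that reflect your risk tolerance and flagging anything out of bounds for the executive report. The thresholds and metric names below are assumptions for illustration only, not prescribed values.

```python
# Sketch of threshold-based monitoring: compare each scheduled test run against
# limits set by the organization's risk tolerance. Thresholds are illustrative.
THRESHOLDS = {
    "accuracy_min": 0.85,          # performance floor
    "disparate_impact_min": 0.80,  # fairness rule of thumb (four-fifths rule)
    "drift_max": 0.10,             # robustness / stability limit
}

def evaluate_run(metrics: dict) -> list:
    """Return the list of threshold breaches from one monitoring run."""
    breaches = []
    if metrics["accuracy"] < THRESHOLDS["accuracy_min"]:
        breaches.append("performance: accuracy below minimum")
    if metrics["disparate_impact"] < THRESHOLDS["disparate_impact_min"]:
        breaches.append("fairness: disparate impact ratio too low")
    if metrics["drift"] > THRESHOLDS["drift_max"]:
        breaches.append("robustness: input drift above tolerance")
    return breaches

print(evaluate_run({"accuracy": 0.88, "disparate_impact": 0.72, "drift": 0.04}))
# -> ['fairness: disparate impact ratio too low']
```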

Summary of MVG Approach
In summary, AI governance can seem overwhelming, especially when dealing with hundreds of pages of regulations. However, by breaking it down into three steps—inventory, light controls, and reporting—you can implement a Minimum Viable Governance approach quickly and effectively.

I’ll now hand it back to Jay to wrap up and answer some of the questions we’ve received.

Q&A Session
We’ve got four main categories of questions to address. The first question is: What’s the best strategy for managing third-party and embedded AI risks, as highlighted by the EU AI Act?

Dave: Great question! The same principles apply to third-party and embedded AI systems as to internally developed AI. You need to capture usage, limitations, approved scenarios, and key assets—especially if you're sharing data with a vendor using customized versions of their AI. You also need to verify and test how the vendor’s AI is used in your specific scenarios, and ensure comprehensive documentation and reporting.

Difference Between AI Governance and Model Risk Management (MRM)
Another question we’ve received is: What’s the difference between AI governance and Model Risk Management (MRM)? Is there a difference between model risk and AI risk?

Dave: Yes, this is a common question. Many of the principles in AI governance are similar to those in MRM, especially in the financial sector. However, AI introduces more variety in technologies and embedded AI models, which aren’t typical in traditional financial models. There’s also a faster pace of change in AI technologies. While many financial institutions have evolved their MRM practices to manage AI, additional governance areas like data and security are not always covered by MRM.

Determining Risk Tier and Carbon Impact Disclosure
Another question is: How do you determine a risk tier for AI models?

Dave: Risk tiering varies by industry. In the financial sector, it’s often based on materiality and exposure. In healthcare, for example, the FDA has proposed risk tiering based on the health situation and the AI’s role in decision-making—whether it’s driving clinical decisions or simply managing clinical operations.
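To illustrate the idea, and only as a rough sketch since real tiering policies are industry- and organization-specific, a simple rule-based tiering function might look like the following. The rules and thresholds are assumptions for illustration, not guidance from the EU AI Act or the FDA.

```python
# Rough, rule-based sketch of risk tiering. Real policies vary by industry;
# these rules and thresholds are assumptions for illustration only.
def assign_risk_tier(use_case: dict) -> str:
    if use_case.get("drives_clinical_decisions") or use_case.get("affects_credit_or_hiring"):
        return "high"      # decisions about people's health, credit, or jobs
    if use_case.get("financial_exposure_usd", 0) > 1_000_000:
        return "high"      # materiality / exposure test
    if use_case.get("customer_facing"):
        return "medium"
    return "low"

print(assign_risk_tier({"affects_credit_or_hiring": True}))  # -> high
print(assign_risk_tier({"customer_facing": True}))           # -> medium
```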

Is there a requirement to disclose the carbon impact of AI solutions under the EU AI Act?

Dave: Yes, for developers and providers of AI (like those building generative AI models), there are clauses related to carbon impact. However, for users or deployers of AI, I’m not certain if this applies. Please consult your legal counsel for specifics.

Role of MRM and Governance in Organizations
For companies without a Chief Data Officer (CDO), does AI governance fall under GRC (Governance, Risk, and Compliance), legal, or audit functions?

Dave: This is evolving. In some cases, companies assign AI governance to a Chief AI Officer, responsible for both innovation and governance. In others, it may fall under the CDO or Chief Risk Officer, with MRM involved. It varies by organization.

Distinguishing AI Models and Non-AI Models
One more question: How do you distinguish between AI and non-AI models?

Dave: This is something the EU AI Act spends a lot of time defining. At a high level, an AI model infers its outputs from its inputs probabilistically, whereas a deterministic system always produces the same result for the same input. We’d be happy to send you the specific legal definition from the EU AI Act.

Closing Remarks
Thanks for all the great questions! If you have more, feel free to reach out. Our next webinar will focus on accountability for AI risks.

Thanks again for joining today. We hope to see you next month!


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us