Good Decisions: Episode 8

The Minimum Viable Governance Approach to Complying with the EU AI Act

In this webinar, we share practical, tactical tools and insights for complying quickly and effectively with the EU AI Act's requirements, including the February 2025 deadline on prohibited systems.

Register for the series.

Are you prepared for the EU AI Act's February 2025 deadline on prohibited systems?

Now's the time to act. Learn how to implement a governance strategy that’s effective, efficient, and compliant.

In this webinar, ModelOp’s Dave Trier covers the Minimum Viable Governance approach, a fast-track method to ensure your AI systems meet regulatory requirements without overcomplicating your operations. It’s a 30-minute investment that could save your business from costly compliance missteps.

Download the slide deck.

Transcript

1. Introduction to the Webinar (Jay Combs)

I'm Jay Combs, VP of Marketing here at ModelOp. Welcome to the Good Decisions webinar. This month, we will discuss Minimum Viable Governance (MVG) as an approach to complying with the EU AI Act. We will cover three main topics in today's webinar:

  1. An overview of market trends related to AI regulations, which are evolving rapidly.
  2. A deep dive into the EU AI Act, including a recap of what it entails, its risks, obligations, and deadlines.
  3. Practical steps that can be taken now, which we call Minimum Viable Governance (MVG), to prepare for these regulations, safeguard your business, and accelerate innovation.

2. Legal Disclaimer and Importance of Counsel

Before we begin, I want to note that neither Dave Trier, VP of Product here at ModelOp, nor I are lawyers. When dealing with regulations, it is crucial to consult your legal counsel. We offer guidance on the EU AI Act and how to translate it into AI governance capabilities, but you should work closely with your own legal team throughout.

3. Introduction to the EU AI Act

On a global scale, the EU AI Act, officially called the Artificial Intelligence Act, entered into force on August 1, 2024. We have been discussing this regulation for a while as it went through several phases before being signed into law. Now that it is in force, companies must be aware of the timelines and obligations.

The EU AI Act aims to create trustworthy AI that supports innovation while ensuring ethical standards. Its scope is broad, similar to GDPR, and affects the entire AI value chain, from providers to deployers. The impacts include significant transparency and conformity obligations, with real consequences for noncompliance, including fines. Regulatory authorities are becoming more involved, making it crucial for organizations to understand and meet these requirements.

4. Risk-Based Approach to AI

The most significant takeaway from the EU AI Act is its risk-based approach, which classifies AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk systems are prohibited, while high-risk systems have extensive transparency and conformity requirements. The Act sets different deadlines for compliance, depending on the risk level, with obligations beginning as soon as February 2025.
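To make the tiering concrete, here is a minimal Python sketch. The tier names come from the Act itself, but the example systems and their classifications are illustrative assumptions, not legal determinations:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # extensive transparency and conformity duties
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # largely outside the Act's obligations

# Illustrative examples only -- actual classification requires legal review.
EXAMPLE_CLASSIFICATIONS = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "credit-scoring model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{system}: {tier.value} risk")
```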

Most of the Act's requirements focus on high-risk AI systems, emphasizing risk management across the entire AI lifecycle—starting from defining the use case, through production, and ultimately to decommissioning. There are strict requirements around documentation, transparency, and ongoing review to ensure adherence.

5. Key Capabilities for AI Governance

The EU AI Act and other similar regulations can be summarized into five key capabilities that organizations need to develop for AI governance (a minimal code sketch follows the list):

  1. Inventory - Understanding what AI is being used within the organization.
  2. Controls - Establishing appropriate controls around AI systems.
  3. Testing and Validation - Continuously testing and validating AI systems.
  4. Ongoing Reviews - Maintaining ongoing reviews to assess compliance.
  5. Risk Management - Applying risk tiering and managing risks effectively.
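
To make these five capabilities concrete, here is a hypothetical sketch of a single governance record that touches all five. The field names are assumptions for illustration, not ModelOp's schema:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class GovernanceRecord:
    """One record per AI system, covering the five capabilities."""
    name: str                                                # 1. Inventory: what AI is in use
    owner: str                                               #    and who is accountable for it
    controls: list[str] = field(default_factory=list)        # 2. Controls applied to the system
    validation_results: dict = field(default_factory=dict)   # 3. Testing and validation evidence
    last_review: str | None = None                           # 4. Ongoing reviews
    risk_tier: str = "unclassified"                          # 5. Risk management / tiering

record = GovernanceRecord(name="credit-scoring-model", owner="risk-analytics")
record.risk_tier = "high"
record.controls.append("fundamental-rights-impact-assessment")
print(record)
```

In practice, a governance platform would populate and update such records automatically rather than by hand.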

With these capabilities in mind, organizations need to find the right level of governance to ensure compliance without stifling innovation. Dave will now take us through the concept of Minimum Viable Governance.

6. Introduction to Minimum Viable Governance (Dave Trier)

Thank you so much, and thanks again to everyone joining. I'm Dave Trier, VP of Product. I've had the pleasure of working with some of the largest organizations in the world, especially around AI governance and model governance over the past six-plus years. What we often see is that many organizations understand they need AI governance but are unsure how to get started due to its complexity. This is where the concept of Minimum Viable Governance (MVG) comes in.

Think of MVG as the "Goldilocks" approach to governance: not too much, not too little, but just enough to protect the organization while maintaining AI innovation cycles. MVG involves three core facets (sketched in code after the list):

  1. Establishing a governance inventory to ensure visibility into all AI usage.
  2. Applying lightweight controls to manage verification, evidence, and approvals without overwhelming innovation.
  3. Implementing streamlined reporting to achieve transparency and understand how AI is being used.
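
As a rough sketch of how the three facets fit together in one governance pass, consider the toy function below; the function and field names are hypothetical, not ModelOp's API:

```python
def run_mvg_cycle(systems: list[dict]) -> dict:
    """One governance pass: inventory -> lightweight controls -> reporting."""
    inventory = {s["name"]: s for s in systems}            # facet 1: visibility into all AI
    pending = {name: s for name, s in inventory.items()
               if not s.get("approved")}                   # facet 2: controls gate on approval
    return {"total_systems": len(inventory),               # facet 3: streamlined reporting
            "awaiting_approval": sorted(pending)}

print(run_mvg_cycle([
    {"name": "support-chatbot", "approved": True},
    {"name": "credit-scoring-model", "approved": False},
]))
```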

In the next few sections, we will explore how MVG can help address the challenges related to regulatory requirements and best practices in AI governance.

7. Establishing a Governance Inventory

To start with MVG, it is essential to establish a governance inventory. The EU AI Act emphasizes understanding where AI is being used and conducting risk tiering based on its risk categories: unacceptable, high, limited, or minimal.

Many organizations face challenges in this area because they often use manual methods—such as spreadsheets—to track AI systems, which can result in inconsistent and outdated information. Proper accountability and asset management are crucial, particularly for high-risk AI systems that require registration with a central EU database. Article 71 of the Act outlines these obligations, which are much more demanding than simply keeping a list of AI systems.

MVG helps organizations move beyond manual tracking by creating a dynamic, real-time inventory that ensures visibility across all technologies using AI. It also simplifies the process of bulk importing existing AI records into governance tools, making it easier to track accountability and manage assets effectively.
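Here is a hypothetical sketch of that bulk-import step; the CSV columns and the registration flag are illustrative assumptions, not ModelOp's actual import format:

```python
import csv
from io import StringIO

# A spreadsheet export of AI systems, as many organizations keep today.
# Columns are assumed for illustration.
LEGACY_SPREADSHEET = """name,owner,risk_tier
credit-scoring-model,risk-analytics,high
support-chatbot,customer-ops,limited
demand-forecaster,supply-chain,minimal
"""

def bulk_import(csv_text: str) -> list[dict]:
    """Load legacy spreadsheet rows into a structured inventory,
    flagging high-risk systems that need registration with the
    central EU database (see Article 71)."""
    inventory = []
    for row in csv.DictReader(StringIO(csv_text)):
        row["needs_eu_registration"] = row["risk_tier"] == "high"
        inventory.append(row)
    return inventory

for entry in bulk_import(LEGACY_SPREADSHEET):
    print(entry)
```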

8. Implementing Lightweight Controls

Once an inventory is established, the next step in MVG is to implement a lightweight set of controls. The EU AI Act requires specific actions depending on the level of risk associated with an AI system. For example, high-risk systems must conduct a fundamental rights impact assessment, as required under Article 27.

In many organizations, compliance often involves either blanket prohibitions or cumbersome manual processes. Such approaches can lead to unnecessary delays, making it difficult for teams to innovate quickly. Additionally, these manual processes may require significant resources, creating "AI governance debt"—a backlog of governance activities that eventually need to be addressed at great cost.

MVG addresses these challenges by using automated workflows to enforce governance policies consistently and efficiently. It allows organizations to apply just the right level of controls, ensuring compliance without hindering AI projects. By automating risk tiering and enforcing required actions based on risk levels, MVG enables organizations to maintain a balance between regulatory compliance and innovation.
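A minimal sketch of what automated enforcement could look like follows. The control names echo obligations discussed above (such as the Article 27 fundamental rights impact assessment), but the mapping itself is an illustrative assumption, not legal guidance:

```python
# Required actions per risk tier -- an illustrative mapping, not legal advice.
REQUIRED_CONTROLS = {
    "unacceptable": ["block-deployment"],        # prohibited outright
    "high": [
        "fundamental-rights-impact-assessment",  # Article 27
        "technical-documentation",
        "human-oversight-review",
    ],
    "limited": ["transparency-notice"],
    "minimal": [],                               # no mandated controls
}

def enforce_controls(system: str, risk_tier: str, completed: set[str]) -> list[str]:
    """Return the outstanding controls that block this system's release."""
    return [c for c in REQUIRED_CONTROLS[risk_tier] if c not in completed]

outstanding = enforce_controls("credit-scoring-model", "high",
                               completed={"technical-documentation"})
print("Blocked on:", outstanding)
```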

9. Introducing the MVG Approach

The MVG approach to controls is not about creating an overbearing governance structure with numerous steps. Instead, it aims to implement the "Goldilocks" level of governance: just enough to be effective. This starts with automated and consistent risk tiering, ensuring that AI systems are categorized according to their risk levels, whether under the EU AI Act's unacceptable, high, limited, or minimal risk framework.
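
As a toy illustration of automated risk tiering, the decision rules below are assumptions made for the sketch; real tiering must follow the Act's definitions and your counsel's guidance:

```python
def assign_risk_tier(use_case: dict) -> str:
    """Toy decision rules mapping use-case attributes to an EU AI Act tier.
    Illustrative only -- not the Act's legal test."""
    if use_case.get("social_scoring") or use_case.get("manipulative"):
        return "unacceptable"
    if use_case.get("affects_rights") or use_case.get("safety_component"):
        return "high"
    if use_case.get("interacts_with_people"):
        return "limited"
    return "minimal"

print(assign_risk_tier({"affects_rights": True}))         # high
print(assign_risk_tier({"interacts_with_people": True}))  # limited
```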

Automation plays a significant role in the MVG approach, streamlining workflows and ensuring that governance policies are applied uniformly. For example, based on risk tiering, certain reviews or additional steps may be required, such as fitness for purpose assessments or ethical fairness reviews. Automation helps organizations map controls to policies effectively, allowing for consistency without causing unnecessary delays.

The MVG approach ensures that organizations establish an inventory, apply lightweight controls, and achieve effective reporting without compromising innovation. This balance allows companies to comply with regulations while maintaining their competitive edge in AI deployment.

10. The Importance of Reporting in AI Governance

Reporting is the third core facet of MVG, providing transparency into how AI systems are being used. A quote from Julie Sweet, CEO of Accenture, highlights the need for visibility—less than two percent of CEOs can identify where and how AI is being used in their organizations. This lack of visibility presents a significant risk, especially as AI investments continue to grow.

The EU AI Act also outlines specific reporting requirements for high-risk AI systems, including proper record-keeping, usage updates, and performance tracking. However, many organizations use inconsistent, manual processes to track these metrics, which makes it challenging to comply with regulatory requirements or demonstrate consistent governance practices.

With the MVG approach, organizations can implement a consistent set of metrics and automated reporting processes to track AI usage, performance, and compliance. This includes generating model cards, detailed reports, and even submitting required information to regulatory authorities. By automating reporting, organizations can provide executives with real-time insights into AI value and risk, ensuring transparency and effective management.
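As one example of what that automation might look like, a model card could be assembled directly from inventory and monitoring data. The fields and format below are illustrative assumptions, not a prescribed regulatory template:

```python
import json
from datetime import date

def generate_model_card(record: dict) -> str:
    """Assemble a simple model card from inventory and monitoring data.
    Fields are illustrative; real reporting should follow the Act's
    record-keeping requirements for high-risk systems."""
    card = {
        "name": record["name"],
        "owner": record["owner"],
        "risk_tier": record["risk_tier"],
        "intended_use": record.get("intended_use", "unspecified"),
        "performance": record.get("metrics", {}),
        "generated_on": date.today().isoformat(),
    }
    return json.dumps(card, indent=2)

print(generate_model_card({
    "name": "credit-scoring-model",
    "owner": "risk-analytics",
    "risk_tier": "high",
    "metrics": {"auc": 0.91, "drift_score": 0.03},
}))
```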

11. Consequences of Noncompliance in AI Systems (Jay Combs)

Sticking with the status quo of manual processes and inconsistent governance can have severe consequences. The EU AI Act includes penalties for noncompliance, which will become increasingly significant as different provisions come into effect over the coming months and years. Beyond regulatory fines, ineffective governance can lead to serious operational and reputational risks.

A recent example is TD Bank, which faced a record $3 billion penalty for failing to maintain effective anti-money laundering systems. Decision-making models, including AI, present significant risks if not properly governed. The costs of noncompliance or maintaining ineffective systems can be enormous, both financially and reputationally.

AI governance is now considered table stakes for any organization leveraging AI. Relying on manual processes or repurposing existing systems for AI governance may work temporarily, but it becomes increasingly complex and costly as AI use grows. MVG helps organizations establish a solid foundation for AI governance, minimizing the risks of noncompliance while supporting innovation.

12. The Need for Minimum Viable Governance

The key question for many organizations is why they can't simply maintain the status quo. The reality is that the high cost of noncompliance and the challenges of scaling manual systems make this unsustainable. As organizations expand their AI use cases, manual processes become burdensome, requiring substantial resources and creating bottlenecks.

The MVG approach is designed to provide just the right level of governance for AI models while minimizing friction. Instead of trying to adapt existing systems for AI governance, MVG ensures that organizations have the right controls, automation, and visibility in place to comply with regulatory requirements and scale AI initiatives effectively.

Ultimately, the goal of effective AI governance is not to act as a brake on innovation, but to work in harmony with the business, enabling competitive advantage. MVG provides a streamlined approach to AI governance that accelerates innovation while ensuring compliance, making it a critical tool for organizations navigating the complexities of AI deployment.

13. Trends in AI Accountability and Ownership (Jay & Dave)

Jay Combs: Thanks, Dave. Now, before we wrap up, let's address an important question that came up in the chat: Who owns AI in an organization? Where do you see accountability and ownership of AI evolving within enterprises?

Dave Trier: Great question, Jay. Right now, there isn't a one-size-fits-all answer. In some companies, we've seen the rise of a Chief AI Officer—a role dedicated specifically to overseeing all AI initiatives.

Jay Combs: Right, and in other cases, it seems like the Chief Data Officer or the Chief Data and Analytics Officer takes on that responsibility.

Dave Trier: Exactly. Sometimes, AI even rolls up to a Chief Transformation or Chief Digital Transformation Officer. It really varies depending on the organization. But one thing is consistent: accountability needs to be assigned to someone who understands both the strategic opportunities and the risks of AI.

Jay Combs: Absolutely. And regardless of the title, that person must ensure that AI is aligned with the business's objectives and complies with regulations like the EU AI Act.

Dave Trier: Couldn't agree more. Whether it's a Chief AI Officer, CDO, or another executive, the key is having someone at the top who is accountable for AI outcomes and governance. Without clear ownership, AI initiatives can become fragmented, increasing the risks we talked about earlier.

Jay Combs: Thanks for clarifying that, Dave. It's clear that accountability and ownership are crucial for effective AI governance. Any final thoughts for our audience?

Dave Trier: Just that AI governance is a journey. It may seem overwhelming, but starting with Minimum Viable Governance can help organizations build a strong foundation without getting bogged down. It's all about balancing innovation and compliance in a practical, scalable way.

Jay Combs: Well said, Dave. And with that, I want to thank everyone for attending today’s session. We hope you found it informative, and we encourage you to reach out if you have any more questions or need further assistance with AI governance. Have a great rest of your day!


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us