Good Decisions: A Monthly Webinar for Enterprise AI Governance Insights

AI Governance Unwrapped: Insights from 2024 and Goals for 2025

With AI adoption accelerating across industries, the challenges of responsible AI have never been more pressing. In this webinar, we unpack the key trends from 2024 and look ahead to 2025. Learn actionable strategies to manage AI portfolios, comply with evolving regulations, and scale your AI initiatives with confidence.

Register for the series.

2024 has been a year of rapid growth, rising expectations, and new challenges for AI Governance. As AI adoption accelerates, organizations are still facing critical hurdles:

  • The AI production gap: Pilots take off, but production results fall short.
  • Third-party and embedded AI: Widely in use, but still unmanaged and ungoverned.
  • A fragmented regulatory landscape: EU, U.S. states, and agencies are setting the pace.
  • Unclear accountability: Who truly owns AI Governance in your organization?
  • The steep, hidden costs of DIY governance.

What’s next in 2025?
In this webinar, we discuss the trends and strategies that will shape AI Governance in the year ahead:

  • AI Portfolio Management and Minimum Viable Governance (MVG): Practical approaches to balance innovation and compliance.
  • Regulatory pressure: Regulations gaining real teeth as public awareness increases.
  • The rise of agentic AI and trust-centric governance—turning hype into action.

Download the slide deck.

Transcript

1. Overview of AI Governance Insights (Jay Combs)

Welcome to the Good Decisions webinar! Today, we’ll provide a year-end recap of key insights into AI governance, drawn from our conversations with customers and market analysts. We’ll also share trends and goals for 2025 to help you prepare for the coming year.

Before diving into the agenda, if you’re enjoying these insights, please consider following ModelOp on LinkedIn. It’s a great way to stay informed and connected throughout the year.

Here’s what we’ll cover:

  1. Key insights from 2024, including notable trends and lessons learned.
  2. Trends and goals for 2025, focusing on where the market is heading and enterprise priorities.
  3. Implications and key takeaways for AI governance moving forward.

Let’s get started!

2. Reflections on 2024: The Year of Generative AI (Dave Trier)

Thank you for joining us as we reflect on 2024, a year marked by the rise of generative AI pilots and exploration. This was a pivotal year for AI governance, as organizations experimented with new technologies and confronted emerging challenges.

Generative AI saw significant investment in 2024. For example, Accenture reported billions in AI-related revenue, much of it driven by generative AI pilots and proofs of concept. However, widespread adoption of these technologies remains in its early stages, as many organizations are still exploring their potential.

Despite high expectations at the start of the year, actual usage fell short, with a reported 42% gap between anticipated and realized adoption. A significant portion of deployed AI systems relied on vendor solutions rather than in-house developments, highlighting the industry's dependence on external expertise.

3. Regulatory Landscape in 2024

In 2024, regulatory developments played a central role in shaping the AI landscape. While the EU AI Act made waves as one of the most comprehensive AI-related legislative efforts, much of the momentum in the United States came from state-level initiatives. Colorado, for instance, introduced impactful regulations for the insurance sector.

At a vertical level, industries like banking and insurance led the way with tailored regulations. Banking, with its longstanding governance practices, provided a model for others. Meanwhile, newer initiatives, such as Canada’s OSFI E-23 AI extensions, reflected the growing need for industry-specific guidelines.

These developments underscore the importance of staying ahead of regulatory changes, which vary widely by region and sector. We'll provide more detailed resources for those interested in delving deeper.

4. Key Themes in AI Regulations

AI regulations introduced in 2024 varied widely, but several consistent themes emerged across regions and industries. These key areas provide a framework for organizations to align their governance practices:

  1. Visibility and Inventory: Organizations must identify and document where AI is being used, its intended purpose, and the data and systems it impacts. Ethical considerations should also be accounted for at this stage.
  2. Controls: Implementing the right controls is essential to mitigate risks associated with AI usage, such as access restrictions, process changes, and ethical safeguards.
  3. Comprehensive Testing and Validation: Regular testing and independent reviews ensure that AI systems are robust, fair, and aligned with expected outcomes.
  4. Ongoing Monitoring: Continuous evaluation of AI systems ensures they remain effective and within risk parameters, adapting to deviations as needed.
  5. Risk Management: An overarching risk management strategy is vital for addressing the full spectrum of risks associated with AI technologies.

These themes highlight the importance of proactive governance to ensure AI is used responsibly and effectively.

5. Challenges in AI Governance

While 2024 marked significant progress in AI governance, organizations faced several challenges:

  1. Accountability: There was no clear consensus on who should oversee AI governance within organizations. Depending on the company, this responsibility was assigned to Chief Data Officers, Chief Analytics Officers, or even newly created roles like Chief AI Officers. This inconsistency made it difficult to establish and enforce governance practices.
  2. DIY Governance: Many organizations opted for in-house solutions, attempting to "duct tape" existing systems together. While initially cost-effective, this approach often proved inefficient due to high maintenance costs, a lack of expertise, and reliance on manual processes. These issues slowed innovation and made it harder to scale AI initiatives.
  3. Manual Processes: Manual workflows and checkpoints frequently hindered the deployment of AI systems, resulting in slower innovation cycles and missed opportunities.

Despite these challenges, there was a notable rise in dedicated AI governance roles, signaling progress toward greater accountability and specialization. Moving forward, organizations must balance innovation with robust governance frameworks to drive sustainable success.

6. Looking Ahead: 2025 Projections

As we turn our attention to 2025, the focus shifts from experimentation to delivering measurable AI value. While 2024 was defined by pilots and exploratory efforts, 2025 will demand tangible outcomes from AI investments.

Business leaders are setting high expectations, with 78% anticipating a return on investment (ROI) from generative AI within the next 1–3 years. However, this urgency highlights the need for proper frameworks to avoid "AI governance debt"—the accumulation of inefficiencies and risks from ad-hoc or insufficient governance practices.

Organizations must invest in scalable AI governance capabilities to ensure value realization without compromising innovation or incurring long-term costs.

7. The Importance of AI Governance for Value Realization

AI governance is emerging as a critical enabler for achieving ROI on AI investments. Analysts emphasize the importance of "AI portfolio intelligence"—tracking and evaluating the value and impact of AI systems across an organization.

Visibility into AI systems allows organizations to prioritize high-impact initiatives and optimize resource allocation. Studies show that executives recognize responsible AI practices as key differentiators, with 46% citing them as essential to organizational success.

Moreover, the AI governance software market is projected to grow at a compound annual growth rate (CAGR) of 30% from 2024 to 2030, underscoring the growing importance of dedicated tools to support responsible AI practices.
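To put that projection in perspective, a quick back-of-the-envelope calculation shows what a 30% compound annual growth rate implies over the 2024–2030 window (the rate and years are from the projection above; the resulting multiple is simple compounding, not a market forecast):

```python
# Growth multiple implied by a 30% CAGR over the 2024-2030 period.
cagr = 0.30
years = 2030 - 2024  # 6 compounding years

growth_multiple = (1 + cagr) ** years
print(f"Implied market growth: {growth_multiple:.2f}x")  # roughly 4.83x
```

In other words, a market compounding at 30% annually nearly quintuples in six years, which is why analysts treat dedicated governance tooling as a fast-emerging category.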

8. Building Trust in AI Systems

Trust is fundamental to the successful adoption and utilization of AI. Research shows that trusted companies outperform their peers by over 400%, demonstrating the tangible business benefits of responsible and ethical AI use.

Establishing trust requires robust governance procedures, ensuring that AI systems operate transparently and ethically. When customers trust that AI is being used responsibly, it enhances brand reputation and drives long-term business success.

By prioritizing governance, organizations can build this trust, creating a competitive advantage while mitigating potential risks.

9. Getting Started with AI Governance

If the concept of AI governance feels overwhelming, you’re not alone. To help organizations get started, we’ve developed a Minimum Viable Governance (MVG) approach—a practical framework that introduces just the right amount of governance at the right time.

This approach focuses on three foundational elements:

  1. Visibility and Inventory: Establish a systematic way to track all AI use cases across the organization. Avoid manual spreadsheets and emails by using dynamic, automated processes to maintain an up-to-date inventory.
  2. Light Controls: Implement a manageable set of controls to safeguard sensitive data and mitigate high-risk scenarios without stifling innovation.
  3. Streamlined Reporting: Develop clear, concise reporting to provide stakeholders with insights into AI activities, successes, and the value being generated.

The MVG approach ensures organizations can build a strong governance foundation while maintaining flexibility to adapt as they grow.
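As a minimal sketch of what the first two MVG elements could look like in practice, the snippet below models an automated use-case inventory with a light controls check. All field names, risk tiers, and thresholds here are hypothetical illustrations, not part of the MVG framework itself or any specific product:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (hypothetical schema)."""
    name: str
    purpose: str
    owner: str
    risk_tier: str                 # e.g. "low", "medium", "high"
    uses_sensitive_data: bool
    vendor: Optional[str] = None   # set for third-party or embedded AI

def light_controls_check(uc: AIUseCase) -> List[str]:
    """Flag conditions that should trigger review before deployment."""
    flags = []
    if uc.risk_tier == "high":
        flags.append("high-risk: requires independent validation")
    if uc.uses_sensitive_data:
        flags.append("sensitive data: requires access restrictions")
    if uc.vendor:
        flags.append("third-party AI: confirm vendor governance terms")
    return flags

# A dynamic inventory replaces manual spreadsheets and email threads.
inventory = [
    AIUseCase("claims-triage", "route insurance claims", "ops",
              risk_tier="high", uses_sensitive_data=True, vendor="Acme AI"),
    AIUseCase("faq-bot", "answer customer FAQs", "support",
              risk_tier="low", uses_sensitive_data=False),
]

for uc in inventory:
    print(uc.name, "->", light_controls_check(uc) or ["no flags"])
```

Even this toy version captures the MVG idea: every use case is documented with its purpose, owner, and risk posture, and a small, consistent set of checks runs automatically rather than living in someone's inbox.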

10. Navigating Regulatory Challenges

With the EU AI Act set to enforce prohibitions on certain AI systems in February 2025, organizations must prepare now to ensure compliance. High-risk systems will face further enforcement starting in August 2026, making regulatory readiness a top priority.

Key steps for navigating these challenges include:

  • Understanding the specific requirements of applicable regulations.
  • Implementing processes to identify and mitigate risks associated with prohibited or high-risk AI systems.
  • Ensuring that governance frameworks are scalable to accommodate future regulatory changes.

Regulatory compliance is non-negotiable, but it can also serve as an opportunity to strengthen governance practices and build organizational resilience.

11. Addressing Brand and Reputational Risks

While regulatory compliance is critical, the most significant driver for many organizations is the need to protect their brand and reputation. A misstep in AI governance can result in public scrutiny, loss of trust, and lasting damage to a company’s reputation.

To mitigate these risks:

  • Implement change management processes to ensure responsible AI use.
  • Prioritize transparency and accountability in AI systems to build customer trust.
  • Address potential reputational risks proactively to avoid negative media attention or public backlash.

As research shows, trusted companies significantly outperform their peers, underscoring the value of responsible AI practices not only for risk management but also for long-term success.

12. Preparing for 2025: Showcasing AI Value (Jay Combs)

As we move into 2025, the pressure is on to demonstrate the tangible value of AI investments. Organizations must showcase ROI from generative AI use cases and decision-making models to justify costs and sustain innovation efforts.

A portfolio approach to managing AI systems can simplify this process. By treating AI as part of a strategic portfolio, organizations can:

  • Highlight high-value initiatives to stakeholders.
  • Provide transparency into AI performance and contributions to business goals.
  • Align AI activities with broader enterprise objectives.

Preparation is key—don’t wait for an adverse event or delayed results to begin implementing AI governance. Acting now positions organizations to manage their AI effectively and demonstrate its value across the enterprise.

13. Taking Action on AI Governance Today

Governance has traditionally been viewed as a heavy, burdensome process, but it doesn’t have to be. Adopting a Minimum Viable Governance (MVG) approach allows organizations to start small, implementing scalable governance measures tailored to their current needs and maturity.

This method supports innovation while ensuring that essential safeguards are in place to protect businesses and their reputations. Governance is not just a compliance requirement—it’s an enabler of trust, value, and long-term success.

As you plan for 2025, take these steps:

  1. Begin establishing an AI governance framework today.
  2. Scale governance efforts gradually as your organization grows.
  3. Prioritize transparency, accountability, and alignment with business objectives.

Starting small can lead to big rewards, positioning your organization to thrive in the evolving AI landscape.


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us