This post highlights key AI governance insights from 2024 and identifies the trends that will shape 2025, drawing on my and ModelOp's experience over the past year working with large enterprises and multinational corporations that are tackling the rapidly evolving challenges of bringing AI to market, responsibly.
Part 1: Key Insights from 2024 - Pilots and Exploration
1. AI Production Gap: Pilots Take Off, but Production Expectations Fall Short
2024 witnessed a proliferation of AI pilots across enterprises. From generative AI chatbots to predictive analytics tools, many organizations explored the potential of AI with great enthusiasm. However, transitioning these projects from pilot phases into full-scale production environments proved difficult. Common barriers included insufficient infrastructure, unclear ownership, and a lack of governance processes to mitigate risk. While AI projects generated buzz, few delivered measurable ROI, exposing the gap between ambition and execution. Driving home this point, Battery Ventures' 2024 State of Enterprise Tech Spending Survey showed a 42% delta between expected and actual AI usage.
The challenge for executives: Build the governance and operational frameworks to scale AI beyond pilots and prove its business value.
2. Third-Party and Embedded AI: In Use but Unmanaged
Of the AI systems actually in use, a substantial portion came from third-party vendors. These include SaaS-based AI platforms, existing software with embedded generative AI (e.g. AI-driven CRMs and analytics tools), and APIs offering AI capabilities. However, many organizations lacked oversight into how these systems operate, who owns their governance, and how risks are assessed.
The takeaway: Enterprises must govern not just their in-house AI but also third-party AI solutions to avoid vulnerabilities and compliance issues. Read more about the use cases and model types that enterprises implemented in 2024 here.
3. AI Regulatory Landscape is Fragmented: EU, US States & Agencies Lead the Way
In 2024, regulatory developments around AI intensified globally, albeit in a fragmented fashion. The EU’s AI Act entered into force in August, setting a high bar for AI risk management and transparency. In the United States, state-level regulations emerged, while federal agencies issued guidance on AI safety and accountability. This patchwork of regulations created complexity for Fortune 500 executives managing multinational operations.
The key insight: Staying ahead of regulatory changes will require adaptable AI governance frameworks that comply with evolving global standards. Learn more about the EU AI Act, governance themes, and compliance requirements here.
4. Unclear AI Accountability: Governance Ownership Still Up for Grabs
One of the clearest gaps in 2024 was the lack of clarity around AI accountability. Who owns AI governance? While CAIOs and CDAOs often led AI initiatives, oversight responsibilities involving legal, compliance, IT, and operational teams remained unclear. Many organizations struggled to align governance priorities across departments and business units.
The opportunity: Define roles and responsibilities for AI governance, from oversight to implementation, to ensure accountability and cohesion. Technical and business leaders need to work together to prioritize effective and efficient governance. Learn how FINRA approached jump-starting governance here.
5. Understanding DIY Governance: Steep and Hidden Costs
Some organizations attempted to build custom AI governance frameworks internally. While this “DIY” approach may seem cost-effective initially, hidden expenses quickly surfaced: resource drain, technical complexities, legal risks, and incomplete coverage of AI risks. Enterprises discovered that ad hoc solutions were financially unsustainable and technically unmaintainable as AI use cases proliferated.
The lesson: Investing in scalable, purpose-built AI governance strategies can save significant costs and prevent AI-related disruptions down the line. Read more about the steep costs of building a homegrown governance system here.
Part 2: Trends and Goals for 2025 - Achieving AI Value and Accountability
1. AI Portfolio Intelligence and Minimum Viable Governance (MVG): Business Priorities
In 2025, organizations will need to move beyond experimentation to demonstrate AI's tangible value. AI Portfolio Intelligence, a strategy to track, realize, and optimize AI assets like a financial portfolio, will emerge as a best practice, and best-in-class AI governance software will provide the capabilities to deliver it. Alongside this, implementing Minimum Viable Governance (MVG) will allow businesses to balance oversight with innovation, focusing on critical AI use cases first.
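One way to picture the pairing of portfolio thinking with MVG is a simple scoring pass: rank AI initiatives on impact and risk, then scope governance to the top tier first. This is only an illustrative sketch; the initiative names, scores, and the additive heuristic are assumptions, not a prescribed methodology.

```python
# Hypothetical AI initiatives: (name, business_impact 1-5, risk 1-5)
initiatives = [
    ("GenAI support chatbot", 5, 4),
    ("Churn prediction model", 4, 2),
    ("Internal doc summarizer", 2, 3),
]

def priority(impact: int, risk: int) -> int:
    # Illustrative heuristic: either high impact or high risk moves an
    # initiative up the governance queue.
    return impact + risk

# Minimum Viable Governance: govern the highest-priority use cases first
ranked = sorted(initiatives, key=lambda i: priority(i[1], i[2]), reverse=True)
mvg_scope = [name for name, *_ in ranked[:2]]
```

The point of the sketch is the shape of the decision, not the weights: a portfolio view forces explicit trade-offs, and the MVG cut keeps the initial governance scope small enough to execute.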
2. Regulatory Pressure and Public Awareness: Growing Teeth
While there is some uncertainty, and there are cases in which AI legislation didn't move forward (e.g. California's SB-1047), regulators are gaining momentum, and 2025 will see enforcement efforts intensify. The EU AI Act is now in force, and during the 2024 legislative session at least 45 US states introduced AI bills, with 31 states adopting resolutions or enacting legislation. Enterprises can no longer treat governance as optional. Simultaneously, public awareness around AI ethics and risks is increasing. Organizations that fail to govern AI responsibly risk losing customer trust and market credibility.
Proactive governance: Businesses must anticipate and align with evolving regulations, prioritizing ethical AI deployment to build trust and avoid penalties.
3. The Rise of Agentic AI, Trust-Centric Governance, and AI Ownership
2025 will bring the next wave of AI capabilities—including agentic AI systems capable of autonomous decision-making. These advancements introduce new risks, including accountability gaps and unforeseen consequences.
To succeed, organizations will need trust-centric governance models that ensure AI systems are transparent, auditable, and aligned with business objectives. Ownership of AI initiatives will become formalized, with CAIOs and C-level executives playing central roles in governance leadership.
Part 3: Business Implications
What's Your GenAI Exposure? A Cautionary Tale
A recent story illustrates the risks of ungoverned AI. It is a sensitive and unfortunate example, but it underscores the problems a single adverse event can cause. Optum's AI chatbot, used internally by employees (apparently as part of a pilot program) to ask sensitive questions about claims, was inadvertently exposed to the internet. This misconfiguration revealed a serious lapse in governance and exposed the company to reputational damage, security risks, and regulatory scrutiny at an exceptionally challenging time for the company (source).
The message is clear: AI exposure without robust governance can lead to significant consequences. Missteps with AI systems, particularly generative AI, can erode years of trust and cause harm to a brand’s reputation at any time.
Part 4: Conclusion and Takeaways - 2024 to 2025: AI Pilots Need to Show Results in Production
As we look ahead to 2025, the imperative for executives is to bridge the gap between AI exploration and enterprise value realization. Here are three key takeaways:
1. Prepare to Manage AI Like a Portfolio
To show measurable ROI, AI initiatives must be treated as a strategic portfolio. Leveraging AI Portfolio Intelligence allows leaders to prioritize high-impact projects, track performance, and demonstrate value to stakeholders.
2. Don’t Wait for an Adverse AI Event
AI-related incidents, such as misconfigurations or misuse, can cause rapid and lasting damage to a company’s reputation and bottom line. Proactively implementing AI governance frameworks—before something goes wrong—is critical.
3. Focus on Priority Use Cases with Minimum Viable Governance (MVG)
Organizations don’t need every AI governance capability overnight. Adopting an MVG approach allows businesses to focus governance efforts on their most critical AI use cases, ensuring compliance, mitigating risk, and maximizing impact.
Final Thought
2025 will be a defining year for AI governance. Fortune 500 executives who act now to operationalize AI governance, scale pilots into production, and build trust will set their organizations up for long-term success. The stakes are high, but the rewards are transformative: AI done right will drive growth, innovation, and competitive advantage well into the future.