Good Decisions: Episode 4

Who's Accountable for AI and its Risks?

Join Dave Trier, ModelOp’s Vice President of Product, for a discussion of the complexities surrounding artificial intelligence and the imperative for clear accountability in today's fast-evolving technological landscape.

Register for the series.

In this webinar, Dave Trier discusses the critical question of AI ownership within enterprises. As AI continues to revolutionize industries and redefine how businesses operate, it has become essential for CEOs and organizational leaders to establish clear accountability structures for AI systems. From mitigating risks to maximizing opportunities, the stakes are high when it comes to navigating the ethical, legal, and operational challenges posed by AI.

Whether you are a seasoned executive looking to enhance your AI governance strategy or a curious newcomer eager to learn more about the complexities of AI ownership, this virtual event is designed to inform and empower you on your journey towards responsible AI leadership.

Download the slide deck.

For a deeper dive on the topic, read the article: Who’s Accountable in the Enterprise for AI and its Risks?

Transcript

Hello, I’m Jay Combs. Welcome to Episode 4 of our Good Decisions monthly webinar series on enterprise AI Governance. Joining us today are Dave Trier, ModelOp’s VP of Product, and Pete Foley, the CEO and co-founder of ModelOp. We have three key areas to cover:

  1. The AI Accountability Problem – Why this is a pressing issue right now.
  2. The Rise of the Chief AI Officer – A fireside chat between Pete and Dave, with Pete sharing insights from over 25 years as a tech executive.
  3. A Deep Dive into Accountability – Including an important AI Governance timeline from the U.S. Office of Management and Budget (OMB) and why December 1, 2024, is a critical date for your calendar.

With that, let’s get started. Dave, over to you.

Dave Trier: In previous webinars, we’ve talked about the transformative potential of AI and how it has risen to the forefront of discussions at the executive level, within boardrooms, and even on the street, particularly at Fortune 500 and Global 2000 companies. These organizations are discussing AI as a key strategic initiative to drive transformation.

From my experience, whenever a company undertakes a significant transformative initiative—whether it’s digital transformation or a move to the cloud—it assigns a senior officer to drive and oversee that initiative. But as you’ll see on this slide, fewer than 2% of CEOs can identify where AI is being used within their organizations or understand the associated risks. That statistic comes from Accenture’s CEO and Chair, Julie Sweet.

Similarly, in my conversations with customers and prospects, I often ask, “Who is accountable for AI, especially if something goes wrong?” I’ve received responses like the one from a global CISO who said, “I’m responsible for security breaches, but I don’t know who is responsible for AI.” This is baffling. AI is being discussed with the same weight as cloud or digital transformation—so how is there no senior officer accountable for this crucial transformation?

Impact of Regulations on AI Governance
Let’s move to the next slide. This lack of accountability is going to be a big problem, as regulations will soon force enterprises to address this question head-on. The AI Index 2024 Annual Report shows a 56% increase in AI-related regulations in the U.S. alone over the past year.

As we’ve discussed in previous webinars, the EU AI Act, which was recently ratified, introduces classifications for high-risk and unacceptable-risk AI systems. Many enterprises are using AI for tasks like recruiting or processing applications, which often fall under these high-risk categories according to the EU AI Act. These regulations will soon compel enterprises to assess how they’re using AI, what risks are involved, and, most importantly, who is accountable.
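To make that triage concrete, here is a minimal Python sketch of a first-pass mapping from use cases to EU AI Act risk tiers. The tier assignments below are illustrative assumptions, not legal guidance; real classification requires reviewing the Act’s Annex III and related definitions.

```python
# Illustrative first-pass triage of AI use cases against EU AI Act risk
# tiers. The mappings are assumptions for demonstration, not legal advice.
EU_AI_ACT_TIER = {
    "social scoring": "unacceptable",       # prohibited outright under the Act
    "recruiting / cv screening": "high",    # employment uses fall under Annex III
    "credit scoring": "high",               # access to essential services
    "customer-support chatbot": "limited",  # transparency obligations apply
    "spam filtering": "minimal",
}

def triage(use_case: str) -> str:
    """Return the assumed risk tier, flagging anything unmapped for review."""
    return EU_AI_ACT_TIER.get(use_case.lower(), "needs review")

print(triage("Recruiting / CV screening"))  # -> high
```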

US Federal Requirements for AI Accountability
In the U.S., the government is moving ahead as well. Following President Biden’s executive order, the U.S. Office of Management and Budget (OMB) published a memorandum (M-24-10) in March of this year, which mandates that all federal agencies assign accountability for AI in the form of a Chief AI Officer. This is a major step forward, ensuring accountability at the federal level.

There are three main requirements:

  1. Accountability – Every federal agency must designate a Chief AI Officer.
  2. Governance Plan – Each agency must have an AI Governance plan.
  3. Safeguards – Agencies must implement proper safeguards by December 1, 2024.

This is a huge step in ensuring AI is used responsibly and that someone is accountable for its use within federal agencies.

Timeline for Federal Mandate Compliance
Let’s look at the timeline in more detail. This isn’t just a “nice to have.” There are specific deadlines that federal agencies must meet:

  • Within 60 days of the memo’s issuance – Agencies must designate a Chief AI Officer.
  • Within 180 days – A plan for AI Governance and risk management compliance must be in place.
  • By December 1, 2024 – AI safeguards must be fully implemented.

These regulations will push agencies—and eventually private enterprises—to establish proper accountability and governance frameworks.
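As a rough illustration, the first two milestones can be computed directly from the memo’s issuance date. The sketch below assumes M-24-10’s March 28, 2024 publication date as the starting point; the December 1, 2024 safeguards deadline is fixed in the memo itself.

```python
from datetime import date, timedelta

# Compute the M-24-10 compliance milestones, assuming the memo's
# March 28, 2024 issuance date as the starting point.
ISSUED = date(2024, 3, 28)

milestones = {
    "Designate a Chief AI Officer": ISSUED + timedelta(days=60),
    "Publish an AI Governance and risk management plan": ISSUED + timedelta(days=180),
    "Implement required AI safeguards": date(2024, 12, 1),  # fixed deadline
}

for requirement, deadline in sorted(milestones.items(), key=lambda kv: kv[1]):
    status = "PAST DUE" if date.today() > deadline else "upcoming"
    print(f"{deadline:%Y-%m-%d}  {requirement} ({status})")
```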

Private Sector Response to AI Accountability
Now let’s switch gears to the private sector. Regardless of the federal mandate, many large enterprises, including Fortune 500 and Global 2000 companies, are already recognizing the need for accountability. We’re seeing the rise of the Chief AI Officer (CAIO), who is responsible for overseeing AI initiatives and risks within these organizations.

For example, the Financial Times recently published an article discussing how companies are defining this AI leadership role. Recruitment for Chief AI Officers and other AI executive positions has tripled in the past five years, signaling a clear trend toward AI accountability in the private sector.

Job Description for Chief AI Officer
Here’s a sample job description for a Chief AI Officer. We won’t go through all the details, but as you can see, the CAIO is responsible for all aspects of AI within an organization—from technology development and usage to alignment with strategy, policy, governance, and oversight. Essentially, they are the key person accountable for ensuring that AI is being used responsibly and safely within the organization.

Discussion on the Importance of AI Ownership
We’ve now laid the groundwork by looking at the regulations and trends at the U.S. federal level, and what’s happening in the private sector. It’s clear that identifying who owns AI and who is accountable for its usage from a risk and governance perspective is critical.

I’ll now turn it over to Pete Foley, CEO of ModelOp, to share some of his insights. Pete, you’ve been in the technology space for 25 years and have witnessed the rise of trends like data, cloud, and the Internet. Let’s spend a few minutes discussing how you see AI transforming business today, and what best practices and lessons you’ve learned over the years. Thanks so much for joining us, Pete.

Insights from Pete Foley on AI Transformation
Pete Foley: Thank you, Dave, and thanks for highlighting my experience—though maybe not so much the exact number of years! It’s fascinating to see how roles evolve in response to technological advancements. In the early days, enterprise security was focused on physical security, but the Internet and e-commerce created new challenges, leading to the rise of the Chief Security Officer and eventually the Chief Information Security Officer (CISO). Someone had to get ahead of the new threats.

Similarly, we saw the emergence of Chief Data Officers (CDOs) as data privacy concerns grew, particularly with the introduction of GDPR and other data privacy laws. The explosion of data also created strategic benefits, which further drove the need for dedicated data leadership.

Now, we’re seeing a similar evolution with the Chief AI Officer (CAIO). AI is too transformative, too strategic, and too important for organizations not to have someone owning it. Existing roles, like the CIO or CDO, are often overwhelmed with their own responsibilities. The velocity of AI adoption—especially with generative AI—requires someone focused solely on this technology. AI models are unique assets, and the risks they pose are just as significant as the opportunities they present.

Enterprises' Approach to AI Ownership
Dave Trier: Thanks, Pete. You’ve talked to many executives across different industries. How are you seeing organizations approach AI ownership? Is it more common to have committees, or is there a trend toward assigning a single person to be accountable?

Pete Foley: What I’m seeing is that it often starts with a committee, but that’s evolving. AI is a board-level discussion. CEOs are trying to figure out how to answer tough questions from their boards: How are we taking advantage of AI? and How are we managing the risks?

The first instinct for many enterprises is to form a committee. But what we’re seeing among more innovative companies, particularly Fortune 500 and Global 2000 firms, is the appointment of senior-level executives to lead AI strategy. Public announcements about AI leadership roles are becoming more common, and there’s a clear correlation between these companies and their drive to stay ahead with AI adoption.

Committees are still important because AI requires collaboration across multiple departments to succeed. But for AI to move forward, someone has to own the strategy. Much like the CDO and CISO work closely with the CIO, a CAIO needs to work hand-in-hand with the CIO and CTO to execute a comprehensive AI strategy.

Impact of Committees on AI Innovation
Dave Trier: That makes sense. One thing we’ve noticed is that committees can slow down innovation due to the need for consensus. Have you seen this in your conversations with executives? What’s their feedback on AI Governance committees?

Pete Foley: Absolutely. Committees are necessary, but they can stall progress. It’s like having five players on the basketball court, and everyone wants to take the last shot. While input from various stakeholders is important, at the end of the day, someone has to be the decision-maker for AI strategy.

We’re seeing that the most successful companies appoint a senior executive to own the strategy while still gathering input from a committee. This person ensures that the AI initiatives are moving forward at the right pace, while balancing innovation with the necessary guardrails. Governance often lags behind, but it’s critical to get it right.

Interestingly, the U.S. federal government is ahead of many private enterprises when it comes to AI Governance. The mandate for AI chiefs at federal agencies is a clear sign that accountability for AI is no longer optional.

Demand for AI and Governance
Dave Trier: I often draw a comparison to the early days of cloud adoption. When the demand for cloud services grew, businesses started swiping their credit cards to get what they needed without waiting for approval from IT. We’re seeing a similar situation with AI, where businesses are so eager to leverage generative AI and large language models (LLMs) that they might act independently.

Pete, what’s your perspective on how organizations can avoid this “credit card swiping” scenario with AI?

Pete Foley: Right now, many enterprises are blocking the use of generative AI because they don’t want to end up in the headlines for using AI improperly. Brand protection is driving that hesitation. But simply blocking AI isn’t a long-term solution. The better approach is to develop a strategy that balances innovation with governance.

Without clear direction, generative AI can spread throughout an organization unchecked, just like cloud services did in the past. Someone needs to step forward, create a strategy, and work with the necessary stakeholders to safely implement AI models. Unlike swiping a credit card for a cloud service, AI models are high-risk, high-reward assets—and they need to be treated that way.

Should You Wait for AI Regulations?
Dave Trier: Absolutely. One last question we hear often is: Should we wait for AI regulations to be fully developed before moving forward? Some executives argue that regulations, like the EU AI Act, will take years to come into effect. How do you respond to that?

Pete Foley: It’s simple: You can’t wait. S&P 500 companies are already highlighting AI in their earnings calls because AI is becoming a competitive advantage. If you’re waiting for regulations to be finalized, you’re going to be left behind. You need governance in place now, not just to protect your brand but to ensure you’re leveraging AI to its fullest potential.

Importance of AI Governance Accountability
Dave Trier: Thanks, Pete. That’s great advice. To summarize, here are the key takeaways:

  1. Identify Accountability – As we’ve emphasized throughout, you need to identify who will be accountable for AI Governance. Whether that’s a Chief AI Officer or another role, this person needs to answer to the CEO and the board about AI usage and protection.
  2. Start Now – Don’t wait for regulations to catch up. Business units are already pushing ahead with AI. If you don’t have governance in place, you’ll end up with “AI Governance debt,” having to catch up later. Implement a Minimum Viable Governance (MVG) framework to build a strong foundation now (a minimal sketch of one such starting point follows this list).
  3. Build Out Gradually – Work with your organization to build out your AI Governance structure. You don’t need years of analysis—AI is moving quickly, and you need to act now.
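As a concrete starting point for that foundation, here is a minimal sketch of what a first governance control might look like: an inventory of models, each with a named accountable owner. The field names and risk tiers are illustrative assumptions, not ModelOp’s product schema.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a hypothetical minimum-viable-governance inventory."""
    name: str
    accountable_owner: str  # the named person answerable for this model
    business_unit: str
    risk_tier: str          # e.g. "high", "limited", "minimal"
    in_production: bool

inventory = [
    ModelRecord("resume-screener", "", "HR", "high", True),
    ModelRecord("support-chatbot", "J. Smith", "Customer Ops", "limited", True),
]

# The most basic accountability control: no production model without an owner.
unowned = [m.name for m in inventory if m.in_production and not m.accountable_owner]
if unowned:
    print(f"Governance gap: no accountable owner for {unowned}")
```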

With that, I’ll turn it back to Jay for next steps.

Q&A
Jay Combs: We have a question from the audience about communication and alignment: How can the CIO and CEO work with senior leadership to align on AI strategy, given the challenges in revising KPIs for a more AI-friendly approach?

Dave Trier: That’s a great question. The key is to work with trusted partners who have experience with AI Governance. They can help develop a common vocabulary that reflects best practices and regulatory requirements. We often recommend a “101” training session on AI Governance to align executives and stakeholders across the organization.

Closing Remarks and Invitation
Jay Combs: Thanks, Dave. A few final thoughts before we wrap up:

AI Governance might seem like a daunting task, but it doesn’t have to be a multi-year effort. ModelOp can help you implement a Minimum Viable Governance approach in fewer than 90 days. With the upcoming OMB rules and over 400 federal agencies starting to appoint Chief AI Officers, AI Governance is going to be more prominent in the news, so now is the time to get ahead of it.


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us