Good Decisions: Episode 1

AI Governance Urgency and the Risks of AI Gone Wrong

Join ModelOp’s VP of Product, Dave Trier, for a webinar in which he discusses three real-world situations that underscore the urgency of enterprise AI governance.

Register for the series.

The speed with which enterprises are embracing Generative AI and rushing it into production is head-spinning, and the end of the calendar year was rife with headlines of “AI Gone Wrong.” The time to safeguard AI was yesterday.

In this webinar, Dave analyzes current events in which enterprises failed to balance AI risk and reward, and shares AI Governance insights that he’s collected from conversations with over one hundred Fortune 500 executives over the past year.

Download the slide deck.

For a deeper dive on the topic, check out the whitepaper: “2024 Readiness: Three Urgent Lessons for Enterprises on Managing AI Risk and Reward”


Transcript

Introduction

Hello, and welcome. I am Jay Combs, the VP of Marketing at ModelOp, and welcome to "Good Decisions," a monthly webinar for enterprise AI governance insights. Thank you to everyone who registered and joined today. I know it's challenging to carve out time in the middle of the day, especially midweek.

Evolution of AI Tech and Use Cases

With AI technology, use cases, and regulations evolving rapidly, we've had the opportunity to speak with many Fortune 500 companies and executives over the past five years. We've observed a significant gap in education and the ability to exchange insights about enterprise AI and AI governance, including visibility, controls, and reporting on AI initiatives. This webinar series aims to provide a forum for sharing ideas and asking questions.

This is our inaugural episode, titled "Real-World Lessons on AI Governance Urgency and the Risks of Enterprise AI Gone Wrong," led by Dave Trier, our VP of Product. Dave, take it away.

Opening Remarks From Dave Trier

Thank you, Jay. Good morning, good afternoon, and good evening to everyone joining us, including those from Europe and farther east. I'm Dave Trier, VP of Product at ModelOp. Over the past decade, I've had the privilege of speaking with hundreds of executives and users of AI and ML systems. Today's webinar will address two key questions for those starting their AI journey: What are the risks when adopting AI in an organization, and what are the key steps to begin an AI governance journey to safeguard the organization?

We'll first highlight a few examples of where AI has gone wrong, helping you identify the risks to safeguard against. We'll then discuss examples from large organizations on getting started with AI governance, focusing on visibility, controls, and automation. We'll conclude with key steps to begin your AI governance journey.

Significance of Generative AI

Everyone is aware of AI, and more recently, generative AI. Having worked in technology for over twenty years, I've never seen such rapid adoption by large organizations. Generative AI is not just a technology trend; it's being used across various departments, including marketing, finance, and operations, elevating discussions to the C-suite and board levels. This widespread use is evident in the increasing mentions of AI in earnings calls over the past year.

Demand for AI and Its Implications

The rapid adoption of generative AI brings significant demand and, consequently, significant risk. While ethical concerns such as avoiding racial or ethnic bias are paramount, organizations also need to consider financial, regulatory, and intellectual property (IP) risks.

Risks of AI Adoption

  1. Financial Risks: In one widely reported case, a user tricked a dealership's ChatGPT-powered chatbot into agreeing to sell a Chevy Tahoe for one dollar. The incident highlights the financial exposure enterprises face when AI safeguards are inadequate.
  2. Regulatory Risks: Rite Aid faced regulatory action after deploying biased facial recognition technology, leading the FTC to ban the company from using facial recognition surveillance for five years. This case underscores the need for appropriate AI governance.
  3. IP Risks: A Samsung employee inadvertently pasted confidential source code into ChatGPT, risking IP leakage. This scenario emphasizes the importance of proper safeguards for intellectual property.

Measures to Safeguard the Organization

To mitigate these risks, organizations should start by gaining visibility into AI initiatives. For instance, a large healthcare organization faced challenges tracking AI and ML use cases across multiple teams and departments. They needed a comprehensive view of all AI initiatives, whether internally developed, vendor-supplied, or embedded in software. This visibility is crucial for reporting to executives and regulatory authorities.

Permission Risk and Visibility

Permission risk arises when teams don't know the data lineage or usage permissions. Proper cataloging and inventory processes can help manage these risks by defining usage permissions and limitations for each AI system.
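
As a rough illustration of what such a catalog entry might capture, here is a minimal sketch in Python. The class and field names are hypothetical, not ModelOp's schema; they simply show how origin, data lineage, and usage permissions can live alongside each AI asset in an inventory.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in a hypothetical enterprise AI inventory (illustrative only)."""
    name: str                # e.g., "claims-triage-llm"
    origin: str              # "internal", "vendor", or "embedded"
    owner: str               # accountable business or technical owner
    data_sources: list[str] = field(default_factory=list)    # data lineage
    permitted_uses: list[str] = field(default_factory=list)  # what it may do
    prohibited_uses: list[str] = field(default_factory=list) # what it must not do

# Example: a vendor-supplied model whose data lineage limits how it may be used.
record = AIAssetRecord(
    name="claims-triage-llm",
    origin="vendor",
    owner="claims-operations",
    data_sources=["vendor-pretraining-corpus", "internal-claims-2023"],
    permitted_uses=["internal decision support"],
    prohibited_uses=["customer-facing responses"],
)
print(record.origin, record.prohibited_uses)
```

With records like this in place, questions such as “which vendor models touch customer data?” become a query rather than a scramble.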

Consistent Governance Processes

Organizations often struggle with inconsistent governance processes, especially when balancing innovation and safeguards. A consumer goods company with AI models making automatic decisions faced this challenge. They implemented a consistent but lightweight governance framework with automated controls to ensure adherence to governance steps, independent reviews, and testing before deployment.

Automated Controls and Processes

Automating governance processes is essential to ensure controls are applied consistently across every type of AI system, whether open-source models, vendor models, or AI embedded in third-party software. This approach helps manage the risks associated with deploying AI in production environments.
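
As a concrete, deliberately simplified sketch of such an automated control, the gate below applies one checklist to every deployment request, whatever the model's origin. The control names and evidence dictionary are hypothetical; a real implementation would pull these results from the governance system of record.

```python
# Hypothetical pre-deployment gate: every model, regardless of origin,
# must show evidence for the same controls before promotion to production.
REQUIRED_CONTROLS = [
    "inventory_record_complete",    # the model is cataloged
    "independent_review_approved",  # someone other than the builder signed off
    "bias_tests_passed",
    "performance_tests_passed",
]

def ready_for_production(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, failing controls) for a deployment request."""
    failures = [c for c in REQUIRED_CONTROLS if not evidence.get(c, False)]
    return (not failures, failures)

# A request missing even one control is blocked automatically.
approved, failures = ready_for_production({
    "inventory_record_complete": True,
    "independent_review_approved": True,
    "bias_tests_passed": False,
    "performance_tests_passed": True,
})
if not approved:
    print(f"Deployment blocked; unresolved controls: {failures}")
```

The point of automating the gate is that it stays lightweight for teams (evidence is collected once) while remaining consistent for the organization.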

Proactive Risk Mitigation in Production

Organizations must proactively monitor AI systems in production to catch issues before they become incidents. For example, regular monitoring of generative AI systems can detect problems like PII leakage or performance degradation. When an incident does occur, an automated fallback plan can limit the damage, as the Chevy dealership incident mentioned earlier illustrates.
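
A minimal sketch of this pattern, assuming a regex-based check purely for illustration: screen each generated response, raise an alert, and fall back to a safe canned reply when a check fails. Real deployments would use a dedicated PII detection service plus broader monitoring (drift, toxicity, cost).

```python
import re

# Crude PII patterns for illustration only.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

FALLBACK_REPLY = "I'm sorry, I can't help with that. Let me connect you with a person."

def guarded_reply(generate, prompt: str) -> str:
    """Call the model, but swap in a safe reply if the output appears to leak PII."""
    answer = generate(prompt)
    if any(p.search(answer) for p in PII_PATTERNS):
        # Raise an incident for the governance team, then degrade gracefully.
        print(f"ALERT: possible PII in model output for prompt {prompt!r}")
        return FALLBACK_REPLY
    return answer

# Usage with a stand-in model function that misbehaves:
print(guarded_reply(lambda p: "Sure, her SSN is 123-45-6789.", "Look up the customer"))
```

The fallback itself is the key design choice: a degraded but safe answer keeps the system available while the incident is investigated.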

Key Lessons Learned

In summary, the key steps to safeguard your organization include:

  1. Gaining visibility into all AI initiatives.
  2. Implementing a lightweight set of controls for testing and evidence gathering.
  3. Establishing monitoring and automated fallback plans for production environments.

Ownership Approach for AI Governance

For effective AI governance, it's crucial to have a designated AI governance lead. This person should oversee a cross-functional committee, ensuring comprehensive coverage across all organizational departments.

Getting Started with Governance

Governance may seem daunting, but it doesn't have to be. Start with targeted steps to protect your enterprise from undue risks. It's possible to make significant progress in weeks, not years.

We encourage you to reach out to us at ModelOp.com or sales@modelop.com for further discussions, executive workshops, and more. This concludes our first episode. Next month, we'll focus on regulations in healthcare, pharma, biotech, and other industries. The regulatory landscape is evolving quickly, and we'll share insights to help navigate it.


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs
