Transcript
Introduction to AI Governance
Dave Trier (VP of Product, ModelOp):
My name is Dave Trier, and I’m the VP of Product with ModelOp, a software startup specifically focused on AI governance for the Fortune 500. We’ve been in business for about five years, and during that time, I’ve talked with hundreds of executives about governance—specifically AI governance—and how to protect organizations.
Over the last 18 months, I’ve seen executives ask two critical questions: "How are we using AI, and how do we get it out there faster?" and "How do we protect the organization?" In many ways these questions pull in opposite directions, and these leaders often turn to me and ask, "Dave, how do I get started with this? I don't even know what AI governance is."
Minimum Viable Governance Concept
So, how do you get started? I’m an engineer by training, so I think of this through the lens of product development—like minimum viable product (MVP). This is where we came up with the concept of "Minimum Viable Governance" (MVG). I’ve invited Jodi and Agus to join me to discuss the right level of governance needed to get started, given that everyone's at different stages in their AI journey.
We’ll explore the Goldilocks of governance—just enough to get started now, while knowing you can mature your governance practices as your AI capabilities grow. Jodi, why don’t you introduce yourself?
Jodi Blomberg (VP of Data Science, Cox Automotive):
I’m Jodi Blomberg, the VP of Data Science for Cox Automotive. I was previously with Charles Schwab and Waste Management, so I’ve transitioned from highly regulated industries to a largely non-regulated B2B environment. I’ve seen both sides of the spectrum, and AI governance has been part of both.
Agus Sudjianto (SVP of Risk & Technology, H2O.ai):
I’m Agus Sudjianto, the Senior VP of Risk & Technology at H2O.ai. I retired from Wells Fargo, where I was the Head of Model Risk Management. In that role, if anything went wrong with a model, it was my responsibility. Now, I’m focused on making AI safe by designing and building new products at H2O.ai.
Risks in Generative AI
Dave Trier:
Before we dive into the practical side of governance, let me ask you both about the risks you see in generative AI. Jodi, as you’ve moved from a regulated industry to one that's not, what keeps you up at night?
Jodi Blomberg:
Honestly, not much. I told my chief security officer last week that nothing keeps me up at night, and it’s mostly true. We’re already monitoring around 120 traditional AI/ML models in production, and I feel confident in our governance over them.
However, with generative AI, I do worry about reputational risks. When you use third-party models, there’s less transparency in how they were trained. You might not know if the model was trained on Reddit data or The New York Times, and that unpredictability poses potential risks. We are cautiously proceeding with generative AI but with strict parameters for its use, especially when customer-facing.
Dave Trier:
Agus, what are your thoughts?
Agus Sudjianto:
When using AI for low-stakes purposes like entertainment, there’s less to worry about. But for real decision-making, especially in high-risk applications, AI can be unreliable. These are powerful systems, but they’re also prone to errors, and that’s where things get risky. If AI makes a mistake, it could land in The Wall Street Journal, and your CEO might be called to testify before Congress. That's why you need robust governance in place.
Practical Steps for AI Governance
Dave Trier:
Let's get into the practical side—how do organizations get started with AI governance? Jodi, what are the minimum capabilities needed to start an AI governance program?
Jodi Blomberg:
At a minimum, you need two things. First, a model inventory. You should know exactly what models you have in production, even if that’s tracked in an Excel sheet. Second, you need some form of ranking system. You should evaluate the risks your models pose, whether that’s reputational, financial, or otherwise. It’s about setting a baseline—what errors are you accepting from humans today, and how does that compare to the errors you're accepting from machines?
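Jodi's two minimum capabilities can be sketched in a few lines of code. The record fields and the three-tier risk scale below are illustrative assumptions, not Cox Automotive's actual scheme:

```python
from dataclasses import dataclass

# A minimal model inventory: one record per model in production.
# Field names and the 1-3 risk scale are hypothetical, for illustration only.
@dataclass
class ModelRecord:
    name: str
    owner: str
    reputational_risk: int  # 1 (low) to 3 (high)
    financial_risk: int     # 1 (low) to 3 (high)

    @property
    def risk_rank(self) -> int:
        # Simple ranking rule: take the worst of the risk dimensions.
        return max(self.reputational_risk, self.financial_risk)

inventory = [
    ModelRecord("lead-scoring", "sales-ds", reputational_risk=1, financial_risk=2),
    ModelRecord("support-chatbot", "support-ds", reputational_risk=3, financial_risk=1),
]

# Review the highest-risk models first.
for m in sorted(inventory, key=lambda m: m.risk_rank, reverse=True):
    print(m.name, m.risk_rank)
```

Even an Excel sheet with these two columns accomplishes the same thing; the point is that every production model has an owner and a risk rank.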
Dave Trier:
Great points. Agus, from your experience, what are the basic steps?
Agus Sudjianto:
The first step is accountability. In a bank, someone like the Chief Model Risk Officer is accountable for the models and reports directly to the Chief Risk Officer or the board. This level of independence is crucial for making unbiased decisions. Second, everyone must have the mindset that all models are wrong at some point. You need to know when they fail and what the impact will be.
Testing Generative AI Models
Dave Trier:
Testing generative AI models is a big question right now. Agus, what’s your approach to testing these models?
Agus Sudjianto:
A lot of testing out there, like benchmark scores, is complete nonsense. You have to test for specific use cases. It’s not about cherry-picking test data to make models look good, but rather about understanding the limitations of the model and thoroughly testing it under a variety of conditions.
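Testing per use case, rather than against a generic benchmark, might look like the following sketch. The stubbed model and the test cases are placeholders; in practice the cases would come from the conditions the model will actually face:

```python
# Use-case-specific testing: score a model against groups of cases drawn
# from its intended deployment, so weaknesses show up per condition rather
# than being averaged away in one benchmark number.
# The stub model and cases below are illustrative placeholders.

def stub_model(prompt: str) -> str:
    # Stand-in for a real generative model call.
    return "I don't know" if "2030" in prompt else "Paris"

use_cases = {
    "in-domain":    [("Capital of France?", "Paris"),
                     ("Capital of Italy?", "Rome")],
    "out-of-scope": [("Who wins in 2030?", "I don't know")],
}

def pass_rate(cases):
    hits = sum(stub_model(q) == expected for q, expected in cases)
    return hits / len(cases)

results = {name: pass_rate(cases) for name, cases in use_cases.items()}
for name, rate in results.items():
    print(f"{name}: {rate:.0%}")
```

Here the per-condition breakdown exposes a limitation (the stub fails half the in-domain cases) that a single aggregate score would hide.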
Dave Trier:
That’s right. Testing needs to be comprehensive and honest, not cherry-picked to showcase good results. For organizations that don’t have the resources banks do, automation becomes critical to scale testing efforts.
Initiating an AI Governance Journey
Dave Trier:
Jodi, if you were advising the audience on the first steps of an AI governance journey, what would they be?
Jodi Blomberg:
Education is key. People need to understand that generative AI models should be monitored just like any other model. You need a model inventory and a ranking system to identify high-risk models and manage them effectively.
Learning from the Financial Sector
Dave Trier:
Agus, any final thoughts?
Agus Sudjianto:
Learn from the financial sector. Banks have been doing this for over a decade, and there’s a lot to learn. Start by scaling down what they do to fit your organization’s needs. And if you’re interested in testing, stop by H2O.ai’s booth for a demo of our automated testing tools for generative AI.
Conclusion
Dave Trier:
Thank you, Agus and Jodi, for sharing your insights today. And thank you to the audience for joining us.