Jay Combs: Hello, and welcome to the Good Decisions podcast, the podcast for AI governance insights. I'm Jay Combs, the VP of Marketing here at ModelOp. This podcast will be a series on practical insights and frequently asked questions we hear from our customers, prospects, and industry folks about enterprise AI governance. Topics will include governance frameworks, regulations, generative AI, responsible AI, and more. Our goal is to help people understand what everyone is talking about, ensuring we're all speaking the same language when it comes to AI governance. Each week, we'll drop a new podcast featuring a guest with specific expertise in the industry or on a governance topic.
Introduction of Guest and Discussion on Risk Tiering
Jay Combs: With me today is Amanpreet Kaur, our Customer Success and Implementation Engineer. She has over ten years of experience with data science and machine learning models and holds a Ph.D. in astrophysics. Welcome, Aman.
Amanpreet Kaur: Hi, everyone. Thank you so much, Jay, for having me on the very first podcast for ModelOp. I'm thrilled to be here.
Jay Combs: We're thrilled to have you. I know you're busy working with our customers and fielding a lot of questions, so you're in a unique position to hear what folks are curious about across different verticals and customers. We want to pick your brain about what people need insights on. Today's podcast is about risk tiering.
Understanding Risk Tiering
Jay Combs: What is risk tiering, and why is it important?
Amanpreet Kaur: Risk tiering means categorizing AI initiatives by their potential risk, severity, and impact. This helps organizations prioritize their risk management workflow. For example, a high-risk model is handled differently from a low-risk model, which might have fewer steps to follow.
Jay Combs: Why does this matter to someone on a marketing or engineering team?
Amanpreet Kaur: It helps teams focus their attention and resources, which enables efficient workflows and accelerates innovation. Without proper risk tiering, a model handling sensitive data might skip the scrutiny it needs, and something like a data leak could slip through, leading to significant issues. Proper risk tiering ensures each model goes through the level of scrutiny appropriate to its risk, preventing such incidents.
Jay Combs: I've heard of cases like Samsung accidentally sharing proprietary information with ChatGPT. Proper risk tiering could prevent such mishaps, right?
Amanpreet Kaur: Exactly. These incidents highlight the importance of risk tiering.
EU AI Act and Implementation of Risk Tiering
Jay Combs: How do you implement risk tiering within a larger AI governance framework?
Amanpreet Kaur: Implementation varies by industry. For instance, financial institutions and healthcare have different risk criteria. The first step is to identify what constitutes high risk in each industry. Data privacy, safety, compliance, and model ownership are also crucial factors. Risk calculations can involve numerous controls, making the process complex.
Jay Combs: Does risk tiering apply to all types of models, whether developed in-house or using third-party tools?
Amanpreet Kaur: Yes, every model that goes into production and impacts the business should have a defined risk tier. It's essential to start thinking about risk tiering early in the development process.
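To make that concrete, here is a minimal sketch in Python of how industry-specific risk criteria might be weighted and combined into a single score. The criteria names, weights, and example values are hypothetical illustrations, not ModelOp's actual rubric.

```python
# Hypothetical example: criteria names and weights are illustrative only.
# Each industry weights its risk criteria differently; a bank might weight
# regulatory exposure heavily, while a hospital weights patient safety.

INDUSTRY_CRITERIA = {
    "financial_services": {
        "regulatory_exposure": 0.40,
        "financial_impact": 0.35,
        "data_privacy": 0.25,
    },
    "healthcare": {
        "patient_safety": 0.50,
        "data_privacy": 0.30,
        "regulatory_exposure": 0.20,
    },
}

def weighted_risk_score(industry: str, scores: dict) -> float:
    """Combine per-criterion scores (each 0.0-1.0) into one weighted risk score."""
    weights = INDUSTRY_CRITERIA[industry]
    return sum(weight * scores.get(criterion, 0.0)
               for criterion, weight in weights.items())

# A credit-decisioning model at a bank scores high on regulatory exposure.
print(weighted_risk_score("financial_services",
                          {"regulatory_exposure": 0.9,
                           "financial_impact": 0.7,
                           "data_privacy": 0.4}))  # -> 0.705
```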
Implementing Risk Tiering: Broad Approach
Jay Combs: Can you bucket those implementation points into a broader approach?
Amanpreet Kaur: Sure. There are three broad categories: model complexity, intended usage or purpose, and exposure or impact. Complexity covers the mathematical and statistical sophistication of the model. Purpose is why the model is being developed, and exposure is the scope of the model's potential impact.
Jay Combs: So complexity, purpose, and impact are key factors in risk tiering.
Amanpreet Kaur: Exactly.
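One simple way to picture how complexity, purpose, and exposure roll up into a tier is a scoring rubric. The sketch below assumes reviewers score each factor from 1 to 5; the scales and cut-offs are made up for illustration, not a standard.

```python
# Hypothetical rubric: the 1-5 scales and tier cut-offs are illustrative only.
# Each organization defines its own criteria and thresholds.

def assign_risk_tier(complexity: int, purpose: int, exposure: int) -> str:
    """Map the three factors (each scored 1-5 by reviewers) to a risk tier."""
    total = complexity + purpose + exposure       # ranges from 3 to 15
    if exposure == 5 or total >= 12:              # maximum exposure alone forces the top tier
        return "high"
    if total >= 7:
        return "medium"
    return "low"

# A customer-facing credit model: fairly complex, consequential purpose, broad exposure.
print(assign_risk_tier(complexity=4, purpose=5, exposure=5))  # -> high
```

In practice the factors are rarely weighted equally; many organizations let a single dimension, such as exposure, override the arithmetic, which is why this sketch short-circuits on maximum exposure.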
Scaling Risk Tiering and Automation
Jay Combs: How do you scale up risk tiering with so many checks and balances?
Amanpreet Kaur: Automation is essential. Depending on its risk tier, a model goes through a different set of control points. Automating that routing is crucial to manage the scale and complexity effectively.
Jay Combs: Automation seems critical for scaling. Without it, bringing models to market would take too long.
Amanpreet Kaur: Absolutely.
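As a rough illustration of what that automation might look like, the sketch below routes a model through more or fewer control points depending on its tier. The control-point names are hypothetical placeholders for whatever an organization's governance process actually requires.

```python
# Hypothetical control points: the names below stand in for whatever an
# organization's governance workflow mandates at each tier.

CONTROLS_BY_TIER = {
    "high":   ["bias_testing", "data_privacy_review", "stress_testing",
               "independent_validation", "executive_signoff"],
    "medium": ["bias_testing", "data_privacy_review", "peer_review"],
    "low":    ["peer_review"],
}

def run_governance_workflow(model_name: str, tier: str) -> None:
    """Automatically route a model through the control points its tier requires."""
    for control in CONTROLS_BY_TIER[tier]:
        print(f"{model_name}: running control '{control}'")

run_governance_workflow("credit_approval_v2", tier="high")
```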
Trends in Risk Tiering and AI Governance
Jay Combs: What's a top trend you're seeing related to risk tiering and AI governance?
Amanpreet Kaur: Scaling and complexity are major trends. As more use cases involve AI, risk calculations become more complex. We've seen customers go from a few to dozens of criteria for defining risk. Generative AI introduces new risks, further increasing complexity.
Jay Combs: How does rising complexity impact managing these initiatives?
Amanpreet Kaur: It increases the need for automation. Model life cycles now require more control points, making manual management impractical. Automation is necessary to handle this complexity.
Summary and Conclusion
Jay Combs: To summarize, risk tiering categorizes AI initiatives based on complexity, purpose, and impact. Different industries have specific guidelines and criteria. The need for automation is critical as the scale and number of models grow. For those struggling with these challenges, ModelOp can help. Visit www.modelop.com to get in touch. Thank you, Aman, for joining us and sharing your insights on risk tiering.
Amanpreet Kaur: Thanks so much.
Jay Combs: That's it for the first episode of the Good Decisions podcast on risk tiering. We'll have another episode soon. Please subscribe to the podcast, and we look forward to talking to you again soon.