Get a first look and demo of the new ModelOp 3.3, which includes the world’s first AI Governance Score. ModelOp’s VP of Product, Dave Trier, will demonstrate how ModelOp 3.3 gives executives a standardized, real-time metric to measure risk across diverse AI initiatives, whether an organization is using generative AI, in-house models, third-party vendor tools, or embedded AI systems. Dave will also share ModelOp 3.3’s major new enhancements, which give executives real-time visibility into their AI initiatives and the associated risks across the entire enterprise, including:
- AI Governance Inventory & Comprehensive Use Case Management
- AI Governance Score & Automated Compliance Controls
- Enhanced Reporting on AI Governance Adherence
Transcript
1. Introduction
Alright, hello everyone. Sorry for the delay in getting started here. We're ready to go. I appreciate you sticking with us, and welcome to the fifth episode of the Good Decisions monthly webinar on AI governance insights. I'm just checking to make sure people are able to get in okay and taking a quick look at who's joined us.
Great, awesome. Thanks, everybody, for your patience while we get started here. I'm Jay Combs, VP of Marketing, and with me is Dave Trier, our VP of Product.
Over the past couple of months, we've been doing webinars breaking down different AI regulations and trends. But today, we're going to take a bit of a detour and do a deeper dive into our new ModelOp version 3.3, the software we just announced, and we're really excited to share it with you. This is a first peek into what we've been working on, and it includes the world's first AI governance score, which is a standardized metric that measures AI risk and compliance adherence across diverse AI initiatives—regardless of whether you're using generative AI, in-house developed or proprietary models, third-party vendor tools, or embedded AI systems like Microsoft Copilot or Salesforce Einstein.
2. AI Accountability Problem
So, there's a lot to talk about in terms of what that score is. Essentially, it's an apples-to-apples comparison for executives to understand what's going on with AI within the organization, and we're really excited to share it with you. We've got a fairly straightforward agenda today. We're going to talk first about the AI accountability problem—to set up the challenges we saw in the market that prompted us to make these significant announcements and investments in ModelOp 3.3.
We'll give you a quick overview of what it is, and then Dave, our VP of Product, who you all know fairly well by now, will give you a demo of ModelOp 3.3. And, of course, as always, this material will be available after the webinar. We'll send out the deck and the recording so you can share it and come back to it.
As always, feel free to use the chat or the Q&A box to share any thoughts you might have. And with that, let's get going.
Alright, so we've discussed this in some past webinars, but what we call the "AI accountability problem" comes down to a few things. First of all, very few executives can properly identify what AI is being used, how it's being used within their organizations, and what risks it presents.
Part of the issue is that anyone with a credit card can acquire an AI tool and start using it with company information in a variety of ways—similar to how cloud technology got started back in the day. This makes it difficult to manage AI across a large organization.
Another reason is the rapid evolution of technology. There are tons of tools; OpenAI, for instance, just announced another ChatGPT update. Different verticals and industries, as well as development teams and data science teams, want the freedom to use the tools best suited to their use cases, which makes it challenging to systematically identify all the risks and value of those use cases across a larger enterprise.
And then, almost every week, there are new laws and regulations related to AI, with real deadlines that are happening now. All of this points to a singular challenge: How do you innovate with AI while protecting the business?
3. Introduction of ModelOp 3.3
That leads us to introducing ModelOp 3.3. Dave, why don't you take it from here?
Excellent. Thanks so much, Jay, and thanks everybody for joining. As Jay mentioned, I'm Dave Trier, VP of Product with ModelOp. I am so excited to announce ModelOp 3.3 today. We're incredibly proud of all the effort we put into it. Version 3.3 addresses the challenges that Jay brought up: board-level visibility into the use of AI, discussions around how we can achieve substantial gains from AI, and, most importantly, how to protect the organization.
4. Features of ModelOp 3.3
With ModelOp 3.3, we're advancing what we've done in the past to allow for freedom of innovation while maintaining the right level of control. In version 3.3, we've enhanced visibility into all AI initiatives being worked on across the enterprise. We've also extended our capabilities to help enforce appropriate governance controls without stifling innovation. Additionally, we've improved how we govern and report on the use of AI systems—not just from a safety and responsibility standpoint, but also from a value perspective.
For those who haven't seen or heard of ModelOp Center before, our software is composed of three core capabilities. First, a comprehensive inventory that provides visibility and accountability into all types of models, whether they're open-source, cloud-based, proprietary, vendor models, or embedded AI. Second, a powerful process automation engine that enforces governance policies with automation to avoid slowing down innovation. And third, a robust reporting engine with a suite of out-of-the-box tests for comprehensive testing, monitoring, and automated documentation generation for reviews and validation reports.
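To make those three capabilities concrete, here is a minimal Python sketch of how an inventory, an automation engine, and a reporting engine might fit together conceptually. The class and method names are illustrative assumptions, not ModelOp Center's actual API.

```python
# Illustrative sketch only: class and method names are hypothetical, not
# ModelOp Center's actual API. They mirror the three capabilities above:
# inventory, process automation, and reporting.
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    name: str
    model_type: str                      # e.g. "open-source", "vendor", "embedded"
    attributes: dict = field(default_factory=dict)


class Inventory:
    """Central registry of every model, regardless of origin."""
    def __init__(self):
        self.records: list[ModelRecord] = []

    def register(self, record: ModelRecord) -> None:
        self.records.append(record)


class AutomationEngine:
    """Runs governance steps (tests, reviews, approvals) against a record."""
    def run_controls(self, record: ModelRecord) -> dict:
        # A real engine would trigger tests and route approvals here.
        return {"model": record.name, "controls_passed": True}


class ReportingEngine:
    """Aggregates control results into a reviewer-facing summary."""
    def summarize(self, results: list) -> str:
        passed = sum(1 for r in results if r["controls_passed"])
        return f"{passed}/{len(results)} models passed automated controls"


inventory = Inventory()
inventory.register(ModelRecord("Claims triage classifier", "in-house"))
inventory.register(ModelRecord("Copilot rollout", "embedded"))
engine, reporting = AutomationEngine(), ReportingEngine()
print(reporting.summarize([engine.run_controls(r) for r in inventory.records]))
```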
5. Demonstration of ModelOp 3.3
Dave will now demonstrate ModelOp 3.3, showcasing how these new features come together. We built this version to ensure that the right governance controls are in place, providing board-level visibility while enabling substantial gains from AI. With the updated inventory, users can dynamically track all types of AI models used within their organization without slowing innovation cycles. The process automation engine ensures the correct governance steps are followed for every AI system, reducing the need for manual interventions.
6. AI Governance Score
One of the key new features is the AI Governance Score, which provides an apples-to-apples comparison for executives or those not familiar with all the nuances of AI technology. It answers questions like: "Am I complying with the governance policy? Where do I have outstanding items? Where are the risks?" This score enables large enterprises to adhere to their governance policy in an automated and systematic manner.
Additionally, we've introduced model cards—a common concept that allows for easy customization and documentation of AI models. These model cards make it simple to understand a model's purpose, biases, risks, and limitations, providing an easy-to-consume format that can be shared with executives or stakeholders.
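For readers unfamiliar with model cards, the sketch below shows one hypothetical way to structure the fields Dave describes (purpose, biases, risks, limitations) in Python. The field names and example values are assumptions for illustration, not ModelOp 3.3's schema.

```python
# Hypothetical model card structure for illustration; field names are
# assumptions, not ModelOp 3.3's actual schema.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    name: str
    purpose: str
    known_biases: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    limitations: list = field(default_factory=list)

    def to_summary(self) -> str:
        """Render an executive-friendly, plain-text summary."""
        return (
            f"{self.name}: {self.purpose}\n"
            f"  Biases: {', '.join(self.known_biases) or 'none documented'}\n"
            f"  Risks: {', '.join(self.risks) or 'none documented'}\n"
            f"  Limitations: {', '.join(self.limitations) or 'none documented'}"
        )


card = ModelCard(
    name="Claims triage classifier",
    purpose="Route incoming insurance claims to the right review queue",
    known_biases=["under-represents claims filed in languages other than English"],
    risks=["mis-routing high-severity claims"],
    limitations=["trained on 2022-2023 claims only"],
)
print(card.to_summary())
```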
7. Key Takeaways
In summary, ModelOp 3.3 offers substantial improvements in visibility, governance, and automation for AI initiatives. By enhancing inventory management, process automation, and reporting capabilities, we're providing organizations with the tools they need to manage AI risks and comply with regulations effectively. The AI Governance Score and model cards are just a few of the new features that make it easier to evaluate and communicate the status of AI systems across the organization.
With that, I'll hand it back to Jay.
Thank you, Dave. I hope everyone found that overview helpful. As always, feel free to reach out with questions, and we encourage you to take advantage of the resources we'll be sharing after the webinar.
8. Detailed View of AI Use Cases
Now, let's dive into a detailed view of how ModelOp 3.3 handles AI use cases. One of the main goals of ModelOp 3.3 is to ensure consistency and accountability for each AI use case. For example, when you open up an AI use case in ModelOp Center, you can immediately see key information like the organization it belongs to, whether it uses PII (Personally Identifiable Information) or PHI (Protected Health Information), and its risk tier. You also have a clear view of the AI lifecycle stage—whether it's in development, validation, testing, or production.
Most importantly, ModelOp 3.3 presents the open items that need to be addressed, such as tasks or identified risks. You can easily drill down into each risk to understand what needs to be done, and the governance score is displayed prominently to provide an overview of compliance with governance requirements. The governance score is automatically updated as new information is added, which ensures that compliance is always up to date.
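As a rough illustration of the fields described above, here is a hypothetical use-case record in Python. The structure and example values are assumptions for clarity, not ModelOp Center's data model.

```python
# A minimal, hypothetical representation of the use-case fields described
# above (organization, PII/PHI flags, risk tier, lifecycle stage, open items).
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    TESTING = "testing"
    PRODUCTION = "production"


@dataclass
class UseCase:
    name: str
    organization: str
    uses_pii: bool
    uses_phi: bool
    risk_tier: int                      # e.g. 1 = highest risk
    stage: Stage
    open_items: list = field(default_factory=list)   # outstanding tasks or risks


fraud_detection = UseCase(
    name="Card fraud detection",
    organization="Retail Banking",
    uses_pii=True,
    uses_phi=False,
    risk_tier=1,
    stage=Stage.VALIDATION,
    open_items=["bias test pending", "validation report not yet approved"],
)
print(f"{fraud_detection.name}: {len(fraud_detection.open_items)} open items")
```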
9. Inventory and Search Enhancements
With the new inventory and search enhancements in ModelOp 3.3, we've made it easier for users to find specific AI systems and track relevant information. Whether the AI is vendor-supplied, open-source, proprietary, or embedded, users can maintain a consistent view across the enterprise. ModelOp 3.3 also includes enhanced search capabilities, enabling users to quickly identify AI systems with specific attributes, such as those that include PII or those belonging to a particular business unit.
These improvements make it possible to generate reports in various formats, including CSV files, which can be used for offline analysis. For enterprises that prefer using tools like Excel, this feature provides a convenient way to create custom reports and share key information.
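The snippet below sketches the general idea of attribute filtering and CSV export using plain Python; the inventory fields are illustrative assumptions rather than ModelOp's actual export format.

```python
# Sketch of attribute filtering and CSV export, in the spirit of the
# inventory and search enhancements described above.
import csv

inventory = [
    {"name": "Card fraud detection", "business_unit": "Retail Banking", "uses_pii": True},
    {"name": "Copilot rollout",      "business_unit": "IT",             "uses_pii": False},
    {"name": "Claims triage",        "business_unit": "Insurance",      "uses_pii": True},
]

# Filter: all AI systems that touch PII.
pii_systems = [row for row in inventory if row["uses_pii"]]

# Export the filtered view to CSV for offline analysis in Excel or similar tools.
with open("pii_systems.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "business_unit", "uses_pii"])
    writer.writeheader()
    writer.writerows(pii_systems)
```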
10. Use-Case Centered Inventory
In ModelOp 3.3, we've centered everything around the use case. AI is meant to solve business problems, so it's natural that our inventory should focus on business use cases. This approach helps users understand the bigger picture of what they're trying to achieve and how their AI initiatives contribute to solving business problems.
Users can now filter the inventory by business problems or use cases, allowing them to view all AI systems that are part of a specific business function. This consistency makes it easier for stakeholders to understand how AI is being used and what the potential impact is on their organization.
11. Customizable Information Collection
One of the great features of ModelOp 3.3 is the ability to customize the information collected for each AI use case. Enterprises have different needs, and ModelOp 3.3 allows them to customize fields to capture the information that matters most to them. Whether it's business justification, scope, or other attributes, users can add custom fields to the inventory.
This level of customization extends beyond use cases and into technical implementations as well. Users can track the specific technologies used for each AI initiative, such as language models or orchestration frameworks. This provides both technical and non-technical stakeholders with a comprehensive understanding of each AI system, from its purpose and business impact to its underlying technologies.
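One simple way to picture customizable fields is shown below; the layout and field names are illustrative assumptions, not ModelOp 3.3's configuration mechanism.

```python
# Illustrative only: a use-case record with standard fields, enterprise-specific
# custom fields, and the technical implementations tracked alongside it.
use_case = {
    "name": "Customer support assistant",
    # Standard fields
    "risk_tier": 2,
    "stage": "development",
    # Enterprise-specific custom fields
    "custom": {
        "business_justification": "Reduce average handle time by 20%",
        "scope": "EMEA support tickets only",
    },
    # Technical implementation details, visible to technical and business stakeholders
    "implementations": [
        {"component": "language model", "technology": "hypothetical-llm-v1"},
        {"component": "orchestration", "technology": "in-house RAG pipeline"},
    ],
}
print(use_case["custom"]["business_justification"])
```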
12. Detailed View of AI Governance Scores
In ModelOp 3.3, the AI Governance Score is an integral part of assessing compliance and risk across AI initiatives. Each AI use case is assigned a governance score, which dynamically updates as new information becomes available. This score helps stakeholders quickly understand the compliance status of each AI system.
The governance score aggregates key factors such as risk assessments, approval status, and completion of required tasks. Users can also see where specific gaps exist, allowing them to address any compliance issues proactively. The automatic updating of governance scores helps ensure that compliance is always current, providing greater peace of mind for executives and governance teams.
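As a deliberately simplified illustration of that aggregation, the sketch below blends three completion ratios into a single 0-100 number. The weights and factors are assumptions; the webinar does not disclose ModelOp's actual scoring formula.

```python
# Simplified scoring sketch: weights and factors are illustrative assumptions,
# not ModelOp's scoring method.
def governance_score(risks_mitigated: int, risks_total: int,
                     approvals_done: int, approvals_required: int,
                     tasks_done: int, tasks_total: int) -> float:
    """Blend three completion ratios into a 0-100 score."""
    def ratio(done, total):
        return done / total if total else 1.0

    score = (
        0.4 * ratio(risks_mitigated, risks_total)
        + 0.3 * ratio(approvals_done, approvals_required)
        + 0.3 * ratio(tasks_done, tasks_total)
    )
    return round(100 * score, 1)


# Example: 3 of 4 risks mitigated, 1 of 2 approvals done, 5 of 5 tasks complete.
print(governance_score(3, 4, 1, 2, 5, 5))  # 75.0
```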
13. Automating AI Governance Processes
ModelOp 3.3 emphasizes the automation of governance processes to streamline AI management. By automating key steps, such as risk assessments, compliance checks, and documentation generation, ModelOp 3.3 reduces the burden on governance teams and ensures consistency across AI systems.
For example, the process automation engine can trigger actions based on specific criteria—such as notifying stakeholders when a new risk is identified or when an approval is overdue. This type of automation helps teams stay on track without needing to manually monitor every AI use case, leading to greater efficiency and reduced risk.
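The sketch below shows criteria-based triggers of that kind in plain Python; the rule structure and notification hook are hypothetical, not ModelOp's automation API.

```python
# Hypothetical criteria-based triggers: notify when a new risk is identified
# or when an approval is overdue.
from datetime import date


def check_triggers(use_case: dict, today: date) -> list:
    """Return notification messages for any triggered rules."""
    notifications = []
    if use_case.get("new_risks"):
        notifications.append(
            f"{use_case['name']}: {len(use_case['new_risks'])} new risk(s) identified"
        )
    due = use_case.get("approval_due")
    if due and today > due and not use_case.get("approved", False):
        notifications.append(f"{use_case['name']}: approval overdue since {due}")
    return notifications


example = {
    "name": "Card fraud detection",
    "new_risks": ["drift detected on transaction volume"],
    "approval_due": date(2024, 6, 1),
    "approved": False,
}
for msg in check_triggers(example, date(2024, 6, 15)):
    print(msg)
```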
14. Reporting at Multiple Levels
Effective reporting is crucial for managing AI at scale. ModelOp 3.3 provides robust reporting capabilities at both the individual use case level and the departmental level. Users can generate reports that provide detailed information on specific AI systems, including governance scores, compliance status, and performance metrics.
For executives and governance officers, ModelOp 3.3 also offers a "My Work" dashboard, which provides an overview of all AI systems under their purview. This dashboard highlights critical tasks, recent risks, and areas that require attention, enabling users to focus their efforts on high-priority items.
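A rough sketch of a per-owner work queue, in the spirit of that dashboard, might look like the following; the owners, tasks, and priorities are assumptions for illustration, not the actual ModelOp interface.

```python
# Hypothetical "my work" aggregation: group open tasks by owner and surface
# high-priority items first.
from collections import defaultdict

tasks = [
    {"owner": "jdoe",   "use_case": "Card fraud detection", "task": "approve validation report", "priority": "high"},
    {"owner": "jdoe",   "use_case": "Claims triage",        "task": "review new risk",           "priority": "high"},
    {"owner": "asmith", "use_case": "Copilot rollout",      "task": "confirm data scope",        "priority": "low"},
]

my_work = defaultdict(list)
for task in tasks:
    my_work[task["owner"]].append(task)

# Show jdoe's high-priority items first, mirroring a "focus on what matters" view.
for item in sorted(my_work["jdoe"], key=lambda t: t["priority"] != "high"):
    print(f"[{item['priority']}] {item['use_case']}: {item['task']}")
```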
15. Managing Risks in AI Systems
One of the challenges in AI governance is effectively managing risks across multiple AI systems. ModelOp 3.3 provides tools to identify, assess, and mitigate risks in a systematic manner. Each AI use case includes a detailed risk assessment that outlines identified risks, their severity, and the actions needed to address them.
The system also allows users to add comments and updates to each risk, creating an audit trail that documents how each risk is being managed. This feature is particularly useful for tracking the resolution of risks over time and ensuring that all stakeholders are informed of the current status.
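Here is a minimal sketch of an audit trail on a risk record, with timestamped comments and status changes; the structure is an illustrative assumption, not ModelOp's implementation.

```python
# Hypothetical audit trail: each update appends a timestamped entry and can
# optionally move the risk to a new status.
from datetime import datetime, timezone

risk = {
    "title": "Potential bias in loan approval model",
    "severity": "high",
    "status": "open",
    "audit_trail": [],
}


def add_update(risk: dict, author: str, note: str, status: str = "") -> None:
    """Append a timestamped entry; optionally change the risk status."""
    risk["audit_trail"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "note": note,
    })
    if status:
        risk["status"] = status


add_update(risk, "validator", "Disparate impact test scheduled for next sprint")
add_update(risk, "model owner", "Retrained with reweighted data; test passed", status="mitigated")
print(risk["status"], len(risk["audit_trail"]))
```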
16. Workflow Automation and Task Management
In addition to automating governance processes, ModelOp 3.3 includes workflow automation features that help manage tasks throughout the AI lifecycle. Users can define workflows for key activities, such as model validation, testing, and deployment, ensuring that each step is completed in accordance with governance policies.
Tasks are automatically assigned to the appropriate team members, and users can track the progress of each task through the ModelOp Center interface. This level of workflow automation helps ensure that all required activities are completed on time and that nothing falls through the cracks.
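The sketch below illustrates routing workflow steps to owning roles; the roles, step names, and addresses are assumptions for illustration, not ModelOp's workflow engine.

```python
# Hypothetical workflow definition with automatic task assignment by role.
WORKFLOW = [
    {"step": "model validation",  "assigned_role": "model validator"},
    {"step": "bias testing",      "assigned_role": "risk analyst"},
    {"step": "deployment review", "assigned_role": "ML engineer"},
]

ROLE_MEMBERS = {
    "model validator": "vteam@example.com",
    "risk analyst": "risk@example.com",
    "ML engineer": "mlops@example.com",
}


def assign_tasks(use_case_name: str) -> list:
    """Create one open task per workflow step, routed to the owning role."""
    return [
        {"use_case": use_case_name, "task": s["step"],
         "assignee": ROLE_MEMBERS[s["assigned_role"]], "status": "open"}
        for s in WORKFLOW
    ]


for task in assign_tasks("Card fraud detection"):
    print(f"{task['task']} -> {task['assignee']}")
```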
17. Conclusion and Next Steps
In conclusion, ModelOp 3.3 represents a significant step forward in AI governance, providing enterprises with the tools they need to manage AI risks, ensure compliance, and maximize the value of their AI initiatives. By focusing on visibility, automation, and consistency, ModelOp 3.3 helps organizations navigate the complexities of AI governance with greater confidence.
As we wrap up, we encourage everyone to explore the features of ModelOp 3.3 further and consider how they can be applied within your own organizations. Please feel free to reach out to our team for more information or to schedule a personalized demo.
Thank you once again for joining us today, and we look forward to continuing the conversation in our next webinar.