GenAI and Agentic AI are revolutionizing how businesses approach data management, but they also introduce unprecedented challenges. Is your organization ready to navigate the risks, compliance demands, and governance complexities these technologies bring?
Join us for an insightful discussion led by experts from ModelOp, Macula Systems, and Mercy. Together, we'll explore how data and AI governance must align to enable innovation while safeguarding compliance and organizational readiness.
What You’ll Learn:
- The Role of Governance: How integrating data governance with AI governance ensures visibility, trust, and compliance.
- Practical Use Cases: Success stories on data and AI governance from Doug Graham, Director of Enterprise Data Governance at Mercy.
- Scalable Solutions: Frameworks to manage data and AI at scale, avoiding pitfalls in innovation and compliance.
- Actionable Insights: Quick-start governance strategies to prepare for the future of GenAI.
Transcript
1. Introduction to AI Governance
Jay Combs:
Welcome to Good Decisions, a monthly webinar about enterprise AI governance insights. I'm your host, Jay Combs, from ModelOp.
Today, we're going to talk about the explosion of data and AI—and how to manage the chaos it can generate—with a special focus on data and AI governance.
Alright, let me get the slides moving here. So with us today are several experts. First, we have Jim Olsen, CTO of ModelOp. Then we have Kyle Hutchins and Cam Rojas from Macula Systems, who are experts in both data and AI solutions.
And finally, we have Doug Graham, Director of Enterprise Data Governance at Mercy—one of the largest U.S. health systems, operating across about seven states including Missouri, Arkansas, and Oklahoma. Its lineage goes back 195 years.
2. Webinar Overview and Objectives
Jay Combs:
So what we're going to cover are four things. First, we’ll talk about the boom that AI is driving in data. Then we’ll bring on Cam and Jim to talk about how to navigate that growth—the trends that are fueling the AI and data boom, with a specific focus on data and AI governance. What are those critical capabilities you need to manage that exponential growth?
Next, we’ll bring on Doug and Kyle to provide a real-world perspective on the importance of governance at Mercy, and share some of the insights Doug has learned over the years.
And finally, we’ll close out this thirty-minute webinar by underscoring the message that governance isn’t just about safeguards—it’s about being a business enabler.
So with that, let’s dive in.
3. Understanding Data Growth
Jay Combs:
Alright. So, you know, there is a giant boom in data—and AI is driving it. I wasn't really sure how exponential that growth was until I came across this article by Chris Curry.
In the article, he shows a strong connection between global data generation and foundational AI models. The article links to the data sources he used to build the chart, which shows growth in data, in zettabytes, versus the number of foundational AI models over the last couple of years.
It can be hard to internalize how big this growth really is. So, just as an analogy: if you think of one terabyte as 200 miles—like driving from Boston to New York—then 170 zettabytes would be like flying to the outer edges of our solar system, past Saturn and out to Uranus. Just massive, massive growth in data.
And that raises the question: how do you manage that? What does that mean for your business?
4. The Interplay of Data and AI
Jay Combs:
Data is fueling AI. AI is fueling data.
How do you manage this for the benefit of your business? I think there’s a hard way of doing it—and an easy way. And that’s what we’re going to talk about in this webinar.
Let’s first go into some of the trends behind this growth and how to navigate it with Jim and Cam. Welcome, guys. Take it away.
Cam Rojas:
Yeah, so as Jay just discussed—and as the chart pointed out—the growth in data isn’t slowing down anytime soon. I’ll talk a bit from the perspective of the data org and data teams.
We have additional sources still coming into play, and more ways we're interacting with data—especially with the growth in streaming and real-time data. At the same time, data executives and data teams are feeling pressure from boardrooms and stakeholders to meet business demands for data and data products faster. They’re also being asked to evolve their role and bring ML and AI into a more strategic position. That often requires a decentralized or federated operating model.
5. Maximizing Data Value
Jim Olsen:
Yeah—as we gather all this data, the real value isn't just in hoarding it or dumping it into data lakes. The value comes from putting it to work for actual business use cases.
That's where we're seeing this data flow into ML and AI solutions—and especially into generative AI—using architectures like RAG, or even emerging agentic AI approaches. So you've got all this data out there, and now it’s being fed into tons of different models that can deliver business value.
We’re also seeing an explosion of different LLMs. Every week, there's something new that outperforms the last one. We saw GPT-4o and its reasoning improvements. Then DeepSeek-R1 came out as a new reasoning model. And now this week, Grok-3 just beat DeepSeek-R1 in some areas.
We're going to continue to see rapid evolution in these models—including distillation, where one model trains another. So how many datasets went into that? Where are they coming from? What’s in there that we need to be aware of?
As we continue to use these distillation techniques and generative AI models, we really need the ability to track back to the data—whether it's used in business solutions or traditional training of AI/ML models.
6. Challenges in Data Governance
Cam Rojas:
Yeah, so the trends we just discussed—the broader trend of data growth, and the more specific trends Jim talked about, like the explosion of models and generative AI—are putting a lot of additional strain on data teams.
And they're exacerbating challenges that already existed.
7. Existing Data Challenges
Cam Rojas:
Over here in the blue box, you can see challenges that existed even before the explosion of data—but they’re only getting harder to deal with now.
Oftentimes, data lives in silos. It’s hard to share. It’s not easy to discover. I don’t know who owns what. Accountability for what happens in the data process is unclear.
I can’t enforce data quality across all of my data products or processes. Trust is low—both in terms of what’s being done with the data and in the quality of the data itself.
And on top of all that, we’re seeing another layer of regulatory complexities being added.
8. Trust Issues in AI
Jim Olsen:
Yeah, and as Cam mentioned, trust is a big issue—especially when it comes to generative AI. How many stories have we heard about hallucinations?
Just today, there was a release from a law firm cautioning their lawyers against using generative AI in court case documents because the AI was making up invalid cases. It all comes back to the lack of transparency—what’s going into these models, what they’re trained on, and so on.
And then, water finds its own level. Even if businesses aren’t explicitly deploying generative AI solutions, many employees are using them on their own. I mean, Copilot is built into Windows now. People have ChatGPT accounts. So even if you’re trying to enforce policies on where and how these tools are used, you often lack traceability.
We don’t have visibility into where generative AI is actually being used in the organization. There's a certain level of autonomy at the individual level, and people are using these tools—sometimes even introducing them into production code.
Do you have processes in place to recognize and trace the use of ML, AI, and generative AI back to their business purposes?
9. Navigating Data Chaos
Jim Olsen:
How do I locate those things? How do I track that usage? We're seeing a lot of chaos out there right now—especially in the data space, and even more so as people experiment with these technologies and sometimes use them for business purposes they probably shouldn't.
Cam Rojas:
Yeah, and I think Jim and I have just talked through some of the challenges from the perspective of the data org and machine learning teams.
What we’ve seen is that these challenges—or symptoms—are especially present, or get worse, when an enterprise lacks a strong foundational data governance program. That includes both the structure and the tools used to manage it.
And this has a real business impact. It’s not just about tech. You end up with high overhead and manual processes that don’t scale. You lose the ability to treat data as products that drive strategy and value. And you get stuck in a never-ending loop of compliance fire drills that disrupt work and often overlook actual risk.
10. Consequences of Poor Data Management
Jim Olsen:
Yeah—and not to mention the lost business opportunities. If you don’t know where your data is or what it represents, it’s much harder to take advantage of it. You can’t easily use a RAG solution or a traditional ML model if you're not even aware that the data exists.
So there’s chaos, not just in managing the data, but even in finding it—knowing it’s there so you can apply newer techniques and unlock new value for the business.
Cam Rojas:
Okay, I’ll kick us off on this one.
We went through a couple iterations of this slide—it was an interesting exercise. Jim represented the ML/AI and data science side, and I came at it from the governance side. At one point, we had 50 capabilities we wanted to list.
But we forced ourselves—because we only had one slide and limited time—to pick the top three capabilities that span both data governance and AI governance. These are the ones we think will have the biggest impact on navigating the chaos and delivering value faster.
I'll go through them quickly from the data side, and then Jim will walk through them from the AI/ML side. Then we’ll show how they come together.
11. Data Governance Essentials
Cam Rojas:
From the traditional data org perspective, the first essential is having a centralized catalog—something that makes data easily discoverable and clearly shows what domain owns what data.
Second is traceability across the entire data product lifecycle. That includes understanding what changes were made and where the source system is.
And third, the ability to start automating compliance. That might include detecting and classifying sensitive data automatically, or enforcing security and access policies based on how that data is being used.
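That third capability, automated classification, can be sketched in just a few lines. This is illustrative only: the regex rules below are assumptions for the sake of the example, and a real program would lean on a catalog tool's built-in classifiers (Purview ships many) rather than hand-rolled patterns.

```python
import re

# Hypothetical classification rules; real catalogs use far more
# robust, pre-built classifiers than these toy patterns.
RULES = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def classify_column(sample_values):
    """Return the sensitivity labels whose pattern matches every
    non-empty sampled value in a column."""
    labels = set()
    values = [v for v in sample_values if v]
    for label, pattern in RULES.items():
        if values and all(pattern.match(v) for v in values):
            labels.add(label)
    return labels

print(classify_column(["123-45-6789", "987-65-4321"]))  # expect {'ssn'}
```

Once a column carries a label like `ssn`, access and masking policies can key off the label instead of being wired to individual tables.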
12. Model Governance Considerations
Jim Olsen:
Yeah—and just like with data, you need to know what models exist, where they are, and what they’re doing. It all has to start with the use case, because a single ML model might support multiple use cases.
You need an inventory—not just of models, but of the use cases they support and the data they consume. That helps you close the loop.
Without an inventory, you’ve got nowhere to start.
Similarly, you want traceability and lineage on the model side. For example, who’s using GPT-4 in your organization? Who’s using DeepSeek—and maybe shouldn’t be? You need that visibility into which use cases, models, and assets are in play, and how they relate to things like risk assessments and data sources.
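A minimal sketch of the inventory Jim describes, linking each model to the use cases it supports and the data it consumes. The record fields and example names are hypothetical, not a ModelOp schema; the point is that "who's using this model, and for what?" becomes a simple query once the inventory exists.

```python
from dataclasses import dataclass

# Illustrative inventory records; field names are assumptions.
@dataclass
class ModelRecord:
    name: str
    use_cases: list
    data_sources: list
    risk_tier: str = "unassessed"

inventory = [
    ModelRecord("claims-summarizer", ["claims triage"], ["claims_db"], "high"),
    ModelRecord("gpt-4", ["support chatbot", "claims triage"], ["kb_docs"]),
]

def models_for_use_case(inventory, use_case):
    """Answer 'which models support this use case?' from the inventory."""
    return [m.name for m in inventory if use_case in m.use_cases]

print(models_for_use_case(inventory, "claims triage"))
# → ['claims-summarizer', 'gpt-4']
```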
13. Automating Governance Processes
Jim Olsen:
That really just closes the loop—giving you a full picture of the solution: where it came from, what’s going on, and what it’s connected to.
And nobody wants to do this stuff manually. It’s a lot of work. You need a tool—like ModelOp—that automates the full process, drives things forward, and handles what can be done automatically.
It also provides lineage so you can prove you went through the right steps. You can tie everything back to the data, and have that complete end-to-end view—not just of the models, but of the data itself.
Think about potential PII being exposed by a model in a RAG solution. If a document contains PII—as identified by the data catalog—and it ends up in an LLM usage, you’ve got a potential disclosure issue.
How do you answer for that if you don’t have these processes in place?
That’s why you need a complete solution that tracks your data, your models, and your use cases.
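The PII-disclosure scenario above can be pictured as a gate in front of a RAG index: before a document is embedded, consult the catalog's metadata and hold back anything flagged as containing PII. The flag format here is an assumption for illustration; in practice the flags would come from the data catalog's classifiers.

```python
# Hypothetical catalog metadata; in practice produced by a
# data catalog's sensitive-data classifiers.
catalog_flags = {
    "patient_notes.txt": {"pii": True},
    "public_faq.txt": {"pii": False},
}

def admissible_for_rag(doc_ids, flags):
    """Split documents into (allowed, blocked) using PII flags.
    Documents the catalog doesn't know about are blocked by default."""
    allowed, blocked = [], []
    for doc in doc_ids:
        meta = flags.get(doc)
        if meta is not None and not meta["pii"]:
            allowed.append(doc)
        else:
            blocked.append(doc)
    return allowed, blocked

allowed, blocked = admissible_for_rag(
    ["patient_notes.txt", "public_faq.txt", "unregistered.txt"],
    catalog_flags,
)
print(allowed)   # only the cataloged, non-PII document passes
print(blocked)
```

Blocking unregistered documents by default is the conservative choice: an uncataloged document is exactly the kind of blind spot this webinar warns about.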
14. Real-World Governance Challenges
Cam Rojas:
Yeah, and thinking about some real-world scenarios I’ve been in—we’ve worked with large enterprises that, for all intents and purposes, have unlimited budgets and resources.
In an ideal world, you might want to build a completely vendor-agnostic, centralized data governance solution from scratch. But the reality is: that’s really hard to do, and time is not on your side.
So I think about tools like Microsoft Purview—and how those capabilities can be applied across the different governance layers. Purview can help automatically catalog and organize data into a centralized system, pulling from both on-prem and cloud sources.
It gives you a nice visual map of data flow, including transformations across your entire data product pipeline. You can automatically classify sensitive data and enforce security policies—all within the Microsoft ecosystem.
So tools like Purview give you a huge jumpstart and competitive advantage when it comes to getting data governance done right.
Jim Olsen:
Yeah, and of course, the ModelOp solution—that’s our bread and butter. At its core, it’s built to provide governance across everything from generative AI models to traditional ML, even Excel spreadsheets.
They all use data. They all make business decisions. And we need to understand potential bias, PII, and other risks. Those questions can be answered with a good data catalog—but you also need to know how the data is being used, what models are using it, and where those models are deployed.
That’s where our solution provides that complete, closed-loop visibility—in an automated and traceable way.
Jay Combs:
Nice. Yeah, that’s really helpful. Now let’s translate that into a real-world example.
15. Insights from Mercy Health System
Jay Combs:
We’ve got Doug from Mercy here, and he’s going to share his experience tackling data and AI governance in the healthcare space. So, welcome Doug Graham and Kyle. How are you guys?
Kyle Hutchins:
Great. Thanks, Jay. Thanks for the setup and introduction.
Doug, I think about the time we’ve known each other and worked together through some of the challenges and capabilities that Jim and Cam just discussed. For today’s audience, it’s probably helpful to provide a real-world journey—what Mercy’s going through.
So to start, could you share a little background on your journey and Mercy’s journey at a high level? Then we can go into a bit more detail from there.
Doug Graham:
Sure. Thanks, Kyle.
One of the core elements of our journey started a few years ago when I came back to Mercy. I’d done a lot of consulting before that, and I had worked at Mercy previously, then left and returned—so I’m a boomerang.
I was presenting to our executive leadership about what data governance is, how to build a program, and what the critical elements of a successful program look like. One of the key things is: you have to have executive leadership support.
At the time, I was told, “We need to make this governance process generational for Mercy.” And right then, I stopped my presentation and said, “Okay, let’s talk about what that really includes.”
We had that executive support from the beginning—this was the second round of data governance at Mercy. We started building our framework, which includes eight different domains, each with its own subdomains.
We brought all the business stakeholders together and assigned an executive leader as a domain trustee, along with an operational leader as a domain steward. That sparked a lot of conversations about what we’re doing at Mercy, and what we’re doing together around data.
We had monthly meetings with all the domains represented. That gave us the ability to understand, collectively, where our challenges were. For example, we discovered we were reentering data multiple times throughout the organization. So we started tackling issues like that—making sure we had the right people engaged in solving the right data problems.
16. Evaluating AI Integration Risks
Doug Graham:
And then this annoying little mole kept popping its head up—an AI project that had been spun up. I started hearing about some activity going on, and we had to look into it and evaluate where it fit within our organization.
At first, we saw it as very high-risk—especially given the consequences of a bad decision in a healthcare setting. The implications are major.
So we immediately started asking: how does this fit into our culture? How do we integrate AI responsibly? We called in our mission leaders and began asking what we, as a ministry, are going to choose to do—and not do—with AI.
That was really the beginning of the deeper conversation. And it gave us the right positioning inside the organization to make an impact, to effect change, and to influence how AI would be implemented at Mercy.
17. Collaboration Between Teams
Kyle Hutchins:
Yeah, and Doug, what’s always impressed me is how you took that strategy and call to action—and then organized teams around it.
Can you share a bit about how you, as the lead of data governance, work with the business? How do you collaborate with the data science team? How do you organize to actually execute on that strategy?
Doug Graham:
It really comes down to knowing what’s happening—understanding what the key, business-centric use cases are, and where the real pain points are that AI can help solve.
One example I can share is the ER to inpatient (IP) handoff. When a patient moves from the emergency room to an inpatient room, there’s a ton of information that needs to be transferred. The nurse taking over for that patient overnight needs to know everything that happened up to that point.
But the way that handoff happened varied depending on the location, the people involved, and how they handled documentation. It took a lot of time to gather that information.
I heard about a hackathon that was focused on solving this problem. It came up during one of our data governance council meetings. I asked who was participating and learned it involved a group of our nursing leaders—they were going to dive into the issue and figure out how to improve the handoff process.
So they went ahead with the hackathon. Our data science team was there. Carrie, one of our VPs of Data Science, participated.
They worked through a solution in the course of a day, exploring how generative AI could be part of it. They also incorporated the SBAR framework: Situation, Background, Assessment, Recommendation—that’s a common format for nursing handoffs.
18. Addressing Patient Handoff Challenges
Doug Graham:
So they defined the SBAR structure and figured out how to bring that together. But it couldn’t just rely on generative AI—it had to go beyond free-text notes. We also needed vitals, trends in those vitals, and test results. All of that had to be integrated into a single view.
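The structured handoff Doug describes, an SBAR narrative combined with vitals and test results in a single view, might look like this as a data container. Field names and sample values are purely illustrative, not Mercy's actual implementation.

```python
from dataclasses import dataclass

# Illustrative container for a handoff: LLM-generated SBAR text
# plus structured clinical data, instead of free text alone.
@dataclass
class SBARHandoff:
    situation: str
    background: str
    assessment: str
    recommendation: str
    vitals: dict        # structured measurements, e.g. {"hr": 88}
    test_results: list  # structured results, not free-text notes

    def render(self):
        """Render the narrative portion in the standard S/B/A/R order."""
        return "\n".join([
            f"S: {self.situation}",
            f"B: {self.background}",
            f"A: {self.assessment}",
            f"R: {self.recommendation}",
        ])

h = SBARHandoff(
    situation="Admitted from ER after fall",
    background="History of hypertension",
    assessment="Stable, mild pain",
    recommendation="Monitor overnight",
    vitals={"hr": 88, "bp": "128/82"},
    test_results=["CBC pending"],
)
print(h.render())
```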
The nursing leadership really took ownership of this hackathon. It sparked a lot of conversation across the ministry about how this was coming together. A few months later, after significant work with the data science teams, Chelsea—one of our core data scientists—and Tracy, one of our lead nurses, were working directly in the ER and inpatient units.
They spent time with nurses, made real-time adjustments to how the model was prompted, and improved the process through hands-on engagement. As they worked together, they were also defining the components of the model itself—what needed to be included, how it needed to behave.
And all of that ties back into governance, because that’s where it started. It was a governed process, led by the business, and supported through the right capabilities.
19. Future Directions for Data Governance
Kyle Hutchins:
Yeah, that’s great—and I’m sure our audience really appreciates the hackathon story and the detail you shared.
As a final thought: looking ahead, what’s next for Mercy? I know you’ve outlined four different workstreams that map back to what Jim and Cam talked about—catalogs, lineage, data quality, and compliance. Anything you’d want to leave the audience with in terms of where you’re going next?
Doug Graham:
Yeah. So, core things we’re focused on now include migrating within the Purview application. We’ve built all of our governance domains, business terms, business assets, and supporting documentation. Now, we’re moving to a new version of Purview, and that gives us a chance to rethink everything.
We’re asking: does what we built still make sense in the new paradigm? Because things are changing, and we have to adapt with that.
One thing we didn’t originally have in scope was the concept of a model card for our AI models. Now, that’s becoming a priority. At Mercy, we’re helping lead the Coalition for Health AI, which is working to standardize how AI models are described—so they can be reused or repurposed across public and private sector applications.
20. Standardizing AI Model Descriptions
Doug Graham:
If we can develop a common description of AI models—what they are, what they require, the data they need, how they function, and how they’re monitored—that gives us the ability to assess whether a model fits a specific use case.
Then you get into the next step of governance: ensuring no model is deployed unless the right data is available—in the right format and with a level of quality that gives you confidence the model will deliver the intended results.
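That fit-for-use-case check can be sketched as plain data plus one function: a model card listing the data the model requires, and a check that refuses deployment when anything is missing. The fields are assumptions for illustration, not a Coalition for Health AI schema.

```python
# A model card as plain data; fields loosely follow what Doug
# describes (what the model is, the data it needs, how it's
# monitored). Names are illustrative.
model_card = {
    "name": "er-handoff-summarizer",
    "description": "Summarizes ER notes into SBAR format",
    "required_data": ["nursing_notes", "vitals", "lab_results"],
    "monitoring": ["output drift", "clinician feedback"],
}

def fits_use_case(card, available_data):
    """A model fits only when every dataset it requires is available;
    return the missing datasets so the gap is actionable."""
    missing = [d for d in card["required_data"] if d not in available_data]
    return (len(missing) == 0, missing)

ok, missing = fits_use_case(model_card, {"nursing_notes", "vitals"})
print(ok, missing)  # lab_results is missing, so the model doesn't fit
```

Returning the missing datasets, not just a yes/no, is what turns this from a blocker into a roadmap: the catalog tells you exactly which data quality or availability gap to close before deployment.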
Not all data will be high-quality from the start. It’s a process. That’s why your catalog is so important—it helps describe the data and assess its quality.
Once models are deployed, we need to be able to scan and capture metadata about them. That makes it easier for model builders to document what they’ve done. And while documentation might not be every data scientist’s favorite part of the job, the really good ones know how important it is.
Finally, we need registries—for all deployed AI across the organization, and for potential AI capabilities that could be adopted in the future.
I mentioned earlier the idea of the “mole popping up.” That’s what it feels like right now—every application has some new AI feature.
We’ve got more than 1,500 applications, and many of them weren’t purchased with AI in mind. So when they start introducing new AI features, you need to be aware of it. That’s why a central registry is going to be absolutely critical going forward.
21. Key Takeaways and Conclusion
Jay Combs:
Yeah, thanks for sharing. Thanks, Doug. I know there's a lot to cover, and I appreciate you sharing those insights on both the data and AI governance sides.
We’re at the half-hour mark, so we’re about to wrap up. Just a few key takeaways:
Data governance and AI governance, individually, are both critically important. But when you bring them together, the whole becomes greater than the sum of its parts. That’s when you can really drive business outcomes—building trust, enabling responsible AI, and operating efficiently at scale.
So governance isn’t just about safeguards—it’s about being a business enabler.
Coming full circle—no pun intended—data and AI model growth isn’t slowing down. You’ve got a choice: are you going to manage it the hard way or the easy way?
From everything we’ve heard today, the easy way starts with getting ahead on your data governance solution. Doug mentioned using Microsoft Purview for that. And then there’s the AI governance software that drives the capabilities Jim described. That’s exactly what ModelOp does—and we’d love to show you more.
So please reach out to us. Request a demo. There’s a lot to dive into, and it’s absolutely critical to enabling your business in the age of generative AI.
And of course, you need an experienced partner to help with both data and AI solutions—that’s exactly what Macula Systems is. You can contact Kyle and Cam at results@maculasys.com.
We’d love to hear from you, learn about your challenges, and help however we can. As always, we’ll be sending out the recording and slide deck soon—look for that within the next day. We’ll also be announcing next month’s webinar very soon.
Thanks for joining the Good Decisions webinar. We hope to see you again soon. Thank you so much.