Good Decisions: Episode 6

AI in Healthcare: Patient Quality of Care, Operational Efficiency, and Regulatory Compliance

Join Forrest Pascall for an in-depth discussion on how healthcare organizations are managing the risks of AI amidst a growing number of use cases and evolving regulations.

Register for the series.

In this webinar, Forrest Pascall leads a discussion on the risks and challenges of AI use cases in healthcare. As a former AI leader at Kaiser Permanente and now VP of Healthcare and Responsible AI at ModelOp, Forrest shares his expertise on navigating the complexities of AI Governance and health equity.

What You'll Learn

  • The impact of recent rules and regulations — such as the CMS 4201-F and ONC HTI-1 final rules
  • Strategies to drive patient quality and operational performance as AI use cases multiply
  • Best practices for ensuring compliance in AI healthcare applications
  • Insights into the unique challenges and opportunities in the healthcare sector

As AI use cases explode, and the rules and regulations governing them multiply alongside, you won't want to miss this opportunity to gain critical insights on innovating and protecting your organization with AI Governance.

Download the slide deck.

Transcript

1. Introduction

Jay Combs: Hello, and welcome to the Good Decisions webinar, episode six! Today, we're diving into AI in healthcare—focusing on patient quality of care, operational efficiency, and regulatory compliance. I'm Jay Combs, VP of Marketing here at ModelOp and your regular webinar host. We have a special guest joining us this month—Forrest Pascall, our new VP of Healthcare and Responsible AI.

Jay Combs: Forrest, it's great to have you here. Could you give us a quick introduction before we jump into today's discussion?

Forrest Pascall: Thanks, Jay. I'm excited to be here. As Jay mentioned, I'm the new VP of Healthcare and Responsible AI at ModelOp. Before this, I led AI and ML health equity model governance at Kaiser Permanente and have also been a lecturer at Rutgers University in their Master of Health Administration program. My background is in biomedical informatics, and I'm passionate about making AI work responsibly within healthcare.

Jay Combs: Fantastic, Forrest. Thanks for being here and sharing your expertise. Let’s dive right in.

2. Preparing for AI Rules and Regulations

Jay Combs: Forrest, as a leader in AI governance, what are your thoughts on how healthcare organizations can prepare for the growing cascade of AI rules and regulations? It seems like they’re evolving constantly.

Forrest Pascall: Absolutely, Jay. These regulations are coming at us from all directions—global, federal, state, and even municipal levels. It’s definitely a challenge for healthcare organizations to keep up. But there are some foundational steps that can help.

Jay Combs: Such as?

Forrest Pascall: Well, first and foremost, organizations need alignment—everyone must understand the critical importance of AI governance. That means defining responsible AI principles, developing an AI code of ethics, and establishing a risk assessment process for models. Beyond that, it’s about operationalizing these principles—taking them from paper to practical, day-to-day actions.

Jay Combs: Makes sense. It's interesting because AI governance can often sound like red tape, or even policing.

Forrest Pascall: Exactly, and that’s a common perception. But it doesn’t have to be like that. If governance is lightweight and practical, it can help avoid risks without feeling like an unnecessary burden.

3. Impact of Lack of AI Governance

Jay Combs: What happens when healthcare organizations don’t have AI governance in place? What are the risks?

Forrest Pascall: Well, there are several significant risks—reputational, legal, and ultimately patient harm. Imagine a scenario where an algorithm makes a flawed decision, like denying care to elderly patients because of bias in the model. Without AI governance, there might not be proper workflows or attestation cycles in place to catch and correct these errors.

Jay Combs: So it’s not just about compliance; it’s really about patient safety and outcomes.

Forrest Pascall: Exactly. AI governance helps ensure models are accountable and validated continuously, reducing risks to patients. When you consider the consequences—whether it's non-compliance, legal repercussions, or, most importantly, harm to the communities we serve—it’s clear how vital these practices are.

4. Deep Dive into HTI-1 Final Rule

Jay Combs: Let’s shift gears and talk about the ONC's HTI-1 final rule. Forrest, is this something that you were working with during your time at Kaiser?

Forrest Pascall: Actually, it’s something that has come onto my radar since joining ModelOp. It wasn’t widely discussed during my time at Kaiser, but it’s an incredibly important rule that impacts anyone using electronic health record systems.

Jay Combs: Can you break it down for us in simple terms? What’s the main focus of this rule?

Forrest Pascall: Sure thing, Jay. At its core, the HTI-1 final rule is about algorithm transparency. It requires healthcare organizations to document and disclose key details about clinical decision support interventions—essentially, the algorithms driving healthcare decisions. The rule emphasizes accountability, ensuring that all aspects of an algorithm’s development and use are properly documented.

Jay Combs: So it’s really about making sure there’s transparency and accountability throughout the lifecycle of these models?

Forrest Pascall: Exactly. It’s about documenting everything from the initial development phase to ongoing monitoring. This includes describing the purpose of the model, who the intended users are, and what the intended outcomes are. It’s all about being able to show that the model is being used responsibly and effectively.

Jay Combs: Sounds like a lot of documentation, but necessary for building trust.

Forrest Pascall: Absolutely. It might feel overwhelming at first, but it’s essential for ensuring that AI systems are fair, effective, and, most importantly, safe for patients.
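To make the documentation themes Forrest describes concrete, here is a minimal sketch of how an organization might structure a transparency record for a clinical decision support intervention. The class and field names are illustrative assumptions for this discussion, not terms prescribed by the HTI-1 final rule.

```python
from dataclasses import dataclass


@dataclass
class DecisionSupportRecord:
    """Illustrative transparency record for a clinical decision support
    intervention. Field names are hypothetical, not mandated by HTI-1."""
    name: str
    purpose: str            # what decision the algorithm informs
    intended_users: list    # e.g., clinicians, care coordinators
    intended_outcomes: list  # e.g., earlier follow-up outreach
    development_notes: str = ""  # data sources, validation approach
    monitoring_plan: str = ""    # how ongoing performance is checked

    def missing_fields(self):
        """Return documentation fields still left blank."""
        return [f for f in ("development_notes", "monitoring_plan")
                if not getattr(self, f)]


record = DecisionSupportRecord(
    name="readmission-risk-v2",
    purpose="Flag patients at elevated 30-day readmission risk",
    intended_users=["care coordinators"],
    intended_outcomes=["earlier follow-up outreach"],
)
print(record.missing_fields())  # both lifecycle fields are still blank
```

A simple completeness check like `missing_fields` is one way to surface undocumented models before an audit does.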

5. Risks and Concerns at Kaiser

Jay Combs: Forrest, during your time at Kaiser, what were some of the biggest risks you encountered related to AI, and how did AI governance help address those concerns?

Forrest Pascall: One of the biggest risks we faced was reputational risk. If something went wrong with an AI model—say, if a flawed algorithm denied care to a vulnerable patient group—it could end up in the news, tarnishing our reputation. Our members are watching, and they expect us to get it right.

Jay Combs: Absolutely. It sounds like governance was critical in mitigating those risks.

Forrest Pascall: It really was. AI governance ensured that we had proper workflows in place, including validation processes and attestation cycles, to minimize these risks. We also worked on transparency, making sure we could trace how a model made decisions and validate that it was in line with regulatory expectations and patient safety standards.

Jay Combs: That makes a lot of sense, especially considering the importance of public trust in healthcare.

Forrest Pascall: Exactly, Jay. Trust is everything in healthcare. When patients and members know that we are taking all possible steps to ensure fairness and safety in our AI systems, it helps build that trust.

6. The Importance of a Patient-Centric Approach

Jay Combs: Let's talk about the “why” behind all of this. Why is AI governance so important in healthcare?

Forrest Pascall: Ultimately, it’s about the patients and the communities we serve. AI governance is about ensuring that our AI systems are safe, effective, and fair, which directly impacts patient outcomes. If we put patients at the center of everything we do, we’re going to care deeply about quality, fairness, and ethical considerations.

Jay Combs: Right. It sounds like a patient-centric approach also helps drive better operational efficiency.

Forrest Pascall: Absolutely. When you’re focused on patient outcomes, you start to see the value of operational efficiency in a new light. For example, managing AI models through proper governance means we’re not dealing with outdated spreadsheets and manual updates. Instead, we have a “living” model record that tracks changes, monitors performance, and ensures we’re always up to date.

Jay Combs: That’s an interesting point—having a dynamic, continuously updated model record really does make a difference in quality of care, doesn’t it?

Forrest Pascall: It does. It allows us to be proactive instead of reactive, ensuring our models are operating effectively and supporting positive patient outcomes. And, ultimately, when governance is done right, it helps ensure that every decision made by AI is aligned with our commitment to patients.
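One way to picture the "living" model record Forrest contrasts with static spreadsheets: an append-only log of lifecycle events attached to each model. This is a hypothetical sketch of the idea, not ModelOp's implementation.

```python
from datetime import datetime, timezone


class LivingModelRecord:
    """Hypothetical append-only record of a model's lifecycle events."""

    def __init__(self, model_name):
        self.model_name = model_name
        self.events = []  # each entry: (timestamp, kind, detail)

    def log(self, kind, detail):
        """Append an event; history is never overwritten."""
        self.events.append((datetime.now(timezone.utc), kind, detail))

    def latest(self, kind):
        """Most recent event of a given kind, or None if none exists."""
        matches = [e for e in self.events if e[1] == kind]
        return matches[-1] if matches else None


record = LivingModelRecord("sepsis-alert-v3")
record.log("validation", "passed bias audit")
record.log("monitoring", "AUC 0.81 on August traffic")
record.log("monitoring", "AUC 0.79 on September traffic")
print(record.latest("monitoring")[2])  # → AUC 0.79 on September traffic
```

Because events accumulate rather than replace one another, the record doubles as an audit trail: you can answer "what did we know, and when" without reconstructing old spreadsheet versions.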

7. Operational Efficiency and Regulatory Compliance

Jay Combs: Forrest, let’s talk about operational efficiency and regulatory compliance. How does AI governance contribute to these areas?

Forrest Pascall: Great question, Jay. AI governance helps streamline operations by providing a structured approach to managing AI models. Instead of manually tracking everything through spreadsheets, AI governance gives us a standardized process that ensures consistency and reliability. This not only saves time but also reduces the risk of human error.

Jay Combs: And what about regulatory compliance? It seems like governance would play a big role there.

Forrest Pascall: Absolutely. Regulations like HIPAA, HTI-1, and FDA requirements all have stringent criteria for how data is used and protected. AI governance helps us document and demonstrate compliance with these regulations. It ensures that all models are developed, validated, and monitored in accordance with these rules, reducing the likelihood of violations and penalties.

Jay Combs: So governance is really a bridge between being efficient and staying compliant?

Forrest Pascall: Exactly. It’s about building efficiency while ensuring that every model adheres to regulatory requirements, ultimately supporting both operational success and patient safety.

8. Recommendations for Addressing AI Regulations

Jay Combs: Forrest, what are your recommendations for healthcare organizations trying to address AI regulations without feeling overwhelmed?

Forrest Pascall: First, start small—don’t try to boil the ocean. Focus on minimum viable governance to get the basics in place. This means aligning people, processes, and tools to create a lightweight framework that’s easy to manage.

Jay Combs: Minimum viable governance—could you expand on that?

Forrest Pascall: Sure. It’s essentially about putting together the simplest version of governance that works for your organization. For instance, create clear roles and responsibilities, define a simple risk assessment process, and establish a single source of truth for your models. It’s about getting started in a way that’s practical and scalable.

Jay Combs: Sounds like a solid starting point that can grow over time.

Forrest Pascall: Exactly. The key is to get started, even if it’s not perfect. Governance is an iterative process, and it will evolve as your organization matures.
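As a thought experiment, the "simple risk assessment process" Forrest recommends could start as nothing more than a scoring rubric. The risk factors and tier thresholds below are illustrative assumptions for a minimum viable starting point, not a regulatory standard.

```python
def assess_risk_tier(uses_phi, patient_facing, automated_decision):
    """Toy risk-tier rubric: count risk factors and map to a tier.
    Factors and thresholds are illustrative only."""
    score = sum([uses_phi, patient_facing, automated_decision])
    if score >= 3:
        return "high"    # e.g., autonomous, patient-facing, uses PHI
    if score == 2:
        return "medium"
    return "low"


# A chatbot drafting internal documentation, no PHI, human-reviewed:
print(assess_risk_tier(uses_phi=False, patient_facing=False,
                       automated_decision=False))  # → low
# A model that automatically adjudicates coverage for members:
print(assess_risk_tier(uses_phi=True, patient_facing=True,
                       automated_decision=True))   # → high
```

Even a rubric this small gives every model a tier on day one, and the tiers can later map to heavier review workflows as the governance program matures.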

9. Focus on Patient and Member Experience

Jay Combs: Forrest, how does AI governance relate to the overall patient and member experience?

Forrest Pascall: At the heart of it, AI governance ensures that we’re always focused on what’s best for the patient. By making sure our models are fair, transparent, and effective, we’re directly improving the quality of care that patients receive. Additionally, governance helps maintain trust—patients can feel confident that their data is being used ethically and responsibly.

Jay Combs: So it’s really about building trust through responsible AI use?

Forrest Pascall: Exactly. When patients know that healthcare providers are using AI in a responsible manner, it reinforces their trust in the system. This trust is crucial for improving outcomes and fostering stronger relationships between patients and providers.

10. Adopt Early and Embrace Organizational Transformation

Jay Combs: You’ve talked a lot about getting started with governance. Why is it so important for organizations to adopt early?

Forrest Pascall: Early adoption of AI governance gives organizations a competitive advantage. It’s easier to scale and adapt governance practices when they’re implemented proactively rather than reactively. Plus, early adopters are better positioned to navigate new regulations as they emerge, reducing compliance risks.

Jay Combs: It sounds like it’s also about embracing change across the organization.

Forrest Pascall: Absolutely. AI governance is an organizational transformation effort—it impacts people, processes, and culture. Embracing this transformation helps create a foundation that supports responsible AI use and drives better patient outcomes.

11. Invitation to Webinar and Industry Experts

Jay Combs: Forrest, before we wrap up, I know we’ve got another exciting webinar coming up soon. Can you tell us a bit about it?

Forrest Pascall: Sure thing, Jay. On July 24th, we’ll be featuring the Financial Industry Regulatory Authority, or FINRA, and discussing model operations and governance in the financial sector. We have two esteemed guests, Sumalatha Bachu and Harvey Westbrook, who will share insights on jump-starting model governance and analytics at FINRA.

Jay Combs: Sounds like it’ll be a great discussion. For anyone interested, make sure to register at the link provided. We’d love to have you join us.

12. Conclusion and Thanks

Jay Combs: Forrest, this has been an incredibly insightful discussion. Thank you so much for joining us today and sharing your expertise on AI governance in healthcare.

Forrest Pascall: Thank you, Jay. It’s been a pleasure. I’m really passionate about this topic, and I hope our conversation today has provided some valuable takeaways for everyone listening.

Jay Combs: Absolutely. For everyone watching, if you have any questions or want to learn more, please reach out to us at modelop.com. We’d love to hear from you. Thanks again, Forrest, and thank you to our audience for joining us. See you next time!


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us