The rapid evolution of AI in healthcare brings transformative tools and personalized treatment options, but it also carries significant risks. Both federal and state officials are investigating and developing regulations to ensure that the decision-making models healthcare providers, pharmaceutical companies, and biotech firms employ do not violate patient rights or inadvertently contribute to adverse outcomes.
In this episode, ModelOp’s Dave Trier will take you through the big regulatory themes shaping AI implementation in healthcare, pharmaceuticals, and biotech. You’ll gain a strategic understanding of the regulatory risks and best practices for implementing governance frameworks. With several key dates and deadlines approaching, this is your chance to align your business with upcoming regulatory changes and ensure your AI Governance capabilities are robust and adaptable.
Download the slide deck.
For a deeper dive on the topic, check out the latest whitepaper: 2024 Outlook for AI Regulations
Transcript
1. Introduction
Good morning, good evening, and thanks to everyone for joining today's discussion on AI regulations in healthcare, pharma, and biotech.
My name is Dave Trier, VP of Product. In today's webinar, we’re going to cover three main areas:
- First, we’ll look at a few examples where regulatory authorities have already stepped in regarding the use of AI.
- Second, we’ll distill key themes from these AI regulations so you can think about how to design and organize your overall programs, and how to safeguard your organization in its use of AI.
- Third, we’ll discuss some practical steps to help your healthcare organization prepare for what's ahead.
Alright, let’s start with the first example.
2. Examples of AI Regulations
Our first example comes from the Department of Justice, which is investigating the use of AI to connect patient records with recommended treatments. This investigation is rooted in a 2020 case involving Purdue Pharma, where the company was fined $8.3 billion for encouraging the use of its painkillers through pop-up notifications to physicians. While this use of pop-ups could be controlled through software, today’s AI systems complicate the issue by introducing more dynamic, non-deterministic elements.
The Department of Justice is concerned about ensuring that AI used to match patients with drugs and devices adheres to anti-kickback laws and regulations. While there isn't a specific law addressing this, the DOJ has already engaged with several pharmaceutical companies to enforce proper AI usage.
The second example comes from the FTC, which banned Rite Aid from using AI facial recognition technology for five years. The ban followed improper use of the technology, which led to biased treatment of customers. While the FTC hasn't established concrete AI laws, it stepped in to prohibit Rite Aid from using the technology because the company had done insufficient testing and verification to ensure there was no bias.
The third example comes from CMS, which recently issued a memo stating that AI cannot be used to deny insurance claims. While AI can help ensure the correct coverage is applied, it must not be used to determine coverage or deny care. This comes after lawsuits were filed against UnitedHealthcare and Humana, alleging that flawed AI systems were responsible for denying care to elderly patients. CMS has now set guidelines to control the use of AI in the healthcare payer industry.
3. Key Themes in AI Regulations
Next, let's distill some of the key themes from these AI regulations. The first major theme groups three related obligations: visibility, accountability, and transparency.
The first of these is visibility. Regulatory guidelines emphasize that organizations must know where AI is being used across all systems, tools, and software. Texas House Bill 2060, for instance, requires agencies to maintain an inventory report of all AI or automated decision systems in use. Whether the AI system is internally developed or vendor-provided, it's crucial to track each one, including its purpose and usage. The key takeaway is the need for a systematic inventory of all AI systems, ensuring that each system's role, such as identifying fraudulent claims, is clearly documented.
The second is accountability. In addition to knowing where AI is used, organizations need to establish who is accountable for each AI system. For example, California's Attorney General requires organizations to identify the individuals responsible for evaluating and overseeing AI tools. There's also a common misconception that foundation model providers, like OpenAI, are fully responsible for governance. However, according to the World Health Organization's ethics and governance guidelines for AI in healthcare, it's the healthcare organization deploying the AI, not the model provider, that bears responsibility for mitigating risks.
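To make these first two obligations concrete, here is a minimal sketch of what a systematic AI inventory with named owners could look like. The schema, field names, and example entry are illustrative assumptions, not a format prescribed by Texas House Bill 2060 or any regulator.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative schema)."""
    system_id: str          # unique identifier for the AI or automated decision system
    name: str
    purpose: str            # plain-language description of what the system does
    origin: str             # "internal" or "vendor", per the visibility theme
    accountable_owner: str  # named individual responsible for oversight
    reviewers: list[str] = field(default_factory=list)  # who evaluates the system
    vendor: str | None = None       # vendor name if externally sourced
    deployed_on: date | None = None

# Hypothetical entry: a claims model tracked with its purpose and owner.
inventory = [
    AISystemRecord(
        system_id="ai-0042",
        name="Claims Fraud Screener",
        purpose="Flag potentially fraudulent insurance claims for human review",
        origin="vendor",
        vendor="ExampleVendor Inc.",
        accountable_owner="Jane Doe, VP Claims Operations",
        reviewers=["Model Risk Team"],
        deployed_on=date(2023, 11, 1),
    )
]

# Simple completeness check: every system must name an accountable owner.
missing = [r.system_id for r in inventory if not r.accountable_owner]
assert not missing, f"Systems without an accountable owner: {missing}"
```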
The third is transparency. Transparency means ensuring that individuals are aware when AI is used to make decisions, as outlined in the GDPR. For instance, the US District Court for the Eastern District of Pennsylvania mandates that any attorney using AI in legal documents must disclose its use. Similarly, organizations need to ensure that customers are informed when AI systems are involved in decision-making processes. The key takeaway here is establishing clear traceability for AI systems that interact with consumers, and providing the appropriate notices as AI use becomes more widespread.
4. Safety and Security in AI Regulations
The second major theme in AI regulations focuses on safety and security.
When we talk about safety, we're drawing largely from the NIST AI Risk Management Framework. The fundamental goal of AI regulations is to ensure that AI systems don’t cause harm—whether to human life, healthcare, the environment, or property. To achieve this, it's essential to understand the full scope and usage of the AI system. While AI can serve many functions, organizations need to apply it thoughtfully, particularly when it interacts with patients or consumers.
From a governance perspective, this means capturing details about the information the AI system uses, its output, and, most importantly, the potential implications if something goes wrong. The NIST framework emphasizes that AI systems should not pose harm to people, organizations, or ecosystems. The left-hand side of the NIST framework graphic highlights the importance of ensuring the AI system is safe for its intended use.
The FDA also provides guidance on safety, particularly for AI/ML-based software used in medical devices. Their recommendations include applying risk tiering to ensure the right safety protocols are in place. This involves assessing the healthcare situation: Is it critical, serious, or non-serious? And what is the AI system being used for—treating or diagnosing, managing clinical decisions, or informing them? Depending on these factors, organizations can assign a risk tier and adjust the level of scrutiny accordingly.
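As a rough illustration of that kind of tiering, the sketch below maps the two factors just described, the criticality of the healthcare situation and what the AI is used for, to a risk tier. The tier labels and mapping are assumptions loosely patterned on IMDRF-style SaMD categorization, not an official FDA rule.

```python
# Illustrative risk tiering: healthcare situation x AI function -> tier.
# Higher Roman numerals call for more scrutiny. The mapping is an assumption
# loosely patterned on IMDRF-style SaMD categorization, not an FDA rule.
RISK_TIER = {
    ("critical",    "treat_or_diagnose"):          "IV",
    ("critical",    "drive_clinical_management"):  "III",
    ("critical",    "inform_clinical_management"): "II",
    ("serious",     "treat_or_diagnose"):          "III",
    ("serious",     "drive_clinical_management"):  "II",
    ("serious",     "inform_clinical_management"): "I",
    ("non-serious", "treat_or_diagnose"):          "II",
    ("non-serious", "drive_clinical_management"):  "I",
    ("non-serious", "inform_clinical_management"): "I",
}

def risk_tier(situation: str, ai_function: str) -> str:
    """Return the risk tier for a situation/function pair, e.g. to decide
    how much validation and monitoring scrutiny a system needs."""
    try:
        return RISK_TIER[(situation, ai_function)]
    except KeyError:
        raise ValueError(f"Unknown combination: {situation!r}, {ai_function!r}")

# Example: AI that diagnoses a critical condition lands in the top tier.
assert risk_tier("critical", "treat_or_diagnose") == "IV"
```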
Another example comes from the FDA's recommendations around AI/ML device software functions. The FDA emphasizes the need to develop, validate, and implement AI systems with all the appropriate safeguards to ensure their safety and effectiveness. They provide guidance on monitoring these systems continuously and ensuring that, before any updates are deployed, the proper protocols are followed.
Now, let’s talk about security. Security in AI systems can mean different things, but I’m going to focus on a few key points from the NIST AI Risk Management Framework. First, AI systems need to be privacy-enhanced, meaning they should incorporate practices to safeguard human identity, PII (Personally Identifiable Information), and PHI (Protected Health Information), ensuring anonymity and confidentiality where necessary.
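On that first point, here is one small, concrete sketch of a privacy-enhancing practice: redacting a few common PII patterns from free text before it reaches a model. The regexes and placeholder tags are simplistic assumptions for illustration; production de-identification of PHI calls for dedicated, validated tooling.

```python
import re

# Simplistic, illustrative patterns; real PHI de-identification requires
# far more than a few regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email address
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US phone
]

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tags before model input."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Patient John (SSN 123-45-6789, jdoe@example.com, 555-867-5309) reports pain."
print(redact(note))
# -> Patient John (SSN [SSN], [EMAIL], [PHONE]) reports pain.
```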
Second, AI systems must be resilient. These systems, which may inform critical decisions, should be able to withstand adverse events—whether that’s malicious activity like data poisoning or unforeseen issues during real-world deployment.
Finally, security means that the AI system must be protected against cyberattacks and have the ability to recover from them. The key takeaway here is to work closely with your security team from the start to ensure principles like resiliency testing, data security, application security, and network security are implemented from the ground up.
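On the resilience point, testing can start small. The sketch below probes whether a model's predictions stay stable under slight input perturbations, one inexpensive proxy for robustness against noisy real-world inputs (it is not a substitute for dedicated adversarial or data-poisoning testing). The model interface, noise scale, and tolerance are assumptions for illustration.

```python
import numpy as np

def stability_under_noise(predict, X, noise_scale=0.01, trials=20, seed=0):
    """Average fraction of predictions unchanged under small Gaussian input
    perturbations; a crude resilience probe, not a full adversarial test."""
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    stable = 0.0
    for _ in range(trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        stable += np.mean(predict(noisy) == baseline)
    return stable / trials

# Hypothetical usage with any scikit-learn-style classifier:
#   rate = stability_under_noise(model.predict, X_validation)
#   assert rate > 0.95, "Predictions unstable under small perturbations"
```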
5. Validity and Fairness in AI Regulations
The third major theme centers on validity and fairness in AI systems.
When we talk about validity, we mean ensuring that the AI system is performing according to its original requirements with accuracy and reliability. Achieving this requires robust testing and an ongoing monitoring framework to cover all scenarios, including unexpected or adverse situations. This helps ensure that the AI system continues to operate within the expected parameters.
The FDA’s framework for AI-based software as a medical device (SaMD) calls for thorough upfront testing in three areas: clinical association, analytical validation, and clinical validation. Regulatory bodies like the FDA are increasingly focused on ensuring that AI systems undergo rigorous testing before they’re deployed.
Similarly, the FDA’s Digital Medicine publication highlights the importance of implementing a robust quality management system tailored specifically for healthcare AI technologies. The key takeaway here is that organizations must have a robust testing approach—one that includes establishing baseline expectations during upfront testing and then comparing ongoing performance to those baselines. This applies to both internally developed AI systems and purchased software with embedded AI. As noted by the World Health Organization, the responsibility for AI use rests with the healthcare provider or payer, even when using third-party AI software.
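Here is a minimal sketch of that baseline-and-compare approach: record metrics during upfront testing, then alert when live performance drifts beyond a tolerance. The metric names and thresholds are illustrative assumptions, not a prescribed FDA or WHO protocol.

```python
# Baselines captured during upfront testing (illustrative values).
BASELINE = {"accuracy": 0.92, "auc": 0.88}
TOLERANCE = 0.05  # maximum allowed absolute drop before raising an alert

def check_against_baseline(live_metrics: dict) -> list:
    """Return alerts for any metric that has degraded past tolerance."""
    alerts = []
    for name, baseline_value in BASELINE.items():
        drop = baseline_value - live_metrics.get(name, 0.0)
        if drop > TOLERANCE:
            alerts.append(f"{name} degraded by {drop:.3f} vs. baseline")
    return alerts

# Example: live AUC has slipped well below the validated baseline.
print(check_against_baseline({"accuracy": 0.91, "auc": 0.79}))
# -> ['auc degraded by 0.090 vs. baseline']
```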
Now let’s address fairness. Ensuring fairness in AI systems has been a hot topic in many discussions. Fairness means that AI systems should not produce biased or partial results. This requires having policies and enforcement controls in place to test for bias in datasets used to train models. It also means implementing tools to ensure that AI systems, once deployed, do not generate biased outcomes.
A great example comes from Singapore’s Veritas initiative, which outlines four principles for the fair and proper use of AI and data analytics. The key takeaway is that organizations must test AI systems for bias before deployment and continuously monitor them to ensure fairness throughout their lifecycle. This is essential to avoid partiality in both internally developed models and third-party software.
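To make bias testing tangible, here is a sketch of one common fairness check, the disparate impact ratio, which compares favorable-outcome rates across groups. Both the metric choice and the 0.8 threshold (the familiar four-fifths rule of thumb) are assumptions for illustration; Veritas and other frameworks define their own assessments.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates between the lowest- and highest-rate
    groups; 1.0 is perfectly even, below ~0.8 is a common red flag."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical model decisions (1 = favorable) for two patient groups:
y_pred = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
# -> 0.40 / 0.80 = 0.50, below the 0.8 rule of thumb: flag for review
```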
6. Importance of AI Governance Framework
Why is it important to implement an AI governance framework now?
I’ve spoken with a lot of customers who say, “Dave, I get it. There are all these different publications and guidelines, but can’t we wait a couple of years?” The answer is no, you really can’t. As I mentioned at the outset of this webinar, regulatory authorities are already stepping in and imposing bans or fines.
These AI regulations have severe implications for healthcare organizations and pharmaceutical companies. For example, the FTC banned Rite Aid from using AI facial recognition technology for five years. In the EU, Clearview AI was fined 20 million euros on three separate occasions, for a total of 60 million euros. These are not just recommendations; they are guidelines and regulations that are being actively enforced.
7. Risks of AI Governance
While AI holds enormous potential, it also comes with inherent risks—risks to the organization internally, risks to consumers and patients, and risks to the broader ecosystem. So, how can you ensure that these risks are managed appropriately? That’s where having an AI governance framework is crucial.
Another important factor is the rapid adoption of generative AI. I’ve never seen a technology spread so quickly across large organizations. Whether it's clinical care, back-office operations, marketing, or finance, generative AI is being used everywhere. Without a governance framework in place, you’re exposing your organization to what I call "AI governance debt." This is the burden of playing catch-up with 10, 20, or even 50 different AI use cases being implemented across your organization without oversight.
8. Cost of Delaying AI Governance
This AI governance debt can be very costly—not only in terms of risk, but also in terms of operational expenses. It’s far more efficient to put the right governance framework in place now than to face the high costs of trying to address these issues later.
So, it’s best to get started with AI governance today.
9. Next Steps and Opportunities
What are the next steps for your organization?
The good news is that implementing an AI governance framework doesn't have to be a multi-year project. You can get started in as little as 90 days, and ModelOp can help you do that. Reach out to us at www.modelop.com or email us at sales@modelop.com to learn more. We'd love to discuss your AI initiatives, your challenges, and how to mitigate risks while preparing for regulatory oversight.
If you're in the healthcare space, we’ll be attending the HIMSS conference in Orlando next week. You can find us at booth #1892. Stop by and meet us in person! We’d love to talk about your AI initiatives and governance needs. You can also schedule a dedicated appointment with us by clicking on the link provided.
10. Upcoming Webinar and Events
Lastly, I want to invite you to our next episode of the Good Decisions Webinar series. It will be held on Wednesday, March 27th, at 1 PM Eastern Time. The topic for this session will be Minimum Viable Governance, focusing on the key capabilities and minimum requirements for running an effective AI governance program.
This is a must-attend webinar if you want to get started quickly with AI governance. We’ll discuss the critical functions you need to protect your business from AI-related risks and ensure compliance with the regulations and themes we covered in today’s session.
11. Invitation to Webinar
If you’ve already registered for the Good Decisions Webinar series, good news—you’re automatically enrolled in the next session, and you’ll receive reminder emails. If you haven’t registered yet, please sign up using the link on this slide. We’d love to have you join the discussion.
Thank you again for joining today’s session. We hope you found it insightful. Have a great rest of your week. Bye-bye.