AI regulations are finally driving urgency
In our last blog post, we covered the countless conversations that ModelOp has had with executives over the past year and the insights we've uncovered. Enterprise AI and AI governance rose to the top as the most pressing topics. One forcing function in those discussions was the myriad AI guidelines and evolving regulations intended to protect society when AI goes wrong, or, ideally, to prevent it from going wrong.
So let’s discuss AI regulations, what businesses need to know, and what they need to do now.
The AI Revolution and its era-defining challenges are here
While modern AI, especially Generative AI, is still in its infancy, it has the potential to be a transformative technology, impacting society much like the Industrial Revolution or the advent of the Internet. A quick Google search for "AI Revolution" returns dozens of books, such as the highly respected The AI Revolution in Medicine. Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, writes about the concept in depth in his book The Fourth Industrial Revolution and in numerous related articles.
While it's too early to declare these proclamations fact, it's clear that leaders across the globe are preparing for AI to transform society at large. This is evidenced by the 25+ AI guidelines and regulations that have already been published globally. The Singapore AI Governance Framework and the EU AI Act were early leaders in defining AI regulations. Since then, similar publications have emerged throughout Asia-Pacific, Europe, South America, and North America.
In the US, President Biden's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" sent a strong signal at the federal level of the government's intention to regulate AI. The Executive Order is grounded in and references the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), which provides a framework for implementing AI governance. Even at the state level, governors, legislators, and attorneys general are demanding that organizations report on their use of AI (see the California Attorney General's request of August 31, 2022, or Texas House Bill 2060).
All of this is to say that global, federal, and state-level governments are moving quickly to implement AI regulations. While reading this, you may be asking, "If I want to use AI, what do I need to do now to prepare my organization?"
What Do I Need to Know?
While all of the aforementioned AI guidelines and regulations differ in their approach, several key themes are pervasive. What follows is a high-level introduction to these themes; each one warrants dedicated discussion, but I will start with a summary. Please note that this section is specifically for organizations that want to use AI systems; it is not targeted at companies that are developing AI (such as foundation models).
For organizations that intend to use AI, the following are key themes:
1. Visibility and Transparency
The first theme ensures appropriate visibility into, and transparency of, all AI systems used across the organization.
a. Visibility. To properly govern an AI system, an organization must know all systems or places where AI is being used. The AI guidelines specify that an inventory of all known AI systems be created, identifying the name, description, purpose, usage, and other key elements of each AI system (a minimal sketch of an inventory record follows below).
As mentioned above, within the US, state governments are aggressively pushing for all “state agencies” to provide an inventory. Here's an excerpt from Texas House Bill 2060:
"Not later than July 1, 2024, each agency in the executive and legislative branches of state government, using money appropriated to the agency by this state, shall submit an inventory report of all automated decision systems that are being developed, employed, or procured by the agency"
These US state-level bills, orders, and requests give authorities the power to require that all "state agencies" provide this information. While definitions vary, state agencies can include state-funded institutions and universities, including university healthcare provider networks.
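To make this concrete, below is a minimal sketch of what a single inventory record might capture. The `AISystemRecord` class and its field names are hypothetical, for illustration only; the fields your organization actually needs will depend on the regulations that apply to it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (illustrative fields only)."""
    name: str                    # e.g., "Claims Triage Model"
    description: str             # what the system does, in plain language
    purpose: str                 # the business decision or task it supports
    usage: str                   # where and how the system is deployed
    owner: str                   # person accountable for oversight
    status: str = "development"  # development | employed | procured
    last_reviewed: date | None = None

# Example entry, mirroring the elements named in Texas HB 2060
record = AISystemRecord(
    name="Benefits Eligibility Screener",
    description="Ranks incoming applications for manual review",
    purpose="Prioritize caseworker workload",
    usage="State benefits intake portal",
    owner="jane.doe@agency.example",
    status="employed",
)
```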
b. Accountability. As part of this visibility, the AI guidelines and regulations require a defined accountability structure for these AI systems, so that proper oversight is provided before an AI system is used. See this excerpt from the California AG's request to all California healthcare providers:
"The name or contact information of the person(s) responsible for evaluating the purpose and use of these tools and ensuring that they do not have a disparate impact based on race or other protected characteristics. "
—Office of the California Attorney General, August 22, 2022
c. Transparency. In this context, transparency is meant to indicate two tenets:
- That a person is aware that an AI system is being used, whether that means the AI system provides responses to a customer’s questions or that an AI system was used to generate content (such as a document or image).
- That a person’s information has been used in the building of an AI model.
The detailed language and enforcement of these two tenets are still being debated (especially for foundation models), but the bottom line is that organizations need to be prepared to answer these questions for any AI system that is used. Again, we turn to enforcement in the US, in this case a federal court's standing order regarding the disclosure of AI usage in the courtroom:
"STANDING ORDER RE: ARTIFICIAL INTELLIGENCE (“AI”) IN CASES ASSIGNED TO JUDGE BAYLSON If any attorney for a party, or a pro se party, has used Artificial Intelligence (“AI”) in the preparation of any complaint, answer, motion, brief, or other paper filed with the Court and assigned to Judge Michael M. Baylson, they MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing and CERTIFY that each and every citation to the law, or the record in the paper, has been verified as accurate"
—United States District Court for the Eastern District of Pennsylvania, June 6, 2023
2. Safety & Security
With the AI systems identified, the second theme ensures these systems are safe and secure:
a. Safety. The fundamental goal of AI regulations is to ensure that AI systems cause no harm to human life, health, the environment, or property. To do so, the organization must fully understand the scope of the AI system's usage: what information it uses, what output it produces, and, most importantly, the potential implications if the AI system has issues. The following figure from the NIST AI RMF provides an overview of the potential "harms" that could result from an AI system that "goes wrong":
NIST AI Risk Management Framework, page 5
b. Security. There are several aspects of security to be accounted for, but I believe they are best summarized by the NIST AI RMF:
- Privacy-enhanced: The organization must incorporate practices that help safeguard human identity, including values such as anonymity and confidentiality.
- Resilient: The AI system must be able to withstand adverse events or unexpected environmental changes. Examples include data poisoning and the theft of training data or of the AI model's own intellectual property.
- Secure: The AI system must be protected against and recover from cybersecurity attacks.
"AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure."
—NIST AI Risk Management Framework, page 15
3. Validity & Fairness
The third theme focuses on the decisions or output that the AI system is providing: are they accurate, reliable, and fair?
Specifics include:
a. Valid: The organization must ensure that the AI system's output meets the system's original requirements with accuracy and reliability. To ensure this, a robust testing and ongoing monitoring framework must be employed, leveraging consistent standards and reporting across the organization (a minimal monitoring sketch follows the excerpts below). Industries such as healthcare have engaged in cross-institution collaboration to advocate for robust quality management frameworks:
"By establishing a QMS (quality management systems) explicitly tailored to health AI technologies, HCOs can comply with evolving regulations and minimize redundancy and rework while aligning their internal governance practices with their steadfast commitment to scientific rigor and medical excellence."
For almost two decades, the US banking industry has operated under model risk management guidance (such as SR Letter 11-7, issued by the Federal Reserve and adopted by the OCC as Bulletin 2011-12), which requires that an independent validation, involving substantial review, analysis, and testing, be conducted before a model can be used.
"All model components, including input, processing, and reporting, should be subject to validation; this applies equally to models developed in-house and to those purchased from or developed by vendors or consultants. The rigor and sophistication of validation should be commensurate with the bank's overall use of models, the complexity and materiality of its models, and the size and complexity of the bank's operations. Validation involves a degree of independence from model development and use. Generally, validation should be done by people who are not responsible for development or use and do not have a stake in whether a model is determined to be valid. Independence is not an end in itself but rather helps ensure that incentives are aligned with the goals of model validation."
—OCC / Federal Reserve, Supervisory Guidance on Model Risk Management (SR 11-7)
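As a minimal illustration of what ongoing monitoring can look like, the sketch below compares a model's live accuracy against the baseline established during independent validation and flags drift beyond a tolerance. The `check_performance_drift` function, the accuracy metric, and the 5% tolerance are all assumptions for illustration; a real model risk management program would set metrics and thresholds per model.

```python
def check_performance_drift(live_accuracy: float,
                            baseline_accuracy: float,
                            tolerance: float = 0.05) -> bool:
    """Return True if live performance has drifted beyond tolerance.

    baseline_accuracy: accuracy measured during independent validation.
    tolerance: maximum acceptable absolute drop (illustrative default).
    """
    drift = baseline_accuracy - live_accuracy
    return drift > tolerance

# Example: a model validated at 91% accuracy now scores 84% in production
if check_performance_drift(live_accuracy=0.84, baseline_accuracy=0.91):
    print("ALERT: model performance outside validated range; escalate for review")
```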
b. Fair: AI systems must be fair and equal for all persons, addressing harmful bias and discrimination. Much has already been published on this topic, but in short, organizations need to ensure that (1) if the organization is developing (training) the AI model, the training data is free from explicit or implicit bias; (2) the organization conducts thorough testing of the AI system's outcomes against established metrics to ensure equality and equity; and (3) the organization performs ongoing evaluation of the AI system's outcomes in practice, again to ensure equal and equitable results. A minimal sketch of one such fairness test follows below.
The Singapore government sponsored the Veritas initiative, which provides a clean summary of the fundamental principles to ensure Fairness in AI:
(AIDA = Artificial intelligence and data analytics)
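To ground point (2) above, here is a minimal sketch of one widely used fairness test: the disparate impact ratio, often paired with the "four-fifths" rule of thumb. It is only one of many possible metrics, and the 0.8 threshold is a convention, not a universal legal standard.

```python
def disparate_impact_ratio(favorable_rate_group_a: float,
                           favorable_rate_group_b: float) -> float:
    """Ratio of favorable-outcome rates between two groups (lower / higher)."""
    lo, hi = sorted([favorable_rate_group_a, favorable_rate_group_b])
    return lo / hi

# Example: 60% of group A and 75% of group B receive a favorable outcome
ratio = disparate_impact_ratio(0.60, 0.75)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.80
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact; investigate further")
```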
How do I get started?
Given the numerous regulations and the variations across each one, it can seem daunting for organizations to figure out where to start. Luckily, based on our team's extensive experience helping organizations implement AI Governance, we have compiled three steps to get your organization started:
- Deploy an AI Governance Inventory: This ensures that you have the appropriate visibility, accountability, and transparency of all AI systems across the organization.
- Implement basic controls: This enforces the appropriate governance policies to ensure that all AI systems are safe, secure, valid, and fair (a minimal sketch follows this list).
- Report on AI Governance Adherence: This provides clarity and assurance to executives, the board of directors, and customers that your organization is adhering to the best practices and regulations.
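As a minimal sketch of what such a control could look like, the gate below refuses to approve a model for production unless its inventory record has the basic required fields and has passed validation and fairness review. The field names and checks are hypothetical; real controls would map to your organization's specific policies.

```python
REQUIRED_FIELDS = ("name", "owner", "purpose")

def approve_for_production(record: dict) -> tuple[bool, list[str]]:
    """Gate deployment on a few basic governance controls (illustrative)."""
    issues = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if not record.get("validation_passed"):
        issues.append("independent validation not passed")
    if not record.get("fairness_review_passed"):
        issues.append("fairness review not passed")
    return (len(issues) == 0, issues)

ok, issues = approve_for_production({
    "name": "Benefits Eligibility Screener",
    "owner": "jane.doe@agency.example",
    "purpose": "Prioritize caseworker workload",
    "validation_passed": True,
    "fairness_review_passed": False,
})
print("Approved" if ok else f"Blocked: {issues}")
```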
Why Start Now?
Reason 1: AI Regulations have “teeth”
But AI regulation is just like GDPR or the California CCPA, which took years to implement and enforce, right? Wrong.
While it is true that a broad, sweeping EU regulation like the EU AI Act will take time to mobilize its authorities, the reality is that regulators are already taking action against egregious violations of these AI regulations and guidelines.
For example, in the EU, France fined Clearview AI €20M for the "illegal collection and processing of biometric data belonging to French citizens." Italy and Greece each fined the company the same amount, for a total of €60M in fines.
In the US, the Federal Trade Commission (FTC) banned Rite Aid from using facial recognition technology for 5 years after the FTC found that this AI technology “falsely flagged the consumers as matching someone who had previously been identified as a shoplifter or other troublemaker.”
The EU AI Act goes well beyond banning certain uses of AI; it also imposes substantial financial fines for regulatory violations. In particular, the EU AI Act has proposed fines of up to 7% of annual turnover, a considerable amount for any business: for a company with €10 billion in annual revenue, that is a potential penalty of €700 million.
Credit: https://www.holisticai.com/blog/penalties-of-the-eu-ai-act
Reason 2: The risks of AI
As mentioned in our last blog post, AI has great potential. However, the risks are too substantial to "address later." These risks include:
- Financial risk
- Brand risk
- IP risk
- Regulatory risk
Reason 3: AI Governance Debt
Given the rapid rise of Generative AI, most organizations are in a bit of a "wild west" of AI, where siloed teams take disparate approaches using different processes and tools. If organizations don't corral these diverging AI programs, they expose themselves to substantial "AI Governance Debt" that will require multi-million-dollar investments to clean up. Remediation activities include:
- Cataloging the numerous existing AI systems.
- Trying to determine the accountability structure for these systems.
- Creating “missing” artifacts as dictated by the regulations: documentation, tests, reviews, approvals.
- Implementing "missing" controls: testing, change management, access, reviews, etc.
- Enforcing ongoing compliance measures.
Conclusion: Get your enterprise ready yesterday
This post only scratched the surface of the AI guidelines and regulations that have already been published. The key takeaway is that AI regulations are here now and here to stay. If your organization does not take the appropriate governance measures, the regulatory authorities can and will step in.
Are you ready?