
AI Regulations & Standards

Between 2018 and 2024, a flurry of guidelines and regulations on AI use was introduced around the globe by both nation states and supranational entities.

National Level Guidance

USA

  • (2020) Executive Order 13960 | Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government
  • (2022) California Attorney General AI/ML Governance | Request to all California Healthcare Providers
  • (2023) NIST AI Risk Management Framework

UK

  • (2018) AI in the UK | Ready, Willing, and Able

Canada

  • (2019) Directive on Automated Decision-Making

Japan

  • (2019) Social Principles of Human-Centric AI

Singapore

  • (2020) Model AI Governance Framework

Australia

  • (2019) AI Ethics Framework

Supranational Level Guidance & Regulations

European Union

  • (2021) The Ethics of AI
  • (2024) European Union Artificial Intelligence Act

United Nations Educational, Scientific and Cultural Organization (UNESCO)

  • (2021) Recommendation on the Ethics of AI

Organisation for Economic Co-operation and Development (OECD)

  • (2019) AI Principles

From AI Guidance to AI Regulation

The EU AI Act is the first set of regulations with the force of law specifically focused on AI use. It seeks to codify into law the 2019 Ethics Guidelines for Trustworthy AI developed by the EU's High-Level Expert Group on AI (AI HLEG).

The EU AI Act takes a tiered approach to risk, recognizing three broad categories of AI use:

  1. uses that present unacceptable levels of risk to society and are banned outright;
  2. uses that present a high level of risk and are subject to the highest levels of scrutiny; and
  3. uses that present minimal or limited risk and thus require a lower level of oversight.
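The tiered triage described above can be sketched as a simple lookup. This is an illustrative simplification only, not legal guidance: the category sets and use-case names below are hypothetical examples, not taken from the Act's text.

```python
# Hypothetical, simplified examples of use cases per tier -- not an
# authoritative reading of the EU AI Act.
UNACCEPTABLE = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"credit_scoring", "hiring", "medical_diagnosis"}

def risk_tier(use_case: str) -> str:
    """Map an AI use case to one of the three tiers described above."""
    if use_case in UNACCEPTABLE:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: subject to the highest scrutiny"
    return "minimal/limited risk: lighter oversight"

print(risk_tier("hiring"))
print(risk_tier("spam_filtering"))
```

In practice, classification under the Act depends on detailed legal criteria, but the branching logic of a compliance triage follows this shape.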

The Act applies to all companies, irrespective of where they are located, that offer AI products or services in the EU market. So, like the EU's General Data Protection Regulation (GDPR), the impact of this new Act will be felt far beyond the borders of the European continent.

With so many AI use guidance documents being issued by governmental entities around the globe, it seems likely that more governments will follow the path taken in the EU, evolving guidance into AI-specific regulations that carry the force of law.

Some Existing Laws Already Regulate AI Use

While the EU AI Act is the first comprehensive legal framework targeting AI, it is by no means the first set of laws with provisions that affect the use of AI within the enterprise.

In particular, banks have been subject to regulation of their AI use for some time. That is because banks increasingly deploy AI models in areas such as credit risk assessment, fraud detection, trading algorithms, anti-money laundering (AML) monitoring, and customer relationship management.

These models are already subject to regulation by oversight bodies charged with ensuring the health of the global banking system. For example, the U.S. Federal Reserve and OCC's SR 11-7 guidance on model risk management provides banks a framework to identify, manage, and mitigate model risk across all areas of a financial institution's operations.

In addition, AI leverages massive data sets to construct models, and much of this data is subject to existing regulations on the use of sensitive data or data privacy rights. GDPR, HIPAA, and PCI DSS are just a few examples of existing regulations that may affect the use of AI within the enterprise.

State of Enterprise AI Governance Readiness

According to McKinsey's "2023 State of AI" report, fewer than half of the companies surveyed considered regulatory compliance a relevant risk of generative AI use. Even fewer, less than a third, believed they should take action to mitigate that risk. These low percentages are consistent with prior McKinsey State of AI surveys going back to at least 2020.

Businesses need to reassess whether they can continue to treat AI regulatory compliance as a low priority, given (1) the strong interest shown by governmental entities around the globe in regulating AI use and (2) the potentially high costs of regulatory non-compliance.

As enterprises assess their ability to respond to current and forthcoming AI regulations, covering both traditional AI and generative AI, they overwhelmingly find that they lack visibility into their inventory of AI models, and that their responses to internal or external queries about AI use are reactive, manual, and time-consuming.

AI Misuse Penalties Can Be Costly

Given the apparent lack of preparedness of most organizations, business leaders should take note of the potential financial risks associated with regulatory non-compliance.  

For instance, violations of the EU AI Act can result in fines of up to 35M Euros or 7% of worldwide annual turnover (revenue), whichever is higher. And it is not only AI-specific penalties that can create financial liabilities for corporations:

  • Target Corporation paid nearly $20M in U.S. settlements over its 2013 payment-card data breach, which involved PCI compliance failures.
  • In 2018, Anthem, a health insurer, paid a $16M settlement to the US Government for violations related to HIPAA.
  • In 2021, Amazon was fined nearly one billion dollars by Luxembourg's data protection authority for GDPR compliance failures.
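The "fixed amount or a percentage of turnover, whichever is higher" fine structure common to GDPR and the EU AI Act is easy to misjudge at large revenue scales. A minimal sketch, treating the fixed cap and percentage as parameters rather than authoritative figures:

```python
def max_fine(turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Upper bound of a 'whichever is higher' penalty.

    The default cap and percentage are illustrative assumptions;
    actual caps vary by regulation and violation category.
    """
    return max(fixed_cap_eur, pct * turnover_eur)

# For a company with 2B EUR turnover, the percentage term dominates:
# 7% of 2B = 140M EUR, far above the fixed cap.
print(max_fine(2_000_000_000))
```

The point of the percentage term is that the effective cap grows with company size, which is why the largest fines to date (such as Amazon's) dwarf the fixed amounts.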

Examples of AI Ethical Use Failures

There have already been many well documented cases where businesses have fallen short of current guidelines for ethical and responsible AI development.  

A couple of representative examples are:

Recruitment AI Bias (2018)  

Amazon scrapped an AI-based recruitment tool after discovering it was biased against women. The AI had been trained on resumes submitted over a 10-year period, which were predominantly from men, leading to biased recommendations. This raised concerns about discrimination, damaged Amazon’s reputation, and highlighted the risks of biased AI in hiring.

Credit Card Discrimination (2019)

Apple and Goldman Sachs faced allegations that the Apple Card's AI algorithm discriminated against women, offering them lower credit limits than men of similar financial profiles. This led to a need to review and adjust their AI models to ensure fairness and compliance with anti-discrimination laws.

What Good AI Governance Looks Like

If enterprises are to ensure that AI accelerates innovation while safeguarding the business against increased operational and regulatory risk, they will need a robust AI governance framework that can scale with an ever-growing number of AI models, many of them based on generative AI.

Key attributes that need to be part of a modernized AI governance framework include:

Comprehensive Inventory

Visibility into all your AI initiatives through a dynamic inventory that integrates with your priority AI systems.

Light Controls

Implement a risk-based compliance approach and enforce the requisite controls for all AI systems.

Robust Reporting

On-demand reporting on AI usage, risks, and adherence across internally developed, proprietary, vendor, and embedded AI.
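The three attributes above can be illustrated with a toy model inventory. This is a minimal sketch with hypothetical field names, not any vendor's schema: each record captures a model's origin and risk tier, and a report function answers the kind of on-demand query the text describes.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a hypothetical enterprise AI model inventory."""
    name: str
    origin: str         # e.g. "in-house", "vendor", "embedded"
    risk_tier: str      # e.g. "high", "limited", "minimal"
    last_reviewed: str  # ISO date of last compliance review

def report(inventory: list[ModelRecord], tier: str) -> list[str]:
    """On-demand report: names of all models in a given risk tier."""
    return [m.name for m in inventory if m.risk_tier == tier]

inventory = [
    ModelRecord("credit-scoring-v3", "in-house", "high", "2024-03-01"),
    ModelRecord("chat-summarizer", "vendor", "limited", "2024-02-15"),
]
print(report(inventory, "high"))
```

A real inventory would integrate with deployment systems and enforce tier-appropriate controls automatically; the point here is only that answering "which high-risk models do we run?" should be a query, not a manual audit.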

Related Resources

ModelOp Center

Govern and Scale All Your Enterprise AI Initiatives with ModelOp Center

ModelOp is the leading AI Governance software for enterprises and helps safeguard all AI initiatives — including both traditional and generative AI, whether built in-house or by third-party vendors — without stifling innovation.

Through automation and integrations, ModelOp empowers enterprises to quickly address the critical governance and scale challenges necessary to protect and fully unlock the transformational value of enterprise AI — resulting in effective and responsible AI systems.

Whitepaper | 4/30/2024

Minimum Viable Governance

Must-Have Capabilities to Protect Enterprises from AI Risks and Prepare for AI Regulations, including the EU AI Act

Download this white paper to see how ModelOp Center can help you scale your approach to AI governance.