EU AI Act
The EU AI Act, introduced in 2024, is the world's first comprehensive legal framework for AI regulation. It categorizes AI systems by risk level and imposes strict compliance obligations on providers and deployers of high-risk AI to ensure ethical and transparent AI use.
What is the EU AI Act?
Introduced in 2024, the European Union Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive legal framework regulating the use of AI within member states. The Act takes a tiered approach to the risks AI poses, applying different compliance requirements based on the type of risk each category of AI represents. The Act will affect the activities of all organizations that operate, either directly or indirectly, within the boundaries of the European Union (EU).
The Act further breaks out compliance obligations for high-risk AI use by role: Providers and Deployers. A Provider is the entity that develops an AI system and makes it available to the marketplace. A Deployer is the entity that takes that AI system and makes it part of its own products and/or services. The Act defines other roles as well, but the vast majority of compliance requirements fall on Providers and Deployers.
Under the Act, General Purpose AI (GPAI) models are not placed within the risk-tiered uses of AI; instead, the Act lays out separate provisions for the Providers and Deployers of GPAI models.
The Act provides for severe penalties for various categories of infractions. For instance, entities that violate the prohibitions on banned AI uses can be fined up to €40M or 7% of annual turnover.
EU AI Act Risk Categories
The risk tiering approach of the Act lays out four categories of AI risk. Compliance requirements and penalties for compliance infractions vary depending on the level of risk associated with the AI use in question.
Four Categories of Risk
- Unacceptable Risk: Systems in this category are banned outright due to the high risk that they will violate fundamental rights.
- High Risk: These systems are deemed high risk because they could negatively impact the rights or safety of individuals. They are subject to stringent compliance measures.
- Limited Risk: These kinds of systems present lower levels of risk than high-risk systems but are still subject to transparency requirements. Individuals who interact with AI systems in this category must be able to clearly understand when they are interacting with an AI system.
- Minimal Risk: These are systems that present little risk of harm to the rights of individuals. Spam filters and AI-powered games are cited as examples of AI systems in this category. The EU AI Act does not impose any obligations on Providers or Deployers of these kinds of systems.
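To make the tiering concrete, here is a minimal sketch (in Python) of how an organization might record the four risk tiers in an internal AI inventory so that the applicable obligations can be looked up per system. The enum values, the `AISystemRecord` fields, and the example entries are illustrative assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # banned outright (Article 5)
    HIGH = "high"                   # stringent compliance obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no obligations under the Act


@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative only)."""
    name: str
    intended_purpose: str
    risk_tier: RiskTier


# Hypothetical inventory entries
inventory = [
    AISystemRecord("resume-screener", "rank job applicants", RiskTier.HIGH),
    AISystemRecord("support-chatbot", "answer customer questions", RiskTier.LIMITED),
    AISystemRecord("spam-filter", "filter inbound email", RiskTier.MINIMAL),
]

for record in inventory:
    print(f"{record.name}: {record.risk_tier.value}")
```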
Unacceptable Risk AI
Prohibited AI uses, addressed in Article 5 of the Act, include the following:
- AI-enabled subliminal or manipulative techniques that can be used to persuade individuals to engage in unwanted behaviors.
- AI systems that exploit the vulnerabilities of individuals due to their age, disability or a specific social or economic situation.
- AI systems used for social scoring—where systems evaluate or classify individuals based on their social behavior—leading to detrimental or unfavorable treatment of these individuals.
- AI systems that assess or predict the risk that an individual will commit a crime, based solely on the profiling of that individual.
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV.
- AI systems that infer the emotions of individuals in workplaces and educational institutions (except where the AI system is intended for medical or safety reasons).
- AI systems used for biometric categorization of individuals based on their biometric data, to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
High Risk AI
Annex III of the EU AI Act specifies the major AI uses that should be classified as high-risk. These include:
- AI systems covered by existing industry sector or product safety regulations such as those for medical devices, vehicles, and toys.
- AI systems involved in non-banned uses of biometric identification or emotion recognition.
- AI systems used to manage critical infrastructure, like energy grids or transportation systems.
- AI systems used in education or vocational training.
- AI systems used in recruitment, performance evaluation, or other aspects of workplace management.
- AI systems used in financial services, such as credit scoring and insurance pricing.
- AI systems used in determining appropriate responses to requests for emergency services.
- AI systems used in influencing elections and voter behavior.
- AI systems used in law enforcement, migration and border control, or the administration of justice.
Limited and Minimal Risk AI
The high-risk categories identified in the EU AI Act are so broad and encompassing that, without carving out general use cases that pose lower levels of risk, nearly every use of AI that isn’t banned would be considered high-risk.
The Act therefore designates some uses of AI, irrespective of the category they fall into, as presenting limited risk. The primary obligation associated with limited risk AI systems is transparency. Individuals interacting with limited risk AI systems must be informed that they are interacting with an AI system.
An AI system qualifies for this exemption if it meets one or more of the following criteria:
- Performing narrow procedural tasks
- Making improvements to the results of previously completed human activities
- Detecting decision-making patterns or deviations from prior decision-making patterns without replacing or influencing human assessments
In addition to the criteria above, AI systems (including GPAI systems) that generate synthetic audio, image, video, or text content are generally considered limited risk. Chatbot implementations and AI systems that generate “deepfakes” are cited as examples of AI systems that present a limited risk of harm.
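As an illustration, the exemption criteria above can be expressed as a simple screening check inside an internal assessment tool. This is a hedged sketch under the assumption that each criterion has already been evaluated by a human reviewer; the function name and flags are hypothetical, and a real determination would require a documented legal assessment.

```python
def may_qualify_for_exemption(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_influencing_humans: bool,
) -> bool:
    """Rough screening check: does the system meet at least one of the
    exemption criteria described above? (Illustrative only.)"""
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_influencing_humans,
    ])


# Example: a system that only detects deviations from prior decision patterns
print(may_qualify_for_exemption(False, False, True))  # True
```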
High Risk AI – Provider Obligations
Under the EU AI Act, Providers of AI systems deemed to be high-risk have the following obligations:
- Establish a comprehensive risk management system, a quality management system and a robust data governance and management process.
- Maintain technical documentation that demonstrates that the AI system complies with all relevant requirements.
- Keep meticulous records, including the automatic logging of events (a minimal logging sketch follows these lists).
- Share information with deployers that ensures the proper use of the AI system.
- Provide human oversight intended to prevent or minimize the risks to health, safety or fundamental rights.
- Ensure that AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity.
- Carry out ongoing post-market monitoring that ensures AI systems are functioning as expected.
Providers of high-risk AI systems must also:
- Undergo a rigorous conformity assessment process.
- Obtain an EU declaration of conformity and affix the CE marking to their systems.
- Register their AI system in an EU or national database.
- Implement corrective actions if they believe their AI system is not compliant with the provisions of the Act.
- Report any serious incidents to the appropriate authorities.
- Fully cooperate with authorities investigating issues associated with their AI systems.
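As a rough illustration of the record-keeping obligation noted above, the sketch below shows one way a Provider might automatically log inference events using Python's standard logging module. The event fields and the log file layout are assumptions; the Act does not prescribe a specific format.

```python
import json
import logging
from datetime import datetime, timezone

# Append one JSON event per line to a log file (illustrative layout).
logging.basicConfig(
    filename="ai_system_events.log",
    level=logging.INFO,
    format="%(message)s",
)


def log_inference_event(system_id: str, input_ref: str, output_ref: str,
                        human_reviewer: str | None = None) -> None:
    """Record a single inference so it can later be produced for auditors."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to the input data, not the data itself
        "output_ref": output_ref,    # reference to the produced output
        "human_reviewer": human_reviewer,
    }
    logging.info(json.dumps(event))


log_inference_event("credit-scoring-v3", "case-10482", "decision-10482", "j.doe")
```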
High Risk AI – Deployer Obligations
Deployer obligations under the EU AI Act include:
- Provide human oversight of the deployed AI system.
- Ensure that input data is relevant and sufficiently representative in view of the intended purpose.
- Monitor the operation of the high-risk AI system on the basis of the instructions for use provided by the Provider.
- Maintain the logs automatically generated by that high-risk AI system if under their control.
- Inform any affected workers and their representatives that they will be subject to the use of a high-risk AI system.
- Inform end users affected by the use of an AI system that they are interacting with an AI system.
- Only use AI systems that have been properly registered in the appropriate EU or national database (public authorities must also register deployments in the EU Database).
- Cooperate fully with the relevant authorities in any action those authorities take in relation to a deployed AI system.
- Obtain judicial authorization for any exempted use of post-remote biometric identification.
Deployers also have a duty to inform the provider and relevant authorities if they believe an AI system they have deployed poses a risk of harm. In such cases they must also suspend use of that system.
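To make the suspend-and-report duty concrete, here is a minimal, hypothetical sketch of a Deployer-side guard that halts use of a system and lists the notifications that need to go out when a risk is suspected. The class and method names are illustrative assumptions, not an API defined by the Act or by any particular vendor.

```python
class DeployedSystemGuard:
    """Illustrative Deployer-side control for a high-risk AI system."""

    def __init__(self, system_id: str):
        self.system_id = system_id
        self.suspended = False

    def report_suspected_risk(self, description: str) -> list[str]:
        """Suspend use and return the notifications that must be sent."""
        self.suspended = True
        return [
            f"Notify the Provider of {self.system_id}: {description}",
            f"Notify the relevant market surveillance authority: {description}",
        ]

    def predict(self, features: dict) -> str:
        if self.suspended:
            raise RuntimeError(f"{self.system_id} is suspended pending investigation")
        return "placeholder-decision"  # the real model call would go here


guard = DeployedSystemGuard("resume-screener")
for task in guard.report_suspected_risk("possible bias against a protected group"):
    print(task)
```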
GPAI Model Obligations
GPAI models are AI models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems. Well-known examples include GPT-4, DALL-E, and Google BERT.
Obligations of Providers of GPAI
- Keep up-to-date technical documentation of the model, including its training and testing process and the results of its evaluation.
- Make available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems.
- Put policies in place to ensure that GPAI models comply with EU law on copyright and related rights.
- Make publicly available a detailed summary of the content used to train the GPAI model.
- As necessary, cooperate with the European Commission and the national competent authorities investigating any issues associated with a GPAI system.
GPAI Models with Systemic Risk
Systemic risk is a risk that is specific to some high-impact GPAI systems. Models with systemic risk can have a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, or fundamental rights. These risks are exacerbated when a model can be propagated at scale across the value chain.
Obligations of Providers of GPAI with Systemic Risk
Providers of GPAI models with systemic risk are responsible for all of the base obligations associated with GPAI models, as well as the following additional obligations:
- Notify the European Commission of any GPAI system that meets the criteria of a GPAI system with systemic risk.
- Perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.
- Assess and mitigate possible systemic risks, including their sources, that may stem from the development, the placing on the market, or the use of the model.
- Keep track of, document, and report, to the AI Office and national authorities (as appropriate), relevant information about serious incidents and possible corrective measures to address them.
- Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
EU AI Act Compliance Penalties
Penalties for non-compliance are broken into four categories of infraction. The Act outlines the following maximum penalties:
- Prohibited AI Use: Fines up to €40M or 7% of annual turnover
- Data or Transparency Related: Fines up to €20M or 4% of annual turnover
- Other Obligations: Fines up to €15M or 3% of annual turnover
- Incorrect Information: Fines up to €7.5M or 1% of annual turnover
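Each ceiling combines a fixed amount with a percentage of annual turnover; for most undertakings the higher of the two generally applies (with relief for SMEs, where the lower applies). A small worked example under that assumption, using the prohibited-use figures cited above:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling of an administrative fine: the higher of the fixed cap and the
    turnover-based cap (assumed lower of the two for SMEs)."""
    turnover_cap = annual_turnover_eur * turnover_pct
    return min(fixed_cap_eur, turnover_cap) if is_sme else max(fixed_cap_eur, turnover_cap)


# A company with €2B annual turnover violating a prohibited-use provision:
# 7% of €2B = €140M, which exceeds the €40M fixed cap.
print(f"€{max_fine(40_000_000, 0.07, 2_000_000_000):,.0f}")  # €140,000,000
```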
National market surveillance authorities, operating under the EU 2019/1020 Market Surveillance Regulation, are responsible for enforcing all provisions of the Act other than those related to GPAI. GPAI compliance enforcement falls to the newly formed EU AI Office, established under the Act.
While the fines that can be incurred for non-compliance are substantial, they may be just a fraction of the total financial liability associated with major breaches of the Act. Other potential financial liabilities include litigation and judgments, as well as lost revenue resulting from damage to the company's brand.
EU AI Act Timeline
2026, February
Guidelines on the practical implementation of the Act are due, including a comprehensive list of practical examples of AI system use cases by risk tier.
2026, August
The Act becomes generally applicable, including the obligations for high-risk AI systems listed in Annex III.
2027, August
Obligations on high-risk AI systems begin to apply to products already required to undergo third-party conformity assessments (e.g., toys, medical devices).
2030, December
AI systems that are components of the large-scale IT systems listed in Annex X and that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the Act.
What You Should Do Now
Here is a series of next steps you should take to prepare to meet the obligations of the EU AI Act:
For All AI
- Understand which risk tiers your AI systems fall into and categorize them appropriately
- Understand your obligations under the Act based on the risk posed by your AI systems
For High-Risk AI and GPAI Models
- Define and Document Use Cases
- Conduct Risk Assessments
- Implement Risk Management
- Ensure Data Protection Compliance
- Maintain Detailed Records
- Provide User Transparency
- Implement Human Oversight
- Implement Continuous Monitoring
Framework for Achieving Compliance
The above steps lay out what must be done to comply with the EU AI Act. But without a strong framework that ensures these steps happen in a methodical and repeatable way most organizations will be extremely challenged to meet the obligations of the Act. This is where the concept of Minimum Viable Governance (MVG) comes in.
The MVG approach to governance focuses on right-sizing the effort involved in establishing an AI governance program - not too much, not too little, but just enough to protect the organization while maintaining AI innovation cycles.
MVG involves three core facets:
- Establishing a governance inventory to ensure visibility into all AI usage.
- Applying lightweight controls to manage verification, evidence, and approvals without overwhelming innovation.
- Implementing streamlined reporting to achieve transparency and understand how AI is being used.
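A minimal sketch of how the three facets might fit together in practice: an inventory of governed systems, lightweight control checks recorded per system, and a streamlined report summarizing status. The data structures below are illustrative assumptions, not ModelOp Center's actual data model.

```python
from dataclasses import dataclass, field


@dataclass
class GovernedSystem:
    """One entry in the governance inventory (illustrative)."""
    name: str
    risk_tier: str                                   # e.g. "high", "limited", "minimal"
    controls_passed: dict[str, bool] = field(default_factory=dict)


def compliance_report(inventory: list[GovernedSystem]) -> str:
    """Streamlined reporting: summarize control status across the inventory."""
    lines = []
    for system in inventory:
        failed = [name for name, ok in system.controls_passed.items() if not ok]
        status = "OK" if not failed else f"ATTENTION: {', '.join(failed)}"
        lines.append(f"{system.name} ({system.risk_tier}): {status}")
    return "\n".join(lines)


inventory = [
    GovernedSystem("credit-scoring-v3", "high",
                   {"risk_assessment": True, "human_oversight": False}),
    GovernedSystem("support-chatbot", "limited", {"user_disclosure": True}),
]
print(compliance_report(inventory))
```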
Govern and Scale All Your Enterprise AI Initiatives with ModelOp Center
ModelOp is the leading AI Governance software for enterprises and helps safeguard all AI initiatives — including both traditional and generative AI, whether built in-house or by third-party vendors — without stifling innovation.
Through automation and integrations, ModelOp empowers enterprises to quickly address the critical governance and scale challenges necessary to protect and fully unlock the transformational value of enterprise AI — resulting in effective and responsible AI systems.
To See How ModelOp Center Can Help You Scale Your Approach to AI Governance