EU AI Act
The EU AI Act, introduced in 2024, is the world's first comprehensive legal framework for AI regulation. It categorizes AI systems by risk level and imposes strict compliance obligations on providers and deployers of high-risk AI to ensure ethical and transparent use.
What is the EU AI Act?
Introduced in 2024, the European Union Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive legal framework regulating the use of AI within member states. The act takes a tiered approach to the risks AI poses, applying different compliance requirements based on the type of risk each category of AI represents. The Act will affect the activities of all organizations that operate, either directly or indirectly, within the boundaries of the European Union (EU).
The Act further breaks out compliance obligations for high-risk AI use by Providers and Deployers. A Provider is the entity that develops an AI system and makes it available to the marketplace; a Deployer is an entity that incorporates such AI systems into its own products and/or services. The Act defines other roles as well, but the vast majority of compliance requirements relate to Providers and Deployers.
Within the provisions of the Act, General Purpose AI (GPAI) models are not part of the risk-tiered categorization of AI uses. The Act lays out separate provisions for Providers and Deployers of GPAI models.
The Act provides for severe penalties across several categories of infraction. For instance, entities that violate prohibitions on banned AI uses can be fined up to €35 million or 7% of annual worldwide turnover, whichever is higher.
EU AI Act Risk Categories
The risk tiering approach of the Act lays out four categories of AI risk. Compliance requirements and penalties for compliance infractions vary depending on the level of risk associated with the AI use in question.
Four Categories of Risk
- Unacceptable Risk: Systems in this category are banned outright due to the high risk that they will violate fundamental rights.
- High Risk: These systems are deemed to present high-risk as they could negatively impact the rights or safety of individuals. These systems are subject to stringent compliance measures.
- Limited Risk: These kinds of systems present lower levels of risk than high-risk systems but are still subject to transparency requirements. Individuals who interact with AI systems in this category must be able to clearly understand when they are interacting with an AI system.
- Minimal Risk: These are systems that present little risk of harm to the rights of individuals. Spam filters and AI powered games are cited as examples of AI systems in this category of risk. The EU AI Act does not impose any obligations on Providers or Deployers of these kinds of systems.
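The four tiers above amount to a simple lookup from risk category to compliance posture, which is often the starting point for an AI inventory. The sketch below is illustrative only; the tier names and one-line summaries paraphrase the Act's categories and are not official terminology.

```python
# Illustrative sketch: map the Act's four risk tiers to their broad
# compliance posture. Summaries paraphrase the Act; not official text.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "stringent compliance obligations",
    "limited": "transparency obligations",
    "minimal": "no obligations",
}

def obligations_for(tier: str) -> str:
    """Return the broad compliance posture for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(obligations_for("limited"))  # transparency obligations
```

In practice, assigning a system to a tier requires legal analysis of Annex III and Article 5; a lookup like this only records the outcome of that analysis.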

Unacceptable Risk AI
Prohibited AI uses, addressed in Article 5 of the Act, include the following:
- AI-enabled subliminal or manipulative techniques that can be used to persuade individuals to engage in unwanted behaviors.
- AI systems that exploit the vulnerabilities of individuals due to their age, disability or a specific social or economic situation.
- AI systems used for social scoring—where systems evaluate or classify individuals based on their social behavior—leading to detrimental or unfavorable treatment of these individuals.
- AI systems that assess or predict the risk that an individual will commit a crime, based solely on profiling of that individual.
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV.
- AI systems that infer the emotions of individuals in the workplace or in educational institutions (except where the AI system is intended for medical or safety reasons).
- AI systems used for biometric categorization of individuals based on their biometric data, to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
High Risk AI
Annex III of the EU AI Act specifies the major AI uses that should be classified as high-risk. These include:
- AI systems covered by existing industry sector or product safety regulations such as those for medical devices, vehicles, and toys.
- AI systems involved in non-banned uses of biometric identification or emotion recognition.
- AI systems used to manage critical infrastructure, like energy grids or transportation systems.
- AI systems used in education or vocational training.
- AI systems used in recruitment, performance evaluation, or other aspects of workplace management.
- AI systems used in financial services, such as credit scoring and insurance pricing.
- AI systems used in determining appropriate responses to requests for emergency services.
- AI systems used in influencing elections and voter behavior.
- AI systems used in law enforcement, migration and border control, or the administration of justice.
Limited and Minimal Risk AI
The high-risk categories identified in the EU AI Act are so broad and encompassing that, without carve-outs for general use cases that pose lower levels of risk, nearly every use of AI that isn’t banned would be considered high-risk.
The Act therefore designates some uses of AI, irrespective of the category they fall into, as presenting limited risk. The primary obligation associated with limited risk AI systems is transparency. Individuals interacting with limited risk AI systems must be informed that they are interacting with an AI system.
To qualify for this exemption, an AI system must meet one or more of the following criteria:
- Performing narrow procedural tasks
- Making improvements to the results of previously completed human activities
- Detecting decision-making patterns or deviations from prior decision-making patterns without replacing or influencing human assessments
In addition to the criteria above, AI systems, including GPAI systems, that generate synthetic audio, image, video, or text content are generally considered limited risk. Chatbot implementations and AI systems that generate “deepfakes” are cited as examples of AI systems presenting a limited risk of harm.
High Risk AI – Provider Obligations
Under the EU AI Act, Providers of AI systems deemed to be high-risk have the following obligations:
- Establish a comprehensive risk management system, a quality management system and a robust data governance and management process.
- Maintain technical documentation that demonstrates that the AI system complies with all relevant requirements.
- Keep meticulous records including the automatic logging of events.
- Share information with deployers that ensures the proper use of the AI system.
- Provide human oversight intended to prevent or minimize the risks to health, safety or fundamental rights.
- Ensure that AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity.
- Carry out ongoing post-market monitoring that ensures AI systems are functioning as expected.
Providers of high-risk AI systems must also:
- Undergo a rigorous conformity assessment process.
- Obtain an EU declaration of conformity and affix the CE marking to their systems.
- Register their AI system in an EU or national database.
- Implement corrective actions if they believe their AI system is not compliant with the provisions of the Act.
- Report any serious incidents to the appropriate authorities.
- Fully cooperate with authorities investigating issues associated with their AI systems.
High Risk AI – Deployer Obligations
Deployer obligations under the EU AI Act include:
- Provide human oversight of the deployed AI system.
- Ensure that input data is relevant and sufficiently representative in view of the intended purpose.
- Monitor the operation of the high-risk AI system on the basis of the instructions for use provided by the Provider.
- Maintain the logs automatically generated by that high-risk AI system if under their control.
- Inform any affected workers and their representatives that they will be subject to the use of a high-risk AI system.
- Inform end users affected by the use of an AI system that they are interacting with an AI system.
- Only use AI systems that have been properly registered in the appropriate EU or national database (public authorities must also register deployments in the EU Database).
- Cooperate fully with the relevant authorities in any action those authorities take in relation to a deployed AI system.
- Obtain judicial authorization for any exempted use of post-remote biometric identification.
Deployers also have a duty to inform the provider and relevant authorities if they believe an AI system they have deployed poses a risk of harm. In such cases they must also suspend use of that system.
GPAI Model Obligations
GPAI models are AI models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems. Well-known examples include GPT-4, DALL-E, and Google BERT.
Obligations of Providers of GPAI
- Keep up-to-date technical documentation of the model, including its training and testing process and the results of its evaluation.
- Make available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems.
- Put in place policies ensuring that GPAI systems comply with EU law on copyright and related rights.
- Make publicly available a detailed summary about the content used for training of GPAI models.
- As necessary, cooperate with the European Commission and the national competent authorities investigating any issues associated with a GPAI system.
GPAI Models with Systemic Risk
Systemic risk is a risk that is specific to some high-impact GPAI systems. Models with systemic risk can have a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, or fundamental rights. These risks are exacerbated when a model can be propagated at scale across the value chain.
Obligations of Providers of GPAI with Systemic Risk
Providers of GPAI systems with systemic risk must meet all of the base obligations for GPAI systems. In addition, they are responsible for the following:
- Notify the European Commission of any GPAI system that meets the criteria of a GPAI system with systemic risk.
- Perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.
- Assess and mitigate possible systemic risks, including their sources, that may stem from the development, the placing on the market, or the use of the model.
- Keep track of, document, and report, to the AI Office and national authorities (as appropriate), relevant information about serious incidents and possible corrective measures to address them.
- Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
EU AI Act Compliance Penalties
Penalties for non-compliance fall into four categories of infraction. The Act outlines the following maximum penalties:
- Prohibited AI Use: Fines up to €35M or 7% of annual worldwide turnover
- Data or Transparency related: €20M or 4% of annual turnover
- Other Obligations: €15M or 3% of annual turnover
- Incorrect Information: €7.5M or 1% of annual turnover
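Each cap above follows a “whichever is higher” rule: the applicable maximum is the greater of the fixed amount and the percentage of annual worldwide turnover (the FAQ below cites the €35M / 7% figure for the top tier). A minimal sketch of that arithmetic, for illustration only and not legal advice:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of a fixed cap or a
    percentage of annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with €2B annual turnover violating a prohibited-use provision:
# 7% of €2B is €140M, which exceeds the €35M fixed cap.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```

For large enterprises, the turnover-based figure almost always dominates the fixed cap, which is why headline fine amounts understate the real exposure.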
National market surveillance authorities, operating under EU Regulation 2019/1020 on market surveillance, are responsible for enforcing all provisions of the Act other than those related to GPAI. GPAI compliance enforcement falls to the newly formed EU AI Office, established by the Act.
While the fines that can be incurred for non-compliance are substantial, they may well be just a fraction of the total financial liability associated with major breaches of the Act. Other potential financial liabilities include litigation and judgements as well as the potential loss of revenue that may occur as the direct consequence of a damaged company brand.
EU AI Act Timeline
2026, February
Guidelines on the practical implementation of the Act, including a comprehensive list of practical examples of use cases of AI systems by risk tiering
2026, August
The Act becomes generally applicable, including the obligations on high-risk AI systems listed in Annex III.
2027, August
Obligations on high-risk AI systems apply to products already required to undergo third-party conformity assessments (e.g., toys and medical devices).
2030, December
AI systems that are components of the large-scale IT systems listed in Annex X and that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the Act.
What You Should Do Now
Here are a series of next steps you should take to prepare to meet the obligations of the EU AI Act:
For All AI
- Understand which risk tiers your AI systems fall into and categorize them appropriately
- Understand your obligations under the Act based on the risk posed by your AI systems
For High-Risk AI and GPAI Models
- Define and Document Use Cases
- Conduct Risk Assessments
- Implement Risk Management
- Ensure Data Protection Compliance
- Maintain Detailed Records
- Provide User Transparency
- Implement Human Oversight
- Implement Continuous Monitoring
Framework for Achieving Compliance
The above steps lay out what must be done to comply with the EU AI Act. But without a strong framework that ensures these steps happen in a methodical and repeatable way, most organizations will struggle to meet the obligations of the Act. This is where the concept of Minimum Viable Governance (MVG) comes in.
The MVG approach to governance focuses on right-sizing the effort involved in establishing an AI governance program: not too much, not too little, but just enough to protect the organization while maintaining AI innovation cycles.
MVG involves three core facets:
- Establishing a governance inventory to ensure visibility into all AI usage.
- Applying lightweight controls to manage verification, evidence, and approvals without overwhelming innovation.
- Implementing streamlined reporting to achieve transparency and understand how AI is being used.
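As a concrete illustration, the three MVG facets could be modeled as a minimal in-memory inventory with per-system control evidence and a simple tier report. All names here (`AISystemRecord`, `tier_report`, the field names) are hypothetical sketches, not structures prescribed by MVG or the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a governance inventory (illustrative fields only)."""
    name: str
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    role: str                      # "provider" or "deployer"
    intended_purpose: str
    approvals: list = field(default_factory=list)  # lightweight-control evidence

def tier_report(inventory):
    """Streamlined reporting: count systems per risk tier."""
    counts = {}
    for record in inventory:
        counts[record.risk_tier] = counts.get(record.risk_tier, 0) + 1
    return counts

inventory = [
    AISystemRecord("resume-screener", "high", "deployer", "candidate screening"),
    AISystemRecord("support-chatbot", "limited", "deployer", "customer support"),
]
print(tier_report(inventory))  # {'high': 1, 'limited': 1}
```

Even a sketch this small surfaces the key governance questions: which systems exist, what tier each falls into, and what evidence backs each approval.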
EU AI Act FAQs
What is the purpose of the EU AI Act?
The purpose of the Act is to drive responsible innovation: improving the functioning of the internal market and promoting the uptake of human-centric, trustworthy AI, while protecting against the harmful effects of AI and supporting innovation (“EU AI Act Insights,” n.d.).
What is the scope of the EU AI Act?
It’s extra-territorial, and international companies, even if they are not based in the EU, may still find themselves subject to the AI Act. The EU AI Act imposes obligations on providers, importers, distributors and deployers of AI systems and General-Purpose AI Models (GPAIMs) (“EU AI Act Insights,” n.d.).
What is the risk-based approach of the EU AI Act?
The EU AI Act adopts a risk-based approach, classifying AI systems into four distinct categories based on their potential impact:
- Unacceptable Risks: AI practices that pose a clear threat to individuals' safety or fundamental rights are prohibited. This includes systems designed for subliminal manipulation or those exploiting vulnerabilities of specific groups.
- High-Risk AI Systems: These systems are subject to stringent requirements due to their significant impact on critical areas such as public services, law enforcement, and safety components. Examples include AI applications in healthcare diagnostics, transportation management, and educational assessments (Pinsent Masons, 2024).
- Limited Risk: AI systems that interact with users without significant implications fall under this category. They are required to meet specific transparency obligations, such as informing users that they are engaging with an AI system. Chatbots and AI-generated content tools are typical examples.
- Minimal Risk: This category encompasses AI systems with negligible or no impact on users' rights or safety. Such systems are permitted with minimal regulatory intervention. Examples include spam filters and AI applications in video games.
What constitutes a high-risk AI system under the EU AI Act?
High-risk AI systems are those that significantly affect critical sectors and can potentially impact individuals' rights or safety. This includes general-purpose AI models when applied in sensitive contexts, such as:
- Healthcare: AI systems used for patient diagnostics or treatment recommendations.
- Transportation: AI applications in autonomous driving or traffic management systems.
- Education: AI tools employed in student evaluations or admission processes.
To manage the risks associated with these applications, the Act mandates a thorough risk assessment process and requires that human oversight mechanisms be in place to intervene when necessary (Pinsent Masons, 2024).
What are examples of high-risk AI systems?
High-risk AI systems include applications used in critical infrastructure, public services, and safety-related areas, such as medical diagnostics, financial services, and law enforcement AI tools. Specific examples include:
- Critical infrastructure: AI systems controlling safety components in areas like road traffic, water supply, and electricity grids.
- Biometrics: Facial recognition, remote biometric identification, and emotion recognition systems.
- Education and training: AI used to assess student performance, allocate learning pathways, or monitor cheating.
- Employment: AI systems used for recruitment, candidate screening, performance evaluation, and job matching.
- Access to essential services: AI for credit scoring, risk assessment in insurance, and access to healthcare.
- Law enforcement and justice: AI systems used to analyze evidence, identify suspects, or assess criminal risk.
- Migration and border control: AI systems for document verification, risk assessment, and border access decisions.
What requirements must high-risk AI systems abide by to comply with the EU AI Act?
- Establish a risk management system throughout the AI system’s lifecycle
- Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
- Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
- Design their high risk AI system for record-keeping to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system’s lifecycle.
- Provide instructions for use to downstream deployers to enable the latter’s compliance.
- Design their high risk AI system to allow deployers to implement human oversight.
- Design their high risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
- Establish a quality management system to ensure compliance.
What are the transparency requirements for AI systems?
The EU AI Act imposes several transparency obligations on AI providers to ensure users are adequately informed:
- Disclosure: Users must be notified when they are interacting with an AI system.
- Purpose Explanation: The intended use and functionality of the AI system should be clearly communicated.
- Data Usage: Information regarding the data sources and processing methods utilized by the AI system must be accessible to users.
These requirements aim to promote informed user interactions and foster trust in AI technologies (PWC, 2024).
Who oversees compliance with the EU AI Act?
Compliance with the EU AI Act is monitored and enforced by a combination of authorities:
- European Commission: Provides overarching guidance and coordination among member states.
- National Authorities: Each EU member state designates specific bodies responsible for supervising AI activities within their jurisdiction.
- Market Surveillance Authorities: Tasked with ensuring that AI systems in the market comply with the established regulations and standards.
These entities work collaboratively to ensure consistent enforcement and to address any non-compliance issues effectively (Orrick, 2024).
What obligations do businesses face under the EU AI Act?
Businesses developing or deploying AI systems, particularly those classified as high-risk, are required to:
- Implement Robust Risk Management: Establish processes to identify and mitigate potential risks associated with their AI systems.
- Ensure Human Oversight: Develop mechanisms that allow human intervention in AI decision-making processes to prevent adverse outcomes.
- Maintain Transparency: Provide clear information about the AI system's operations, data usage, and decision-making criteria.
- Conduct Regular Audits: Periodically assess and document the AI system's compliance with the Act's requirements.
Adhering to these obligations is crucial to align with the legal framework and to promote ethical AI practices (PWC, 2024).
How does the EU AI Act affect small businesses and startups?
Recognizing the diverse capacities of organizations, the EU AI Act incorporates proportional requirements to minimize the regulatory burden on small businesses and startups. This approach ensures that while compliance is mandatory, the obligations are scaled according to the size and resources of the organization, thereby supporting innovation and competitiveness among smaller entities (PWC, 2024).
What are the impacts of the EU AI Act?
The Act regulates various roles in the AI lifecycle and is likely to result in significant compliance obligations for employers that use AI in respect of their workforce and customers (“EU AI Act Insights,” n.d.).
What are the fines, penalties, or consequences of violating the EU AI Act?
The penalties for non-compliance are up to the higher range of EUR 35 million (USD 38 million) or 7% of the company's global annual turnover (i.e. revenue) in the previous financial year (“EU AI Act Insights,” n.d.).
Within the EU AI Act’s tiered risk system (unacceptable, high, limited, and minimal risk), fines, penalties, and consequences vary depending on the risk level.

How does the EU AI Act define fundamental rights?
The EU Artificial Intelligence Act does not provide a specific definition of "fundamental rights." Instead, it emphasizes the protection of these rights by requiring deployers of high-risk AI systems to conduct Fundamental Rights Impact Assessments (FRIAs) prior to deployment. This approach aligns with the Charter of Fundamental Rights of the European Union, which outlines rights across six categories: dignity, freedoms, equality, solidarity, citizens' rights, and justice. By mandating FRIAs, the AI Act ensures that AI systems are developed and deployed in a manner that respects and upholds these essential rights (Dauzier et al., 2024).
What emphasis or focus does the EU AI Act place on fundamental rights and human oversight?
The Act places a strong emphasis on protecting fundamental rights by mandating human oversight of AI systems. This ensures that AI applications do not operate in isolation but are subject to human judgment, particularly in scenarios where decisions can significantly impact individuals' lives. By embedding human oversight, the Act aims to prevent misuse and to uphold ethical standards in AI deployment (“EU AI Act Insights”, n.d.)
How does the EU AI Act ensure fairness in AI deployment?
To promote fairness, the Act requires that AI systems, especially those used in public services, adhere to strict transparency and non-discrimination standards. This includes conducting impact assessments to identify and mitigate potential biases, ensuring equitable outcomes, and maintaining public trust in AI-driven processes (PWC, 2024).
How does the EU AI Act align with global AI governance trends?
The EU AI Act is the world’s first comprehensive law regulating artificial intelligence, and the first of its kind to be passed by a major regulator.
The EU AI Act positions the EU as a leader in AI governance by establishing a comprehensive regulatory framework that promotes ethical AI development. According to Forbes (2024), the Act’s risk-based approach serves as a model for other regions, encouraging transparency, accountability, and human rights protection in AI deployment (Keller, 2024).
The Act's extraterritorial influence, known as the “Brussels Effect,” compels global companies to comply with EU standards to maintain market access, influencing AI regulations worldwide. However, Forbes notes concerns that strict requirements could hinder innovation and competitiveness for European businesses (Keller, 2024).
Despite challenges, the EU AI Act provides a blueprint for responsible AI governance, balancing ethical oversight with the need for technological advancement.
Conclusion: The EU AI Act Has Teeth and More Deadlines Are Approaching
Addressing varied AI regulations across countries and regions requires a strategic approach. Businesses must focus on key AI Regulatory themes to ensure compliance, including:
- Governance Inventory: Documenting AI systems, their scope, limitations, and data used.
- Controls: Implementing process, change, and access controls.
- Testing & Validation: Ensuring models are tested and independently reviewed, including for ethical concerns.
- Ongoing Reviews: Continuous monitoring and periodic audits of AI systems.
- Risk Management: Identifying and mitigating risks through structured frameworks.
By proactively addressing these themes, businesses can help to ensure regulatory compliance while responsibly leveraging AI's potential.
References
Apostle, J. (2024, September 13). The EU AI Act: Oversight and enforcement. Orrick.
Art. 14 Human Oversight - EU AI Act. (n.d.).
European Commission. (n.d.). AI Act. Shaping Europe’s Digital Future.
FairNow. (2025, January 29). Understanding the EU AI Act: AI governance for organizations. FairNow.
PricewaterhouseCoopers. (2024, June 3). Navigating the path to EU AI Act compliance. PwC.
Govern and Scale All Your Enterprise AI Initiatives with ModelOp Center
ModelOp is the leading AI Governance software for enterprises and helps safeguard all AI initiatives — including both traditional and generative AI, whether built in-house or by third-party vendors — without stifling innovation.
Through automation and integrations, ModelOp empowers enterprises to quickly address the critical governance and scale challenges necessary to protect and fully unlock the transformational value of enterprise AI — resulting in effective and responsible AI systems.
To See How ModelOp Center Can Help You Scale Your Approach to AI Governance