January 31, 2025

The EU Artificial Intelligence Act Unacceptable Risk Deadline Is Here: FAQs Answered

February 2, 2025 is the EU AI Act’s Deadline for Prohibiting AI Systems with Unacceptable Risk

The European Union Artificial Intelligence (AI) Act introduces a comprehensive legal framework that regulates AI systems according to their level of risk. The regulation aims to protect fundamental rights, ensure transparency, and promote human oversight. Compliance deadlines are rapidly approaching, with the first major enforcement date set for February 2, 2025, when AI systems classified as presenting unacceptable risk will be prohibited. Prohibited systems under the Act include those that are deceptive, exploitative, or discriminatory (e.g., social scoring, untargeted scraping of facial images, and certain biometric inferences) and that result in significant harm, including to civil rights. For example, AI-based pre-employment screening tests used to quantify attributes such as personality, attitudes, integrity, and emotional intelligence may fall into this category. Employers increasingly rely on such tools to streamline recruitment, but the tools pose ethical and bias risks and may be deemed "unacceptable" or "high risk" under the EU AI Act. Related laws in New York City, Illinois, and Maryland also regulate employers' use of automated hiring tools; New York City's Local Law 144, for example, requires annual bias audits of automated employment decision tools.
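
Bias audits of this kind typically compare selection rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that impact-ratio arithmetic; the group names and outcome counts are invented, and a real audit must follow the methodology (and independent-auditor requirement) of the applicable law.

```python
# Hypothetical impact-ratio calculation of the kind used in bias audits
# of automated hiring tools. All names and counts below are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# (selected, total applicants) per demographic group -- hypothetical data.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in impact_ratios(outcomes).items():
    # Ratios well below 1.0 (commonly < 0.8) flag potential adverse impact.
    print(f"{group}: impact ratio = {ratio:.2f}")
```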

Next, on August 2, 2026, "high-risk" AI systems become subject to extensive requirements, including transparency obligations and conformity assessments. This stage will have a much broader and more significant impact on enterprises, because AI systems used in critical infrastructure, safety components, education and training, and essential private and public services, such as healthcare and banking, fall under its purview. As these deadlines approach, businesses and developers must understand the regulations in order to align their AI practices accordingly.

To help enterprises and AI leaders prepare to comply with the EU AI Act, here’s a list of answers to important and frequently asked questions about the EU AI Act and its impact on enterprises.

The EU AI Act FAQs:

What is the purpose of the EU AI Act?

The Act's purpose is to drive responsible innovation: it aims to improve the functioning of the internal market, promote the uptake of human-centric and trustworthy AI, and protect against the harmful effects of AI while supporting innovation (“EU AI Act Insights,” n.d.).

What is the scope of the EU AI Act?

The Act is extraterritorial: international companies may be subject to it even if they are not based in the EU. It imposes obligations on providers, importers, distributors, and deployers of AI systems and General-Purpose AI Models (GPAIMs) (“EU AI Act Insights,” n.d.).

What is the risk-based approach of the EU AI Act?

The EU AI Act adopts a risk-based approach, classifying AI systems into four distinct categories based on their potential impact:

  1. Unacceptable Risks: AI practices that pose a clear threat to individuals' safety or fundamental rights are prohibited. This includes systems designed for subliminal manipulation or those exploiting vulnerabilities of specific groups.
  2. High-Risk AI Systems: These systems are subject to stringent requirements due to their significant impact on critical areas such as public services, law enforcement, and safety components. Examples include AI applications in healthcare diagnostics, transportation management, and educational assessments (Pinsent Masons, 2024).
  3. Limited Risk: AI systems that interact with users without significant implications fall under this category. They are required to meet specific transparency obligations, such as informing users that they are engaging with an AI system. Chatbots and AI-generated content tools are typical examples.
  4. Minimal Risk: This category encompasses AI systems with negligible or no impact on users' rights or safety. Such systems are permitted with minimal regulatory intervention. Examples include spam filters and AI applications in video games.
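
To make the four tiers above concrete, here is a minimal sketch of how a compliance team might tag entries in an internal AI inventory by tier. The systems and tier assignments are illustrative assumptions, not legal determinations.

```python
# Illustrative triage of an internal AI inventory against the Act's four
# risk tiers. Tier assignments here are assumptions, not legal advice.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"     # e.g., social scoring
    HIGH = "conformity assessment required"  # e.g., hiring, credit scoring
    LIMITED = "transparency obligations"     # e.g., chatbots
    MINIMAL = "no new obligations"           # e.g., spam filters

# Hypothetical systems mapped to tiers for compliance triage.
inventory = {
    "resume_screener": RiskTier.HIGH,
    "support_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}
for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```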

What constitutes a high-risk AI system under the EU AI Act?

High-risk AI systems are those that significantly affect critical sectors and can potentially impact individuals' rights or safety. This includes general-purpose AI models when applied in sensitive contexts, such as:

  • Healthcare: AI systems used for patient diagnostics or treatment recommendations.
  • Transportation: AI applications in autonomous driving or traffic management systems.
  • Education: AI tools employed in student evaluations or admission processes.

To manage the risks associated with these applications, the Act mandates a thorough risk assessment process and requires human oversight mechanisms that can intervene when necessary (Pinsent Masons, 2024).
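
One common way to implement such an oversight mechanism is a human-in-the-loop gate that escalates uncertain automated decisions to a person. The sketch below assumes a hypothetical confidence score and threshold; the Act does not prescribe any particular mechanism.

```python
# A minimal human-in-the-loop gate: low-confidence automated decisions
# are escalated to a human reviewer. The threshold is a made-up example.

def route_decision(confidence: float, threshold: float = 0.9) -> str:
    """Return who acts on the decision: the system or a human reviewer."""
    return "auto_process" if confidence >= threshold else "human_review"

for score in (0.95, 0.62):
    print(f"model confidence {score:.2f} -> {route_decision(score)}")
```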

What are examples of high-risk AI systems?

High-risk AI systems include applications used in critical infrastructure, public services, and safety-related areas, such as medical diagnostics, financial services, and law enforcement AI tools. Specific examples include:

  • Critical infrastructure:
    AI systems controlling safety components in areas like road traffic, water supply, and electricity grids.
  • Biometrics:
    Facial recognition, remote biometric identification, and emotion recognition systems.
  • Education and training:
    AI used to assess student performance, allocate learning pathways, or monitor cheating.
  • Employment:
    AI systems used for recruitment, candidate screening, performance evaluation, and job matching.
  • Access to essential services:
    AI for credit scoring, risk assessment in insurance, and access to healthcare.
  • Law enforcement and justice:
    AI systems used to analyze evidence, identify suspects, or assess criminal risk.
  • Migration and border control:
    AI systems for document verification, risk assessment, and border access decisions.

What requirements must high-risk AI systems meet to comply with the EU AI Act?

  • Establish a risk management system covering the entire AI system lifecycle.
  • Conduct data governance, ensuring that training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete for the intended purpose.
  • Draw up technical documentation that demonstrates compliance and gives authorities the information needed to assess it.
  • Design the high-risk AI system for record-keeping, so that it automatically records events relevant to identifying national-level risks and substantial modifications throughout its lifecycle (a logging sketch follows this list).
  • Provide instructions for use to downstream deployers to enable their compliance.
  • Design the high-risk AI system so that deployers can implement human oversight.
  • Design the high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance.
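
As a concrete illustration of the record-keeping requirement above, here is a minimal logging sketch: each lifecycle event is appended as a structured, timestamped record. The field names and file format are assumptions, not prescribed by the Act.

```python
# Minimal lifecycle audit logging: each event is appended as a structured,
# timestamped JSON record. Field names and file format are assumptions.

import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def record_event(system_id: str, model_version: str, event: str, detail: dict) -> None:
    """Append one audit record for a lifecycle event."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "event": event,  # e.g., "inference", "model_update"
        "detail": detail,
    }))

# Hypothetical usage: one inference and one substantial modification.
record_event("resume_screener", "1.4.2", "inference", {"decision": "advance"})
record_event("resume_screener", "2.0.0", "model_update", {"reason": "retraining"})
```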

What are the transparency requirements for AI systems?

The EU AI Act imposes several transparency obligations on AI providers to ensure users are adequately informed:

  • Disclosure: Users must be notified when they are interacting with an AI system.
  • Purpose Explanation: The intended use and functionality of the AI system should be clearly communicated.
  • Data Usage: Information regarding the data sources and processing methods utilized by the AI system must be accessible to users.

These requirements aim to promote informed user interactions and foster trust in AI technologies (PwC, 2024).
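
As a small illustration of the disclosure and purpose-explanation obligations, the sketch below wraps every chatbot reply in a hypothetical response schema carrying an AI-interaction notice; the field names are assumptions, not a mandated format.

```python
# Hypothetical response schema attaching an AI-interaction disclosure
# and a purpose explanation to every chatbot reply.

def wrap_response(answer: str) -> dict:
    """Attach transparency fields to an outgoing chatbot message."""
    return {
        "message": answer,
        "disclosure": "You are interacting with an AI system.",
        "purpose": "Automated customer support",  # intended use, in plain language
    }

print(wrap_response("Your order shipped yesterday."))
```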

Who oversees compliance with the EU AI Act?

Compliance with the EU AI Act is monitored and enforced by a combination of authorities:

  • European Commission: Provides overarching guidance and coordination among member states.
  • National Authorities: Each EU member state designates specific bodies responsible for supervising AI activities within their jurisdiction.
  • Market Surveillance Authorities: Tasked with ensuring that AI systems in the market comply with the established regulations and standards.

These entities work collaboratively to ensure consistent enforcement and to address any non-compliance issues effectively (Orrick, 2024). 

What obligations do businesses face under the EU AI Act?

Businesses developing or deploying AI systems, particularly those classified as high-risk, are required to:

  • Implement Robust Risk Management: Establish processes to identify and mitigate potential risks associated with their AI systems.
  • Ensure Human Oversight: Develop mechanisms that allow human intervention in AI decision-making processes to prevent adverse outcomes.
  • Maintain Transparency: Provide clear information about the AI system's operations, data usage, and decision-making criteria.
  • Conduct Regular Audits: Periodically assess and document the AI system's compliance with the Act's requirements.

Adhering to these obligations is crucial to align with the legal framework and to promote ethical AI practices (PwC, 2024).

How does the EU AI Act affect small businesses and startups?

Recognizing the diverse capacities of organizations, the EU AI Act incorporates proportional requirements to minimize the regulatory burden on small businesses and startups. This approach ensures that while compliance is mandatory, the obligations are scaled according to the size and resources of the organization, thereby supporting innovation and competitiveness among smaller entities (PwC, 2024).

What are the impacts of the EU AI Act?

The Act regulates various roles in the AI lifecycle and is likely to create significant compliance obligations for employers that use AI with respect to their workforce and customers (“EU AI Act Insights,” n.d.).

What are the fines, penalties, or consequences of violating the EU AI Act?

Penalties for non-compliance reach up to EUR 35 million (about USD 38 million) or 7% of the company's global annual turnover (i.e., revenue) in the previous financial year, whichever is higher (“EU AI Act Insights,” n.d.).

In the EU AI Act's tiered risk system (unacceptable, high, limited, and minimal risk), the fines, penalties, and consequences vary with the risk level; a short arithmetic illustration of the top penalty tier follows the sources below.

Sources: FairNow, 2025; European Parliament and Council of the European Union, 2024
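
As a quick illustration of the "whichever is higher" arithmetic in the top penalty tier, here is a minimal sketch; the turnover figures are hypothetical.

```python
# Arithmetic check of the top penalty tier: the greater of EUR 35 million
# or 7% of prior-year global annual turnover. Turnover figures below are
# hypothetical.

def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-AI violations."""
    return max(35_000_000, 0.07 * global_turnover_eur)

for turnover in (100_000_000, 1_000_000_000):
    # At EUR 100M turnover, 7% (EUR 7M) < EUR 35M, so the flat cap applies;
    # at EUR 1B, 7% (EUR 70M) exceeds it and becomes the cap.
    print(f"turnover EUR {turnover:,} -> fine cap EUR {max_fine_eur(turnover):,.0f}")
```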

How does the EU AI Act define fundamental rights?

The EU Artificial Intelligence Act does not provide a specific definition of "fundamental rights." Instead, it emphasizes the protection of these rights by requiring deployers of high-risk AI systems to conduct Fundamental Rights Impact Assessments (FRIAs) prior to deployment. This approach aligns with the Charter of Fundamental Rights of the European Union, which outlines rights across six categories: dignity, freedoms, equality, solidarity, citizens' rights, and justice. By mandating FRIAs, the AI Act ensures that AI systems are developed and deployed in a manner that respects and upholds these essential rights (Dauzier et al., 2024).

What emphasis or focus does the EU AI Act place on fundamental rights and human oversight?

The Act places a strong emphasis on protecting fundamental rights by mandating human oversight of AI systems. This ensures that AI applications do not operate in isolation but are subject to human judgment, particularly in scenarios where decisions can significantly impact individuals' lives. By embedding human oversight, the Act aims to prevent misuse and to uphold ethical standards in AI deployment (“EU AI Act Insights,” n.d.).

How does the EU AI Act ensure fairness in AI deployment?

To promote fairness, the Act requires that AI systems, especially those used in public services, adhere to strict transparency and non-discrimination standards. This includes conducting impact assessments to identify and mitigate potential biases, ensuring equitable outcomes, and maintaining public trust in AI-driven processes (PwC, 2024).

How does the EU AI Act align with global AI governance trends?

The EU AI Act is the world's first comprehensive law regulating artificial intelligence, and the first of its kind to be passed by a major regulator.

The EU AI Act positions the EU as a leader in AI governance by establishing a comprehensive regulatory framework that promotes ethical AI development. According to Forbes, the Act's risk-based approach serves as a model for other regions, encouraging transparency, accountability, and human rights protection in AI deployment (Keller, 2024).

The Act's extraterritorial influence, known as the “Brussels Effect,” compels global companies to comply with EU standards to maintain market access, influencing AI regulations worldwide. However, Forbes notes concerns that strict requirements could hinder innovation and competitiveness for European businesses (Keller, 2024). 

Despite challenges, the EU AI Act provides a blueprint for responsible AI governance, balancing ethical oversight with the need for technological advancement.

Conclusion: The EU AI Act Has Teeth and More Deadlines Are Approaching

Addressing varied AI regulations across countries and regions requires a strategic approach. Businesses must focus on key AI regulatory themes to ensure compliance, including:

  • Governance Inventory: Documenting AI systems, their scope, limitations, and data used.
  • Controls: Implementing process, change, and access controls.
  • Testing & Validation: Ensuring models are tested and independently reviewed, including for ethical concerns.
  • Ongoing Reviews: Continuous monitoring and periodic audits of AI systems.
  • Risk Management: Identifying and mitigating risks through structured frameworks.

By proactively addressing these themes, businesses can help ensure regulatory compliance while responsibly leveraging AI's potential.
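
As a concrete starting point for the first theme, a governance inventory, here is a minimal sketch of one inventory record; the schema and field names are assumptions rather than a mandated format.

```python
# Illustrative governance-inventory record documenting an AI system's
# scope, limitations, and data. The schema is an assumption, not a
# prescribed format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str                       # e.g., "high", "limited", "minimal"
    intended_purpose: str
    known_limitations: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)
    last_independent_review: Optional[str] = None  # ISO date, if any

registry = [
    AISystemRecord(
        name="resume_screener",
        risk_tier="high",
        intended_purpose="Rank applicants for recruiter review",
        known_limitations=["Not validated for non-English resumes"],
        training_data_sources=["historical_hiring_2019_2023"],
        last_independent_review="2024-11-15",
    ),
]
print(f"{len(registry)} system(s) inventoried; first: {registry[0].name}")
```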

References 

Apostle, J. (2024, September 13). The EU AI Act: Oversight and enforcement. Orrick.

Art. 14: Human oversight. (n.d.). EU AI Act.

Braun, M., Vallery, A., & Benizri, I. (2024, April 8). Prohibited AI practices: A deep dive into Article 5 of the European Union's AI Act. WilmerHale.

Braun, M., Vallery, A., & Benizri, I. (2024, July 17). What are high-risk AI systems within the meaning of the EU's AI Act, and what requirements apply to them? WilmerHale.

Dauzier, J., Waem, H., & Demircan, M. (2024, March 12). Fundamental rights impact assessments under the EU AI Act: Who, what and how? Technology's Legal Edge.

European Commission. (n.d.). AI Act. Shaping Europe's Digital Future.

European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689. EUR-Lex.

European Union Agency for Fundamental Rights. (n.d.). EU Charter of Fundamental Rights.

FairNow. (2025, January 29). Understanding the EU AI Act: AI governance for organizations. FairNow.

Keller, D. (2024, November 12). Council Post: AI regulation, global governance and challenges. Forbes.

Lewis Silkin. (n.d.). EU AI Act: Are you ready for 2 February 2025? The ban on prohibited AI systems comes into force. Lewis Silkin LLP.

PricewaterhouseCoopers. (2024, June 3). Navigating the path to EU AI Act compliance. PwC.

Rauer, N., & Kempf, A.-L. (2024, February 13). A guide to high-risk AI systems under the EU AI Act. Pinsent Masons.
