AI Act Risk Classification Guide: 4 Levels, Obligations and Timeline
The EU AI Act (Regulation 2024/1689), which entered into force on 1 August 2024, establishes the world's first comprehensive legal framework for artificial intelligence. Its core mechanism is a risk-based classification system that determines what obligations apply to any given AI system. Understanding this classification is the starting point for any compliance effort — whether you develop AI systems, deploy them in your business, or use AI-enabled tools from third-party vendors.
This guide covers all four risk levels, provides concrete examples for each, maps the obligations that apply at each level, and summarizes the phased implementation timeline.
The Four Risk Levels
Level 1 — Unacceptable Risk (Prohibited Practices)
Article 5 of Regulation 2024/1689 prohibits a set of AI practices outright. These represent cases where the risks to fundamental rights and safety are deemed inherently incompatible with EU values, and no risk mitigation measures can render them acceptable.
Prohibited practices include:
- Subliminal manipulation: AI systems that use subliminal techniques to manipulate persons without their awareness to influence behavior in ways that cause harm
- Exploitation of vulnerabilities: Systems that exploit vulnerabilities related to age, disability, or socioeconomic situation to distort behavior harmfully
- Social scoring: AI systems that evaluate or classify individuals based on social behavior or personal characteristics, where the resulting score leads to detrimental treatment in unrelated social contexts or treatment disproportionate to the behavior; unlike earlier drafts, the final text covers private as well as public actors
- Real-time remote biometric identification in publicly accessible spaces: Prohibited for law enforcement use, with narrow exceptions such as targeted searches for victims of abduction or trafficking, prevention of a specific and imminent terrorist threat, and locating suspects of serious crimes
- Biometric categorization inferring sensitive attributes: Systems that categorize individuals by race, political opinion, religion, or sexual orientation from biometric data
- Emotion recognition in workplaces and educational institutions: With limited exceptions for medical or safety purposes
- Predictive policing: AI systems that assess individual risk of criminal behavior based solely on profiling or personality traits
Organizations must audit their AI deployments against this list. Using a prohibited system, even unknowingly, creates significant legal exposure: the prohibited practices provisions have applied since 2 February 2025.
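To make the audit concrete, here is a minimal sketch of how an inventory entry could be screened against the Article 5 categories. Everything in it (the tag names, the `AISystem` record, the `screen_against_article_5` helper) is illustrative and hypothetical; the legal text, not a tag list, governs the actual assessment.

```python
from dataclasses import dataclass, field

# Illustrative, non-exhaustive tags for the Article 5 categories above;
# the legal text, not this list, governs the assessment.
PROHIBITED_PRACTICE_TAGS = {
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "realtime_remote_biometric_id",
    "biometric_categorisation_sensitive",
    "emotion_recognition_work_education",
    "predictive_policing_profiling",
}

@dataclass
class AISystem:
    name: str
    use_case_tags: set = field(default_factory=set)  # assigned during review

def screen_against_article_5(system: AISystem) -> list:
    """Return any prohibited-practice tags this system's use case matches."""
    return sorted(system.use_case_tags & PROHIBITED_PRACTICE_TAGS)

hr_tool = AISystem("workplace-sentiment-monitor",
                   {"emotion_recognition_work_education"})
flags = screen_against_article_5(hr_tool)
if flags:
    print(f"Escalate for legal review: {flags}")  # cease use if confirmed
```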
Level 2 — High Risk
High-risk AI systems are allowed but subject to the most demanding set of obligations in the regulation. Article 6, read together with Annexes I and III, defines high-risk systems across two categories.
Category 1: AI systems that are safety components of products (or are themselves products) covered by the EU sectoral legislation listed in Annex I (medical devices, machinery, civil aviation, motor vehicles, etc.). These must comply with AI Act requirements in addition to the sectoral rules.
Category 2: Standalone AI systems in specific areas listed in Annex III:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure (water, gas, electricity, transport)
- Education and vocational training (access, progression, assessment)
- Employment, workers' management and access to self-employment (recruitment screening, promotion decisions, task allocation, performance monitoring)
- Access to essential private and public services (creditworthiness, social benefits eligibility, emergency services dispatch)
- Law enforcement (risk assessment of individuals, polygraph equivalents, evidence reliability)
- Migration, asylum and border control (risk assessment, examination of applications)
- Administration of justice and democratic processes (AI applied to facts and laws in disputes)
Concrete high-risk examples (a classification sketch follows this list):
- A CV screening tool used in recruitment
- A credit scoring model used by banks
- A student performance prediction system used by universities
- A patient triage system in emergency medicine
- A customs risk profiling system
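Once a use case is tagged during inventory review, the Annex III screening can be done mechanically. The sketch below maps illustrative use-case tags to Annex III points; the point numbers follow the final Annex III, but the tags, the `classify` helper, and its labels are hypothetical, and a real assessment must also consider the Article 6(3) derogations.

```python
# Illustrative mapping from use-case tags to Annex III points; tag names
# are hypothetical, and the Article 6(3) derogations are not modeled here.
ANNEX_III_AREAS = {
    "recruitment_screening": "Annex III, point 4 (employment)",
    "credit_scoring": "Annex III, point 5 (essential services)",
    "student_assessment": "Annex III, point 3 (education)",
    "emergency_triage": "Annex III, point 5 (essential services)",
    "border_risk_profiling": "Annex III, point 7 (migration and borders)",
}

def classify(use_case_tag: str) -> str:
    """Return a provisional label for a use case, not a legal conclusion."""
    area = ANNEX_III_AREAS.get(use_case_tag)
    if area:
        return f"high risk: {area}"
    return "not in Annex III: assess limited / minimal risk levels"

print(classify("credit_scoring"))   # high risk: Annex III, point 5 ...
print(classify("spam_filtering"))   # not in Annex III: assess ...
```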
Obligations for high-risk AI providers:
- Establish a quality management system (Article 17)
- Conduct a conformity assessment before market placement (Article 43)
- Register in the EU database for high-risk AI systems (Article 71)
- Implement a risk management system (Article 9)
- Ensure data quality and data governance for training, validation and testing data sets (Article 10)
- Ensure technical robustness and accuracy (Article 15)
- Maintain technical documentation (Article 11, Annex IV)
- Enable logging and audit trail capabilities (Article 12; see the sketch after this list)
- Ensure transparency toward deployers (Article 13)
- Provide human oversight mechanisms (Article 14)
- Conduct post-market monitoring (Article 72)
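Of these, the logging duty (Article 12) is the most directly actionable in code. The sketch below shows one plausible shape for a decision log record; the field set and the `log_decision` helper are assumptions for illustration, since the regulation specifies the goal (automatic recording of events, traceability) rather than a schema.

```python
import json
import time
import uuid

def log_decision(event_log, model_id, inputs_ref, output, reviewer=None):
    """Append one record per automated decision (illustrative schema)."""
    event_log.append({
        "event_id": str(uuid.uuid4()),    # unique reference for audits
        "timestamp": time.time(),         # when the decision occurred
        "model_id": model_id,             # which model version decided
        "inputs_ref": inputs_ref,         # pointer to the input data used
        "output": output,                 # the system's result
        "reviewer": reviewer,             # human oversight trail (Article 14)
    })

log = []
log_decision(log, "cv-screen-v3", "s3://hiring/applicant-123", "shortlisted")
print(json.dumps(log[-1], indent=2))
```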
Level 3 — Limited Risk
Limited risk AI systems are subject primarily to transparency obligations. The rationale is that users who understand they are interacting with an AI can make informed decisions.
Obligations:
- Chatbots and conversational AI: Must inform users they are interacting with an AI system (Article 50(1))
- AI-generated synthetic content (deepfakes): Must be labeled as artificially generated or manipulated (Article 50(4))
- Emotion recognition and biometric categorization: Where not prohibited, must inform the persons exposed
No pre-market conformity assessment is required at this level. The obligations are operational: ensure the disclosure happens at the point of interaction.
Examples: Customer service chatbots, AI writing assistants that generate public-facing content, AI-enhanced images used in commercial communications.
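For a chatbot, the Article 50(1) duty reduces to making sure the disclosure is shown before the first exchange. A minimal sketch, assuming a generic `backend_reply` callable that stands in for whatever model call you actually use:

```python
class DisclosedChatSession:
    """Wrap a chat backend so the AI disclosure is always sent first.

    Sketch only: Article 50(1) requires that users are informed they are
    interacting with an AI, unless this is obvious from the context.
    """
    DISCLOSURE = "Note: you are chatting with an AI assistant, not a human."

    def __init__(self, backend_reply):
        self.backend_reply = backend_reply
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self.backend_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{self.DISCLOSURE}\n\n{answer}"
        return answer

session = DisclosedChatSession(lambda msg: f"(model answer to: {msg!r})")
print(session.reply("What are your opening hours?"))
```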
Level 4 — Minimal or No Risk
The vast majority of AI systems fall into this category. The regulation does not prohibit or significantly restrict their development or use. No mandatory compliance obligations are imposed, though voluntary codes of conduct are encouraged.
Examples: AI-powered spam filters, manufacturing quality control systems using computer vision, recommendation engines for internal tools, AI-assisted document drafting for personal use.
General Purpose AI (GPAI) Models — A Separate Track
In addition to the four-tier risk classification, Regulation 2024/1689 introduces specific rules for General Purpose AI models (Chapter V, Articles 51-56). These apply to providers of large foundation models such as large language models.
All GPAI providers must:
- Maintain technical documentation
- Provide instructions for downstream providers and deployers
- Put in place a policy to comply with EU copyright law and publish a sufficiently detailed summary of the content used for training
GPAI models with systemic risk face additional obligations; systemic risk is presumed where cumulative training compute exceeds 10^25 floating-point operations (FLOPs), and the Commission can also designate models directly (a rough screening sketch follows this list):
- Adversarial testing
- Serious incident reporting to the Commission
- Cybersecurity protections
- Energy consumption reporting as part of the technical documentation
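The compute threshold lends itself to a quick back-of-envelope check. The sketch below uses the common approximation of roughly 6 x parameters x training tokens FLOPs for a dense transformer; the figures and the helper are illustrative, and only the cumulative-compute comparison reflects the regulation itself.

```python
# Article 51 presumes systemic risk above 10^25 cumulative training FLOPs;
# the Commission can also designate models directly.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Screening check only; Commission designation can override it."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Rough rule of thumb for dense transformers: ~6 * params * tokens FLOPs.
params, tokens = 70e9, 15e12            # hypothetical 70B model, 15T tokens
est_flops = 6 * params * tokens         # ~6.3e24 FLOPs
print(presumed_systemic_risk(est_flops))  # False: just below the 10^25 line
```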
GPAI rules apply from 2 August 2025; providers of models placed on the market before that date have until 2 August 2027 to bring them into compliance.
Implementation Timeline
The AI Act applies progressively across 2025–2027:
| Date | What applies |
|---|---|
| 1 August 2024 | Entry into force |
| 2 February 2025 | Prohibited practices (Article 5) |
| 2 August 2025 | GPAI rules + governance/penalties |
| 2 August 2026 | High-risk AI systems (Annex III, standalone) |
| 2 August 2027 | High-risk AI systems (Annex I, safety components) |
Note that Article 4 (AI literacy) applies from 2 February 2025. Organizations that provide or deploy AI systems must therefore already ensure an appropriate level of AI literacy among their staff.
Compliance Steps by Risk Level
| Risk Level | Immediate Actions |
|---|---|
| Unacceptable | Audit all AI systems against the Art. 5 prohibited list; cease use if prohibited |
| High risk | Initiate quality management system, technical documentation, conformity assessment |
| Limited | Implement user disclosure at the interaction point |
| Minimal | Document in AI inventory; monitor for reclassification |
| GPAI (using) | Verify provider compliance; review DPA and instructions for use |
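These steps presuppose an AI inventory to act on. One plausible shape for an inventory record, with every field name and value invented for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class InventoryRecord:
    system_name: str
    use_case: str              # classification follows the use case
    risk_level: RiskLevel
    legal_basis: str           # e.g. the Annex III point relied on
    next_review: str           # date to re-check the classification
    open_actions: list = field(default_factory=list)

record = InventoryRecord(
    system_name="cv-screening-tool",
    use_case="recruitment shortlisting",
    risk_level=RiskLevel.HIGH,
    legal_basis="Annex III, point 4 (employment)",
    next_review="2026-02-01",
    open_actions=["technical documentation", "conformity assessment"],
)
print(record.risk_level.value, record.open_actions)
```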
How WarDek Supports AI Act Classification
WarDek's AI compliance module includes a structured risk classification workflow aligned with Annex III and Article 6. It generates the required technical documentation templates, tracks conformity assessment status, and provides alerting when regulatory guidance updates affect your AI inventory.
Explore WarDek's AI Act compliance features and start your AI inventory today.
Key Takeaways
The AI Act's risk classification is not self-evident: identical technology can be high-risk or minimal-risk depending on its use case. A computer vision model used in a factory QC process is minimal-risk; the same model used for border control is high-risk. Classification must be done by use case, not by technology type. The timeline is compressed: prohibited practices already apply, GPAI rules are live, and high-risk obligations for standalone Annex III systems apply from 2 August 2026. There is no time to defer the inventory and classification exercise.
For related reading, see our guide on AI Act obligations for SMEs.