
AI Act Risk Classification: 4 Levels & Obligations

EU AI Act risk classification explained: examples, obligations per level, prohibited practices, and 2024-2027 timeline.

10 April 2026 · 7 min read · WarDek Team

AI Act Risk Classification Guide: 4 Levels, Obligations and Timeline

The EU AI Act (Regulation 2024/1689), which entered into force on 1 August 2024, establishes the world's first comprehensive legal framework for artificial intelligence. Its core mechanism is a risk-based classification system that determines what obligations apply to any given AI system. Understanding this classification is the starting point for any compliance effort — whether you develop AI systems, deploy them in your business, or use AI-enabled tools from third-party vendors.

This guide covers all four risk levels, provides concrete examples for each, maps the obligations that apply at each level, and summarizes the phased implementation timeline.

The Four Risk Levels

Level 1 — Unacceptable Risk (Prohibited Practices)

Article 5 of Regulation 2024/1689 prohibits a set of AI practices outright. These represent cases where the risks to fundamental rights and safety are deemed inherently incompatible with EU values, and no risk mitigation measures can render them acceptable.

Prohibited practices include:

- Subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm
- Exploitation of vulnerabilities due to age, disability, or social or economic situation
- Social scoring leading to detrimental or disproportionate treatment
- Predictive policing based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition in the workplace and in educational institutions (except for medical or safety reasons)
- Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions

Organizations must audit their AI deployments against this list. Using a prohibited system, even unknowingly, has created significant legal exposure since 2 February 2025, the date from which the prohibited-practices provisions apply.
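A minimal sketch of such an audit pass over an AI inventory. The practice labels and the inventory structure here are illustrative assumptions, not terms defined by the regulation; in practice each system's declared capabilities would come from your AI inventory records.

```python
# Illustrative Article 5 screening over an AI inventory.
# Practice labels are invented for the example, not official identifiers.
PROHIBITED_PRACTICES = {
    "social_scoring",
    "subliminal_manipulation",
    "emotion_recognition_workplace",
    "untargeted_facial_scraping",
}

def audit_inventory(inventory: dict[str, set[str]]) -> list[str]:
    """Return names of systems whose declared practices overlap
    the prohibited list (set intersection)."""
    return sorted(
        name for name, practices in inventory.items()
        if practices & PROHIBITED_PRACTICES
    )

flagged = audit_inventory({
    "hr-screening": {"cv_ranking"},
    "staff-monitor": {"emotion_recognition_workplace"},
})
print(flagged)  # ['staff-monitor']
```

The output is the set of systems requiring immediate legal review and, if the classification is confirmed, decommissioning.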

Level 2 — High Risk

High-risk AI systems are allowed but subject to the most demanding set of obligations in the regulation. Article 6 and Annex III define high-risk systems across two categories.

Category 1: AI systems that are themselves safety components of products already covered by EU sectoral legislation (medical devices, machinery, civil aviation, motor vehicles, etc.). These must comply with AI Act requirements in addition to the sectoral rules.

Category 2: Standalone AI systems in specific areas listed in Annex III:

- Biometrics (remote biometric identification, biometric categorization, emotion recognition)
- Critical infrastructure (safety components in the supply of water, gas, electricity, digital infrastructure, and road traffic)
- Education and vocational training (admission, evaluation, exam proctoring)
- Employment and workers' management (recruitment, task allocation, performance monitoring)
- Access to essential private and public services (credit scoring, life and health insurance pricing, emergency call dispatching)
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes

Concrete high-risk examples:

- A CV-screening tool that ranks or filters job applicants
- A credit scoring model used to decide consumer loan applications
- An AI proctoring system that flags students for suspected cheating during exams
- A remote biometric identification system deployed at a border crossing

Obligations for high-risk AI providers:

- Establish and maintain a continuous risk management system (Article 9)
- Data governance: training, validation, and testing data must be relevant, representative, and examined for bias (Article 10)
- Technical documentation per Annex IV, kept up to date (Article 11)
- Automatic event logging and record-keeping (Article 12)
- Transparency and instructions for use for deployers (Article 13)
- Effective human oversight measures (Article 14)
- Accuracy, robustness, and cybersecurity appropriate to the intended purpose (Article 15)
- A quality management system, conformity assessment, CE marking, and registration in the EU database before placing the system on the market

Level 3 — Limited Risk

Limited risk AI systems are subject primarily to transparency obligations. The rationale is that users who understand they are interacting with an AI can make informed decisions.

Obligations (Article 50):

- Inform users that they are interacting with an AI system (e.g. chatbots), unless this is obvious from the context
- Mark AI-generated or manipulated content, including deepfakes, as artificially generated, in a machine-readable form where feasible
- Inform affected persons when they are exposed to emotion recognition or biometric categorization systems

No pre-market conformity assessment is required. The obligations are operational: ensure the disclosure happens at the point of interaction.

Examples: Customer service chatbots, AI writing assistants that generate public-facing content, AI-enhanced images used in commercial communications.

Level 4 — Minimal or No Risk

The vast majority of AI systems fall into this category. The regulation does not prohibit or significantly restrict their development or use. No mandatory compliance obligations are imposed, though voluntary codes of conduct are encouraged.

Examples: AI-powered spam filters, manufacturing quality control systems using computer vision, recommendation engines for internal tools, AI-assisted document drafting for personal use.

General Purpose AI (GPAI) Models — A Separate Track

In addition to the four-tier risk classification, Regulation 2024/1689 introduces specific rules for General Purpose AI models (Chapter V, Articles 51-56). These apply to providers of large foundation models such as large language models.

All GPAI providers must:

- Draw up and maintain technical documentation of the model (Article 53)
- Provide information and documentation to downstream providers that integrate the model into their own AI systems
- Put in place a policy to comply with EU copyright law, including respecting rights holders' opt-outs from text and data mining
- Publish a sufficiently detailed summary of the content used for training

GPAI models with systemic risk (defined as models trained with compute exceeding 10^25 FLOPs, or designated by the Commission) face additional obligations (Article 55):

- Perform model evaluations, including adversarial testing
- Assess and mitigate systemic risks at Union level
- Track, document, and report serious incidents to the AI Office
- Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure

GPAI rules have largely applied since 2 August 2025.
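The 10^25 FLOP threshold can be sanity-checked with the common training-compute approximation C ≈ 6 × parameters × training tokens. This heuristic is a community convention, not a method prescribed by the regulation, and the example model sizes below are illustrative.

```python
# Rough screen against the Article 51 systemic-risk presumption.
# Uses the C ≈ 6 * N * D approximation for dense transformer training
# compute; the regulation itself does not mandate any estimation method.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 10^25 threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# Hypothetical examples: a 400B-parameter model on 15T tokens crosses
# the threshold; a 7B model on 2T tokens does not.
print(presumed_systemic_risk(400e9, 15e12))  # True  (~3.6e25 FLOPs)
print(presumed_systemic_risk(7e9, 2e12))     # False (~8.4e22 FLOPs)
```

A model below the compute threshold can still be designated as systemic-risk by the Commission, so this check is a first filter, not a conclusion.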

Implementation Timeline

The AI Act applies progressively across 2025–2027:

| Date | What applies |
|---|---|
| 1 August 2024 | Entry into force |
| 2 February 2025 | Prohibited practices (Article 5) |
| 2 August 2025 | GPAI rules + governance/penalties |
| 2 August 2026 | High-risk AI systems (Annex III, standalone) |
| 2 August 2027 | High-risk AI systems (Annex I, safety components) |
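The phased timeline above is easy to encode as a date lookup, useful for compliance planning dashboards. A minimal sketch (the milestone labels are shortened; this is a lookup, not legal advice):

```python
from datetime import date

# Phased application dates of Regulation 2024/1689, per the table above.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibited practices + AI literacy"),
    (date(2025, 8, 2), "GPAI rules + governance/penalties"),
    (date(2026, 8, 2), "high-risk (Annex III, standalone)"),
    (date(2027, 8, 2), "high-risk (Annex I, safety components)"),
]

def applicable(on: date) -> list[str]:
    """Return every obligation tranche already in application on a date."""
    return [label for d, label in MILESTONES if on >= d]

print(applicable(date(2025, 9, 1)))
# ['entry into force', 'prohibited practices + AI literacy',
#  'GPAI rules + governance/penalties']
```

By the same lookup, all five tranches apply from 2 August 2027 onward.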

Note that Article 4 (AI Literacy) applies from 2 February 2025. This means organizations that deploy or use AI systems must have appropriate literacy programs in place now.

Compliance Steps by Risk Level

| Risk Level | Immediate Actions |
|---|---|
| Unacceptable | Audit all AI systems against Art. 5 prohibited list — cease use if prohibited |
| High Risk | Initiate quality management system, technical documentation, conformity assessment |
| Limited | Implement user disclosure at interaction point |
| Minimal | Document in AI inventory; monitor for reclassification |
| GPAI (using) | Verify provider compliance; review DPA and instructions for use |

How WarDek Supports AI Act Classification

WarDek's AI compliance module includes a structured risk classification workflow aligned with Annex III and Article 6. It generates the required technical documentation templates, tracks conformity assessment status, and provides alerting when regulatory guidance updates affect your AI inventory.

Explore WarDek's AI Act compliance features and start your AI inventory today.

Key Takeaways

The AI Act's risk classification is not self-evident — identical technology can be high-risk or minimal-risk depending on its use case. A computer vision model used in a factory QC process is minimal risk; the same model used for border control is high-risk. Classification must be done by use case, not by technology type. The timeline is compressed: prohibited practices already apply, GPAI rules are live, and high-risk obligations for standalone systems apply from August 2026. There is no time to defer the inventory and classification exercise.
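This use-case dependence can be sketched as a small classifier: the same underlying model maps to different risk levels depending on where it is deployed. The context labels below are invented for the example and a real classification must work from the Annex III wording.

```python
# Toy illustration: classification follows the deployment context,
# not the technology. Area labels are illustrative stand-ins for
# the Annex III categories, not official terms.
ANNEX_III_AREAS = {
    "border_control", "recruitment", "credit_scoring",
    "education_scoring", "law_enforcement",
}

def classify(use_case: str, interacts_with_users: bool = False) -> str:
    """Return a coarse risk level for a deployment context."""
    if use_case in ANNEX_III_AREAS:
        return "high"
    if interacts_with_users:
        return "limited"
    return "minimal"

# The same computer vision model, three contexts, three outcomes:
print(classify("factory_qc"))                                  # 'minimal'
print(classify("border_control"))                              # 'high'
print(classify("support_chatbot", interacts_with_users=True))  # 'limited'
```

A production workflow would also have to screen for Article 5 prohibitions first and handle the Annex I safety-component path, which this sketch omits.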

For related reading, see our guide on AI Act obligations for SMEs.

Tags: AI Act, risk classification, EU regulation, 2024/1689, high-risk AI, GPAI
