The EU AI Act establishes a four-tier risk classification framework that determines the obligations attaching to each AI system. Understanding where a system falls within this framework is the precondition for every subsequent compliance activity.
Tier 1: Prohibited Practices (Article 5). Systems that deploy subliminal or purposefully manipulative techniques, exploit the vulnerabilities of specific groups, implement social scoring, or perform untargeted scraping of facial images to build facial recognition databases are prohibited outright. Emotion recognition in workplaces and educational institutions (except where intended for medical or safety reasons), prediction of criminal offending based solely on profiling, and real-time remote biometric identification in publicly accessible spaces (outside narrow law enforcement exceptions) also fall within this tier. These systems cannot proceed through the AISDP process; identifying one triggers immediate escalation and cessation.
Tier 2: High-Risk Systems (Article 6 and Annex III). Systems are high-risk if they fall within the eight Annex III domains (biometrics; critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; administration of justice and democratic processes) or constitute safety components of products governed by Annex I harmonisation legislation. These systems require the full AISDP comprising all twelve modules, together with conformity assessment, CE marking, and EU database registration.
Tier 3: Limited Risk (Article 50). Systems that trigger transparency obligations, such as chatbots, emotion recognition systems, biometric categorisation systems, and systems generating or manipulating synthetic content, require a Standard AISDP addressing the applicable transparency measures.
Tier 4: Minimal Risk. Systems that fall within none of the above categories require a Baseline AISDP recording the classification rationale.
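The four tiers are checked in order of severity, with the first match governing. The ordering can be sketched as follows; `SystemProfile` and its field names are illustrative intake flags, not terms from the Act, and in practice obligations can cumulate (a high-risk system may also carry transparency duties):

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical intake record; field names are illustrative, not from the Act."""
    prohibited_practice: bool = False       # any Article 5 practice present
    annex_iii_domain: bool = False          # falls within an Annex III domain
    annex_i_safety_component: bool = False  # safety component under Annex I legislation
    transparency_trigger: bool = False      # chatbot, emotion recognition, synthetic content, etc.

def classify(profile: SystemProfile) -> str:
    """Return the governing risk tier; checks run from most to least severe."""
    if profile.prohibited_practice:
        return "prohibited"      # Tier 1: escalate and cease, no AISDP
    if profile.annex_iii_domain or profile.annex_i_safety_component:
        return "high-risk"       # Tier 2: full twelve-module AISDP
    if profile.transparency_trigger:
        return "limited-risk"    # Tier 3: Standard AISDP
    return "minimal-risk"        # Tier 4: Baseline AISDP
```

For example, `classify(SystemProfile(annex_iii_domain=True))` returns `"high-risk"`; the resulting tier is what the Classification Decision Record captures.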
The Article 6(3) exception allows certain systems that would otherwise be classified as high-risk to be treated as lower risk, provided two conditions are both satisfied. First, the system must be limited to one of the following: performing a narrow procedural task; improving the result of a previously completed human activity; detecting decision-making patterns or deviations from them without replacing or influencing the completed human assessment absent proper human review; or performing a preparatory task to an assessment relevant to an Annex III use case. Second, the system must not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. A system that performs profiling of natural persons is always considered high-risk and cannot rely on the exception. Both limbs must be met, and any reliance on the exception must be rigorously documented.
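The two-limb test lends itself to a mechanical check during the exception assessment. A minimal sketch, with illustrative parameter names; the preparatory-task limb and the profiling carve-out reflect the full text of Article 6(3):

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_review: bool,
    preparatory_task_only: bool,
    performs_profiling: bool,
    poses_significant_risk: bool,
) -> bool:
    """Two-limb Article 6(3) test; parameter names are illustrative, not from the Act."""
    # Carve-out: profiling of natural persons keeps a system high-risk regardless.
    if performs_profiling:
        return False
    # Limb 1: at least one of the narrow functional criteria applies.
    functional = (
        narrow_procedural_task
        or improves_prior_human_activity
        or detects_patterns_without_replacing_review
        or preparatory_task_only
    )
    # Limb 2: no significant risk to health, safety, or fundamental rights.
    return functional and not poses_significant_risk
```

A `True` result supports treating the system as lower risk; either way, the inputs and outcome belong in the documented exception assessment.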
Key outputs
- Classification Decision Record (CDR) with risk tier determination
- Article 6(3) exception assessment (where applicable)