Art. 6(3) Functional Criterion
Article 6(3) allows certain Annex III systems that would otherwise be classified as high-risk to be treated as lower risk if two conditions are both satisfied. The first is the functional criterion. The system's function must fall within one of the specified categories: performing a narrow procedural task, improving the result of a previously completed human activity, detecting decision-making patterns or deviations from prior patterns without replacing or influencing the completed human assessment, or performing a preparatory task to an assessment relevant to an Annex III use case. Note that a system that performs profiling of natural persons is always considered high-risk and cannot claim the exception.
The functional criterion requires the AI System Assessor to analyse what the system actually does in its deployment context, not its theoretical capability. A system that could replace human assessment but is deployed solely to assist human decision-makers may satisfy the functional criterion; the same system deployed to make autonomous decisions would not. The analysis must be grounded in the system’s actual deployment configuration, operational procedures, and the contractual commitments governing its use.
Satisfying the functional criterion alone is insufficient; the risk criterion must also be met. The AI System Assessor documents the functional analysis with specific evidence addressing which specified category applies and why.
Key outputs
- Functional analysis against Article 6(3) specified categories
- Grounding in actual deployment configuration (not theoretical capability)
- Evidence-based determination with documented reasoning
- Module 6 AISDP documentation
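As one way of making the documentation requirement concrete, the functional analysis record can be sketched as a small data structure. This is an illustrative sketch only, not part of the AISDP: the names (`Art63Category`, `FunctionalAnalysis`, `functional_criterion_documented`) are hypothetical, and the enum abbreviates the category list in Article 6(3).

```python
from dataclasses import dataclass
from enum import Enum, auto

class Art63Category(Enum):
    """Hypothetical encoding of the Article 6(3) specified categories."""
    NARROW_PROCEDURAL_TASK = auto()
    IMPROVE_COMPLETED_HUMAN_ACTIVITY = auto()
    DETECT_DECISION_PATTERNS = auto()
    PREPARATORY_TASK = auto()

@dataclass
class FunctionalAnalysis:
    """Record of the functional criterion assessment (illustrative fields)."""
    category: Art63Category          # which specified category is claimed
    deployment_evidence: list[str]   # e.g. configuration, procedures, contracts
    rationale: str                   # why the category applies in this deployment

def functional_criterion_documented(analysis: FunctionalAnalysis) -> bool:
    """A defensible claim names a category, cites evidence grounded in the
    actual deployment, and records the reasoning; an empty record fails."""
    return bool(analysis.deployment_evidence) and bool(analysis.rationale.strip())
```

The point of the sketch is that the claim is only as strong as its evidence fields: a category on its own, with no deployment-specific grounding, does not satisfy the documentation standard described above.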
Art. 6(3) Risk Criterion — No Significant Risk
The second condition for the Article 6(3) exception is the risk criterion: the system must not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This assessment considers the severity of potential harm, the number of persons potentially affected, the vulnerability of those persons, the reversibility of harm, and the availability of redress mechanisms.
The risk criterion analysis should treat the exception as a hypothesis to be tested against evidence, not as a convenient exit from compliance obligations. A conservative approach is warranted: if the analysis is borderline, treating the system as high-risk is the safer position. The consequences of incorrectly claiming the exception (deploying a non-compliant high-risk system) are substantially more severe than the cost of unnecessarily complying with high-risk requirements.
Both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve any claim of the Article 6(3) exception. Their approval confirms that the risk analysis is thorough, that the conclusion is defensible, and that the organisation accepts the residual risk of the classification decision.
Key outputs
- Risk criterion analysis (severity, population, vulnerability, reversibility, redress)
- Hypothesis-testing approach with conservative default
- Legal and Regulatory Advisor and AI Governance Lead dual approval
- Module 6 AISDP documentation
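The five risk factors and the conservative default can be sketched as a simple completeness check. All names here (`RiskFactor`, `risk_criterion_met`) are hypothetical; the rule that a borderline or incomplete analysis cannot support the exception mirrors the conservative approach described above.

```python
from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str      # severity, population, vulnerability, reversibility, redress
    finding: str   # documented evidence for this factor
    concern: bool  # True if this factor points toward significant risk

# The five factors the risk criterion analysis must address.
REQUIRED_FACTORS = {"severity", "population", "vulnerability",
                    "reversibility", "redress"}

def risk_criterion_met(factors: list[RiskFactor], borderline: bool) -> bool:
    """No-significant-risk holds only if every factor is analysed, none raises
    a concern, and the case is not borderline; borderline defaults to high-risk."""
    if {f.name for f in factors} != REQUIRED_FACTORS:
        return False  # an incomplete analysis cannot support the exception
    return not borderline and not any(f.concern for f in factors)
```

Note the asymmetry built into the sketch: any missing factor, any concern, or any borderline judgment resolves against the exception, reflecting the point that wrongly claiming it is far more costly than unnecessary high-risk compliance.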
Review & Treatment — Hypothesis Tested Against Evidence
The Article 6(3) exception assessment is treated as a hypothesis to be tested, not a conclusion to be confirmed. The AI System Assessor assembles evidence both supporting and challenging the exception claim. Supporting evidence might include the system’s narrow procedural scope, the availability of human override, and the low severity of potential harm. Challenging evidence might include the system’s influence on consequential decisions, the vulnerability of affected populations, and the difficulty of detecting errors.
The assessor weights the evidence and reaches a determination. If the determination favours the exception, both criteria are documented with the specific evidence supporting each. If the determination does not favour the exception, the system is treated as high-risk and the full AISDP proceeds. The analysis is retained regardless of outcome; an assessor or competent authority reviewing the CDR should be able to see that the exception was genuinely tested.
This rigorous treatment protects the organisation against enforcement risk. A competent authority that encounters a system claiming the Article 6(3) exception will scrutinise the analysis carefully; a superficial or one-sided analysis will not survive that scrutiny.
Key outputs
- Evidence assembled both supporting and challenging the exception
- Weighted determination with documented reasoning
- Full analysis retained regardless of outcome
- Module 6 AISDP documentation