v2.4.0

Systems triggering Article 50 transparency obligations include:

  • Chatbots and conversational AI, which must inform users they are interacting with an AI system
  • Emotion recognition systems, which must inform exposed persons
  • Biometric categorisation systems, which must inform categorised persons
  • Systems generating or manipulating synthetic content, including deepfakes, which must label outputs as artificially generated or manipulated
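The category-to-disclosure pairing above can be sketched as a simple lookup. This is an illustrative sketch only: the category keys and helper name are our own labels for this example, not terms defined by the Act or by the AISDP framework.

```python
# Illustrative mapping of Article 50 transparency categories to the
# disclosure each requires. Keys are informal labels, not legal terms.
ARTICLE_50_CATEGORIES = {
    "chatbot": "Inform users they are interacting with an AI system",
    "emotion_recognition": "Inform exposed persons",
    "biometric_categorisation": "Inform categorised persons",
    "synthetic_content": "Label outputs as artificially generated or manipulated",
}

def required_disclosure(category: str) -> str:
    """Return the transparency disclosure owed for a given category."""
    if category not in ARTICLE_50_CATEGORIES:
        raise ValueError(f"Not an Article 50 transparency category: {category}")
    return ARTICLE_50_CATEGORIES[category]
```

A determination step would first classify the system into one of these categories (or none), then attach the corresponding disclosure requirement to the standard AISDP.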

These systems require a standard AISDP addressing the transparency measures specific to their category. The standard AISDP is lighter than the full high-risk AISDP: it focuses on the transparency controls, the technical mechanisms for delivering the required disclosures, and the evidence that those disclosures are effective and comprehensible.

A system may be both limited-risk (triggering Article 50 obligations) and high-risk (falling within Annex III). In such cases, the full high-risk obligation set applies, and the Article 50 transparency obligations are subsumed within the Module 8 transparency documentation.
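The precedence rule in this paragraph can be expressed as a short decision function. A hypothetical sketch, assuming a boolean pre-classification of the system; the function name and return strings are illustrative, not part of the framework.

```python
def obligation_set(is_annex_iii_high_risk: bool, triggers_article_50: bool) -> str:
    """Select the applicable AISDP under the dual-classification rule:
    the full high-risk obligation set takes precedence, and Article 50
    transparency is then documented within Module 8."""
    if is_annex_iii_high_risk:
        # High-risk subsumes limited-risk: Article 50 obligations are
        # covered by the Module 8 transparency documentation.
        return "full high-risk AISDP (Article 50 subsumed in Module 8)"
    if triggers_article_50:
        return "standard AISDP scoped to Article 50 transparency"
    return "no obligation under these provisions"
```

Note that the dual-classification branch is identical to the plain high-risk branch: the limited-risk flag changes nothing once Annex III applies.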

Key outputs

  • Article 50 category determination
  • Standard AISDP scoped to transparency obligations
  • Dual classification handling where both limited and high-risk apply
  • Module 6 and Module 8 AISDP documentation