v2.4.0

Article 5 prohibits eight categories of AI practice:

  • Subliminal, manipulative, or deceptive techniques that materially distort behaviour
  • Exploitation of vulnerabilities arising from age, disability, or social or economic situation
  • Social scoring by public authorities or on their behalf
  • Untargeted facial recognition scraping for database building
  • Emotion recognition in workplaces or educational institutions (outside narrow medical and safety exceptions)
  • Risk assessment of natural persons for criminal offending based solely on profiling
  • Biometric categorisation of natural persons based on biometric data to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation (outside narrow law enforcement exceptions)
  • Real-time remote biometric identification in publicly accessible spaces (outside narrow law enforcement exceptions)

A system falling within any of these categories cannot proceed through the AISDP process. Identifying one triggers immediate escalation to the AI Governance Lead and the Legal and Regulatory Advisor, followed by cessation of the system's operation. The risk assessment must therefore screen for prohibited practices before any other analysis begins.

The screening must be thorough: a system may inadvertently perform a prohibited function through a secondary capability or an emergent behaviour. The AI System Assessor documents the screening analysis, recording which prohibited categories were considered and why the system falls within none of them.
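The screening record described above can be sketched in code. The following is a minimal illustration, not part of the AISDP specification: the `ScreeningResult` schema, field names, and `screen` function are hypothetical, but the eight categories and the escalate-then-cease rule follow this section.

```python
from dataclasses import dataclass

# The eight Article 5 prohibited practice categories listed in this section.
PROHIBITED_CATEGORIES = [
    "Subliminal, manipulative, or deceptive techniques",
    "Exploitation of vulnerabilities (age, disability, social or economic situation)",
    "Social scoring by or on behalf of public authorities",
    "Untargeted facial recognition scraping for database building",
    "Emotion recognition in workplaces or educational institutions",
    "Criminal risk assessment based solely on profiling",
    "Biometric categorisation to infer sensitive attributes",
    "Real-time remote biometric identification in public spaces",
]

@dataclass
class ScreeningResult:
    """Documented screening analysis for one AI system (hypothetical schema)."""
    system_name: str
    # category -> rationale for non-applicability, or "TRIGGERED" if it applies
    findings: dict

    def validate(self) -> None:
        """Every category must be considered before the analysis is complete."""
        missing = set(PROHIBITED_CATEGORIES) - set(self.findings)
        if missing:
            raise ValueError(f"Screening incomplete; missing: {sorted(missing)}")

    def triggered(self) -> list:
        return [c for c, r in self.findings.items() if r == "TRIGGERED"]

def screen(result: ScreeningResult) -> bool:
    """Return True if the system may proceed through the AISDP process."""
    result.validate()
    hits = result.triggered()
    if hits:
        # Immediate escalation to the AI Governance Lead and the Legal and
        # Regulatory Advisor, followed by cessation of the system's operation.
        print(f"ESCALATE: prohibited categories triggered for "
              f"{result.system_name}: {hits}")
        return False
    return True
```

The key design point mirrors the text: `validate` rejects any analysis that has not considered all eight categories, so non-applicability is always an explicit, documented finding rather than an omission.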

Key outputs

  • Screening against all eight prohibited practice categories
  • Immediate escalation and cessation procedure if triggered
  • Documented screening analysis confirming non-applicability
  • Module 6 AISDP documentation