
Level 2: AI System Operators — Personnel & Function

Level 2 comprises the human operators who interact with the AI system’s outputs in daily operation. For a recruitment system, these are the recruiters using the screening tool; for a credit scoring system, the credit analysts reviewing the model’s recommendations. They exercise the override, intervention, and escalation capabilities documented in AISDP Module 7.

Operators are trained and certified on the system’s capabilities, limitations, and known failure modes. They understand the meaning of confidence indicators and explanation outputs, know when and how to override recommendations, and have a clear, low-friction escalation pathway for reporting concerns. They must recognise patterns suggesting the system is behaving differently from its documented intended purpose.

Level 2 provides the most direct observation of the system’s real-world behaviour. Automated monitoring (Level 1) detects technical anomalies; human operators detect decision quality issues, contextual inappropriateness, and fairness concerns that metrics alone cannot capture.

Key outputs

  • Human operators providing real-time output oversight
  • Trained and certified on system capabilities and limitations
  • Override, intervention, and escalation capability
  • Direct observation of real-world behaviour quality

Level 2: AI Literacy Requirements (Art. 4)

Article 4 requires AI literacy for all persons involved in AI system operation. For Level 2 operators, this means understanding how the system works at a conceptual level: knowing what it does and does not do, without needing the underlying mathematics. Operators must recognise the difference between the system’s recommendations and their own professional judgement.

Operators understand the risks of automation bias and the importance of independent evaluation. They know the signs of output drift: the system suddenly recommending a different proportion of candidates, or consistently disagreeing with the operator’s assessment for a particular case type (a simple statistical check for the first sign is sketched after the list below). They understand that the system’s confidence score reflects statistical calibration, not certainty.

AI literacy for operators is delivered through hands-on training with the actual oversight interface, not generic AI awareness courses. The training programme is refreshed annually and after any substantial system modification.

Key outputs

  • Conceptual understanding of system behaviour
  • Automation bias awareness and independent evaluation skills
  • Output drift recognition capability
  • Hands-on training with the actual system interface
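The first drift sign above, a shift in the proportion of recommended candidates, lends itself to a cheap statistical check that can run alongside operator judgement. A minimal sketch follows; the function name, the window sizes in the example, and the 3-sigma threshold are illustrative assumptions, not documented monitoring parameters.

```python
import math

def recommendation_rate_drift(baseline_recs: int, baseline_total: int,
                              recent_recs: int, recent_total: int,
                              z_threshold: float = 3.0) -> bool:
    """Flag a shift in the share of positive recommendations.

    Compares the recent recommendation rate against a baseline window
    using a two-proportion z-test. Returns True when the difference
    exceeds `z_threshold` standard errors and therefore warrants
    operator escalation and review.
    """
    p_base = baseline_recs / baseline_total
    p_recent = recent_recs / recent_total
    # Pooled proportion under the null hypothesis of no drift.
    p_pool = (baseline_recs + recent_recs) / (baseline_total + recent_total)
    se = math.sqrt(p_pool * (1 - p_pool)
                   * (1 / baseline_total + 1 / recent_total))
    if se == 0:
        return False  # Degenerate case: all-or-nothing rates, no signal.
    return abs(p_recent - p_base) / se > z_threshold

# Example: the baseline window recommended 300 of 1,000 candidates (30%);
# this week the system recommended 95 of 200 (47.5%) -> flagged.
if recommendation_rate_drift(300, 1000, 95, 200):
    print("Recommendation rate drift detected - escalate for review")
```

A check like this does not replace the operator’s judgement; it gives the judgement-based trigger a quantitative counterpart for the one drift sign that is easy to count.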

Level 2: Escalation Triggers

Level 2 escalation triggers include: a pattern of outputs inconsistent with the operator’s professional judgement; outputs appearing to disadvantage a particular group of affected persons; situations the system’s training did not anticipate (novel input types, unusual circumstances); and any case where the operator believes the system may be causing harm. These triggers depend on operator judgement rather than automated thresholds.

The escalation pathway must be low-friction: a single button or form in the oversight interface that captures the operator’s concern, the affected case identifier, and the reason for escalation (one possible record structure is sketched after the list below). High-friction pathways, such as a formal written report, supervisor pre-approval, or a multi-step process, suppress legitimate escalations.

Escalation data is captured, aggregated, and analysed as part of the human oversight monitoring. A declining escalation rate over time warrants investigation.

Key outputs

  • Four categories of operator escalation trigger
  • Low-friction escalation pathway in the oversight interface
  • Escalation data captured for human oversight monitoring
  • Judgement-based triggers complementing automated detection
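To make the “single button or form” concrete, the sketch below shows one possible shape for the record the oversight interface might capture per escalation. The field names, the trigger enumeration (mirroring the four categories above), and the dataclass design are assumptions for illustration, not a documented schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class EscalationTrigger(Enum):
    # The four operator trigger categories described above.
    JUDGEMENT_INCONSISTENT = "pattern inconsistent with professional judgement"
    GROUP_DISADVANTAGE = "outputs appear to disadvantage a particular group"
    UNANTICIPATED_SITUATION = "situation the system's training did not anticipate"
    POTENTIAL_HARM = "operator believes the system may be causing harm"

@dataclass
class EscalationRecord:
    """One low-friction escalation, submitted from the oversight interface."""
    case_id: str                # Affected case identifier
    operator_id: str            # Who raised the concern
    trigger: EscalationTrigger  # Which of the four categories applies
    concern: str                # Free-text description of the concern
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: one form submission yields a complete, analysable record.
record = EscalationRecord(
    case_id="CASE-2024-0815",
    operator_id="recruiter-042",
    trigger=EscalationTrigger.GROUP_DISADVANTAGE,
    concern="Shortlist rate for part-time applicants dropped sharply this week.",
)
```

Keeping the record this small is the point: everything the operator must supply fits in one screen, and the structured trigger field makes later aggregation straightforward.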

Level 2: Non-Retaliation Protection

Operators must be able to escalate concerns without fear of negative consequences. If operators believe that raising concerns will lead to reprimand, performance penalties, or career disadvantage, they will not escalate, and the organisation loses its most valuable source of real-world feedback.

An explicit non-retaliation commitment for good-faith AI concern reporting is communicated by the AI Governance Lead during operator training, reinforced by management, and enforceable through the organisation’s whistleblower protection mechanisms. The commitment extends to override decisions: an operator who overrides the system’s recommendation in good faith and turns out to be wrong should not face negative consequences for exercising the judgement the Article 14 framework requires of them.

Non-retaliation is verified annually by the Internal Audit Assurance Lead through confidential operator surveys and escalation pattern analysis. A sudden drop in escalation rates following a personnel change or management communication warrants investigation (a simple before/after check is sketched after the list below).

Key outputs

  • Explicit non-retaliation commitment communicated during training
  • Coverage extends to override decisions and escalations
  • Annual verification through confidential surveys
  • Non-retaliation commitment documented in AISDP Module 7
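One way to operationalise the “sudden drop” signal is to compare escalation counts in fixed windows before and after a known event, such as a personnel change or a management communication. A minimal sketch, assuming weekly counts, an eight-week window, and a 50% drop threshold; all three parameters are illustrative assumptions, not documented verification criteria.

```python
def escalation_rate_drop(weekly_counts: list[int], event_week: int,
                         window: int = 8, drop_threshold: float = 0.5) -> bool:
    """Flag a sudden fall in escalations around a known event.

    Compares mean weekly escalations in the `window` weeks before
    `event_week` against the `window` weeks after it. Returns True when
    the post-event rate falls below `drop_threshold` of the pre-event
    rate, which warrants investigation under the annual verification.
    """
    before = weekly_counts[max(0, event_week - window):event_week]
    after = weekly_counts[event_week:event_week + window]
    if not before or not after:
        return False  # Not enough history on one side of the event.
    rate_before = sum(before) / len(before)
    rate_after = sum(after) / len(after)
    if rate_before == 0:
        return False  # No baseline escalations to compare against.
    return rate_after < drop_threshold * rate_before

# Example: ~6 escalations/week before a management change, ~2 after -> flagged.
counts = [6, 5, 7, 6, 6, 5, 7, 6, 2, 1, 2, 3, 2, 1, 2, 2]
if escalation_rate_drop(counts, event_week=8):
    print("Escalation rate dropped sharply after the event - investigate")
```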