Tiered Programme — Five Levels

The AI Governance Lead tiers AI literacy training to the individual's role in the oversight pyramid:

- Level 1 (Engineering): deep technical training on model behaviour, failure modes, monitoring tools, and incident response.
- Level 2 (Operators): practical training on the specific system: its capabilities and limitations, confidence indicators, override procedures, and escalation pathways.
- Level 3 (Product Management): AI compliance obligations, business-compliance metric relationships, deployer management, and affected person rights.
- Level 4 (Compliance, Legal, DPO): EU AI Act requirements, AISDP structure, conformity assessment, and GDPR interaction.
- Level 5 (Executive): portfolio overview, risk posture, compliance status, and regulatory environment.

Each tier receives training calibrated to the decisions and actions its role requires. Generic "AI awareness" training does not satisfy the spirit of Article 4 for any tier; each needs targeted content.

Key outputs
- Five-tier training programme aligned to oversight pyramid
- Role-specific content for each tier
- Generic AI awareness insufficient for any tier
- Module 7 AISDP documentation
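The five-tier mapping above can be sketched as a simple training matrix. This is an illustrative data structure only; the tier contents are taken from the text, but the names `TRAINING_TIERS` and `curriculum_for` are assumptions, not a prescribed schema.

```python
# Illustrative sketch of the five-tier training matrix.
# Content lists mirror the tiers described in the text.
TRAINING_TIERS = {
    1: {"role": "Engineering",
        "content": ["model behaviour", "failure modes",
                    "monitoring tools", "incident response"]},
    2: {"role": "Operators",
        "content": ["system capabilities and limitations",
                    "confidence indicators", "override procedures",
                    "escalation pathways"]},
    3: {"role": "Product Management",
        "content": ["AI compliance obligations",
                    "business-compliance metric relationships",
                    "deployer management", "affected person rights"]},
    4: {"role": "Compliance, Legal, DPO",
        "content": ["EU AI Act requirements", "AISDP structure",
                    "conformity assessment", "GDPR interaction"]},
    5: {"role": "Executive",
        "content": ["portfolio overview", "risk posture",
                    "compliance status", "regulatory environment"]},
}

def curriculum_for(tier: int) -> list:
    """Return the targeted content for one tier (no generic fallback)."""
    return TRAINING_TIERS[tier]["content"]
```

Note that there is deliberately no "generic" tier in the matrix: every role resolves to targeted content.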
Operator Training — Hands-On, Calibration & Scenario Exercises

Level 2 operator training must be tailored to the specific system. Generic AI literacy training does not prepare an operator to review specific cases in the specific domain with the specific interface. Training includes hands-on exercises using the actual oversight interface, worked examples from the system's domain, calibration exercises in which the operator reviews cases with known outcomes (testing whether the operator can identify the system's errors), and scenario exercises practising the override and break-glass procedures. Calibration exercises are particularly valuable for detecting automation bias: an operator who consistently agrees with the system's recommendation on cases where the system is known to be wrong is exhibiting automation bias and requires additional training or a workload adjustment.

Key outputs
- Hands-on training with actual oversight interface
- Domain-specific worked examples
- Calibration exercises with known-outcome cases
- Override and break-glass scenario practice
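The automation-bias check behind the calibration exercises can be made concrete as a simple agreement rate on known-wrong cases. This is a minimal sketch under assumed names; the field names, the `CalibrationCase` record, and the 0.5 threshold are illustrative, not values the text prescribes.

```python
from dataclasses import dataclass

@dataclass
class CalibrationCase:
    """One calibration case: system output vs. the known correct outcome."""
    system_recommendation: str
    known_outcome: str
    operator_decision: str

def automation_bias_rate(cases: list) -> float:
    """Share of known-wrong cases where the operator agreed with the system."""
    wrong = [c for c in cases if c.system_recommendation != c.known_outcome]
    if not wrong:
        return 0.0
    agreed = sum(1 for c in wrong
                 if c.operator_decision == c.system_recommendation)
    return agreed / len(wrong)

BIAS_THRESHOLD = 0.5  # illustrative cut-off, not a regulatory value

def needs_retraining(cases: list) -> bool:
    """Flag an operator who mostly agrees with known-wrong recommendations."""
    return automation_bias_rate(cases) > BIAS_THRESHOLD
```

The key design point is that only cases where the system is known to be wrong count toward the rate: agreement on correct cases is expected and says nothing about bias.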
Training Cadence (Initial, Annual, Event-Triggered)

The training programme includes initial training before a person assumes a role in the oversight pyramid, refresher training at least annually, and event-triggered training after a significant incident, a substantial system modification, or a regulatory update. Completion is tracked by the AI Governance Lead in a learning management system (Docebo, TalentLMS, Moodle) and retained as Module 7 evidence. The LMS generates compliance reports showing, for each person in the oversight pyramid, current training status, last completed training date, and any overdue refreshers. Overdue refreshers trigger automated reminders that escalate to the AI Governance Lead.

Key outputs
- Three-cadence training (initial, annual refresher, event-triggered)
- LMS tracking with compliance reporting
- Automated overdue reminders
- Module 7 AISDP evidence
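The overdue-refresher logic the LMS compliance report relies on can be sketched as follows. The 365-day interval comes from the annual cadence in the text; the function and field names are illustrative assumptions, not any particular LMS's API.

```python
from datetime import date, timedelta

# Annual refresher cadence from the programme; illustrative constant name.
ANNUAL_REFRESHER = timedelta(days=365)

def training_status(last_completed: date, today: date) -> str:
    """Return 'current' or 'overdue' against the annual refresher cadence."""
    if today - last_completed <= ANNUAL_REFRESHER:
        return "current"
    return "overdue"

def overdue_people(register: dict, today: date) -> list:
    """People whose refresher is overdue; in the programme these trigger
    automated reminders escalating to the AI Governance Lead."""
    return [name for name, completed in register.items()
            if training_status(completed, today) == "overdue"]
```

A real deployment would also carry event-triggered retraining flags (incident, modification, regulatory update) alongside the date check; this sketch covers only the annual cadence.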
Records & Certification (LMS Tracking, Operator Certification)

Training completion records are retained as evidence for the AISDP's quality management documentation. For operators of high-risk AI systems, certification records confirm that the operator has completed the required training and demonstrated competence through the calibration and scenario exercises. Certification is a prerequisite for operating the system: an operator whose certification has lapsed (refresher overdue) should not operate the system until recertified. The LMS enforces this through automated access control where feasible, or through the AI Governance Lead's manual oversight of the certification register.

Key outputs
- Training completion records retained as Module 7 evidence
- Operator certification as prerequisite for system operation
- Lapsed certification prevents system operation
- LMS or manual certification register
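Where automated access control is feasible, the certification gate amounts to a date check before each operating session. This is a minimal sketch of a manual certification register; the class, method, and exception names are assumptions for illustration.

```python
from datetime import date

class CertificationLapsedError(Exception):
    """Raised when an uncertified or lapsed operator requests access."""

class CertificationRegister:
    """Illustrative certification register: lapsed certification blocks
    operation until the operator is recertified."""

    def __init__(self):
        self._expiry = {}  # operator name -> certification expiry date

    def certify(self, operator: str, valid_until: date) -> None:
        """Record a completed certification and its expiry date."""
        self._expiry[operator] = valid_until

    def is_certified(self, operator: str, today: date) -> bool:
        """True only if the operator holds an unexpired certification."""
        expiry = self._expiry.get(operator)
        return expiry is not None and today <= expiry

    def authorise_operation(self, operator: str, today: date) -> None:
        """Gate system operation on a current certification."""
        if not self.is_certified(operator, today):
            raise CertificationLapsedError(
                f"{operator} must be recertified before operating the system")
```

Failing closed is the point of the design: an operator absent from the register is treated the same as one whose certification has expired.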