Phase 7: Operational Monitoring — Owner & Outputs
Phase 7 begins at deployment and continues for the system's operational life. The AI Governance Lead owns this phase.
The post-market monitoring system operates continuously, collecting metrics across five dimensions: performance, fairness, data drift, operational health, and human oversight. Alerts are generated and triaged according to the severity framework described above, which defines three tiers. The AI Governance Lead convenes quarterly PMM review meetings examining monitoring trends, operator escalation patterns, deployer feedback, complaint volumes, and the non-conformity register.
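The triage step above can be sketched as a simple tier-mapping function. This is a minimal illustration only: the tier names, metric dimensions, and thresholds below are assumptions for the sketch, not the organisation's actual severity framework.

```python
from dataclasses import dataclass

# Illustrative alert record; the five dimensions mirror those monitored:
# performance, fairness, data_drift, operational_health, human_oversight.
@dataclass
class Alert:
    dimension: str
    metric: str
    value: float
    warning_threshold: float   # assumed per-metric thresholds
    critical_threshold: float

def triage(alert: Alert) -> str:
    """Map a metric breach to one of three severity tiers (names assumed)."""
    if alert.value >= alert.critical_threshold:
        return "critical"
    if alert.value >= alert.warning_threshold:
        return "warning"
    return "informational"
```

A drift metric breaching its critical threshold would thus route straight to the critical tier, while sub-warning values are logged for trend review.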
The Internal Audit Assurance Lead conducts an annual oversight audit, testing monitoring infrastructure, escalation pathways, break-glass procedures, training currency, and non-retaliation commitments. Serious incidents are detected, triaged, reported, investigated, and remediated in accordance with the Article 73 process, with evidence preserved and systems left unaltered prior to authority notification.
System changes flow through the version control and CI/CD framework. Each change is assessed against the substantial modification thresholds. Changes crossing the threshold trigger a new conformity assessment cycle (returning to Phase 5). Changes below the threshold are documented in the AISDP change history (Module 12).
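The change-routing decision described above can be expressed as a small gate in the CI/CD pipeline. The trigger names here are placeholders; the actual substantial-modification criteria are defined elsewhere in the AISDP.

```python
# Hypothetical set of change flags that cross the substantial
# modification threshold; names are illustrative assumptions.
SUBSTANTIAL_CHANGE_TRIGGERS = {
    "intended_purpose_change",
    "architecture_change",
    "training_data_population_shift",
}

def route_change(change_flags: set) -> str:
    """Return the compliance path for a proposed system change."""
    if change_flags & SUBSTANTIAL_CHANGE_TRIGGERS:
        # Crosses the threshold: new conformity assessment (return to Phase 5).
        return "new_conformity_assessment"
    # Below the threshold: record in the AISDP change history (Module 12).
    return "aisdp_change_history"
```
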
Regulatory developments are monitored by the Legal and Regulatory Advisor and assessed for impact. The AISDP is maintained as a living document; each material change creates a new version, and the version history demonstrates continuous compliance discipline.
Key outputs
- Monthly PMM reports
- Quarterly review meeting minutes and action items
- Annual oversight audit report
- Serious incident reports (as required, within mandated timelines)
- AISDP version updates
- Updated risk register entries
- Regulatory horizon scanning summaries
Phase 7: Feedback Loop (PMM → Decision → Action → Validation → AISDP)
The PMM feedback loop is the operational mechanism that ensures monitoring findings translate into system improvements. Its value depends entirely on execution; findings that accumulate in dashboards without triggering action represent a compliance failure.
The loop follows a defined cycle: a PMM finding (alert, report, or deployer feedback) is identified; a decision authority determines the appropriate action; engineering implements the fix; validation gates confirm the fix is effective; the AISDP is updated; and the evidence pack records the complete cycle as a traceable record.

Decision authority is tiered by impact:

- Threshold adjustments: Technical SME.
- Model retraining on updated data: Technical Owner, with notice to the AI Governance Lead.
- Architecture changes or hyperparameter shifts: AI Governance Lead approval plus a substantial change assessment.
- System suspension or withdrawal: AI Governance Lead sign-off, with immediate notice to the Legal and Regulatory Advisor and affected deployers.
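The decision-authority tiering can be captured as a lookup that a workflow tool might enforce. The action-type keys are illustrative labels, not a fixed taxonomy.

```python
# Illustrative mapping from action type to required approver, following
# the tiering described above; the key names are assumptions.
DECISION_AUTHORITY = {
    "threshold_adjustment": "Technical SME",
    "model_retraining": "Technical Owner",          # notice to AI Governance Lead
    "architecture_change": "AI Governance Lead",    # plus substantial change assessment
    "suspension_or_withdrawal": "AI Governance Lead",  # notify Legal Advisor and deployers
}

def required_approver(action_type: str) -> str:
    """Return the role that must authorise a PMM-triggered action."""
    if action_type not in DECISION_AUTHORITY:
        raise ValueError(f"unknown action type: {action_type}")
    return DECISION_AUTHORITY[action_type]
```
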
PMM-triggered remediation competes with feature development and other engineering priorities. Organisations should establish a PMM action backlog separate from the general engineering backlog. Critical actions (compliance threshold breaches, serious incident corrective actions) override all other engineering work. Warning-level actions are scheduled within the next development sprint.
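The prioritisation rule above, critical actions first, then warning-level, can be sketched as a stable sort over the PMM action backlog. Severity labels match the tiers used in this section; the record shape is an assumption.

```python
# Lower number = worked first; critical PMM actions preempt all other work.
PRIORITY = {"critical": 0, "warning": 1, "informational": 2}

def schedule(backlog):
    """Order (severity, item) pairs so critical items come first.

    sorted() is stable, so items within a tier keep their arrival order.
    """
    return [item for _, item in sorted(backlog, key=lambda entry: PRIORITY[entry[0]])]
```
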
The feedback loop is itself monitored through meta-metrics: time from finding to decision, time from decision to completed fix, the share of findings resulting in system changes versus those accepted as within tolerance, and the share of fixes that successfully resolve the originating finding. A feedback loop with a median response time of six months is materially different from one with a median of two weeks, and the difference directly affects the organisation’s ability to maintain compliance under Article 72.
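The meta-metrics named above can be computed directly from feedback-loop cycle records. A minimal sketch, assuming each record carries `found`, `decided`, and `fixed` dates plus a `resolved` flag; the field names are illustrative.

```python
from datetime import date
from statistics import median

def loop_metrics(records):
    """Compute feedback-loop meta-metrics from cycle records.

    records: dicts with 'found', 'decided', 'fixed' (date) and 'resolved' (bool).
    """
    days_to_decision = [(r["decided"] - r["found"]).days for r in records]
    days_to_fix = [(r["fixed"] - r["decided"]).days for r in records if r.get("fixed")]
    return {
        "median_days_to_decision": median(days_to_decision),
        "median_days_to_fix": median(days_to_fix),
        # Share of fixes that resolved the originating finding.
        "fix_success_share": sum(r["resolved"] for r in records) / len(records),
    }
```

Tracking these medians over time makes the two-week versus six-month difference visible on a dashboard rather than anecdotal.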
Key outputs
- Feedback loop cycle records (per finding)
- PMM action backlog
- Feedback loop meta-metrics dashboard