Personnel (0.25–0.5 FTE per System)

PMM requires dedicated analytical capacity. The PMM analyst (or team, for larger deployments) reviews monitoring dashboards, investigates alerts, prepares PMM reports, and coordinates with the engineering team on remediation. For a medium-complexity high-risk system, a reasonable estimate is 0.25 to 0.5 FTE of dedicated PMM analytical effort, supplemented by engineering support during alert investigation and remediation.

The PMM analyst role requires a combination of data science skills (to interpret metrics and investigate anomalies), regulatory awareness (to assess the compliance implications of findings), and operational discipline (to maintain the monitoring infrastructure and reporting cadence). For organisations with multiple systems, a centralised PMM team achieves economies of scale through shared tooling and cross-system pattern analysis.

Staffing continuity is critical: a PMM function that operates effectively for six months and then loses its analyst to another project degrades rapidly. The AI Governance Lead therefore treats PMM staffing as a committed operational expense.

Key outputs
- 0.25–0.5 FTE dedicated PMM analyst per system
- Engineering support during investigation and remediation
- Centralised team for multi-system economies of scale
- Committed operational staffing, not discretionary
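The staffing estimate above can be sketched as a simple planning calculation. Only the 0.25–0.5 FTE-per-system range comes from this playbook; the mid-range default and the 20% centralisation discount for multi-system teams are hypothetical placeholders for illustration.

```python
def pmm_fte_estimate(n_systems: int, fte_per_system: float = 0.35,
                     centralisation_discount: float = 0.20) -> float:
    """Total PMM analyst FTE for a portfolio of high-risk systems.

    A centralised team is assumed to save a fraction of per-system
    effort (via shared tooling and cross-system pattern analysis)
    once more than one system is monitored.
    """
    total = n_systems * fte_per_system
    if n_systems > 1:
        total *= (1 - centralisation_discount)
    return round(total, 2)

# One medium-complexity system: no centralisation benefit.
print(pmm_fte_estimate(1))   # 0.35
# Portfolio of six systems with a shared PMM team.
print(pmm_fte_estimate(6))   # 1.68
```

Treating the result as a floor rather than a target helps preserve the staffing continuity the text calls for: the estimate assumes the analyst capacity is committed, not borrowed back for other projects.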
Infrastructure & Re-Validation Testing

The monitoring infrastructure (data collection, storage, computation, alerting, dashboards) carries ongoing compute and storage costs that grow over time as data accumulates. Organisations project these costs over the system’s expected lifetime, factoring in the ten-year retention obligation.

Periodic re-validation testing at defined intervals (quarterly or biannual) provides a scheduled check beyond alert-driven investigation. Re-validation exercises rerun the full performance, fairness, and robustness test suite against current production data, establishing a fresh baseline and detecting slow degradation that continuous monitoring may not capture.

Incident response imposes unplanned costs: engineering effort for investigation and remediation, legal effort for reporting and authority interaction, and operational effort for deployer communication. A contingency budget ensures the organisation can respond without diverting resources from other critical activities.

Key outputs
- Ongoing infrastructure cost projection over system lifetime
- Quarterly or biannual re-validation testing
- Contingency budget for incident response
- Module 12 AISDP documentation
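The cost-growth dynamic above (retained data accumulates, so each month's storage bill covers everything collected so far) can be sketched as follows. The monthly data volume and unit price are hypothetical placeholders, not figures from this playbook.

```python
def storage_cost_projection(years: int, monthly_gb: float,
                            price_per_gb_month: float) -> list[float]:
    """Yearly storage bill for monitoring data under a retention
    obligation: nothing is deleted, so the billed volume in any
    month is the cumulative total of all months collected so far."""
    costs = []
    months_retained = 0
    for _ in range(years):
        yearly = 0.0
        for _ in range(12):
            months_retained += 1
            yearly += months_retained * monthly_gb * price_per_gb_month
        costs.append(round(yearly, 2))
    return costs

# Hypothetical: 50 GB of monitoring data per month at 0.02 per GB-month.
print(storage_cost_projection(3, 50, 0.02))   # [78.0, 222.0, 366.0]
```

The point the sketch makes concrete is that the annual bill grows roughly linearly year over year, so a flat-rate infrastructure budget set in year one will be materially underfunded by the end of the ten-year retention window.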
Budget Heuristic (15–25% of Annual Dev Cost)

As a planning estimate, organisations should budget between 15% and 25% of the system’s annual development cost for ongoing PMM and compliance maintenance. This figure varies significantly with system complexity, risk level, and deployment scale, but it provides a starting point for financial planning.

The budget covers personnel (PMM analyst, engineering support), infrastructure (compute, storage, tooling licences), testing (periodic re-validation exercises), deployer support (feedback management, audit visits), and incident response contingency. For systems deployed across multiple jurisdictions, it also includes the incremental multi-jurisdiction costs.

The AI Governance Lead validates the budget annually during the strategic review, adjusting for changes in system complexity, deployment scale, or regulatory requirements.

Key outputs
- 15–25% of annual development cost as PMM budget heuristic
- Five cost categories (personnel, infrastructure, testing, deployer support, incident response)
- Annual validation and adjustment
- Module 12 AISDP documentation
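A worked example of the heuristic, assuming a hypothetical 800,000 annual development cost and an illustrative split across the five cost categories; only the 15–25% range comes from this playbook, the category shares are placeholders to be replaced with the organisation's own figures.

```python
# Hypothetical share of the PMM budget per cost category (must sum to 1.0).
CATEGORY_SHARES = {
    "personnel": 0.45,
    "infrastructure": 0.20,
    "testing": 0.15,
    "deployer_support": 0.10,
    "incident_response_contingency": 0.10,
}

def pmm_budget(annual_dev_cost: float, fraction: float) -> dict[str, int]:
    """Split a PMM budget (a fraction of annual dev cost) across categories."""
    if not 0.15 <= fraction <= 0.25:
        raise ValueError("heuristic range is 15-25% of annual dev cost")
    total = annual_dev_cost * fraction
    return {cat: round(total * share) for cat, share in CATEGORY_SHARES.items()}

# Mid-range estimate for a hypothetical 800,000 annual dev cost.
budget = pmm_budget(800_000, 0.20)
print(budget["personnel"])       # 72000
print(sum(budget.values()))      # 160000
```

Running the same split at 0.15 and 0.25 gives the lower and upper bounds for the annual strategic review, where the AI Governance Lead adjusts the fraction and the category shares to match observed spend.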