Threshold Calibration — Derivation & Quarterly Review

Threshold calibration determines how sensitive the alerting system is. Thresholds set too low generate excessive alerts, contributing to alert fatigue and desensitisation; thresholds set too high miss genuine compliance issues until they become severe.

Initial thresholds are derived from the system’s validation performance. The critical threshold corresponds to the AISDP-declared minimum acceptable value. The warning threshold is set at a level that provides sufficient lead time for investigation and remediation before the critical threshold is breached. For drift metrics, thresholds are calibrated against the natural variability observed during the validation period.

Thresholds are reviewed quarterly at the PMM governance meeting. The review examines alert volume per threshold (are thresholds generating the right number of alerts?), false positive rate (are alerts leading to genuine issues or benign findings?), and detection latency (are warning alerts providing enough lead time before critical alerts?). Threshold adjustments are documented with their rationale and approved by the AI Governance Lead.

Key outputs
- Initial derivation from validation performance
- Warning threshold calibrated for lead time before critical
- Quarterly review of alert volume, false positive rate, and detection latency
- Documented adjustments approved by AI Governance Lead
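The derivation step described above can be sketched in code. This is a minimal illustration under stated assumptions, not the system's actual implementation: the metric values, the `k` multiplier, and the function names are all hypothetical, and the only element taken from the source is that the critical threshold equals the AISDP-declared minimum acceptable value while the warning threshold is placed relative to validation-period variability.

```python
from statistics import stdev

# Hypothetical values for illustration; in practice the critical threshold
# is the AISDP-declared minimum acceptable value for the monitored metric.
CRITICAL = 0.80
validation_scores = [0.91, 0.89, 0.92, 0.90, 0.88, 0.93, 0.90, 0.91]

def derive_warning_threshold(critical, scores, k=2.0):
    """Place the warning threshold k standard deviations of natural
    validation-period variability above the critical threshold, so a
    declining metric triggers a warning with lead time before it
    breaches the critical threshold. (For drift metrics, where larger
    values are worse, the sign of the offset flips.)"""
    return critical + k * stdev(scores)

def classify(observed, warning, critical):
    """Map an observed metric value to an alert level."""
    if observed < critical:
        return "critical"
    if observed < warning:
        return "warning"
    return "ok"

warning = derive_warning_threshold(CRITICAL, validation_scores)
```

The `k=2.0` multiplier is a tunable assumption: a larger `k` widens the warning band and buys more lead time at the cost of more warning alerts, which is exactly the trade-off the quarterly review re-examines.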
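The three quarterly-review questions (alert volume, false positive rate, detection latency) can likewise be sketched as a summary computation. The alert-record fields and the `quarterly_review` name are hypothetical; only the three review criteria come from the source.

```python
from statistics import median

def quarterly_review(alerts):
    """Summarise a quarter's alerts for the PMM governance meeting.
    Each alert is a dict with hypothetical fields:
      'genuine'        - True if investigation confirmed a real issue,
                         False if the finding was benign
      'lead_time_days' - days between the warning and the subsequent
                         critical alert, or None if none followed."""
    volume = len(alerts)
    false_positives = sum(1 for a in alerts if not a["genuine"])
    lead_times = [a["lead_time_days"] for a in alerts
                  if a["lead_time_days"] is not None]
    return {
        # Are thresholds generating the right number of alerts?
        "alert_volume": volume,
        # Are alerts leading to genuine issues or benign findings?
        "false_positive_rate": false_positives / volume if volume else 0.0,
        # Are warnings providing enough lead time before criticals?
        "median_lead_time_days": median(lead_times) if lead_times else None,
    }

summary = quarterly_review([
    {"genuine": True,  "lead_time_days": 14},
    {"genuine": False, "lead_time_days": None},
    {"genuine": True,  "lead_time_days": 10},
    {"genuine": False, "lead_time_days": None},
])
```

A summary like this would accompany, not replace, the documented rationale for any threshold adjustment approved by the AI Governance Lead.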