
The five post-training fairness metrics (selection rate ratio, equalised odds, predictive parity, calibration within groups, and counterfactual fairness) capture different fairness concepts, and they can conflict with each other. A model that achieves equalised odds may fail predictive parity. A model that achieves calibration within groups may violate the four-fifths rule. Mathematical impossibility results (Chouldechova, 2017; Kleinberg et al., 2016) demonstrate that these criteria cannot all be satisfied simultaneously except in degenerate cases, such as when base rates are equal across groups or the classifier predicts perfectly.
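The conflict is structural rather than a tuning failure: a confusion-matrix identity gives FPR = p/(1−p) · (1−PPV)/PPV · TPR, so two groups with different prevalence p cannot share TPR, FPR, and PPV at once unless prediction is perfect (Chouldechova, 2017). To make the per-group quantities concrete, the sketch below computes the rates behind three of the metrics; it is illustrative only, and the function names group_fairness_report and selection_rate_ratio are assumptions, not part of any mandated toolkit.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Per-group rates underlying the post-training fairness metrics.

    y_true, y_pred: binary arrays (0/1); group: array of group labels.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        t, p = y_true[mask], y_pred[mask]
        report[g] = {
            "selection_rate": p.mean(),   # basis of the selection rate ratio
            "tpr": p[t == 1].mean(),      # equalised odds: true positive rate
            "fpr": p[t == 0].mean(),      # equalised odds: false positive rate
            "ppv": t[p == 1].mean(),      # predictive parity: precision per group
        }
    return report

def selection_rate_ratio(report):
    """Min/max selection rate across groups; below 0.8 breaches the four-fifths rule."""
    rates = [r["selection_rate"] for r in report.values()]
    return min(rates) / max(rates)

# Example with two groups of different base rates: equalising TPR/FPR here
# will generally force the groups' PPVs apart, and vice versa.
y_true = [1, 0, 1, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
report = group_fairness_report(y_true, y_pred, group)
print(report)
print(selection_rate_ratio(report))
```

In this toy data both groups have identical TPR (1.0) but different FPR and PPV, and the selection rate ratio is 0.33, so the model passes one criterion while failing others, which is exactly the trade-off the prioritisation decision must resolve.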

The organisation must decide which fairness concept takes priority for its specific system, and document the rationale. This decision is not purely technical; it is an ethical and policy choice that the AI Governance Lead makes with input from the Technical SME, the Legal and Regulatory Advisor, and the Business Owner.

The rationale should consider:

  • the system’s intended purpose and the nature of the decisions it supports
  • the consequences of different types of errors for different subgroups
  • the regulatory and legal expectations in the deployment domain (employment law may emphasise selection rate parity; financial services regulation may emphasise calibration)
  • the preferences of affected persons and stakeholders, to the extent ascertainable

The AISDP records the prioritised fairness concept, the rationale, the acceptance thresholds for the prioritised metric, and the monitoring approach for the non-prioritised metrics (which remain relevant even if they are not the primary target). The decision is revisited at each major review cycle and whenever post-market monitoring reveals fairness-relevant changes.
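As one way to picture what such an AISDP entry might contain, the snippet below sketches a possible record. The field names, the chosen concept, and the 0.05 threshold are all illustrative assumptions, not a schema or values mandated by this document.

```python
# Hypothetical AISDP fairness record; every field name and value below is an
# illustrative assumption, not a normative schema from this document.
aisdp_fairness_record = {
    "prioritised_concept": "equalised_odds",   # decision by the AI Governance Lead
    "rationale_document": "fairness-prioritisation-rationale-v1",  # assumed reference
    "acceptance_threshold": {
        "metric": "max TPR/FPR gap across groups",
        "value": 0.05,                         # assumed threshold, not prescribed
    },
    "secondary_metrics_monitored": [           # not primary targets, still tracked
        "selection_rate_ratio",
        "predictive_parity_gap",
        "calibration_error_within_groups",
    ],
    "review_triggers": [
        "major review cycle",
        "fairness-relevant change detected by post-market monitoring",
    ],
}
```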

Key outputs

  • Fairness concept prioritisation decision
  • Rationale document with stakeholder input
  • Acceptance thresholds per fairness metric