v2.4.0

Predictive parity asks whether the model’s positive predictions are equally accurate across subgroups: that is, whether the positive predictive value (precision) is the same for each protected group. If the model’s positive predictions are correct 85% of the time for one subgroup but only 65% for another, individuals in the second subgroup face a higher risk of being incorrectly subjected to the system’s consequences.
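The per-subgroup check can be sketched in a few lines. This is a minimal illustration, not part of the AISDP specification; the function name `ppv_by_group` and the toy data are assumptions made for the example.

```python
from collections import defaultdict

def ppv_by_group(y_true, y_pred, groups):
    """Positive predictive value (precision) per subgroup:
    of all positive predictions made for a group, what fraction
    were actually correct."""
    tp = defaultdict(int)  # true positives per group
    pp = defaultdict(int)  # positive predictions per group
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yp == 1:
            pp[g] += 1
            if yt == 1:
                tp[g] += 1
    return {g: tp[g] / pp[g] for g in pp}

# Toy data: positive predictions are more reliable for group "a"
# than for group "b".
y_true = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

print(ppv_by_group(y_true, y_pred, groups))
# → {'a': 0.75, 'b': 0.5}
```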

This metric is particularly important for high-stakes decisions such as credit denial, job rejection, or benefits eligibility, where a false positive imposes a real cost on the affected person. In a recruitment screening system, a positive prediction (candidate recommended for interview) that is less reliable for one demographic group means that group experiences a higher rate of “wasted” interviews or false encouragement; negative predictions (candidate not recommended) that are less reliable mean qualified candidates from that group are disproportionately screened out.

The AISDP records the positive predictive value (precision) per protected subgroup, the parity thresholds applied, and the disparity between the best-performing and worst-performing subgroups. The report should contextualise the metric by explaining what predictive parity means for the specific deployment: which real-world consequences flow from false positives and false negatives, and how disparities in predictive accuracy translate into differential harm.
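The disparity measurement described above can be sketched as follows. The 0.1 threshold and the helper name `ppv_disparity` are illustrative assumptions; the AISDP does not prescribe a particular threshold here, and the PPV figures reuse the 85%/65% example from earlier in this section.

```python
def ppv_disparity(ppv, threshold=0.1):
    """Gap between the best- and worst-performing subgroups,
    with a simple pass/fail check against a parity threshold.
    The 0.1 default is an illustrative assumption."""
    best = max(ppv, key=ppv.get)
    worst = min(ppv, key=ppv.get)
    gap = round(ppv[best] - ppv[worst], 6)
    return {
        "best_group": best,
        "worst_group": worst,
        "disparity": gap,
        "within_threshold": gap <= threshold,
    }

ppv = {"group_a": 0.85, "group_b": 0.65}
print(ppv_disparity(ppv))
# disparity of 0.2 exceeds the 0.1 threshold, so parity fails
```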

Predictive parity can conflict with equalised odds; a model that achieves one may fail the other. Article 81 addresses the fairness concept prioritisation decision that resolves such conflicts.
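A small worked example shows the conflict. When base rates differ between groups (here 50% vs 20% truly positive), a classifier can satisfy equalised odds (equal true-positive and false-positive rates) while failing predictive parity. The confusion-matrix figures below are constructed for illustration.

```python
def rates(tp, fn, fp, tn):
    """PPV, TPR and FPR from a confusion matrix."""
    return {
        "ppv": tp / (tp + fp),  # precision
        "tpr": tp / (tp + fn),  # true positive rate
        "fpr": fp / (fp + tn),  # false positive rate
    }

# Group A: 50 of 100 individuals truly positive.
# Group B: 20 of 100 individuals truly positive.
a = rates(tp=40, fn=10, fp=10, tn=40)
b = rates(tp=16, fn=4, fp=16, tn=64)

# Equalised odds holds: both groups have TPR 0.8 and FPR 0.2.
assert a["tpr"] == b["tpr"] and a["fpr"] == b["fpr"]

# Predictive parity fails: PPV is 0.8 for A but 0.5 for B.
print(a["ppv"], b["ppv"])
# → 0.8 0.5
```

The lower base rate in group B drags its PPV down even though the classifier behaves identically on positives and negatives, which is why a single model generally cannot satisfy both criteria at once.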

Key outputs

  • Positive predictive value per protected subgroup
  • Parity assessment and disparity measurement
  • Contextualised impact analysis