v2.4.0

The explanation generation component requires unit tests verifying four properties. Coverage tests confirm that an explanation is produced for every inference, with no silent omissions. Attribution sum tests verify that, for additive explanation methods such as SHAP, the feature attributions sum to the difference between the model’s output for the instance and the base (expected) value. Rounding errors or implementation bugs can cause attribution sums to diverge from this identity.
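The attribution-sum property can be tested directly. The sketch below uses a linear model, for which exact Shapley attributions have the closed form w_i·(x_i − E[x_i]) and the base value is the model’s output at the feature means; the model, weights, and tolerance are illustrative assumptions, not part of any specific test suite.

```python
def predict(x, w, b):
    """Linear model: f(x) = w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def linear_attributions(x, mean, w):
    """Exact Shapley attributions for a linear model: w_i * (x_i - E[x_i])."""
    return [wi * (xi - mi) for wi, xi, mi in zip(w, x, mean)]

def check_attribution_sum(x, mean, w, b, tol=1e-9):
    """Additivity check: attributions must sum to f(x) - base_value."""
    base_value = predict(mean, w, b)          # model output at feature means
    attrs = linear_attributions(x, mean, w)
    return abs(sum(attrs) - (predict(x, w, b) - base_value)) <= tol

# Illustrative model and instance (assumed values).
w, b = [0.5, -1.2, 2.0], 0.3
mean = [1.0, 0.0, -0.5]
x = [2.0, 1.5, 0.25]
assert check_attribution_sum(x, mean, w, b)
```

In a real suite the attributions would come from the explanation method under test rather than the closed form, and the tolerance would be set from the test plan; the assertion itself is unchanged.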

Fidelity tests verify that the explanation accurately represents the model’s actual decision-making process. The fidelity metric measures how well the explanation (a simplified representation) approximates the model’s actual behaviour. If the fidelity score falls below a defined threshold, the explanation may be misleading, which undermines Article 13’s transparency objective and Article 14’s human oversight requirement.
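One common way to operationalise fidelity for a classifier is agreement rate: the fraction of inputs on which the simplified explanation model reproduces the underlying model’s prediction. The sketch below assumes this metric; the stand-in model, surrogate, evaluation inputs, and threshold value are all illustrative assumptions.

```python
def fidelity(model, surrogate, inputs):
    """Fraction of inputs where the surrogate agrees with the model."""
    agree = sum(1 for x in inputs if model(x) == surrogate(x))
    return agree / len(inputs)

def model(x):
    """Stand-in black-box classifier (assumed)."""
    return 1 if x[0] + 0.5 * x[1] > 1.0 else 0

def surrogate(x):
    """Simplified explanation model (assumed)."""
    return 1 if x[0] > 0.8 else 0

inputs = [(0.2, 0.1), (1.5, 0.0), (0.9, 0.9), (0.5, 1.2), (1.4, 0.2)]
FIDELITY_THRESHOLD = 0.8  # assumed; real thresholds come from the test plan

score = fidelity(model, surrogate, inputs)
assert score >= FIDELITY_THRESHOLD, f"fidelity {score:.2f} below threshold"
```

For regression models the same shape of test applies with an error metric (e.g. mean absolute deviation between surrogate and model outputs) compared against the threshold instead of an agreement rate.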

Format tests verify that explanations are correctly structured for their target audience. Operator-facing explanations (detailed, technical) and affected-person-facing explanations (plain language, accessible) have different format requirements. The tests confirm that each format meets its specification, that mandatory fields are populated, and that the explanation’s content is consistent with the inference output it describes.
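A format test can be expressed as a schema check per audience: every mandatory field must be present and non-empty. The field names and audience labels below are illustrative assumptions standing in for the actual format specification.

```python
# Mandatory fields per audience (assumed field names, for illustration).
OPERATOR_FIELDS = {"inference_id", "model_version",
                   "top_features", "attribution_values"}
AFFECTED_PERSON_FIELDS = {"inference_id", "summary", "main_reasons"}

def validate_format(explanation, audience):
    """Check that all mandatory fields for the audience exist and are populated."""
    required = OPERATOR_FIELDS if audience == "operator" else AFFECTED_PERSON_FIELDS
    missing = required - explanation.keys()
    empty = {f for f in required & explanation.keys()
             if explanation[f] in (None, "", [])}
    return not missing and not empty

# Illustrative explanation records (assumed content).
op = {"inference_id": "inf-001", "model_version": "2.4.0",
      "top_features": ["income", "age"], "attribution_values": [0.42, -0.13]}
ap = {"inference_id": "inf-001", "summary": "The application was declined.",
      "main_reasons": ["Reported income below the required level."]}

assert validate_format(op, "operator")
assert validate_format(ap, "affected_person")
assert not validate_format({"inference_id": "inf-002"}, "operator")
```

A fuller suite would also cross-check the explanation’s content against the inference record it describes (matching inference IDs, attributions referring to features actually present in the input), which is the consistency property the paragraph above names.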

Key outputs

  • Coverage tests confirming explanation generation for every inference
  • Attribution sum validation for additive methods
  • Fidelity threshold tests per explanation method
  • Format validation for operator and affected-person audiences