Explainability Summary

Model-agnostic methods include SHAP (feature attribution) and LIME (local surrogate models); model-specific methods include Grad-CAM (vision models) and attention weights (transformer models). Article 86 of the EU AI Act establishes a right to explanation: affected persons must receive meaningful information about the role of AI in decisions that affect them. The explanation methodology, scope, and limitations are documented in Module 3; the delivery mechanism is documented in Module 8. See for the detailed treatment.

Key outputs
- SHAP, LIME, Grad-CAM, attention weights
- Article 86 right to explanation
- Methodology, scope, and limitations documented
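To make the local-surrogate idea behind LIME concrete, here is a minimal sketch, not an implementation of the `lime` package itself: a hypothetical black-box model is probed by perturbing one instance, and a proximity-weighted linear model fit to those perturbations yields local feature attributions. The model, perturbation scale, and kernel width are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])  # instance to explain

# 1. Perturb the instance with Gaussian noise around x0.
X_pert = x0 + rng.normal(scale=0.5, size=(500, 2))
y_pert = black_box(X_pert)

# 2. Weight samples by proximity to x0 (RBF kernel, width chosen ad hoc).
dists = np.linalg.norm(X_pert - x0, axis=1)
weights = np.exp(-(dists ** 2) / 0.5)

# 3. Fit a weighted linear surrogate; its coefficients serve as the
#    local feature attributions at x0.
surrogate = Ridge(alpha=1.0)
surrogate.fit(X_pert, y_pert, sample_weight=weights)
print(surrogate.coef_)  # near x0 the slopes are roughly [2*x0[0], 3.0]
```

The surrogate's coefficients approximate the black box's local gradient, which is why the explanation is only valid in the neighborhood of the explained instance.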