v2.4.0

Article 10(5) permits the processing of special category personal data (race, ethnicity, health, sexual orientation, religious belief, trade union membership, genetic and biometric data) strictly to support bias monitoring and detection, subject to specific conditions. This provision resolves a tension: meaningful bias detection is frequently impossible without access to the demographic data that data protection law restricts.

Before processing real special category data, the organisation must demonstrate that such processing is strictly necessary for the purposes of ensuring bias monitoring, detection, and correction, and that the purpose cannot be achieved through less intrusive means. This strict necessity test requires the organisation to actually attempt the alternatives, evaluate their adequacy, and document the results. Synthetic data frequently falls short because it fails to capture the correlational structure between protected characteristics and other features with sufficient fidelity. Anonymised data may not preserve the subgroup structure needed for disaggregated fairness analysis. The assessment should compare bias metrics computed on synthetic/anonymised data against metrics computed on a small, carefully governed sample of real data to quantify the adequacy gap.
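The adequacy-gap comparison described above can be sketched as follows. This is a minimal illustration, not a prescribed method: the fairness metric (demographic parity difference) and all function names are assumptions chosen for the example.

```python
# Hypothetical sketch: compute the same fairness metric on synthetic data
# and on a small, governed real sample, then report the absolute difference
# as the "adequacy gap". Metric choice and names are illustrative.

def demographic_parity_difference(outcomes, groups):
    """Max difference in positive-outcome rate across subgroups."""
    rates = {}
    for outcome, group in zip(outcomes, groups):
        n_pos, n_tot = rates.get(group, (0, 0))
        rates[group] = (n_pos + outcome, n_tot + 1)
    per_group = [pos / tot for pos, tot in rates.values()]
    return max(per_group) - min(per_group)

def adequacy_gap(synthetic, real):
    """Quantify how far synthetic-data metrics diverge from real-data metrics."""
    m_syn = demographic_parity_difference(*synthetic)
    m_real = demographic_parity_difference(*real)
    return abs(m_syn - m_real)

# Toy example: the synthetic data understates the disparity visible in the
# real sample, so the gap is large and the alternative is inadequate.
synthetic = ([1, 1, 0, 1, 0, 1], ["a", "a", "a", "b", "b", "b"])
real      = ([1, 1, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
gap = adequacy_gap(synthetic, real)
```

A large gap is evidence, for the documented assessment, that the less intrusive alternative cannot support disaggregated fairness analysis.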

Where the sufficiency test concludes that alternatives are insufficient, the organisation may process special category data under Article 10(5), but the purpose must be strictly limited to bias monitoring and detection. The data must not be used for model training, feature engineering, or any other purpose. The governance workflow requires a Special Category Data Processing Request from the Technical SME, a GDPR Article 9 compliance review by the DPO Liaison, and approval from the AI Governance Lead.
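The three-step approval gate can be expressed as a simple check that all conditions hold before any processing begins. The role names follow the text; the data structure, field names, and allowed-purpose string are assumptions for illustration only.

```python
# Illustrative sketch of the governance gate: a request from the Technical
# SME must pass the DPO Liaison's GDPR Article 9 review and receive AI
# Governance Lead approval, and the purpose must be strictly limited to
# bias monitoring and detection. Structure and names are hypothetical.
from dataclasses import dataclass

ALLOWED_PURPOSE = "bias monitoring and detection"

@dataclass
class ProcessingRequest:
    requester_role: str                   # must be the Technical SME
    purpose: str
    dpo_review_passed: bool = False       # GDPR Article 9 compliance review
    governance_lead_approved: bool = False

def may_process(req: ProcessingRequest) -> bool:
    """All governance conditions must hold before processing starts."""
    return (
        req.requester_role == "Technical SME"
        and req.purpose == ALLOWED_PURPOSE
        and req.dpo_review_passed
        and req.governance_lead_approved
    )

req = ProcessingRequest("Technical SME", ALLOWED_PURPOSE)
assert not may_process(req)               # reviews still outstanding
req.dpo_review_passed = True
req.governance_lead_approved = True
assert may_process(req)
```

The strict purpose check mirrors the rule that the data must not be reused for training or feature engineering: any other purpose string fails the gate.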

Key outputs

  • Sufficiency test results (synthetic and anonymised alternatives)
  • Special Category Data Processing Request
  • DPO Liaison compliance review
  • AI Governance Lead approval

Pseudonymisation & Automatic Deletion

Processing special category data under Article 10(5) requires rigorous technical and organisational safeguards. The AISDP documents the five-layer safeguard architecture and the automatic deletion or anonymisation process.

The five layers are:

  • Isolation: the special category data is stored in a dedicated, physically or logically separated environment, inaccessible from the main development and production data stores.
  • Pseudonymisation: direct identifiers are replaced with pseudonymous keys, with the mapping table stored separately under stricter access controls (HashiCorp Vault or equivalent).
  • Encryption: AES-256 at rest and TLS 1.3 in transit.
  • Access control: processing is restricted to named individuals with a documented business need, and all access events are logged in an immutable audit trail.
  • Confidential computing: for the highest assurance, the bias computation runs within a hardware-secured enclave (Intel SGX on Azure, AWS Nitro Enclaves, Google Confidential VMs).
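The pseudonymisation layer can be sketched as below, using keyed HMAC-SHA256 to derive pseudonymous keys. This is a minimal illustration: the in-memory key and mapping table stand in for a vault-backed key and a separately governed mapping store, and are not a production design.

```python
# Hypothetical pseudonymisation sketch: direct identifiers are replaced by
# keyed HMAC-SHA256 pseudonyms; the pseudonym-to-identifier mapping is kept
# in a separate store (standing in for HashiCorp Vault or equivalent).
import hmac
import hashlib
import secrets

PSEUDONYM_KEY = secrets.token_bytes(32)   # in practice: held in the vault
mapping_table = {}                        # in practice: separate, stricter store

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed pseudonymous token."""
    token = hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()
    mapping_table[token] = identifier     # reversible only via the mapping table
    return token

record = {"id": "subject-123", "ethnicity": "X", "outcome": 1}
safe_record = {**record, "id": pseudonymise(record["id"])}
```

Because the HMAC is keyed, the pseudonyms cannot be reproduced or reversed without access to both the key and the mapping table, which is why those artefacts sit under stricter controls than the pseudonymised records themselves.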

Automatic deletion is triggered after the bias detection purpose is complete. The DPO Liaison verifies deletion technically, confirming removal from all storage locations including backups, caches, and derived datasets. A simple delete command is insufficient; verification must confirm that no residual copies exist. Where anonymisation rather than deletion is applied, a re-identification risk assessment confirms that individuals cannot reasonably be re-linked.
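The verification step can be sketched as a sweep over every known storage location for residual copies. The store layout and names below are assumptions for illustration; the point is that verification inspects backups, caches, and derived datasets, not just the primary store.

```python
# Illustrative deletion-verification sketch: after issuing deletes, scan
# all known storage locations for residual copies of the governed record
# identifiers. An empty result means deletion is technically verified.

def verify_deletion(record_ids, storage_locations):
    """Return residual copies found anywhere; empty dict means verified."""
    residuals = {}
    for name, store in storage_locations.items():
        found = sorted(set(record_ids) & set(store))
        if found:
            residuals[name] = found
    return residuals

stores = {
    "primary": [],                # deleted
    "backup":  ["rec-42"],        # stale copy survives: verification fails
    "cache":   [],
    "derived": [],
}
residuals = verify_deletion(["rec-42", "rec-99"], stores)
```

Here the sweep surfaces a stale backup copy, so the DPO Liaison could not attest deletion until that copy is removed and the sweep returns empty.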

Module 4 records whether special category data was processed, the specific categories and purpose, the safeguards applied, the processing dates and scope, the deletion or anonymisation schedule, the verification results, and the DPO Liaison’s attestation of compliance.
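A Module 4 record covering those fields might take roughly the following shape. All field names and values are hypothetical; the framework does not prescribe this representation.

```python
# Hypothetical shape of a Module 4 record, mirroring the fields listed in
# the text. Names and values are illustrative only.
module4_record = {
    "special_category_data_processed": True,
    "categories": ["ethnicity"],
    "purpose": "bias monitoring and detection",
    "safeguards": ["isolation", "pseudonymisation", "encryption",
                   "access control", "confidential computing"],
    "processing_dates": {"start": "2025-01-10", "end": "2025-01-24"},
    "scope": "credit-scoring model, EU applicants",
    "deletion_schedule": "anonymise or delete within 30 days of purpose completion",
    "verification_results": "no residual copies found",
    "dpo_liaison_attestation": True,
}
```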

Key outputs

  • Five-layer safeguard implementation documentation
  • Deletion/anonymisation verification record
  • DPO Liaison compliance attestation

RAG-Specific Governance
