Immutable Logging — Append-Only & Cryptographic Hash Chains
Article 12 requires automatic recording of events during the system’s operation. The word “automatic” is significant: logging must be a structural property of the system, not something that depends on application code remembering to write a log entry. OpenTelemetry provides the architectural pattern: instrumenting the application at the framework level captures traces, spans, and structured log events automatically.
Immutability is enforced at the storage layer. The simplest approach is WORM (Write Once Read Many) storage. AWS S3 Object Lock in compliance mode prevents deletion or modification of log objects for a defined retention period; no user, including the account administrator, can override the lock. Azure Immutable Blob Storage and Google Cloud Logging retention locks offer equivalent capabilities.
For organisations requiring the highest assurance, cryptographic hash chains add tamper evidence. Each log entry includes a hash of the preceding entry, creating a chain that breaks visibly if any entry is modified or deleted. This is computationally inexpensive and can be implemented as a thin layer on top of any logging backend. The combination of WORM storage and hash chains provides both prevention (logs cannot be altered) and detection (alterations are visible), satisfying the immutability requirement for Article 12 compliance.
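A hash chain of this kind can be sketched in a few lines. The sketch below is illustrative, not a reference implementation: the entry structure, the SHA-256 choice, and the all-zero genesis hash are assumptions, and a production version would sit on top of the WORM-backed logging backend rather than an in-memory list.

```python
import hashlib
import json


def append_entry(chain: list[dict], event: dict) -> dict:
    """Append a log event, linking it to the hash of the preceding entry."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64  # genesis value (assumed)
    body = {"event": event, "prev_hash": prev_hash}
    # Canonical JSON serialisation so the hash is reproducible across processes.
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()
    entry = {**body, "entry_hash": entry_hash}
    chain.append(entry)
    return entry


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any modified or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
        ).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

Because each entry commits to its predecessor, an auditor who trusts only the most recent hash can detect tampering anywhere earlier in the chain.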
Key outputs
- OpenTelemetry instrumentation across all system layers
- WORM-configured log storage (S3 Object Lock, Azure Immutable Blob, or equivalent)
- Cryptographic hash chain implementation (where required)
- Module 10 documentation of the immutability mechanism
Comprehensive Event Coverage (Nine Event Types)
Gaps in logging create blind spots that undermine auditability. The logging layer must capture every material event in the system’s operation. A minimum event set of nine event types is therefore specified.
The nine event types are:

- Data ingestion events record the source, timestamp, record count, and quality check result.
- Feature computation events record the feature version and computation status.
- Inference events record the input hash, model version, raw output, and confidence score.
- Post-processing events record the rules applied, the original output, and the modified output.
- Explanation events record the method used and the feature attributions generated.
- Operator events record the review timestamp, the decision made, and the override rationale if applicable.
- Configuration change events record what changed, who changed it, and when.
- Deployment events record the version deployed and the approval evidence.
- Monitoring alert events record the alert type, severity, and initial response.

Each event must include a correlation ID that ties it to the specific inference request, enabling end-to-end trace retrieval. The comprehensiveness of the event coverage is validated through audit exercises that attempt to reconstruct the full history of a sample of inference requests.
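One way to make the schema and the correlation-ID requirement concrete is a shared event envelope with a type-specific payload. The field names, the `LogEvent` class, and the `events_for_request` helper below are hypothetical, a minimal sketch of the idea rather than the actual schema.

```python
import time
import uuid
from dataclasses import dataclass, field

# The nine mandated event types (names are illustrative).
EVENT_TYPES = {
    "data_ingestion", "feature_computation", "inference",
    "post_processing", "explanation", "operator",
    "config_change", "deployment", "monitoring_alert",
}


@dataclass
class LogEvent:
    event_type: str      # must be one of the nine types above
    correlation_id: str  # ties the event to a specific inference request
    payload: dict        # type-specific fields, e.g. model_version, input_hash
    timestamp: float = field(default_factory=time.time)

    def __post_init__(self) -> None:
        if self.event_type not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {self.event_type}")


def events_for_request(log: list[LogEvent], correlation_id: str) -> list[LogEvent]:
    """End-to-end trace retrieval: every event for one inference request, in order."""
    return sorted(
        (e for e in log if e.correlation_id == correlation_id),
        key=lambda e: e.timestamp,
    )
```

The audit exercise described above then reduces to calling the retrieval function for a sampled correlation ID and checking that the expected event types are all present.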
Key outputs
- Logging schema covering all nine event types
- Correlation ID implementation for end-to-end traceability
- Audit exercise results confirming coverage completeness
- Module 10 AISDP documentation
Log-Based Drift Detection
Aggregated log data feeds the monitoring layer’s drift detection algorithms. Changes in inference patterns, error rates, or operator behaviour that are detectable in the logs provide early warning of outcome drift. The logging layer is therefore not merely an archival function; it is an active input to the system’s ongoing compliance monitoring.
Log-based drift detection analyses trends across the nine event types described above. An increase in data ingestion quality check failures may indicate upstream source changes. A shift in the distribution of model confidence scores may indicate input drift. A change in operator override rates may signal degradation in the model’s recommendations. Each of these trends is detectable through statistical analysis of the structured log data.
The detection algorithms operate on aggregated log data, not individual records. They compute rolling statistics, compare them against historical baselines, and trigger alerts when statistically significant deviations are detected. These alerts feed into the severity-based escalation framework described above, ensuring that log-derived signals receive appropriate attention and response.
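As a sketch of what such a detection rule might look like, the function below flags a shift in one aggregated log metric (say, a daily operator override rate) by z-testing the mean of recent windows against a historical baseline. The simple z-test and the threshold of 3 are illustrative assumptions; the monitoring layer may use any suitable statistical test.

```python
import statistics


def deviation_alert(baseline: list[float], recent: list[float],
                    z_threshold: float = 3.0) -> bool:
    """Flag a statistically significant shift in a logged, aggregated metric.

    `baseline` holds historical per-window values (e.g. daily override rates);
    `recent` holds the latest windows. Returns True when the recent mean
    deviates from the baseline beyond the z-score threshold.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        # Degenerate baseline: any change at all counts as a deviation.
        return statistics.mean(recent) != mu
    # Standard error of the mean over the recent windows.
    sem = sigma / len(recent) ** 0.5
    z = (statistics.mean(recent) - mu) / sem
    return abs(z) >= z_threshold
```

Running one such check per event-type metric, per window, is cheap enough to execute on every aggregation cycle.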
Key outputs
- Log aggregation pipeline feeding drift detection algorithms
- Statistical baselines and deviation thresholds per event type
- Alert integration with the post-market monitoring framework
- Module 10 and Module 12 evidence records
Regulatory Export Capability — On-Demand NCA Format Conversion
The logging layer must support export of logs in formats suitable for regulatory inspection. National competent authorities (NCAs) may request access to system logs as part of market surveillance, incident investigation, or routine inspection. The export must be available on demand, within the response timelines expected by the relevant authority.
The regulatory export capability converts the internal log format into a structured, portable format that an external reviewer can consume without requiring access to the organisation’s logging infrastructure. This typically means exporting to standardised formats such as JSON, CSV, or XML, with accompanying metadata describing the schema, the time period covered, and the completeness of the export.
The export process should support filtering by time range, event type, and correlation ID, so that the organisation can provide precisely the records requested without disclosing unrelated operational data. Access to the export function is restricted to authorised personnel, and every export event is itself logged, creating an audit trail of regulatory data disclosures. The Technical SME tests the export capability periodically to confirm that it produces complete, accurate, and timely results.
Key outputs
- Log export function supporting NCA-compatible formats (JSON, CSV, XML)
- Filtering by time range, event type, and correlation ID
- Export event logging for audit trail
- Periodic testing of export completeness and accuracy