v2.4.0

Technical Traceability (Model, Code, Data, Infrastructure, Input — All Hash-Verified)

Technical traceability answers the question: for a given system output, what produced it? The answer must identify the exact model version (serialised model file referenced by hash), the exact code version (Git commit SHA for every service involved), the exact data versions (training dataset version, feature transformation version, configuration version), the exact infrastructure state (container image versions, environment configuration), and the exact input data (the specific record processed, captured in the logging layer).
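The provenance components listed above can be captured as a single structured record whose fields are the pinned versions and hashes. The following is a minimal sketch; the class name, field names, and the truncated-join scheme for the composite identifier are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProvenanceRecord:
    """One fully pinned system state; every field is hash- or version-verified.
    (Illustrative schema -- field names are assumptions, not a standard.)"""
    model_sha256: str     # hash of the serialised model file
    code_commit: str      # Git commit SHA of the serving code
    dataset_version: str  # training dataset version
    feature_version: str  # feature transformation version
    config_version: str   # configuration version
    image_digest: str     # container image digest
    input_id: str         # identifier of the specific record processed

    def composite_id(self) -> str:
        # Deterministic composite version identifier for log tagging:
        # truncate each component to 12 characters and join them.
        parts = (self.model_sha256, self.code_commit, self.dataset_version,
                 self.feature_version, self.config_version, self.image_digest)
        return "-".join(p[:12] for p in parts)
```

Because the record is frozen and its composite identifier is derived deterministically from the pinned fields, the same system state always yields the same tag, which is what makes a single query against the identifier sufficient to retrieve the full provenance chain.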

This traceability chain enables the engineering team to reproduce any historical inference, diagnose the root cause of unexpected outputs, and demonstrate to a technical auditor that the system’s behaviour is deterministic and traceable. The model registry, code repository, data versioning system, container registry, and logging infrastructure must be integrated so that a single query against the composite version identifier retrieves the complete provenance chain.

The serving infrastructure tags every inference request with the composite system version at the point of execution. This tag is embedded in the log record and cannot be modified after the fact. Model artefacts are stored with cryptographic hashes verified at load time. Feature transformation code is shared between training and serving pipelines to eliminate training-serving skew. Deployment events are recorded in an immutable deployment ledger.

Key outputs

  • End-to-end traceability chain from inference to full provenance
  • Composite version tagging on every inference request
  • Cryptographic hash verification at model load
  • Module 10 AISDP documentation

Business & Outcome Traceability (Alignment, Experience, Satisfaction, Overrides, Complaints)

Business traceability answers a different question from technical traceability: is the system achieving the outcomes it was designed to achieve, and are those outcomes aligned with the organisation’s stated intent? This dimension is owned by product management and business stakeholders, not by the engineering team. It requires different metrics, cadences, and reporting formats.

The product manager or business owner should track five dimensions. Outcome alignment asks whether the system’s actual deployment outcomes are consistent with the intended purpose documented in AISDP Module 1. Affected person experience asks whether individuals are receiving the transparency, explanations, and redress pathways documented in Module 8. Deployer satisfaction assesses whether deployer organisations find the system useful, trustworthy, and aligned with their own compliance obligations.

Override and intervention patterns track what proportion of recommendations are modified by human operators and what those modifications reveal. Complaint and escalation volumes track whether affected persons are raising concerns and whether those concerns are being resolved. A translation layer between technical metrics and business outcomes is needed: a 0.02-point AUC-ROC drop is a technical fact whose business significance depends on its real-world impact on affected persons and deployers.
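One way to make the translation layer concrete is a rule that escalates a technical metric shift only when it co-occurs with degraded outcome signals. The sketch below is a minimal illustration under assumed inputs; every threshold is a placeholder to be set by the business owner, not a recommendation.

```python
def business_significance(auc_drop: float,
                          override_rate_delta: float,
                          complaint_rate_delta: float) -> str:
    """Illustrative translation rule: escalate a technical drift only when
    real-world outcome signals (overrides, complaints) have also moved.
    All thresholds are placeholders, not calibrated values."""
    if complaint_rate_delta > 0.10 or override_rate_delta > 0.15:
        return "escalate"  # outcome signals degraded alongside technical drift
    if auc_drop > 0.02:
        return "review"    # technical drift without confirmed business impact
    return "monitor"
```

The point of the rule is the ordering: outcome signals dominate the decision, so a 0.02-point AUC-ROC drop on its own triggers review, not escalation.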

Key outputs

  • Business traceability metrics across five dimensions
  • Translation layer between technical and business metrics
  • Periodic business outcome reporting
  • Module 12 and Module 1 AISDP evidence

Deployment Ledger (Before/After State, Authoriser, Evidence, Immutable Record)

The deployment ledger is an immutable record of every deployment event in the system’s lifecycle. Each entry captures the before-state (the version of each artefact prior to the deployment), the after-state (the version of each artefact after the deployment), the identity of the person who authorised the deployment, and the evidence reviewed as part of the authorisation (validation gate results, governance approvals, substantial modification determinations).

The ledger provides the definitive record of the system’s version history in production. Given any point in time, the ledger identifies exactly which combination of code, model, data, configuration, and infrastructure was deployed. Combined with the inference logging, this enables precise reconstruction of the system’s state for any historical inference.

The deployment ledger should be implemented as an append-only data structure, either in the version control system (as tagged commits with structured metadata) or in a dedicated audit log with immutability protections (WORM storage, cryptographic hash chains). GitOps tools such as ArgoCD and Flux naturally produce a deployment ledger through their Git-based workflow: every deployment change is a Git commit, providing an immutable audit trail of what was deployed, when, by whom, and through which approval process.
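The hash-chain variant can be sketched as an in-memory append-only structure in which each entry embeds the hash of the previous entry, so any retroactive edit invalidates every later entry. The class and method names are illustrative; a production ledger would additionally persist entries to WORM storage or Git.

```python
import hashlib
import json
from datetime import datetime, timezone


class DeploymentLedger:
    """Append-only deployment ledger with a cryptographic hash chain.
    (Illustrative sketch: entries are held in memory only.)"""

    def __init__(self):
        self._entries = []

    def append(self, before: dict, after: dict,
               authoriser: str, evidence: list) -> dict:
        prev_hash = self._entries[-1]["entry_hash"] if self._entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "before_state": before,    # artefact versions prior to deployment
            "after_state": after,      # artefact versions after deployment
            "authoriser": authoriser,  # identity of the approver
            "evidence": evidence,      # e.g. validation gate results, approvals
            "prev_hash": prev_hash,    # links this entry to its predecessor
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; a single tampered field breaks the chain."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(payload).hexdigest() != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```

A GitOps workflow gives the same guarantee for free, since each deployment commit's SHA depends on its parent; the explicit chain above is useful when the ledger lives outside version control.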

Key outputs

  • Immutable deployment ledger with before/after state records
  • Authoriser identity and evidence references per entry
  • Append-only implementation (GitOps, WORM storage, or hash chains)
  • Module 10 and Module 12 AISDP evidence