Phase 3: Architecture & Design — Owner & Outputs
Phase 3 runs during Weeks 4 to 8. The Technical Owner leads, with contributions from the Technical SME and AI System Assessor.
The objective is to design the system architecture informed by the risk assessment, select the model approach, and establish the data governance framework. The phase begins with the Statement of Business Intent, documenting the system’s purpose, constraints, prohibited outcomes, ethical framework, and transparency commitment across four audiences: deployers, affected persons, regulators, and internal stakeholders.
Model selection follows the compliance criteria described above, evaluating each candidate against six dimensions: documentability, testability, auditability, bias detectability, maintainability, and determinism. The full spectrum of decisioning approaches is assessed, including heuristic systems, statistical models, neural networks, and LLMs. Model origin risk, copyright risk, and nation-alignment risk are all evaluated.
The layered architecture is designed with per-layer compensating controls against intent and outcome drift. The data governance framework is established, and the version control strategy, CI/CD pipeline design, and infrastructure-as-code approach are defined. The cybersecurity threat model is developed using the STRIDE and PASTA methodologies.
Key outputs
- Statement of Business Intent (approved by AI Governance Lead and Business Owner)
- Model selection rationale document (AISDP Module 3)
- System architecture document with dependency maps (AISDP Module 3)
- Data governance plan (AISDP Module 4)
- Version control and CI/CD design (AISDP Module 2)
- Cybersecurity threat model (AISDP Module 9)
Phase 3: Governance Gate (Architecture Review)
Phase 3 concludes with a governance gate: the Technical SME, Legal and Regulatory Advisor, and AI Governance Lead conduct a formal architecture review with sign-off confirming that the design satisfies the risk mitigation plan.
This review verifies that every risk identified in Phase 2 has a corresponding architectural control. The eight-layer reference architecture should demonstrate per-layer protections against both intent drift and outcome drift. The model selection rationale must address compliance criteria scores, and any model origin risks or IP exposure must have documented mitigations.
The Legal and Regulatory Advisor confirms that the architecture supports all applicable regulatory requirements, that the data governance plan addresses Article 10 obligations, and that the cybersecurity threat model aligns with Article 15. The review should also verify that the version control and CI/CD design will produce compliance evidence as a byproduct of the engineering workflow, rather than leaving documentation as a retrospective exercise.
Architectural decisions made at design time have downstream implications for the system’s eventual decommissioning. Systems designed with clear infrastructure-as-code definitions, isolated credential namespaces, and modular data storage are substantially easier to decommission in a controlled and auditable manner. The architecture review should consider decommission-readiness as a non-functional requirement.
Key outputs
- Signed architecture review record
- Confirmation that design satisfies the risk mitigation plan
Phase 3: Eight-Layer Reference Architecture & Per-Layer Controls
The reference architecture structures a high-risk AI system as eight layers, each providing specific compensating protections.
Layer 1 (Data Ingestion) enforces schema validation, input range enforcement based on training data distributions, prohibited feature blocking as a hard technical control, and data minimisation for GDPR compliance. Distribution monitoring at this layer computes real-time summary statistics against the training baseline. Layer 2 (Feature Engineering) maintains training-serving consistency through feature stores and a single computation specification, monitors feature distributions against the training baseline, and maintains a feature registry with proxy variable flags and justifications.
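As an illustration of the Layer 1 controls, a minimal ingestion gate might combine prohibited-feature blocking (a hard rejection, not a silent drop) with range enforcement against the training baseline. The feature names, ranges, and result type below are hypothetical placeholders; real values would come from the data governance plan.

```python
from dataclasses import dataclass, field

# Hypothetical training-baseline ranges and prohibited features.
TRAINING_RANGES = {"income": (0.0, 500_000.0), "age": (18.0, 100.0)}
PROHIBITED_FEATURES = {"ethnicity", "religion"}

@dataclass
class IngestionResult:
    accepted: bool
    violations: list = field(default_factory=list)

def validate_record(record: dict) -> IngestionResult:
    violations = []
    # Hard technical control: prohibited features cause rejection.
    blocked = PROHIBITED_FEATURES & record.keys()
    if blocked:
        violations.append(f"prohibited features present: {sorted(blocked)}")
    # Schema and range enforcement against the training data distribution.
    for name, (lo, hi) in TRAINING_RANGES.items():
        if name not in record:
            violations.append(f"missing field: {name}")
        elif not isinstance(record[name], (int, float)):
            violations.append(f"non-numeric value for {name}")
        elif not lo <= record[name] <= hi:
            violations.append(f"{name}={record[name]} outside training range [{lo}, {hi}]")
    return IngestionResult(accepted=not violations, violations=violations)
```

In a production pipeline the same summary statistics computed here would also feed the distribution monitor that compares live inputs against the training baseline.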
Layer 3 (Model Inference) enforces model version pinning with cryptographic hash verification, confidence thresholding (below-threshold cases routed to human review), and output constraint enforcement using schema validation. Layer 4 (Post-Processing) applies documented business rules with override logging, monitors threshold stability, and re-evaluates fairness on production data with periodic threshold recalibration.
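Two of the Layer 3 controls can be sketched briefly: hash-verified version pinning refuses to load a model artifact whose digest differs from the pinned value, and confidence thresholding routes low-confidence cases to human review. The threshold value and function names below are illustrative assumptions, not AISDP-prescribed values.

```python
import hashlib

CONFIDENCE_THRESHOLD = 0.85  # illustrative; the real value comes from the AISDP

def verify_model_artifact(artifact_bytes: bytes, pinned_sha256: str) -> None:
    # Model version pinning: fail closed if the artifact hash does not match.
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    if actual != pinned_sha256:
        raise RuntimeError(f"model hash mismatch: expected {pinned_sha256}, got {actual}")

def route_decision(prediction: str, confidence: float) -> dict:
    # Confidence thresholding: below-threshold cases go to human review.
    route = "human_review" if confidence < CONFIDENCE_THRESHOLD else "automated"
    return {"route": route, "prediction": prediction, "confidence": confidence}
```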
Layer 5 (Explainability) generates explanations using methods appropriate to the model (SHAP, LIME, GradCAM, attention), validates explanation fidelity against model sensitivity, and provides audience-appropriate abstraction for operators and affected persons. Layer 6 (Human Oversight Interface) enforces mandatory review workflows that prevent auto-acceptance, deploys automation bias countermeasures (data-first display, dwell time enforcement, calibration cases), captures override rationale, and monitors override rates while flagging sub-60-second review times as potential rubber-stamping.
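The Layer 6 review workflow can be sketched as a completion handler that refuses an override without a documented rationale and flags rushed reviews for the monitoring layer. The 60-second dwell threshold matches the sub-60-second monitoring rule above; the function signature and field names are hypothetical.

```python
import time

MIN_DWELL_SECONDS = 60  # sub-60-second reviews are flagged, per Layer 6 monitoring

def complete_review(opened_at: float, decision: str, override: bool,
                    rationale: str = "", now: float = None) -> dict:
    """Finalize a mandatory human review of one model output."""
    now = time.time() if now is None else now
    dwell = now - opened_at
    # Override rationale capture: an override without a rationale is rejected.
    if override and not rationale:
        raise ValueError("override requires a documented rationale")
    return {
        "decision": decision,
        "override": override,
        "rationale": rationale,
        "dwell_seconds": dwell,
        # Automation bias countermeasure: rushed reviews are flagged, not blocked.
        "flag_rushed": dwell < MIN_DWELL_SECONDS,
    }
```

Note the design choice implied by the text: a fast review is flagged for the override-rate monitor rather than rejected outright, since legitimate clear-cut cases exist.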
Layer 7 (Logging and Audit) captures immutable, append-only records with cryptographic hash chains across nine event types, supports log-based drift detection, and provides on-demand regulatory export in NCA-specified formats. Layer 8 (Monitoring) operates intent alignment dashboards comparing real-time metrics against AISDP thresholds, performs statistical anomaly detection with severity-based escalation, and monitors five drift dimensions: input, output, fairness, error, and override.
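The Layer 7 hash-chain idea is worth making concrete: each appended record embeds the hash of its predecessor, so any retroactive edit breaks verification from that point onward. The class below is a minimal in-memory sketch; a real implementation would persist to append-only storage and cover all nine event types.

```python
import hashlib
import json

class HashChainLog:
    """Append-only audit log in which each entry chains to the previous hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self._entries = []
        self._last_hash = self.GENESIS

    def append(self, event_type: str, payload: dict) -> dict:
        record = {"event_type": event_type, "payload": payload,
                  "prev_hash": self._last_hash}
        serialized = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(serialized).hexdigest()
        self._entries.append(record)
        self._last_hash = record["hash"]
        return record

    def verify(self) -> bool:
        # Recompute every hash; any tampered payload or broken link fails.
        prev = self.GENESIS
        for record in self._entries:
            if record["prev_hash"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            serialized = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(serialized).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

A regulatory export in an NCA-specified format would then be a serialization of these verified records, with the chain itself serving as the integrity evidence.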
Key outputs
- Per-layer architecture specification (AISDP Module 3)
- Per-layer control documentation