Five artefacts are produced during the model selection process. Together they provide the documented evidence trail from candidate evaluation through to the formal selection decision.
Model Selection Record
The Model Selection Record is a core component of AISDP Module 3, documenting the organisation’s rationale for its model architecture choice. Annex IV requires describing “the key design choices and their rationale,” and the Model Selection Record is the primary artefact that satisfies this requirement.
The Record covers:
- the system’s functional requirements;
- the compliance requirements derived from the risk assessment (including minimum explainability, testability, and auditability standards);
- the candidate architectures evaluated (including traditional heuristic and statistical approaches);
- the evaluation methodology (datasets, metrics, compliance criteria scoring);
- the evaluation results, presented as a comparison table;
- the recommended selection and its rationale, including the trade-offs accepted; and
- the governance approval record.
The scope extends beyond the primary decision-making model. Every learned component in the system architecture requires an entry: embedding models, re-ranking models, classification heads, auxiliary monitoring models, and safety classifiers. Each entry is proportionate to the component’s influence on the final output. The AI System Assessor verifies that the Record is complete against the architecture diagram; any model component visible in the architecture without a corresponding entry is a documentation gap.
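The Assessor’s completeness check can be sketched as a set comparison between the components visible in the architecture diagram and the entries present in the Record. The component names and flat-set representation below are illustrative assumptions, not a prescribed schema.

```python
# Components extracted from the architecture diagram (illustrative names).
architecture_components = {
    "embedding_model", "re_ranker", "classification_head",
    "drift_monitor", "safety_classifier",
}

# Components with a corresponding entry in the Model Selection Record.
record_entries = {
    "embedding_model", "re_ranker", "classification_head", "safety_classifier",
}

# Any component visible in the architecture without an entry is a
# documentation gap for the AI System Assessor to raise.
gaps = architecture_components - record_entries
if gaps:
    print("Documentation gaps:", sorted(gaps))
```

In this sketch the check would flag `drift_monitor` as missing from the Record.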
This documentation serves two audiences. The Technical SME reviews it for technical soundness. The Classification Reviewer and any notified body review it for evidence that the organisation made a considered, risk-aware choice and did not simply default to the most complex available model.
Key outputs
- Model Selection Record (complete, covering all model components)
- Compliance criteria comparison table
- Governance approval record
Model Origin Risk Assessments
The Model Origin Risk Assessment is an artefact documenting the provenance risk profile of each model component in the system, consolidating the provenance analysis into one structured assessment per component.
For each model component, the assessment records the origin category (in-house, open-source, or proprietary), the provenance documentation available, the gaps in provenance documentation, the compensating controls applied (sentinel testing, output filtering, continuous monitoring), and the residual origin risk after controls. The assessment also records the GPAI provider due diligence performed (where applicable), including the Article 25(3) information request and the provider’s response.
In-house models offer the greatest control over documentation and governance; the risk profile centres on process discipline and whether the development team followed the documented methodology. Open-source models carry provenance, governance, and testing gaps that must be compensated. Proprietary models may have documentation gaps where the provider refuses disclosure, requiring compensating evaluation. The assessment rates each component on a consistent scale and aggregates the ratings into an overall model origin risk profile for the system.
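The rating and aggregation step described above can be sketched as follows. The three-level ordinal scale and the worst-case aggregation rule (the system inherits its riskiest component’s rating) are assumptions for illustration; the document does not prescribe a particular scale or rule.

```python
# Assumed ordinal scale for residual origin risk after compensating controls.
RISK_SCALE = {"low": 1, "medium": 2, "high": 3}

# Illustrative per-component residual ratings.
component_risks = {
    "embedding_model": "low",       # in-house, full provenance documentation
    "re_ranker": "medium",          # open-source, gaps partly compensated
    "safety_classifier": "medium",  # proprietary, partial provider disclosure
}

# Worst-case aggregation: the system-level profile is the highest
# residual rating among its components.
system_level = max(component_risks.values(), key=RISK_SCALE.__getitem__)
print("System origin risk profile:", system_level)
```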
Key outputs
- Per-component model origin risk assessment
- Aggregated system-level origin risk profile
IP & Licensing Analysis
The IP and Licensing Analysis is a consolidated artefact addressing copyright, personal data, and licence risks across all model components, training data sources, and third-party dependencies. It brings the individual component-level assessments together into a single document.
The analysis covers training data copyright status and legal basis for each data source, open-source component licence terms and compatibility with the system’s commercial and regulatory context, proprietary model contractual representations regarding IP, personal data consent verification status, and residual IP risks with ratings and compensating controls.
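One element of the licence-compatibility review can be sketched as a check of each open-source component’s licence against an allowlist approved for the system’s commercial context. The allowlist, component names, and licence identifiers below are illustrative assumptions; flagged items still require review by the Legal and Regulatory Advisor.

```python
# Licences pre-approved for this system's commercial and regulatory context
# (illustrative allowlist, using SPDX-style identifiers).
approved_licences = {"MIT", "Apache-2.0", "BSD-3-Clause"}

# Illustrative open-source components and their licence terms.
components = {
    "tokenizer-lib": "Apache-2.0",
    "reranker-weights": "CC-BY-NC-4.0",  # non-commercial clause: likely incompatible
    "eval-harness": "MIT",
}

# Anything outside the allowlist is routed to legal review, not auto-rejected.
flagged = {name: lic for name, lic in components.items()
           if lic not in approved_licences}
print("Licences requiring legal review:", flagged)
```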
The document is maintained as a living artefact. Changes to model components, data sources, or licence terms trigger updates. The Legal and Regulatory Advisor reviews the analysis at each major phase gate, and the AI Governance Lead signs off on residual IP risk acceptance. The analysis is referenced from AISDP Modules 3 and 4, and it forms part of the evidence pack reviewed during conformity assessment.
Key outputs
- IP and Licensing Analysis document
- Legal and Regulatory Advisor review record
- AI Governance Lead residual risk acceptance
Fine-Tuning Provider Boundary Determination
The Fine-Tuning Provider Boundary Determination is the artefact documenting whether the organisation’s fine-tuning of a GPAI model triggers provider status under Article 25(1)(b). It consolidates the analysis into a formal determination.
The artefact records the base model identified (provider, model name, version), the fine-tuning activity performed (methodology, data, scope), the three-criteria assessment (intended purpose change, risk profile change, safety testing invalidation), the determination reached (provider status triggered or not triggered), and the rationale with supporting evidence. Where the case is borderline, the decision flow documentation is incorporated by reference.
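The three-criteria assessment can be sketched as a simple decision check. Treating any single triggered criterion as sufficient for provider status is an assumption in this sketch; the formal determination rests on the documented legal analysis, and borderline cases follow the decision flow incorporated by reference.

```python
# The three criteria from the boundary determination (illustrative values).
criteria = {
    "intended_purpose_changed": False,
    "risk_profile_changed": True,
    "safety_testing_invalidated": False,
}

# Assumption: any one triggered criterion is treated as triggering
# provider status under Article 25(1)(b).
provider_status_triggered = any(criteria.values())
print("Provider status triggered:", provider_status_triggered)
```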
If provider status is triggered, the artefact also records the obligation transfer determination: which Article 16 obligations are assumed, which are partially satisfied by the GPAI provider’s existing artefacts, and which fall exclusively on the downstream organisation. The role assignments and timelines for fulfilling assumed obligations are documented.
The determination is approved by the AI Governance Lead and the Legal and Regulatory Advisor. It is retained in the evidence pack and referenced from AISDP Module 3. If challenged by a competent authority, this artefact provides the organisation’s documented reasoning for its compliance posture.
Key outputs
- Fine-Tuning Provider Boundary Determination document
- AI Governance Lead and Legal and Regulatory Advisor approval
- Obligation transfer matrix (where provider status is triggered)
Compliance Criteria Scoring Matrix
The Compliance Criteria Scoring Matrix is the artefact that brings together the six compliance criterion scores for each candidate model architecture evaluated during model selection. It is the quantitative backbone of the Model Selection Record.
The matrix presents each candidate architecture as a row, with columns for documentability, testability, auditability, bias detectability, maintainability, and determinism. Each cell contains the score (strong, adequate, or weak) and a brief evidence-based justification. The matrix includes weighting: for a high-risk system in the employment domain where human oversight is paramount, explainability-adjacent criteria (documentability, auditability, bias detectability) carry higher weight; for a safety-critical system, testability and determinism carry higher weight. The weights are documented and approved by the AI Governance Lead before the evaluation begins, preventing post-hoc rationalisation of a preferred choice.
The matrix also includes columns for non-compliance criteria: performance metrics (accuracy, precision, recall), cost estimates, and each candidate’s IP risk profile. This enables the selection decision to weigh compliance criteria alongside traditional engineering criteria in a single view.
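The weighted scoring across the six compliance criteria can be sketched as follows. The numeric mapping (strong = 3, adequate = 2, weak = 1), the example weights, and the candidate scores are assumptions for illustration; in practice the weights are documented and approved by the AI Governance Lead before evaluation begins.

```python
# Assumed numeric mapping for the three-level scores.
SCORE = {"strong": 3, "adequate": 2, "weak": 1}

# Example weights for a high-risk employment-domain system, where
# explainability-adjacent criteria carry higher weight.
weights = {
    "documentability": 2.0, "testability": 1.0, "auditability": 2.0,
    "bias_detectability": 2.0, "maintainability": 1.0, "determinism": 1.0,
}

# Illustrative candidate architectures and their per-criterion scores.
candidates = {
    "gradient_boosted_trees": {
        "documentability": "strong", "testability": "strong",
        "auditability": "strong", "bias_detectability": "adequate",
        "maintainability": "adequate", "determinism": "strong",
    },
    "fine_tuned_llm": {
        "documentability": "adequate", "testability": "weak",
        "auditability": "weak", "bias_detectability": "weak",
        "maintainability": "adequate", "determinism": "weak",
    },
}

# Weighted total per candidate: sum of weight x score over the six criteria.
totals = {
    name: sum(weights[c] * SCORE[s] for c, s in scores.items())
    for name, scores in candidates.items()
}
print(totals)
```

The totals feed the recommendation row of the matrix; the qualitative justifications in each cell remain the primary evidence.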
The completed matrix, with the weights, scores, justifications, and the resulting recommendation, is stored in the evidence pack and forms the core of the model selection rationale presented in AISDP Module 3. It should be retrievable if a notified body or competent authority asks why a particular model architecture was chosen over alternatives.
Key outputs
- Compliance Criteria Scoring Matrix (completed)
- Weighting rationale approved by AI Governance Lead
- Selection recommendation derived from the matrix