AI Act Model Origin Risk Assessment
Every third-party model component undergoes a vendor risk assessment before adoption, proportionate to its criticality. For foundation model providers, the assessment covers the provider’s data governance practices (training data composition, personal data inclusion, copyright compliance), security certifications (SOC 2, ISO 27001), data handling commitments (whether inference inputs are used for training, retention periods), contractual commitments regarding model versioning and change notification, and incident response capabilities.
For data providers, the assessment covers data provenance and licensing, data quality controls, data handling and security practices, and compliance with applicable data protection legislation. For embedding model and tokeniser providers, the assessment focuses on provenance verification, version stability, and the potential for silent behavioural changes.
The model origin risk assessment integrates with the model selection process described above. The selection record should document both the functional evaluation (fitness for intended purpose, performance characteristics, architectural suitability) and the security/compliance evaluation (vendor risk, supply chain exposure, contractual coverage). The security team retains vendor risk assessments as Module 9 evidence, reviewed annually and re-conducted when the vendor’s service scope or security posture changes materially.
Key outputs
- Pre-adoption vendor risk assessment per third-party model component
- Combined functional and security/compliance evaluation
- Annual review with change-triggered re-assessment
- Module 3 and Module 9 AISDP evidence
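The assessment domains listed above lend themselves to a structured record rather than free text. A minimal sketch, assuming a simple dataclass representation; the field names and the `open_items` helper are illustrative, not AISDP- or AI Act-mandated terms:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VendorRiskAssessment:
    """Illustrative pre-adoption vendor risk assessment record.
    Field names are assumptions for this sketch, not regulatory terms."""
    provider: str
    component_type: str            # e.g. "foundation_model", "data", "embedding", "tokeniser"
    criticality: str               # drives assessment depth: "low", "medium", "high"
    data_governance_reviewed: bool = False
    security_certifications: List[str] = field(default_factory=list)  # e.g. ["SOC 2", "ISO 27001"]
    inference_inputs_used_for_training: Optional[bool] = None         # None = not yet confirmed
    change_notification_contracted: bool = False
    incident_response_reviewed: bool = False

    def open_items(self) -> List[str]:
        """Return assessment domains still unverified before adoption."""
        items = []
        if not self.data_governance_reviewed:
            items.append("data governance")
        if not self.security_certifications:
            items.append("security certifications")
        if self.inference_inputs_used_for_training is None:
            items.append("inference data handling")
        if not self.change_notification_contracted:
            items.append("versioning/change notification")
        if not self.incident_response_reviewed:
            items.append("incident response")
        return items
```

A record whose `open_items()` is non-empty signals that adoption should not proceed; the completed record doubles as the Module 9 evidence artefact.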
DORA Third-Party Risk
Financial entities subject to DORA face more prescriptive third-party risk management requirements than the AI Act alone imposes. DORA requires a comprehensive register of all ICT third-party service providers, pre-contractual risk assessments, specific contractual clauses, and ongoing monitoring of providers classified as critical.
For AI systems consuming third-party model APIs, the model provider is an ICT third-party service provider under DORA. The model selection record must therefore satisfy DORA’s pre-contractual assessment requirements in addition to the AI Act’s model origin risk analysis. The DORA risk assessment covers financial stability, business continuity, security certifications, data handling, and concentration risk. Concentration risk is particularly relevant for AI systems: if multiple critical financial services depend on the same foundation model provider, provider failure affects all of them simultaneously.
Where a financial entity designates an AI model provider as a critical ICT third-party service provider, DORA’s enhanced oversight requirements apply: more intensive ongoing monitoring, enhanced contractual protections, and contingency planning for provider failure. Module 3 should address the contingency plan, including multi-provider strategies or fallback to internally hosted models. If the system is not subject to DORA, this requirement is documented as not applicable.
Key outputs
- DORA-compliant pre-contractual risk assessment per provider
- Concentration risk assessment for shared model providers
- Contingency planning for critical provider failure
- Module 9 and Module 3 AISDP documentation
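Because DORA already requires a register of all ICT third-party service providers, concentration risk can be surfaced mechanically from it. A sketch, assuming the register is a list of dicts; the field names and example entries are hypothetical:

```python
from collections import defaultdict

def concentration_risks(register):
    """Group critical register entries by provider and flag providers on
    which more than one critical service depends (simultaneous-failure
    exposure if that provider fails)."""
    by_provider = defaultdict(list)
    for entry in register:
        if entry["critical"]:
            by_provider[entry["provider"]].append(entry["service"])
    return {p: services for p, services in by_provider.items() if len(services) > 1}

# Hypothetical register entries for illustration only
register = [
    {"service": "fraud-scoring",   "provider": "ModelCo",  "critical": True},
    {"service": "credit-chat",     "provider": "ModelCo",  "critical": True},
    {"service": "doc-ocr",         "provider": "VisionCo", "critical": True},
    {"service": "internal-search", "provider": "ModelCo",  "critical": False},
]
```

Here `concentration_risks(register)` would flag ModelCo, since two critical services share it; each flagged provider is a candidate for the contingency planning described above.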
Contractual Provisions
Contractual provisions with third-party providers address six domains. Audit rights grant the organisation (and, where applicable, the financial supervisor) the right to audit the provider’s security practices, data handling, and compliance controls. Security SLAs define the provider’s commitments regarding availability, incident response, vulnerability management, and security certifications.
Data location provisions specify where the provider stores and processes the organisation’s data, ensuring compliance with EU data residency requirements. Sub-outsourcing restrictions require the provider to notify the organisation of any sub-processing arrangements and obtain approval before engaging sub-processors that handle the organisation’s data. Exit strategy provisions define the process for transitioning away from the provider, including data portability, model artefact return, and transition timeline commitments.
For DORA-scoped systems, the contractual provisions must satisfy the specific requirements of Articles 28–30. DORA requires that the contract address the right to terminate in the event of significant performance shortfalls, the provider’s obligation to cooperate with the financial supervisor, and the provider’s obligation to participate in the entity’s resilience testing programme. The contractual provisions are documented in Module 9.
Key outputs
- Six-domain contractual framework (audit, SLAs, data location, sub-outsourcing, exit, DORA-specific)
- DORA-compliant clauses where applicable
- Provider notification and approval requirements
- Module 9 AISDP documentation
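The six-domain framework above can be enforced as a coverage check over a contract record before signature. A minimal sketch; the domain keys are illustrative labels for this example, not legal terms of art:

```python
# Five provider-agnostic domains plus the DORA-specific set (the sixth domain)
REQUIRED_DOMAINS = {
    "audit_rights", "security_slas", "data_location",
    "sub_outsourcing", "exit_strategy",
}
DORA_DOMAINS = {"termination_rights", "supervisor_cooperation", "resilience_testing"}

def missing_provisions(contract: dict, dora_scoped: bool) -> set:
    """Return the contractual domains a contract record does not yet cover."""
    required = REQUIRED_DOMAINS | (DORA_DOMAINS if dora_scoped else set())
    return {domain for domain in required if not contract.get(domain)}
```

A non-empty result blocks contract execution; the completed record is filed in Module 9 alongside the signed provisions.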
Ongoing Provider Monitoring
Supply chain risk does not remain static. Foundation model providers update their models, sometimes changing behaviour in ways that affect downstream systems. Data providers may alter their data collection practices or experience data breaches. Ongoing provider monitoring ensures that changes in the supply chain are detected, assessed, and addressed before they affect the system’s compliance posture.
Four monitoring activities are required. Subscribing to the provider’s changelog via RSS, webhook, or email notifications detects version changes, feature modifications, and deprecation announcements. Running sentinel tests at regular intervals detects behavioural changes that the provider may not announce. Reviewing the provider’s terms of service periodically detects material changes to data handling, availability commitments, or liability terms. Tracking the provider’s security posture through incident disclosures, compliance certifications, and audit reports detects security degradation.
The Technical SME assesses any material change for its impact on the downstream system’s compliance profile. A provider that changes its content filtering, modifies its API behaviour, or retrains its model may alter the downstream system’s outputs without any change to the downstream system’s code. Material changes are documented in the risk register with an impact assessment. The MITRE ATLAS navigator provides a structured way to track the evolving threat landscape for AI supply chain attacks.
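Sentinel testing can be as simple as replaying fixed prompts and comparing outputs against stored baselines. A minimal sketch; the `call_model` stub stands in for the provider's real API and its canned answers are invented for illustration:

```python
def run_sentinel_tests(call_model, baselines):
    """Replay fixed prompts against the provider and report any whose
    output no longer matches the stored baseline (a possible silent
    behavioural change)."""
    drifted = []
    for prompt, expected in baselines.items():
        actual = call_model(prompt)
        if actual != expected:
            drifted.append({"prompt": prompt, "expected": expected, "actual": actual})
    return drifted

# Stub provider for illustration; in practice this wraps the real model API call
def call_model(prompt):
    canned = {"2+2=": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

baselines = {"2+2=": "4", "Capital of France?": "Paris"}
```

Exact-match comparison assumes deterministic inference settings (e.g. temperature 0); for sampled outputs, a similarity threshold over embeddings or score distributions is a more robust drift signal. Any non-empty result feeds the Technical SME's impact assessment and the risk register entry.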
Key outputs
- Four-activity monitoring programme (changelog, sentinel tests, ToS review, security tracking)
- Material change impact assessment documented in the risk register
- MITRE ATLAS threat landscape tracking
- Module 9 AISDP evidence