When a high-risk system incorporates a general-purpose AI model, the downstream provider bears full responsibility for compliance under Article 16, yet has limited visibility into the GPAI model’s training data, architecture, and behavioural characteristics. Understanding the GPAI provider’s obligations under Article 53 is essential for managing this asymmetry.
Article 53 requires GPAI model providers to draw up and make available technical documentation, provide information and documentation to downstream providers integrating the model into their AI systems, put in place a policy to comply with Union copyright law, and publish a sufficiently detailed summary of the content used for training. The specific content requirements are set out in Annex XI (technical documentation drawn up by the GPAI provider, with additional items for models presenting systemic risk) and Annex XII (information and documentation to be made available to downstream providers). Article 25(3) entitles the downstream provider to request the specific information needed for the high-risk system to comply with the Act. If a GPAI provider refuses or fails to respond to a properly formulated Article 25(3) request, the downstream provider should document the refusal and consider reporting it to the AI Office or the relevant national competent authority, since the refusal may itself constitute non-compliance by the GPAI provider.
The AI System Assessor should submit a structured information request covering:

- Training data governance: sources, methodology, geographic and demographic coverage, known biases, copyright measures
- Model architecture and behaviour: architecture family, parameter count, alignment approach, known failure modes
- Versioning and change policy: deprecation policy, change notification commitments
- Data handling practices: whether inference inputs are retained and whether they are used for further training
- Safety and security: red-teaming methodology, vulnerability disclosure policy
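A request of this kind can be tracked programmatically so that unanswered categories and refusals are visible at a glance. The following is a minimal sketch, assuming illustrative category and field names; nothing here is prescribed by the Act:

```python
from dataclasses import dataclass, field

# Illustrative request categories, mirroring the coverage areas above.
# These names are assumptions for this sketch, not terms from the Act.
REQUEST_CATEGORIES = (
    "training_data_governance",
    "model_architecture_and_behaviour",
    "versioning_and_change_policy",
    "data_handling_practices",
    "safety_and_security",
)

@dataclass
class InformationRequest:
    """One structured Article 25(3) request, tracked per category."""
    provider: str
    model_name: str
    # Maps category -> response status: "pending", "answered", or "refused".
    status: dict = field(
        default_factory=lambda: {c: "pending" for c in REQUEST_CATEGORIES}
    )

    def record_refusal(self, category: str) -> None:
        # A documented refusal supports escalation to the AI Office
        # or the national competent authority.
        self.status[category] = "refused"

    def open_items(self) -> list:
        """Categories still awaiting a satisfactory response."""
        return [c for c, s in self.status.items() if s != "answered"]
```

A record like this also provides the evidence trail needed if a refusal is later reported as potential non-compliance by the GPAI provider.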
Where the GPAI provider participates in the Code of Practice under Article 56, the downstream provider can reasonably expect compliance with its transparency commitments. The first general-purpose AI Code of Practice was published on 4 August 2025; organisations should verify whether their GPAI provider has signed the Code and assess adherence against its specific commitments. Where the Code of Practice has not yet matured into a stable compliance benchmark, or where the provider does not participate, information gaps are likely to be wider and compensating controls more demanding. In either case, the downstream provider should not rely on the Code of Practice alone; the structured information request under Article 25(3) remains the primary mechanism for obtaining the disclosures needed for AISDP compliance.

For GPAI models classified as presenting systemic risk under Article 51, the provider bears additional obligations under Article 55, including model evaluations, adversarial testing, serious incident reporting, and cybersecurity protection. Article 51(2) establishes a rebuttable presumption that a GPAI model presents systemic risk when the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), exceeds 10^25. The Commission may update this threshold by delegated act. The downstream provider should request access to the GPAI provider’s systemic risk documentation and assess which inherited risks are covered by the provider’s own controls.
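The Article 51(2) compute presumption reduces to a simple numeric test. A minimal sketch follows, with the threshold parameterised since the Commission may update it by delegated act; the function name is illustrative:

```python
# Cumulative training compute threshold from Article 51(2), in FLOPs.
# Parameterised rather than hard-coded because the Commission may
# update this figure by delegated act.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float,
                           threshold: float = SYSTEMIC_RISK_FLOP_THRESHOLD) -> bool:
    """Return True if the model is presumed to present systemic risk.

    The presumption is rebuttable: a True result triggers the
    Article 55 obligations unless the presumption is rebutted.
    """
    return training_flops > threshold
```

Because the presumption turns on *cumulative* training compute, the input should aggregate pre-training, fine-tuning, and other training runs as counted under the Act.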
Contractual risk transfer must also be assessed. Where the GPAI provider’s terms of service limit liability or disclaim responsibility for downstream use, the resulting gap in risk allocation should be recorded in the risk register.
Key outputs
- Structured GPAI provider information request record
- GPAI disclosure register (per Code of Practice commitment area)
- Inherited risk analysis (AISDP Module 6)
- Contractual risk gap documentation
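As an illustration, the GPAI disclosure register listed above could be kept as a small keyed structure, one entry per Code of Practice commitment area. This is a minimal sketch with illustrative area names and fields:

```python
from datetime import date

def make_register(areas):
    """Initialise an empty disclosure register: area -> disclosure record.

    Fields are illustrative: date received, a reference to the
    disclosed document, and any open gaps feeding the inherited
    risk analysis (AISDP Module 6).
    """
    return {area: {"received": None, "reference": None, "gaps": []}
            for area in areas}

# Hypothetical commitment areas; the actual areas should mirror the
# Code of Practice commitments the provider has signed up to.
register = make_register(["transparency", "copyright", "safety_and_security"])

# Record a disclosure received from the provider.
register["transparency"]["received"] = date(2025, 9, 1)
register["transparency"]["reference"] = "Model card v2.1"

# Record an open gap for the inherited risk analysis.
register["copyright"]["gaps"].append("no training data summary published")

# Areas with no disclosure yet received.
outstanding = [a for a, r in register.items() if r["received"] is None]
```

Keeping the register keyed by commitment area makes it straightforward to cross-check against the provider's signed commitments and to surface outstanding items for follow-up requests.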