v2.4.0

CRA Scope & Product Classification

The CRA scope determination addresses whether the AI system qualifies as a product with digital elements. Standalone software installed on the deployer’s infrastructure qualifies. An AI system embedded in a physical product (medical device, industrial control system, autonomous vehicle component) qualifies through the product. A purely cloud-hosted SaaS system, consumed entirely via API, may fall outside scope; Commission interpretation of the SaaS boundary is still evolving as of early 2026.

Products within scope are classified as default, important (Class I or Class II), or critical. Default products use self-assessment for CRA conformity. Important products require EU-type examination or production quality assurance modules. Critical products require European cybersecurity certification. High-risk AI systems in critical infrastructure, healthcare, or industrial control may qualify as important or critical under the CRA, creating a conformity assessment interaction with the AI Act’s Annex VI internal assessment.

Module 9 records the CRA scope determination (including the system’s delivery model and the reasoning for or against CRA applicability), the product classification, the resulting CRA conformity assessment route, and the coordination plan with the AI Act conformity assessment. If the determination is borderline, treating the system as within scope is the safer position.

Key outputs

  • CRA scope determination with delivery model analysis
  • Product classification (default, important, critical)
  • CRA conformity assessment route identification
  • Module 9 AISDP documentation

CRA Condition — AI-Specific Threat Coverage

CRA Article 12 provides that high-risk AI systems which are also products with digital elements and which comply with the CRA’s essential cybersecurity requirements (Annex I, Parts I and II) shall be deemed to comply with the cybersecurity requirements of AI Act Article 15. This deemed compliance covers cybersecurity specifically; accuracy and robustness under Article 15 remain independently governed by the AI Act. Crucially, CRA Recital 51 requires that the CRA conformity assessment also consider AI-specific attack vectors, including adversarial attacks and training data poisoning. A CRA assessment that evaluates network security, update mechanisms, and vulnerability handling, without evaluating the AI system’s resilience to adversarial inputs, data poisoning, or model extraction, does not fully satisfy the condition.

The Conformity Assessment Coordinator is responsible for verifying that the CRA assessment scope explicitly includes the AI-specific threat categories from the threat model: adversarial examples, data poisoning, prompt injection, model extraction, membership inference, and information disclosure. If the CRA assessment does not cover a category, deemed compliance cannot be relied upon for it, and Module 9 must address it independently.

The Conformity Assessment Coordinator documents which AI-specific threats are covered by the CRA assessment and which require independent Module 9 treatment. This analysis forms part of the two-layer Module 9 structure described below.

Key outputs

  • Verification of CRA assessment coverage of AI-specific threat categories
  • Documentation of covered and uncovered categories
  • Deemed compliance determination per threat category
  • Module 9 AISDP documentation
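The per-category determination is a simple set comparison: each threat category is either inside the CRA assessment scope (deemed compliance can be relied upon) or outside it (independent Module 9 treatment required). A hypothetical helper, assuming nothing about how the assessment scope is actually recorded:

```python
# The six AI-specific threat categories named in the text above.
AI_THREAT_CATEGORIES = {
    "adversarial examples",
    "data poisoning",
    "prompt injection",
    "model extraction",
    "membership inference",
    "information disclosure",
}


def deemed_compliance_by_category(cra_assessment_scope: set[str]) -> dict[str, str]:
    """For each AI-specific threat category, record whether CRA deemed
    compliance can be relied upon or independent Module 9 treatment is needed."""
    return {
        category: ("deemed compliance via CRA assessment"
                   if category in cra_assessment_scope
                   else "independent Module 9 treatment required")
        for category in AI_THREAT_CATEGORIES
    }


# Example: a CRA assessment that covered only two of the six categories.
coverage = deemed_compliance_by_category({"adversarial examples", "data poisoning"})
for category, status in sorted(coverage.items()):
    print(f"{category}: {status}")
```

The output of this comparison is exactly the documentation the Coordinator needs: a per-category record of which compliance route applies.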

Module 9 Two-Layer Structure

When the CRA deemed compliance pathway applies, the AI System Assessor structures Module 9 in two layers. The first layer references the CRA conformity assessment and identifies which Article 15 sub-requirements are satisfied by the CRA evidence. Traditional cybersecurity work (network security, encryption, access control, vulnerability management) that is already covered by CRA conformity evidence does not need to be duplicated.

The second layer documents the AI-specific cybersecurity measures that extend beyond the CRA’s scope: adversarial ML testing, data poisoning controls, model-specific threat modelling, prompt injection defences, model extraction protections, and the other AI-native threats covered in the threat modelling section.

Both the AI Act competent authority and the CRA notified body (if applicable) can then see clearly which requirements are addressed by which evidence, without duplication or ambiguity. This structure reduces documentation effort whilst maintaining full compliance coverage. If the CRA does not apply (per the scope determination above), Module 9 uses a single-layer structure covering all cybersecurity requirements independently.

Key outputs

  • Two-layer Module 9 structure (CRA cross-reference layer, AI-specific layer)
  • Clear mapping of requirements to evidence sources
  • Elimination of duplication between CRA and AI Act evidence
  • Module 9 AISDP documentation
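The two-layer structure is, in effect, a mapping from requirements to evidence sources. A sketch of that mapping follows; the requirement names are illustrative examples, not the Article 15 sub-requirement wording:

```python
# Illustrative Module 9 evidence map (requirement -> evidence source).
module9_evidence = {
    # Layer 1: satisfied by cross-reference to CRA conformity evidence.
    "network security": "CRA conformity evidence (cross-referenced)",
    "secure update mechanism": "CRA conformity evidence (cross-referenced)",
    "vulnerability handling": "CRA conformity evidence (cross-referenced)",
    # Layer 2: AI-specific measures documented independently.
    "adversarial ML testing": "Module 9 AI-specific layer",
    "data poisoning controls": "Module 9 AI-specific layer",
    "prompt injection defences": "Module 9 AI-specific layer",
}


def split_layers(evidence: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split requirements into the CRA cross-reference layer and the
    AI-specific layer, based on their recorded evidence source."""
    cra_layer = [r for r, src in evidence.items() if src.startswith("CRA")]
    ai_layer = [r for r, src in evidence.items() if src.startswith("Module 9")]
    return cra_layer, ai_layer


cra_layer, ai_layer = split_layers(module9_evidence)
print(len(cra_layer), len(ai_layer))  # 3 3
```

Because every requirement points at exactly one evidence source, both the AI Act competent authority and a CRA notified body can trace each requirement without duplication. In the single-layer case (CRA out of scope), every entry simply points at Module 9 evidence.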