v2.4.0

STRIDE (Traditional Software Threats — Six Categories)

STRIDE is a threat classification framework for traditional software threats. It defines six categories: Spoofing (impersonating a user or system), Tampering (modifying data or code), Repudiation (denying an action), Information Disclosure (exposing protected data), Denial of Service (making a system unavailable), and Elevation of Privilege (gaining unauthorised access).

For high-risk AI systems, STRIDE provides the baseline threat taxonomy covering the system’s traditional software attack surface. Each attack surface point (data ingestion APIs, operator interfaces, administrative endpoints, inter-service communication, model serving infrastructure) is assessed against all six STRIDE categories. However, STRIDE was not designed for machine learning systems and does not address threats that exploit the model’s learning and inference processes. Data poisoning, adversarial examples, model extraction, and prompt injection fall outside STRIDE’s scope.
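The assessment of every attack surface point against all six categories can be sketched as a simple cross-product grid. This is an illustrative sketch only; the function and record fields are invented for this example, not part of any AISDP tooling:

```python
from itertools import product

# The six STRIDE categories (from the text above).
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

# Attack surface points named in the text for a high-risk AI system.
ATTACK_SURFACE = [
    "data ingestion API", "operator interface", "administrative endpoint",
    "inter-service communication", "model serving infrastructure",
]

def stride_matrix(surface_points):
    """Cross every attack surface point with every STRIDE category,
    yielding the full assessment grid the Technical SME works through."""
    return [
        {"surface": point, "category": cat, "threats": [], "assessed": False}
        for point, cat in product(surface_points, STRIDE)
    ]

matrix = stride_matrix(ATTACK_SURFACE)
print(len(matrix))  # 5 surface points x 6 categories = 30 assessment cells
```

Generating the grid mechanically makes gaps visible: an unassessed cell is an explicit to-do rather than a silent omission.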

The threat modelling exercise therefore uses STRIDE as one component of a combined framework. STRIDE covers the traditional software threats; MITRE ATLAS covers the AI-specific threats; OWASP Top 10 for LLM Applications provides a focused checklist for LLM-based systems. The combined taxonomy ensures comprehensive coverage. The Technical SME documents the threat model as a structured artefact using IriusRisk or OWASP Threat Dragon and maintains it as a living document.

Key outputs

  • STRIDE analysis per attack surface point
  • Structured threat model artefact (IriusRisk or OWASP Threat Dragon)
  • Integration with MITRE ATLAS and OWASP LLM Top 10
  • Module 9 AISDP documentation

MITRE ATLAS (AI-Specific Threat Taxonomy — Seven Phases)

MITRE ATLAS (Adversarial Threat Landscape for AI Systems) provides the taxonomic foundation for AI-specific threats. Analogous to MITRE ATT&CK for traditional cyber threats, ATLAS catalogues real-world adversarial techniques against ML systems, organised into seven phases: reconnaissance (discovering model architecture and training data characteristics), resource development (building adversarial capabilities), initial access (gaining query access to the model), execution (submitting adversarial inputs), persistence (maintaining access or influence), evasion (avoiding detection), and impact (the actual harm achieved).

Each technique in the ATLAS matrix has real-world case studies and documented mitigations. The taxonomy provides a structured vocabulary for discussing AI-specific threats and ensures that the threat model covers the full range of adversarial techniques, not only those the team has encountered or read about. ATLAS is particularly valuable for threat modelling sessions involving participants with varying levels of ML security expertise, as the matrix provides a systematic checklist.
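The checklist use described above can be sketched as a coverage check over the seven phases. The technique records here are invented placeholders; real entries come from the ATLAS matrix itself:

```python
# The seven ATLAS phases named in the text, in order.
ATLAS_PHASES = [
    "reconnaissance", "resource development", "initial access",
    "execution", "persistence", "evasion", "impact",
]

# Hypothetical technique records enumerated during a session.
techniques = [
    {"id": "T-rec-1", "phase": "reconnaissance", "name": "Probe model architecture"},
    {"id": "T-exe-1", "phase": "execution", "name": "Submit adversarial inputs"},
]

def coverage_gaps(techniques, phases=ATLAS_PHASES):
    """Return phases with no enumerated technique, so the session's
    systematic checklist flags what the team has not yet considered."""
    covered = {t["phase"] for t in techniques}
    return [p for p in phases if p not in covered]

print(coverage_gaps(techniques))
```

Running this at the end of a session turns "have we covered everything?" into a concrete list of unaddressed phases.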

For the AISDP, the threat modelling exercise enumerates threats at each attack surface point using the combined STRIDE + ATLAS taxonomy. Threats above the risk acceptance threshold (scored using the likelihood × impact methodology) receive documented mitigations mapped to the compensating controls. The ATLAS-derived threats and their mitigations feed directly into Module 9.
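The likelihood × impact scoring and threshold test can be sketched as follows. The 1–5 scales and the threshold value of 12 are illustrative assumptions; the actual scales and risk acceptance threshold come from the organisation's risk methodology:

```python
ACCEPTANCE_THRESHOLD = 12  # hypothetical threshold; set by the organisation

def risk_score(likelihood: int, impact: int) -> int:
    """Likelihood x impact, each assumed here to be on a 1-5 scale."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def needs_mitigation(threat: dict) -> bool:
    """Threats scoring above the acceptance threshold require a
    documented mitigation mapped to a compensating control."""
    return risk_score(threat["likelihood"], threat["impact"]) > ACCEPTANCE_THRESHOLD

threats = [
    {"name": "model extraction via query API", "likelihood": 4, "impact": 4},
    {"name": "training data tampering", "likelihood": 2, "impact": 5},
]
for t in threats:
    print(t["name"], risk_score(t["likelihood"], t["impact"]), needs_mitigation(t))
```

Only the first example threat (score 16) crosses the hypothetical threshold; the second (score 10) would be documented as accepted risk.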

Key outputs

  • ATLAS-based enumeration of AI-specific threats per attack surface
  • Risk scoring (likelihood × impact) for each identified threat
  • Mitigation mapping to Section 8 compensating controls
  • Module 9 AISDP documentation

OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM Applications (2025, v2.0) provides a focused checklist for systems incorporating large language models. It covers ten threat categories: Prompt Injection (LLM01), Sensitive Information Disclosure (LLM02), Supply Chain (LLM03), Data and Model Poisoning (LLM04), Improper Output Handling (LLM05), Excessive Agency (LLM06), System Prompt Leakage (LLM07), Vector and Embedding Weaknesses (LLM08), Misinformation (LLM09), and Unbounded Consumption (LLM10).

Each category maps to specific AISDP modules. Prompt injection, improper output handling, and system prompt leakage affect Module 9. Data and model poisoning affects both Module 4 (Data Governance) and Module 9. Excessive agency overlaps significantly with Module 7 (Human Oversight) and Module 1 (System Identity). Vector and embedding weaknesses are particularly relevant for RAG-based systems, overlapping with the RAG-specific governance in Module 4. Misinformation overlaps with Module 7 (Human Oversight) through automation bias countermeasures. Detailed coverage of each category, including attack vectors, practical control strategies, and documentation requirements for the AISDP, is provided separately. Two categories new to the 2025 edition, System Prompt Leakage (LLM07) and Vector and Embedding Weaknesses (LLM08), should receive specific threat model coverage for any system incorporating LLMs or RAG architectures.
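The module overlaps above can be encoded as a lookup table. This sketch records only the mappings stated in the text; treating Module 9 as the default for unmapped categories is an assumption of this example:

```python
# Module overlaps as stated in the text; Module 9 (the threat model
# itself) is assumed to apply to every category.
LLM_TO_MODULES = {
    "LLM01": {"name": "Prompt Injection", "modules": [9]},
    "LLM04": {"name": "Data and Model Poisoning", "modules": [4, 9]},
    "LLM05": {"name": "Improper Output Handling", "modules": [9]},
    "LLM06": {"name": "Excessive Agency", "modules": [1, 7, 9]},
    "LLM07": {"name": "System Prompt Leakage", "modules": [9]},
    "LLM08": {"name": "Vector and Embedding Weaknesses", "modules": [4, 9]},
    "LLM09": {"name": "Misinformation", "modules": [7, 9]},
}

def modules_for(category_id: str) -> list:
    """Return the AISDP modules affected by an OWASP LLM category;
    unmapped categories default to Module 9 in this sketch."""
    return LLM_TO_MODULES.get(category_id, {}).get("modules", [9])

print(modules_for("LLM06"))  # [1, 7, 9]
```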

For systems that do not incorporate LLMs, several categories remain relevant to any ML system: data and model poisoning, supply chain vulnerabilities, sensitive information disclosure, and unbounded consumption. The Technical SME should assess which categories apply to the specific system and document the determination in the threat model.
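The applicability determination can be sketched as a coarse screen over system features. The feature names (`llm`, `rag`) and the function are invented for this example; the SME's actual determination requires judgement, not a lookup:

```python
def assess_applicability(system_features: set) -> dict:
    """Rough first-pass screen: which OWASP LLM categories apply,
    given coarse system features. Per the text, four categories
    remain relevant to any ML system regardless of LLM use."""
    uses_llm = "llm" in system_features
    uses_rag = "rag" in system_features
    return {
        "LLM01 Prompt Injection": uses_llm,
        "LLM02 Sensitive Information Disclosure": True,   # any ML system
        "LLM03 Supply Chain": True,                       # any ML system
        "LLM04 Data and Model Poisoning": True,           # any ML system
        "LLM07 System Prompt Leakage": uses_llm,
        "LLM08 Vector and Embedding Weaknesses": uses_rag,
        "LLM10 Unbounded Consumption": True,              # any ML system
    }

print(assess_applicability({"llm", "rag"})["LLM08 Vector and Embedding Weaknesses"])
```

A screen like this only generates the starting list; the documented rationale for each non-applicable category is still written by hand.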

Key outputs

  • Assessment of each OWASP LLM category against the specific system
  • Mapping of applicable categories to AISDP modules
  • Determination and rationale for non-applicable categories
  • Module 9 AISDP documentation

PASTA (Process for Attack Simulation and Threat Analysis)

PASTA (Process for Attack Simulation and Threat Analysis) is a risk-centric threat modelling methodology that provides a structured seven-stage process: define objectives, define the technical scope, application decomposition, threat analysis, vulnerability analysis, attack modelling and simulation, and risk and impact analysis. Unlike STRIDE (which is threat-classification-focused) and ATLAS (which is taxonomy-focused), PASTA emphasises the attacker’s perspective and the business impact of each threat.

PASTA’s risk-centric approach aligns well with the EU AI Act’s emphasis on risk management (Article 9). Each identified threat is assessed not only for technical severity but for its potential impact on fundamental rights, affected persons, and the system’s compliance posture. This business-impact dimension is often absent from purely technical threat modelling exercises.

For AISDP purposes, PASTA can serve as the overarching methodology that organises the threat modelling exercise, with STRIDE and ATLAS providing the threat taxonomies used within PASTA’s threat analysis stage. The four-stage approach (scope attack surfaces, enumerate threats using STRIDE + ATLAS, assess risk using likelihood × impact scoring, define mitigations) is compatible with PASTA’s structure. The choice of methodology should be documented in Module 9 alongside the resulting threat model.
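The way STRIDE and ATLAS plug into PASTA’s threat analysis stage can be sketched as a stage driver. The handler mechanism and result strings are invented for this example; the stage names follow the seven stages listed above:

```python
# PASTA's seven stages, in order (from the text).
PASTA_STAGES = [
    "define objectives",
    "define the technical scope",
    "application decomposition",
    "threat analysis",          # STRIDE + ATLAS taxonomies plug in here
    "vulnerability analysis",
    "attack modelling and simulation",
    "risk and impact analysis",
]

def run_pasta(stage_handlers: dict) -> dict:
    """Drive the exercise stage by stage; unhandled stages are recorded
    explicitly so the Module 9 artefact shows what was not performed."""
    results = {}
    for stage in PASTA_STAGES:
        handler = stage_handlers.get(stage)
        results[stage] = handler() if handler else "not performed"
    return results

out = run_pasta({"threat analysis": lambda: "STRIDE + ATLAS enumeration"})
print(out["threat analysis"])
```

Recording "not performed" rather than silently skipping a stage keeps the living threat model honest about its own coverage.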

Key outputs

  • PASTA methodology applied to the AI system’s threat model
  • Integration of STRIDE and ATLAS within the PASTA framework
  • Risk-centric assessment with business and fundamental rights impact
  • Module 9 AISDP documentation