Research Library

The compliance literature you need to cite.

Whitepapers, technical analyses, regulatory guides, and practitioner reports on EU AI Act implementation. Published by Standard Intelligence and co-authored with compliance practitioners.

14 publications
Whitepaper
GPAI · Article 53 · Systemic risk
GPAI Model Obligations: What Providers Must Document Under Article 53

General-purpose AI models face a distinct obligation set that differs substantially from the high-risk system framework. This paper examines what constitutes a GPAI model, when systemic risk designations apply, and what technical and non-technical documentation providers must prepare and maintain.

Report
Article 26 · FRIA · Deployers
The Deployer's Burden: Article 26 Obligations and Fundamental Rights Impact Assessment

Deployers of high-risk AI systems carry obligations that are often underestimated relative to providers. This report maps Article 26 requirements, the Article 27 FRIA process, and the practical steps deployers need to take before putting a system into use — including what to demand from their providers.

Guide
Annex III · Article 6 · Classification
A Decision-Maker's Guide to AI System Risk Classification Under the EU AI Act

Classification is the first obligation and the most consequential — it determines everything that follows. This guide walks through Article 6, all eight Annex III categories, the prohibited practices under Article 5, and the edge cases where classification is genuinely ambiguous. Includes a worked decision tree.

Analysis
Article 10 · GDPR · Training data
Training Data Governance at the Intersection of the EU AI Act and GDPR

Article 10 imposes data governance requirements that overlap substantially with GDPR Articles 5, 9, and 22. This analysis maps the interaction, identifies where the obligations conflict, and provides a practical framework for organisations managing datasets that contain personal data and special categories.

Whitepaper
Article 72 · Article 73 · Post-market
Post-Market Monitoring Plans: Satisfying Article 72 Without Building a Second Compliance Programme

Article 72 monitoring plans must be embedded in the technical documentation before a system goes live, not bolted on after. This paper covers what a defensible monitoring plan contains, how to integrate it with existing model performance infrastructure, and what triggers a mandatory AISDP review.

Report
Annex V · Notified bodies · Conformity
Conformity Assessment Pathways: Self-Assessment Versus Notified Body Involvement

Not all high-risk AI systems require third-party assessment. This report maps which Annex III categories require notified body involvement, what the assessment modules entail, how to select and engage a notified body, and what the Declaration of Conformity under Article 47 must contain in each case.

Guide
Article 4 · AI literacy · Workforce
Article 4 in Practice: Building an AI Literacy Programme Your Organisation Can Actually Sustain

Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy. This guide defines what sufficiency means in practice, how to structure a tiered literacy programme across different workforce roles, and how to evidence compliance to a supervisory authority without creating bureaucratic overhead.

Analysis
Enforcement · NCA · Penalties
Enforcement Architecture: How National Competent Authorities Will Supervise AI Act Compliance

The EU AI Act delegates enforcement to national competent authorities, but the supervisory architecture varies across member states. This analysis examines which authorities have been designated, how market surveillance is expected to operate, what triggers an investigation, and what organisations can do to reduce enforcement risk.

Whitepaper
Annex VIII · EU Database · Registration
EU AI Database Registration: What to Submit, When, and to Whom

High-risk AI providers and deployers must register their systems in the EU AI database before placing them on the market. This paper covers the Annex VIII data requirements, who bears the registration obligation in provider/deployer arrangements, and the process for maintaining registration accuracy over the system's lifecycle.

Report
Article 25 · Value chain · Handover
Shared Accountability: Mapping Obligations Across the AI Value Chain

When a provider supplies a high-risk AI system to a deployer, obligations do not transfer cleanly — they overlap. This report maps the Article 25 handover requirements, what providers must deliver, what deployers must verify, and how organisations acting as both provider and deployer for different systems should manage the resulting complexity.

Guide
Article 14 · Human oversight · System design
Designing for Human Oversight: Meeting Article 14 Requirements Without Undermining System Utility

Article 14 requires that high-risk AI systems can be effectively overseen by natural persons during operation. This guide examines what effective oversight means technically and operationally, how to document oversight measures in Annex IV, and how to avoid designing oversight mechanisms that exist only on paper.

Analysis
Article 5 · Prohibited AI · Biometrics
Drawing the Line: Article 5 Prohibited Practices and Where the Boundaries Actually Sit

The Article 5 prohibitions have applied since 2 February 2025. This analysis examines each prohibition in turn — social scoring, subliminal manipulation, exploitation of vulnerabilities, biometric categorisation, real-time remote biometric identification — and addresses the boundary questions that practitioners are still working through in ambiguous deployments.

Report
Financial services · Sector guide · Annex III §5
EU AI Act Compliance in Financial Services: Credit, Insurance, and Consumer Assessment Systems

Financial services firms face concentrated exposure under Annex III, Section 5, which covers AI systems used in credit scoring, insurance underwriting, and access to essential services. This sector-specific report covers what falls in scope, the interaction with existing EBA and EIOPA AI guidance, and the practical documentation burden for institutions with large AI portfolios.

Guide
Annex III §4 · Employment · HR systems
AI in Hiring and HR: A Compliance Guide for Organisations Using AI in Employment Decisions

Annex III, Section 4 places recruitment, CV screening, promotion, and performance monitoring AI systems in the high-risk category. This guide walks HR and legal teams through the full Annex IV documentation obligation, FRIA requirements for employment contexts, and the human oversight design considerations specific to AI used in employment decisions.

New research, direct to your inbox.
One email per publication. No marketing, no tracking pixels, unsubscribe any time.

Ready to get structured?

Early access is open for organisations with high-risk AI systems that need to be compliant by 2 August 2026. Provisioning takes under 60 seconds.