EU AI Act — Plain English

The Act is complex.
Your obligations
aren't ambiguous.

Fear, uncertainty, and doubt around the EU AI Act are understandable — the regulation is 144 pages of cross-referencing Articles, Annexes, and recitals. But for most organisations, the obligations are precise and manageable. Here is what you actually need to know.

See how we help →
€35m
Maximum fine for prohibited AI system violations
Prohibited systems (Article 5) €35m or 7% global turnover
High-risk documentation failures €15m or 3% global turnover
Incorrect information to authorities €7.5m or 1% global turnover
Whichever figure is higher applies. For SMEs and start-ups, the lower of the two applies; national competent authorities determine the enforcement approach.
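The "whichever is higher" rule can be sketched as a simple maximum over the fixed amount and the turnover percentage. This is an illustration of the arithmetic only, not legal advice; the tier names below are our own labels, and the figures are the two undisputed tiers from the table above.

```python
# Penalty-cap arithmetic: the applicable maximum is whichever is HIGHER of
# the fixed amount and the percentage of global annual turnover.
# Tier keys are illustrative labels, not the Act's formal terminology.

PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),      # Article 5 violations
    "high_risk_documentation": (15_000_000, 0.03),   # documentation failures
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum applicable fine for a given violation tier."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)
```

For a company with €1bn global turnover, a prohibited-practices violation caps at €70m (7% of turnover exceeds the €35m fixed amount); at €100m turnover, the €35m fixed amount governs instead.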
Clearing the air

What people say.
What is actually true.

Myth
"Every AI tool we use needs to be registered."

The Act applies categorically to high-risk AI systems defined in Annex III, and prohibits a narrow set of practices under Article 5. Most minimal-risk tools — chatbots, content filters, basic recommendation engines — carry only transparency obligations, if any at all.

Reality
Scope depends entirely on use case and risk classification.

Step one is always classification. If your system does not appear in Annex III, is not built on a GPAI model with systemic risk, and does not fall under Article 5 prohibitions, your obligations may be limited to transparency notices and basic conformity requirements.
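The classification step described above is essentially a short decision procedure. A minimal sketch, assuming the organisation has already established three yes/no scoping facts; the function name, flags, and return labels are illustrative simplifications, not the Act's formal taxonomy.

```python
# Triage sketch of the classification step: check Article 5 prohibitions
# first, then Annex III / GPAI systemic-risk status, else treat the system
# as minimal risk. Flag names are illustrative scoping questions.

def classify_ai_system(
    uses_prohibited_practice: bool,   # Article 5 (e.g. social scoring)
    listed_in_annex_iii: bool,        # high-risk use cases
    gpai_with_systemic_risk: bool,    # systemic-risk GPAI classification
) -> str:
    if uses_prohibited_practice:
        return "prohibited"       # may not be placed on the EU market
    if listed_in_annex_iii or gpai_with_systemic_risk:
        return "high-risk"        # full documentation and conformity duties
    return "minimal-risk"         # at most, transparency obligations
```

The ordering matters: a prohibited use case is prohibited regardless of any other classification, which is why it is checked first.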

Myth
"Compliance requires an army of lawyers and months of work."

The obligations are substantial for high-risk systems, but they are also precisely specified. Annex IV lists exactly what technical documentation must contain. The burden comes from not having structured tooling — not from the regulation itself.

Reality
Structured tooling transforms a compliance project into a governed workflow.

Standard Intelligence maps every Annex IV requirement to a questionnaire section, routes sections to the right team members, and generates submission-ready documentation. The same work that takes months in spreadsheets takes weeks in a purpose-built platform.

Myth
"The deadlines are so far away we have time to wait."

The regulation entered into force in August 2024, and the prohibited practices rules have applied since 2 February 2025. GPAI model obligations applied from August 2025. High-risk system full technical documentation requirements apply from 2 August 2026. Preparedness takes time.

Reality
The most important deadline is five months away.

Annex IV documentation for high-risk AI systems must be complete and in order by 2 August 2026. For organisations with multiple systems, or complex approval chains, starting now is not early — it is appropriate. Early access on Standard Intelligence opens 1 June 2026.

Enforcement Timeline

Every deadline, in order.

The regulation entered into force on 1 August 2024 and applies in phases. Here is what has already applied and what is coming.

1 August 2024
Regulation enters into force
EU AI Act (Regulation (EU) 2024/1689) enters into force. The 24-month main application period begins.
2 February 2025
Prohibited AI practices (Article 5) applicable
Systems using unacceptable-risk practices — social scoring, subliminal manipulation, real-time remote biometric identification in public spaces — are prohibited. Fines up to €35m or 7% of global turnover.
2 August 2025
GPAI model obligations applicable
Providers of general-purpose AI models must comply with transparency requirements (Article 53). Models with systemic risk face additional obligations including adversarial testing and incident reporting.
1 June 2026
Standard Intelligence early-access opens
The full AISDP creation workflow is available to early-access tenants. High-risk AI providers should begin documentation immediately to allow time for multi-stage approval, certification, and review before 2 August.
2 August 2026 ⚠
Full application — high-risk AI systems
Complete Annex IV technical documentation required for all high-risk AI systems (Article 11). Post-market monitoring plans (Article 72), Fundamental Rights Impact Assessments (Article 27), and the EU AI database registration obligations (Annex VIII) all apply in full. Fines up to €15m or 3% of global turnover for documentation failures.
2 August 2027
High-risk systems in regulated products (Annex I)
AI systems in products already covered by existing EU harmonisation legislation — medical devices, machinery, aviation — get a full three-year transition period from entry into force.
Who it applies to

Your role in the value chain determines your obligations.

The Act distinguishes between providers, deployers, importers, distributors, and authorised representatives. Each role carries distinct obligations — and a single organisation may hold multiple roles simultaneously, depending on how it interacts with each AI system.

Provider
Develops and places an AI system on the market. Carries the heaviest obligations: full Annex IV technical documentation, conformity assessment, CE marking, and post-market monitoring.
Deployer
Puts a high-risk AI system into use. Responsible for Article 26 obligations: using the system in line with provider instructions, implementing human oversight, conducting Fundamental Rights Impact Assessments where Article 27 requires them, and monitoring for operational anomalies.
Importer
Places a third-country provider's AI system on the EU market. Must verify that the provider has conducted the required conformity assessment and that the technical documentation is available.
Authorised Representative
Acts on behalf of a third-country provider within the EU. Holds the Declaration of Conformity and technical documentation, and acts as the point of contact for supervisory authorities.
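Because one organisation can hold several roles at once, its obligations are the union of each role's duties. A minimal sketch under that reading; the role keys and obligation summaries below paraphrase the cards above and are illustrative, not exhaustive.

```python
# Illustrative lookup of headline obligations per value-chain role.
# An organisation holding multiple roles accumulates the union of duties.

ROLE_OBLIGATIONS = {
    "provider": {
        "Annex IV technical documentation", "conformity assessment",
        "CE marking", "post-market monitoring",
    },
    "deployer": {
        "human oversight", "use per provider instructions",
        "fundamental rights impact assessment", "anomaly monitoring",
    },
    "importer": {
        "verify conformity assessment", "verify documentation availability",
    },
    "authorised_representative": {
        "hold Declaration of Conformity", "contact point for authorities",
    },
}

def obligations(roles: set[str]) -> set[str]:
    """Union of obligations across every role an organisation holds."""
    return set().union(*(ROLE_OBLIGATIONS[r] for r in roles))
```

An organisation that both develops and deploys a system, for example, must satisfy the provider and deployer sets together.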

Ready to get structured?

Early access is open for organisations with high-risk AI systems that need to be compliant by 2 August 2026. Provisioning takes under 60 seconds.