EU AI Act enforcement: 2 August 2026

You use OpenAI / Anthropic APIs? You have EU AI Act obligations.

If your SaaS integrates third-party AI APIs, you're likely a deployer under the EU AI Act. Scan your AI stack to see exactly what you owe — in 2 minutes, free.

Scan Your AI Stack Free

Your Role & Risk Level

Are you a provider, deployer, or distributor? What's your risk classification? We tell you based on your actual AI stack.

Your Exact Obligations

Article-by-article breakdown of what you must do, by when, and how long it takes. No guessing.

Provider Request Templates

Pre-written emails to send to OpenAI, Anthropic, and others requesting the documentation you need from them.

Who this is for

Most SMB SaaS teams calling third-party AI APIs assume the AI Act is a problem for the model providers. It isn't. Under Art. 3(4), anyone "using an AI system under their authority" in the course of professional activity is a deployer. Calling OpenAI's API inside your product makes you one.

You are in scope if:

  • You ship a product to EU users that calls OpenAI, Anthropic, Google, or Mistral APIs.
  • You're a Data Protection Officer or compliance lead trying to figure out the does-the-EU-AI-Act-apply-to-me question for your stack.
  • You read the regulation, hit Art. 4 AI literacy and Art. 50 transparency, and realised you have homework regardless of risk class.
  • You want to know which obligations apply on 2 August 2026 vs which already applied from 2 February 2025.

You are not the audience if:

  • You train your own foundation model and need a GPAI Code of Practice walkthrough — that's a different page.
  • You operate Annex III high-risk systems (CV-screening, credit scoring, biometric ID) and need a full conformity-assessment partner — you need outside counsel, not a scanner.
  • You want a one-click "compliant" badge. We don't sell that. Nobody honest does.

How the scan works

Three steps. About two minutes. No account, no credit card, no email-gate before the result.

01

Tell us your stack

Pick the AI APIs you call (OpenAI, Anthropic, Google, Mistral, others) and how you use them: chatbot, summarisation, classification, generation, decision support.

02

Tell us your context

Who sees the output, what decisions it informs, and whether you process personal data. This is what separates Art. 50 transparency from Art. 26 high-risk duties.

03

Get your obligations

A scoped list of obligations with article references, deadline dates, and a starter pack of documentation templates and provider-request emails.

What the scan returns

Example output : not your data

An anonymised slice of what a B2B SaaS calling OpenAI for support summaries and Anthropic for an internal copilot would see. Article references link to EUR-Lex on the live page.

scanner-result : example.com
[ROLE] deployer // Art. 3(4)
[STACK] OpenAI gpt-4o : support-summary, Anthropic claude-3-5 : internal-copilot
[RISK] limited-risk (chatbot) + minimal-risk (internal)
obligations live now (since 2 Feb 2025) :
[DONE] Art. 4 AI literacy programme : staff training matrix attached
obligations live 2 Aug 2026 :
[TODO] Art. 50(1) chatbot transparency disclosure on /support
[INFO] Art. 50(2) machine-readable marking of synthetic outputs : your provider's duty, not yours
[TODO] Art. 13 provider-doc request : send to OpenAI + Anthropic
[INFO] No Annex III high-risk use detected : Art. 26 + Art. 27 FRIA do not apply
monitor :
[INFO] COM(2025) 836 Omnibus may shift Annex III to 2 Dec 2027 : not adopted
[INFO] CEN-CENELEC JTC 21 harmonised standards : target Q4 2026

Sample reflects a real obligation graph for the stack described. Your output is generated from your answers, not this template.

What this costs if you do nothing

Art. 99 sets three administrative-fine tiers. Each fine is the higher of a fixed EUR amount or a percentage of worldwide annual turnover; for SMEs (including startups) the rule flips, and the lower of the two applies (Art. 99(6)).

Art. 99(3)
EUR 35M / 7%

Prohibited practices under Art. 5 : subliminal manipulation, social scoring, real-time biometric ID in public spaces.

Art. 99(4)
EUR 15M / 3%

Most deployer + provider duties : Art. 16, 22, 23, 24, 26, 50 transparency, notified-body cooperation. This is the tier most SaaS deployers will meet.

Art. 99(5)
EUR 7.5M / 1%

Supplying incorrect, incomplete, or misleading information to notified bodies or national authorities.

Separately, the Commission may fine general-purpose AI model providers up to EUR 15M or 3% of worldwide turnover under Art. 101 for breaches of Art. 53 / 55 obligations. If you're calling those models, you don't owe Art. 101 directly, but it's why your provider-request emails will get answered.
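The "higher of / lower of" mechanics above can be sketched as a quick calculation. This is illustrative only: the tier amounts come from Art. 99 as quoted above, the SME flip from Art. 99(6), and the function name and turnover figures are ours.

```python
def max_fine(tier_eur: float, tier_pct: float, turnover: float, is_sme: bool) -> float:
    """Illustrative Art. 99 fine ceiling: the higher of the fixed EUR amount
    or the turnover percentage; for SMEs, the lower of the two (Art. 99(6))."""
    pct_based = tier_pct * turnover
    return min(tier_eur, pct_based) if is_sme else max(tier_eur, pct_based)

# Art. 99(4) tier (EUR 15M / 3%) for a hypothetical EUR 10M-turnover SME:
print(max_fine(15_000_000, 0.03, 10_000_000, is_sme=True))       # 300000.0
# Same breach by a large enterprise with EUR 2B worldwide turnover:
print(max_fine(15_000_000, 0.03, 2_000_000_000, is_sme=False))   # 60000000.0
```

The asymmetry is the point: the SME cap scales down with turnover, while the enterprise cap scales up.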

Questions DPOs ask

Isn't the Commission's Omnibus going to delay everything?

No. The Commission published the Digital Omnibus on AI as COM(2025) 836 on 19 November 2025, proposing to shift Annex III high-risk obligations to 2 December 2027. The IMCO + LIBE committees adopted a joint report (A-10-2026-0073) on 18 March 2026, but the plenary vote and trilogue have not happened. Until they do, the 2 August 2026 general-application date in Art. 113 stands. Banking on the delay is the riskiest plan in the room.

We already did GDPR / a DPIA. Doesn't that cover us?

Partially. GDPR governs personal data; the AI Act governs AI systems, and the two overlap rather than substitute. Art. 26(9) says a high-risk deployer's GDPR DPIA can satisfy parts of the AI Act assessment, but it does not cover Art. 4 AI literacy, Art. 50 transparency, the Art. 26(6) 6-month log retention, or post-market monitoring. The scanner maps which of your existing GDPR controls already discharge AI Act duties so you don't double up.
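One place where GDPR habits can actively mislead: data-minimisation instincts push toward deleting logs early, while Art. 26(6) requires a high-risk deployer to keep automatically generated logs for at least six months. A minimal sketch of that retention check, assuming timestamped log records (the helper name and the 183-day approximation of "six months" are ours; other EU or national law may require longer):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # approximates the six-month minimum in Art. 26(6)

def deletable(log_timestamp: datetime, now: datetime) -> bool:
    """True only once a log record has aged past the six-month minimum.
    Sketch: a deployer must KEEP logs at least this long, so deleting
    before the cutoff would itself be a breach."""
    return now - log_timestamp >= RETENTION

now = datetime(2026, 8, 2, tzinfo=timezone.utc)
print(deletable(datetime(2026, 1, 1, tzinfo=timezone.utc), now))  # True
print(deletable(datetime(2026, 6, 1, tzinfo=timezone.utc), now))  # False
```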

We just call OpenAI. Are we really in scope?

Yes. Art. 3(4) defines a deployer as anyone using an AI system under their own authority in the course of professional activity. Calling OpenAI's API inside your product makes you a deployer; OpenAI is the provider under Art. 3(3). You owe at minimum Art. 4 AI literacy from 2 February 2025 and (if your AI talks to users or generates synthetic content) Art. 50 transparency from 2 August 2026.

What's the actual penalty if we miss this?

Art. 99 sets three tiers, each "higher of EUR figure or % of worldwide annual turnover" : EUR 35M or 7% for prohibited practices under Art. 5, EUR 15M or 3% for the operator-obligation breaches that cover most SaaS deployers, and EUR 7.5M or 1% for supplying incorrect information to authorities. For SMEs and startups the rule flips: under Art. 99(6), the lower of the two figures applies. Separate Commission fines apply to GPAI model providers under Art. 101.

Are obligations actually live yet, or is this all 2026?

Mixed. Chapters I and II have applied since 2 February 2025 — that's where Art. 4 AI literacy and the Art. 5 prohibited-practices ban already bite. The general application date is 2 August 2026 for Art. 6(2) high-risk systems, Art. 26 deployer duties, Art. 27 FRIA (only for public-body deployers and certain Annex III(5)(b)/(c) deployers), and Art. 50 transparency. Annex I legacy systems (AI embedded in products under existing harmonisation legislation) get until 2 August 2027.
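The phased timeline in this answer reduces to a date lookup. A sketch only: the dates follow the Art. 113 phase-in described above, but the obligation labels and function name are ours, and Annex III timing could still move if the Omnibus is adopted.

```python
from datetime import date

# Application dates per the phase-in described above (Art. 113).
PHASES = {
    date(2025, 2, 2): ["Art. 4 AI literacy", "Art. 5 prohibited practices"],
    date(2026, 8, 2): ["Art. 26 deployer duties",
                       "Art. 27 FRIA (certain deployers)",
                       "Art. 50 transparency"],
    date(2027, 8, 2): ["Annex I embedded-product systems"],
}

def live_obligations(today: date) -> list[str]:
    """Every obligation group whose application date has already passed."""
    return [ob for start, obs in sorted(PHASES.items()) if start <= today
            for ob in obs]

print(live_obligations(date(2026, 3, 1)))  # only the Feb 2025 items so far
```

Run it with today's date and the "is this live yet?" question answers itself.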

How do we know your answers are current?

Every load-bearing legal claim on this site has a fact ID and a source URL pointing to EUR-Lex, the Commission, the AI Office, or CEN-CENELEC. We monitor those sources, plus the EP Legislative Train for Omnibus updates and CEN-CENELEC JTC 21 (harmonised standards targeted for Q4 2026). Each obligation page carries a "last reviewed" date. When a fact changes, the date stamp moves and the change appears in the changelog. We don't promise an SLA we can't enforce — we publish the cadence and the sources so you can audit us.

See your obligations in two minutes

Free. No account. No email gate before the result.

Scan Your AI Stack Free

EU AI Act Compliance Guides

Free EU AI Act Templates