EU AI Act Compliance Guide
EU AI Act Deployer Obligations: The Complete Guide for SaaS Companies
By AIActStack · Published March 28, 2026 · Last updated April 17, 2026
If you use OpenAI, Anthropic, Google, or any third-party AI API in your product — and you have EU customers — you are a deployer under the EU AI Act. That means specific, mandatory obligations, with the main deadline now only months away.
1. What is a "deployer" under the EU AI Act?
The EU AI Act (Regulation 2024/1689) assigns obligations based on your role, not your company size. There are three roles:
Provider
Develops or trains an AI system and places it on the market. OpenAI, Anthropic, Google are providers.
Deployer ← This is you
Uses an AI system provided by someone else, under your own authority. Most SaaS companies using AI APIs are deployers.
Distributor
Makes an AI system available without modifying it. Think resellers. Lighter obligations.
If your SaaS product uses OpenAI's API for a customer chatbot, Claude for resume screening, or Gemini for content generation — you are a deployer. The AI provider cannot fulfill your obligations for you. They have their own.
Real examples of deployers:
- A SaaS app using OpenAI's API for customer support chat
- An HR platform using Claude for resume screening (high-risk)
- A fintech using a third-party fraud detection model (limited-risk)
- A marketing tool using Gemini for content generation
- An e-commerce platform using AI for product recommendations
Not sure if you're a deployer?
Scan your AI stack in 2 minutes. Free, no signup required.
Scan Your AI Stack Free →
2. Why this matters right now
Regulation (EU) 2024/1689 entered into force on 1 August 2024, the twentieth day after its publication in the Official Journal on 12 July 2024.[src] The Regulation applies from 2 August 2026, with earlier dates for Chapters I and II (prohibited practices and AI literacy, from 2 February 2025) and Chapter V (general-purpose AI, from 2 August 2025), and a later date for Annex I legacy high-risk systems already on the market (2 August 2027).[src]
- Fines are severe. Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.[src] For SMEs (including startups), each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4, and 5 — not the higher (see the sketch after this list).[src]
- Conformity assessments take months. High-risk systems require 6-12 months for full conformity assessment. Companies starting now are already behind.
- Enforcement is real. Unlike GDPR's early days, the institutions are already in place: the European AI Office is operational and national AI authorities are being designated across member states.
- The Digital Omnibus proposal. Under COM(2025) 836, Annex III high-risk obligations would apply "latest by 2 December 2027", sixteen months later than the current 2 August 2026 date in Art. 113. The Council general approach (13 March 2026) and the IMCO+LIBE joint committee report (A-10-2026-0073, 18 March 2026) both converge on 2 December 2027 as a fixed date. It has NOT yet been adopted as law.[src] Building a compliance strategy around a maybe is reckless.
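To make the higher-versus-lower asymmetry concrete, here's a minimal sketch of the Art. 5 fine cap logic (illustrative only; other infringement tiers under Art. 99 use different amounts and percentages):

```python
def art5_fine_cap(turnover_eur: float, is_sme: bool) -> float:
    """Illustrative cap for an Art. 5 infringement under Art. 99(3):
    EUR 35M or 7% of total worldwide annual turnover.
    Large undertakings: whichever is higher. SMEs: whichever is lower."""
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A startup with EUR 10M turnover is capped at EUR 700k, not EUR 35M.
print(art5_fine_cap(10_000_000, is_sme=True))      # 700000.0
# A EUR 1B enterprise is capped at EUR 70M (7% beats the fixed amount).
print(art5_fine_cap(1_000_000_000, is_sme=False))  # 70000000.0
```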
3. "Does this even apply to me?" — Common misconceptions
"We're a US company, this doesn't apply to us."
Wrong. If your AI system's output affects people in the EU (EU customers using your product), you're in scope. Same extraterritorial reach as GDPR.
"We just use APIs, we're not building AI."
Using APIs makes you a deployer. Deployers have specific, mandatory obligations. The Act explicitly covers this.
"This is only for big companies."
SMEs get reduced fines and some procedural simplifications, but the core obligations are the same. There is no small-company exemption.
"GDPR already covers this."
There is partial overlap (DPIAs, data governance), but the AI Act adds entirely new requirements: human oversight, conformity assessment, risk management systems, incident reporting, and transparency labeling. GDPR compliance gets you maybe 40% of the way there.
"We'll wait for enforcement actions before worrying."
Conformity assessments take 6-12 months. Technical documentation takes months to prepare. If you wait for the first enforcement action, you're already non-compliant with no path to fix it quickly.
4. Risk classification: where does your product land?
The AI Act classifies AI systems into four risk tiers. Your obligations depend entirely on which tier you're in.
Unacceptable risk (prohibited)
Social scoring, real-time biometric identification in public spaces, manipulation of vulnerable groups. If your product does any of these, stop.
High risk
AI used for: hiring/HR screening, credit/insurance scoring, medical diagnosis, education grading, law enforcement, critical infrastructure, migration.
If you use AI for hiring or credit scoring, you are high-risk. Full stop.
Limited risk
Chatbots (must tell users they're talking to AI), content generation (must label AI-generated content), deep fakes (must disclose). Most SaaS products with customer-facing AI land here.
Minimal risk
Recommendations, analytics, internal tools. Voluntary codes of conduct encouraged.
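As a rough illustration of how the tier mapping works, here's a minimal sketch. It is deliberately simplified: real classification turns on the exact Annex III categories, their conditions, and the Art. 6(3) exemptions, so treat the keyword buckets below as illustrative assumptions, not a legal test:

```python
# Simplified tier buckets for common SaaS use cases (illustrative only).
RISK_TIERS = {
    "prohibited": {"social_scoring", "realtime_public_biometric_id",
                   "vulnerable_group_manipulation"},
    "high":       {"hiring_screening", "credit_scoring", "medical_diagnosis",
                   "education_grading"},
    "limited":    {"customer_chatbot", "content_generation", "deep_fakes"},
    "minimal":    {"product_recommendations", "internal_analytics"},
}

def classify(use_case: str) -> str:
    """Return the first tier whose bucket contains the use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified: review against Annex III"

print(classify("hiring_screening"))  # high
print(classify("customer_chatbot"))  # limited
```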
5. High-risk deployer obligations
If your AI system is classified as high-risk (hiring, credit scoring, medical diagnosis), here's what you're required to do:
| Obligation | Article | What it means | Effort |
|---|---|---|---|
| Risk Management System | Art. 9 | Establish risk identification and mitigation throughout the AI system's lifecycle | ~40h |
| Human Oversight | Art. 26(2) | Assign trained humans who can monitor, intervene, and stop the AI system | ~16h |
| DPIA | Art. 26(9) | Data Protection Impact Assessment before deploying the system | ~16h |
| System Monitoring | Art. 26(5) | Monitor AI system operation, report serious incidents to provider and authorities | ~8h |
| Log Retention | Art. 26(6) | Keep AI-generated logs for at least 6 months | ~8h |
| Disclose AI Interaction | Art. 50(1) | Inform users they are interacting with an AI system | ~4h |
| Label AI-Generated Content | Art. 50(2) | Mark AI-generated text, audio, images, and video in a machine-readable format | ~8h |
| Deep Fake Disclosure | Art. 50(4) | Disclose AI-generated or manipulated content resembling real persons or events | ~4h |
| Incident Reporting | Art. 73 | Report serious incidents within 15 days (2 days if widespread) | ~8h |
| EU Database Registration | Art. 49 | Register your high-risk AI system in the EU AI database | ~4h |
| Total estimated effort | | | ~116h |
See what this looks like for your specific stack: OpenAI deployer, high-risk, EU →
See your exact obligations
Select your AI services and use cases. Get a personalized obligation list in 2 minutes.
Scan Your AI Stack Free →
6. Limited-risk obligations (chatbots, content generation)
If your AI doesn't fall into the high-risk categories but interacts with users or generates content, you have transparency obligations:
| Obligation | Article | What it means | Effort |
|---|---|---|---|
| Disclose AI Interaction | Art. 50(1) | Tell users they are interacting with an AI system | ~4h |
| Label AI Content | Art. 50(2) | Machine-readable labels on AI-generated text, images, audio, video | ~8h |
| Deep Fake Disclosure | Art. 50(4) | Disclose when content has been AI-generated or manipulated to appear realistic | ~4h |
| Total estimated effort | | | ~16h |
See the full breakdown: OpenAI deployer, limited-risk, chatbot →
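For the two transparency duties, here's a minimal sketch of what disclosure and machine-readable labeling can look like in a SaaS product. The Commission has not yet prescribed a labeling standard (watermarking and provenance schemes such as C2PA are candidates), so the metadata field names below are illustrative assumptions, not an official schema:

```python
import json
from datetime import datetime, timezone

# Art. 50(1): surfaced in the chat UI before the first message.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def label_ai_content(text: str, model: str) -> str:
    """Attach an illustrative machine-readable label to AI-generated text
    (Art. 50(2)). Field names are assumptions; no official schema is mandated yet."""
    return json.dumps({
        "content": text,
        "ai_generated": True,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(AI_DISCLOSURE)
print(label_ai_content("Draft product description...", "gpt-4o"))
```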
7. The supply chain problem: what you need from your AI providers
This is the unique challenge for deployers of third-party AI. You are responsible for compliance of the AI system as deployed in your product, but you don't control the underlying model. OpenAI does. Anthropic does.
Some of your obligations depend on documentation from your providers. Under Articles 13 and 47, they are legally required to give you:
- Intended purpose and known limitations of their models
- Performance metrics and known biases
- Training data characteristics
- System architecture descriptions
- Conformity assessment results (if high-risk)
- Known risks and mitigation recommendations
The documentation gap
Most AI providers have not proactively sent this documentation to their API customers. You need to actively request it — and track what you've received vs. what's missing. Every day you wait is a day closer to the deadline without the documentation you legally need.
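One lightweight way to track received versus missing items is a per-provider checklist. The item keys below are our own shorthand for the documentation list above, purely illustrative:

```python
# Illustrative checklist of the documentation items listed above.
DOC_ITEMS = [
    "intended_purpose_and_limitations",
    "performance_metrics_and_known_biases",
    "training_data_characteristics",
    "system_architecture",
    "conformity_assessment_results",
    "known_risks_and_mitigations",
]

# One status map per provider in your stack; False = still outstanding.
providers = {name: {item: False for item in DOC_ITEMS}
             for name in ("openai", "anthropic")}

providers["openai"]["intended_purpose_and_limitations"] = True  # received

def missing_docs(provider: str) -> list[str]:
    return [item for item, got in providers[provider].items() if not got]

print(missing_docs("openai"))  # everything except intended purpose
```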
AIActStack generates ready-to-send email templates referencing the specific articles and documentation you need from each provider in your stack. Try the scanner →
8. Log retention: how long to keep AI system logs (Article 26(6))
If you deploy a high-risk AI system, you must retain the logs that the system automatically generates — for at least six months, or longer if another law (Union or Member State) or the provider's instructions for use require it. This is the deployer-side rule under Article 26(6).
The logs exist because high-risk AI systems must be designed with an automatic logging capability (Article 12 sets that design requirement; Article 19 obliges providers to keep the logs that are under their control). Your side of the compliance boundary is retention and access; the provider's side is the logging capability that produces the records in the first place.
Who is covered
Article 26(6) applies to deployers of high-risk AI systems (Annex III or Annex I categories). If your scan shows a risk level other than high, this specific obligation does not apply — though other record-keeping rules (e.g. GDPR) may still bite separately.
How long to retain
At least six months under Article 26(6). The Regulation's only explicit override is when applicable Union or national law imposes a longer period — for example, GDPR storage-limitation carve-outs affecting personal data, or sectoral record-keeping obligations in regulated industries (financial services, healthcare, transport). Provider instructions for use may set longer operational defaults in practice, but Art. 26(6) itself does not list them as a legal override — treat those as operational recommendations, not a statutory trigger.
Who the logs must be accessible to
Logs are an evidentiary record of how the system operated. They may be requested by market surveillance authorities, and they are the basis for the deployer's obligation to monitor the system and notify the provider, distributor, and relevant authorities when a serious incident is suspected — another duty under Article 26.
Practical setup
If you use a third-party AI provider (OpenAI, Anthropic, Google Vertex AI, Mistral, or another GPAI service) for a high-risk use case, the six-month retention obligation is yours as the deployer. Capture the inputs, outputs, and timestamps that the provider's logging surfaces in your own storage, document your retention policy, and control access so you can satisfy an authority's request.
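Here's a minimal sketch of deployer-side log capture with retention metadata, assuming local JSON files as the storage backend. In production you would use durable, access-controlled storage and a scheduled purge job; all names and the record schema are illustrative:

```python
import json
import uuid
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_DIR = Path("ai_logs")          # stand-in for durable, access-controlled storage
RETENTION = timedelta(days=183)    # at least six months (Art. 26(6)); extend if other law requires

def log_interaction(prompt: str, completion: str, model: str) -> None:
    """Persist one AI interaction with a timestamp and a retention marker.
    A scheduled job (not shown) would purge records past delete_after."""
    now = datetime.now(timezone.utc)
    record = {
        "id": str(uuid.uuid4()),
        "model": model,
        "prompt": prompt,
        "completion": completion,
        "timestamp": now.isoformat(),
        "delete_after": (now + RETENTION).isoformat(),
    }
    LOG_DIR.mkdir(exist_ok=True)
    (LOG_DIR / f"{record['id']}.json").write_text(json.dumps(record))

log_interaction("Screen this CV for the role...", "Summary: ...", "example-model")
```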
Scan your AI stack to see whether your specific combination of services, use cases, and region triggers the Article 26(6) retention duty.
9. Step-by-step action plan
This Week
Scan your AI stack and send provider documentation requests
Identify every AI service in your product. Determine your role and risk level. Send documentation requests to each provider — this takes 10 minutes and starts the clock on getting the information you need.
Scan your stack now →
This Month
Begin high-priority obligations
If you're high-risk: start your risk management system (Art. 9) and DPIA (Art. 26(9)). If limited-risk: implement AI disclosure notices and content labeling. Assign a human oversight lead.
Track your obligations →
Before August 2026
Complete all obligations and document everything
Finish your conformity assessment. Register high-risk systems in the EU database. Ensure logging and monitoring systems are operational. Have incident reporting procedures documented and tested.
Map your full AI supply chain →
Start today. Scan your AI stack.
Find your role, risk level, and exact obligations in 2 minutes. Get ready-to-send provider documentation request emails. Free, no signup required.
Scan Your AI Stack Free →
Key dates
| Date | What applies |
|---|---|
| Feb 2, 2025 | Prohibited practices (Art. 5) + AI literacy (Art. 4) — already in effect |
| Aug 2, 2025 | GPAI rules (Art. 51–56) + governance + penalty framework |
| Aug 2, 2026 | High-risk + transparency obligations apply — the big deadline |
| Aug 2, 2027 | Obligations for high-risk AI in Annex I (regulated products) |
Your exact obligations by AI service
Canonical deployer landing pages for the most common AI stack combinations in the EU.
- OpenAI hiring screening (high-risk)
- OpenAI chatbot (limited-risk)
- Anthropic hiring screening
- Anthropic chatbot
- Google medical diagnosis
- OpenAI credit scoring
- Mistral content generation
- Google recommendations
Scan your AI stack to see the exact obligations for your specific services and use cases.
Get weekly EU AI Act compliance updates
Regulation changes, enforcement updates, and practical compliance tips.
This guide provides general information based on the EU AI Act text (Regulation 2024/1689). It is not legal advice. Consult a qualified legal professional for formal compliance guidance specific to your situation.
Related Guides
- EU AI Act compliance for OpenAI, Anthropic & Google users — provider-specific obligations and documentation request templates
- EU AI Act decision tree — determine your role and risk level in 5 questions
- GDPR to EU AI Act compliance bridge — map existing GDPR work to AI Act requirements
- EU AI Act curriculum — 57 free lessons covering the full Regulation
Sources
- Regulation (EU) 2024/1689 — full text of the EU AI Act (EUR-Lex)
- European AI Office — regulatory framework and implementation guidance
- Digital Omnibus on AI — European Parliament legislative train tracking the proposed timeline amendments
All legal claims in this guide are cross-referenced against the official EUR-Lex Regulation text. Claims are verified and updated within 14 days of official guidance changes.