Does the EU AI Act Apply to Me? A Decision Tree for Third-Party AI Users

By AIActStack · Published April 4, 2026 · Last updated April 17, 2026

Five questions. That's all it takes to figure out whether you have EU AI Act obligations, what role you play, and what risk level applies. Walk through the tree below, or let the scanner do it for you in 2 minutes.

1. The decision tree

Question 1: Does your product use AI?

Art. 3(1) defines an "AI system" as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.[src] This includes using third-party AI APIs (OpenAI, Anthropic, Google) as well as self-trained models.

No → Minimal concern

Your product carries no obligations under the AI Act. If your staff use AI systems internally, Article 4 (AI literacy) may still require them to have a sufficient understanding of AI — but there are no product obligations.

Yes → Continue to Q2

You have at least some obligations. The next questions determine which ones.

Question 2: Do any of your users access your product from the EU?

The EU AI Act has extraterritorial scope (Article 2). It doesn't matter where your company is incorporated. What matters is whether your AI system's output is used within the EU.

No, US only → Likely out of scope

The EU AI Act likely doesn't apply. Voluntary codes of conduct are encouraged (Article 95) but not required. Keep monitoring — your user base may grow into the EU.

Yes (EU, US+EU, or Global) → Continue to Q3

The Act applies to you. Time to determine your role.

Question 3: Do you use third-party AI APIs, or did you build/train the model yourself?

This determines your role under the regulation. Your role dictates which obligations apply to you.

Third-party APIs only (OpenAI, Anthropic, Google, etc.) → You're a Deployer

You use AI systems built by others, under your own authority. Most SaaS companies using AI APIs are deployers. See deployer obligations →

Self-built or fine-tuned models → You're a Provider

You develop or train AI systems and place them on the market or put them into service. Provider obligations are a superset of deployer obligations — significantly more work. If you use both third-party APIs and custom models, you are a Provider for your own models and a Deployer for the API-based systems; plan around the Provider role, since it is the more demanding one.

Question 4: What do you use AI for?

Your use case determines your risk classification. This is the single biggest factor in how many obligations you have.

High Risk (Annex III)

Hiring / HR screening · Credit / insurance scoring · Medical diagnosis

Full obligation set: risk management (~40h), human oversight (~16h), DPIA (~16h), monitoring (~8h), log retention (~8h), incident reporting (~8h), transparency (~16h), EU database registration (~4h). Total: ~116h for deployers.

See example: OpenAI deployer, high-risk, hiring →

Limited Risk (Transparency obligations)

Customer-facing chatbot · Content generation · Fraud detection

Transparency obligations: disclose AI interaction (~4h), label AI content (~8h), deep fake disclosure (~4h). Total: ~16h.

See example: OpenAI deployer, limited-risk, chatbot →

Minimal Risk

Recommendation engine · Internal tools / analytics

No mandatory obligations. Voluntary codes of conduct encouraged (Article 95).

See example: OpenAI deployer, minimal-risk →

Question 5: Do you use multiple AI services?

Many SaaS companies use multiple AI providers (e.g., OpenAI for chat + Anthropic for HR screening). Each AI component gets classified independently.

If any one component is high-risk, your overall stack is high-risk

Using OpenAI for a chatbot (limited-risk) and Anthropic for HR screening (high-risk)? Your stack includes a high-risk system. The high-risk obligations apply to the HR screening component specifically. The chatbot still only needs transparency obligations. Track each component separately.
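The five questions above can be sketched as a short classification routine. This is an illustrative sketch, not legal advice: the function names, the use-case buckets, and the "highest risk wins" stack rule are simplifications drawn from this article, not from the Regulation text itself.

```python
# Illustrative sketch of the decision tree in this article. The bucket
# contents below mirror the Q4 examples and are NOT an exhaustive reading
# of Annex III or Article 50.

RISK_ORDER = ["minimal", "limited", "high"]

HIGH_RISK = {"hiring", "credit_scoring", "medical_diagnosis"}       # Annex III examples
LIMITED_RISK = {"chatbot", "content_generation", "fraud_detection"}  # transparency-tier examples

def classify_component(use_case: str) -> str:
    """Q4: risk level for a single AI component, by use case."""
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

def assess(uses_ai: bool, serves_eu: bool, builds_models: bool, use_cases: list):
    """Walk Q1-Q5: scope, role, per-component risk, and overall stack risk."""
    if not uses_ai or not serves_eu:                      # Q1, Q2: out of scope
        return {"in_scope": False}
    role = "provider" if builds_models else "deployer"    # Q3
    risks = {u: classify_component(u) for u in use_cases} # Q4, per component (Q5)
    stack = max(risks.values(), key=RISK_ORDER.index)     # highest-risk component wins
    return {"in_scope": True, "role": role, "risks": risks, "stack_risk": stack}

result = assess(uses_ai=True, serves_eu=True, builds_models=False,
                use_cases=["chatbot", "hiring"])
# A deployer with a limited-risk chatbot and high-risk hiring component:
# the stack is high-risk, but obligations attach per component.
```

The key design point matches the text above: risk is tracked per component, and only the overall stack label takes the maximum.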

Skip the tree — let the scanner do it

Select your AI services and use cases. Get your role, risk level, and complete obligation list in 2 minutes.

Scan Your AI Stack Free →

2. "I'm exempt" — common misconceptions

"We're a US company. EU regulations don't apply to us."

The EU AI Act has extraterritorial scope (Article 2). If the output of your AI system is used in the EU — even by a single EU-based customer — you're in scope. Same principle as GDPR. Company incorporation location is irrelevant.

"We just use APIs. We're not building AI."

Using APIs makes you a deployer. A "deployer" is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity (Art. 3(4)).[src] Your API usage is your deployment. Deployers have specific mandatory obligations that exist independently of the provider's obligations.

"Our AI use case is too small to matter."

There is no de minimis threshold. A chatbot serving 10 EU users has the same transparency obligations as one serving 10 million. The regulation is use-case-based, not scale-based. For SMEs (including startups), each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4, and 5 — not the higher.[src] But the core obligations are the same.

"OpenAI/Anthropic will handle compliance for us."

They can't. Provider obligations and deployer obligations are separate. OpenAI must provide documentation (Article 13), maintain technical docs (Article 11), and undergo conformity assessment (Article 43). You must implement human oversight (Article 26(2)), perform DPIAs (Article 26(9)), monitor operations (Article 26(5)), and handle transparency disclosures (Article 50). Neither party can fulfill the other's obligations.

"We'll wait and see if enforcement actually happens."

Conformity assessments take 6–12 months. Technical documentation takes months. If you start after the first enforcement action, you're looking at 6–12 months of non-compliance with no way to accelerate. Companies that started in 2025 are ahead.

"The deadline might be pushed back."

Under COM(2025) 836, Annex III high-risk obligations would apply "latest by 2 December 2027" — sixteen months later than the current 2 August 2026 date in Art. 113. The Council general approach (13 March 2026) and the IMCO+LIBE joint committee report (A-10-2026-0073, 18 March 2026) both converge on 2 December 2027 as a fixed date. None of this has been adopted as law yet.[src] Even if the delay passes, Chapter I (general provisions, including the Art. 4 AI-literacy duty) and Chapter II (prohibited practices, Art. 5) have applied since 2 February 2025.[src] Building a compliance strategy around "maybe it'll be delayed" is a gamble — and even the delayed date would require starting now.

3. The supply chain wrinkle

The decision tree gives you your role and risk level. But there's a complication the tree doesn't capture: even if your risk level is low, your AI provider's obligations still create work for you.

Under Article 13, providers are legally required to give deployers transparency information and instructions for use. You also need their risk management data (Article 9) and conformity declarations (Article 47) to fulfill your own obligations — even basic ones like writing transparency disclosures or conducting DPIAs.

The problem: most providers haven't set up structured compliance documentation programs. You'll need to proactively request it.

Action item: Regardless of your risk level, send documentation request emails to your AI providers. Our provider-specific guide has ready-to-send email templates for OpenAI, Anthropic, Google, and others.

Get your personalized obligation checklist

The scanner does everything this decision tree does, plus generates provider email templates and lets you track compliance progress.

Scan Your AI Stack Free →

4. What to do next

Step 1: Scan your AI stack

The scanner automates this decision tree and gives you actionable output: your role, risk level, obligation list, effort estimates, and provider email templates.

Step 2: Read the deployer guide

The complete deployer guide covers every obligation in detail with article references, effort estimates, and deadlines.

Step 3: Request provider documentation

The provider guide has ready-to-send emails for OpenAI, Anthropic, Google, and others.

Step 4: Map your GDPR compliance

Already GDPR compliant? The GDPR bridge guide shows what carries over — about 40–60% of the work.

Start with the scanner

2 minutes. Free. No signup required.

Scan Your AI Stack Free →


Sources

All legal claims in this guide are cross-referenced against the official EUR-Lex Regulation text. Claims are verified and updated within 14 days of official guidance changes.

This guide provides general information based on the EU AI Act text (Regulation 2024/1689). It is not legal advice. Consult a qualified legal professional for formal compliance guidance specific to your situation.