The Regulation (Foundation)
What the law actually says: territorial reach, key definitions, the risk-based architecture, the phased timeline, and the Article 4 AI literacy obligation.
Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It was signed on 13 June 2024 and entered into force on 1 August 2024.
Legislative History
- April 2021: European Commission proposes the AI Act
- June 2023: European Parliament adopts its negotiating position
- December 2023: Political agreement reached (trilogue)
- March 2024: Parliament formally adopts the final text
- 12 July 2024: Published in the Official Journal (OJ L, 2024/1689)
- 1 August 2024: Enters into force (twentieth day after OJ publication per Art. 113)
Why This Law Exists
Three driving forces:
- Fundamental rights protection: AI systems making decisions about hiring, credit, healthcare, and law enforcement can violate human dignity, non-discrimination, and privacy rights guaranteed by the EU Charter of Fundamental Rights.
- Market harmonization: Without a unified EU-wide framework, each member state would create its own AI rules, fragmenting the single market. The Act creates one set of rules for all 27 member states.
- Global regulatory leadership: The "Brussels Effect" — by setting the world's first AI standard, the EU shapes global norms. Companies worldwide must comply if they serve EU customers, similar to GDPR's global impact.
What Problem It Solves
Before the AI Act, there was no legal clarity on:
- Who is responsible when an AI system causes harm (provider? deployer? both?)
- What safety standards AI systems must meet
- What documentation must exist about how AI systems work
- When humans must be able to override AI decisions
- What AI uses are simply too dangerous to allow
The AI Act has 13 Chapters (113 articles, 180 recitals) and 13 Annexes. Here's the map:
| Chapter | Articles | What It Covers |
|---|---|---|
| I | 1-4 | General provisions: subject matter, scope, definitions, AI literacy |
| II | 5 | Prohibited AI practices |
| III | 6-49 | High-risk AI systems (classification, requirements, obligations) |
| IV | 50 | Transparency obligations for certain AI systems |
| V | 51-56 | General-purpose AI models (GPAI) |
| VI | 57-63 | Measures in support of innovation (sandboxes, SMEs) |
| VII | 64-70 | Governance (AI Board, AI Office, national authorities) |
| VIII | 71 | EU database for high-risk AI systems |
| IX | 72-94 | Post-market monitoring, information sharing, market surveillance |
| X | 95-96 | Codes of conduct and guidelines |
| XI | 97-98 | Delegation of power and committee procedure |
| XII | 99-101 | Penalties |
| XIII | 102-113 | Final provisions (incl. entry into force and application, Art. 113) |
Key Annexes
| Annex | Purpose |
|---|---|
| I | List of Union harmonisation legislation (machinery, medical devices, etc.) whose products can make embedded AI high-risk under Art. 6(1) |
| II | List of criminal offences referred to in the Art. 5 law-enforcement exception for real-time remote biometric identification |
| III | High-risk AI use cases — the 8 categories (hiring, credit, medical, etc.) |
| IV | Technical documentation requirements — 9 sections providers must document |
| V | EU declaration of conformity content |
| VI | Conformity assessment procedures (internal control) |
| VII | Conformity based on assessment of quality management system |
| VIII | Information to submit for high-risk AI system registration |
| IX | Information to submit when registering real-world testing of Annex III high-risk AI (Art. 60) |
| X | EU legislation on large-scale IT systems (migration, borders) |
| XI | Technical documentation for GPAI model providers |
| XII | Transparency information for GPAI model providers |
| XIII | Criteria for designation of GPAI models with systemic risk |
The EU AI Act applies to (Article 2):
- Providers placing AI systems on the EU market or putting them into service in the EU — regardless of where the provider is established (US, Asia, anywhere)
- Deployers of AI systems located within the EU
- Providers and deployers outside the EU where the output of their AI system is used in the EU
The Extraterritorial Reach
Like GDPR, the AI Act has extraterritorial effect. A US SaaS company using OpenAI's API to serve EU customers is subject to the Act. The key trigger is not where you're based — it's where your AI system's output affects people.
What's Excluded
- AI systems used exclusively for military/defense purposes
- AI used solely for scientific research and development (before placing on market)
- AI used by natural persons for purely personal, non-professional activities
- AI systems released under free and open-source licenses (unless placed on the market as high-risk systems, or covered by Art. 5 or Art. 50)
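The scope triggers and exclusions above can be sketched as a first-pass screen. This is purely illustrative: the field names and simplified logic are our own, not terminology from the Regulation, and a real scope analysis needs legal review.

```python
from dataclasses import dataclass

@dataclass
class AISystemUse:
    actor_role: str            # "provider" or "deployer"
    actor_in_eu: bool          # actor established/located in the EU?
    placed_on_eu_market: bool  # placed on the EU market / put into service in the EU?
    output_used_in_eu: bool    # is the system's output used in the EU?
    exclusive_military_use: bool = False
    pure_research_pre_market: bool = False
    personal_non_professional: bool = False

def in_scope(use: AISystemUse) -> bool:
    """Rough first-pass screen for Art. 2 applicability (not legal advice)."""
    # Exclusions first (military/defence, pre-market research, personal use)
    if (use.exclusive_military_use or use.pure_research_pre_market
            or use.personal_non_professional):
        return False
    # Providers: the trigger is placing on the EU market, wherever established
    if use.actor_role == "provider" and use.placed_on_eu_market:
        return True
    # Deployers located in the EU
    if use.actor_role == "deployer" and use.actor_in_eu:
        return True
    # Extraterritorial catch-all: output used in the EU
    return use.output_used_in_eu

# A US provider selling into the EU is in scope despite being outside the EU:
us_provider = AISystemUse("provider", actor_in_eu=False,
                          placed_on_eu_market=True, output_used_in_eu=True)
```

Note how the final `return` encodes the extraterritorial trigger: even an actor with no EU establishment is caught once the system's output is used in the EU.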
Article 3 defines 68 terms. The critical ones:
AI System
Art. 3(1) defines "AI system" as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[src]
This is broad. It covers: machine learning models, expert systems, statistical approaches, search and optimization methods, and more. If your software infers outputs from inputs with some autonomy, it's likely an AI system.
The Role System
| Role | Definition | Example |
|---|---|---|
| Provider | A "provider" is a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge[src] | OpenAI (provides GPT-4), Anthropic (provides Claude) |
| Deployer | A "deployer" is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity[src] | A SaaS company using OpenAI's API in their product |
| Distributor | Makes an AI system available on the market without modifying it | A reseller offering a white-labeled AI product |
| Importer | Places a non-EU AI system on the EU market | EU company importing a Chinese AI surveillance system |
Role Shifting
Your role can change. If a deployer substantially modifies an AI system, they become a provider of that modified system (Article 25). Fine-tuning a model or changing its intended purpose can trigger this.
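The role logic, including the Article 25 shift, can be summarised as a small decision function. The role names track Art. 3, but the boolean tests are our own simplification of Arts. 3 and 25, not the statutory wording.

```python
def determine_role(develops_system: bool,
                   markets_under_own_name: bool,
                   uses_under_own_authority: bool,
                   substantially_modifies: bool = False) -> str:
    """Very rough provider/deployer screen, including the Art. 25 role shift."""
    # Art. 25: substantially modifying a high-risk system (or changing its
    # intended purpose) turns a deployer into the provider of that system.
    if substantially_modifies:
        return "provider"
    # Developing the system, or placing it on the market under your own
    # name or trademark, makes you a provider (Art. 3(3)).
    if develops_system or markets_under_own_name:
        return "provider"
    # Using a system under your own authority makes you a deployer (Art. 3(4)).
    if uses_under_own_authority:
        return "deployer"
    return "neither (check distributor/importer definitions)"

# A SaaS company calling a third-party model API in its product:
print(determine_role(develops_system=False, markets_under_own_name=False,
                     uses_under_own_authority=True))   # deployer
```

Fine-tuning an upstream model and shipping the result under your own brand flips both the `substantially_modifies` and `markets_under_own_name` tests, which is why it triggers full provider obligations.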
The AI Act classifies AI systems into four risk tiers, with obligations proportional to the risk:
Tier 1: Unacceptable Risk (Prohibited)
Banned outright (Article 5), subject only to narrowly drawn exceptions (notably certain law-enforcement uses of real-time remote biometric identification). Covered in detail in Lesson 2.1.
Tier 2: High Risk
An AI system is high-risk if either (a) it is intended to be used as a safety component of, or is itself, a product covered by Annex I Union harmonisation legislation and is required to undergo a third-party conformity assessment under that legislation (Art. 6(1)), or (b) it falls within one of the use-cases listed in Annex III (Art. 6(2)), subject to the filter in Art. 6(3)[src]
High-risk systems must meet: risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity requirements.
Tier 3: Limited Risk (Transparency Obligations)
AI that interacts with people or generates content (Article 50):
- Chatbots: must disclose AI interaction
- Content generation: must label AI-generated text, images, audio, video
- Deep fakes: must disclose AI manipulation
- Emotion recognition: must inform subjects
Tier 4: Minimal Risk
No mandatory requirements. Voluntary codes of conduct encouraged. Examples: AI-powered spam filters, recommendation engines, inventory management.
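The four-tier triage described above can be sketched as a toy classifier. The booleans are simplified stand-ins for the real Art. 5, Art. 6, and Art. 50 tests, each of which requires detailed legal analysis in practice.

```python
def risk_tier(prohibited_practice: bool,
              annex_i_safety_component: bool,
              annex_iii_use_case: bool,
              interacts_or_generates: bool) -> str:
    """Toy mapping of an AI system onto the Act's four risk tiers."""
    if prohibited_practice:                            # Art. 5
        return "unacceptable (prohibited)"
    if annex_i_safety_component or annex_iii_use_case:  # Art. 6(1) / Art. 6(2)
        return "high"                                  # subject to the Art. 6(3) filter
    if interacts_or_generates:                         # Art. 50
        return "limited (transparency)"
    return "minimal"

# CV screening for hiring is an Annex III (employment) use case:
print(risk_tier(prohibited_practice=False, annex_i_safety_component=False,
                annex_iii_use_case=True, interacts_or_generates=False))  # high
```

The ordering matters: the tiers are evaluated from most to least severe, so a system that both screens job applicants and chats with them is high-risk, not merely limited-risk.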
Why This Model?
The risk-based approach was chosen over alternatives (horizontal ban, sector-specific regulation, self-regulation) because:
- It's proportional — trivial AI doesn't need heavy regulation
- It's technology-neutral — applies to any AI technique, not just ML
- It's future-proof — the Commission can add new high-risk categories via Annex updates
The AI Act uses a phased rollout (Article 113). Each date below is verified against the official Regulation text and linked to the primary source.
- Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src] — prohibited practices (Art. 5) + AI literacy (Art. 4).
- Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101[src] — GPAI model obligations, national authorities, penalty framework, EU governance.
- The Regulation applies from 2 August 2026, apart from the earlier dates for Chapters I-II (from 2 February 2025) and Chapter V (from 2 August 2025) and the later date of 2 August 2027 for high-risk AI embedded in Annex I products[src] — the headline deadline; high-risk (Annex III) + transparency obligations apply.
- Art. 6(1) and the corresponding high-risk obligations for AI systems embedded in products covered by Annex I Union harmonisation legislation apply from 2 August 2027[src] — high-risk AI embedded in regulated products (Annex I).
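The phased dates above lend themselves to a simple lookup. The dict below restates the Art. 113 phases already listed; the data-structure layout and function are just an illustrative convenience, not an official source.

```python
import datetime as dt

# Application dates per Art. 113 as summarised in the bullets above.
APPLICATION_DATES = {
    dt.date(2025, 2, 2): "Chapters I-II: AI literacy (Art. 4), prohibited practices (Art. 5)",
    dt.date(2025, 8, 2): "GPAI models (Ch. V), governance, penalties (except Art. 101)",
    dt.date(2026, 8, 2): "General application: Annex III high-risk, Art. 50 transparency",
    dt.date(2027, 8, 2): "Art. 6(1): high-risk AI embedded in Annex I products",
}

def obligations_in_force(on: dt.date) -> list[str]:
    """Return the phases that already apply on a given date."""
    return [desc for start, desc in sorted(APPLICATION_DATES.items()) if start <= on]

# In mid-2025, only the first phase applies:
print(obligations_in_force(dt.date(2025, 6, 1)))
```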
What This Means for You
- Right now: AI literacy (Art. 4) and prohibited practices (Art. 5) have been enforceable since 2 February 2025, and GPAI model obligations since 2 August 2025.
- By August 2026: all deployer obligations (Art. 26), transparency requirements (Art. 50), conformity assessment, EU database registration — everything in Chapters III-IV must be in place.
- Conformity assessments typically take 6-12 months. If you haven't started by now, you're behind.
The European Commission published the "Digital Omnibus on AI" as COM(2025) 836 on 19 November 2025, proposing amendments to Regulation (EU) 2024/1689 including extensions to the application dates for high-risk AI systems and transitional provisions[src]
What It Proposes
- Under COM(2025) 836, Annex III high-risk obligations would apply "latest by 2 December 2027" — sixteen months later than the current 2 August 2026 date in Art. 113. The Council general approach (13 March 2026) and the IMCO+LIBE joint committee report (A-10-2026-0073, 18 March 2026) both converge on 2 December 2027 as a fixed date. NOT yet adopted as law[src]
- Under COM(2025) 836, high-risk obligations for AI embedded in Annex I regulated products would apply "latest by 2 August 2028" — twelve months later than the current 2 August 2027 date in Art. 113(c). NOT yet adopted[src]
- Additional simplifications for SMEs and reducing overlap with sector-specific regulations (full scope: see the proposal text itself).
Current Status
The Council of the EU adopted a general approach on the Digital Omnibus on AI on 13 March 2026, endorsing fixed replacement dates of 2 December 2027 (standalone high-risk, Annex III) and 2 August 2028 (high-risk embedded in regulated products, Annex I). A general approach is a negotiating position, not law[src]
The European Parliament's IMCO and LIBE committees adopted a joint report on the Digital Omnibus on AI on 18 March 2026, reference A-10-2026-0073 (on file 2025/0359(COD)). Plenary vote and trilogue had not occurred by mid-April 2026 per the EP Legislative Train entry[src]
Should You Wait?
No. Building a compliance strategy around a "maybe" is reckless. The proposal may:
- Be rejected entirely
- Be adopted with different provisions
- Take longer than expected to pass
Even if parts are delayed, the fundamental requirements remain the same — you just get more time. Starting now gives you a head start regardless.
Article 4: The AI Literacy Obligation
Providers and deployers must take measures to ensure a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems on their behalf, having regard to their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used[src]
Key Points
- Already in force: Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src] This is not a future obligation — it's current law.
- Applies to everyone: Both providers and deployers, regardless of risk level. Even minimal-risk AI deployments trigger this.
- "Sufficient" is context-dependent: The required literacy level depends on the nature of the AI system, the risk it poses, and the person's role.
- Technical knowledge, experience, education, and training all count — there's no single prescribed format.
How to Comply
- Identify who in your organization interacts with AI systems
- Assess what level of understanding they need for their role
- Provide appropriate training (this curriculum, for example)
- Document that training was provided and when
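Step 4 (documentation) can be as simple as a structured training register. Art. 4 prescribes no record format, so everything below — the class names, fields, and example entry — is purely illustrative.

```python
import datetime as dt
from dataclasses import dataclass, field

@dataclass
class TrainingRecord:
    person: str
    role: str                 # informs what "sufficient" literacy means for them
    course: str
    completed_on: dt.date
    notes: str = ""

@dataclass
class LiteracyRegister:
    """Minimal evidence trail that AI literacy training was provided and when."""
    records: list[TrainingRecord] = field(default_factory=list)

    def log(self, record: TrainingRecord) -> None:
        self.records.append(record)

    def trained_people(self) -> set[str]:
        return {r.person for r in self.records}

register = LiteracyRegister()
register.log(TrainingRecord("A. Ruiz", "support engineer",
                            "EU AI Act foundations", dt.date(2025, 3, 10)))
print(register.trained_people())   # {'A. Ruiz'}
```

Capturing the person's role alongside the course matters because, as noted above, "sufficient" is context-dependent: the register should let you show that each person's training matched their exposure to the AI system.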
The AI Act doesn't exist in isolation. It interacts with several other EU regulations:
| Regulation | Overlap with AI Act |
|---|---|
| GDPR | Data protection impact assessments (Art. 35 GDPR ↔ Art. 26(9) AI Act). Lawful basis for training data. Rights of data subjects in automated decisions (Art. 22 GDPR). ~40% overlap. |
| Digital Services Act (DSA) | Recommender systems transparency. Content moderation using AI. Systemic risk assessments for very large platforms. |
| Digital Markets Act (DMA) | Gatekeepers using AI for ranking, advertising, profiling. Interoperability requirements. |
| NIS2 Directive | Cybersecurity requirements for AI systems in critical infrastructure. Incident reporting obligations overlap. |
| General Product Safety Regulation (GPSR) | AI embedded in consumer products. Annex I of the AI Act cross-references product safety legislation. |
| Machinery Regulation | AI in industrial machinery and robots. Safety requirements for autonomous systems. |
The "Lex Specialis" Principle
Where sector-specific EU legislation already imposes equivalent or stricter requirements, those take precedence. The AI Act fills gaps — it doesn't override existing safety regulations.
What This Means for Deployers
If you're already GDPR compliant, you have a head start (~40% of AI Act requirements overlap). Your existing DPIA process can be extended for AI. Your data governance practices partially satisfy Art. 10 requirements.
Test Your Understanding
Answer these without looking back. Then check your answers against the lessons above.
- Your US-based SaaS company uses Claude's API to power a customer support chatbot for EU customers. What is your role under the AI Act?
  Think: Are you the one who built the AI, or the one using it?
- A company uses AI to screen job applicants' resumes. What risk tier is this?
  Think: Is hiring/HR screening in Annex III?
- Which AI Act obligation has been enforceable since February 2, 2025?
  Think: What's in Phase 1 of the timeline?
- Your startup fine-tunes an open-source LLM and offers it as a SaaS product. Are you exempt from the AI Act because it's open-source?
  Think: What are the exceptions to the open-source exemption?
- Name 3 things the AI Act requires that GDPR does NOT.
  Think: What's in the ~60% that's genuinely new?
Show Answers
- Deployer. You use an AI system (Claude) under your own authority. Anthropic is the provider.
- High Risk. Employment/HR screening is explicitly listed in Annex III, Category 4.
- AI literacy (Art. 4) and prohibited practices (Art. 5). Both have been in force since Feb 2, 2025.
- No. The open-source exemption does NOT apply when you place the system on the market under your own name. You're a provider with full obligations.
- Human oversight (Art. 14/26), conformity assessment (Art. 43), supply chain documentation (Arts. 13/47), post-market monitoring (Art. 72), incident reporting (Art. 73), EU database registration (Art. 49).