AI Literacy Training for the EU AI Act

Free, self-paced AI literacy curriculum covering the EU AI Act end-to-end. 5 modules, 57 lessons. Completing this curriculum contributes to your Article 4 AI literacy obligation [src].

Claims verified: April 17, 2026 · Sources: Regulation (EU) 2024/1689

Module 1: The Regulation (Foundation)

Understanding the law itself — structure, scope, definitions

1.1 Why the EU AI Act Exists ~20 min

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. The final text was signed on June 13, 2024, published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024.

Legislative History

  • April 2021: European Commission proposes the AI Act
  • June 2023: European Parliament adopts its negotiating position
  • December 2023: Political agreement reached (trilogue)
  • March 2024: Parliament formally adopts the final text
  • 12 July 2024: Published in the Official Journal (OJ L, 2024/1689)
  • 1 August 2024: Enters into force (twentieth day after OJ publication per Art. 113)

Why This Law Exists

Three driving forces:

  1. Fundamental rights protection: AI systems making decisions about hiring, credit, healthcare, and law enforcement can violate human dignity, non-discrimination, and privacy rights guaranteed by the EU Charter of Fundamental Rights.
  2. Market harmonization: Without a unified EU-wide framework, each member state would create its own AI rules, fragmenting the single market. The Act creates one set of rules for all 27 member states.
  3. Global regulatory leadership: The "Brussels Effect" — by setting the world's first AI standard, the EU shapes global norms. Companies worldwide must comply if they serve EU customers, similar to GDPR's global impact.

What Problem It Solves

Before the AI Act, there was no legal clarity on:

  • Who is responsible when an AI system causes harm (provider? deployer? both?)
  • What safety standards AI systems must meet
  • What documentation must exist about how AI systems work
  • When humans must be able to override AI decisions
  • What AI uses are simply too dangerous to allow
The EU AI Act is not about regulating AI research — it regulates AI systems placed on the market or put into service. If your AI system affects people in the EU, you're in scope.
1.2 Structure Overview — Chapters & Annexes ~25 min

The AI Act has 13 Chapters (113 articles, 180 recitals) and 13 Annexes. (The Commission's 2021 proposal was organised into "Titles"; the final Regulation uses Chapters.) Here's the map:

Chapter | Articles | What It Covers
I | 1-4 | General provisions: subject matter, scope, definitions, AI literacy
II | 5 | Prohibited AI practices
III | 6-49 | High-risk AI systems (classification, requirements, obligations)
IV | 50 | Transparency obligations for certain AI systems
V | 51-56 | General-purpose AI models (GPAI)
VI | 57-63 | Measures in support of innovation (sandboxes, SMEs)
VII | 64-70 | Governance (AI Board, AI Office, national authorities)
VIII | 71 | EU database for high-risk AI systems
IX | 72-94 | Post-market monitoring, information sharing, market surveillance
X | 95-96 | Codes of conduct and guidelines
XI | 97-98 | Delegation of power and committee procedure
XII | 99-101 | Penalties
XIII | 102-113 | Final provisions (amendments, transitional periods, entry into force and application)

Key Annexes

Annex | Purpose
I | Union harmonisation legislation — the regulated-product laws (machinery, medical devices, toys, etc.) that pull embedded AI into high-risk scope
II | List of criminal offences referred to in Art. 5 (real-time biometric identification exceptions)
III | High-risk AI use cases — the 8 categories (hiring, credit, medical, etc.)
IV | Technical documentation requirements — 9 sections providers must document
V | EU declaration of conformity content
VI | Conformity assessment procedure based on internal control
VII | Conformity based on assessment of the quality management system and the technical documentation
VIII | Information to submit for high-risk AI system registration (Art. 49)
IX | Information to submit when registering real-world testing of Annex III high-risk systems (Art. 60)
X | EU legislation on large-scale IT systems (migration, borders)
XI | Technical documentation for GPAI model providers
XII | Transparency information for GPAI model providers
XIII | Criteria for designation of GPAI models with systemic risk
For deployers of third-party AI (most SaaS companies), the critical parts are: Chapter II (prohibited practices), Chapter III Section 3 (deployer obligations, Art. 26), Chapter IV (transparency, Art. 50), and Annex III (high-risk classification).
1.3 Scope & Extraterritorial Reach ~15 min

The EU AI Act applies to (Article 2):

  1. Providers placing AI systems on the EU market or putting them into service in the EU — regardless of where the provider is established (US, Asia, anywhere)
  2. Deployers of AI systems located within the EU
  3. Providers and deployers outside the EU where the output of their AI system is used in the EU

The Extraterritorial Reach

Like GDPR, the AI Act has extraterritorial effect. A US SaaS company using OpenAI's API to serve EU customers is subject to the Act. The key trigger is not where you're based — it's where your AI system's output affects people.

What's Excluded

  • AI systems used exclusively for military/defense purposes
  • AI used solely for scientific research and development (before placing on market)
  • AI used by natural persons for purely personal, non-professional activities
  • AI systems released under free and open-source licenses (with exceptions for high-risk)
Open-source exemptions do NOT apply if the AI system is high-risk (Annex III) or if it's a GPAI model with systemic risk. Open-source providers of high-risk AI still have full obligations.
1.4 Key Definitions ~20 min

Article 3 defines 68 terms. The critical ones:

AI System

Art. 3(1) defines "AI system" as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[src]

This is broad. It covers: machine learning models, expert systems, statistical approaches, search and optimization methods, and more. If your software infers outputs from inputs with some autonomy, it's likely an AI system.

The Role System

Role | Definition | Example
Provider | A "provider" is a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge[src] | OpenAI (provides GPT-4), Anthropic (provides Claude)
Deployer | A "deployer" is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity[src] | A SaaS company using OpenAI's API in their product
Distributor | Makes an AI system available on the market without modifying it | A reseller offering a white-labeled AI product
Importer | Places a non-EU AI system on the EU market | EU company importing a Chinese AI surveillance system

Role Shifting

Your role can change. If a deployer substantially modifies an AI system, they become a provider of that modified system (Article 25). Fine-tuning a model or changing its intended purpose can trigger this.

Most SaaS companies using third-party AI APIs are deployers. You have specific obligations the provider cannot fulfill for you. If you fine-tune or substantially modify the AI, you may become a provider with heavier obligations.
1.5 The Risk-Based Approach ~20 min

The AI Act classifies AI systems into four risk tiers, with obligations proportional to the risk:

Tier 1: Unacceptable Risk (Prohibited)

Banned outright (Article 5). The only exceptions are the narrow law-enforcement carve-outs for real-time remote biometric identification. Covered in detail in Lesson 2.1.

Tier 2: High Risk

An AI system is high-risk if either (a) it is intended to be used as a safety component of, or is itself, a product covered by Annex I Union harmonisation legislation and is required to undergo a third-party conformity assessment under that legislation (Art. 6(1)), or (b) it falls within one of the use-cases listed in Annex III (Art. 6(2)), subject to the filter in Art. 6(3)[src]

High-risk systems must meet: risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity requirements.

Tier 3: Limited Risk (Transparency Obligations)

AI that interacts with people or generates content (Article 50):

  • Chatbots: must disclose AI interaction
  • Content generation: must label AI-generated text, images, audio, video
  • Deep fakes: must disclose AI manipulation
  • Emotion recognition: must inform subjects

Tier 4: Minimal Risk

No mandatory requirements. Voluntary codes of conduct encouraged. Examples: AI-powered spam filters, recommendation engines, inventory management.

Why This Model?

The risk-based approach was chosen over alternatives (horizontal ban, sector-specific regulation, self-regulation) because:

  • It's proportional — trivial AI doesn't need heavy regulation
  • It's technology-neutral — applies to any AI technique, not just ML
  • It's future-proof — the Commission can add new high-risk categories via Annex updates
Most SaaS products with customer-facing AI land in Limited Risk (chatbots, content gen). If you use AI for hiring, credit, or healthcare decisions, you're High Risk with 10x the compliance burden.
1.6 Timeline & Phased Enforcement ~15 min

The AI Act uses a phased rollout (Article 113). Each date below is verified against the official Regulation text and linked to the primary source.

  • Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src] — prohibited practices (Art. 5) + AI literacy (Art. 4).
  • Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101[src] — GPAI model obligations, national authorities, penalty framework, EU governance.
  • The Regulation applies from 2 August 2026, with earlier dates for Chapters I and II (prohibited practices and AI literacy, from 2 February 2025) and Chapter V (general-purpose AI, from 2 August 2025), and a later date for high-risk AI embedded in Annex I regulated products (2 August 2027)[src] — the headline deadline; high-risk (Annex III) + transparency obligations apply.
  • Art. 6(1) and the corresponding high-risk obligations for AI systems embedded in products covered by Annex I Union harmonisation legislation apply from 2 August 2027[src] — high-risk AI embedded in regulated products (Annex I).

What This Means for You

  • Right now: AI literacy (Art. 4) and prohibited practices (Art. 5) have been enforceable since 2 February 2025; GPAI-model, governance, and penalty provisions since 2 August 2025. Yes, these are already enforceable.
  • By August 2026: All deployer obligations (Art. 26), transparency requirements (Art. 50), conformity assessment, EU database registration — everything in Chapters III-IV must be in place.
  • Conformity assessments take 6-12 months. If you haven't started by now, you're behind.
Article 4 (AI literacy) has been enforceable since February 2, 2025. Every organization using AI must ensure staff have "sufficient AI literacy." This curriculum is evidence of your compliance with that requirement.
1.7 The Digital Omnibus Act ~10 min

The European Commission published the "Digital Omnibus on AI" as COM(2025) 836 on 19 November 2025, proposing amendments to Regulation (EU) 2024/1689 including extensions to the application dates for high-risk AI systems and transitional provisions[src]

What It Proposes

  • Under COM(2025) 836, Annex III high-risk obligations would apply "latest by 2 December 2027" — sixteen months later than the current 2 August 2026 date in Art. 113. The Council general approach (13 March 2026) and the IMCO+LIBE joint committee report (A-10-2026-0073, 18 March 2026) both converge on 2 December 2027 as a fixed date. NOT yet adopted as law[src]
  • Under COM(2025) 836, high-risk obligations for AI embedded in Annex I regulated products would apply "latest by 2 August 2028" — twelve months later than the current 2 August 2027 date in Art. 113(c). NOT yet adopted[src]
  • Additional simplifications for SMEs and reducing overlap with sector-specific regulations (full scope: see the proposal text itself).

Current Status

The Council of the EU adopted a general approach on the Digital Omnibus on AI on 13 March 2026, endorsing fixed replacement dates of 2 December 2027 (standalone high-risk, Annex III) and 2 August 2028 (high-risk embedded in regulated products, Annex I). A general approach is a negotiating position, not law[src]

The European Parliament's IMCO and LIBE committees adopted a joint report on the Digital Omnibus on AI on 18 March 2026, reference A-10-2026-0073 (on file 2025/0359(COD)). Plenary vote and trilogue had not occurred by mid-April 2026 per the EP Legislative Train entry[src]

Should You Wait?

No. Building a compliance strategy around a "maybe" is reckless. The proposal may:

  • Be rejected entirely
  • Be adopted with different provisions
  • Take longer than expected to pass

Even if parts are delayed, the fundamental requirements remain the same — you just get more time. Starting now gives you a head start regardless.

Plan for August 2, 2026. If the Omnibus Act grants more time, consider it a bonus — not a reason to delay.
1.8 AI Literacy Obligation (Art. 4) ~15 min

Providers and deployers must take measures to ensure a sufficient level of AI literacy among staff and any other persons dealing with the operation and use of AI systems on their behalf, having regard to their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used[src]

Key Points

  • Already in force: Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src]. This is not a future obligation — it's current law.
  • Applies to everyone: Both providers and deployers, regardless of risk level. Even minimal-risk AI deployments trigger this.
  • "Sufficient" is context-dependent: The required literacy level depends on the nature of the AI system, the risk it poses, and the person's role.
  • Technical knowledge, experience, education, and training all count — there's no single prescribed format.

How to Comply

  1. Identify who in your organization interacts with AI systems
  2. Assess what level of understanding they need for their role
  3. Provide appropriate training (this curriculum, for example)
  4. Document that training was provided and when
Completing this curriculum and recording your progress is evidence toward your Article 4 AI literacy compliance. Save your completion certificate (print this page when done) as documentation.
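One lightweight way to handle step 4 (documenting training) is a machine-readable training register. The sketch below is a hypothetical format under stated assumptions: Art. 4 does not prescribe any particular record structure, so the field names here are purely illustrative.

```python
from dataclasses import dataclass, asdict
from datetime import date
import csv

@dataclass
class TrainingRecord:
    """One row of an AI literacy training register (hypothetical format)."""
    person: str           # who was trained
    role: str             # their role, justifying the depth of training chosen
    ai_systems_used: str  # which AI systems they interact with
    training: str         # what training they received
    completed_on: date    # when, as evidence of timely compliance

records = [
    TrainingRecord("J. Smith", "Support lead", "GPT-4 chatbot",
                   "EU AI Act curriculum, Modules 1-3", date(2026, 4, 10)),
]

# Persist the register so it can be produced on request.
with open("ai_literacy_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```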
1.9 Interaction with Other EU Regulations ~20 min

The AI Act doesn't exist in isolation. It interacts with several other EU regulations:

Regulation | Overlap with AI Act
GDPR | Data protection impact assessments (Art. 35 GDPR ↔ Art. 26(9) AI Act). Lawful basis for training data. Rights of data subjects in automated decisions (Art. 22 GDPR). ~40% overlap.
Digital Services Act (DSA) | Recommender systems transparency. Content moderation using AI. Systemic risk assessments for very large platforms.
Digital Markets Act (DMA) | Gatekeepers using AI for ranking, advertising, profiling. Interoperability requirements.
NIS2 Directive | Cybersecurity requirements for AI systems in critical infrastructure. Incident reporting obligations overlap.
General Product Safety Regulation | AI embedded in consumer products. Annex I of the AI Act cross-references product safety legislation.
Machinery Regulation | AI in industrial machinery and robots. Safety requirements for autonomous systems.

The "Lex Specialis" Principle

Where sector-specific EU legislation already imposes equivalent or stricter requirements, those take precedence. The AI Act fills gaps — it doesn't override existing safety regulations.

What This Means for Deployers

If you're already GDPR compliant, you have a head start (~40% of AI Act requirements overlap). Your existing DPIA process can be extended for AI. Your data governance practices partially satisfy Art. 10 requirements.

The AI Act adds new requirements ON TOP of existing regulations. GDPR compliance gets you ~40% of the way there. The remaining 60% (human oversight, conformity assessment, supply chain documentation) is genuinely new.
1.10 Module 1 Quiz ~15 min

Test Your Understanding

Answer these without looking back. Then check your answers against the lessons above.

  1. Your US-based SaaS company uses Claude's API to power a customer support chatbot for EU customers. What is your role under the AI Act?
    Think: Are you the one who built the AI, or the one using it?
  2. A company uses AI to screen job applicants' resumes. What risk tier is this?
    Think: Is hiring/HR screening in Annex III?
  3. Which AI Act obligation has been enforceable since February 2, 2025?
    Think: What's in Phase 1 of the timeline?
  4. Your startup fine-tunes an open-source LLM and offers it as a SaaS product. Are you exempt from the AI Act because it's open-source?
    Think: What are the exceptions to the open-source exemption?
  5. Name 3 things the AI Act requires that GDPR does NOT.
    Think: What's in the ~60% that's genuinely new?
Show Answers
  1. Deployer. You use an AI system (Claude) under your own authority. Anthropic is the provider.
  2. High Risk. Employment/HR screening is explicitly listed in Annex III, Category 4.
  3. AI literacy (Art. 4) and prohibited practices (Art. 5). Both have been in force since Feb 2, 2025.
  4. No. The open-source exemption does NOT apply when you place the system on the market under your own name. You're a provider with full obligations.
  5. Human oversight (Art. 14/26), conformity assessment (Art. 43), supply chain documentation (Arts. 13/47), post-market monitoring (Art. 72), incident reporting (Art. 73), EU database registration (Art. 49).

Module 2: Prohibited, High-Risk & GPAI

What's banned, what triggers heavy obligations, and what foundation models must do

2.1 Prohibited Practices (Art. 5) ~25 min

Article 5 bans AI practices that pose an unacceptable risk to fundamental rights. Art. 5 prohibits specific AI practices deemed to pose unacceptable risk to fundamental rights, including subliminal/manipulative/deceptive techniques causing significant harm; exploitation of vulnerabilities based on age, disability, or socio-economic situation; social scoring leading to detrimental treatment; and (subject to narrow exceptions) real-time remote biometric identification in publicly accessible spaces for law enforcement[src]

Application date: Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src]

What's Banned

Practice | What It Means | Why It's Banned
Social scoring | AI that evaluates people based on their social behavior or personal traits, leading to detrimental treatment | Violates human dignity; creates systemic discrimination
Subliminal manipulation | AI that deploys subliminal techniques beyond a person's consciousness to distort behavior, causing significant harm | Undermines autonomy and free will
Exploitation of vulnerabilities | AI targeting specific vulnerabilities (age, disability, social/economic situation) to distort behavior | Preys on those least able to protect themselves
Real-time remote biometric ID | Real-time facial recognition in publicly accessible spaces by law enforcement (with narrow exceptions) | Mass surveillance incompatible with privacy rights
Emotion recognition | AI inferring emotions in workplace and education settings (with exceptions for safety/medical) | Invasive, unreliable, discriminatory
Untargeted facial scraping | Creating facial recognition databases by scraping images from the internet or CCTV | Mass collection without consent violates privacy
Biometric categorization | AI that categorizes individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life/orientation | Creates profiles that enable discrimination
Predictive policing (individual) | AI predicting that a specific person will commit a crime based solely on profiling or personality traits | Presumption of innocence violated

Exceptions

Real-time biometric identification has three narrow exceptions for law enforcement:

  1. Searching for specific victims (kidnapping, trafficking, sexual exploitation)
  2. Preventing specific, substantial, imminent threats to life or terrorist attacks
  3. Identifying suspects of specific serious criminal offences (those carrying prison terms of 4+ years)

Even these require prior judicial authorization and necessity/proportionality assessment.

Penalties for Prohibited Practices

Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher[src]

For SMEs and start-ups, the cap is inverted: each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4, and 5 — not the higher[src]

These are the highest fines in the entire Act.

If your product does ANY of these things — even inadvertently — you must stop immediately. There is no grace period. These prohibitions are already law.
2.2 General-Purpose AI Models (Arts. 51-56) ~25 min

Chapter V of the AI Act creates obligations specifically for General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini, and Llama. Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101[src]

What's a GPAI Model?

A "general-purpose AI model" (GPAI) is an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (Art. 3(63)). A "general-purpose AI model with systemic risk" is a GPAI model meeting the conditions in Art. 51[src] This covers GPT-4, Claude, Gemini, Llama, Mistral, and any model that can be adapted for multiple tasks.

This covers: GPT-4, Claude, Gemini, Llama, Mistral, and any model that can be adapted for multiple tasks.

Two Tiers of GPAI Obligations

All GPAI providers (Article 53) must:

  • Prepare and maintain technical documentation (training methods, data, evaluation)
  • Provide information and documentation to downstream deployers
  • Establish a policy for complying with EU copyright law
  • Publish a sufficiently detailed summary of training data content

GPAI with systemic risk (Article 55) — additional obligations:

  • Perform model evaluations (including adversarial testing/red-teaming)
  • Assess and mitigate systemic risks
  • Track, document, and report serious incidents to the AI Office
  • Ensure adequate cybersecurity protections

Systemic Risk Threshold

A GPAI model is presumed to have systemic risk if its training used more than 10^25 FLOPs (floating-point operations). The Commission can also designate models based on other criteria.
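To see what that threshold means in practice, here is a rough back-of-the-envelope check. It uses the common approximation of 6 × parameters × training tokens for dense-transformer training compute; the model size and token count below are invented for illustration, and the approximation is a community rule of thumb, not a method prescribed by the Act.

```python
# Rough check against the Art. 51 systemic-risk presumption (10^25 FLOPs).
# Uses the common ~6 * N * D approximation for training compute; the figures
# below are illustrative, not actual disclosed values for any real model.
THRESHOLD = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(n_params=70e9, n_tokens=15e12)
print(f"{flops:.2e} FLOPs -> systemic-risk presumption: {flops > THRESHOLD}")
# 6.30e+24 FLOPs -> systemic-risk presumption: False (below the 1e25 line)
```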

Models presumed systemic: GPT-4 and later, potentially Claude 3/4 Opus, Gemini Ultra. The exact list is maintained by the AI Office.

Why This Matters for Deployers

As a deployer building on GPT-4 or Claude:

  • Your provider (OpenAI, Anthropic) must give you documentation under Art. 53(1)(b)
  • If they don't, you can't fully comply with YOUR deployer obligations
  • This is the "documentation gap" — providers may not have this ready yet
GPAI obligations create a supply chain dependency. Your provider must give you specific documentation. If they haven't, send the request. They are legally required to provide it.
Use AIActStack's scanner to generate a documentation request email for your specific AI provider, citing the exact articles they must comply with.
2.3 Annex III Deep-Dive — All 8 High-Risk Categories ~30 min

Annex III lists AI systems considered high-risk. There are 8 categories:

# | Category | Examples | Why High-Risk
1 | Biometrics | Facial recognition (non-prohibited), emotion recognition, biometric categorization | Fundamental rights to privacy, non-discrimination
2 | Critical infrastructure | AI managing road traffic, water/gas/electricity supply, digital infrastructure | Failure can endanger life and public safety
3 | Education & vocational training | Determining access to education, evaluating learning outcomes, monitoring cheating | Shapes life opportunities, potential for bias
4 | Employment & workers | CV screening, hiring decisions, performance monitoring, promotion/termination decisions | Affects livelihoods, high discrimination risk
5 | Essential services | Credit scoring, insurance pricing, emergency services dispatch prioritization | Access to essential resources, discrimination risk
6 | Law enforcement | Risk assessment for crime prediction, lie detection, evidence evaluation | Liberty, presumption of innocence, due process
7 | Migration & border | Asylum application assessment, border surveillance, visa processing | Affects vulnerable populations, fundamental rights
8 | Justice & democracy | AI assisting judicial decisions, election influence analysis | Rule of law, democratic processes

The "Safety Component" Rule

Even if an AI system doesn't fall into these categories directly, it's high-risk if it's a safety component of a product covered by EU product safety legislation (Annex I). This catches AI in medical devices, vehicles, machinery, toys, and more.

Exemptions Within High-Risk

Article 6(3) allows an Annex III system to NOT be classified as high-risk if it:

  • Performs a narrow procedural task
  • Improves the result of a previously completed human activity
  • Detects decision-making patterns without replacing human assessment
  • Performs a preparatory task for an assessment in an Annex III use case

This exemption does NOT apply if the AI system profiles natural persons.

If you use AI anywhere in hiring, credit decisions, or healthcare, assume you're high-risk until proven otherwise. The exemptions in Art. 6(3) are narrow and require documentation to claim.
2.4 Hiring & HR Screening — Why It's High-Risk ~20 min

Category 4 of Annex III covers AI systems intended to be used for recruitment, selection, or evaluation of candidates during work-related contractual relationships.

What's Covered

  • CV/resume screening and ranking
  • Automated interview assessment (video, text, or voice analysis)
  • Candidate matching algorithms
  • Performance evaluation AI
  • Promotion and termination decision support
  • Task allocation based on worker profiling

Why It's High-Risk

Employment AI directly affects people's livelihoods and has documented bias issues:

  • Amazon's hiring AI famously discriminated against women (trained on 10 years of male-dominated resumes)
  • Personality assessment AI has been shown to discriminate by race and disability
  • Video interview analysis can penalize non-native speakers, people with disabilities, or those from different cultural backgrounds

Deployer Obligations for Hiring AI

All high-risk deployer obligations apply (Art. 26), plus:

  • Must inform workers or their representatives that high-risk AI is in use
  • Must perform DPIA before deployment (Art. 26(9))
  • Must implement human oversight — a human must review and can override every AI hiring decision
  • Must retain logs for at least 6 months
  • Public-sector deployers must additionally register their use of the system in the EU database (Art. 49(3))
If you use any AI in hiring — even a simple resume keyword filter powered by GPT — you're likely high-risk. The obligations are substantial: ~100-120 hours to initial compliance.
2.5 Credit & Insurance Scoring ~15 min

Category 5(b) of Annex III covers AI used for creditworthiness assessment and credit scoring, as well as risk assessment and pricing for life and health insurance.

What's Covered

  • Automated credit decisions (loan approval, credit limits)
  • AI-driven risk scoring for insurance underwriting
  • Dynamic pricing based on individual risk profiles
  • Fraud detection that affects access to financial services

Why It's High-Risk

Financial AI determines who can get a loan, a mortgage, or affordable insurance. Bias here creates systemic inequality — entire communities can be redlined by algorithms.

Any AI making or significantly influencing credit or insurance decisions is high-risk. This includes AI that "recommends" to a human reviewer if the recommendation is routinely followed.
2.6 Medical Diagnosis & Healthcare ~15 min

Healthcare AI is regulated through two paths: Annex III (Category 5 includes emergency healthcare patient triage) and Annex I (medical device legislation). AI used for medical diagnosis, treatment recommendation, or surgical assistance faces the heaviest scrutiny.

Double Regulation

Medical AI often falls under BOTH the AI Act AND the EU Medical Device Regulation (MDR 2017/745). The AI Act requirements apply on top of existing medical device requirements — they don't replace them.

Medical AI has the most complex compliance landscape — both AI Act and MDR apply. If you're in this space, you need specialized legal counsel in addition to this curriculum.
2.7 Education & Grading ~10 min

Category 3 of Annex III covers AI in education: determining access to institutions, evaluating learning outcomes, assessing appropriate education levels, and monitoring student behavior during exams (anti-cheating AI).

AI proctoring tools and automated grading systems are high-risk. If your edtech product uses AI to make decisions that affect students' educational paths, prepare for full high-risk compliance.
2.8 Law Enforcement & Migration ~15 min

Categories 6 and 7 of Annex III cover AI in law enforcement (crime prediction, evidence assessment, lie detection, suspect profiling) and migration (asylum processing, border surveillance, visa decisions).

These are the most politically sensitive categories. Law enforcement AI faces additional restrictions beyond standard high-risk requirements, including stricter prohibitions on real-time biometric identification (Art. 5).

If you're building AI for law enforcement or border control, you face the highest compliance burden in the entire Act, plus strict fundamental rights impact assessments.
2.9 Critical Infrastructure ~10 min

Category 2 covers AI as a safety component of critical infrastructure: road traffic management, water/gas/electricity supply, heating systems, and digital infrastructure management.

AI managing infrastructure where failure threatens life or safety is automatically high-risk. This also intersects with NIS2 cybersecurity requirements.
2.10 The "Significant Risk" Threshold ~15 min

Not every AI system in an Annex III domain is automatically high-risk. Article 6(3) provides a narrow exception for AI systems that don't pose a "significant risk of harm."

When You Can Claim the Exception

Your AI system is NOT high-risk if it:

  1. Performs a narrow procedural task (e.g., sorting documents by format, not content)
  2. Improves the result of a previously completed human activity (e.g., grammar check on a hiring manager's written feedback)
  3. Detects patterns without replacing human assessment (e.g., flagging anomalies for human review, not making decisions)
  4. Performs a preparatory task for an Annex III assessment (e.g., formatting data for a human credit analyst)

The Profiling Caveat

These exceptions do NOT apply if the AI system profiles natural persons. Profiling means any form of automated processing of personal data to evaluate personal aspects (performance, behavior, economic situation, health, preferences, interests, reliability, location, movements).

If your AI system profiles people in any way — even to "assist" a human decision-maker — you cannot claim the Art. 6(3) exception. Assume high-risk.
2.11 Module 2 Quiz ~15 min

Classify These Products

For each product, identify: (a) the risk tier, (b) which Annex III category (if high-risk), and (c) whether the Art. 6(3) exception might apply.

  1. A chatbot that answers customer FAQ using GPT-4.
  2. An AI that screens resumes and ranks candidates for a recruiter.
  3. An AI that generates marketing copy from product descriptions.
  4. An AI that predicts which insurance claims are likely fraudulent.
  5. An AI that sorts incoming emails into categories (spam, urgent, normal).
Show Answers
  1. Limited Risk. Customer-facing chatbot requires transparency disclosure (Art. 50) — must tell users they're interacting with AI. Not high-risk since it's not in an Annex III domain.
  2. High Risk. Annex III Category 4 (Employment). Resume screening and ranking is explicitly covered. Art. 6(3) exception does NOT apply because it profiles candidates.
  3. Limited Risk. Content generation requires AI-generated content labeling (Art. 50). Not high-risk — marketing is not in Annex III.
  4. High Risk. Annex III Category 5 (Essential services — insurance). Fraud detection that affects claim outcomes is high-risk. If it only flags for human review without profiling, Art. 6(3) MIGHT apply — but fraud detection typically involves profiling.
  5. Minimal Risk. Email sorting is internal tooling, not customer-facing, not in an Annex III domain. No mandatory requirements. Voluntary codes of conduct apply.

Module 3: Obligations by Role

What each actor must actually do — articles, templates, step-by-step guides

3.1 Provider Obligations Overview (Arts. 8-22) ~25 min

If you develop, train, or place an AI system on the EU market, you are a provider and carry the heaviest compliance burden. Articles 8 through 22 lay out everything you must build, document, and maintain. Here is each article mapped to a concrete action.

Article-by-Article Action Map

Article | Requirement | What You Actually Do
Art. 8 | Compliance with requirements | Design your AI system to meet Arts. 9-15 from the start. Compliance is not a bolt-on — it is a design constraint. Document how each requirement is satisfied.
Art. 9 | Risk management system | Establish a living risk management process: identify risks, analyze severity and likelihood, test mitigations, monitor post-deployment. This is not a one-time document — it updates throughout the system's lifecycle.
Art. 10 | Data governance | Define quality criteria for training, validation, and testing datasets. Document data provenance, preprocessing steps, bias detection, and any gaps. If you use personal data, ensure a lawful basis under GDPR.
Art. 11 | Technical documentation | Produce the full Annex IV documentation package (9 sections — covered in Lesson 3.2). This must be ready before the system is placed on the market and kept up to date.
Art. 12 | Record-keeping (logging) | Build automatic logging into the system. Logs must capture events relevant to identifying risks, facilitating post-market monitoring, and enabling traceability. Log retention must match the system's intended purpose.
Art. 13 | Transparency & information to deployers | Providers must ensure high-risk AI systems are designed to enable deployers to interpret outputs and use them appropriately, including instructions for use containing the information listed in Art. 13(3)[src]
Art. 14 | Human oversight | Design the system so humans can effectively oversee it: understand outputs, detect anomalies, intervene, and stop the system. Build override mechanisms, not just dashboards.
Art. 15 | Accuracy, robustness & cybersecurity | High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle[src]
Art. 16 | Provider obligations (summary) | Providers of high-risk AI systems have the obligations listed in Art. 16, including ensuring their high-risk AI systems are compliant with the requirements, indicating name/address, having a QMS under Art. 17, keeping documentation (Art. 18) and logs (Art. 19), performing conformity assessment (Art. 43), drawing up an EU declaration of conformity (Art. 47), affixing CE marking (Art. 48), registering in the EU database (Art. 49), and taking corrective action when needed (Art. 20)[src]
Art. 17 | Quality management system | Implement a documented QMS covering: compliance strategy, design and development procedures, testing and validation, data management, risk management, post-market monitoring, incident reporting, communication with authorities, and record-keeping.
Art. 18 | Documentation retention | Keep technical documentation and QMS records for 10 years after the AI system is placed on the market. Store them so they are accessible to national authorities on request.
Art. 19 | Automatically generated logs | Retain the logs generated by the AI system (per Art. 12) for at least 6 months, unless longer retention is required by other EU or national law.
Art. 20 | Corrective actions | If the system is non-compliant, take immediate corrective action: fix it, withdraw it, or recall it. Notify the distributor, deployer, and relevant authorities.
Art. 21 | Cooperation with authorities | Provide any information or documentation a national authority requests. Demonstrate compliance on demand. This means your documentation must actually be accessible, not buried in a developer's laptop.
Art. 22 | Authorised representatives | If you are outside the EU, appoint an authorised representative established in the EU before placing your system on the market. Give them a written mandate specifying the obligations they fulfill on your behalf.

Practical Guidance: Where to Start

  1. Start with Art. 9 (risk management) — it shapes every other requirement. Your risk assessment determines your data governance needs, your testing strategy, and your documentation scope.
  2. Build Art. 12 (logging) into your architecture early — retrofitting logging is expensive. Define your log schema before you build the system (a minimal sketch follows this list).
  3. Write Art. 13 (deployer instructions) as if your downstream customer knows nothing about AI — regulators will judge whether a deployer could reasonably comply based on what you gave them.
  4. Treat Art. 17 (QMS) as the backbone — a quality management system is not a document. It is the organizational structure that ensures everything else happens consistently.
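As promised in point 2, here is a minimal sketch of what "define your log schema first" can look like. The field names are illustrative assumptions: Art. 12 specifies what logging must enable (traceability, risk identification, post-market monitoring), not a concrete schema.

```python
# Minimal sketch of an Art. 12 event-log schema, defined before build time.
# Field names are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionLogEntry:
    timestamp: str        # ISO 8601, when the output was produced
    system_version: str   # model + software version, for traceability
    session_id: str       # links related events together
    input_reference: str  # pointer to the input data, not the raw data itself
    output_summary: str   # what the system produced (e.g., score, label)
    confidence: float     # model confidence, if available
    human_override: bool  # whether a human overseer changed the outcome

# Retention: Art. 19 (providers) and Art. 26(6) (deployers) both set a
# six-month floor, so the storage layer must keep entries at least that long.
```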

Concrete Example

Imagine you are building a resume-screening AI. Before placing it on the market, you must: conduct a risk assessment identifying bias risks in hiring (Art. 9), document your training data sources and how you checked for demographic bias (Art. 10), produce the full Annex IV technical documentation package (Art. 11), build logging that records every screening decision and the factors that drove it (Art. 12), write a deployer instruction manual explaining the system's accuracy rates by demographic group and when human review is required (Art. 13), design an interface that lets HR managers override or reject any AI recommendation (Art. 14), and test the system against adversarial resumes designed to game the algorithm (Art. 15).

Common Mistakes

  • Treating compliance as a final step. Arts. 8-15 are design requirements. If you build first and document later, you will discover gaps that require re-engineering.
  • Ignoring Art. 13 (deployer instructions). Your deployers cannot comply with Art. 26 without the information you owe them. If your documentation is vague, they are non-compliant — and they will point the finger at you.
  • Confusing Art. 17 (QMS) with Art. 11 (technical docs). The QMS governs your processes. Technical documentation describes your system. You need both, and they serve different purposes.
  • Forgetting Art. 22 (authorised representative). Non-EU providers must have an EU-based representative before market placement. This is not optional and cannot be done retroactively.
Provider obligations are not a checklist you complete once. They form an integrated system: risk management drives data governance, which shapes documentation, which informs deployer instructions. Build them together from day one.
3.2 Technical Documentation (Art. 11 + Annex IV) ~30 min

Article 11 requires providers to draw up technical documentation before placing a high-risk AI system on the market. Annex IV specifies exactly what that documentation must contain: 9 mandatory sections. Think of this as the "product dossier" — the single source of truth that proves your system is compliant.

The 9 Required Sections

Section 1: General Description of the AI System

  • The system's intended purpose, the name of the provider, and the system version
  • How the AI system interacts with hardware, software, or other systems it is embedded in
  • The versions of relevant software or firmware and any requirements related to version updates
  • A description of the forms in which the system is placed on the market (installed on device, API, SaaS, etc.)
  • The hardware the system is intended to run on
  • Practical tip: Write this so that a regulator with no technical background can understand what the system does and where it fits in the value chain.

Section 2: Detailed Description of Elements and Development Process

  • Methods and steps used to develop the system, including use of pre-trained systems or third-party tools
  • Design specifications: general logic, algorithms, key design choices, classification methodologies, what the system optimizes for, and the rationale behind those decisions
  • System architecture: how software components interact and feed into each other
  • Computational resources used for development, training, testing, and validation
  • Description of training data: data collection methods, data provenance, scope, characteristics, availability, quantity, and any demographic/geographic/behavioral properties
  • Assessment of training data for biases that could lead to discrimination
  • Description of validation and testing data, and selection criteria
  • Practical tip: This is the largest section. Use architecture diagrams, data flow charts, and training pipeline documentation. Do not write prose when a diagram communicates better.

Section 3: Monitoring, Functioning, and Control

  • Description of the system's capabilities and limitations in performance, including degrees of accuracy for specific persons or groups the system is intended for
  • Foreseeable unintended outcomes and sources of risks to health, safety, and fundamental rights
  • Human oversight measures built into the system (Art. 14) — who can intervene, how, and with what tools
  • Specifications for input data: what data the system expects, in what format, at what quality level
  • Practical tip: Be honest about limitations. Regulators will not punish you for having limitations — they will punish you for hiding them.

Section 4: Risk Management System

  • A description of the risk management system applied per Art. 9 (covered in detail in Lesson 3.3)
  • Residual risks after mitigation: what risks remain and why they are acceptable
  • Practical tip: Cross-reference your Art. 9 risk management documentation here. Do not duplicate — reference and summarize.

Section 5: Changes Throughout the Lifecycle

  • A description of any change made to the system after initial market placement, including software updates, model retraining, performance drift corrections, and configuration changes
  • Pre-determined changes included in the initial conformity assessment and technical documentation
  • Practical tip: Create a change log template from day one. Every model version, every retraining run, every prompt adjustment to a high-risk system needs to be logged here. Retroactively reconstructing this is nearly impossible.
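A change-log entry can be as simple as a structured record per change. The structure below is a hypothetical sketch; the field names are assumptions, since Annex IV requires the changes to be described but does not mandate a format.

```python
# Hypothetical change-log entry for Annex IV Section 5 (lifecycle changes).
# Keep one entry per model version, retraining run, or configuration change.
change_log_entry = {
    "version": "2.3.1",
    "date": "2026-03-02",
    "change_type": "retraining",       # e.g., retraining, config, software
    "description": "Quarterly retrain on Q1 2026 data",
    "pre_determined": True,            # covered by the initial conformity assessment?
    "performance_delta": "+0.4% accuracy, subgroup gaps unchanged",
    "approved_by": "QMS change board", # ties the entry back to the Art. 17 QMS
}
```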

Section 6: Harmonised Standards and Common Specifications Applied

  • List of harmonised standards (CEN/CENELEC) or common specifications that were applied in full or in part
  • Where standards are partially applied, specify which parts
  • If no harmonised standards were used, describe the alternative means used to meet Arts. 9-15 requirements
  • Practical tip: As of early 2026, harmonised standards for the AI Act are still in development by CEN/CENELEC. Document which draft standards you reference and be prepared to update when final standards are published.

Section 7: EU Declaration of Conformity

  • A copy of the EU declaration of conformity issued under Art. 47
  • This is a formal document stating that the system meets all applicable requirements
  • Practical tip: The declaration references your technical documentation. If the documentation is incomplete, the declaration is invalid. Do not sign the declaration until all other sections are complete.

Section 8: Performance and Accuracy Metrics

  • Description of the system's performance: accuracy, robustness, and cybersecurity levels (per Art. 15)
  • Metrics used, testing methodology, known limitations, and performance across relevant subgroups (demographic, geographic, etc.)
  • Declaration of the level of accuracy, along with accuracy metrics per Art. 15(2)
  • Practical tip: Do not only report aggregate accuracy. Break it down by the groups the system affects. A hiring tool with 95% overall accuracy but 70% accuracy for a protected group will fail this requirement.
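A minimal sketch of the subgroup breakdown this section asks for: compute accuracy per affected group rather than one aggregate number. The group labels and data are invented for illustration.

```python
# Sketch of per-subgroup accuracy reporting for Section 8. Data is invented.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += predicted == actual
    return {g: correct[g] / total[g] for g in total}

records = [("under_25", 1, 1), ("under_25", 0, 1), ("25_plus", 1, 1), ("25_plus", 0, 0)]
for group, acc in subgroup_accuracy(records).items():
    print(f"{group}: {acc:.0%}")  # report the gap between groups, not just the mean
```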

Section 9: Post-Market Monitoring Plan

  • A detailed description of the system in place to evaluate the AI system's performance in the post-market phase, in accordance with Art. 72
  • This includes the post-market monitoring plan referred to in Art. 72(3): what performance and incident data you collect from deployed systems, how you analyze it, and how findings feed back into risk management
  • Note: energy-consumption reporting belongs to the GPAI model documentation requirements (Annex XI), not to Annex IV
  • Practical tip: Connect this section to your Art. 9 risk management process. Post-market data is what keeps your risk assessment alive after launch.
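A minimal sketch of a post-market monitoring check, assuming a declared baseline from Section 8 and a monthly labelled production sample. The threshold, tolerance, and field names are illustrative assumptions, not values the Act prescribes.

```python
# Sketch of a post-market monitoring check (Art. 72): compare live
# performance against the declared baseline and flag drift for the
# Art. 9 risk review. All numbers below are illustrative.
DECLARED_ACCURACY = 0.89   # from the technical documentation, Section 8
DRIFT_TOLERANCE = 0.02

def review_monthly_sample(labelled_outcomes):
    """labelled_outcomes: list of (predicted, actual) pairs from production."""
    accuracy = sum(p == a for p, a in labelled_outcomes) / len(labelled_outcomes)
    if accuracy < DECLARED_ACCURACY - DRIFT_TOLERANCE:
        # Feed back into the risk management system and update the docs.
        return f"DRIFT: live accuracy {accuracy:.2%} below declared baseline"
    return f"OK: live accuracy {accuracy:.2%}"

print(review_monthly_sample([(1, 1), (0, 1), (1, 1), (0, 0), (1, 1)]))
```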

Concrete Example

A provider of an AI-powered credit scoring system creates an Annex IV package. Section 1 describes the system as a credit risk classifier delivered via API to European banks. Section 2 details the XGBoost model architecture, training data sourced from 3 EU credit bureaus covering 12 million records, and bias testing across age, gender, and nationality. Section 3 documents that the system achieves 89% accuracy overall but notes a 7% performance gap for applicants under 25 with thin credit histories. Section 4 references the Art. 9 risk management plan identifying age-based discrimination as a high-severity risk with mitigations (age-blind features, post-hoc fairness adjustments). Section 8 reports F1 scores broken down by 6 demographic subgroups with confidence intervals.

Common Mistakes

  • Writing documentation after development. Annex IV requires details about design choices and their rationale. If you do not document these decisions as you make them, you cannot reconstruct the reasoning later.
  • Treating it as a one-time deliverable. Art. 11 requires documentation to be "kept up to date." Every significant change triggers an update obligation (Section 5).
  • Reporting only aggregate performance metrics. Section 8 explicitly requires subgroup analysis. Aggregate accuracy that masks disparate impact is a compliance failure.
  • Omitting third-party components. If your system uses a pre-trained foundation model, Section 2 requires you to describe it: what model, from which provider, what version, and how it fits into your system.
Annex IV is not optional documentation — it is the legal evidence that your system complies. A regulator's first request will be to see this package. If it does not exist, does not cover all 9 sections, or is out of date, you are non-compliant regardless of how good your AI actually is.
Use AIActStack's doc generator to create an Annex IV Technical Documentation template pre-populated with your system's details.
3.3 Risk Management System (Art. 9) ~25 min

Providers of high-risk AI systems must establish, implement, document, and maintain a risk-management system as a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system[src]

What the Article Requires

Art. 9 mandates a risk management system that consists of a continuous iterative process planned and run throughout the AI system's lifecycle. It requires regular, systematic review and updating. The system must include the following phases:

  1. Risk Identification and Analysis (Art. 9(2)(a)): Identify and analyze the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose.
  2. Risk Estimation and Evaluation (Art. 9(2)(b)): Estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
  3. Evaluation of Other Risks (Art. 9(2)(c)): Evaluate other possible risks based on analysis of the data gathered from the post-market monitoring system (Art. 72).
  4. Adoption of Suitable Risk Management Measures (Art. 9(2)(d)): Adopt appropriate and targeted risk management measures to address the identified risks.

Step-by-Step Practical Guidance

Step 1: Risk Identification

List every risk your AI system poses across three categories: (a) risks to health and safety, (b) risks to fundamental rights (discrimination, privacy, dignity, effective remedy), and (c) risks from reasonably foreseeable misuse. For each risk, document: the risk description, who is affected, the potential severity, and the conditions under which it could occur.

Step 2: Risk Analysis

For each identified risk, assess: likelihood (how probable is it?), severity (if it occurs, how bad is the impact?), and reversibility (can the damage be undone?). Use a structured framework — a simple likelihood x severity matrix works. Art. 9(5) specifically requires that residual risks are communicated to deployers.
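A simple scoring sketch for that matrix follows. The scales and the acceptability cut-off are illustrative assumptions: Art. 9 requires a structured method but does not prescribe this one.

```python
# Likelihood x severity scoring sketch for Step 2. Scales and the
# cut-off below are illustrative, not prescribed by the Act.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

def risk_score(likelihood: str, severity: str) -> int:
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

risks = {
    "gender bias in training data": ("possible", "critical"),
    "recruiter over-reliance on scores": ("likely", "serious"),
}
for name, (lik, sev) in risks.items():
    score = risk_score(lik, sev)
    action = "mitigate before release" if score >= 4 else "document and monitor"
    print(f"{name}: score {score} -> {action}")
```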

Step 3: Risk Evaluation

Determine which risks are acceptable, which require mitigation, and which are unacceptable. Art. 9(4) requires that risk management measures give due consideration to the effects and possible interactions resulting from the combined application of the requirements in Chapter III, Section 2. In other words — your risk mitigations must not create new compliance problems.

Step 4: Testing

Art. 9(6)-(8) requires testing to ensure the system works as intended and meets the requirements. Testing must happen before market placement. You must define metrics and probabilistic thresholds appropriate to the system's intended purpose. For systems that continue to learn after deployment, testing must address the risk of biased outputs being fed back as input (feedback loops).

Step 5: Monitoring and Updating

After deployment, feed post-market monitoring data (Art. 72) back into your risk assessment. If a new risk emerges or an existing risk changes, update the risk management measures. Document every update.

Concrete Example

A provider builds an AI system for automated job-applicant screening. During risk identification, they document: (1) risk of gender discrimination due to historical bias in training data (severity high, likelihood medium); (2) risk that recruiters over-rely on AI scores and skip manual review (severity high, likelihood high); (3) risk of misuse to screen candidates on protected characteristics like ethnicity (severity critical, likelihood low). For risk (1), the mitigation is demographic parity testing before deployment and ongoing bias monitoring. For risk (2), the mitigation is designing the UI to require the recruiter to view the applicant's full profile before seeing the AI score. For risk (3), the mitigation is technical controls that prevent protected-attribute inputs. Residual risks are documented: even after mitigation, screening accuracy for candidates with non-traditional career paths remains 12% lower, and this limitation is disclosed to deployers per Art. 9(5).

Common Mistakes

  • Treating it as a document rather than a process. The word "system" in Art. 9 is deliberate. This is an ongoing process with assigned owners, scheduled reviews, and update triggers — not a PDF you write once.
  • Only considering technical risks. Art. 9 explicitly covers risks to fundamental rights: discrimination, privacy, loss of human autonomy. A risk assessment that only addresses model accuracy and uptime misses half the requirement.
  • Ignoring foreseeable misuse. You must consider not only how the system is intended to be used, but how it could reasonably be misused. If your hiring AI could be used to screen candidates by age, that is a foreseeable misuse you must address.
  • Not connecting to post-market monitoring. Art. 9(2)(c) explicitly requires risks to be evaluated based on post-market monitoring data. If your risk management system and your monitoring system are not connected, you fail this requirement.
  • Not disclosing residual risks. Art. 9(5) requires that residual risks be communicated to deployers. Hiding known limitations is not just bad practice — it is a legal violation.
The risk management system is the foundation of AI Act compliance. It feeds into your technical documentation (Annex IV Section 4), shapes your testing strategy, determines your human oversight requirements, and drives your post-market monitoring plan. Get this right and everything else follows. Get it wrong and no amount of documentation will save you.
Use AIActStack's doc generator to create a Risk Management Plan template for your specific AI system.
3.4 Data Governance (Art. 10) ~20 min

Training, validation, and testing data sets used for high-risk AI systems must meet specific quality criteria regarding relevance, representativeness, errors, completeness, data-governance practices, and bias mitigation measures[src]

Requirements

  • Relevance & representativeness: Data must represent the population the AI will serve
  • Free of errors: Data sets must be examined for errors and corrected to the best extent possible in view of the intended purpose (Art. 10(3))
  • Complete: Must account for geographic, contextual, and behavioral settings
  • Bias-tested: Art. 10(2)(f) requires examining for biases that could cause discrimination

Special Category Data

Art. 10(5) allows processing sensitive data (race, health, political opinions) for bias detection — but ONLY to the extent strictly necessary, with appropriate safeguards.

As a deployer, you don't control training data — but you must request documentation about it from your provider. Ask: what data, what bias testing, what demographics are underrepresented.
3.5 Conformity Assessment (Art. 43) ~25 min

Providers of high-risk AI systems must carry out a conformity assessment before placing the system on the market or putting it into service (Art. 43), using either the internal-control procedure in Annex VI or the notified-body procedure in Annex VII depending on the system type and whether harmonised standards were applied[src]

Two Paths

  • Internal control (Annex VI): Provider self-assesses compliance. Applies to most Annex III systems.
  • Third-party assessment (Annex VII): Independent notified body evaluates. Required for biometric systems and some critical infrastructure AI.

What the Assessment Covers

  1. Quality management system review
  2. Technical documentation completeness check
  3. Risk management system adequacy
  4. Data governance compliance
  5. Testing results review
  6. CE marking authorization

Timeline

Full conformity assessment takes 6-12 months. If you haven't started, you're behind for the August 2026 deadline.

Even deployers need to verify their provider has completed conformity assessment. Ask for the EU Declaration of Conformity (Art. 47) and CE marking documentation.
3.6 EU Database Registration (Art. 49) ~15 min

Providers of high-risk AI systems listed in Annex III (except law-enforcement systems under Annex III point 1) must register themselves and their systems in the EU database established under Art. 71 before placing the system on the market or putting it into service (Art. 49)[src]

Who Registers

  • Providers: Register before placing on market (or for public authority systems, after deployment)
  • Deployers that are public authorities or Union bodies (or act on their behalf): must also register their use of Annex III high-risk systems (Art. 49(3))

What to Submit (Annex VIII)

  • Provider name, address, contact details
  • AI system name and version
  • Intended purpose description
  • Risk classification and Annex III category
  • Conformity assessment status
  • Member states where deployed
  • URL to instructions for use
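The Annex VIII data points map naturally onto a structured submission. A sketch of what such a payload could contain; every field name here is invented for illustration, since the database defines its own submission forms:

```python
# Illustrative only: the Annex VIII data points as a registration payload.
registration = {
    "provider": {"name": "...", "address": "...", "contact": "..."},
    "system": {"name": "...", "version": "..."},
    "intended_purpose": "...",
    "risk_classification": {"high_risk": True, "annex_iii_category": 4},
    "conformity_assessment_status": "completed (Annex VI internal control)",
    "member_states_deployed": ["DE", "FR"],
    "instructions_for_use_url": "https://example.com/instructions",
}
```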

The database is publicly accessible — anyone can look up registered AI systems, except those in the restricted, non-public section for law-enforcement and migration systems. This creates transparency and enables market surveillance.

Registration is a concrete, dated action item. Calendar it: register your high-risk AI system in the EU database before August 2, 2026.
3.7 Deployer Obligations Overview (Art. 26) ~25 min

Deployers of high-risk AI systems must: take appropriate technical and organisational measures to use the system per instructions (Art. 26(1)); assign human oversight to natural persons with necessary competence, training, and authority (Art. 26(2)); ensure input data is relevant and sufficiently representative where they have control (Art. 26(4)); monitor operation, suspend use where the system presents a risk, and inform the provider, distributor, and authorities (Art. 26(5)); keep automatically generated logs for at least 6 months unless otherwise required (Art. 26(6)); inform workers' representatives and affected workers before deploying in the workplace (Art. 26(7)); comply with EU database registration where the deployer is a public authority (Art. 26(8)); and carry out a data protection impact assessment (DPIA) where required (Art. 26(9))[src]

Article 26 is the single most important article for deployers of high-risk AI systems — and most SaaS companies using third-party AI for regulated purposes are deployers. It contains 12 paragraphs, and nearly every one creates a specific obligation. Here is every paragraph mapped to a concrete action.

Paragraph-by-Paragraph Action Map

| Para. | Obligation | What You Actually Do |
|---|---|---|
| 26(1) | Use in accordance with instructions | Read and follow the provider's instructions for use. If the provider says the system is for customer support, do not use it for credit scoring. Using a system outside its intended purpose can reclassify you as a provider with full provider obligations (Art. 25). |
| 26(2) | Human oversight | Assign competent individuals with the authority, training, and resources to effectively oversee the AI system. These people must understand the system's capabilities, be able to interpret outputs, and be empowered to override or stop the system. Covered in detail in Lesson 3.8. |
| 26(3) | Savings clause | No separate action item: paragraphs (1) and (2) apply without prejudice to your other legal obligations, and you remain free to organise your own resources to implement the provider's oversight measures. |
| 26(4) | Input data relevance | To the extent you control the input data, ensure it is relevant and sufficiently representative for the system's intended purpose. If the provider designed the system for English-language inputs and you feed it German text, you are violating this paragraph. |
| 26(5) | Monitoring and reporting | Monitor the system's operation per the instructions for use. If you have reason to believe the system presents a risk per Art. 79(1), inform the provider or distributor and the market surveillance authority, and suspend use. If you identify a serious incident, report it immediately per Art. 73. |
| 26(6) | Log retention | Keep the automatically generated logs under your control for a period appropriate to the system's intended purpose — at least 6 months unless EU or national law requires otherwise. Store logs securely and ensure they are accessible to authorities on request. |
| 26(7) | Workplace information | If you deploy a high-risk AI system in the workplace, inform workers' representatives and affected workers before putting it into service. This is not optional — it is an active disclosure requirement before deployment. |
| 26(8) | Registration (public authorities) | If you are a public authority or body (or act on behalf of one), comply with the Art. 49 registration obligations and verify the system appears in the EU database. If it is not registered, do not use it, and inform the provider or distributor. |
| 26(9) | Data Protection Impact Assessment | Before putting a high-risk AI system into use, perform a DPIA under GDPR Art. 35 using the information the provider supplies under Art. 13. You may use the DPIA you already have and extend it with AI-specific considerations. Covered in detail in Lesson 3.9. |
| 26(10) | Post-remote biometric identification | Law-enforcement deployers only: using a high-risk system for post-remote biometric identification requires prior judicial or administrative authorisation. If this does not describe your use case, no action is needed. |
| 26(11) | Inform affected persons | When making or assisting decisions about natural persons with a high-risk AI system, inform those persons that they are subject to the system. For hiring tools: candidates must be told the AI is involved. For credit scoring: applicants must be informed. |
| 26(12) | Cooperation with authorities | Cooperate with national competent authorities, including by providing access to the logs retained under 26(6) and any other information needed to assess compliance. You cannot refuse a regulator's request. |

Note: two closely related duties live outside Art. 26. The fundamental rights impact assessment for public-sector deployers (and certain private providers of public services) is Art. 27, and the affected person's right to an explanation of individual decisions is Art. 86. And under Art. 25, retraining the model, rebranding it under your own name, or substantially changing its purpose turns you into a provider with full provider obligations.

Step-by-Step: Deployer Compliance Checklist

  1. Obtain and read the provider's instructions for use. If they have not provided them, request them formally under Art. 13. You cannot comply with 26(1) without these instructions.
  2. Designate human oversight personnel. Assign named individuals. Document their training, authority level, and escalation procedures.
  3. Validate your input data. Confirm the data you feed the system matches the provider's specified requirements for format, quality, and scope.
  4. Set up monitoring. Implement a process to watch the system's outputs for anomalies, drift, or unexpected behavior. Define thresholds that trigger investigation.
  5. Configure log retention. Ensure automatically generated logs are stored for at least 6 months with adequate access controls.
  6. Draft disclosure notices. Prepare the notifications for workers and their representatives (26(7)) and for affected individuals (26(11)).
  7. Complete the DPIA. Extend your existing GDPR DPIA or create a new one addressing AI-specific risks.
  8. Document everything. Create a deployer compliance file that evidences each obligation is met — a regulator will ask for this. (A minimal tracker sketch follows this list.)
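A minimal tracker sketch for that compliance file, assuming one record per obligation with an accountable owner and a pointer to the evidence (structure and names are illustrative, not a prescribed format):

```python
# Hypothetical deployer compliance file: one row per Art. 26 obligation.
from dataclasses import dataclass

@dataclass
class Obligation:
    paragraph: str      # e.g. "Art. 26(2)"
    action: str         # what must be done
    owner: str          # accountable person or role
    done: bool = False
    evidence: str = ""  # link to the document proving compliance

compliance_file = [
    Obligation("Art. 26(1)", "Use per provider instructions", "Product lead"),
    Obligation("Art. 26(2)", "Assign trained human oversight", "Compliance"),
    Obligation("Art. 26(4)", "Validate input data relevance", "Data team"),
    Obligation("Art. 26(6)", "Retain logs for at least 6 months", "Platform"),
    Obligation("Art. 26(7)", "Notify workers before deployment", "HR"),
    Obligation("Art. 26(9)", "Complete and maintain the DPIA", "DPO"),
]

open_items = [o for o in compliance_file if not o.done]
```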

Concrete Example

A fintech startup uses a third-party AI model to score loan applications for EU customers. Under Art. 26, they must: follow the provider's instructions and only use the model for credit scoring, not fraud detection (26(1)); assign a trained credit risk officer as the human overseer with authority to override any AI recommendation (26(2)); ensure applicant data fed into the model matches the provider's specified data schema (26(4)); implement dashboards monitoring approval rates by demographic group and flag statistical deviations (26(5)); store all scoring logs for at least 6 months in an encrypted database (26(6)); notify loan applicants before submission that AI will be used in evaluating their application (26(11)); complete a DPIA addressing algorithmic discrimination risks (26(9)); and ensure any rejected applicant can receive an explanation of how the AI contributed to the denial (Art. 86).

Common Mistakes

  • Assuming the provider handles everything. Art. 26 places specific, non-delegable obligations on deployers. The provider cannot perform human oversight for you, retain your logs, or notify your users.
  • Deploying without reading the instructions. 26(1) requires use "in accordance with instructions." If the provider's documentation says "not for use in employment decisions" and you use it for hiring, you are in violation — even if the AI works perfectly.
  • Treating log retention as optional. 26(6) and 26(12) together mean authorities can demand your logs. If you did not retain them, there is no defense.
  • Forgetting worker notification (26(7)). This is the most commonly overlooked paragraph. If you use AI in internal HR processes, your own employees must be informed before the system is deployed.
Art. 26 is the deployer's constitution. If you use third-party AI in a high-risk context, every paragraph creates a specific, actionable obligation. Print this table, assign an owner to each row, and track completion. This is your compliance backbone.
3.8 Human Oversight Implementation ~25 min

Human oversight is one of the defining requirements of the EU AI Act. Article 14 tells providers what oversight capabilities to build into the system. Art. 26(2) tells deployers to assign the actual humans and give them the authority to act. Together, they create a chain: the provider builds the controls, the deployer uses them.

What Art. 14 Requires (Provider Side)

The provider must design the AI system so that it can be effectively overseen by natural persons during use. Specifically, the system must include measures that allow the human overseer to:

  • Fully understand the system's capacities and limitations and be able to properly monitor its operation (Art. 14(4)(a))
  • Remain aware of automation bias — the tendency to over-rely on AI outputs — and guard against it (Art. 14(4)(b))
  • Correctly interpret the system's output, taking into account the system's characteristics and the available interpretation tools (Art. 14(4)(c))
  • Decide not to use the system in any particular situation, override the output, or reverse a decision (Art. 14(4)(d))
  • Interrupt or stop the system using a "stop" button or similar procedure (Art. 14(4)(e))

What "Competent Individuals" Means (Art. 26(2))

Art. 26(2) does not use the word "competent" casually. The deployer must ensure that the individuals assigned to human oversight have:

  • Relevant competence: They understand the AI system they are overseeing — what it does, how it works at a functional level, what its known limitations are.
  • Training: They have received training appropriate to the task. For a hiring AI, this means understanding both the AI system and employment discrimination law. For a medical AI, this means clinical expertise combined with AI literacy.
  • Authority: They have the organizational authority to override, suspend, or stop the AI system. A junior analyst who can see the dashboard but cannot override a decision does not satisfy this requirement.
  • Resources: They have the time, tools, and support to actually perform oversight. Assigning oversight to someone who is already overloaded with other duties is a paper compliance exercise, not real oversight.

Step-by-Step: Setting Up an Oversight Process

  1. Identify oversight roles. For each high-risk AI system, designate specific individuals (by name or role) as human overseers. Document who they are and what system they oversee.
  2. Define competency requirements. Write a competency profile for the oversight role: what knowledge is needed (AI system specifics, domain expertise, legal requirements), what training is required, and how competency is verified.
  3. Deliver and document training. Train overseers on: the system's intended purpose and limitations, how to interpret outputs, what automation bias looks like, and when and how to intervene. Record training completion and schedule refreshers.
  4. Build intervention procedures (SOPs). Write standard operating procedures for: (a) routine monitoring — what to check, how often, what constitutes "normal"; (b) escalation — when to escalate an AI output for review; (c) override — how to override an AI decision and what documentation is required; (d) shutdown — when and how to stop the system entirely.
  5. Implement technical controls. Ensure the AI system exposes the interfaces needed: a dashboard showing system performance and decisions, an override mechanism that logs who overrode what and why, and a stop/suspend capability. If the provider's system does not offer these, request them under Art. 13.
  6. Audit and improve. Review the oversight process periodically. Track metrics: how often are AI decisions reviewed? How often are they overridden? What is the false positive/negative rate of overridden decisions? Use this data to improve both the AI system and the oversight process.

Concrete Example

A recruitment SaaS company deploys an AI-powered candidate screening tool. They designate senior HR business partners as human overseers. Each overseer completes a mandatory 4-hour training covering: how the screening model ranks candidates, known demographic performance gaps documented by the provider, what automation bias looks like in hiring (e.g., anchoring to the AI score rather than reading the full application), and the override procedure. The SOP requires every AI-recommended "reject" to be reviewed by a human before the candidate is notified. The system provides a dashboard showing screening outcomes by gender and ethnicity. If demographic disparity exceeds a defined threshold, the overseer must suspend the AI system and escalate to the compliance officer. Every override is logged with the overseer's name, the original AI recommendation, the human decision, and the rationale.
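A sketch of what one such override record could look like, assuming an append-only store; all field names and values are illustrative:

```python
# Hypothetical override audit record: who overrode which AI
# recommendation, when, and why. Records are appended, never edited.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    system: str             # AI system name and version
    overseer: str           # named human overseer (Art. 26(2))
    ai_recommendation: str  # e.g. "reject"
    human_decision: str     # e.g. "advance to interview"
    rationale: str          # free-text justification, always required
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_trail: list[OverrideRecord] = []
audit_trail.append(OverrideRecord(
    system="screening-model v2.3 (hypothetical)",
    overseer="J. Mueller",
    ai_recommendation="reject",
    human_decision="advance to interview",
    rationale="AI under-weighted a non-linear but relevant career history",
))
```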

Common Mistakes

  • Nominal oversight. Assigning someone as "human overseer" on paper without giving them training, tools, or authority. This is the number one compliance failure regulators will look for. The test is: can this person actually intervene effectively?
  • Confusing monitoring with oversight. Monitoring means watching dashboards. Oversight means having the power to act. Art. 14(4)(d) specifically requires the ability to override or reverse decisions — a read-only dashboard is not oversight.
  • Not addressing automation bias. Art. 14(4)(b) specifically calls out automation bias. If your overseers simply rubber-stamp AI outputs 99% of the time, your oversight process is not effective. Design the process to force genuine engagement — for example, requiring the human to form an independent judgment before seeing the AI output.
  • No documentation of override decisions. If an overseer overrides the AI, document why. This creates the audit trail regulators need and helps improve the system over time.
  • Single point of failure. Having one human overseer with no backup. What happens when they are on leave? Oversight must be continuously available whenever the system is in use.
Human oversight is not a checkbox — it is an operational capability. The test is simple: if a regulator asks "show me how a human can override this AI system right now," can you demonstrate it? If you cannot, you are not compliant with Art. 14 or Art. 26(2).
3.9 DPIA for AI (Art. 26(9)) ~20 min

Art. 26(9) requires deployers of high-risk AI systems to carry out a Data Protection Impact Assessment (DPIA) under GDPR Art. 35 before putting the system into use. The good news: if you already have a GDPR DPIA for your processing activity, you do not start from scratch. The AI Act requires you to extend your existing DPIA with AI-specific considerations.

What the Article Requires

Before deploying a high-risk AI system that processes personal data, you must assess the impact on data subjects' rights and freedoms. Art. 26(9) explicitly states that deployers shall use the information provided by the provider under Art. 13 to comply with this obligation. This means the DPIA must incorporate the provider's documentation about the system's capabilities, limitations, and risks.

Section-by-Section: Extending a GDPR DPIA for AI

1. Description of Processing (GDPR Art. 35(7)(a))

Your existing DPIA describes the processing activity. Extend it with: the specific AI system used (name, version, provider), the system's intended purpose as documented by the provider, how AI outputs are used in your decision-making process, and what data flows into and out of the AI system. Be specific about whether the AI makes autonomous decisions or provides recommendations to humans.

2. Necessity and Proportionality (GDPR Art. 35(7)(b))

Address why AI is necessary for this processing. Could the same outcome be achieved without AI, or with a less intrusive AI approach? Document your rationale: does the AI provide a meaningful improvement in accuracy, speed, or consistency that justifies its deployment? This is where you demonstrate that deploying a high-risk AI system is proportionate to the goal.

3. Risks to Rights and Freedoms (GDPR Art. 35(7)(c))

This is where the AI Act extension is most significant. Beyond standard GDPR risks (data breach, unauthorized access), you must now assess:

  • Algorithmic discrimination: Does the AI system perform differently for different demographic groups? Use the provider's accuracy metrics by subgroup (which they must supply under Art. 13).
  • Automation bias: Risk that human overseers defer to the AI without critical evaluation, leading to unjust outcomes.
  • Opacity: Can you explain to data subjects how the AI contributed to a decision about them? If the system is a black box, this is a risk.
  • Feedback loops: If AI outputs influence future training data, there is a risk of amplifying existing biases.
  • Profiling and automated decision-making: Under GDPR Art. 22, individuals have the right not to be subject to purely automated decisions with legal effects. How does your AI deployment interact with this right?

4. Mitigation Measures (GDPR Art. 35(7)(d))

For each risk identified in section 3, document specific mitigations:

  • Human oversight measures (cross-reference your Art. 14/26(2) implementation from Lesson 3.8)
  • Bias monitoring and thresholds that trigger review
  • Transparency notices to data subjects (cross-reference Art. 50 implementation)
  • Data minimization: only feed the AI the personal data it needs
  • Right-to-explanation procedures: how data subjects can obtain a meaningful explanation of AI-assisted decisions
  • Appeal/contestation procedures: how data subjects can challenge an AI-influenced decision

5. AI-Specific Additions (New for AI Act)

Add a dedicated section that does not exist in a standard GDPR DPIA (a structured sketch follows this list):

  • Provider documentation review: confirm you have received and reviewed the Art. 13 instructions for use, and summarize relevant risk information from the provider
  • AI system monitoring plan: how you will monitor the system post-deployment and what triggers a DPIA update
  • Incident response: cross-reference your Art. 73 incident reporting process
  • Log retention: confirm log storage per Art. 26(6)
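A sketch of the AI-specific extension as structured data, mirroring the five sections above; keys and example values are illustrative, not a prescribed template:

```python
# Hypothetical AI Act extension to an existing GDPR DPIA.
dpia_ai_extension = {
    "ai_system": {"name": "...", "version": "...", "provider": "..."},
    "provider_docs_reviewed": True,   # Art. 13 instructions for use obtained
    "ai_specific_risks": [
        {"risk": "algorithmic discrimination",
         "evidence": "provider accuracy metrics by subgroup",
         "mitigation": "bias monitoring with review thresholds"},
        {"risk": "automation bias",
         "mitigation": "independent judgment before seeing the AI output"},
        {"risk": "opacity",
         "mitigation": "right-to-explanation procedure"},
    ],
    "monitoring_plan": "output dashboard; drift checks; DPIA update triggers",
    "incident_process": "Art. 73 escalation path (see Lesson 3.11)",
    "log_retention_months": 6,        # Art. 26(6)
    "review_triggers": ["model version change", "significant accuracy drift",
                        "new categories of affected persons"],
}
```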

Concrete Example

An insurance company deploys an AI system that assesses health insurance claims. Their existing GDPR DPIA covers the processing of health data under Art. 9 GDPR. They extend it with: (1) a description of the AI claim-assessment system, its provider, and its documented accuracy rate of 91% overall but 84% for claims involving rare conditions; (2) a necessity assessment explaining that AI processing reduces claim resolution from 14 days to 2 days, directly benefiting claimants; (3) AI-specific risks including the 7% accuracy gap for rare conditions (risk: legitimate claims wrongly denied), automation bias among claims adjusters, and opacity of the model's decision factors; (4) mitigations including mandatory human review of all AI-recommended denials, quarterly bias audits against condition types, and a claimant right to request fully human review; (5) an AI-specific section confirming provider documentation has been obtained, a monitoring dashboard tracks denial rates by condition category, and any 5% increase in denials triggers a DPIA review.

Common Mistakes

  • Creating a separate "AI DPIA" instead of extending the existing one. Art. 26(9) points to GDPR Art. 35 — this is the same DPIA, extended. Creating a separate document fragments your compliance and risks inconsistency.
  • Not using the provider's documentation. Art. 26(9) specifically says to use information provided under Art. 13. If your DPIA does not reference the provider's risk information, accuracy metrics, and known limitations, it is incomplete.
  • Ignoring the right to explanation. AI-assisted decisions about individuals trigger both GDPR Art. 22 (automated decision-making) and AI Act Art. 86 (right to explanation of individual decision-making). Your DPIA must address how you satisfy both.
  • Static DPIA. A DPIA written once and never updated. AI systems change — models are updated, data distributions shift, new risks emerge. Build review triggers into the DPIA: model version changes, significant accuracy drift, or new categories of affected persons.
The AI Act DPIA is not a new obligation from scratch — it is an expansion of what GDPR already requires. If you have a mature GDPR DPIA process, you are 60-70% of the way there. The key additions are: AI-specific risks (bias, opacity, automation bias), provider documentation integration, and cross-references to your Art. 14 oversight and Art. 73 incident reporting processes.
Use AIActStack's doc generator to create a DPIA template pre-filled with your AI system details and the AI Act extension sections.
3.10 Transparency Obligations (Art. 50) ~20 min

Providers and deployers of certain AI systems must comply with transparency obligations in Art. 50, including: informing natural persons that they are interacting with an AI system (Art. 50(1)); marking synthetic audio/image/video/text outputs in a machine-readable format (Art. 50(2)); informing persons exposed to emotion-recognition or biometric categorisation systems (Art. 50(3)); disclosing AI-generated deep fakes and AI-generated text published on matters of public interest (Art. 50(4)). Information must be given clearly at first interaction (Art. 50(5))[src]

Article 50 is unique in the AI Act because it applies regardless of risk classification. Even if your AI system is not high-risk, if it interacts with people or generates content, you have transparency obligations. This article affects the broadest range of companies — essentially anyone deploying customer-facing AI.

The Four Sub-Obligations

1. AI Interaction Disclosure (Art. 50(1))

What it requires: Providers must ensure that AI systems intended to interact directly with natural persons are designed and developed so that the persons concerned are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use. The notification must be given at the latest at the time of first interaction or exposure.

What you actually do:

  • If you have a chatbot, virtual assistant, or any AI-driven conversation interface: display a clear notice before or at the start of the interaction. Example: "You are chatting with an AI assistant. A human agent is available if you prefer."
  • The notice must be "clear and distinguishable" — not buried in a terms-of-service page. It must be visible at the point of interaction.
  • Exception: if it is "obvious from the circumstances" that the user is interacting with AI. A clearly labeled "AI Search" feature with a robot icon probably qualifies. A chatbot that mimics human conversation style does not.

2. AI-Generated Content Labeling (Art. 50(2))

What it requires: Providers of AI systems that generate synthetic audio, image, video, or text content must ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. The technical solution must be effective, interoperable, robust, and reliable.

Proposal to watch (Commission proposal, not yet adopted): COM(2025) 836 proposes a grace period for Art. 50(2) watermarking obligations (marking AI-generated synthetic audio/image/video/text in machine-readable format) — the Commission proposed until 2 February 2027 (six months), and the Parliament position shortens this to 2 November 2026 (three months) per secondary reporting. Under the current law in force, Art. 50 applies from 2 August 2026[src]

What you actually do:

  • Implement machine-readable metadata in AI-generated content. For images: embed markers using C2PA (Coalition for Content Provenance and Authenticity) or similar standards. For text: consider watermarking techniques or metadata headers. A minimal illustration follows this list.
  • This is a provider obligation (the company generating the content), but deployers must not remove or disable these markings.
  • The marking must survive common sharing and editing operations where technically feasible.
  • Exception: AI systems performing assistive functions (e.g., spell-checking, grammar correction) that do not substantially alter the input are exempt.
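To make the machine-readable idea concrete, a deliberately minimal illustration: embedding an AI-generated flag as PNG text metadata with Pillow. On its own this would not satisfy Art. 50(2), since plain metadata is easily stripped; production systems need a robust, interoperable provenance standard such as C2PA.

```python
# Minimal illustration only; NOT sufficient for Art. 50(2) by itself.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str, generator: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # machine-readable flag
    meta.add_text("generator", generator)  # e.g. model name and version
    img.save(path, pnginfo=meta)

img = Image.new("RGB", (512, 512))
save_with_ai_marker(img, "synthetic.png", "image-model v1 (hypothetical)")
```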

3. Emotion Recognition Disclosure (Art. 50(3))

What it requires: Deployers of emotion recognition systems or biometric categorisation systems must inform the natural persons exposed. They must also process personal data in accordance with GDPR, the Law Enforcement Directive, and relevant data protection regulations.

What you actually do:

  • If your AI analyzes facial expressions, voice tone, body language, or physiological signals to detect emotions: notify every person being analyzed before the analysis begins.
  • The notification must specify that emotion recognition is in use and what data is being processed.
  • Important: certain emotion recognition uses in the workplace and education are prohibited under Art. 5. Check Art. 5 first before implementing any emotion recognition disclosure — you may be banned from using the system entirely.

4. Deep Fake Disclosure (Art. 50(4))

What it requires: Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated. The disclosure must be made in a clear and visible manner, labeling the content as AI-generated.

What you actually do:

  • If your product generates realistic images, videos, or audio of real people, or manipulates existing media to alter what someone appears to say or do: label the output visibly. Example: a watermark, a caption, or a persistent label stating "AI-generated content."
  • The label must be placed in a way that is "clearly visible and recognisable" to the average person.
  • Exception: content that is part of an "obviously artistic, creative, satirical, fictional, or analogous work" — but this exception is narrow, and you should err on the side of disclosure.
  • Note: the machine-readable marking under 50(2) and the visible disclosure under 50(4) are separate requirements. You may need to implement both.

Concrete Example

A SaaS company builds a customer service platform with three AI features: (1) an AI chatbot for first-line support, (2) an AI email composer that drafts responses for agents, and (3) a sentiment analysis module that detects customer frustration. Under Art. 50, they must: display "You are chatting with an AI assistant" at the start of every chatbot conversation (50(1)); embed C2PA metadata in AI-drafted emails so the content is machine-detectable as AI-generated (50(2)); and notify customers that their communications are being analyzed for sentiment before the analysis occurs (50(3)). The chatbot notice appears as a banner above the chat window. The email metadata is embedded automatically by the AI provider. The sentiment notice is added to the support portal's privacy notice and displayed as a one-time notification when a customer opens a support ticket.

Common Mistakes

  • Burying the disclosure in terms of service. Art. 50(1) requires notification "at the latest at the time of first interaction." A line in your ToS that users accepted six months ago does not satisfy this. The notice must be at the point of interaction.
  • Assuming "obvious" too broadly. Providers and deployers often assume users know they are interacting with AI. Unless the AI nature is genuinely unmistakable from the interface (a clearly labeled "AI" section), disclose explicitly. When in doubt, disclose.
  • Ignoring machine-readable marking (50(2)). A visible "AI-generated" label on an image does not satisfy 50(2), which requires machine-readable detection. You need both human-visible and machine-readable markers.
  • Not checking Art. 5 before implementing emotion recognition disclosure. Some emotion recognition uses are banned entirely under Art. 5 (prohibited practices). If your use case is prohibited, a transparency notice does not make it legal — you must not deploy the system at all.
Art. 50 is the widest-reaching obligation in the AI Act. It catches companies that think they are not affected because their AI is "low risk." If your product has a chatbot, generates text or images with AI, detects emotions, or creates realistic media — you have Art. 50 obligations, no exceptions.
Use AIActStack's doc generator to create an Article 50 Transparency Notice for your product.
3.11 Incident Reporting (Art. 73) ~15 min

Providers must report serious incidents to the market surveillance authorities of the Member States where the incident occurred immediately after establishing a causal link, and in any event no later than 15 days after becoming aware; the deadline shortens to 2 days for widespread infringements or serious and irreversible disruption to critical infrastructure, and to 10 days in the event of the death of a person[src]

Both providers and deployers have reporting duties, with strict timelines. This is the AI Act's equivalent of GDPR's 72-hour breach notification — but with different triggers and different recipients.

What Is a "Serious Incident"?

Art. 3(49) defines a serious incident as any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

  • Death of a person or serious damage to a person's health
  • Serious and irreversible disruption of the management or operation of critical infrastructure
  • Breach of obligations under Union law intended to protect fundamental rights
  • Serious damage to property or the environment

The key word is "serious." Not every AI error or malfunction triggers reporting. A chatbot giving a wrong product recommendation is not a serious incident. An AI hiring tool systematically rejecting candidates of a particular ethnicity is — it constitutes a breach of fundamental rights.

Reporting Timelines

| Scenario | Deadline | Details |
|---|---|---|
| Widespread infringement, or serious and irreversible disruption of critical infrastructure (Art. 3(49)(b)) | Immediately, and no later than 2 days after awareness | The scale and urgency trigger the shortest deadline |
| Death of a person | Immediately after establishing (or suspecting) a causal link, and no later than 10 days after awareness | An initial report followed by a complete report is standard practice |
| Other serious incidents (serious health damage, fundamental-rights breach, serious property or environmental damage) | Immediately after establishing a causal link, and no later than 15 days after awareness | A single report is acceptable if the investigation is complete in time; otherwise file an initial report and follow up |

The clock starts when the provider or deployer becomes aware (or when they should reasonably have become aware) of the incident. Ignorance due to inadequate monitoring is not a defense.
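Because the deadline depends on the incident category, it is worth encoding the backstop dates once. A minimal sketch using the timelines above; the category labels are invented:

```python
# Hypothetical deadline helper for Art. 73 serious-incident reports.
from datetime import date, timedelta

MAX_DAYS = {
    "widespread_infringement": 2,
    "critical_infrastructure": 2,   # Art. 3(49)(b) incidents
    "death": 10,
    "other_serious_incident": 15,   # health, fundamental rights, property
}

def report_deadline(category: str, aware_on: date) -> date:
    """Latest date the report may reach the authority. Art. 73 still
    requires reporting immediately once a causal link is established;
    the deadline is a backstop, not a target."""
    return aware_on + timedelta(days=MAX_DAYS[category])

# Aware of a fundamental-rights incident on 1 March 2027:
print(report_deadline("other_serious_incident", date(2027, 3, 1)))  # 2027-03-16
```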

Who Reports to Whom

  • Providers report to the market surveillance authority of the member state where the incident occurred.
  • Deployers inform the provider first, and then the importer or distributor and the relevant market surveillance authority (Art. 26(5)).
  • If the incident occurs in multiple member states, report to each relevant authority.

What to Include in a Report

While the exact reporting format will be specified by implementing acts, based on Art. 73 and general incident reporting best practices, your report should include:

  1. System identification: AI system name, version, provider, registration number in the EU database
  2. Incident description: What happened, when it was detected, what harm occurred or is likely
  3. Affected persons: Number and categories of persons affected
  4. Root cause analysis: What caused the incident, to the extent known at the time of reporting (the initial report can state "investigation ongoing")
  5. Immediate actions taken: What corrective actions were taken — system suspended, outputs reversed, affected persons notified
  6. Preventive measures: What steps are being taken to prevent recurrence
  7. Contact information: Who the authority should contact for follow-up

Step-by-Step: Building an Incident Response Process

  1. Define incident categories. Map the Art. 3(49) definition to your specific context. For a hiring AI: systematic discrimination = serious incident (fundamental rights). For a medical AI: wrong diagnosis leading to delayed treatment = serious incident (health damage).
  2. Establish detection mechanisms. You cannot report what you do not detect. Implement monitoring that can catch: anomalous output patterns, demographic disparities in AI decisions, user complaints referencing AI behavior, and system malfunctions.
  3. Create an internal escalation path. Define who receives initial incident reports internally, who has authority to classify an incident as "serious," and who is responsible for external reporting. This should not require multiple approval layers — the timelines are tight.
  4. Prepare report templates. Have a pre-filled template ready so the team can focus on facts, not formatting, during a stressful incident.
  5. Identify your authorities. Know which national market surveillance authority you report to. Each EU member state designates one. Find yours in advance — do not scramble during an incident.
  6. Conduct drills. Run a tabletop exercise at least once: "Our AI hiring tool has been rejecting female candidates at 2x the rate of male candidates for 3 weeks. Walk through the response." The drill reveals gaps in your process.

Concrete Example

A healthcare SaaS deploys an AI triage system that recommends priority levels for emergency department patients. A software update introduces a regression: the system consistently under-prioritizes patients presenting with atypical cardiac symptoms, leading to 3 patients experiencing delayed treatment over 5 days. A nurse notices the pattern and reports internally. The company classifies this as a serious incident (serious harm to health) and must: (1) file an initial report with the national market surveillance authority immediately after establishing the causal link, and in any event within the Art. 73 deadline (no later than 15 days after becoming aware for serious harm to health); (2) notify the AI provider immediately; (3) suspend the AI triage system or revert to the previous version; (4) notify the hospital deploying the system; and (5) follow up with a complete report detailing the root cause (regression in the model update), the number of affected patients, and corrective actions (rollback, additional testing requirements for updates, enhanced monitoring thresholds).

Common Mistakes

  • Not having a process before an incident occurs. The 2-day timeline for critical incidents leaves no time to design a reporting process from scratch. Build it now.
  • Confusing AI Act reporting with GDPR breach notification. These are separate obligations with different triggers, different timelines, and different recipients. A data breach in your AI system may trigger both GDPR Art. 33 (72-hour notification to the data protection authority) and AI Act Art. 73 (reporting to the market surveillance authority). You must do both.
  • Waiting for certainty before reporting. Art. 73 requires reporting when you become aware or should reasonably have become aware. An initial report with "investigation ongoing" is far better than a late report with full details.
  • Only reporting to the provider. Deployers must report to both the provider AND the market surveillance authority. Reporting only to the provider does not satisfy the obligation.
Incident reporting under Art. 73 is time-critical and non-negotiable. The two things you must have ready before an incident occurs: (1) an internal escalation process with clear roles and timelines, and (2) the contact details for your relevant market surveillance authority. Everything else can be figured out during the incident — but not those two.
3.12 Post-Market Monitoring (Art. 72) ~15 min

Article 72 requires providers of high-risk AI systems to establish and document a post-market monitoring system. This is the ongoing surveillance obligation — what you do after the system is deployed to ensure it continues to comply. Think of it as the AI system's ongoing health check, not a one-time inspection.

What the Article Requires

The provider must establish a post-market monitoring system that is proportionate to the nature of the AI technologies and the risks of the high-risk system. The system must:

  • Actively and systematically collect, document, and analyze relevant data provided by deployers or collected through other sources throughout the AI system's lifetime
  • Allow the provider to continuously evaluate the AI system's compliance with the requirements in Chapter III, Section 2 (Arts. 8-15)
  • Feed into the risk management system (Art. 9) — monitoring data must trigger risk re-evaluation when needed

What a Monitoring Plan Includes

Art. 72(3) requires the post-market monitoring plan to be part of the technical documentation (Annex IV). The plan must include at minimum:

  1. Data collection strategy: What data you will collect, from which sources, and how frequently. Sources include: deployer feedback, system logs, performance metrics, user complaints, incident reports, and publicly available information (e.g., academic research on vulnerabilities in your model type).
  2. Performance monitoring metrics: The specific metrics you will track — accuracy, precision, recall, false positive/negative rates, and crucially, these metrics broken down by relevant subgroups (demographic, geographic, etc.).
  3. Bias and drift detection: How you will detect performance degradation (model drift) and emerging biases. Define thresholds: at what point does a performance change trigger investigation?
  4. Deployer feedback mechanisms: How deployers report issues to you. This is a two-way obligation — Art. 26(4) requires deployers to inform providers of risks, and you must have a channel to receive that information.
  5. Review schedule: How often you review monitoring data. For high-risk systems with frequent updates, this may be weekly. For stable systems, monthly or quarterly may suffice. The key is that the schedule is defined, not ad hoc.
  6. Trigger conditions: What findings trigger specific actions — re-evaluation of the risk management system, updates to technical documentation, corrective action, or incident reporting under Art. 73.
  7. Roles and responsibilities: Who is responsible for monitoring, who reviews findings, and who has authority to trigger corrective actions.

Data to Collect

| Data Category | Examples | Why It Matters |
|---|---|---|
| Performance metrics | Accuracy, F1 score, AUC-ROC by subgroup | Detect degradation over time (model drift) |
| Input data characteristics | Distribution shifts in incoming data vs. training data | Input drift is a leading cause of AI performance degradation |
| Output distributions | Decision rates, score distributions, rejection rates by category | Detect emerging bias or systemic errors |
| User/deployer feedback | Complaints, override rates, reported errors | Real-world signal that metrics alone may miss |
| Incident data | Near-misses, actual incidents, Art. 73 reports | Pattern detection — multiple near-misses may predict a serious incident |
| External intelligence | Published vulnerabilities, academic papers on model weaknesses, regulatory guidance updates | Risks you did not know about at deployment may emerge later |

When to Update

The monitoring plan is not static. You must update it when:

  • The AI system is significantly modified (new model version, retraining, expanded use case)
  • Monitoring reveals risks not previously identified
  • A serious incident occurs (Art. 73) — the root cause analysis should feed back into the monitoring plan
  • Harmonised standards or common specifications change
  • A national authority or the AI Office issues guidance affecting your system

Concrete Example

A provider of an AI credit-scoring system establishes a post-market monitoring plan. They collect: weekly performance metrics (approval/denial rates by age group, gender, and nationality), monthly model drift reports comparing current input distributions to training data distributions, deployer-reported issues via a dedicated compliance portal, and quarterly reviews of published research on credit-scoring bias. Their trigger conditions: if denial rate for any demographic group deviates by more than 5% from the baseline, an investigation is launched within 7 days. If the investigation confirms bias, the risk management system is updated, affected deployers are notified, and if the bias caused harm, an Art. 73 report is filed. The monitoring plan assigns the ML operations team as responsible for data collection, the compliance officer for review, and the CTO as the authority for corrective action decisions.
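The provider's trigger rule in this example is easy to encode. A sketch, reading "deviates by more than 5%" as five percentage points (an assumption; define the metric precisely in your own plan):

```python
# Hypothetical drift trigger: flag any group whose denial rate moves
# more than `threshold` (percentage points) away from its baseline.
def drift_alerts(baseline: dict[str, float], current: dict[str, float],
                 threshold: float = 0.05) -> dict[str, dict[str, float]]:
    return {group: {"baseline": baseline[group], "current": rate,
                    "deviation": rate - baseline[group]}
            for group, rate in current.items()
            if abs(rate - baseline[group]) > threshold}

alerts = drift_alerts(
    baseline={"18-25": 0.20, "26-40": 0.15, "41-65": 0.18},
    current={"18-25": 0.28, "26-40": 0.16, "41-65": 0.17},
)
print(alerts)  # {'18-25': ...}: open an investigation within 7 days
```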

Common Mistakes

  • Monitoring only aggregate metrics. A system that monitors overall accuracy but not subgroup performance will miss discriminatory drift. Art. 72 requires monitoring relevant to Art. 9 risks — and demographic bias is almost always a relevant risk for high-risk systems.
  • No trigger conditions. Collecting data without defining what constitutes a problem. If you have no thresholds, monitoring data accumulates without driving action. Define thresholds before deployment.
  • Not connecting monitoring to risk management. Art. 72 explicitly states that monitoring data must be used to evaluate compliance and feed into the Art. 9 risk management system. If your monitoring team and your risk management team do not communicate, you have a compliance gap.
  • Relying solely on deployer reports. Deployers may not detect or report all issues. Your monitoring system must include proactive data collection (system logs, automated performance checks), not just reactive deployer feedback.
Post-market monitoring is where ongoing compliance lives. Your conformity assessment gets you compliant on day one. Your monitoring system keeps you compliant on day 365. The plan must be documented in your Annex IV technical documentation and must actively feed your risk management process — it is not a standalone activity.
3.13 Distributor Obligations ~10 min

Distributors (entities that make AI systems available on the market without modifying them) have lighter but real obligations (Article 24).

What Distributors Must Do

  • Verify the provider has completed conformity assessment and CE marking
  • Verify instructions for use are provided in the correct language
  • Verify the AI system bears the required identification information
  • Not make available systems they know or should know are non-compliant
  • Inform the provider and market surveillance authorities if a system poses a risk

When Distributors Become Providers

If a distributor modifies the AI system, puts it on the market under their own name, or changes the intended purpose — they become a provider with full provider obligations.

Distributors are mostly "pass-through" — verify compliance, don't modify. But the moment you rebrand or modify, you inherit full provider obligations.
3.14 Standards & Codes of Practice ~15 min

The AI Act relies on harmonized standards (Arts. 40-41) and codes of practice (Art. 56) to define the technical details of compliance.

Harmonized Standards (CEN/CENELEC)

CEN-CENELEC JTC 21 is developing harmonised standards under Art. 40 to give a presumption of conformity. In October 2025, the BT (Technical Boards) adopted an exceptional-measures package to accelerate publication of key deliverables, with publication targeted by Q4 2026. prEN 18286 (Quality Management Systems for AI) is among the first to reach the Enquiry stage[src]

The European Commission has tasked CEN and CENELEC (European standardization bodies) with developing standards covering:

  • Risk management systems (Art. 9 implementation)
  • Data governance requirements (Art. 10 implementation)
  • Technical documentation templates (Annex IV)
  • Accuracy, robustness, and cybersecurity testing methods
  • Quality management system requirements

Compliance with harmonized standards creates a presumption of conformity — meaning if you follow the standard, authorities presume you comply with the corresponding article.

Codes of Practice for GPAI

The General-Purpose AI Code of Practice, drawn up by independent experts under Art. 56 (multi-stakeholder process), was published in final form on 10 July 2025. It offers GPAI providers a way to demonstrate compliance with Art. 53 (all providers) and Art. 55 (GPAI models with systemic risk). Adopting the Code is voluntary but non-adoption exposes providers to closer scrutiny and risk of the Art. 99(4) fines[src]

How to Stay Current

  • Follow the AI Office announcements: EC AI policy page
  • Monitor CEN/CENELEC AI standardization work programs
  • Subscribe to the European AI Act newsletter at artificialintelligenceact.eu
Standards are still being developed. This is both a risk (you can't fully comply until standards exist) and an opportunity (early adopters shape the interpretation). Follow the standards development closely.
3.15 Regulatory Sandboxes (Arts. 57-58) ~15 min

Articles 57-58 require each member state to establish at least one operational AI regulatory sandbox by the headline application date: the Regulation applies from 2 August 2026, with earlier dates for Chapters I-II (prohibited practices and AI literacy, from 2 February 2025) and Chapter V (general-purpose AI, from 2 August 2025), and a later date for high-risk AI covered by Annex I product legislation (from 2 August 2027)[src]

What's a Sandbox?

A controlled environment where companies can develop, test, and validate innovative AI systems under the direct supervision of national authorities — before full market deployment. Think of it as a "safe space to experiment."

Benefits for Startups

  • Reduced compliance burden: Sandbox participants get guidance from regulators during development, not after
  • Faster time to market: Regulatory questions answered before launch
  • SME priority: The Act gives SMEs and startups priority access to sandboxes (Art. 62)
  • Real-world testing: Art. 58 allows testing in real-world conditions with informed participants

How to Apply

  1. Check your national authority's website for sandbox applications
  2. Prepare: description of AI system, intended purpose, risk assessment, testing plan
  3. Apply early — sandbox spots are limited and competitive
If you're developing novel AI for a high-risk use case, a regulatory sandbox can significantly reduce your compliance risk and time-to-market. Check if your member state's sandbox is open for applications.
3.16 Module 3 Quiz ~15 min

Scenario: List All Obligations

Your company is a Series B SaaS startup in Berlin. You use Anthropic's Claude API to power a hiring tool that screens and ranks job applicants for EU enterprise customers. List every obligation you have under the AI Act, citing the specific articles.

Show Answer

You are a deployer of a high-risk AI system (Annex III, Category 4 — Employment). Your obligations: Human oversight (Art. 26(2)), System monitoring (Art. 26(5)), Log retention (Art. 26(6)), DPIA (Art. 26(9)), Inform workers (Art. 26(7)) and candidates (Art. 26(11)) about AI use, Transparency disclosure (Art. 50), Incident reporting (Art. 73), EU database registration if deploying as a public authority (Arts. 26(8), 49), Risk management contribution (Art. 9 via provider), Request provider documentation (Arts. 13, 47), AI literacy for staff (Art. 4).

Module 4: The Supply Chain Problem

What makes third-party AI compliance unique — your competitive differentiator

4.0 GPAI Provider Obligations & What They Owe You ~25 min

Title V of the AI Act (Articles 51-56) creates a distinct set of obligations for providers of General-Purpose AI models. If you build on GPT-4, Claude, or Gemini, the companies behind those models owe you specific documentation. Understanding what they must provide (and what they have not yet provided) is the foundation of the supply chain problem.

What Every GPAI Provider Must Do (Art. 53)

Article 53 applies to all GPAI providers, regardless of model size. Four core obligations:

  1. Technical documentation: Providers must prepare and keep up-to-date technical documentation of the model, including its training and testing process and evaluation results. This must be made available to the AI Office and national authorities on request.
  2. Downstream information: Providers must supply information and documentation to downstream providers (and deployers who integrate the GPAI model into AI systems) to enable them to understand the model's capabilities, limitations, and comply with their own obligations.
  3. Copyright compliance policy: Providers must establish a policy for respecting EU copyright law, in particular the text and data mining opt-out right under Article 4(3) of Directive (EU) 2019/790.
  4. Training data summary: Providers must draw up and make publicly available a sufficiently detailed summary of the content used for training, following a template provided by the AI Office.

Additional Obligations for Systemic Risk Models (Art. 55)

A GPAI model is presumed to have "systemic risk" if its cumulative training compute exceeds 10^25 floating-point operations (FLOPs). The European Commission can also designate models based on other criteria such as number of users, degree of autonomy, or impact on the internal market. Models in this category include GPT-4 and successors, likely Claude 3.5/4 Opus-class models, and Gemini Ultra.
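For intuition on where the threshold bites, the common back-of-envelope estimate for dense-transformer training compute is roughly 6 x parameters x training tokens. A sketch with illustrative numbers, not actual figures for any named model:

```python
# Rule-of-thumb compute estimate vs. the Art. 51(2) presumption threshold.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens  # standard ~6*N*D approximation

flops = training_flops(n_params=1e12, n_tokens=1e13)  # 1T params, 10T tokens
print(f"{flops:.1e} FLOPs; systemic risk presumed: "
      f"{flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")  # 6.0e+25; True
```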

Article 55 imposes additional requirements on systemic risk models:

  • Model evaluation: Perform standardised evaluations, including adversarial testing (red-teaming), to identify and mitigate systemic risks
  • Systemic risk assessment and mitigation: Assess and mitigate possible systemic risks at Union level, including their sources
  • Serious incident tracking: Track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national authorities without undue delay
  • Cybersecurity protections: Ensure an adequate level of cybersecurity for the model and its physical infrastructure

What Providers Have NOT Yet Provided

As of early 2026, the gap between what the Act requires and what providers have delivered is significant:

  • Training data summaries: No major provider has published a summary that meets the AI Office template requirements. OpenAI, Anthropic, and Google have disclosed broad categories ("internet data," "books," "code") but not the "sufficiently detailed" summaries the Act demands.
  • Downstream deployer documentation: Providers have published model cards and system cards as voluntary measures, but these were not designed to satisfy Art. 53(1)(b). They often lack the specificity deployers need for their own conformity assessments — particularly around performance metrics for specific use cases, known failure modes, and interaction logging formats.
  • Copyright compliance: None have published a verifiable policy for respecting text and data mining opt-outs, and several face active litigation on this front.
  • Codes of practice: The General-Purpose AI Code of Practice under Art. 56, which fleshes out how to comply with Arts. 53 and 55, was only finalised in July 2025 (see Lesson 3.14). Provider documentation practices built on it are still maturing, while deployers needed much of this information earlier: a chicken-and-egg problem.

Why This Matters to You

As a deployer, several of YOUR obligations under Article 26 require information that only the provider can give you. You cannot complete a meaningful risk assessment, DPIA, or conformity assessment without understanding the model's capabilities, limitations, training data characteristics, and known biases. The provider's failure to deliver does not eliminate your obligation — it creates legal exposure for both parties.

GPAI providers owe you technical documentation, downstream information, copyright compliance details, and training data summaries. The 10^25 FLOPs threshold for systemic risk captures all frontier models. Most of this documentation does not yet exist in the form the Act requires — which is your problem as much as theirs.
4.1 The Documentation Gap ~25 min

The AI Act creates a documentation supply chain: providers must produce specific documents and pass them downstream to deployers. Two articles are central to this obligation. Article 13 requires that high-risk AI systems be designed with sufficient transparency to enable deployers to interpret outputs and use the system appropriately. Article 47 requires providers to draw up an EU declaration of conformity for each high-risk AI system. Together, these articles define the documentation you need but probably do not have.

What Providers Must Give Deployers

Under Art. 13, providers must supply "instructions for use" that include:

  • Identity and contact details of the provider, plus their authorised representative if applicable
  • The intended purpose of the AI system and the specific conditions of use it was designed for
  • Performance metrics: the level of accuracy, robustness, and cybersecurity the system was tested and validated against, and any known circumstances that could impact performance
  • Known limitations: foreseeable conditions of misuse, their consequences, and the groups of persons on whom the system was tested (demographics, contexts)
  • Technical specifications of input data, or any other relevant information in terms of training, validation, and testing data sets used
  • Human oversight measures the deployer must implement, including technical safeguards built into the system
  • Expected lifetime and maintenance/update schedules
  • Interaction logging format so deployers know what data they will collect and how to retain it per Art. 12

Why Most Providers Have Not Sent This

There are several reasons this documentation has not materialised proactively:

  1. GPAI obligations came in on the Chapter V date. Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101[src]. Providers had a legal argument for waiting until then, although many deployers needed the information well before that to prepare their own compliance.
  2. The model is general-purpose. GPAI providers argue their models were not designed for any specific "intended purpose" — which creates tension with Art. 13's requirement to describe the intended purpose. The provider designed a general model; you turned it into a specific application.
  3. No enforcement yet. Without enforcement action or clear codes of practice, the commercial incentive to produce this documentation is weak. Providers face cost and potential liability from detailed disclosures.
  4. Competitive sensitivity. Training data details, performance benchmarks on sensitive tasks, and known failure modes are commercially sensitive information that providers are reluctant to share broadly.

The CE Marking Question

For high-risk AI systems, the provider must affix a CE marking indicating conformity (Art. 48). No GPAI provider has issued CE markings for their models when used in high-risk applications because the conformity assessment process for these use cases has not been completed — and arguably cannot be completed by the GPAI provider alone, since they do not control how deployers use the model.

Articles 13 and 47 create a clear legal obligation for providers to supply deployers with instructions for use, performance data, known limitations, and conformity declarations. The gap exists because GPAI timelines, commercial incentives, and the general-purpose nature of the models create friction. But the obligation is unambiguous — and as a deployer, you need this documentation to fulfil your own obligations.
Use AIActStack's scanner to generate documentation request emails for your specific providers, citing the exact articles and document types you need.
4.2 What to Request from OpenAI ~20 min

OpenAI is the provider most deployers need to contact first. As the maker of GPT-4, GPT-4o, and the ChatGPT platform, OpenAI's GPAI obligations under Art. 53 are extensive, and their systemic risk obligations under Art. 55 apply to their frontier models. Here is what they have published, what is still missing, and how to structure your request.

What OpenAI Has Published

  • Model cards / system cards: OpenAI publishes system cards for major model releases (GPT-4, GPT-4o). These describe general capabilities, safety evaluations, and some limitations. However, they are framed as voluntary disclosures, not as compliance documents under the AI Act.
  • Usage policies: OpenAI maintains acceptable use policies that describe prohibited uses. These partially address "foreseeable misuse" but from a terms-of-service perspective, not a regulatory documentation perspective.
  • Safety research: Papers on red-teaming, alignment techniques, and evaluation results have been published. These contain useful performance data but are scattered across blog posts and academic papers, not consolidated as Art. 13 instructions for use.

What Is Still Missing

| Required Document | Article | Status |
|---|---|---|
| Instructions for use (intended purpose, performance metrics, known limitations, human oversight guidance) | Art. 13 | Not provided in AI Act format |
| Sufficiently detailed training data summary | Art. 53(1)(d) | Not published |
| Copyright compliance policy | Art. 53(1)(c) | Not published (active litigation) |
| EU declaration of conformity (for high-risk deployments) | Art. 47 | Not issued |
| Downstream deployer documentation for compliance | Art. 53(1)(b) | Partial — system cards exist but lack deployer-specific guidance |
| Model evaluation results (adversarial testing) | Art. 55(1)(a) | Partial — some published in system cards, not comprehensive |

How to Structure the Request Email

Your email should be formal, cite specific articles, and create a paper trail. Key elements (a minimal template sketch follows the list):

  1. Identify yourself: Company name, your role as a deployer under the AI Act, the specific OpenAI products you use (model name, API tier)
  2. State the legal basis: Reference Art. 53(1)(b) (GPAI downstream information), Art. 13 (transparency and instructions for use), Art. 47 (declaration of conformity)
  3. Be specific about what you need: List each document type individually — do not send a vague "please send AI Act documentation" request
  4. Explain why you need it: State that you require this information to comply with your deployer obligations under Art. 26, including your DPIA (Art. 26(9)) and conformity assessment
  5. Set a deadline: Request a response within 30 days. This is reasonable and creates urgency.
  6. Keep a copy: This email is evidence of your good-faith compliance effort if regulators ask why you deployed without full provider documentation
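The six elements above map directly onto a reusable template. Below is a minimal Python sketch of such a generator, under stated assumptions: the company name, product list, and document list are illustrative placeholders, and you would need to look up the provider's actual compliance contact yourself.

```python
from datetime import date, timedelta

# Illustrative placeholders -- substitute your real details and the
# provider's actual compliance contact before sending anything.
DEPLOYER = "Example GmbH"
ROLE = "deployer under Regulation (EU) 2024/1689"
PRODUCTS = ["GPT-4 API"]
DOCUMENTS = {
    "Instructions for use (intended purpose, performance metrics, "
    "known limitations, human oversight guidance)": "Art. 13",
    "Downstream deployer documentation": "Art. 53(1)(b)",
    "Sufficiently detailed training data summary": "Art. 53(1)(d)",
    "EU declaration of conformity": "Art. 47",
}

def build_request(deadline_days: int = 30) -> str:
    """Assemble elements 1-5 into one email body; element 6 (keeping
    a copy) is handled by archiving whatever this returns."""
    deadline = date.today() + timedelta(days=deadline_days)
    docs = "\n".join(f"  - {d} ({art})" for d, art in DOCUMENTS.items())
    return (
        f"Subject: AI Act documentation request from {DEPLOYER}\n\n"
        f"We are {DEPLOYER}, a {ROLE}, using: {', '.join(PRODUCTS)}.\n"
        f"Under Art. 53(1)(b), Art. 13, and Art. 47 we request:\n{docs}\n"
        f"We need this information to meet our deployer obligations under "
        f"Art. 26, including our DPIA (Art. 26(9)).\n"
        f"Please respond by {deadline.isoformat()}.\n"
    )

print(build_request())
```

Archive the rendered email and the send date in your compliance file; the text itself is the evidence of your good-faith effort.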
Use AIActStack's scanner to generate a pre-filled documentation request email for OpenAI, citing the exact articles and documents relevant to your specific use case and risk level.
Do not wait for OpenAI to proactively send you documentation. They have shown no indication of doing so at scale. You must initiate the request, and you must document that you did.
4.3 What to Request from Anthropic ~20 min

Anthropic, as the provider of Claude models, has the same GPAI obligations as OpenAI under Art. 53. Claude 3.5 Sonnet, Claude 3 Opus, and successor models likely cross the 10^25 FLOPs threshold, placing them in the systemic risk category under Art. 55. Anthropic's approach to transparency has been somewhat different from OpenAI's, which affects what you can reuse and what you still need to request.

What Anthropic Has Published

  • Model cards: Anthropic publishes model cards for Claude releases. These cover general capabilities, safety evaluations, and known limitations. The detail level is generally comparable to OpenAI's system cards.
  • Responsible Scaling Policy (RSP): Anthropic has published its internal framework for evaluating catastrophic risk before scaling models. This is a voluntary commitment, not an AI Act compliance document, but it partially addresses Art. 55 requirements around systemic risk assessment.
  • Usage policies: Terms of service and acceptable use policies describe prohibited uses, which partially map to "foreseeable misuse" documentation.
  • Safety research: Anthropic publishes technical reports on Constitutional AI, red-teaming results, and alignment research. These provide useful background but are not structured as deployer-facing documentation.

What Is Still Missing

Required Document | Article | Status
Instructions for use (intended purpose, performance metrics, known limitations, human oversight guidance) | Art. 13 | Not provided in AI Act format
Sufficiently detailed training data summary | Art. 53(1)(d) | Not published
Copyright compliance policy | Art. 53(1)(c) | Not published
EU declaration of conformity | Art. 47 | Not issued
Downstream deployer documentation for compliance | Art. 53(1)(b) | Partial — model cards exist but lack deployer-specific compliance guidance
Systemic risk evaluation results | Art. 55(1)(a) | Partial — RSP framework exists, specific evaluation results vary

Anthropic-Specific Considerations

Two factors differentiate an Anthropic request from an OpenAI request:

  1. The RSP is an asset. Reference it in your request. Ask Anthropic to map their RSP commitments to specific AI Act obligations. This shows you have done your homework and makes it harder for them to deflect.
  2. Anthropic has been more vocal about EU engagement. They have published position papers on EU AI regulation and participated in consultation processes. This creates a reasonable expectation that they will respond constructively to formal deployer requests.

Structure your email the same way as the OpenAI request: identify yourself, cite Art. 53(1)(b), Art. 13, and Art. 47, list specific documents needed, explain why (your Art. 26 obligations, DPIA, conformity assessment), and set a 30-day deadline. Reference the RSP explicitly and ask how it maps to their Art. 55 systemic risk obligations.

Anthropic's RSP and model cards provide a stronger starting point than most providers, but they are voluntary disclosures, not AI Act compliance documents. You still need formal Art. 13 instructions for use, training data summaries, and copyright compliance documentation. Request it formally, reference their existing work, and document the exchange.
4.4 What to Request from Google ~15 min

Google DeepMind, as the provider of Gemini models, has the same GPAI obligations under Art. 53 as OpenAI and Anthropic. Gemini Ultra almost certainly exceeds the 10^25 FLOPs threshold. The request structure mirrors what you would send to OpenAI or Anthropic, with Google-specific considerations.

What Google Has Published

  • Model cards: Google publishes technical reports and model cards for Gemini releases with capability descriptions and safety evaluations
  • AI Principles: Google's published AI Principles (since 2018) outline ethical commitments but are corporate governance documents, not AI Act compliance materials
  • Responsible AI practices: Google has published guidelines on fairness, interpretability, and safety testing, spread across various research publications

What Is Still Missing

The gap mirrors OpenAI and Anthropic: no formal Art. 13 instructions for use, no training data summary per Art. 53(1)(d), no copyright compliance policy, and no EU declaration of conformity. Google's model cards are more detailed than most competitors on some technical benchmarks but still fall short of the structured deployer documentation the Act requires.

Google-Specific Considerations

  • EU presence: Google has a substantial EU legal and compliance infrastructure due to GDPR enforcement history. They are more likely to have internal teams working on AI Act compliance than smaller providers.
  • Multiple products: Be specific about which Google AI product you use — Gemini API, Vertex AI, Google Cloud AI services, or embedded AI features. Each may have different compliance paths.
  • GDPR precedent: Google has received some of the largest GDPR and ePrivacy fines in the EU, running to hundreds of millions of euros across member states. They understand EU regulatory enforcement is real. This context may make them more responsive to formal compliance requests than providers without this history.
Apply the same request template: identify yourself, cite Arts. 53(1)(b), 13, and 47, list specific documents, explain your deployer obligations under Art. 26, set a 30-day deadline. Google's EU enforcement history means they may be better prepared to respond — but you still need to ask.
4.5 What to Do When Providers Don't Respond ~25 min

You sent the request. Thirty days pass. No response, or a vague response that does not address your specific document requests. This is the scenario most deployers will face in 2025-2026. The question is: what is your legal exposure, and what can you do about it?

Your Legal Exposure as Deployer

The uncomfortable truth: the AI Act does not give deployers a blanket exemption when their provider fails to deliver documentation. Article 26 states your obligations unconditionally — there is no clause that says "unless your provider did not cooperate." If you deploy a high-risk AI system without adequate documentation, you bear liability for non-compliance with your deployer obligations, even if the root cause is provider silence.

However, a deployer who can demonstrate good-faith effort to obtain documentation will be in a fundamentally different position from one who never asked. Regulators enforcing the Act will consider the reasonableness of your compliance effort.

Can You Deploy Without Provider Documentation?

For limited-risk deployments (chatbots, content generation): yes, with appropriate transparency disclosures under Art. 50 and reasonable internal documentation of what you know about the model. The documentation gap is less critical here because your obligations are narrower.

For high-risk deployments (hiring, credit, healthcare): this is legally precarious. Your DPIA (Art. 26(9)) must assess the impact of the AI system, and you cannot do that rigorously without understanding the model's training data, biases, and failure modes. Deploying high-risk AI without adequate documentation is a significant risk.

Workarounds

  1. Use publicly available model cards as partial evidence. Treat published system cards, model cards, technical reports, and safety evaluations as your best available source. Document that you reviewed them, extract relevant information, and note where they fall short of Art. 13 requirements.
  2. Document the gap explicitly. Create a "Provider Documentation Gap Analysis" document listing every Art. 13 requirement, what information you have, where it came from, and what is missing. This demonstrates diligence.
  3. Implement additional monitoring. Where provider documentation is insufficient, compensate with enhanced monitoring: more extensive output logging, more frequent human review cycles, tighter usage boundaries, additional input/output filtering.
  4. Conduct your own evaluation. Run your own performance testing for your specific use case. Measure accuracy, bias indicators, and failure rates with your actual data. This cannot replace provider documentation but demonstrates that you took responsibility (see the sketch after this list).
  5. Restrict the deployment scope. Narrow the use case to reduce risk. A hiring AI that only assists with initial sorting (with mandatory human review of every candidate) is lower risk than one that makes autonomous shortlisting decisions.
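Workaround 4 is the most mechanical of these, so here is a minimal sketch of what "conduct your own evaluation" can mean in practice: overall accuracy plus a per-group selection-rate comparison as a crude bias indicator. The record fields are invented, and the 0.8 threshold is borrowed from the four-fifths heuristic purely for illustration; it is not an AI Act requirement.

```python
from collections import defaultdict

# Invented example records: model prediction, ground truth, and the
# demographic group of the affected person.
records = [
    {"pred": 1, "truth": 1, "group": "A"},
    {"pred": 1, "truth": 0, "group": "A"},
    {"pred": 0, "truth": 0, "group": "B"},
    {"pred": 0, "truth": 1, "group": "B"},
]

def accuracy(rows):
    return sum(r["pred"] == r["truth"] for r in rows) / len(rows)

def selection_rates(rows):
    """Share of positive predictions per demographic group."""
    by_group = defaultdict(list)
    for r in rows:
        by_group[r["group"]].append(r["pred"])
    return {g: sum(p) / len(p) for g, p in by_group.items()}

rates = selection_rates(records)
print(f"accuracy: {accuracy(records):.2f}")
print(f"selection rates: {rates}")

# 0.8 is the four-fifths heuristic -- an illustrative threshold only.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("disparity flag: investigate before relying on this output")
```

Run it on your actual inputs and keep the output with your compliance file; the numbers matter less than the documented fact that you measured them.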

Escalation: Formal Notice Under Art. 25

Article 25 addresses responsibilities along the AI value chain. If a provider fails to fulfil their obligations and this prevents you from meeting yours, you have grounds to send a formal notice documenting the provider's non-compliance. This notice should:

  • Reference the specific articles the provider has not complied with (Arts. 13, 47, 53)
  • State that their non-compliance is impeding your ability to meet Art. 26 deployer obligations
  • Request remediation within a specific timeframe (14-30 days)
  • State that you will notify the relevant national authority and/or the AI Office if they do not respond

This is a last resort, but it creates a legally defensible record. You can also file a complaint with the AI Office under Art. 89, which has investigative powers over GPAI providers.

Never assume that "my provider did not give me the documents" is a defence. It is a mitigating factor, not an exemption. Regulators will ask what you did about the gap — not just whether the gap existed.
4.6 Evaluating Provider Documentation ~20 min

Suppose your provider does respond — either with formal AI Act documentation or by pointing you to existing model cards and technical reports. How do you evaluate whether what they sent is sufficient for your compliance needs? Not all documentation is created equal, and a 20-page model card may still leave critical gaps.

Evaluation Checklist

Go through each item. If the provider documentation does not address it, flag it as a gap you must fill yourself or escalate.

Requirement | Article | What to Look For
Intended purpose | Art. 13(3)(a) | Does it describe what the model is designed for? Does it cover YOUR specific use case, or only generic capabilities?
Performance metrics | Art. 13(3)(b) | Are accuracy, precision, recall, or other relevant metrics provided? For what tasks and datasets? Are they relevant to your deployment context?
Known limitations | Art. 13(3)(b) | Are failure modes documented? Are there known demographic biases? Does it specify what the model should NOT be used for?
Foreseeable misuse | Art. 13(3)(b) | Are misuse scenarios described with their potential consequences? Is your use case close to any identified misuse pattern?
Training data characteristics | Art. 53(1)(d) | Is there a summary of training data sources, volume, time period, and geographic/linguistic coverage? Are any known data quality issues documented?
Known biases | Art. 10(2)(f) | Are bias evaluation results provided? For which protected characteristics (gender, race, age, disability)? Were mitigation measures applied?
Human oversight guidance | Art. 14 | Does the documentation describe how humans should supervise the system? What override mechanisms exist? What signals should trigger human intervention?
Interaction log format | Art. 12 | Does the documentation describe what data the system logs, in what format, and how to retain it? Can you comply with Art. 26(6) log retention based on this?
Contact information | Art. 13(3)(a) | Is there a designated compliance contact, EU authorised representative, or complaint channel?

Is It Sufficient for Your DPIA?

Your DPIA under Art. 26(9) must assess the impact of the AI system on fundamental rights. To do this, you need to understand:

  • What data the model was trained on (could it encode biases against protected groups?)
  • How the model performs across different demographic groups
  • What happens when the model fails — what are the consequences for affected individuals?
  • Whether the model's outputs can be audited and explained

If the provider documentation does not answer these questions for your specific deployment, your DPIA has a blind spot. Document that blind spot, describe what compensating measures you have implemented, and note the outstanding information request to the provider.

Is It Sufficient for Your Conformity Assessment?

If your deployment is high-risk and requires a conformity assessment (Art. 43), you need the provider's documentation to demonstrate that the AI system meets the requirements of Chapter 2 of Title III. Without the provider's technical documentation covering risk management (Art. 9), data governance (Art. 10), and accuracy/robustness (Art. 15), your conformity assessment will have material gaps.

Create a "Provider Documentation Evaluation Matrix" — a spreadsheet mapping each Art. 13 requirement to the specific page/section of the provider documentation that addresses it, or marking it as "gap." This becomes part of your compliance file.
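If you prefer to keep that matrix in version control rather than a spreadsheet, a minimal sketch might look like the following. The requirement labels follow the checklist above; the "where addressed" entries are invented examples, not references to any real provider document.

```python
import csv

# Map each requirement to where the provider documentation addresses
# it, or mark it "GAP". The section references are invented examples.
matrix = {
    "Intended purpose (Art. 13(3)(a))": "System card, section 2",
    "Performance metrics (Art. 13(3)(b))": "System card, section 4.1",
    "Known limitations (Art. 13(3)(b))": "GAP",
    "Foreseeable misuse (Art. 13(3)(b))": "Usage policy (partial)",
    "Human oversight guidance (Art. 14)": "GAP",
    "Interaction log format (Art. 12)": "GAP",
}

# Export for the compliance file.
with open("provider_doc_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["requirement", "where addressed"])
    writer.writerows(matrix.items())

gaps = [req for req, where in matrix.items() if where == "GAP"]
print(f"{len(gaps)} gaps to fill or escalate:")
for g in gaps:
    print(" -", g)
```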
4.7 The Conformity Declaration Chain (Art. 47) ~20 min

Providers of high-risk AI systems must draw up a written, machine-readable, signed EU declaration of conformity per Art. 47, including the information listed in Annex V, keep it up to date, and keep a copy at the disposal of national competent authorities for 10 years after the AI system has been placed on the market or put into service [src].

The problem for deployers of third-party AI is what happens when this declaration does not exist — or when it cannot exist in the form the Act envisions.

The Pass-Through Problem

The AI Act was designed primarily for a straightforward supply chain: a provider builds an AI system, conducts a conformity assessment, issues a declaration, affixes CE marking, and places it on the market. A deployer then uses that assessed system according to the provider's instructions.

GPAI models break this model. OpenAI does not build a "hiring screening AI system" — they build GPT-4, a general-purpose model. You build the hiring screening system by integrating GPT-4 with your application logic, prompts, and data pipeline. This creates a fundamental question: who conducts the conformity assessment for the resulting high-risk system?

Three Scenarios

Scenario | Who Does the Conformity Assessment? | Provider's Art. 47 Declaration?
Provider sells a ready-to-use high-risk AI system (e.g., turnkey hiring AI product) | The provider | Yes — provider issues declaration before placing on market
Deployer uses a GPAI model as a component in a self-built high-risk system | The deployer (who is now arguably a provider under Art. 25 if the modification is substantial) | No — the GPAI provider's declaration covers the model, not your system
Deployer uses a GPAI model with minimal modification in a high-risk context | Unclear — this is the grey zone | The provider has not issued one for this use case

Your Liability as Deployer

If you deploy a high-risk AI system and no valid EU declaration of conformity exists for it, you are operating a non-conforming AI system in the EU market. Non-compliance with operator or notified body obligations (other than Art. 5) is subject to administrative fines of up to EUR 15 000 000 or, for an undertaking, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. This tier covers obligations of providers (Art. 16), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), deployers (Art. 26), and notified bodies (Arts. 31, 33, 34), as well as transparency obligations (Art. 50) [src].

The practical reality is that most deployers using GPAI models in high-risk contexts will need to take responsibility for the conformity assessment of their integrated system — even though they do not control the underlying model. This means:

  • You need the provider's technical documentation (Art. 11, Annex IV) as input to your own assessment
  • You must document the integration — how you combined the GPAI model with your application logic, what constraints you applied, what testing you performed
  • You must assess whether your modifications qualify as "substantial" under Art. 25, which would make you a provider of the integrated system with full provider obligations
  • If you are a provider of the integrated system, you must conduct the conformity assessment yourself (or engage a notified body for biometric systems under Art. 43(1))

What the Declaration Must Contain

Per Annex V, the EU declaration of conformity must include: the AI system identification, provider name and address, a statement that the declaration is issued under the provider's sole responsibility, a description of the AI system, references to harmonised standards or common specifications applied, the conformity assessment procedure followed, and the date and signature. It must be kept up to date.

The conformity declaration chain breaks when deployers use general-purpose models in high-risk contexts. No GPAI provider has issued Art. 47 declarations for downstream high-risk deployments. If you are building a high-risk AI system on top of a GPAI model, you are very likely the entity responsible for the conformity assessment — and you need the provider's documentation to do it properly.
If you are using a GPAI model in a high-risk context and have not assessed whether Art. 25 makes you a "provider" of the combined system, do that assessment now. The answer determines whether you have deployer obligations or the much heavier provider obligations.
4.8 Module 4 Quiz ~15 min

Draft a Documentation Request

Your company uses OpenAI's GPT-4 API for a customer-facing chatbot deployed in the EU. Draft an email to OpenAI's compliance team requesting the documentation you need under Articles 13 and 53. Be specific about what documents and why you need them.

Supply Chain Analysis

Answer these without looking back at the lessons:

  1. Name the four core obligations every GPAI provider has under Art. 53.
  2. What is the FLOPs threshold for systemic risk? Which models likely exceed it?
  3. Your provider sends you their published model card and says "this satisfies our Art. 13 obligations." What three things do you check to verify this claim?
  4. You use Claude's API to power a hiring tool. Anthropic has not responded to your documentation request. Can you legally deploy the tool? What should you do?
  5. Who is responsible for the conformity assessment when you build a high-risk system using a GPAI model as a component?
Show Answers
  1. Technical documentation, downstream information to deployers, copyright compliance policy, and training data summary.
  2. 10^25 FLOPs. GPT-4 and successors, Claude 3/4 Opus-class models, and Gemini Ultra likely exceed it.
  3. Check whether it covers: (a) performance metrics relevant to your specific use case, (b) known limitations and foreseeable misuse scenarios, (c) human oversight guidance and interaction log format. A generic model card almost certainly does not cover all Art. 13 requirements for your deployment.
  4. Deploying a high-risk system without provider documentation is legally precarious. You should: (a) use publicly available model cards as partial evidence, (b) document the gap and your request attempts, (c) conduct your own performance evaluation, (d) implement enhanced monitoring, (e) consider sending a formal Art. 25 notice and/or filing with the AI Office. Simply deploying without action is not defensible.
  5. You are. As the entity building the integrated system, you likely qualify as a provider under Art. 25 (substantial modification of intended purpose). The GPAI provider's obligations cover the model; your conformity assessment must cover the system you built with it.

Module 5: Enforcement & Defense

What happens when regulators come knocking — being audit-ready

5.0 Early Enforcement — What's Happened So Far ~20 min

The EU AI Act entered into force on August 1, 2024. Enforcement is phased, and as of early 2026 we are still in the early stages — but not in a vacuum. Understanding what has happened so far, and what the GDPR parallel tells us, is critical for calibrating your urgency.

Timeline of Enforcement Milestones

  • Regulation (EU) 2024/1689 entered into force on 1 August 2024, the twentieth day after its publication in the Official Journal on 12 July 2024 [src]. The clock starts; no obligations apply yet beyond general applicability.
  • Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025 [src]. First real enforcement deadline: any organisation using AI must ensure staff AI literacy, and all prohibited AI practices become illegal.
  • Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101 [src]. OpenAI, Anthropic, Google, and other GPAI providers must comply; national authorities must be designated.
  • The Regulation applies from 2 August 2026, with earlier dates for Chapters I-II (prohibited practices and AI literacy, from 2 February 2025) and Chapter V (general-purpose AI, from 2 August 2025), and a later date for Annex I legacy high-risk systems already on the market (2 August 2027) [src]. The major deadline: all deployer obligations, conformity assessments, transparency requirements, and EU database registration.

What Has Actually Happened

As of early 2026, enforcement activity has been limited but deliberate:

  • Authority designation: Member states have been designating national competent authorities, though the pace varies significantly. Some (France, Netherlands, Spain) moved quickly; others are still finalising institutional arrangements.
  • AI Office establishment: The European AI Office within the Commission has been staffing up and developing codes of practice for GPAI models. Their first substantive output — draft codes of practice — went through public consultation in late 2025.
  • No formal fines yet: As of this writing, no AI Act-specific fines have been issued. This is consistent with the enforcement phase: only prohibited practices and AI literacy have been enforceable, and authorities are focused on institutional setup.
  • Informal guidance: Several national authorities have published preliminary guidance documents on AI literacy requirements and prohibited practices, signalling how they interpret these provisions.

The GDPR Parallel

GDPR enforcement offers the best predictive model. GDPR entered into force in May 2016, became enforceable in May 2018, and the first significant fines did not arrive until January 2019 (CNIL fined Google 50 million EUR). The pattern:

  1. Year 1 (2018): Grace period. Authorities focused on complaints, guidance, and institutional readiness. Very few enforcement actions.
  2. Year 2 (2019): First symbolic fines. Targets: big tech companies and egregious violators. Purpose: signal that enforcement is real.
  3. Year 3+ (2020-present): Enforcement scaled. Fines increased in size and frequency. Smaller organizations began receiving enforcement actions.

Expect the AI Act to follow a similar trajectory. The first fines are likely to target GPAI providers for Art. 53 non-compliance (the most visible obligation with the clearest deadline) and obvious prohibited practice violations. Deployer enforcement will follow, likely starting with high-risk sectors.

We are in the "grace period" phase of AI Act enforcement — similar to GDPR's first year. No fines have been issued, but the institutional machinery is being built. History tells us that enforcement will accelerate quickly once it starts. The organizations that prepared early will be rewarded; those that assumed "nobody is enforcing this" will be caught off guard.
5.1 National AI Authorities ~20 min

Each EU member state must designate one or more national competent authorities to supervise the application and implementation of the AI Act (Article 70). These authorities are your primary enforcement contact point — they receive complaints, conduct investigations, and impose penalties. The landscape is still forming, but key countries have made their designations.

Key Country Designations

Country | Authority | Notes
France | CNIL (data protection) + dedicated AI authority coordination | France has taken a dual approach: CNIL handles AI-related data protection issues, while a broader coordination mechanism addresses non-data AI obligations. France has been among the most active in publishing guidance.
Germany | BNetzA (Federal Network Agency) as coordinator, with sectoral regulators for specific domains | Germany's federal structure means enforcement may involve multiple authorities depending on the sector (health, finance, employment). BNetzA provides coordination.
Netherlands | Autoriteit Persoonsgegevens (AP) designated as AI authority | The Netherlands moved quickly, leveraging its data protection authority. The AP has been one of the more enforcement-active DPAs under GDPR, which may predict AI Act enforcement posture.
Spain | AESIA (Spanish Agency for the Supervision of Artificial Intelligence) | Spain created a dedicated AI-specific agency — one of the first member states to do so. AESIA was operational before the AI Act entered into force, signalling strong enforcement intent.
Italy | AgID (Agency for Digital Italy) + Garante (data protection) | Italy has been active on AI enforcement even before the AI Act — the Garante's temporary ban of ChatGPT in 2023 demonstrated willingness to act aggressively on AI issues.

What This Means for You

Your primary enforcement risk depends on where your AI system affects people, not where your company is located:

  • If your AI system affects people in France, CNIL and the French AI coordination mechanism have jurisdiction
  • If your system is used across multiple member states, each national authority has jurisdiction for its territory, but the AI Office coordinates cross-border cases
  • If your company is not established in the EU but your AI system affects EU persons, you must designate an authorised representative in the EU (Art. 22), and the authority of the member state where that representative is located has primary jurisdiction

Authority Powers

National competent authorities can (Art. 74):

  • Access all information necessary for their tasks, including source code in justified cases
  • Conduct audits and inspections
  • Request documentation, data, and evidence
  • Order corrective actions, including withdrawal of AI systems from the market
  • Impose administrative fines per Art. 99
Enforcement will vary by member state. Countries with aggressive GDPR enforcement records (France, Netherlands, Italy) are likely to be aggressive on AI Act enforcement. If your AI system affects people in these jurisdictions, prepare accordingly. Know which authority has jurisdiction over your deployment and monitor their published guidance.
5.2 The European AI Office ~20 min

The European AI Office is a body within the European Commission with a specific mandate: oversee GPAI model compliance, develop codes of practice, and coordinate enforcement across member states (Articles 64-69). It is the single most important institutional actor for companies building on foundation models.

Core Mandate

The AI Office has three primary functions:

  1. GPAI oversight: Direct supervision of GPAI model providers. The AI Office (not national authorities) is responsible for ensuring OpenAI, Anthropic, Google, and other GPAI providers comply with Arts. 51-56. This includes reviewing technical documentation, evaluating systemic risk assessments, and investigating non-compliance.
  2. Codes of practice: The AI Office develops and maintains codes of practice that provide detailed guidance on how to comply with GPAI obligations. These codes, developed in consultation with industry and civil society, will be the practical benchmark for compliance. Adherence to a code of practice creates a presumption of conformity.
  3. Coordination: The AI Office coordinates between national competent authorities, facilitates information sharing, and provides guidance on consistent interpretation of the Act. It also manages the European Artificial Intelligence Board, which brings together member state representatives.

Staffing and Capacity

The AI Office has been planned with approximately 140 staff members, including technical experts, legal specialists, and policy officers. This is a small team for such a mandate; unlike the European Data Protection Board, it has not had years to build capacity. In practice, the AI Office will need to prioritise ruthlessly. Expect an initial focus on the largest GPAI providers and the most obvious compliance gaps.

Powers

The AI Office can:

  • Request information and documentation from GPAI providers
  • Conduct evaluations of GPAI models, including requesting access to models for testing
  • Issue binding decisions requiring GPAI providers to take corrective action
  • Recommend that the Commission impose fines on non-compliant GPAI providers
  • Classify GPAI models as presenting systemic risk (expanding the list beyond the 10^25 FLOPs presumption)

Why This Matters for Deployers

As a deployer, you do not interact with the AI Office directly — your enforcement relationship is with your national competent authority. However, the AI Office's work on GPAI compliance directly affects you because:

  • If the AI Office pressures GPAI providers to produce better documentation, that documentation flows downstream to you
  • The codes of practice will define what "adequate" provider documentation looks like, which determines whether what you receive is sufficient
  • If you file a complaint about a GPAI provider's failure to provide documentation, the AI Office has the investigative power to act on it
The European AI Office is the enforcement body that matters most for the supply chain problem. It directly oversees GPAI providers, develops the codes of practice that define compliance standards, and has binding decision-making power. Watch for their published codes of practice — they will become the practical benchmark for what providers must give you.
5.3 Market Surveillance (Arts. 75-94) ~20 min

Market surveillance is how enforcement actually happens in practice. Title IX of the AI Act (Articles 75-94) establishes the framework for monitoring the AI systems that are already on the market and in use. Understanding how investigations are triggered and conducted tells you what to prepare for.

How Investigations Are Triggered

Market surveillance authorities can initiate investigations through several channels:

  • Complaints: Any person or organisation can file a complaint with the national competent authority about an AI system they believe is non-compliant. This is the most common trigger under GDPR, and it will likely be the most common trigger for the AI Act. Competitors, NGOs, disgruntled employees, and affected individuals all file complaints.
  • Proactive monitoring: Authorities can conduct proactive market scans, particularly targeting sectors known for high-risk AI use (hiring, credit, healthcare). This is resource-intensive and will likely be limited initially.
  • Incident reports: Under Art. 73, deployers must report serious incidents. These reports can trigger investigations into broader non-compliance.
  • Cross-border referrals: One national authority can refer a case to another, or the AI Office can flag potential non-compliance discovered during GPAI oversight.
  • Media and public attention: High-profile AI failures or controversies can prompt authorities to open investigations, particularly when political pressure builds.

What an Investigation Looks Like

Based on the market surveillance framework in the AI Act and analogous processes under GDPR and product safety regulation:

  1. Information request: The authority sends a formal request for documentation — your risk management system, DPIA, conformity assessment, provider correspondence, training records, incident logs. You typically have 15-30 days to respond.
  2. Document review: The authority reviews your documentation for completeness and quality. They check whether your risk assessment actually addresses the risks posed by your specific deployment, whether your DPIA is substantive, and whether you have the provider documentation you should have.
  3. Technical evaluation: In some cases, the authority may request access to the AI system for testing, or may commission independent technical evaluation. For high-risk systems, they may request access to logs and performance data.
  4. Findings and corrective action: If non-compliance is found, the authority issues findings and may require corrective action within a specified timeframe. This can range from documentation remediation to system modification to market withdrawal.
  5. Sanctions: If non-compliance is serious or not remediated, fines per Art. 99 can follow.

Powers of Authorities

Market surveillance authorities have broad powers under Art. 74:

  • Access to all necessary information, including source code in justified cases
  • Power to conduct on-site audits and inspections
  • Power to order corrective actions, including system recall or withdrawal from the market
  • Power to impose interim measures if there is an imminent risk to fundamental rights
Enforcement will be primarily complaint-driven in the early years, with proactive audits increasing over time. The most likely trigger for an investigation is a complaint — from a competitor, an affected individual, or an NGO. Your best protection is documentation: if you can produce a complete compliance file within 30 days of a request, you are in a strong position.
5.4 Penalties Breakdown (Art. 99) ~15 min

Article 99 sets out the penalty framework. Each tier applies "whichever is higher" between the absolute cap and the percentage-of-turnover.

Prohibited practices (Art. 5)

Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher [src].

Operator and notified-body obligations (Art. 99(4))

Non-compliance with operator or notified body obligations (other than Art. 5) is subject to administrative fines of up to EUR 15 000 000 or, for an undertaking, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. This tier covers obligations of providers (Art. 16), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), deployers (Art. 26), and notified bodies (Arts. 31, 33, 34), as well as transparency obligations (Art. 50) [src].

Incorrect information to authorities (Art. 99(5))

Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request is subject to administrative fines of up to EUR 7 500 000 or, for an undertaking, up to 1% of total worldwide annual turnover for the preceding financial year, whichever is higher [src].

SME and start-up cap

For SMEs (including startups), each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4, and 5 — not the higher [src].

The SME rule flips the direction: for SMEs the cap is the lower of the two ceilings, not the higher. This is the opposite of the paragraphs above and is a common source of drafting errors in compliance briefs.
5.5 What "Audit-Ready" Looks Like ~25 min

Being "audit-ready" means you can produce a complete, credible compliance file within 30 days of a regulatory request. This is not about perfection — it is about demonstrating systematic, good-faith effort to comply. Here is the documentation trail that separates a compliant deployer from a non-compliant one.

The Compliance File

Your compliance file should contain the following documents, organised and readily accessible:

Document | Article | What It Demonstrates
Risk Management Records | Art. 9 | You identified, analysed, estimated, and evaluated risks throughout the AI system lifecycle. You documented residual risks and mitigation measures.
Data Protection Impact Assessment (DPIA) | Art. 26(9) | You assessed the AI system's impact on fundamental rights before deployment. You identified risks to specific groups and documented mitigation.
Conformity Assessment (if applicable) | Art. 43 | For high-risk systems: you assessed whether the AI system meets Chapter 2 requirements. If you are the provider of an integrated system under Art. 25, you conducted the full assessment.
Provider Correspondence | Arts. 13, 47, 53 | You requested documentation from your AI provider. You have copies of sent requests (with dates), any responses received, and a gap analysis of what was and was not provided.
Provider Documentation Evaluation | Art. 13 | You evaluated the provider's documentation against Art. 13 requirements and documented gaps, compensating measures, and outstanding requests.
Incident Logs | Art. 73 | You maintained a log of all incidents (serious and non-serious). For serious incidents, you documented the report to the relevant authority within the required timeframe.
AI Literacy Training Records | Art. 4 | You ensured staff and relevant persons have sufficient AI literacy. You documented what training was provided, to whom, when, and how competency was assessed.
Human Oversight Procedures | Arts. 14, 26(2) | For high-risk: you documented who provides human oversight, their qualifications, the procedures they follow, escalation protocols, and how they can override the system.
System Monitoring Records | Art. 26(5) | You monitored the AI system in operation. You have logs showing monitoring activities, identified issues, and actions taken.
Transparency Notices | Art. 50 | You implemented and documented all required transparency disclosures: AI interaction notices, content labelling, deep fake disclosures.
EU Database Registration | Art. 49 | For high-risk: you registered the AI system in the EU database and can demonstrate the registration is current and accurate.

Evidence of Ongoing Monitoring

Static documentation is necessary but not sufficient. Regulators will look for evidence that compliance is a living process:

  • Regular review cadence: Show that you review your risk assessment and DPIA periodically (quarterly for high-risk, annually for limited-risk)
  • Performance monitoring data: Accuracy metrics, bias indicators, output quality scores tracked over time
  • Incident response evidence: How you investigated past incidents, what corrective actions you took, and whether they were effective
  • Training updates: Staff training is current, not just a one-time event from 18 months ago
  • Provider relationship management: Ongoing correspondence with your AI provider, not just a single unanswered email
Assemble your compliance file now. Use the table above as a checklist. For each document, note its status: (a) complete, (b) in progress, (c) not started, (d) blocked by provider. This gap analysis is itself a compliance artifact — it shows you know what you need and are working toward it.
Audit-readiness is not about having perfect documentation — it is about having systematic, complete, and honest documentation. A gap analysis that honestly states "we requested this from our provider and have not received it" is far better than no documentation at all. Regulators reward good faith effort and systematic approach.
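The status tracking suggested above is easy to make machine-checkable. A minimal sketch follows, assuming the four status values from the callout and a handful of documents from the table; the assignments are placeholders, not an assessment of any real system.

```python
from enum import Enum

class Status(Enum):
    COMPLETE = "complete"
    IN_PROGRESS = "in progress"
    NOT_STARTED = "not started"
    BLOCKED = "blocked by provider"

# Placeholder statuses -- replace with your real state.
compliance_file = {
    "Risk Management Records (Art. 9)": Status.IN_PROGRESS,
    "DPIA (Art. 26(9))": Status.COMPLETE,
    "Provider Correspondence (Arts. 13, 47, 53)": Status.BLOCKED,
    "AI Literacy Training Records (Art. 4)": Status.COMPLETE,
    "Incident Logs (Art. 73)": Status.NOT_STARTED,
}

# Group documents by status -- the gap analysis itself is a
# compliance artifact, so print it in an archivable form.
for status in Status:
    docs = [d for d, s in compliance_file.items() if s is status]
    if docs:
        print(f"{status.value}:")
        for d in docs:
            print("  -", d)
```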
5.6 Defense Strategies ~20 min

You receive a formal notice from a national competent authority regarding your AI system. This is not the end of the world — it is the beginning of a process. How you respond in the first 48 hours shapes the outcome. Here is the playbook.

Step 1: Do Not Panic, Do Not Ignore

A regulatory notice is not a fine. In most cases, it is an information request or a notification of an investigation. The authority wants to understand your compliance posture before deciding next steps. Your response matters enormously. Ignoring the notice or responding defensively will escalate the situation; responding promptly and transparently will de-escalate it.

Step 2: Document Everything from This Moment

From the moment you receive the notice, create a response file:

  • Log the date, time, and method of receipt
  • Identify the requesting authority and the specific articles/obligations cited
  • Note the response deadline (typically 15-30 days)
  • Preserve all relevant documents in their current state — do not modify, delete, or "improve" documentation after receiving the notice

Step 3: Engage Specialised AI Law Counsel

General corporate counsel may not have the AI Act expertise needed. Engage a law firm with specific EU AI Act experience. Key qualifications to look for:

  • Track record in EU regulatory enforcement (GDPR, product safety, digital regulation)
  • Understanding of AI technology, not just regulatory text
  • Relationships with the specific national authority that sent the notice
  • Experience with coordinated multi-jurisdiction enforcement (if your deployment spans multiple member states)

Step 4: Demonstrate Good Faith Effort

The single most important factor in regulatory enforcement outcomes is whether you can demonstrate good-faith compliance effort. Specifically:

  • Show your compliance file: Produce your risk management records, DPIA, provider correspondence, training records, and monitoring data. Even if incomplete, a structured effort demonstrates seriousness.
  • Acknowledge gaps honestly: If you are missing provider documentation, say so — and show the requests you sent. If your DPIA has blind spots, acknowledge them and explain your compensating measures.
  • Show timeline of effort: When did you start compliance work? What milestones have you achieved? A company that started compliance in 2025 is in a fundamentally different position from one that started the day before the notice arrived.

Step 5: Present a Remediation Plan

If the authority identifies non-compliance, respond with a concrete remediation plan:

  • Specific actions to address each identified gap
  • Realistic timelines for each action (do not overpromise)
  • Assigned responsibility for each action
  • Interim risk mitigation measures while remediation is in progress

Authorities have broad discretion in enforcement outcomes. A credible remediation plan can be the difference between a warning and a fine, or between a moderate fine and a severe one.

Never alter, backdate, or fabricate compliance documentation after receiving a regulatory notice. Providing incorrect information to authorities carries its own penalty tier: supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request is subject to administrative fines of up to EUR 7 500 000 or, for an undertaking, up to 1% of total worldwide annual turnover for the preceding financial year, whichever is higher [src]. The cover-up is always worse than the original non-compliance.
Your best defence is preparation that happened before the notice arrived. The second-best defence is a prompt, transparent, and constructive response to the notice. Engage specialised counsel immediately, produce your compliance file, acknowledge gaps honestly, and present a credible remediation plan.
5.7 Remediation Paths ~15 min

When non-compliance is identified (whether through self-assessment or regulatory investigation), you need a clear remediation path. The AI Act provides several mechanisms, and the appropriate path depends on the severity and nature of the non-compliance.

Remediation Options by Severity

Severity | Example | Remediation Path
Documentation gap | Missing DPIA, incomplete risk assessment, no training records | Create the missing documentation. Backfill with current information and note the date of creation. This is the most common issue and the easiest to fix.
Transparency failure | No Art. 50 disclosure on a chatbot, missing AI content labels | Implement the required disclosures immediately. Document the date of implementation and any affected period where disclosures were missing.
Provider documentation gap | Never requested provider documentation, or requests were ignored | Send formal requests immediately. Document the gap, implement compensating monitoring measures, and conduct your own performance evaluation for the interim.
Human oversight failure | High-risk system operating without adequate human review | Implement oversight procedures immediately. This may require temporarily reducing system autonomy (e.g., requiring human approval for all outputs) until proper oversight is in place.
Fundamental non-compliance | High-risk system deployed without any conformity assessment, or system operating in a prohibited category | Suspend the system's operation. Conduct a full compliance assessment before resuming. For prohibited practices, cease permanently.

Voluntary Remediation vs. Ordered Remediation

If you identify non-compliance before the regulator does, voluntary remediation carries significant benefits:

  • Demonstrates proactive compliance culture (mitigating factor in any future enforcement)
  • Allows you to control the timeline and narrative
  • Avoids the reputational damage of a formal enforcement action
  • Can be documented as evidence of ongoing monitoring (itself a compliance requirement)
Most non-compliance under the AI Act is remediable — documentation gaps can be filled, transparency measures implemented, and oversight procedures established. The critical factor is speed and honesty. Fix what you can now, document what you cannot fix immediately, and show a credible plan for the rest.
5.8 Case Studies & GDPR Enforcement Analogies ~25 min

The AI Act is too new for a substantial enforcement case history. But GDPR, which follows the same regulatory model (EU regulation with extraterritorial reach, national enforcement authorities, tiered fines), provides a reliable predictive framework. GDPR enforcement patterns over 2018-2026 predict how AI Act enforcement will likely evolve.

GDPR Enforcement Timeline as AI Act Predictor

GDPR Phase | What Happened | AI Act Prediction
Year 1 (2018-2019) | First fines were symbolic. CNIL fined Google 50M EUR (Jan 2019). Focus on big tech and transparency failures. Most SMEs were not targeted. | First AI Act fines (expected 2026-2027) will likely target GPAI providers for Art. 53 non-compliance and large deployers with obvious prohibited practice violations. SME deployers are unlikely to be early targets.
Year 2-3 (2019-2021) | Fines scaled dramatically. Amazon: 746M EUR. WhatsApp: 225M EUR. Focus areas: insufficient legal basis, transparency failures, inadequate data processing records. | AI Act fines will scale as authorities build capacity and precedent. Focus areas likely: GPAI documentation failures, high-risk deployers without conformity assessments, transparency obligation violations.
Year 4+ (2022-present) | Enforcement broadened. Smaller companies fined. Cross-border coordination improved. Focus shifted to data transfers, automated decision-making, and data breaches. | Broad enforcement of deployer obligations. Focus on supply chain documentation, human oversight gaps, and incident reporting failures. Cross-border AI cases handled through AI Office coordination.

Key GDPR Cases and Their AI Act Parallels

Case 1: Google/CNIL (2019) — 50M EUR

CNIL fined Google for lack of transparency and inadequate consent for ad personalisation. The fine was based on: (a) information about data processing was scattered across multiple documents, making it difficult for users to understand; (b) consent mechanisms did not meet GDPR's freely-given, specific, informed standard.

AI Act parallel: A GPAI provider whose Art. 53 documentation is scattered across blog posts, model cards, and research papers (rather than consolidated in a single, accessible deployer document) could face similar arguments. The Act requires "sufficiently detailed" information, not a scavenger hunt.

Case 2: Clearview AI (multiple DPAs, 2022) — 20M+ EUR combined

Multiple DPAs (France, Italy, UK, Greece) fined Clearview AI for scraping facial images without consent. This case is directly relevant to AI Act prohibited practices — untargeted facial scraping is now banned under Art. 5.

AI Act parallel: Companies operating biometric AI systems that touch Art. 5 prohibitions will be early enforcement targets. The GDPR precedent shows multiple national authorities will act independently and cumulatively against the same company.

Case 3: H&M (Germany, 2020) — 35M EUR

H&M was fined for excessive monitoring of employees through detailed records of personal circumstances (health issues, family problems, religious beliefs). Managers recorded this information after "welcome back" conversations.

AI Act parallel: Workplace AI monitoring tools (especially those that profile employees based on performance, behaviour, or personal characteristics) will face scrutiny under both GDPR and the AI Act. If the monitoring tool uses AI, it is likely high-risk under Annex III Category 4 (Employment). The H&M case shows authorities will act aggressively to protect workers.

Predicted AI Act Enforcement Priorities

Based on GDPR patterns and the AI Act's structure, likely enforcement priorities in order:

  1. GPAI providers (Art. 53): The most visible targets with the clearest obligations and the August 2025 deadline. Expect the AI Office to issue information requests to major GPAI providers within months of the deadline.
  2. Prohibited practices (Art. 5): Any company operating a clearly prohibited AI system will be an early target. These cases are politically appealing and legally straightforward.
  3. High-risk deployers in employment (Annex III, Cat. 4): Hiring AI is politically sensitive, well-understood, and widely deployed. Expect early enforcement actions against companies using AI in recruitment without conformity assessments or human oversight.
  4. Transparency failures (Art. 50): Chatbots and content generators that fail to disclose AI involvement are easy to identify and easy to prove. These will generate a high volume of lower-value enforcement actions.
  5. Broad deployer enforcement: Eventually reaching all deployers who have not performed DPIAs, maintained incident logs, or ensured AI literacy.
GDPR enforcement followed a predictable pattern: big tech first, then scaled down and broadened. The AI Act will follow the same path. If you are a deployer — especially in hiring, credit, or healthcare — you are not the first target, but you are on the list. Use the time before enforcement reaches you to build your compliance file. The companies that prepared before GDPR enforcement arrived avoided the largest fines; the same will be true for the AI Act.
5.9 When Your Compliance Tool Uses AI ~15 min

AIActStack itself uses Claude's API to generate compliance documents. This makes AIActStack:

  • A deployer of Anthropic's Claude (it uses the API under its own authority)
  • The operator of a limited-risk system under Art. 50 (content generation, not a high-risk domain)
  • Obliged to disclose to users that compliance documents are AI-generated
  • Obliged to request and retain documentation from Anthropic about Claude

Your Own Compliance Checklist

  1. Art. 50 transparency notice on the doc generation feature ("This document was generated by AI")
  2. AI literacy compliance (Art. 4) — you, as the operator, understand the AI Act (this curriculum)
  3. Documentation request sent to Anthropic for Claude's technical specs
  4. Track doc generation costs and usage for future audit evidence (see the sketch after this list)
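Items 1 and 4 are simple to wire into the generation path itself. A minimal sketch follows; the function name, placeholder model call, and log path are hypothetical, not AIActStack's actual implementation.

```python
import json
from datetime import datetime, timezone

DISCLOSURE = "This document was generated by AI."  # Art. 50 notice (item 1)
USAGE_LOG = "ai_usage_log.jsonl"  # hypothetical path (item 4)

def generate_document(prompt: str) -> str:
    # Placeholder for the real model call (e.g. Anthropic's API).
    draft = f"[model output for: {prompt}]"
    # Append a usage record as future audit evidence.
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "feature": "doc-generation",
        "prompt_chars": len(prompt),
        "output_chars": len(draft),
    }
    with open(USAGE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    # Every generated document carries the transparency notice.
    return f"{draft}\n\n---\n{DISCLOSURE}"

print(generate_document("Draft a DPIA outline for a hiring chatbot"))
```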
Eating your own dog food: if your compliance tool isn't itself compliant, your credibility is zero. Address this before launch.
5.10 Module 5 Quiz — Mock Audit ~20 min

Mock Audit Scenario

A national AI authority contacts your company about your AI-powered hiring tool. They request evidence of compliance with Articles 26, 50, and 73. What documents do you produce? How do you demonstrate human oversight? What if your provider (OpenAI) hasn't given you the documentation you requested?

This curriculum is based on Regulation (EU) 2024/1689 (the EU AI Act). It is educational material, not legal advice. For formal compliance guidance, consult a qualified legal professional.

Built by AIActStack — EU AI Act compliance for companies using third-party AI.

Completing this curriculum contributes to your Article 4 AI Literacy obligation.

Frequently asked questions about AI literacy

What is AI literacy?

AI literacy is the set of skills, knowledge, and understanding that lets people make informed decisions about AI systems they build, deploy, or interact with. Under the EU AI Act, providers and deployers of AI systems must ensure their staff have a sufficient level of AI literacy to carry out their role responsibly [src]. AI literacy covers how AI systems work, their opportunities and risks, and the specific obligations the law places on people using them.

Who needs AI literacy training?

Every staff member of a provider or deployer of AI systems in the EU — engineers, product managers, operators, customer-service staff, and anyone else whose work touches the AI system. The obligation is not restricted to high-risk systems. AI literacy requirements have been enforceable since February 2, 2025 [src].

What makes AI literacy "sufficient"?

The Regulation uses a proportionality standard — sufficiency depends on the person's role, the AI system's risk level, and the context in which the person interacts with the system. A developer shipping a high-risk decision system needs deeper literacy than a call-center agent using an AI-assisted knowledge tool. Evidence of training completion, documented modules, and role-specific assessment are the typical artefacts auditors expect.

How long does AI literacy training take?

This curriculum is structured as 57 lessons across 5 modules and takes roughly 10 hours of self-paced study. Organisations typically schedule the literacy programme across 2-6 weeks with per-role checkpoints. The workload estimate is a starting point — more complex AI systems or regulated industries may need additional training on top.

Is this AI literacy training free?

Yes. The full curriculum is free and requires no account. You can complete it at your own pace and use it as part of the evidence bundle your organisation retains for AI literacy compliance. Completion records are stored locally in your browser only.