Obligations by Role
Providers, deployers, distributors, importers: what each actor must do, with article references, the conformity-assessment chain, transparency rules, DPIAs and FRIAs.
Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689
If you develop, train, or place an AI system on the EU market, you are a provider and carry the heaviest compliance burden. Articles 8 through 22 lay out everything you must build, document, and maintain. Here is each article mapped to a concrete action.
Article-by-Article Action Map
| Article | Requirement | What You Actually Do |
|---|---|---|
| Art. 8 | Compliance with requirements | Design your AI system to meet Arts. 9-15 from the start. Compliance is not a bolt-on — it is a design constraint. Document how each requirement is satisfied. |
| Art. 9 | Risk management system | Establish a living risk management process: identify risks, analyze severity and likelihood, test mitigations, monitor post-deployment. This is not a one-time document — it updates throughout the system's lifecycle. |
| Art. 10 | Data governance | Define quality criteria for training, validation, and testing datasets. Document data provenance, preprocessing steps, bias detection, and any gaps. If you use personal data, ensure a lawful basis under GDPR. |
| Art. 11 | Technical documentation | Produce the full Annex IV documentation package (9 sections — covered in Lesson 3.2). This must be ready before the system is placed on the market and kept up to date. |
| Art. 12 | Record-keeping (logging) | Build automatic logging into the system. Logs must capture events relevant to identifying risks, facilitating post-market monitoring, and enabling traceability. Log retention must match the system's intended purpose. |
| Art. 13 | Transparency & information to deployers | Providers must ensure high-risk AI systems are designed to enable deployers to interpret outputs and use them appropriately, including instructions for use containing the information listed in Art. 13(3)[src] |
| Art. 14 | Human oversight | Design the system so humans can effectively oversee it: understand outputs, detect anomalies, intervene, and stop the system. Build override mechanisms, not just dashboards. |
| Art. 15 | Accuracy, robustness & cybersecurity | High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle[src] |
| Art. 16 | Provider obligations (summary) | Providers of high-risk AI systems have the obligations listed in Art. 16, including ensuring their high-risk AI systems are compliant with the requirements, indicating name/address, having a QMS under Art. 17, keeping documentation (Art. 18) and logs (Art. 19), performing conformity assessment (Art. 43), drawing up an EU declaration of conformity (Art. 47), affixing CE marking (Art. 48), registering in the EU database (Art. 49), and taking corrective action when needed (Art. 20)[src] |
| Art. 17 | Quality management system | Implement a documented QMS covering: compliance strategy, design and development procedures, testing and validation, data management, risk management, post-market monitoring, incident reporting, communication with authorities, and record-keeping. |
| Art. 18 | Documentation retention | Keep technical documentation and QMS records for 10 years after the AI system is placed on the market. Store them so they are accessible to national authorities on request. |
| Art. 19 | Automatically generated logs | Retain the logs generated by the AI system (per Art. 12) for at least 6 months, unless longer retention is required by other EU or national law. |
| Art. 20 | Corrective actions | If the system is non-compliant, take immediate corrective action: fix it, withdraw it, or recall it. Notify the distributor, deployer, and relevant authorities. |
| Art. 21 | Cooperation with authorities | Provide any information or documentation a national authority requests. Demonstrate compliance on demand. This means your documentation must actually be accessible, not buried in a developer's laptop. |
| Art. 22 | Authorised representatives | If you are outside the EU, appoint an authorised representative established in the EU before placing your system on the market. Give them a written mandate specifying the obligations they fulfill on your behalf. |
Practical Guidance: Where to Start
- Start with Art. 9 (risk management) — it shapes every other requirement. Your risk assessment determines your data governance needs, your testing strategy, and your documentation scope.
- Build Art. 12 (logging) into your architecture early — retrofitting logging is expensive. Define your log schema before you build the system (a minimal schema sketch follows this list).
- Write Art. 13 (deployer instructions) as if your downstream customer knows nothing about AI — regulators will judge whether a deployer could reasonably comply based on what you gave them.
- Treat Art. 17 (QMS) as the backbone — a quality management system is not a document. It is the organizational structure that ensures everything else happens consistently.
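To make the Art. 12 point concrete, here is a minimal sketch of a decision-event log record, assuming a Python stack and an append-only JSON Lines file; the field names are illustrative choices, not prescribed by the Act:

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionLogRecord:
    """One traceability event for a high-risk AI system (illustrative schema)."""
    system_name: str
    model_version: str
    event_type: str                  # e.g. "inference", "override", "error"
    input_reference: str             # pointer to the input record, not the data itself
    output_summary: dict             # label, score, confidence, ...
    operator_id: Optional[str] = None
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_log(record: DecisionLogRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append one record as a JSON line; retention (Art. 19) is handled by a separate policy."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_log(DecisionLogRecord(
    system_name="resume-screener",
    model_version="2.3.1",
    event_type="inference",
    input_reference="application:48213",
    output_summary={"recommendation": "interview", "score": 0.81},
))
```

A schema agreed before development starts also makes the Art. 19 retention and Art. 21 access obligations far easier to satisfy later.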
Concrete Example
Imagine you are building a resume-screening AI. Before placing it on the market, you must: conduct a risk assessment identifying bias risks in hiring (Art. 9), document your training data sources and how you checked for demographic bias (Art. 10), produce the full Annex IV technical documentation package (Art. 11), build logging that records every screening decision and the factors that drove it (Art. 12), write a deployer instruction manual explaining the system's accuracy rates by demographic group and when human review is required (Art. 13), design an interface that lets HR managers override or reject any AI recommendation (Art. 14), and test the system against adversarial resumes designed to game the algorithm (Art. 15).
Common Mistakes
- Treating compliance as a final step. Arts. 8-15 are design requirements. If you build first and document later, you will discover gaps that require re-engineering.
- Ignoring Art. 13 (deployer instructions). Your deployers cannot comply with Art. 26 without the information you owe them. If your documentation is vague, they are non-compliant — and they will point the finger at you.
- Confusing Art. 17 (QMS) with Art. 11 (technical docs). The QMS governs your processes. Technical documentation describes your system. You need both, and they serve different purposes.
- Forgetting Art. 22 (authorised representative). Non-EU providers must have an EU-based representative before market placement. This is not optional and cannot be done retroactively.
Article 11 requires providers to draw up technical documentation before placing a high-risk AI system on the market. Annex IV specifies exactly what that documentation must contain: 9 mandatory sections. Think of this as the "product dossier" — the single source of truth that proves your system is compliant.
The 9 Required Sections
Section 1: General Description of the AI System
- The system's intended purpose, the name of the provider, and the system version
- How the AI system interacts with hardware, software, or other systems it is embedded in
- The versions of relevant software or firmware and any requirements related to version updates
- A description of the forms in which the system is placed on the market (installed on device, API, SaaS, etc.)
- The hardware the system is intended to run on
- Practical tip: Write this so that a regulator with no technical background can understand what the system does and where it fits in the value chain.
Section 2: Detailed Description of Elements and Development Process
- Methods and steps used to develop the system, including use of pre-trained systems or third-party tools
- Design specifications: general logic, algorithms, key design choices, classification methodologies, what the system optimizes for, and the rationale behind those decisions
- System architecture: how software components interact and feed into each other
- Computational resources used for development, training, testing, and validation
- Description of training data: data collection methods, data provenance, scope, characteristics, availability, quantity, and any demographic/geographic/behavioral properties
- Assessment of training data for biases that could lead to discrimination
- Description of validation and testing data, and selection criteria
- Practical tip: This is the largest section. Use architecture diagrams, data flow charts, and training pipeline documentation. Do not write prose when a diagram communicates better.
Section 3: Monitoring, Functioning, and Control
- Description of the system's capabilities and limitations in performance, including degrees of accuracy for specific persons or groups the system is intended for
- Foreseeable unintended outcomes and sources of risks to health, safety, and fundamental rights
- Human oversight measures built into the system (Art. 14) — who can intervene, how, and with what tools
- Specifications for input data: what data the system expects, in what format, at what quality level
- Practical tip: Be honest about limitations. Regulators will not punish you for having limitations — they will punish you for hiding them.
Section 4: Risk Management System
- A description of the risk management system applied per Art. 9 (covered in detail in Lesson 3.3)
- Residual risks after mitigation: what risks remain and why they are acceptable
- Practical tip: Cross-reference your Art. 9 risk management documentation here. Do not duplicate — reference and summarize.
Section 5: Changes Throughout the Lifecycle
- A description of any change made to the system after initial market placement, including software updates, model retraining, performance drift corrections, and configuration changes
- Pre-determined changes included in the initial conformity assessment and technical documentation
- Practical tip: Create a change log template from day one. Every model version, every retraining run, every prompt adjustment to a high-risk system needs to be logged here. Retroactively reconstructing this is nearly impossible.
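As an illustration of such a template, here is a sketch of a structured change-log entry in Python; the fields are an assumption about what a reviewer would want to see, not a checklist taken from Annex IV:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeLogEntry:
    """One post-market change to a high-risk AI system (illustrative fields)."""
    date: str                   # ISO date the change went live
    system_version: str         # version identifier after the change
    change_type: str            # "model_retraining", "software_update", "config_change", ...
    description: str
    reason: str
    affected_requirements: list = field(default_factory=list)   # e.g. ["Art. 9", "Art. 15"]
    conformity_impact: str = "none"   # "none", "documentation update", "re-assessment required"
    approved_by: str = ""

entry = ChangeLogEntry(
    date="2026-03-14",
    system_version="2.4.0",
    change_type="model_retraining",
    description="Quarterly retraining on Q1 2026 applicant data",
    reason="Input drift detected by post-market monitoring",
    affected_requirements=["Art. 9", "Art. 15"],
    conformity_impact="documentation update",
    approved_by="compliance.officer@example.com",
)
```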
Section 6: Harmonised Standards and Common Specifications Applied
- List of harmonised standards (CEN/CENELEC) or common specifications that were applied in full or in part
- Where standards are partially applied, specify which parts
- If no harmonised standards were used, describe the alternative means used to meet Arts. 9-15 requirements
- Practical tip: As of early 2026, harmonised standards for the AI Act are still in development by CEN/CENELEC. Document which draft standards you reference and be prepared to update when final standards are published.
Section 7: EU Declaration of Conformity
- A copy of the EU declaration of conformity issued under Art. 47
- This is a formal document stating that the system meets all applicable requirements
- Practical tip: The declaration references your technical documentation. If the documentation is incomplete, the declaration is invalid. Do not sign the declaration until all other sections are complete.
Section 8: Performance and Accuracy Metrics
- Description of the system's performance: accuracy, robustness, and cybersecurity levels (per Art. 15)
- Metrics used, testing methodology, known limitations, and performance across relevant subgroups (demographic, geographic, etc.)
- Declaration of the level of accuracy, along with accuracy metrics per Art. 15(3)
- Practical tip: Do not only report aggregate accuracy. Break it down by the groups the system affects. A hiring tool with 95% overall accuracy but 70% accuracy for a protected group will fail this requirement.
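A short sketch of that subgroup breakdown, assuming evaluation results sit in a pandas DataFrame with a hypothetical `group` column:

```python
import pandas as pd

# Illustrative evaluation results: true label, model prediction, protected group
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 1],
    "group":  ["A", "A", "B", "B", "A", "B", "B", "A"],
})

overall = (df["y_true"] == df["y_pred"]).mean()
by_group = (
    df.assign(correct=df["y_true"] == df["y_pred"])
      .groupby("group")["correct"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "accuracy", "count": "n"})
)

print(f"Overall accuracy: {overall:.2%}")
print(by_group)   # report this table, not just the single overall number
print("Max gap:", by_group["accuracy"].max() - by_group["accuracy"].min())
```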
Section 9: Resource Requirements and Energy/Computational Analysis
- A general description of computational, hardware, and resource requirements (training, inference, deployment)
- Energy consumption and other resource use data where relevant
- Where applicable, information on known or estimated environmental impact
- Practical tip: This section is increasingly scrutinized given the environmental debate around AI. Track GPU hours, energy consumption, and carbon footprint during training. Tools like CodeCarbon or ML CO2 Impact can help.
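For example, a training run can be wrapped with CodeCarbon's tracker to record energy use and estimated emissions; this is a minimal sketch assuming the codecarbon package is installed, and `train_model()` stands in for your own training entry point:

```python
from codecarbon import EmissionsTracker

def train_model():
    ...  # your training loop

tracker = EmissionsTracker(project_name="resume-screener-v2")  # writes emissions.csv by default
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()   # estimated kg CO2-eq for the tracked block
    print(f"Estimated training emissions: {emissions_kg:.3f} kg CO2-eq")
```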
Concrete Example
A provider of an AI-powered credit scoring system creates an Annex IV package. Section 1 describes the system as a credit risk classifier delivered via API to European banks. Section 2 details the XGBoost model architecture, training data sourced from 3 EU credit bureaus covering 12 million records, and bias testing across age, gender, and nationality. Section 3 documents that the system achieves 89% accuracy overall but notes a 7% performance gap for applicants under 25 with thin credit histories. Section 4 references the Art. 9 risk management plan identifying age-based discrimination as a high-severity risk with mitigations (age-blind features, post-hoc fairness adjustments). Section 8 reports F1 scores broken down by 6 demographic subgroups with confidence intervals.
Common Mistakes
- Writing documentation after development. Annex IV requires details about design choices and their rationale. If you do not document these decisions as you make them, you cannot reconstruct the reasoning later.
- Treating it as a one-time deliverable. Art. 11 requires documentation to be "kept up to date." Every significant change triggers an update obligation (Section 5).
- Reporting only aggregate performance metrics. Section 8 explicitly requires subgroup analysis. Aggregate accuracy that masks disparate impact is a compliance failure.
- Omitting third-party components. If your system uses a pre-trained foundation model, Section 2 requires you to describe it: what model, from which provider, what version, and how it fits into your system.
Providers of high-risk AI systems must establish, implement, document, and maintain a risk-management system as a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system[src]
What the Article Requires
Art. 9 mandates a risk management system that consists of a continuous iterative process planned and run throughout the AI system's lifecycle. It requires regular, systematic review and updating. The system must include the following phases:
- Risk Identification and Analysis (Art. 9(2)(a)): Identify and analyze the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose.
- Risk Estimation and Evaluation (Art. 9(2)(b)): Estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
- Evaluation of Other Risks (Art. 9(2)(c)): Evaluate other risks that may arise, based on analysis of the data gathered from the post-market monitoring system (Art. 72).
- Adoption of Suitable Risk Management Measures (Art. 9(2)(d)): Adopt appropriate and targeted risk management measures to address the identified risks.
Step-by-Step Practical Guidance
Step 1: Risk Identification
List every risk your AI system poses across three categories: (a) risks to health and safety, (b) risks to fundamental rights (discrimination, privacy, dignity, effective remedy), and (c) risks from reasonably foreseeable misuse. For each risk, document: the risk description, who is affected, the potential severity, and the conditions under which it could occur.
Step 2: Risk Analysis
For each identified risk, assess: likelihood (how probable is it?), severity (if it occurs, how bad is the impact?), and reversibility (can the damage be undone?). Use a structured framework — a simple likelihood x severity matrix works. Art. 9(5) specifically requires that residual risks are communicated to deployers.
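A minimal likelihood x severity sketch in Python; the scales, weights, and thresholds here are illustrative choices, not values mandated by Art. 9:

```python
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

def risk_rating(likelihood: str, severity: str) -> tuple:
    """Return a numeric score and a qualitative band for one identified risk."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 15:
        band = "unacceptable: redesign or do not deploy"
    elif score >= 8:
        band = "mitigate before market placement"
    else:
        band = "acceptable: document as residual risk (Art. 9(5))"
    return score, band

print(risk_rating("possible", "major"))   # (12, 'mitigate before market placement')
```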
Step 3: Risk Evaluation
Determine which risks are acceptable, which require mitigation, and which are unacceptable. Art. 9(4) requires that risk management measures give due consideration to the effects and possible interactions resulting from the combined application of the requirements in Chapter III, Section 2. In other words — your risk mitigations must not create new compliance problems.
Step 4: Testing
Art. 9(6)-(8) requires testing to ensure the system works as intended and meets the requirements. Testing must happen before market placement. You must define metrics and probabilistic thresholds appropriate to the system's intended purpose. For systems that continue to learn after deployment, testing must address the risk of biased outputs being fed back as input (feedback loops).
Step 5: Monitoring and Updating
After deployment, feed post-market monitoring data (Art. 72) back into your risk assessment. If a new risk emerges or an existing risk changes, update the risk management measures. Document every update.
Concrete Example
A provider builds an AI system for automated job-applicant screening. During risk identification, they document: (1) risk of gender discrimination due to historical bias in training data (severity high, likelihood medium); (2) risk that recruiters over-rely on AI scores and skip manual review (severity high, likelihood high); (3) risk of misuse to screen candidates on protected characteristics like ethnicity (severity critical, likelihood low). For risk (1), the mitigation is demographic parity testing before deployment and ongoing bias monitoring. For risk (2), the mitigation is designing the UI to require the recruiter to view the applicant's full profile before seeing the AI score. For risk (3), the mitigation is technical controls that prevent protected-attribute inputs. Residual risks are documented: even after mitigation, screening accuracy for candidates with non-traditional career paths remains 12% lower, and this limitation is disclosed to deployers per Art. 9(5).
Common Mistakes
- Treating it as a document rather than a process. The word "system" in Art. 9 is deliberate. This is an ongoing process with assigned owners, scheduled reviews, and update triggers — not a PDF you write once.
- Only considering technical risks. Art. 9 explicitly covers risks to fundamental rights: discrimination, privacy, loss of human autonomy. A risk assessment that only addresses model accuracy and uptime misses half the requirement.
- Ignoring foreseeable misuse. You must consider not only how the system is intended to be used, but how it could reasonably be misused. If your hiring AI could be used to screen candidates by age, that is a foreseeable misuse you must address.
- Not connecting to post-market monitoring. Art. 9(2)(c) explicitly requires risks to be evaluated based on post-market monitoring data. If your risk management system and your monitoring system are not connected, you fail this requirement.
- Not disclosing residual risks. Art. 9(5) requires that residual risks be communicated to deployers. Hiding known limitations is not just bad practice — it is a legal violation.
Training, validation, and testing data sets used for high-risk AI systems must meet specific quality criteria regarding relevance, representativeness, errors, completeness, data-governance practices, and bias mitigation measures[src]
Requirements
- Relevance & representativeness: Data must represent the population the AI will serve
- Free of errors (to the best extent possible): Data sets must be examined for errors and gaps and corrected in view of the intended purpose
- Complete: Must account for geographic, contextual, and behavioral settings
- Bias-tested: Art. 10(2)(f) requires examining for biases that could cause discrimination
Special Category Data
Art. 10(5) allows processing sensitive data (race, health, political opinions) for bias detection — but ONLY to the extent strictly necessary, with appropriate safeguards.
Providers of high-risk AI systems must carry out a conformity assessment before placing the system on the market or putting it into service (Art. 43), using either the internal-control procedure in Annex VI or the notified-body procedure in Annex VII depending on the system type and whether harmonised standards were applied[src]
Two Paths
- Internal control (Annex VI): Provider self-assesses compliance. Applies to most Annex III systems.
- Third-party assessment (Annex VII): Independent notified body evaluates. Required for Annex III point 1 biometric systems where harmonised standards or common specifications have not been applied in full.
What the Assessment Covers
- Quality management system review
- Technical documentation completeness check
- Risk management system adequacy
- Data governance compliance
- Testing results review
- CE marking authorization
Timeline
A full conformity assessment typically takes 6-12 months. If you have not started, you are behind for the August 2026 deadline.
Providers of high-risk AI systems listed in Annex III (except critical-infrastructure systems under Annex III point 2, which are registered at national level) must register themselves and their systems in the EU database established under Art. 71 before placing the system on the market or putting it into service (Art. 49)[src]
Who Registers
- Providers (or their authorised representatives): Register themselves and the system before placing it on the market or putting it into service
- Deployers that are public authorities or Union bodies (or act on their behalf): Must also register their use of the system before putting it into use (Art. 49(3))
What to Submit (Annex VIII)
- Provider name, address, contact details
- AI system name and version
- Intended purpose description
- Risk classification and Annex III category
- Conformity assessment status
- Member states where deployed
- URL to instructions for use
The database is publicly accessible — anyone can look up registered AI systems. This creates transparency and enables market surveillance.
Deployers of high-risk AI systems must: take appropriate technical and organisational measures to use the system per instructions (Art. 26(1)); assign human oversight to natural persons with necessary competence, training, and authority (Art. 26(2)); ensure input data is relevant and representative where they have control (Art. 26(4)); monitor operation, suspend use on serious incident suspicion, and inform the provider, distributor, and authorities (Art. 26(5)); keep automatically generated logs for at least 6 months unless otherwise required (Art. 26(6)); inform workers' representatives and affected workers before deploying in the workplace (Art. 26(7)); register system use in the EU database (Art. 26(8)); and carry out a data protection impact assessment (DPIA) where required (Art. 26(9))[src]
Article 26 is the single most important article for deployers of high-risk AI systems — and most SaaS companies using third-party AI for regulated purposes are deployers. It contains 12 paragraphs, each creating a specific obligation. Here are its key obligations mapped to concrete actions, alongside the closely related duties in Arts. 25, 27, and 86 that deployers most often confuse with Art. 26 itself.
Provision-by-Provision Action Map
| Provision | Obligation | What You Actually Do |
|---|---|---|
| 26(1) | Use in accordance with instructions | Read and follow the provider's instructions for use. If the provider says the system is for customer support, do not use it for credit scoring. Using a system outside its intended purpose can reclassify you as a provider with full provider obligations. |
| 26(2) | Human oversight | Assign competent individuals with the authority, training, and resources to effectively oversee the AI system. These people must understand the system's capabilities, be able to interpret outputs, and be empowered to override or stop the system. Covered in detail in Lesson 3.8. |
| 26(4) | Input data relevance | To the extent you control the input data, ensure that the data you feed into the AI system is relevant and sufficiently representative for the system's intended purpose. If the provider designed the system for English-language inputs and you feed it German text, you are violating this paragraph. |
| 26(5) | Monitoring and reporting to providers | Monitor the system's operation. If you have reason to believe the system presents a risk per Art. 79, inform the provider or distributor and the relevant market surveillance authority, and suspend use. If you identify a serious incident, report it immediately per Art. 73. |
| 26(6) | Log retention | Keep the automatically generated logs for a period appropriate to the system's intended purpose — at least 6 months unless EU or national law requires longer. Store logs securely and ensure they are accessible to authorities on request. |
| 26(7) | Workplace information | If you deploy a high-risk AI system in the workplace, inform workers' representatives and affected workers that they will be subject to the system. This is not optional — it is an active disclosure requirement before deployment. |
| 26(8) / Art. 27 | Public-sector deployers: registration & fundamental rights impact assessment | If you are a public authority or Union body, register your use of the system in the EU database before putting it into use (Art. 49). If you are a body governed by public law or a private entity providing public services, also carry out a fundamental rights impact assessment under Art. 27 before first use. The FRIA is separate from and in addition to the DPIA under 26(9). |
| 26(9) | Data Protection Impact Assessment | Before putting a high-risk AI system into use, perform a DPIA under GDPR Art. 35. You may use the DPIA you already have and extend it with AI-specific considerations. Covered in detail in Lesson 3.9. |
| 26(11) | Inform affected persons | When making decisions (or assisting in decisions) about natural persons using an Annex III high-risk AI system, inform those persons that they are subject to the system. For hiring tools: candidates must be told the AI is involved. For credit scoring: applicants must be informed. |
| 26(12) | Cooperation with authorities | Cooperate with national competent authorities in any action they take in relation to the high-risk AI system. Provide access to automatically generated logs (per 26(6)) and any other information needed to assess compliance. You cannot refuse a regulator's request for logs. |
| Art. 86 | Explanation of AI-assisted decisions | If you use the output of an Annex III high-risk AI system as a basis for decisions that produce legal or similarly significant effects on natural persons, those persons have a right to a clear, meaningful explanation of the AI system's role in the decision. You must be able to provide it. |
| Art. 25 | Do not become a provider by accident | Do not modify the system in a way that turns you into a provider. If you put your name or trademark on the system, retrain it, make a substantial modification, or change its intended purpose so that it becomes high-risk, you assume provider obligations under Art. 25. Use the system as provided. |
Step-by-Step: Deployer Compliance Checklist
- Obtain and read the provider's instructions for use. If they have not provided them, request them formally under Art. 13. You cannot comply with 26(1) without these instructions.
- Designate human oversight personnel. Assign named individuals. Document their training, authority level, and escalation procedures.
- Validate your input data. Confirm the data you feed the system matches the provider's specified requirements for format, quality, and scope (a validation sketch follows this checklist).
- Set up monitoring. Implement a process to watch the system's outputs for anomalies, drift, or unexpected behavior. Define thresholds that trigger investigation.
- Configure log retention. Ensure automatically generated logs are stored for at least 6 months with adequate access controls.
- Draft disclosure notices. Prepare the notifications for workers and workers' representatives (26(7)) and for affected individuals (26(11)).
- Complete the DPIA. Extend your existing GDPR DPIA or create a new one addressing AI-specific risks.
- Document everything. Create a deployer compliance file that evidences each obligation is met. A regulator will ask for this.
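For the input-data step above, here is a sketch of a batch-level check against a provider-specified schema; the column names, dtypes, and 5% missing-value threshold are hypothetical stand-ins for whatever the provider's instructions for use actually specify:

```python
import pandas as pd

# Hypothetical schema taken from the provider's instructions for use (Art. 13)
REQUIRED_COLUMNS = {"applicant_id": "int64", "income": "float64", "employment_years": "float64"}
ALLOWED_LANGUAGES = {"en"}

def validate_inputs(batch: pd.DataFrame, language: str) -> list:
    """Return a list of problems; an empty list means the batch matches the provider's spec."""
    problems = []
    if language not in ALLOWED_LANGUAGES:
        problems.append(f"language '{language}' is outside the provider's intended scope")
    for col, dtype in REQUIRED_COLUMNS.items():
        if col not in batch.columns:
            problems.append(f"missing column: {col}")
        elif str(batch[col].dtype) != dtype:
            problems.append(f"column {col} has dtype {batch[col].dtype}, expected {dtype}")
    missing_rate = batch.reindex(columns=list(REQUIRED_COLUMNS)).isna().mean().max()
    if missing_rate > 0.05:   # illustrative completeness threshold
        problems.append(f"missing-value rate {missing_rate:.1%} exceeds 5% threshold")
    return problems

batch = pd.DataFrame({"applicant_id": [1, 2], "income": [52000.0, None], "employment_years": [3.0, 7.5]})
print(validate_inputs(batch, language="en"))   # flags the 50% missing rate in 'income'
```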
Concrete Example
A fintech startup uses a third-party AI model to score loan applications for EU customers. Under Art. 26, they must: follow the provider's instructions and only use the model for credit scoring, not fraud detection (26(1)); assign a trained credit risk officer as the human overseer with authority to override any AI recommendation (26(2)); ensure applicant data fed into the model matches the provider's specified data schema (26(4)); implement dashboards monitoring approval rates by demographic group and flag statistical deviations (26(5)); store all scoring logs for 6 months in an encrypted database (26(6)); notify loan applicants before submission that AI will be used in evaluating their application (26(11)); complete a DPIA addressing algorithmic discrimination risks (26(9)); and ensure any rejected applicant can receive an explanation of how the AI contributed to the denial (Art. 86).
Common Mistakes
- Assuming the provider handles everything. Art. 26 places specific, non-delegable obligations on deployers. The provider cannot perform human oversight for you, retain your logs, or notify your users.
- Deploying without reading the instructions. 26(1) requires use "in accordance with instructions." If the provider's documentation says "not for use in employment decisions" and you use it for hiring, you are in violation — even if the AI works perfectly.
- Treating log retention as optional. 26(6) and 26(12) together mean authorities can demand your logs. If you did not retain them, there is no defense.
- Forgetting worker notification (26(7)). This is the most commonly overlooked paragraph. If you use AI in internal HR processes, your own employees must be informed before the system is deployed.
Human oversight is one of the defining requirements of the EU AI Act. Article 14 tells providers what oversight capabilities to build into the system. Art. 26(2) tells deployers to assign the actual humans and give them the authority to act. Together, they create a chain: the provider builds the controls, the deployer uses them.
What Art. 14 Requires (Provider Side)
The provider must design the AI system so that it can be effectively overseen by natural persons during use. Specifically, the system must include measures that allow the human overseer to:
- Fully understand the system's capacities and limitations and be able to properly monitor its operation (Art. 14(4)(a))
- Remain aware of automation bias — the tendency to over-rely on AI outputs — and guard against it (Art. 14(4)(b))
- Correctly interpret the system's output, taking into account the system's characteristics and the available interpretation tools (Art. 14(4)(c))
- Decide not to use the system in any particular situation, override the output, or reverse a decision (Art. 14(4)(d))
- Interrupt or stop the system using a "stop" button or similar procedure (Art. 14(4)(e))
What "Competent Individuals" Means (Art. 26(2))
Art. 26(2) does not use the word "competent" casually. The deployer must ensure that the individuals assigned to human oversight have:
- Relevant competence: They understand the AI system they are overseeing — what it does, how it works at a functional level, what its known limitations are.
- Training: They have received training appropriate to the task. For a hiring AI, this means understanding both the AI system and employment discrimination law. For a medical AI, this means clinical expertise combined with AI literacy.
- Authority: They have the organizational authority to override, suspend, or stop the AI system. A junior analyst who can see the dashboard but cannot override a decision does not satisfy this requirement.
- Resources: They have the time, tools, and support to actually perform oversight. Assigning oversight to someone who is already overloaded with other duties is a paper compliance exercise, not real oversight.
Step-by-Step: Setting Up an Oversight Process
- Identify oversight roles. For each high-risk AI system, designate specific individuals (by name or role) as human overseers. Document who they are and what system they oversee.
- Define competency requirements. Write a competency profile for the oversight role: what knowledge is needed (AI system specifics, domain expertise, legal requirements), what training is required, and how competency is verified.
- Deliver and document training. Train overseers on: the system's intended purpose and limitations, how to interpret outputs, what automation bias looks like, and when and how to intervene. Record training completion and schedule refreshers.
- Build intervention procedures (SOPs). Write standard operating procedures for: (a) routine monitoring — what to check, how often, what constitutes "normal"; (b) escalation — when to escalate an AI output for review; (c) override — how to override an AI decision and what documentation is required; (d) shutdown — when and how to stop the system entirely.
- Implement technical controls. Ensure the AI system exposes the interfaces needed: a dashboard showing system performance and decisions, an override mechanism that logs who overrode what and why, and a stop/suspend capability. If the provider's system does not offer these, request them under Art. 13. (An override-record sketch follows this list.)
- Audit and improve. Review the oversight process periodically. Track metrics: how often are AI decisions reviewed? How often are they overridden? What is the false positive/negative rate of overridden decisions? Use this data to improve both the AI system and the oversight process.
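A sketch of the override record mentioned in the technical-controls step, assuming overrides are captured as structured entries; the field names and values are illustrative:

```python
from datetime import datetime, timezone

def log_override(case_id: str, ai_recommendation: str, human_decision: str,
                 rationale: str, overseer: str) -> dict:
    """Build one override record; refuse silent overrides without a rationale."""
    if not rationale.strip():
        raise ValueError("An override must be justified: rationale is required")
    return {
        "case_id": case_id,
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,
        "rationale": rationale,
        "overseer": overseer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

entry = log_override(
    case_id="application:48213",
    ai_recommendation="reject",
    human_decision="advance to interview",
    rationale="Non-traditional career path; AI score unrepresentative of relevant experience",
    overseer="hr.partner@example.com",
)
```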
Concrete Example
A recruitment SaaS company deploys an AI-powered candidate screening tool. They designate senior HR business partners as human overseers. Each overseer completes a mandatory 4-hour training covering: how the screening model ranks candidates, known demographic performance gaps documented by the provider, what automation bias looks like in hiring (e.g., anchoring to the AI score rather than reading the full application), and the override procedure. The SOP requires every AI-recommended "reject" to be reviewed by a human before the candidate is notified. The system provides a dashboard showing screening outcomes by gender and ethnicity. If demographic disparity exceeds a defined threshold, the overseer must suspend the AI system and escalate to the compliance officer. Every override is logged with the overseer's name, the original AI recommendation, the human decision, and the rationale.
Common Mistakes
- Nominal oversight. Assigning someone as "human overseer" on paper without giving them training, tools, or authority. This is the number one compliance failure regulators will look for. The test is: can this person actually intervene effectively?
- Confusing monitoring with oversight. Monitoring means watching dashboards. Oversight means having the power to act. Art. 14(4)(d) specifically requires the ability to override or reverse decisions — a read-only dashboard is not oversight.
- Not addressing automation bias. Art. 14(4)(b) specifically calls out automation bias. If your overseers simply rubber-stamp AI outputs 99% of the time, your oversight process is not effective. Design the process to force genuine engagement — for example, requiring the human to form an independent judgment before seeing the AI output.
- No documentation of override decisions. If an overseer overrides the AI, document why. This creates the audit trail regulators need and helps improve the system over time.
- Single point of failure. Having one human overseer with no backup. What happens when they are on leave? Oversight must be continuously available whenever the system is in use.
Where GDPR Art. 35 requires a Data Protection Impact Assessment (DPIA) for your processing activity, Art. 26(9) requires deployers of high-risk AI systems to carry it out before putting the system into use, drawing on the information the provider supplies under Art. 13. The good news: if you already have a GDPR DPIA for your processing activity, you do not start from scratch. The AI Act requires you to extend your existing DPIA with AI-specific considerations.
What the Article Requires
Before deploying a high-risk AI system that processes personal data, you must assess the impact on data subjects' rights and freedoms. Art. 26(9) explicitly states that deployers shall use the information provided by the provider under Art. 13 to comply with this obligation. This means the DPIA must incorporate the provider's documentation about the system's capabilities, limitations, and risks.
Section-by-Section: Extending a GDPR DPIA for AI
1. Description of Processing (GDPR Art. 35(7)(a))
Your existing DPIA describes the processing activity. Extend it with: the specific AI system used (name, version, provider), the system's intended purpose as documented by the provider, how AI outputs are used in your decision-making process, and what data flows into and out of the AI system. Be specific about whether the AI makes autonomous decisions or provides recommendations to humans.
2. Necessity and Proportionality (GDPR Art. 35(7)(b))
Address why AI is necessary for this processing. Could the same outcome be achieved without AI, or with a less intrusive AI approach? Document your rationale: does the AI provide a meaningful improvement in accuracy, speed, or consistency that justifies its deployment? This is where you demonstrate that deploying a high-risk AI system is proportionate to the goal.
3. Risks to Rights and Freedoms (GDPR Art. 35(7)(c))
This is where the AI Act extension is most significant. Beyond standard GDPR risks (data breach, unauthorized access), you must now assess:
- Algorithmic discrimination: Does the AI system perform differently for different demographic groups? Use the provider's accuracy metrics by subgroup (which they must supply under Art. 13).
- Automation bias: Risk that human overseers defer to the AI without critical evaluation, leading to unjust outcomes.
- Opacity: Can you explain to data subjects how the AI contributed to a decision about them? If the system is a black box, this is a risk.
- Feedback loops: If AI outputs influence future training data, there is a risk of amplifying existing biases.
- Profiling and automated decision-making: Under GDPR Art. 22, individuals have the right not to be subject to purely automated decisions with legal effects. How does your AI deployment interact with this right?
4. Mitigation Measures (GDPR Art. 35(7)(d))
For each risk identified in section 3, document specific mitigations:
- Human oversight measures (cross-reference your Art. 14/26(2) implementation from Lesson 3.8)
- Bias monitoring and thresholds that trigger review
- Transparency notices to data subjects (cross-reference Art. 50 implementation)
- Data minimization: only feed the AI the personal data it needs
- Right-to-explanation procedures: how data subjects can obtain a meaningful explanation of AI-assisted decisions
- Appeal/contestation procedures: how data subjects can challenge an AI-influenced decision
5. AI-Specific Additions (New for AI Act)
Add a dedicated section that does not exist in a standard GDPR DPIA:
- Provider documentation review: confirm you have received and reviewed the Art. 13 instructions for use, and summarize relevant risk information from the provider
- AI system monitoring plan: how you will monitor the system post-deployment and what triggers a DPIA update
- Incident response: cross-reference your Art. 73 incident reporting process
- Log retention: confirm log storage per Art. 26(6)
Concrete Example
An insurance company deploys an AI system that assesses health insurance claims. Their existing GDPR DPIA covers the processing of health data under Art. 9 GDPR. They extend it with: (1) a description of the AI claim-assessment system, its provider, and its documented accuracy rate of 91% overall but 84% for claims involving rare conditions; (2) a necessity assessment explaining that AI processing reduces claim resolution from 14 days to 2 days, directly benefiting claimants; (3) AI-specific risks including the 7% accuracy gap for rare conditions (risk: legitimate claims wrongly denied), automation bias among claims adjusters, and opacity of the model's decision factors; (4) mitigations including mandatory human review of all AI-recommended denials, quarterly bias audits against condition types, and a claimant right to request fully human review; (5) an AI-specific section confirming provider documentation has been obtained, a monitoring dashboard tracks denial rates by condition category, and any 5% increase in denials triggers a DPIA review.
Common Mistakes
- Creating a separate "AI DPIA" instead of extending the existing one. Art. 26(9) points to GDPR Art. 35 — this is the same DPIA, extended. Creating a separate document fragments your compliance and risks inconsistency.
- Not using the provider's documentation. Art. 26(9) specifically says to use information provided under Art. 13. If your DPIA does not reference the provider's risk information, accuracy metrics, and known limitations, it is incomplete.
- Ignoring the right to explanation. AI-assisted decisions about individuals trigger GDPR Art. 22 (automated decision-making), AI Act Art. 26(11) (informing affected persons), and AI Act Art. 86 (right to explanation). Your DPIA must address how you satisfy all three.
- Static DPIA. A DPIA written once and never updated. AI systems change — models are updated, data distributions shift, new risks emerge. Build review triggers into the DPIA: model version changes, significant accuracy drift, or new categories of affected persons.
Providers and deployers of certain AI systems must comply with transparency obligations in Art. 50, including: informing natural persons that they are interacting with an AI system (Art. 50(1)); marking synthetic audio/image/video/text outputs in a machine-readable format (Art. 50(2)); informing persons exposed to emotion-recognition or biometric categorisation systems (Art. 50(3)); disclosing AI-generated deep fakes and AI-generated text published on matters of public interest (Art. 50(4)). Information must be given clearly at first interaction (Art. 50(5))[src]
Article 50 is unique in the AI Act because it applies regardless of risk classification. Even if your AI system is not high-risk, if it interacts with people or generates content, you have transparency obligations. This article affects the broadest range of companies — essentially anyone deploying customer-facing AI.
The Four Sub-Obligations
1. AI Interaction Disclosure (Art. 50(1))
What it requires: Providers must ensure that AI systems intended to interact directly with natural persons are designed and developed so that the persons concerned are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use. The notification must be given at the latest at the time of first interaction or exposure.
What you actually do:
- If you have a chatbot, virtual assistant, or any AI-driven conversation interface: display a clear notice before or at the start of the interaction. Example: "You are chatting with an AI assistant. A human agent is available if you prefer."
- The notice must be "clear and distinguishable" — not buried in a terms-of-service page. It must be visible at the point of interaction.
- Exception: if it is "obvious from the circumstances" that the user is interacting with AI. A clearly labeled "AI Search" feature with a robot icon probably qualifies. A chatbot that mimics human conversation style does not.
2. AI-Generated Content Labeling (Art. 50(2))
What it requires: Providers of AI systems that generate synthetic audio, image, video, or text content must ensure that the outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. The technical solution must be effective, interoperable, robust, and reliable.
Proposal to watch (commission proposal, not yet adopted): COM(2025) 836 proposes a grace period for Art. 50(2) watermarking obligations (marking AI-generated synthetic audio/image/video/text in machine-readable format) — the Commission proposed until 2 February 2027 (six months), and the Parliament position shortens this to 2 November 2026 (three months) per secondary reporting. Under the current law in force, Art. 50 applies from 2 August 2026[src]
What you actually do:
- Implement machine-readable metadata in AI-generated content. For images: embed markers using C2PA (Coalition for Content Provenance and Authenticity) or similar standards. For text: consider watermarking techniques or metadata headers. (A simplified marking sketch follows this list.)
- This is a provider obligation (the company generating the content), but deployers must not remove or disable these markings.
- The marking must survive common sharing and editing operations where technically feasible.
- Exception: AI systems performing assistive functions (e.g., spell-checking, grammar correction) that do not substantially alter the input are exempt.
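As a simplified illustration of machine-readable marking, this sketch embeds and reads back a provenance field in PNG metadata using Pillow. It is not a C2PA implementation; a production system would attach a signed, tamper-evident manifest via a dedicated C2PA library. The file names and generator string are hypothetical:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a simple machine-readable provenance field in a PNG (illustration only)."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)

def read_marking(path: str) -> dict:
    """Read back the PNG text metadata, e.g. to check the marking survived processing."""
    return dict(Image.open(path).text)

mark_as_ai_generated("output.png", "output_marked.png", generator="acme-image-model/1.4")
print(read_marking("output_marked.png"))
```

Plain text chunks like these are trivial to strip, which is exactly why Art. 50(2) asks for solutions that are effective, interoperable, robust, and reliable; treat this as a starting point, not an end state.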
3. Emotion Recognition Disclosure (Art. 50(3))
What it requires: Deployers of emotion recognition systems or biometric categorisation systems must inform the natural persons exposed. They must also process personal data in accordance with GDPR, the Law Enforcement Directive, and relevant data protection regulations.
What you actually do:
- If your AI analyzes facial expressions, voice tone, body language, or physiological signals to detect emotions: notify every person being analyzed before the analysis begins.
- The notification must specify that emotion recognition is in use and what data is being processed.
- Important: certain emotion recognition uses in the workplace and education are prohibited under Art. 5. Check Art. 5 first before implementing any emotion recognition disclosure — you may be banned from using the system entirely.
4. Deep Fake Disclosure (Art. 50(4))
What it requires: Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated. The disclosure must be made in a clear and visible manner, labeling the content as AI-generated.
What you actually do:
- If your product generates realistic images, videos, or audio of real people, or manipulates existing media to alter what someone appears to say or do: label the output visibly. Example: a watermark, a caption, or a persistent label stating "AI-generated content."
- The label must be placed in a way that is "clearly visible and recognisable" to the average person.
- Exception: content that is part of an "obviously artistic, creative, satirical, fictional, or analogous work" — but this exception is narrow, and you should err on the side of disclosure.
- Note: the machine-readable marking under 50(2) and the visible disclosure under 50(4) are separate requirements. You may need to implement both.
Concrete Example
A SaaS company builds a customer service platform with three AI features: (1) an AI chatbot for first-line support, (2) an AI email composer that drafts responses for agents, and (3) a sentiment analysis module that detects customer frustration. Under Art. 50, they must: display "You are chatting with an AI assistant" at the start of every chatbot conversation (50(1)); embed C2PA metadata in AI-drafted emails so the content is machine-detectable as AI-generated (50(2)); and notify customers that their communications are being analyzed for sentiment before the analysis occurs (50(3)). The chatbot notice appears as a banner above the chat window. The email metadata is embedded automatically by the AI provider. The sentiment notice is added to the support portal's privacy notice and displayed as a one-time notification when a customer opens a support ticket.
Common Mistakes
- Burying the disclosure in terms of service. Art. 50(1) requires notification "at the latest at the time of first interaction." A line in your ToS that users accepted six months ago does not satisfy this. The notice must be at the point of interaction.
- Assuming "obvious" too broadly. Providers and deployers often assume users know they are interacting with AI. Unless the AI nature is genuinely unmistakable from the interface (a clearly labeled "AI" section), disclose explicitly. When in doubt, disclose.
- Ignoring machine-readable marking (50(2)). A visible "AI-generated" label on an image does not satisfy 50(2), which requires machine-readable detection. You need both human-visible and machine-readable markers.
- Not checking Art. 5 before implementing emotion recognition disclosure. Some emotion recognition uses are banned entirely under Art. 5 (prohibited practices). If your use case is prohibited, a transparency notice does not make it legal — you must not deploy the system at all.
Providers must report serious incidents to the market surveillance authorities of the Member States where the incident occurred immediately after establishing a causal link, and in any event no later than 15 days after becoming aware, or 2 days for widespread infringements or serious and irreversible disruption to critical infrastructure[src]
Both providers and deployers have reporting duties, with strict timelines. This is the AI Act's equivalent of GDPR's 72-hour breach notification — but with different triggers and different recipients.
What Is a "Serious Incident"?
Art. 3(49) defines a serious incident as any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
- Death of a person or serious damage to a person's health
- Serious and irreversible disruption of the management or operation of critical infrastructure
- Breach of obligations under Union law intended to protect fundamental rights
- Serious damage to property or the environment
The key word is "serious." Not every AI error or malfunction triggers reporting. A chatbot giving a wrong product recommendation is not a serious incident. An AI hiring tool systematically rejecting candidates of a particular ethnicity is — it constitutes a breach of fundamental rights.
Reporting Timelines
| Scenario | Deadline | Details |
|---|---|---|
| Death of a person | Immediately after establishing (or suspecting) a causal link, and no later than 10 days after awareness | An initial, incomplete report may be followed by a complete report once the investigation ends |
| Widespread infringement, or serious and irreversible disruption of critical infrastructure | Immediately, and no later than 2 days after awareness | The scale and urgency trigger the shortest deadline |
| All other serious incidents (serious health damage, breach of fundamental-rights obligations, serious damage to property or the environment) | Immediately after establishing a causal link, and no later than 15 days after awareness | A single complete report is acceptable if the investigation is finished in time; otherwise, an initial report followed by a complete report |
The clock starts when the provider or deployer becomes aware (or when they should reasonably have become aware) of the incident. Ignorance due to inadequate monitoring is not a defense.
Who Reports to Whom
- Providers report to the market surveillance authority of the member state where the incident occurred.
- Deployers report serious incidents to the provider and to the relevant market surveillance authority.
- If the incident occurs in multiple member states, report to each relevant authority.
- The provider must also report to the importer or distributor where applicable.
What to Include in a Report
While the exact reporting format will be specified by implementing acts, based on Art. 73 and general incident reporting best practices, your report should include:
- System identification: AI system name, version, provider, registration number in the EU database
- Incident description: What happened, when it was detected, what harm occurred or is likely
- Affected persons: Number and categories of persons affected
- Root cause analysis: What caused the incident, to the extent known at the time of reporting (the initial report can state "investigation ongoing")
- Immediate actions taken: What corrective actions were taken — system suspended, outputs reversed, affected persons notified
- Preventive measures: What steps are being taken to prevent recurrence
- Contact information: Who the authority should contact for follow-up
Step-by-Step: Building an Incident Response Process
- Define incident categories. Map the Art. 3(49) definition to your specific context. For a hiring AI: systematic discrimination = serious incident (fundamental rights). For a medical AI: wrong diagnosis leading to delayed treatment = serious incident (health damage).
- Establish detection mechanisms. You cannot report what you do not detect. Implement monitoring that can catch: anomalous output patterns, demographic disparities in AI decisions, user complaints referencing AI behavior, and system malfunctions.
- Create an internal escalation path. Define who receives initial incident reports internally, who has authority to classify an incident as "serious," and who is responsible for external reporting. This should not require multiple approval layers — the timelines are tight.
- Prepare report templates. Have a pre-filled template ready so the team can focus on facts, not formatting, during a stressful incident (a template sketch follows this list).
- Identify your authorities. Know which national market surveillance authority you report to. Each EU member state designates one. Find yours in advance — do not scramble during an incident.
- Conduct drills. Run a tabletop exercise at least once: "Our AI hiring tool has been rejecting female candidates at 2x the rate of male candidates for 3 weeks. Walk through the response." The drill reveals gaps in your process.
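A sketch of the report template mentioned above; the fields mirror the "What to Include in a Report" list and are an assumption, since the official format will be set by implementing acts:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SeriousIncidentReport:
    """Illustrative internal template; align with the official format once published."""
    report_type: str                 # "initial" or "complete"
    system_name: str
    system_version: str
    eu_database_registration: str
    incident_description: str
    date_detected: str
    affected_persons: str            # number and categories
    root_cause: str                  # "investigation ongoing" is acceptable in an initial report
    immediate_actions: str
    preventive_measures: str
    contact: str

report = SeriousIncidentReport(
    report_type="initial",
    system_name="ed-triage-assistant",
    system_version="4.2.1",
    eu_database_registration="EU-DB-000000",
    incident_description="Under-prioritisation of atypical cardiac presentations after update 4.2.1",
    date_detected="2026-05-03",
    affected_persons="3 patients (delayed treatment)",
    root_cause="investigation ongoing",
    immediate_actions="Rolled back to 4.2.0; deployers notified",
    preventive_measures="Pre-release regression suite extension (pending)",
    contact="incident-response@example.com",
)
print(json.dumps(asdict(report), indent=2))
```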
Concrete Example
A healthcare SaaS provides an AI triage system that recommends priority levels for emergency department patients to the hospitals that deploy it. A software update introduces a regression: the system consistently under-prioritizes patients presenting with atypical cardiac symptoms, leading to 3 patients experiencing delayed treatment over 5 days. A nurse notices the pattern and reports it to the hospital, which informs the provider. The provider classifies this as a serious incident (serious harm to health) and must: (1) report to the national market surveillance authority immediately after establishing the causal link, and no later than 15 days after becoming aware, starting with an initial report if the investigation is still running; (2) suspend the AI triage system or revert to the previous version; (3) notify the deploying hospitals and take corrective action under Art. 20; and (4) complete the report with the root cause (regression in the model update), the number of affected patients, and corrective actions (rollback, additional testing requirements for updates, enhanced monitoring thresholds).
Common Mistakes
- Not having a process before an incident occurs. The 2-day timeline for critical incidents leaves no time to design a reporting process from scratch. Build it now.
- Confusing AI Act reporting with GDPR breach notification. These are separate obligations with different triggers, different timelines, and different recipients. A data breach in your AI system may trigger both GDPR Art. 33 (72-hour notification to the data protection authority) and AI Act Art. 73 (reporting to the market surveillance authority). You must do both.
- Waiting for certainty before reporting. Art. 73 requires reporting when you become aware or should reasonably have become aware. An initial report with "investigation ongoing" is far better than a late report with full details.
- Only reporting to the provider. Deployers must report to both the provider AND the market surveillance authority. Reporting only to the provider does not satisfy the obligation.
Article 72 requires providers of high-risk AI systems to establish and document a post-market monitoring system. This is the ongoing surveillance obligation — what you do after the system is deployed to ensure it continues to comply. Think of it as the AI system's ongoing health check, not a one-time inspection.
What the Article Requires
The provider must establish a post-market monitoring system that is proportionate to the nature of the AI technologies and the risks of the high-risk system. The system must:
- Actively and systematically collect, document, and analyze relevant data provided by deployers or collected through other sources throughout the AI system's lifetime
- Allow the provider to continuously evaluate the AI system's compliance with the requirements in Chapter III, Section 2 (Arts. 8-15)
- Feed into the risk management system (Art. 9) — monitoring data must trigger risk re-evaluation when needed
What a Monitoring Plan Includes
Art. 72(3) requires the post-market monitoring plan to be part of the technical documentation (Annex IV). The plan must include at minimum:
- Data collection strategy: What data you will collect, from which sources, and how frequently. Sources include: deployer feedback, system logs, performance metrics, user complaints, incident reports, and publicly available information (e.g., academic research on vulnerabilities in your model type).
- Performance monitoring metrics: The specific metrics you will track — accuracy, precision, recall, false positive/negative rates, and crucially, these metrics broken down by relevant subgroups (demographic, geographic, etc.).
- Bias and drift detection: How you will detect performance degradation (model drift) and emerging biases. Define thresholds: at what point does a performance change trigger investigation?
- Deployer feedback mechanisms: How deployers report issues to you. This is a two-way obligation — Art. 26(4) requires deployers to inform providers of risks, and you must have a channel to receive that information.
- Review schedule: How often you review monitoring data. For high-risk systems with frequent updates, this may be weekly. For stable systems, monthly or quarterly may suffice. The key is that the schedule is defined, not ad hoc.
- Trigger conditions: What findings trigger specific actions — re-evaluation of the risk management system, updates to technical documentation, corrective action, or incident reporting under Art. 73.
- Roles and responsibilities: Who is responsible for monitoring, who reviews findings, and who has authority to trigger corrective actions.
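Here is the configuration sketch referenced above: one way to capture the plan's minimum elements in a machine-readable form. Every source, metric, threshold, and role name is a hypothetical illustration; the Regulation mandates what the plan must cover, not this particular structure.

```python
# One way to capture the plan's minimum elements as configuration. All sources,
# metrics, thresholds, and role names below are hypothetical illustrations.
MONITORING_PLAN = {
    "data_collection": {
        "sources": ["system_logs", "deployer_feedback_portal",
                    "user_complaints", "published_research"],
        "frequency": {"system_logs": "continuous",
                      "deployer_feedback_portal": "weekly"},
    },
    "performance_metrics": {
        "overall": ["accuracy", "false_negative_rate"],
        "by_subgroup": ["age_band", "gender", "nationality"],
    },
    "drift_detection": {"method": "population_stability_index",
                        "alert_threshold": 0.2},
    "review_schedule": "monthly",
    "trigger_conditions": [
        {"if": "subgroup_metric_deviation > 0.05", "then": "open_investigation"},
        {"if": "confirmed_harm", "then": "file_art_73_report"},
    ],
    "roles": {
        "data_collection": "ml_ops_team",
        "review": "compliance_officer",
        "corrective_action_authority": "cto",
    },
}
```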
Data to Collect
| Data Category | Examples | Why It Matters |
|---|---|---|
| Performance metrics | Accuracy, F1 score, AUC-ROC by subgroup | Detect degradation over time (model drift) |
| Input data characteristics | Distribution shifts in incoming data vs. training data | Input drift is one of the most common causes of AI performance degradation (see the drift-check sketch below this table) |
| Output distributions | Decision rates, score distributions, rejection rates by category | Detect emerging bias or systemic errors |
| User/deployer feedback | Complaints, override rates, reported errors | Real-world signal that metrics alone may miss |
| Incident data | Near-misses, actual incidents, Art. 73 reports | Pattern detection — multiple near-misses may predict a serious incident |
| External intelligence | Published vulnerabilities, academic papers on model weaknesses, regulatory guidance updates | Risks you did not know about at deployment may emerge later |
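Because input drift shows up in almost every deployed system, here is the drift-check sketch referenced in the table: the population stability index (PSI), a common way to compare a live feature distribution against its training-time baseline. The feature, bucket count, and 0.2 alert threshold are illustrative assumptions, not requirements of the Regulation.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time feature distribution and live inputs.

    Common rule of thumb (not a legal threshold): < 0.1 stable,
    0.1-0.2 moderate shift, > 0.2 significant shift worth investigating.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # avoid division by zero in sparsely populated buckets
    base_pct = np.clip(base_counts / base_counts.sum(), eps, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), eps, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 12_000, 10_000)  # baseline feature distribution
live_income = rng.normal(58_000, 12_000, 5_000)       # shifted live distribution
psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> within tolerance")
```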
When to Update
The monitoring plan is not static. You must update it when:
- The AI system is significantly modified (new model version, retraining, expanded use case)
- Monitoring reveals risks not previously identified
- A serious incident occurs (Art. 73) — the root cause analysis should feed back into the monitoring plan
- Harmonised standards or common specifications change
- A national authority or the AI Office issues guidance affecting your system
Concrete Example
A provider of an AI credit-scoring system establishes a post-market monitoring plan. They collect: weekly performance metrics (approval/denial rates by age group, gender, and nationality), monthly model drift reports comparing current input distributions to training data distributions, deployer-reported issues via a dedicated compliance portal, and quarterly reviews of published research on credit-scoring bias. Their trigger conditions: if denial rate for any demographic group deviates by more than 5% from the baseline, an investigation is launched within 7 days. If the investigation confirms bias, the risk management system is updated, affected deployers are notified, and if the bias caused harm, an Art. 73 report is filed. The monitoring plan assigns the ML operations team as responsible for data collection, the compliance officer for review, and the CTO as the authority for corrective action decisions.
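A minimal sketch of that trigger condition, reading "more than 5%" as five percentage points: compare each subgroup's current denial rate against its baseline and flag anything beyond the threshold. Group names and rates are hypothetical.

```python
# Hypothetical baselines (denial rates at validation time) and rates observed
# in the current monitoring window; threshold read as 5 percentage points.
BASELINE_DENIAL_RATES = {"age_18_30": 0.22, "age_31_50": 0.18, "age_51_plus": 0.20}
CURRENT_DENIAL_RATES = {"age_18_30": 0.23, "age_31_50": 0.19, "age_51_plus": 0.27}
THRESHOLD = 0.05

def flag_deviations(baseline: dict, current: dict, threshold: float) -> list:
    """Return subgroups whose denial rate deviates beyond the threshold."""
    flagged = []
    for group, base_rate in baseline.items():
        deviation = abs(current.get(group, base_rate) - base_rate)
        if deviation > threshold:
            flagged.append(f"{group}: deviation {deviation:.2%} exceeds {threshold:.0%}")
    return flagged

for finding in flag_deviations(BASELINE_DENIAL_RATES, CURRENT_DENIAL_RATES, THRESHOLD):
    # Per the plan above, each flagged subgroup opens an investigation within 7 days.
    print("TRIGGER:", finding)
```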
Common Mistakes
- Monitoring only aggregate metrics. A system that monitors overall accuracy but not subgroup performance will miss discriminatory drift. Art. 72 requires monitoring relevant to Art. 9 risks — and demographic bias is almost always a relevant risk for high-risk systems.
- No trigger conditions. Collecting data without defining what constitutes a problem. If you have no thresholds, monitoring data accumulates without driving action. Define thresholds before deployment.
- Not connecting monitoring to risk management. Art. 72 explicitly states that monitoring data must be used to evaluate compliance and feed into the Art. 9 risk management system. If your monitoring team and your risk management team do not communicate, you have a compliance gap.
- Relying solely on deployer reports. Deployers may not detect or report all issues. Your monitoring system must include proactive data collection (system logs, automated performance checks), not just reactive deployer feedback.
Distributors (entities that make AI systems available on the market without modifying them) have lighter but real obligations (Article 24).
What Distributors Must Do
- Verify the provider has completed conformity assessment and CE marking
- Verify instructions for use are provided in the correct language
- Verify the AI system bears the required identification information
- Not make available systems they know or should know are non-compliant
- Inform the provider and market surveillance authorities if a system poses a risk
When Distributors Become Providers
If a distributor modifies the AI system, puts it on the market under their own name, or changes the intended purpose — they become a provider with full provider obligations.
The AI Act relies on harmonized standards (Arts. 40-41) and codes of practice (Art. 56) to define the technical details of compliance.
Harmonized Standards (CEN/CENELEC)
CEN-CENELEC JTC 21 is developing harmonised standards under Art. 40 to give a presumption of conformity. In October 2025, the CEN and CENELEC Technical Boards (BTs) adopted an exceptional-measures package to accelerate publication of key deliverables, with publication targeted for Q4 2026. prEN 18286 (Quality Management Systems for AI) is among the first to reach the Enquiry stage[src]
The European Commission has tasked CEN and CENELEC (European standardization bodies) with developing standards covering:
- Risk management systems (Art. 9 implementation)
- Data governance requirements (Art. 10 implementation)
- Technical documentation templates (Annex IV)
- Accuracy, robustness, and cybersecurity testing methods
- Quality management system requirements
Compliance with harmonized standards creates a presumption of conformity — meaning if you follow the standard, authorities presume you comply with the corresponding article.
Codes of Practice for GPAI
The General-Purpose AI Code of Practice, drawn up by independent experts under Art. 56 (multi-stakeholder process), was published in final form on 10 July 2025. It offers GPAI providers a way to demonstrate compliance with Art. 53 (all providers) and Art. 55 (GPAI models with systemic risk). Adopting the Code is voluntary, but non-adoption exposes providers to closer scrutiny and to the risk of fines under Art. 99(4)[src]
How to Stay Current
- Follow the AI Office announcements: EC AI policy page
- Monitor CEN/CENELEC AI standardization work programs
- Subscribe to the European AI Act newsletter at artificialintelligenceact.eu
Articles 57-58 require each Member State to establish at least one AI regulatory sandbox by the headline application date. The Regulation applies from 2 August 2026, with earlier dates for Chapters I-II (prohibited practices and AI literacy, from 2 February 2025) and Chapter V (general-purpose AI, from 2 August 2025), and a later date for Annex I legacy high-risk systems already on the market (2 August 2027)[src]
What's a Sandbox?
A controlled environment where companies can develop, test, and validate innovative AI systems under the direct supervision of national authorities — before full market deployment. Think of it as a "safe space to experiment."
Benefits for Startups
- Reduced compliance burden: Sandbox participants get guidance from regulators during development, not after
- Faster time to market: Regulatory questions answered before launch
- SME priority: Art. 57(4) requires sandboxes to prioritize access for SMEs and startups
- Real-world testing: Art. 58 allows testing in real-world conditions with informed participants
How to Apply
- Check your national authority's website for sandbox applications
- Prepare: description of AI system, intended purpose, risk assessment, testing plan
- Apply early — sandbox spots are limited and competitive
Scenario: List All Obligations
Your company is a Series B SaaS startup in Berlin. You use Anthropic's Claude API to power a hiring tool that screens and ranks job applicants for EU enterprise customers. List every obligation you have under the AI Act, citing the specific articles.
Answer
You are a deployer of a high-risk AI system (Annex III, Category 4: Employment). Your obligations:
- Human oversight (Art. 26(2))
- System monitoring (Art. 26(5))
- Log retention (Art. 26(6))
- DPIA (Art. 26(9))
- Inform workers and candidates about AI use (Art. 26(7))
- Transparency disclosure (Art. 50)
- Incident reporting (Art. 73)
- EU database registration (Art. 49)
- Risk management contribution (Art. 9, via the provider)
- Request provider documentation (Arts. 13, 47)
- AI literacy for staff (Art. 4)