Enforcement & Defense

National AI authorities, the European AI Office, market surveillance, and what audit-ready actually looks like in practice.

Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689

5.0 Early Enforcement — What's Happened So Far ~20 min

The EU AI Act entered into force on August 1, 2024. Enforcement is phased, and as of early 2026 we are still in the early stages — but not in a vacuum. Understanding what has happened so far, and what the GDPR parallel tells us, is critical for calibrating your urgency.

Timeline of Enforcement Milestones

  • Regulation (EU) 2024/1689 entered into force on 1 August 2024, the twentieth day after its publication in the Official Journal on 12 July 2024 [src]. The clock starts; no obligations apply yet.
  • Chapter I (general provisions, including the Art. 4 AI literacy obligation) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025 [src]. The first real enforcement deadline: any organisation using AI must ensure staff AI literacy, and all prohibited AI practices become illegal.
  • Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101 [src]. OpenAI, Anthropic, Google, and other GPAI providers must comply; national authorities must be designated.
  • The Regulation applies in full from 2 August 2026 [src], with the earlier dates above for Chapters I-II (from 2 February 2025) and Chapter V (from 2 August 2025), and a later date (2 August 2027) for Annex I legacy high-risk systems already on the market. The major deadline: all deployer obligations, conformity assessments, transparency requirements, and EU database registration.
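For calibrating urgency, the milestone dates above can be kept as a small lookup. A minimal Python sketch (the dates come from the Regulation as quoted above; the labels are our shorthand, not official chapter headings):

```python
from datetime import date

# Applicability milestones of Regulation (EU) 2024/1689.
# Labels are informal shorthand for the chapters that start applying.
MILESTONES = {
    date(2025, 2, 2): "prohibited practices (Art. 5) and AI literacy (Art. 4)",
    date(2025, 8, 2): "GPAI obligations (Ch. V), governance, penalties",
    date(2026, 8, 2): "general applicability: deployer obligations, conformity assessments",
    date(2027, 8, 2): "Annex I legacy high-risk systems already on the market",
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone labels whose applicability date has passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]
```

A compliance calendar built on this kind of table is a cheap way to avoid the common error of treating 2 August 2026 as the first deadline when Art. 4 and Art. 5 have applied since February 2025.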

What Has Actually Happened

As of early 2026, enforcement activity has been limited but deliberate:

  • Authority designation: Member states have been designating national competent authorities, though the pace varies significantly. Some (France, Netherlands, Spain) moved quickly; others are still finalising institutional arrangements.
  • AI Office establishment: The European AI Office within the Commission has been staffing up and developing codes of practice for GPAI models. Their first substantive output — draft codes of practice — went through public consultation in late 2025.
  • No formal fines yet: As of this writing, no AI Act-specific fines have been issued. This is consistent with the enforcement phase: only prohibited practices and AI literacy have been enforceable, and authorities are focused on institutional setup.
  • Informal guidance: Several national authorities have published preliminary guidance documents on AI literacy requirements and prohibited practices, signalling how they interpret these provisions.

The GDPR Parallel

GDPR enforcement offers the best predictive model. GDPR entered into force in May 2016, became enforceable in May 2018, and the first significant fines did not arrive until January 2019 (CNIL fined Google 50 million EUR). The pattern:

  1. Year 1 (2018): Grace period. Authorities focused on complaints, guidance, and institutional readiness. Very few enforcement actions.
  2. Year 2 (2019): First symbolic fines. Targets: big tech companies and egregious violators. Purpose: signal that enforcement is real.
  3. Year 3+ (2020-present): Enforcement scaled. Fines increased in size and frequency. Smaller organizations began receiving enforcement actions.

Expect the AI Act to follow a similar trajectory. The first fines are likely to target GPAI providers for Art. 53 non-compliance (the most visible obligation with the clearest deadline) and obvious prohibited practice violations. Deployer enforcement will follow, likely starting with high-risk sectors.

We are in the "grace period" phase of AI Act enforcement — similar to GDPR's first year. No fines have been issued, but the institutional machinery is being built. History tells us that enforcement will accelerate quickly once it starts. The organizations that prepared early will be rewarded; those that assumed "nobody is enforcing this" will be caught off guard.
5.1 National AI Authorities ~20 min

Each EU member state must designate one or more national competent authorities to supervise the application and implementation of the AI Act (Article 70). These authorities are your primary enforcement contact point — they receive complaints, conduct investigations, and impose penalties. The landscape is still forming, but key countries have made their designations.

Key Country Designations

  • France: CNIL (data protection) plus a dedicated AI authority coordination mechanism. France has taken a dual approach: CNIL handles AI-related data protection issues, while a broader coordination mechanism addresses non-data AI obligations. France has been among the most active in publishing guidance.
  • Germany: BNetzA (Federal Network Agency) as coordinator, with sectoral regulators for specific domains. Germany's federal structure means enforcement may involve multiple authorities depending on the sector (health, finance, employment); BNetzA provides coordination.
  • Netherlands: Autoriteit Persoonsgegevens (AP) designated as AI authority. The Netherlands moved quickly, leveraging its data protection authority. The AP has been one of the more enforcement-active DPAs under GDPR, which may predict its AI Act enforcement posture.
  • Spain: AESIA (Spanish Agency for the Supervision of Artificial Intelligence). Spain created a dedicated AI-specific agency, one of the first member states to do so. AESIA was operational before the AI Act entered into force, signalling strong enforcement intent.
  • Italy: AgID (Agency for Digital Italy) plus the Garante (data protection). Italy was active on AI enforcement even before the AI Act: the Garante's temporary ban of ChatGPT in 2023 demonstrated willingness to act aggressively on AI issues.

What This Means for You

Your primary enforcement risk depends on where your AI system affects people, not where your company is located:

  • If your AI system affects people in France, CNIL and the French AI coordination mechanism have jurisdiction
  • If your system is used across multiple member states, each national authority has jurisdiction for its territory, but the AI Office coordinates cross-border cases
  • If your company is not established in the EU but your AI system affects EU persons, you must designate an authorised representative in the EU (Art. 22), and the authority of the member state where that representative is located has primary jurisdiction

Authority Powers

National competent authorities can (Art. 74):

  • Access all information necessary for their tasks, including source code in justified cases
  • Conduct audits and inspections
  • Request documentation, data, and evidence
  • Order corrective actions, including withdrawal of AI systems from the market
  • Impose administrative fines per Art. 99
Enforcement will vary by member state. Countries with aggressive GDPR enforcement records (France, Netherlands, Italy) are likely to be aggressive on AI Act enforcement. If your AI system affects people in these jurisdictions, prepare accordingly. Know which authority has jurisdiction over your deployment and monitor their published guidance.
5.2 The European AI Office ~20 min

The European AI Office is a body within the European Commission with a specific mandate: oversee GPAI model compliance, develop codes of practice, and coordinate enforcement across member states (Articles 64-69). It is the single most important institutional actor for companies building on foundation models.

Core Mandate

The AI Office has three primary functions:

  1. GPAI oversight: Direct supervision of GPAI model providers. The AI Office (not national authorities) is responsible for ensuring OpenAI, Anthropic, Google, and other GPAI providers comply with Arts. 51-56. This includes reviewing technical documentation, evaluating systemic risk assessments, and investigating non-compliance.
  2. Codes of practice: The AI Office develops and maintains codes of practice that provide detailed guidance on how to comply with GPAI obligations. These codes, developed in consultation with industry and civil society, will be the practical benchmark for compliance. Adherence to a code of practice creates a presumption of conformity.
  3. Coordination: The AI Office coordinates between national competent authorities, facilitates information sharing, and provides guidance on consistent interpretation of the Act. It also manages the European Artificial Intelligence Board, which brings together member state representatives.

Staffing and Capacity

The AI Office has been planned with approximately 140 staff members, including technical experts, legal specialists, and policy officers. This is a small team for such a mandate; the European Data Protection Board, by comparison, had years to build its capacity. In practice, the AI Office will need to prioritise ruthlessly. Expect initial focus on the largest GPAI providers and the most obvious compliance gaps.

Powers

The AI Office can:

  • Request information and documentation from GPAI providers
  • Conduct evaluations of GPAI models, including requesting access to models for testing
  • Issue binding decisions requiring GPAI providers to take corrective action
  • Recommend that the Commission impose fines on non-compliant GPAI providers
  • Classify GPAI models as presenting systemic risk (expanding the list beyond the presumption triggered by training compute above 10^25 floating-point operations)

Why This Matters for Deployers

As a deployer, you do not interact with the AI Office directly — your enforcement relationship is with your national competent authority. However, the AI Office's work on GPAI compliance directly affects you because:

  • If the AI Office pressures GPAI providers to produce better documentation, that documentation flows downstream to you
  • The codes of practice will define what "adequate" provider documentation looks like, which determines whether what you receive is sufficient
  • If you file a complaint about a GPAI provider's failure to provide documentation, the AI Office has the investigative power to act on it
The European AI Office is the enforcement body that matters most for the supply chain problem. It directly oversees GPAI providers, develops the codes of practice that define compliance standards, and has binding decision-making power. Watch for their published codes of practice — they will become the practical benchmark for what providers must give you.
5.3 Market Surveillance (Arts. 75-94) ~20 min

Market surveillance is how enforcement actually happens in practice. Chapter IX of the AI Act (Articles 75-94) establishes the framework for monitoring AI systems that are already on the market and in use. Understanding how investigations are triggered and conducted tells you what to prepare for.

How Investigations Are Triggered

Market surveillance authorities can initiate investigations through several channels:

  • Complaints: Any person or organisation can file a complaint with the national competent authority about an AI system they believe is non-compliant. This is the most common trigger under GDPR, and it will likely be the most common trigger for the AI Act. Competitors, NGOs, disgruntled employees, and affected individuals all file complaints.
  • Proactive monitoring: Authorities can conduct proactive market scans, particularly targeting sectors known for high-risk AI use (hiring, credit, healthcare). This is resource-intensive and will likely be limited initially.
  • Incident reports: Under Art. 73, deployers must report serious incidents. These reports can trigger investigations into broader non-compliance.
  • Cross-border referrals: One national authority can refer a case to another, or the AI Office can flag potential non-compliance discovered during GPAI oversight.
  • Media and public attention: High-profile AI failures or controversies can prompt authorities to open investigations, particularly when political pressure builds.

What an Investigation Looks Like

Based on the market surveillance framework in the AI Act and analogous processes under GDPR and product safety regulation:

  1. Information request: The authority sends a formal request for documentation — your risk management system, DPIA, conformity assessment, provider correspondence, training records, incident logs. You typically have 15-30 days to respond.
  2. Document review: The authority reviews your documentation for completeness and quality. They check whether your risk assessment actually addresses the risks posed by your specific deployment, whether your DPIA is substantive, and whether you have the provider documentation you should have.
  3. Technical evaluation: In some cases, the authority may request access to the AI system for testing, or may commission independent technical evaluation. For high-risk systems, they may request access to logs and performance data.
  4. Findings and corrective action: If non-compliance is found, the authority issues findings and may require corrective action within a specified timeframe. This can range from documentation remediation to system modification to market withdrawal.
  5. Sanctions: If non-compliance is serious or not remediated, fines per Art. 99 can follow.

Powers of Authorities

Market surveillance authorities have broad powers under Art. 74:

  • Access to all necessary information, including source code in justified cases
  • Power to conduct on-site audits and inspections
  • Power to order corrective actions, including system recall or withdrawal from the market
  • Power to impose interim measures if there is an imminent risk to fundamental rights
Enforcement will be primarily complaint-driven in the early years, with proactive audits increasing over time. The most likely trigger for an investigation is a complaint — from a competitor, an affected individual, or an NGO. Your best protection is documentation: if you can produce a complete compliance file within 30 days of a request, you are in a strong position.
5.4 Penalties Breakdown (Art. 99) ~15 min

Article 99 sets out the penalty framework. Each tier applies "whichever is higher" between the absolute cap and the percentage-of-turnover.

Prohibited practices (Art. 5)

Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher [src].

Operator and notified-body obligations (Art. 99(4))

Non-compliance with operator or notified body obligations (other than Art. 5) is subject to administrative fines of up to EUR 15 000 000 or, for an undertaking, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. Covers provider obligations (Art. 16), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), deployers (Art. 26), notified bodies (Arts. 31, 33, 34), and transparency obligations (Art. 50) [src].

Incorrect information to authorities (Art. 99(5))

Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request is subject to administrative fines of up to EUR 7 500 000 or, for an undertaking, up to 1% of total worldwide annual turnover for the preceding financial year, whichever is higher [src].

SME and start-up cap

For SMEs (including start-ups), each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4, and 5 — not the higher [src].

The SME rule flips the direction: for SMEs the cap is the lower of the two ceilings, not the higher. This is the opposite of the paragraphs above and is a common source of drafting errors in compliance briefs.
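Because the SME rule flips the direction of the cap, the logic is worth making explicit. A minimal sketch, using the tier figures quoted above (the function name and parameters are ours, not from the Act):

```python
def fine_cap(tier_fixed_eur: float, tier_pct: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    """Art. 99 ceiling for one tier.

    Standard undertakings: the HIGHER of the fixed cap and the
    percentage of worldwide annual turnover.
    SMEs and start-ups: the LOWER of the two (the SME rule above).
    """
    pct_amount = tier_pct * turnover_eur
    return min(tier_fixed_eur, pct_amount) if is_sme else max(tier_fixed_eur, pct_amount)

# Art. 5 tier (EUR 35M / 7%) for a company with EUR 100M turnover:
# standard cap is EUR 35M; under the SME rule it drops to ~EUR 7M.
```

Running both branches on the same inputs is a quick way to catch the "whichever is higher" drafting error the text warns about.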
5.5 What "Audit-Ready" Looks Like ~25 min

Being "audit-ready" means you can produce a complete, credible compliance file within 30 days of a regulatory request. This is not about perfection — it is about demonstrating systematic, good-faith effort to comply. Here is the documentation trail that separates a compliant deployer from a non-compliant one.

The Compliance File

Your compliance file should contain the following documents, organised and readily accessible:

  • Risk Management Records (Art. 9): You identified, analysed, estimated, and evaluated risks throughout the AI system lifecycle. You documented residual risks and mitigation measures.
  • Data Protection Impact Assessment (DPIA) (Art. 26(9)): You assessed the AI system's impact on fundamental rights before deployment. You identified risks to specific groups and documented mitigation.
  • Conformity Assessment, if applicable (Art. 43): For high-risk systems: you assessed whether the AI system meets Chapter 2 requirements. If you are the provider of an integrated system under Art. 25, you conducted the full assessment.
  • Provider Correspondence (Arts. 13, 47, 53): You requested documentation from your AI provider. You have copies of sent requests (with dates), any responses received, and a gap analysis of what was and was not provided.
  • Provider Documentation Evaluation (Art. 13): You evaluated the provider's documentation against Art. 13 requirements and documented gaps, compensating measures, and outstanding requests.
  • Incident Logs (Art. 73): You maintained a log of all incidents (serious and non-serious). For serious incidents, you documented the report to the relevant authority within the required timeframe.
  • AI Literacy Training Records (Art. 4): You ensured staff and relevant persons have sufficient AI literacy. You documented what training was provided, to whom, when, and how competency was assessed.
  • Human Oversight Procedures (Arts. 14, 26(2)): For high-risk: you documented who provides human oversight, their qualifications, the procedures they follow, escalation protocols, and how they can override the system.
  • System Monitoring Records (Art. 26(5)): You monitored the AI system in operation. You have logs showing monitoring activities, identified issues, and actions taken.
  • Transparency Notices (Art. 50): You implemented and documented all required transparency disclosures: AI interaction notices, content labelling, deep fake disclosures.
  • EU Database Registration (Art. 49): For high-risk: you registered the AI system in the EU database and can demonstrate the registration is current and accurate.

Evidence of Ongoing Monitoring

Static documentation is necessary but not sufficient. Regulators will look for evidence that compliance is a living process:

  • Regular review cadence: Show that you review your risk assessment and DPIA periodically (quarterly for high-risk, annually for limited-risk)
  • Performance monitoring data: Accuracy metrics, bias indicators, output quality scores tracked over time
  • Incident response evidence: How you investigated past incidents, what corrective actions you took, and whether they were effective
  • Training updates: Staff training is current, not just a one-time event from 18 months ago
  • Provider relationship management: Ongoing correspondence with your AI provider, not just a single unanswered email
Assemble your compliance file now. Use the table above as a checklist. For each document, note its status: (a) complete, (b) in progress, (c) not started, (d) blocked by provider. This gap analysis is itself a compliance artifact — it shows you know what you need and are working toward it.
Audit-readiness is not about having perfect documentation — it is about having systematic, complete, and honest documentation. A gap analysis that honestly states "we requested this from our provider and have not received it" is far better than no documentation at all. Regulators reward good faith effort and systematic approach.
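The gap analysis described above works better as a small structured artifact than as a prose memo, because it can be re-run and dated. A sketch with hypothetical names (`FileItem` and `gap_report` are ours, not terms from the Act):

```python
from dataclasses import dataclass

# The four statuses suggested in the checklist above.
STATUSES = ("complete", "in_progress", "not_started", "blocked_by_provider")

@dataclass
class FileItem:
    document: str   # e.g. "Risk Management Records"
    article: str    # e.g. "Art. 9"
    status: str     # one of STATUSES

def gap_report(items: list[FileItem]) -> dict[str, list[str]]:
    """Group compliance-file documents by status: the gap analysis itself."""
    report: dict[str, list[str]] = {s: [] for s in STATUSES}
    for item in items:
        report[item.status].append(f"{item.document} ({item.article})")
    return report
```

Exporting this report with a date stamp each quarter is itself evidence of the "living process" regulators look for.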
5.6 Defense Strategies ~20 min

You receive a formal notice from a national competent authority regarding your AI system. This is not the end of the world — it is the beginning of a process. How you respond in the first 48 hours shapes the outcome. Here is the playbook.

Step 1: Do Not Panic, Do Not Ignore

A regulatory notice is not a fine. In most cases, it is an information request or a notification of an investigation. The authority wants to understand your compliance posture before deciding next steps. Your response matters enormously. Ignoring the notice or responding defensively will escalate the situation; responding promptly and transparently will de-escalate it.

Step 2: Document Everything from This Moment

From the moment you receive the notice, create a response file:

  • Log the date, time, and method of receipt
  • Identify the requesting authority and the specific articles/obligations cited
  • Note the response deadline (typically 15-30 days)
  • Preserve all relevant documents in their current state — do not modify, delete, or "improve" documentation after receiving the notice
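The response-file fields above map naturally onto a small record. A sketch with hypothetical names (`NoticeLog` is ours), assuming the response window is read off the notice itself rather than hard-coded:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class NoticeLog:
    """One entry in the response file opened on receipt of a notice."""
    received: date
    authority: str            # e.g. "CNIL"
    articles_cited: list[str] # obligations the notice references
    response_days: int = 30   # check the notice; 15-30 days is typical

    @property
    def deadline(self) -> date:
        return self.received + timedelta(days=self.response_days)
```

Logging the deadline the day the notice arrives, rather than when counsel is engaged, removes one avoidable way to miss it.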

Step 3: Engage Specialised AI Law Counsel

General corporate counsel may not have the AI Act expertise needed. Engage a law firm with specific EU AI Act experience. Key qualifications to look for:

  • Track record in EU regulatory enforcement (GDPR, product safety, digital regulation)
  • Understanding of AI technology, not just regulatory text
  • Relationships with the specific national authority that sent the notice
  • Experience with coordinated multi-jurisdiction enforcement (if your deployment spans multiple member states)

Step 4: Demonstrate Good Faith Effort

The single most important factor in regulatory enforcement outcomes is whether you can demonstrate good-faith compliance effort. Specifically:

  • Show your compliance file: Produce your risk management records, DPIA, provider correspondence, training records, and monitoring data. Even if incomplete, a structured effort demonstrates seriousness.
  • Acknowledge gaps honestly: If you are missing provider documentation, say so — and show the requests you sent. If your DPIA has blind spots, acknowledge them and explain your compensating measures.
  • Show timeline of effort: When did you start compliance work? What milestones have you achieved? A company that started compliance in 2025 is in a fundamentally different position from one that started the day before the notice arrived.

Step 5: Present a Remediation Plan

If the authority identifies non-compliance, respond with a concrete remediation plan:

  • Specific actions to address each identified gap
  • Realistic timelines for each action (do not overpromise)
  • Assigned responsibility for each action
  • Interim risk mitigation measures while remediation is in progress

Authorities have broad discretion in enforcement outcomes. A credible remediation plan can be the difference between a warning and a fine, or between a moderate fine and a severe one.

Never alter, backdate, or fabricate compliance documentation after receiving a regulatory notice. Providing incorrect information to authorities carries its own penalty tier (Art. 99(5)): administrative fines of up to EUR 7 500 000 or, for an undertaking, up to 1% of total worldwide annual turnover for the preceding financial year, whichever is higher [src]. The cover-up is always worse than the original non-compliance.
Your best defence is preparation that happened before the notice arrived. The second-best defence is a prompt, transparent, and constructive response to the notice. Engage specialised counsel immediately, produce your compliance file, acknowledge gaps honestly, and present a credible remediation plan.
5.7 Remediation Paths ~15 min

When non-compliance is identified (whether through self-assessment or regulatory investigation), you need a clear remediation path. The AI Act provides several mechanisms, and the appropriate path depends on the severity and nature of the non-compliance.

Remediation Options by Severity

  • Documentation gap (e.g. missing DPIA, incomplete risk assessment, no training records): Create the missing documentation. Backfill with current information and note the date of creation. This is the most common issue and the easiest to fix.
  • Transparency failure (no Art. 50 disclosure on a chatbot, missing AI content labels): Implement the required disclosures immediately. Document the date of implementation and any affected period where disclosures were missing.
  • Provider documentation gap (never requested provider documentation, or requests were ignored): Send formal requests immediately. Document the gap, implement compensating monitoring measures, and conduct your own performance evaluation for the interim.
  • Human oversight failure (high-risk system operating without adequate human review): Implement oversight procedures immediately. This may require temporarily reducing system autonomy (e.g. requiring human approval for all outputs) until proper oversight is in place.
  • Fundamental non-compliance (high-risk system deployed without any conformity assessment, or system operating in a prohibited category): Suspend the system's operation. Conduct a full compliance assessment before resuming. For prohibited practices, cease permanently.

Voluntary Remediation vs. Ordered Remediation

If you identify non-compliance before the regulator does, voluntary remediation carries significant benefits:

  • Demonstrates proactive compliance culture (mitigating factor in any future enforcement)
  • Allows you to control the timeline and narrative
  • Avoids the reputational damage of a formal enforcement action
  • Can be documented as evidence of ongoing monitoring (itself a compliance requirement)
Most non-compliance under the AI Act is remediable — documentation gaps can be filled, transparency measures implemented, and oversight procedures established. The critical factor is speed and honesty. Fix what you can now, document what you cannot fix immediately, and show a credible plan for the rest.
5.8 Case Studies & GDPR Enforcement Analogies ~25 min

The AI Act is too new for a substantial enforcement case history. But GDPR, which follows the same regulatory model (EU regulation with extraterritorial reach, national enforcement authorities, tiered fines), provides a reliable predictive framework. GDPR enforcement patterns over 2018-2026 predict how AI Act enforcement will likely evolve.

GDPR Enforcement Timeline as AI Act Predictor

  • Year 1 (2018-2019). GDPR: first fines were symbolic; CNIL fined Google 50M EUR (Jan 2019); focus was on big tech and transparency failures; most SMEs were not targeted. AI Act prediction: the first fines (expected 2026-2027) will likely target GPAI providers for Art. 53 non-compliance and large deployers with obvious prohibited practice violations; SME deployers are unlikely to be early targets.
  • Years 2-3 (2019-2021). GDPR: fines scaled dramatically (Amazon: 746M EUR; WhatsApp: 225M EUR); focus areas were insufficient legal basis, transparency failures, and inadequate data processing records. AI Act prediction: fines will scale as authorities build capacity and precedent; likely focus areas are GPAI documentation failures, high-risk deployers without conformity assessments, and transparency obligation violations.
  • Year 4+ (2022-present). GDPR: enforcement broadened; smaller companies were fined; cross-border coordination improved; focus shifted to data transfers, automated decision-making, and data breaches. AI Act prediction: broad enforcement of deployer obligations, focusing on supply chain documentation, human oversight gaps, and incident reporting failures; cross-border AI cases handled through AI Office coordination.

Key GDPR Cases and Their AI Act Parallels

Case 1: Google/CNIL (2019) — 50M EUR

CNIL fined Google for lack of transparency and inadequate consent for ad personalisation. The fine was based on: (a) information about data processing was scattered across multiple documents, making it difficult for users to understand; (b) consent mechanisms did not meet GDPR's freely-given, specific, informed standard.

AI Act parallel: A GPAI provider whose Art. 53 documentation is scattered across blog posts, model cards, and research papers (rather than consolidated in a single, accessible deployer document) could face similar arguments. The Act requires "sufficiently detailed" information, not a scavenger hunt.

Case 2: Clearview AI (multiple DPAs, 2022) — 20M+ EUR combined

Multiple DPAs (France, Italy, UK, Greece) fined Clearview AI for scraping facial images without consent. This case is directly relevant to AI Act prohibited practices — untargeted facial scraping is now banned under Art. 5.

AI Act parallel: Companies operating biometric AI systems that touch Art. 5 prohibitions will be early enforcement targets. The GDPR precedent shows multiple national authorities will act independently and cumulatively against the same company.

Case 3: H&M (Germany, 2020) — 35M EUR

H&M was fined for excessive monitoring of employees through detailed records of personal circumstances (health issues, family problems, religious beliefs). Managers recorded this information after "welcome back" conversations.

AI Act parallel: Workplace AI monitoring tools (especially those that profile employees based on performance, behaviour, or personal characteristics) will face scrutiny under both GDPR and the AI Act. If the monitoring tool uses AI, it is likely high-risk under Annex III Category 4 (Employment). The H&M case shows authorities will act aggressively to protect workers.

Predicted AI Act Enforcement Priorities

Based on GDPR patterns and the AI Act's structure, likely enforcement priorities in order:

  1. GPAI providers (Art. 53): The most visible targets with the clearest obligations and the August 2025 deadline. Expect the AI Office to issue information requests to major GPAI providers within months of the deadline.
  2. Prohibited practices (Art. 5): Any company operating a clearly prohibited AI system will be an early target. These cases are politically appealing and legally straightforward.
  3. High-risk deployers in employment (Annex III, Cat. 4): Hiring AI is politically sensitive, well-understood, and widely deployed. Expect early enforcement actions against companies using AI in recruitment without conformity assessments or human oversight.
  4. Transparency failures (Art. 50): Chatbots and content generators that fail to disclose AI involvement are easy to identify and easy to prove. These will generate a high volume of lower-value enforcement actions.
  5. Broad deployer enforcement: Eventually reaching all deployers who have not performed DPIAs, maintained incident logs, or ensured AI literacy.
GDPR enforcement followed a predictable pattern: big tech first, then scaled down and broadened. The AI Act will follow the same path. If you are a deployer — especially in hiring, credit, or healthcare — you are not the first target, but you are on the list. Use the time before enforcement reaches you to build your compliance file. The companies that prepared before GDPR enforcement arrived avoided the largest fines; the same will be true for the AI Act.
5.9 When Your Compliance Tool Uses AI ~15 min

AIActStack itself uses Claude's API to generate compliance documents. This makes AIActStack:

  • A deployer of Anthropic's Claude (it uses the API under its own authority)
  • A provider of a limited-risk system: the doc generator does content generation in a non-high-risk domain, so the Art. 50 transparency obligations apply
  • Obliged to disclose to users that compliance documents are AI-generated
  • Obliged to request and retain documentation from Anthropic about Claude

Your Own Compliance Checklist

  1. Art. 50 transparency notice on the doc generation feature ("This document was generated by AI")
  2. AI literacy compliance (Art. 4) — you, as the operator, understand the AI Act (this curriculum)
  3. Documentation request sent to Anthropic for Claude's technical specs
  4. Track doc generation costs and usage for future audit evidence
Eating your own dog food: if your compliance tool isn't itself compliant, your credibility is zero. Address this before launch.
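Item 1 of the checklist can be enforced in code rather than by convention. A sketch with hypothetical names (`finalize_generated_doc` and the notice wording are ours) showing one way a tool like AIActStack might append the Art. 50 disclosure to every generated document:

```python
# Hypothetical Art. 50 transparency notice; exact wording is a design choice.
AI_DISCLOSURE = (
    "This document was generated by an AI system (Claude, via the Anthropic API). "
    "Review it before relying on it. (Art. 50, Regulation (EU) 2024/1689)"
)

def finalize_generated_doc(body: str) -> str:
    """Append the transparency notice so no generated document ships without it."""
    return f"{body}\n\n---\n{AI_DISCLOSURE}"
```

Putting the notice in the single function every document passes through, rather than in each template, means a new template cannot silently omit it.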
5.10 Module 5 Quiz — Mock Audit ~20 min

Mock Audit Scenario

A national AI authority contacts your company about your AI-powered hiring tool. They request evidence of compliance with Articles 26, 50, and 73. What documents do you produce? How do you demonstrate human oversight? What if your provider (OpenAI) hasn't given you the documentation you requested?