Prohibited, High-Risk & GPAI

What's banned outright (Art. 5), what triggers high-risk obligations (Annex III), and what general-purpose AI providers owe under Arts. 51-56.

Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689

2.1 Prohibited Practices (Art. 5) ~25 min

Article 5 prohibits AI practices deemed to pose an unacceptable risk to fundamental rights, including subliminal, manipulative, or deceptive techniques causing significant harm; exploitation of vulnerabilities based on age, disability, or socio-economic situation; social scoring leading to detrimental treatment; and (subject to narrow exceptions) real-time remote biometric identification in publicly accessible spaces for law enforcement.[src]

Application date: Chapter I (general provisions, including the Art. 4 AI literacy obligation) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025.[src]

What's Banned

| Practice | What It Means | Why It's Banned |
| --- | --- | --- |
| Social scoring | AI that evaluates people based on their social behavior or personal traits, leading to detrimental treatment | Violates human dignity; creates systemic discrimination |
| Subliminal manipulation | AI that deploys subliminal techniques beyond a person's consciousness to distort behavior, causing significant harm | Undermines autonomy and free will |
| Exploitation of vulnerabilities | AI targeting specific vulnerabilities (age, disability, social/economic situation) to distort behavior | Preys on those least able to protect themselves |
| Real-time remote biometric ID | Real-time facial recognition in publicly accessible spaces by law enforcement (with narrow exceptions) | Mass surveillance incompatible with privacy rights |
| Emotion recognition | AI inferring emotions in workplace and education settings (with exceptions for safety/medical uses) | Invasive, unreliable, discriminatory |
| Untargeted facial scraping | Creating facial recognition databases by scraping images from the internet or CCTV | Mass collection without consent violates privacy |
| Biometric categorization | AI that uses biometric data to infer race, political opinions, trade union membership, religious beliefs, or sex life/orientation | Creates profiles that enable discrimination |
| Predictive policing (individual) | AI predicting that a specific person will commit a crime based solely on profiling or personality traits | Violates the presumption of innocence |

Exceptions

Real-time biometric identification has three narrow exceptions for law enforcement:

  1. Searching for specific victims (kidnapping, trafficking, sexual exploitation)
  2. Preventing specific, substantial, imminent threats to life or terrorist attacks
  3. Identifying suspects of specific serious criminal offences (those carrying prison terms of 4+ years)

Even these require prior authorization by a judicial authority or an independent administrative authority, plus a necessity and proportionality assessment.

Penalties for Prohibited Practices

Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.[src]

For SMEs and start-ups the direction reverses: each fine under Art. 99 is capped at the lower of the percentage or absolute amount listed in paragraphs 3, 4 and 5, not the higher.[src]

These are the highest fines in the entire Act.
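
The "whichever is higher" rule, and its SME reversal, reduces to a single comparison. A minimal sketch in Python (function name hypothetical; amounts from Art. 99(3)):

```python
def art99_fine_cap(turnover_eur: int, sme: bool = False) -> int:
    """Fine ceiling for an Art. 5 violation under Art. 99(3):
    EUR 35M or 7% of worldwide annual turnover.
    Non-SMEs: whichever is HIGHER. SMEs/start-ups: whichever is LOWER."""
    absolute = 35_000_000
    percentage = turnover_eur * 7 // 100  # 7% of annual turnover
    return min(absolute, percentage) if sme else max(absolute, percentage)

# Large undertaking, EUR 2bn turnover: 7% = EUR 140M > EUR 35M
print(art99_fine_cap(2_000_000_000))         # 140000000
# SME, EUR 10M turnover: 7% = EUR 0.7M < EUR 35M -> lower amount applies
print(art99_fine_cap(10_000_000, sme=True))  # 700000
```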

If your product does ANY of these things — even inadvertently — you must stop immediately. There is no grace period. These prohibitions are already law.
2.2 General-Purpose AI Models (Arts. 51-56) ~25 min

Chapter V of the AI Act creates obligations specifically for General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini, and Llama. Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101.[src]

What's a GPAI Model?

A "general-purpose AI model" (GPAI) is an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (Art. 3(63)). A "general-purpose AI model with systemic risk" is a GPAI model meeting the conditions in Art. 51.[src]

This covers: GPT-4, Claude, Gemini, Llama, Mistral, and any model that can be adapted for multiple tasks.

Two Tiers of GPAI Obligations

All GPAI providers (Article 53) must:

  • Prepare and maintain technical documentation (training methods, data, evaluation)
  • Provide information and documentation to downstream deployers
  • Establish a policy for complying with EU copyright law
  • Publish a sufficiently detailed summary of training data content

GPAI with systemic risk (Article 55) — additional obligations:

  • Perform model evaluations (including adversarial testing/red-teaming)
  • Assess and mitigate systemic risks
  • Track, document, and report serious incidents to the AI Office
  • Ensure adequate cybersecurity protections
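
The tiering above is cumulative: Art. 55 duties stack on top of Art. 53, they never replace it. A minimal sketch of that relationship in Python (list contents paraphrased, names made up):

```python
BASELINE = [  # Art. 53 -- every GPAI provider
    "technical documentation",
    "downstream information and documentation",
    "EU copyright-compliance policy",
    "public training-data summary",
]
SYSTEMIC = [  # Art. 55 -- additional, systemic-risk models only
    "model evaluation incl. adversarial testing",
    "systemic-risk assessment and mitigation",
    "serious-incident reporting to the AI Office",
    "cybersecurity protections",
]

def obligations(systemic_risk: bool) -> list[str]:
    # Art. 55 obligations are additive, not a substitute for Art. 53.
    return BASELINE + SYSTEMIC if systemic_risk else BASELINE

print(len(obligations(False)), len(obligations(True)))  # 4 8
```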

Systemic Risk Threshold

A GPAI model is presumed to have systemic risk if its training used more than 10^25 FLOPs (floating-point operations). The Commission can also designate models based on other criteria.

Models presumed systemic: GPT-4 and later, potentially Claude 3/4 Opus, Gemini Ultra. The exact list is maintained by the AI Office.
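
A rough self-check against the 10^25 FLOP presumption: a widely used heuristic (not from the Act) estimates dense-transformer training compute at about 6 × parameters × training tokens. A sketch with illustrative, made-up model sizes:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51(2) presumption threshold

def training_flops(params: float, tokens: float) -> float:
    # ~6 FLOPs per parameter per training token: a common
    # approximation for dense transformers, not from the Act.
    return 6 * params * tokens

def presumed_systemic(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > SYSTEMIC_RISK_FLOPS

# Illustrative numbers only:
print(presumed_systemic(7e9, 2e12))      # ~8.4e22 FLOPs -> False
print(presumed_systemic(1.8e12, 13e12))  # ~1.4e26 FLOPs -> True
```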

Why This Matters for Deployers

As a deployer building on GPT-4 or Claude:

  • Your provider (OpenAI, Anthropic) must give you documentation under Art. 53(1)(b)
  • If they don't, you can't fully comply with YOUR deployer obligations
  • This is the "documentation gap" — providers may not have this ready yet

GPAI obligations create a supply chain dependency. Your provider must give you specific documentation. If they haven't, send the request. They are legally required to provide it.

Use AIActStack's scanner to generate a documentation request email for your specific AI provider, citing the exact articles they must comply with.
2.3 Annex III Deep-Dive — All 8 High-Risk Categories ~30 min

Annex III lists AI systems considered high-risk. There are 8 categories:

| # | Category | Examples | Why High-Risk |
| --- | --- | --- | --- |
| 1 | Biometrics | Facial recognition (non-prohibited), emotion recognition, biometric categorization | Fundamental rights to privacy, non-discrimination |
| 2 | Critical infrastructure | AI managing road traffic, water/gas/electricity supply, digital infrastructure | Failure can endanger life and public safety |
| 3 | Education & vocational training | Determining access to education, evaluating learning outcomes, monitoring cheating | Shapes life opportunities, potential for bias |
| 4 | Employment & workers | CV screening, hiring decisions, performance monitoring, promotion/termination decisions | Affects livelihoods, high discrimination risk |
| 5 | Essential services | Credit scoring, insurance pricing, emergency services dispatch prioritization | Access to essential resources, discrimination risk |
| 6 | Law enforcement | Risk assessment for crime prediction, lie detection, evidence evaluation | Liberty, presumption of innocence, due process |
| 7 | Migration & border | Asylum application assessment, border surveillance, visa processing | Affects vulnerable populations, fundamental rights |
| 8 | Justice & democracy | AI assisting judicial decisions, election influence analysis | Rule of law, democratic processes |

The "Safety Component" Rule

Even if an AI system doesn't fall into these categories directly, it's high-risk if it's a safety component of a product covered by EU product safety legislation (Annex I). This catches AI in medical devices, vehicles, machinery, toys, and more.

Exemptions Within High-Risk

Article 6(3) allows an Annex III system to NOT be classified as high-risk if it:

  • Performs a narrow procedural task
  • Improves the result of a previously completed human activity
  • Detects decision-making patterns without replacing human assessment
  • Performs a preparatory task for an assessment in an Annex III use case

This exemption does NOT apply if the AI system profiles natural persons.

If you use AI anywhere in hiring, credit decisions, or healthcare, assume you're high-risk until proven otherwise. The exemptions in Art. 6(3) are narrow and require documentation to claim.
2.4 Hiring & HR Screening — Why It's High-Risk ~20 min

Category 4 of Annex III covers AI systems intended to be used for the recruitment or selection of candidates and for decisions affecting work-related relationships (promotion, termination, task allocation, and monitoring).

What's Covered

  • CV/resume screening and ranking
  • Automated interview assessment (video, text, or voice analysis)
  • Candidate matching algorithms
  • Performance evaluation AI
  • Promotion and termination decision support
  • Task allocation based on worker profiling

Why It's High-Risk

Employment AI directly affects people's livelihoods and has documented bias issues:

  • Amazon's hiring AI famously discriminated against women (trained on 10 years of male-dominated resumes)
  • Personality assessment AI has been shown to discriminate by race and disability
  • Video interview analysis can penalize non-native speakers, people with disabilities, or those from different cultural backgrounds

Deployer Obligations for Hiring AI

All high-risk deployer obligations apply (Art. 26), plus:

  • Must inform workers or their representatives that high-risk AI is in use
  • Must perform a data protection impact assessment (DPIA) before deployment (Art. 26(9))
  • Must implement human oversight — a human must review and can override every AI hiring decision
  • Must retain logs for at least 6 months
  • Must register the system in the EU database

If you use any AI in hiring — even a simple resume keyword filter powered by GPT — you're likely high-risk. The obligations are substantial: ~100-120 hours to initial compliance.
2.5 Credit & Insurance Scoring ~15 min

Category 5(b) of Annex III covers AI used for creditworthiness assessment and credit scoring, as well as risk assessment and pricing for life and health insurance.

What's Covered

  • Automated credit decisions (loan approval, credit limits)
  • AI-driven risk scoring for insurance underwriting
  • Dynamic pricing based on individual risk profiles
  • Fraud detection that affects access to financial services

Why It's High-Risk

Financial AI determines who can get a loan, a mortgage, or affordable insurance. Bias here creates systemic inequality — entire communities can be redlined by algorithms.

Any AI making or significantly influencing credit or insurance decisions is high-risk. This includes AI that "recommends" to a human reviewer if the recommendation is routinely followed.
2.6 Medical Diagnosis & Healthcare ~15 min

Healthcare AI is regulated through two paths: Annex I (AI as a safety component of products covered by EU medical device legislation) and Annex III Category 5 (emergency healthcare patient triage). AI used for medical diagnosis, treatment recommendation, or surgical assistance faces the heaviest scrutiny.

Double Regulation

Medical AI often falls under BOTH the AI Act AND the EU Medical Device Regulation (MDR 2017/745). The AI Act requirements apply on top of existing medical device requirements — they don't replace them.

Medical AI has the most complex compliance landscape — both AI Act and MDR apply. If you're in this space, you need specialized legal counsel in addition to this curriculum.
2.7 Education & Grading ~10 min

Category 3 of Annex III covers AI in education: determining access to institutions, evaluating learning outcomes, assessing appropriate education levels, and monitoring student behavior during exams (anti-cheating AI).

AI proctoring tools and automated grading systems are high-risk. If your edtech product uses AI to make decisions that affect students' educational paths, prepare for full high-risk compliance.
2.8 Law Enforcement & Migration ~15 min

Categories 6 and 7 of Annex III cover AI in law enforcement (crime prediction, evidence assessment, lie detection, suspect profiling) and migration (asylum processing, border surveillance, visa decisions).

These are the most politically sensitive categories. Law enforcement AI faces additional restrictions beyond standard high-risk requirements, including stricter prohibitions on real-time biometric identification (Art. 5).

If you're building AI for law enforcement or border control, you face the highest compliance burden in the entire Act, plus strict fundamental rights impact assessments.
2.9 Critical Infrastructure ~10 min

Category 2 covers AI as a safety component of critical infrastructure: road traffic management, water/gas/electricity supply, heating systems, and digital infrastructure management.

AI managing infrastructure where failure threatens life or safety is automatically high-risk. This also intersects with NIS2 cybersecurity requirements.
2.10 The "Significant Risk" Threshold ~15 min

Not every AI system in an Annex III domain is automatically high-risk. Article 6(3) provides a narrow exception for AI systems that don't pose a "significant risk of harm."

When You Can Claim the Exception

Your AI system is NOT high-risk if it:

  1. Performs a narrow procedural task (e.g., sorting documents by format, not content)
  2. Improves the result of a previously completed human activity (e.g., grammar check on a hiring manager's written feedback)
  3. Detects patterns without replacing human assessment (e.g., flagging anomalies for human review, not making decisions)
  4. Performs a preparatory task for an Annex III assessment (e.g., formatting data for a human credit analyst)

The Profiling Caveat

These exceptions do NOT apply if the AI system profiles natural persons. Profiling means any form of automated processing of personal data to evaluate personal aspects (performance, behavior, economic situation, health, preferences, interests, reliability, location, movements).
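
The exception logic in this section reduces to: profiling is an absolute bar; otherwise any one of the four conditions suffices. A hypothetical sketch (flag names invented):

```python
def art6_3_exception(narrow_procedural: bool,
                     improves_prior_human_work: bool,
                     detects_patterns_only: bool,
                     preparatory_task: bool,
                     profiles_natural_persons: bool) -> bool:
    """Sketch of the Art. 6(3) derogation logic, not legal advice.
    Profiling of natural persons blocks the exception outright."""
    if profiles_natural_persons:
        return False
    return any([narrow_procedural, improves_prior_human_work,
                detects_patterns_only, preparatory_task])

# A resume ranker profiles candidates -> exception unavailable:
print(art6_3_exception(False, False, True, False,
                       profiles_natural_persons=True))  # False
```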

If your AI system profiles people in any way — even to "assist" a human decision-maker — you cannot claim the Art. 6(3) exception. Assume high-risk.
2.11 Module 2 Quiz ~15 min

Classify These Products

For each product, identify: (a) the risk tier, (b) which Annex III category (if high-risk), and (c) whether the Art. 6(3) exception might apply.

  1. A chatbot that answers customer FAQ using GPT-4.
  2. An AI that screens resumes and ranks candidates for a recruiter.
  3. An AI that generates marketing copy from product descriptions.
  4. An AI that predicts which insurance claims are likely fraudulent.
  5. An AI that sorts incoming emails into categories (spam, urgent, normal).

Answers

  1. Limited Risk. Customer-facing chatbot requires transparency disclosure (Art. 50) — must tell users they're interacting with AI. Not high-risk since it's not in an Annex III domain.
  2. High Risk. Annex III Category 4 (Employment). Resume screening and ranking is explicitly covered. Art. 6(3) exception does NOT apply because it profiles candidates.
  3. Limited Risk. Content generation requires AI-generated content labeling (Art. 50). Not high-risk — marketing is not in Annex III.
  4. High Risk. Annex III Category 5 (Essential services — insurance). Fraud detection that affects claim outcomes is high-risk. If it only flags for human review without profiling, Art. 6(3) MIGHT apply — but fraud detection typically involves profiling.
  5. Minimal Risk. Email sorting is internal tooling, not customer-facing, not in an Annex III domain. No mandatory requirements. Voluntary codes of conduct apply.