Prohibited, High-Risk & GPAI
What's banned outright (Art. 5), what triggers high-risk obligations (Annex III), and what general-purpose AI providers owe under Arts. 51-56.
Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689
Article 5 bans AI practices that pose an unacceptable risk to fundamental rights, including subliminal, manipulative, or deceptive techniques causing significant harm; exploitation of vulnerabilities based on age, disability, or socio-economic situation; social scoring leading to detrimental treatment; and (subject to narrow exceptions) real-time remote biometric identification in publicly accessible spaces for law enforcement[src]
Application date: Chapter I (general provisions, including Art. 4 AI literacy) and Chapter II (prohibited practices, Art. 5) apply from 2 February 2025[src]
What's Banned
| Practice | What It Means | Why It's Banned |
|---|---|---|
| Social scoring | AI that evaluates people based on their social behavior or personal traits, leading to detrimental treatment | Violates human dignity; creates systemic discrimination |
| Subliminal manipulation | AI that deploys subliminal techniques beyond a person's consciousness to distort behavior, causing significant harm | Undermines autonomy and free will |
| Exploitation of vulnerabilities | AI targeting specific vulnerabilities (age, disability, social/economic situation) to distort behavior | Preys on those least able to protect themselves |
| Real-time remote biometric ID | Real-time facial recognition in publicly accessible spaces by law enforcement (with narrow exceptions) | Mass surveillance incompatible with privacy rights |
| Emotion recognition | AI inferring emotions in workplace and education settings (with exceptions for safety/medical) | Invasive, unreliable, discriminatory |
| Untargeted facial scraping | Creating facial recognition databases by scraping images from the internet or CCTV | Mass collection without consent violates privacy |
| Biometric categorization | AI that categorizes individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life/orientation | Creates profiles that enable discrimination |
| Predictive policing (individual) | AI predicting that a specific person will commit a crime based solely on profiling or personality traits | Violates the presumption of innocence |
Exceptions
Real-time biometric identification has three narrow exceptions for law enforcement:
- Searching for specific victims (kidnapping, trafficking, sexual exploitation)
- Preventing specific, substantial, imminent threats to life or terrorist attacks
- Identifying suspects of specific serious criminal offences (those carrying prison terms of 4+ years)
Even these require prior authorization by a judicial authority or an independent administrative authority, plus a necessity and proportionality assessment.
Penalties for Prohibited Practices
Non-compliance with the prohibition of AI practices under Art. 5 is subject to administrative fines of up to EUR 35 000 000 or, for an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher[src]
For SMEs and start-ups the direction reverses: each fine under Art. 99 is capped at the lower of the percentage or the absolute amount listed in paragraphs 3, 4, and 5, not the higher[src]
These are the highest fines in the entire Act.
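To make the higher-of/lower-of logic concrete, here is a minimal Python sketch of the Art. 99 fine caps. The statutory figures are from the Act; the function name and inputs are illustrative.

```python
def art5_fine_cap(worldwide_annual_turnover_eur: float, is_sme: bool) -> float:
    """Upper bound on a fine for an Art. 5 violation (Art. 99(3) and 99(6))."""
    absolute_cap = 35_000_000                              # EUR 35 million
    percentage_cap = 0.07 * worldwide_annual_turnover_eur  # 7% of turnover
    if is_sme:
        # Art. 99(6): for SMEs and start-ups, the LOWER of the two applies.
        return min(absolute_cap, percentage_cap)
    # For everyone else, the HIGHER of the two applies.
    return max(absolute_cap, percentage_cap)

# Large undertaking, EUR 1bn turnover: 7% (EUR 70m) exceeds EUR 35m, so EUR 70m.
print(art5_fine_cap(1_000_000_000, is_sme=False))  # 70000000.0
# SME, EUR 10m turnover: 7% (EUR 700k) is below EUR 35m, so EUR 700k.
print(art5_fine_cap(10_000_000, is_sme=True))      # 700000.0
```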
Chapter V of the AI Act (Arts. 51-56) creates obligations specifically for General-Purpose AI (GPAI) models: foundation models like GPT-4, Claude, Gemini, and Llama. Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101[src]
What's a GPAI Model?
A "general-purpose AI model" (GPAI) is an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (Art. 3(63)). A "general-purpose AI model with systemic risk" is a GPAI model meeting the conditions in Art. 51[src] This covers GPT-4, Claude, Gemini, Llama, Mistral, and any model that can be adapted for multiple tasks.
This covers: GPT-4, Claude, Gemini, Llama, Mistral, and any model that can be adapted for multiple tasks.
Two Tiers of GPAI Obligations
All GPAI providers (Article 53) must:
- Prepare and maintain technical documentation (training methods, data, evaluation)
- Provide information and documentation to downstream providers who integrate the model into their own AI systems
- Establish a policy for complying with EU copyright law
- Publish a sufficiently detailed summary of training data content
GPAI with systemic risk (Article 55) — additional obligations:
- Perform model evaluations (including adversarial testing/red-teaming)
- Assess and mitigate systemic risks
- Track, document, and report serious incidents to the AI Office
- Ensure adequate cybersecurity protections
Systemic Risk Threshold
A GPAI model is presumed to have systemic risk if its training used more than 10^25 FLOPs (floating-point operations). The Commission can also designate models based on other criteria.
Models presumed systemic: GPT-4 and later, potentially Claude 3/4 Opus and Gemini Ultra. The Commission publishes and keeps up to date the official list of designated models.
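A quick sketch of how the presumption works in practice. The 10^25 threshold is from Art. 51(2); the "6 * parameters * training tokens" compute estimate is a common community rule of thumb for dense transformer models, not anything the Act prescribes.

```python
SYSTEMIC_RISK_FLOPS = 10 ** 25  # Art. 51(2): cumulative training compute threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-envelope training compute estimate (the common 6*N*D heuristic)."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the Art. 51(2) presumption of systemic risk is triggered."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
flops = estimated_training_flops(1e12, 1e13)   # 6e25 FLOPs
print(presumed_systemic_risk(flops))           # True: above 10^25
```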
Why This Matters for Deployers
As a deployer building your product on GPT-4 or Claude (which can also make you a downstream provider):
- Your provider (OpenAI, Anthropic) must give you documentation under Art. 53(1)(b)
- If they don't, you can't fully comply with YOUR deployer obligations
- This is the "documentation gap" — providers may not have this ready yet
Annex III lists AI systems considered high-risk. There are 8 categories:
| # | Category | Examples | Why High-Risk |
|---|---|---|---|
| 1 | Biometrics | Facial recognition (non-prohibited), emotion recognition, biometric categorization | Fundamental rights to privacy, non-discrimination |
| 2 | Critical infrastructure | AI managing road traffic, water/gas/electricity supply, digital infrastructure | Failure can endanger life and public safety |
| 3 | Education & vocational training | Determining access to education, evaluating learning outcomes, monitoring cheating | Shapes life opportunities, potential for bias |
| 4 | Employment & workers | CV screening, hiring decisions, performance monitoring, promotion/termination decisions | Affects livelihoods, high discrimination risk |
| 5 | Essential services | Credit scoring, insurance pricing, emergency services dispatch prioritization | Access to essential resources, discrimination risk |
| 6 | Law enforcement | Risk assessment for crime prediction, lie detection, evidence evaluation | Liberty, presumption of innocence, due process |
| 7 | Migration & border | Asylum application assessment, border surveillance, visa processing | Affects vulnerable populations, fundamental rights |
| 8 | Justice & democracy | AI assisting judicial decisions, election influence analysis | Rule of law, democratic processes |
The "Safety Component" Rule
Even if an AI system doesn't fall into these categories directly, it's high-risk if it's a safety component of a product covered by EU product safety legislation (Annex I). This catches AI in medical devices, vehicles, machinery, toys, and more.
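A minimal sketch of the two routes into the high-risk tier under Art. 6(1)-(2). The class and field names are illustrative, and real classification is a legal judgment, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_iii_category: int | None   # 1-8 if listed in Annex III, else None
    safety_component_annex_i: bool   # safety component of an Annex I product

def is_high_risk_candidate(system: AISystem) -> bool:
    # Route 1 (Art. 6(1)): safety component of a product covered by EU
    # harmonization legislation listed in Annex I (medical devices, machinery,
    # toys, vehicles, and so on).
    if system.safety_component_annex_i:
        return True
    # Route 2 (Art. 6(2)): listed in one of the eight Annex III areas,
    # still subject to the Art. 6(3) exemption discussed below.
    return system.annex_iii_category is not None

# A resume screener: Annex III category 4 (employment), not an Annex I product.
print(is_high_risk_candidate(AISystem(4, False)))  # True
```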
Exemptions Within High-Risk
Article 6(3) allows an Annex III system to NOT be classified as high-risk if it:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing human assessment
- Performs a preparatory task for an assessment in an Annex III use case
This exemption does NOT apply if the AI system profiles natural persons.
Category 4 of Annex III covers AI systems intended to be used for recruitment or selection of candidates, and for decisions affecting work-related relationships such as promotion, termination, task allocation, and monitoring or evaluation of workers.
What's Covered
- CV/resume screening and ranking
- Automated interview assessment (video, text, or voice analysis)
- Candidate matching algorithms
- Performance evaluation AI
- Promotion and termination decision support
- Task allocation based on worker profiling
Why It's High-Risk
Employment AI directly affects people's livelihoods and has documented bias issues:
- Amazon's hiring AI famously discriminated against women (trained on 10 years of male-dominated resumes)
- Personality assessment AI has been shown to discriminate by race and disability
- Video interview analysis can penalize non-native speakers, people with disabilities, or those from different cultural backgrounds
Deployer Obligations for Hiring AI
All high-risk deployer obligations apply (Art. 26), plus:
- Must inform workers or their representatives that high-risk AI is in use
- Must carry out a data protection impact assessment (DPIA) where the GDPR requires one, using the provider's information (Art. 26(9))
- Must implement human oversight: assign trained, competent staff who can intervene in and override the AI's hiring decisions (Art. 26(1)-(2))
- Must retain logs for at least 6 months
- Deployers that are public authorities must also register their use of the system in the EU database (Arts. 26(8), 49)
Categories 5(b) and 5(c) of Annex III cover AI used for creditworthiness assessment and credit scoring (5(b)) and for risk assessment and pricing in life and health insurance (5(c)).
What's Covered
- Automated credit decisions (loan approval, credit limits)
- AI-driven risk scoring for insurance underwriting
- Dynamic pricing based on individual risk profiles
- Fraud detection that affects access to financial services (note that Annex III 5(b) carves out AI used purely to detect financial fraud)
Why It's High-Risk
Financial AI determines who can get a loan, a mortgage, or affordable insurance. Bias here creates systemic inequality — entire communities can be redlined by algorithms.
Healthcare AI is regulated through two main paths: Annex I (the EU medical device rules, via the safety-component route) and Annex III (for example, triage of emergency calls and patients under point 5(d)). AI used for medical diagnosis, treatment recommendation, or surgical assistance faces the heaviest scrutiny.
Double Regulation
Medical AI often falls under BOTH the AI Act AND the EU Medical Device Regulation (MDR 2017/745). The AI Act requirements apply on top of existing medical device requirements — they don't replace them.
Category 3 of Annex III covers AI in education: determining access to institutions, evaluating learning outcomes, assessing appropriate education levels, and monitoring student behavior during exams (anti-cheating AI).
Categories 6 and 7 of Annex III cover AI in law enforcement (crime prediction, evidence assessment, lie detection, suspect profiling) and migration (asylum processing, border surveillance, visa decisions).
These are the most politically sensitive categories. Law enforcement AI faces additional restrictions beyond standard high-risk requirements, including stricter prohibitions on real-time biometric identification (Art. 5).
Category 2 covers AI as a safety component of critical infrastructure: road traffic management, water/gas/electricity supply, heating systems, and digital infrastructure management.
Not every AI system in an Annex III domain is automatically high-risk. Article 6(3) provides a narrow exception for AI systems that don't pose a "significant risk of harm."
When You Can Claim the Exception
Your AI system is NOT high-risk if it:
- Performs a narrow procedural task (e.g., sorting documents by format, not content)
- Improves the result of a previously completed human activity (e.g., grammar check on a hiring manager's written feedback)
- Detects patterns without replacing human assessment (e.g., flagging anomalies for human review, not making decisions)
- Performs a preparatory task for an Annex III assessment (e.g., formatting data for a human credit analyst)
The Profiling Caveat
These exceptions do NOT apply if the AI system profiles natural persons. Profiling means any form of automated processing of personal data to evaluate personal aspects (performance, behavior, economic situation, health, preferences, interests, reliability, location, movements).
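Putting the derogation and the caveat together, here is a minimal sketch. The condition names are illustrative paraphrases of the four Art. 6(3) bullets, and the profiling check mirrors the final subparagraph.

```python
# Illustrative labels for the four Art. 6(3) conditions.
ART_6_3_CONDITIONS = {
    "narrow procedural task",
    "improves a completed human activity",
    "detects patterns without replacing human assessment",
    "preparatory task for an Annex III assessment",
}

def art_6_3_exception_applies(conditions_met: set[str],
                              profiles_natural_persons: bool) -> bool:
    """Rough reading aid for the Art. 6(3) derogation, not legal advice."""
    # The caveat is absolute: profiling natural persons defeats the exception.
    if profiles_natural_persons:
        return False
    # Otherwise, meeting at least one listed condition supports the exception.
    return bool(conditions_met & ART_6_3_CONDITIONS)

# Resume ranking: arguably a preparatory task, but it profiles candidates.
print(art_6_3_exception_applies({"preparatory task for an Annex III assessment"},
                                profiles_natural_persons=True))   # False
```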
Classify These Products
For each product, identify: (a) the risk tier, (b) which Annex III category (if high-risk), and (c) whether the Art. 6(3) exception might apply.
- A chatbot that answers customer FAQ using GPT-4.
- An AI that screens resumes and ranks candidates for a recruiter.
- An AI that generates marketing copy from product descriptions.
- An AI that predicts which insurance claims are likely fraudulent.
- An AI that sorts incoming emails into categories (spam, urgent, normal).
Show Answers
- Limited Risk. Customer-facing chatbot requires transparency disclosure (Art. 50) — must tell users they're interacting with AI. Not high-risk since it's not in an Annex III domain.
- High Risk. Annex III Category 4 (Employment). Resume screening and ranking is explicitly covered. Art. 6(3) exception does NOT apply because it profiles candidates.
- Limited Risk. Content generation requires AI-generated content labeling (Art. 50). Not high-risk — marketing is not in Annex III.
- Borderline. Annex III 5(b) explicitly excepts AI used to detect financial fraud, and 5(c) covers risk assessment and pricing for life and health insurance rather than claims handling, so a fraud-flagging tool that only surfaces claims for human review is likely not high-risk. If the system effectively decides claim outcomes by profiling claimants, treat it as high-risk, and remember the Art. 6(3) exception cannot apply where it profiles natural persons.
- Minimal Risk. Email sorting is internal tooling, not customer-facing, not in an Annex III domain. No mandatory requirements. Voluntary codes of conduct apply.