EU AI Act Article Explainer

Article 5 of the EU AI Act: Prohibited AI Practices

Article 5 is the red line of the EU AI Act: a list of eight AI practices that cannot be placed on the market, put into service, or used in the EU at all. The prohibitions have been enforceable since 2 February 2025, and non-compliance triggers the highest fine tier in the Regulation.

What Article 5 prohibits

Article 5(1) lists eight AI practices that are banned outright. The prohibition is not softened by a risk assessment, a transparency notice, or a data protection impact assessment: the practice is simply not allowed. Unlike most of the Regulation, Article 5 did not wait for 2 August 2026; it has applied since 2 February 2025.

The eight prohibited practices at a glance

  • Art 5(1)(a) — Harmful subliminal or manipulative techniques. AI systems that deploy subliminal, purposefully manipulative, or deceptive techniques to materially distort behaviour, causing or reasonably likely to cause significant harm.
  • Art 5(1)(b) — Exploiting vulnerabilities. AI systems that exploit vulnerabilities of a natural person or a specific group of persons due to age, disability, or specific social or economic situation to materially distort behaviour.
  • Art 5(1)(c) — Social scoring. AI systems that evaluate or classify people over a certain period of time based on social behaviour or personal characteristics, with the resulting score producing detrimental treatment in unrelated contexts or disproportionate to the behaviour.
  • Art 5(1)(d) — Predictive policing based solely on profiling. AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or personality traits.
  • Art 5(1)(e) — Untargeted facial-image scraping. AI systems that create or expand facial recognition databases by untargeted scraping from the internet or CCTV.
  • Art 5(1)(f) — Emotion recognition at work or in education. AI systems that infer emotions of natural persons in workplace or education contexts, except for medical or safety reasons.
  • Art 5(1)(g) — Sensitive biometric categorisation. Biometric categorisation systems that categorise people to infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • Art 5(1)(h) — Real-time remote biometric identification in public spaces. Use of real-time RBI in publicly accessible spaces by law enforcement, except in narrowly defined circumstances (targeted search for abduction/trafficking/sexual-exploitation victims or missing persons; prevention of an imminent threat to life or a terrorist attack; localisation of suspects of serious crimes listed in Annex II), each subject to prior authorisation by a judicial or independent administrative authority.

The first seven apply to everyone (providers, deployers, importers, distributors) across all sectors. Point (h) is a law-enforcement-specific rule with its own authorisation regime in Art 5(2)–(7).

Art 5(1)(a) and (b): Manipulation and exploitation

Points (a) and (b) target AI systems designed to change what people do without them noticing, whether through subliminal or deceptive techniques (a) or by targeting vulnerabilities that make some groups easier to move (b). The statute requires both a manipulation mechanism and a significant-harm outcome (harm caused, or reasonably likely to be caused); one without the other does not trigger the prohibition.

Examples commonly flagged under (a) and (b)

  • Dark-pattern checkout flows optimised by an AI system that hide costs or coerce consent — and that cause financial harm at scale.
  • Persuasion engines targeting minors or people in severe financial distress with high-interest product offers.
  • Voice assistants that mimic a trusted contact to induce action the user would not otherwise take.

Normal ad-tech personalisation is not automatically captured. The bar is the statute's combined "materially distorting behaviour" and "significant harm" thresholds: a nudge toward one of several reasonable options is not the same as manipulation that leads someone to a decision against their own interests and harms them as a result.

Art 5(1)(c): Social scoring

Point (c) prohibits AI-driven social scoring systems that classify people based on social behaviour or personal characteristics and then produce detrimental treatment in contexts unrelated to where the data was collected, or treatment that is disproportionate to the behaviour. This is the EU's answer to China-style generalised citizen scoring.

Narrowly scoped creditworthiness or fraud-detection systems are not captured here; those are governed by the high-risk regime in Annex III, not the Article 5 prohibition. The prohibition's distinguishing feature is cross-domain detrimental treatment: using someone's social-media activity to refuse them a hospital bed, for example.

Art 5(1)(d): Predictive policing based solely on profiling

Point (d) prohibits AI systems that assess or predict criminal-offence risk for a natural person based solely on profiling them or on their personality traits. "Solely" is load-bearing: an AI system that merely supports a human assessment already grounded in objective, verifiable facts directly linked to a criminal activity is not captured.

The line separates forensic decision-support (allowed, under Annex III high-risk rules) from pure-profiling risk scores (banned). If your system can output a risk score without any case-specific objective evidence being entered, (d) bites.

Art 5(1)(e): Untargeted facial-image scraping

Point (e) prohibits AI systems that create or expand facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage. This was written against products like Clearview AI and its successors — mass collection without a specific targeting purpose is banned.

Targeted biometric collection for lawful purposes (e.g. an employer enrolling staff in a building-access system with their consent) is not captured. The prohibition is on the untargeted-scraping build process, not on all facial recognition.

Art 5(1)(f): Emotion recognition at work or in education

Point (f) prohibits AI systems that infer emotions of natural persons in workplace and education settings. The statute carves out medical and safety uses (stress monitoring in an air-traffic-control seat is the archetypal example), but the default in HR, call-centre quality, and classroom contexts is that emotion recognition is banned.

Note on call-centre sentiment analysis

AI systems analysing agent voice tone or word choice to infer emotion fall under (f) unless they are deployed for a narrow medical or safety reason. Quality-assurance sampling that infers an emotional state to score the interaction is squarely the kind of use (f) rules out.

Art 5(1)(g): Sensitive biometric categorisation

Point (g) prohibits biometric categorisation systems that individually categorise natural persons based on biometric data to deduce or infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation. The list closely tracks the special categories of GDPR Article 9, rendered in AI-system form.

Labelling or filtering of lawfully acquired biometric datasets (e.g. training-data quality work) is not captured, nor is categorisation of biometric data in the area of law enforcement. The prohibition targets the inference step: taking a biometric input and outputting a sensitive-category attribute.

Art 5(1)(h): Real-time remote biometric identification in public spaces

Point (h) prohibits the use of real-time remote biometric identification (RBI) in publicly accessible spaces for law-enforcement purposes. Three narrowly defined exceptions exist (targeted search for victims of abduction, trafficking, or sexual exploitation, or for missing persons; prevention of an imminent threat to life or a terrorist attack; localisation of suspects of serious crimes listed in Annex II). The exceptions require prior authorisation by a judicial authority or an independent administrative authority whose decision is binding (Art 5(3)), a fundamental-rights impact assessment, and registration in the EU database.

In duly justified situations of urgency, Art 5(3) allows use to commence without prior authorisation, provided authorisation is requested within 24 hours; if authorisation is rejected, use stops immediately and all resulting data, results, and outputs must be deleted. Each use must also be notified to the national market surveillance authority and data protection authority under Art 5(4).
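
The conditional structure of this urgency carve-out is easy to misread in prose, so here is a deliberately simplified Python sketch of the decision flow just described. The type and function names and the outcome strings are ours, purely illustrative; the sketch compresses Art 5(3) to its skeleton and is not a compliance tool.

    from dataclasses import dataclass

    @dataclass
    class UrgentRbiUse:
        """One urgent real-time RBI deployment begun without prior authorisation."""
        authorisation_requested_within_24h: bool
        authorisation_granted: bool  # binding decision of the judicial/administrative authority

    def art5_3_outcome(use: UrgentRbiUse) -> str:
        # Urgency lets use begin without authorisation, but the request must
        # still be filed without undue delay, at the latest within 24 hours.
        if not use.authorisation_requested_within_24h:
            return "condition breached: stop the use"
        if not use.authorisation_granted:
            # A rejection works retroactively on everything produced.
            return "stop immediately; delete all data, results and outputs"
        return "use may continue under the granted authorisation"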

Purely commercial deployments fall outside (h): the prohibition targets use for law-enforcement purposes and binds the authorities and the Member States authorising that use. Commercial biometric identification in public spaces is governed elsewhere (the high-risk rules and GDPR Art 9).

When does Article 5 apply?

Article 5 (together with the AI-literacy duty in Article 4) has been enforceable since 2 February 2025 [src]. The rest of the Regulation ramps in later (most high-risk and transparency rules apply from 2 August 2026 [src]); the prohibitions came first.

If you have an AI system live in the EU today that falls under any of (a)–(h), you are already past the deadline. Removing the practice (or the product features that trigger the prohibition) is the only remediation path.

Penalties for violating Article 5

Non-compliance with Article 5 triggers the Regulation's highest fine tier: up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher [src]. This is strictly higher than the operator-obligation tier (EUR 15M / 3%) that covers most other Regulation breaches.

SMEs and start-ups benefit from the inverted SME cap: the lower of the absolute figure or the percentage applies, not the higher [src]. For a start-up with EUR 2 million in turnover, the ceiling is EUR 140,000.
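
To make the two caps concrete, the following minimal Python sketch reproduces the ceiling arithmetic described above. The function name and signature are ours for illustration; the actual fine imposed within the ceiling depends on aggravating and mitigating factors the sketch ignores.

    def article5_fine_ceiling(worldwide_turnover_eur: float, is_sme: bool) -> float:
        """Illustrative fine ceiling for an Article 5 breach.

        Large undertakings: the HIGHER of EUR 35M or 7% of worldwide annual turnover.
        SMEs and start-ups: the LOWER of the two figures (the inverted cap).
        """
        absolute_cap = 35_000_000                      # EUR 35 million
        turnover_cap = 0.07 * worldwide_turnover_eur   # 7% of turnover
        return min(absolute_cap, turnover_cap) if is_sme else max(absolute_cap, turnover_cap)

    # The worked example from the text: a start-up with EUR 2M turnover.
    print(article5_fine_ceiling(2_000_000, is_sme=True))       # 140000.0
    # A large provider with EUR 1bn turnover: 7% exceeds the absolute figure.
    print(article5_fine_ceiling(1_000_000_000, is_sme=False))  # 70000000.0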

Check whether your AI stack triggers a prohibition

Scan your AI stack to see whether any of your use cases or services fall into the unacceptable-risk tier. Free, no signup.

Scan Your AI Stack Free

This article explains Article 5 of the EU AI Act (Regulation 2024/1689). It is not legal advice. The boundaries of each prohibition (particularly the "materially distorting behaviour" threshold in (a) and (b), the contextual-appropriateness test in (c), the "solely on profiling" test in (d), and the workplace/education scope in (f)) are interpretive and depend on your specific context. Consult qualified counsel for formal compliance assessment.