EU AI Act Article Explainer

Article 50 of the EU AI Act: Transparency Obligations Explained

Article 50 is the transparency layer of the EU AI Act: it applies regardless of risk level and catches almost every company that puts AI in front of EU users. Here is what it requires, who it applies to, and what to do before the obligations become generally applicable on 2 August 2026.

What Article 50 requires

Article 50 creates four distinct transparency duties that travel with AI systems interacting with natural persons in the EU. The obligations are not tied to risk tier: even AI systems entirely outside the high-risk category must comply if they fall within any of the four sub-obligations below.

The four Article 50 sub-obligations at a glance

  • Art 50(1) — AI interaction disclosure. Providers must ensure AI systems designed to interact with natural persons make that AI nature clear to users.
  • Art 50(2) — AI-generated content marking. Providers of generative AI must mark synthetic text, image, audio, and video as AI-generated in a machine-readable format.
  • Art 50(3) — Emotion recognition / biometric categorisation disclosure. Deployers must inform natural persons when these systems are being used on them.
  • Art 50(4) — Deep fake disclosure. Deployers must disclose when AI-generated or manipulated content constitutes a deep fake.

Art 50(1) and Art 50(3) both carry a narrow law-enforcement carve-out: AI systems authorised by law to detect, prevent, investigate, or prosecute criminal offences are outside these disclosure duties, subject to safeguards for the rights of third parties. For commercial products this exception does not apply.

The provider-side duties (Art 50(1) and Art 50(2)) and the deployer-side duties (Art 50(3) and Art 50(4)) sit on different actors — a chatbot company acting as a deployer is only on the hook for (3) and (4) in most cases, while the provider of the underlying model carries (1) and (2). Multi-actor scenarios are common for SaaS tools using OpenAI, Anthropic, or Google's APIs.

Art 50(1): Tell users they're interacting with AI

The most common sub-obligation. If your product includes a chatbot, a voice assistant, an AI-driven support agent, or any AI system that converses with users, you must ensure the system makes its AI nature clear, unless that is already obvious from the circumstances to a reasonably well-informed, observant user.

What the disclosure looks like in practice

  • A banner above a chat UI: "You are chatting with an AI assistant."
  • A persistent label in a voice assistant: "AI voice — powered by [provider]."
  • A first-message disclosure: "Hi, I'm an AI-powered support agent. Type 'human' to connect with a person." (A minimal implementation sketch follows this list.)
  • Obvious, unmistakable AI branding on the product surface, such that no reasonable user would mistake the interaction for a human one.
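The first-message pattern is easy to enforce in the chat backend itself rather than relying on the model to announce itself. A minimal sketch in Python (the function and message type here are hypothetical illustrations, not from any specific framework):

```python
from dataclasses import dataclass

AI_DISCLOSURE = (
    "Hi, I'm an AI-powered support agent. "
    "Type 'human' to connect with a person."
)

@dataclass
class Message:
    role: str  # "user" or "assistant"
    text: str

def reply_with_disclosure(history: list[Message], model_reply: str) -> list[Message]:
    """Return the updated transcript, guaranteeing the Art 50(1) notice
    is delivered no later than the first assistant turn (Art 50(5))."""
    messages = list(history)
    # If the assistant has never spoken in this conversation, lead with the notice.
    if not any(m.role == "assistant" for m in messages):
        messages.append(Message("assistant", AI_DISCLOSURE))
    messages.append(Message("assistant", model_reply))
    return messages

# The first model reply in a fresh conversation is always preceded by the notice.
for m in reply_with_disclosure([], "Sure, I can help with your invoice."):
    print(f"{m.role}: {m.text}")
```

Enforcing the disclosure in code, rather than in the system prompt, means a creative model output can never silently drop it.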

What doesn't satisfy the obligation

A disclosure buried in terms of service accepted months before the interaction does not count. Art 50(5) applies to all four sub-obligations and requires the information to be provided in a clear and distinguishable manner "at the latest at the time of the first interaction or exposure." A generic "powered by AI" footer on a site is typically not enough on its own; the notice has to reach the user at the point where they start interacting.

Art 50(2): Mark AI-generated content (provider duty)

Providers of AI systems generating synthetic audio, image, video, or text must mark the outputs in a machine-readable format, so downstream systems can detect the content as artificially generated. The marking must be "effective, interoperable, robust, and reliable" and must survive common editing and sharing.

In practice this typically means C2PA (Coalition for Content Provenance and Authenticity) metadata for images and video, watermarking or metadata for text, and similar approaches for audio. This is a provider-side duty — if you are deploying a third-party model, the provider is responsible for the marking, but as the deployer you must not remove or disable it.
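As a toy illustration of the machine-readable idea (production systems would typically attach a signed C2PA manifest rather than bare metadata, which is trivially stripped), here is a sketch that embeds an AI-generation marker in PNG text chunks using Pillow. The chunk keys are invented for this example; the DigitalSourceType URI is IPTC's published vocabulary value for AI-generated media:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# IPTC's controlled-vocabulary value for fully AI-generated media.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_png_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable AI-generation marker as PNG text chunks.

    Bare metadata does not survive re-encoding or screenshots, so a real
    Art 50(2) implementation layers C2PA manifests and/or watermarking on
    top; this only demonstrates the 'machine-readable' requirement.
    """
    img = Image.open(src_path)
    info = PngInfo()
    info.add_text("ai_generated", "true")  # illustrative key, not a standard
    info.add_text("digital_source_type", TRAINED_ALGORITHMIC_MEDIA)
    img.save(dst_path, pnginfo=info)

mark_png_as_ai_generated("output.png", "output_marked.png")
```

The gap between this sketch and the statutory bar ("effective, interoperable, robust, and reliable") is exactly why providers are converging on C2PA plus watermarking rather than metadata alone.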

A Commission proposal under the Digital Omnibus on AI would introduce a grace period for this obligation; the proposal is still working its way through the legislative process. Under current law, the obligation applies from 2 August 2026.

Art 50(3): Disclose emotion-recognition and biometric categorisation

Deployers of emotion-recognition systems and biometric-categorisation systems must inform the natural persons exposed — before the exposure. If your AI analyses facial expressions, voice tone, body language, or physiological signals to infer emotion, the people being analysed must know.

Check the Art 5 prohibition first

Certain uses of emotion recognition (in the workplace and in educational settings) are prohibited outright under Article 5. A disclosure notice does not permit those uses. Confirm your use case is not prohibited before implementing the Art 50(3) notice.

The notice must be clear, timely, and specific. A support-centre script saying that calls "may be analysed for quality purposes" is weaker than the statute expects: tell the user explicitly that emotion recognition is running, what is processed, and for how long. A minimal sketch of such a notice follows.
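To make "specific" concrete, here is a small Python sketch of a pre-exposure notice object; the field names and wording are illustrative choices, not a statutory template:

```python
from dataclasses import dataclass

@dataclass
class EmotionRecognitionNotice:
    purpose: str        # why emotion recognition is running
    signals: list[str]  # what is being processed
    retention: str      # how long results are kept

    def render(self) -> str:
        return (
            "This call uses an AI emotion-recognition system.\n"
            f"Purpose: {self.purpose}\n"
            f"Signals analysed: {', '.join(self.signals)}\n"
            f"Retention: {self.retention}"
        )

# Display or read this out BEFORE any analysis starts (Art 50(3)).
notice = EmotionRecognitionNotice(
    purpose="routing frustrated callers to a senior agent",
    signals=["voice tone", "speech tempo"],
    retention="scores deleted within 30 days",
)
print(notice.render())
```

Structuring the notice as data rather than free text also makes it auditable: you can log exactly which version of the notice each person saw, and when.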

Art 50(4): Disclose deep fakes

Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content is AI-generated or manipulated. The disclosure must be "clearly visible and recognisable" to the average person.

A watermark, a caption, or a persistent label stating "AI-generated content" are typical approaches. For content that forms part of an "evidently artistic, creative, satirical, fictional or analogous work", the duty is limited rather than lifted: you must still disclose the existence of the generated or manipulated content in an appropriate way that does not hamper display or enjoyment of the work. The boundary is tight, and when in doubt, disclose fully.
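For the visible-label route, a minimal Pillow sketch that burns a label strip into the image; the wording, placement, and styling are illustrative choices, not a prescribed format:

```python
from PIL import Image, ImageDraw

def add_visible_ai_label(src_path: str, dst_path: str,
                         label: str = "AI-generated content") -> None:
    """Overlay a clearly visible AI-generation label onto an image.

    Art 50(4) is about human-visible disclosure, so size, contrast, and
    placement should be chosen so an average viewer cannot miss it.
    """
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    strip_height = max(24, img.height // 20)  # opaque strip keeps text legible
    draw.rectangle(
        [0, img.height - strip_height, img.width, img.height],
        fill=(0, 0, 0),
    )
    draw.text((10, img.height - strip_height + 5), label, fill=(255, 255, 255))
    img.save(dst_path)

add_visible_ai_label("deepfake.png", "deepfake_labeled.png")
```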

Note that Art 50(2) machine-readable marking and Art 50(4) visible disclosure are separate duties. For a deep-fake product you may need both: the metadata (provider duty) and the visible label (deployer duty).

When does Article 50 apply?

Article 50 applies from 2 August 2026, the date the EU AI Act becomes generally applicable. Starting from that date, the transparency disclosures must be in place for every AI system to which Article 50 applies [src].

Because Article 50 applies regardless of risk level, this is the single most universal obligation in the EU AI Act. If you deploy any AI system that interacts with people in the EU, the clock is running — generate a transparency notice tailored to your stack.

Penalties for non-compliance

Failing to meet Article 50's transparency obligations is an operator-obligation breach. Fines run up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher [src]; for a company with EUR 1 billion in turnover, that is a cap of EUR 30 million. For SMEs and start-ups, the cap is the lower of the two figures rather than the higher, a material reduction for small companies: an SME with EUR 100 million in turnover is capped at EUR 3 million (3% of turnover) instead of EUR 15 million.

Enforcement is by Member State market surveillance authorities, with the AI Office coordinating cross-border cases. Expect proceedings to take months, not weeks; that is enough time to remediate documented issues but not enough to start from scratch.

Generate your Article 50 transparency notice

Scan your AI stack, and we'll generate a transparency notice tailored to your specific services, use cases, and role. Free, no credit card required.

Scan Your AI Stack Free

This article explains Article 50 of the EU AI Act (Regulation 2024/1689). It is not legal advice. The scope of "obvious from the circumstances" in Art 50(1) and the boundaries of the artistic-creative exception in Art 50(4) are interpretive and depend on your specific context. Consult qualified counsel for formal compliance assessment.