EU AI Act Article Explainer

Article 4 of the EU AI Act: AI Literacy Explained

Article 4 is the only AI Act obligation that applies to every provider and deployer of an AI system, regardless of risk classification. It has been in force since 2 February 2025, well before the Act's general application date of 2 August 2026.

What Article 4 requires

Providers and deployers of AI systems must take measures to ensure, to their best extent, a sufficient level of AI literacy among staff and any other persons dealing with the operation and use of AI systems on their behalf, having regard to their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used.

"To their best extent" is a load-bearing qualifier: Art. 4 is an obligation of means, not result. Documented good-faith effort calibrated to context is what the law asks for, not a guarantee that every staff member achieves a fixed competence level.

Three things make Art. 4 unusual

  • No risk-tier gate. The duty applies whether your AI system is minimal, limited, high-risk, or general-purpose. A SaaS company calling OpenAI's API for an internal Q&A bot owes the same Art. 4 obligation as a hospital deploying a high-risk diagnostic system: calibrated to context and risk, but the duty itself is universal.
  • Already in force. Chapter I (which contains Art. 4) and Chapter II (Art. 5 prohibitions) applied from 2 February 2025. Most other deployer obligations wait until 2 August 2026; Art. 4 is enforceable now.
  • Both providers AND deployers. Art. 4 names both. If your team trains, fine-tunes, or modifies a model under your own authority you may be a provider; if you only call third-party APIs you are a deployer. Either way the literacy duty attaches to your staff.

Who counts as "staff handling AI on your behalf"

Art. 4's scope is broader than just engineers building the AI system. The text covers anyone "dealing with the operation and use" of the system. In practice this means:

  • Engineers and product managers deciding which AI features ship.
  • Customer-support staff escalating from the AI to a human, or interpreting AI-generated outputs.
  • Compliance and legal staff auditing the AI system's outputs against company policy.
  • Contractors, agencies, and outsourced operators who run AI on your behalf — Art. 4 catches "any other persons" handling the system, not just employees.

The required literacy level is calibrated to each role's technical knowledge, experience, education, and training, and to the context in which the AI is used. A customer-support agent does not need a machine-learning degree; they need enough understanding to recognise when the AI's output should not be trusted in their specific context.

What "sufficient" actually means (with no Commission guidance yet)

The Act does not define "sufficient" further than the calibration above. The Commission has not published Art. 4 guidance as of April 2026; CEN-CENELEC JTC 21 is drafting harmonised standards for the broader AI Act framework with publication targeted by Q4 2026, but no AI-literacy-specific harmonised standard exists.

In the absence of formal guidance, treat "sufficient" as a defensible-floor question: would your staff's literacy level survive a national authority's questions if a serious incident were investigated? A reasonable working answer for an SMB SaaS team:

  • Every staff member touching an AI feature can describe, in plain terms, what the AI system does and does not do — including its known failure modes.
  • Anyone deploying or operating the system understands the risk classification of their use case (minimal / limited / high-risk / prohibited under Art. 5) and the obligations attaching to it.
  • The training is documented — date, content covered, attendees — so an authority asking "how do you ensure literacy?" gets a concrete answer.
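One way to keep the documentation point auditable is a minimal structured record per training session. Nothing below is prescribed by the Act; the class and field names are illustrative of the evidence an authority is likely to ask for:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AILiteracyTrainingRecord:
    """Illustrative record of one Art. 4 training session.

    The field names are this sketch's own, not the Act's; they capture
    when the training happened, what it covered, and who attended.
    """
    held_on: date
    topics: list[str]        # e.g. system capabilities, failure modes, risk tier
    attendees: list[str]     # names or employee IDs
    materials_ref: str = ""  # link or path to the slides/notes used

records = [
    AILiteracyTrainingRecord(
        held_on=date(2026, 1, 15),
        topics=["what our Q&A bot does and does not do", "known failure modes"],
        attendees=["support team"],
    )
]

# "How do you ensure literacy?" then gets a concrete answer:
print(f"{len(records)} documented session(s), latest on {records[0].held_on}")
```

A spreadsheet with the same three columns serves the same purpose; the point is that the record exists and is dated.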

A note on certifications

Some vendors are marketing "Art. 4 certified" training programmes. There is currently no Commission-recognised AI literacy certification — the AI Office has neither approved nor accredited any private programme. Your training plan is up to you, and the documented evidence is what matters under audit.

Penalties

Art. 99 sets out the AI Act's penalty regime, but none of its specific tiers names Art. 4 directly: Art. 99(3) covers the Art. 5 prohibited practices, Art. 99(4) covers the operator obligations in Arts. 16, 22-24, 26, 31, 33-34, and 50, and Art. 99(5) covers the supply of incorrect information. Penalties for Art. 4 infringements therefore fall under the Member-State catch-all in Art. 99(1), which leaves Member States to set the rules for infringements not enumerated elsewhere, subject to the requirement that penalties be effective, proportionate, and dissuasive.

For SMEs, including start-ups, Art. 99(6) inverts the formula across all the explicit tiers: the cap is the lower of the percentage-of-turnover figure and the absolute amount, not the higher.
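To make the inversion concrete, here is a sketch of the cap arithmetic using the Art. 99(4) tier figures (EUR 15 million or 3% of worldwide annual turnover) and an illustrative turnover; other tiers use different figures:

```python
def fine_cap(turnover_eur: float, absolute_cap: float = 15_000_000,
             pct: float = 0.03, sme: bool = False) -> float:
    """Upper bound of a fine under an Art. 99 tier.

    Standard rule: whichever of the two figures is HIGHER.
    Art. 99(6) SME rule: whichever is LOWER.
    Defaults are the Art. 99(4) tier values.
    """
    percentage_cap = pct * turnover_eur
    if sme:
        return min(absolute_cap, percentage_cap)
    return max(absolute_cap, percentage_cap)

# Illustrative SaaS with EUR 5m annual turnover:
print(fine_cap(5_000_000, sme=False))  # -> 15000000
print(fine_cap(5_000_000, sme=True))   # -> 150000.0
```

For a small company the inversion matters enormously: the ceiling drops from the absolute figure to the (much smaller) percentage of its own turnover.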

In practice national authorities have signalled a graduated approach: documented good-faith effort plus a credible plan tends to land in the warning + deadline-to-remediate zone, not the fine zone. The financial ceiling exists for material non-engagement. For the full Art. 99 schedule and worked examples of the SME-cap inversion, see EU AI Act fines.

Common questions

We're a tiny team. Does Art. 4 still apply?

Yes. Art. 4 has no headcount threshold or SME exemption. The literacy duty scales to context — a 5-person SaaS does not need the literacy programme of a Fortune 500 — but the obligation itself attaches.

We just call OpenAI's API. Are we really a deployer?

Yes. Art. 3(4) defines a deployer as anyone using an AI system under their own authority, except for personal non-professional use. Calling a third-party API in your product makes you the deployer; the API provider is the provider. Your team is on the hook for Art. 4 literacy regardless of who trained the model.

We did GDPR training. Is that enough?

No. GDPR training covers personal data processing; Art. 4 covers AI system operation. There is overlap — both demand staff understand the data flows — but a GDPR module that does not address AI system failure modes, risk classification under the AI Act, or the deployer/provider distinction does not satisfy Art. 4.

Is there a recognised AI literacy certification?

Not yet. As of April 2026 the AI Office has neither approved nor accredited any private programme. Vendor "Art. 4 certified" badges are marketing. CEN-CENELEC JTC 21 has the harmonised-standards mandate but is targeting Q4 2026 for publication, and AI-literacy-specific standards are not first in line.

What's the simplest credible path to compliance?

A documented training plan covering: (1) what your AI features do and don't do; (2) the risk classification of each use case; (3) the obligations attaching to each role on your team; (4) when to escalate AI outputs to a human; (5) the date and attendees of each training session. Our AI literacy training (Article 4) is structured to satisfy these for SaaS teams using third-party AI.
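The five-point plan can double as a self-check. As a sketch only — the point labels below paraphrase this article's list and are not terms from the Act:

```python
# Illustrative self-check: does the documented plan cover the five points?
REQUIRED_POINTS = {
    "system capabilities and limits",    # (1) what the AI does and doesn't do
    "risk classification per use case",  # (2)
    "obligations per role",              # (3)
    "escalation to a human",             # (4)
    "session dates and attendees",       # (5)
}

def plan_gaps(plan: dict[str, str]) -> set[str]:
    """Return the required points the plan does not yet document."""
    return REQUIRED_POINTS - plan.keys()

draft_plan = {
    "system capabilities and limits": "covered in onboarding module 1",
    "escalation to a human": "support runbook section 4",
}
print(sorted(plan_gaps(draft_plan)))
# -> ['obligations per role', 'risk classification per use case', 'session dates and attendees']
```

An empty gap set is not a legal guarantee, but it maps directly onto the "concrete answer under audit" standard described above.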

Related