Article 27 of the EU AI Act: FRIA Explained
Article 27 creates the Fundamental Rights Impact Assessment — the deployer-side duty that kicks in before you first use certain high-risk AI systems. This is what it requires, who is on the hook, and how it relates to the DPIA you may already run under data-protection law.
What Article 27 requires
Before the first deployment of a covered high-risk AI system, the deployer must perform a Fundamental Rights Impact Assessment (FRIA) covering six statutory elements — process, period, affected persons, specific risks, human oversight, and mitigations. The FRIA must be notified to the market surveillance authority using a questionnaire template developed by the AI Office.
The six FRIA elements under Art 27(1)
- (a) Deployer processes. How the high-risk AI system will be used, in line with its intended purpose.
- (b) Period and frequency. Within what time window and how often the system will run.
- (c) Affected persons. Categories of natural persons and groups likely to be affected.
- (d) Specific risks of harm. Informed by the provider's transparency information (Art 13).
- (e) Human oversight measures. Implemented per the provider's instructions for use, aligned with Art 26 deployer duties.
- (f) Mitigations and complaint mechanism. What happens if risks materialise — internal governance and complaint intake.
The FRIA is a living document. Art 27(2) requires the deployer to update it whenever any element changes materially — a new process, new affected groups, a new provider version, a new oversight arrangement. Treating the FRIA as a one-off artefact filed before launch misreads the statute.
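If you run the FRIA out of an internal compliance tool, modelling the six elements plus a review log makes the living-document duty hard to forget. A minimal Python sketch; the class and field names are our own shorthand, not statutory language:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FriaRecord:
    """Internal record mirroring the six Art 27(1) elements; names are shorthand."""
    deployer_processes: str        # (a) how the system is used, per intended purpose
    period_and_frequency: str      # (b) time window and how often it runs
    affected_persons: list[str]    # (c) categories of persons and groups likely affected
    specific_risks: list[str]      # (d) risks of harm, informed by Art 13 information
    human_oversight: str           # (e) oversight measures per the instructions for use
    mitigations: str               # (f) measures if risks materialise, incl. complaint intake
    review_log: list[tuple[date, str]] = field(default_factory=list)

    def record_update(self, element: str, note: str) -> None:
        """Art 27(2): log every change-triggered update so the FRIA stays current."""
        self.review_log.append((date.today(), f"{element}: {note}"))
```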
Who is on the hook?
Art 27(1) restricts the duty to specific categories of deployers. You need a FRIA before first deployment if any one of the following applies (the sketch after this list encodes the same test):
- You are a body governed by public law or a private entity providing public services, and you deploy any high-risk AI system other than a critical-infrastructure system.
- You deploy a high-risk AI system used for creditworthiness assessment or credit scoring (Annex III point 5(b)).
- You deploy a high-risk AI system used for life or health insurance risk assessment and pricing (Annex III point 5(c)).
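The trigger test itself is mechanical. A sketch of the same three-branch check in Python, assuming you have already classified the deployer and the Annex III category (that classification is the actual legal work, and the flag names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    # Illustrative input flags; establishing each one is the hard legal question.
    public_body_or_public_services: bool  # public-law body, or private entity providing public services
    critical_infrastructure: bool         # the carve-out for critical-infrastructure systems
    credit_scoring: bool                  # Annex III point 5(b)
    life_health_insurance: bool           # Annex III point 5(c)

def fria_required(d: Deployment) -> bool:
    """The three-branch Art 27(1) trigger test from the list above."""
    public_sector_branch = d.public_body_or_public_services and not d.critical_infrastructure
    return public_sector_branch or d.credit_scoring or d.life_health_insurance
```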
"Private entity providing public services" bites wider than expected
The category covers more than obvious government contractors. Because the FRIA duty sits with the deployer, it reaches the customers of ordinary SaaS products whenever the customer uses the system to deliver a service to citizens: schools using an AI-graded assessment tool, hospitals using an AI triage system, benefits agencies using eligibility scoring. National-law classification of "body governed by public law" varies by Member State; when the status is ambiguous, running the FRIA is the lower-risk path.
FRIA is not a DPIA in disguise
Art 27(4) directs the deployer to complement rather than replace any DPIA already conducted under the General Data Protection Regulation. The two assessments answer different questions and sit under different statutes.
| Dimension | DPIA (data-protection law) | FRIA (EU AI Act Art 27) |
|---|---|---|
| Focus | Personal-data processing risks | Fundamental-rights risks from AI use |
| Trigger | High-risk processing of personal data | Public-sector deployer or credit-scoring / insurance deployment |
| Who runs it | Data controller | Deployer of the AI system |
| Interaction | Stands alone under GDPR | Complements the DPIA under Art 27(4) |
The fundamental-rights analysis (affected groups, discrimination risk, human-oversight adequacy) is distinct and must stand on its own — a DPIA that focuses on personal-data lawful basis does not answer whether the AI system disadvantages particular groups.
Notifying the market surveillance authority
Art 27(3) requires the deployer to notify the market surveillance authority of the FRIA results, submitting the filled-out questionnaire based on the template developed by the AI Office (Art 27(5)). Deployers operating under the exceptional-circumstances derogation in Art 46(1) may be exempt from the notification duty. Retain any submission acknowledgment as evidence.
Until the AI Office publishes the template, deployers structure the FRIA around the six Art 27(1) elements and keep it ready to transfer to the template once published. Starting with a structured six-section report makes the later template mapping mechanical.
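Since the AI Office questionnaire does not exist yet, any mapping is provisional; the sketch below uses hypothetical template keys purely to show that a six-section report (the `FriaRecord` above) transfers mechanically once the real field names are known:

```python
# Hypothetical template keys: the AI Office questionnaire is not yet published,
# so the right-hand names are placeholders, not real field names.
TEMPLATE_MAP = {
    "deployer_processes":   "Q1_intended_use",
    "period_and_frequency": "Q2_period_frequency",
    "affected_persons":     "Q3_affected_groups",
    "specific_risks":       "Q4_risks_of_harm",
    "human_oversight":      "Q5_oversight_measures",
    "mitigations":          "Q6_mitigations_and_complaints",
}

def to_questionnaire(fria: "FriaRecord") -> dict[str, object]:
    """Flatten the six-section record into template-shaped answers."""
    return {key: getattr(fria, section) for section, key in TEMPLATE_MAP.items()}
```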
When does Article 27 apply?
Article 27 applies from 2 August 2026 together with the rest of the high-risk regime [src]. If you will deploy a covered high-risk system on or after that date, the FRIA must exist and be notified before the first use.
For systems deployed before 2 August 2026 in scenarios that trigger Art 27, the FRIA must be in place by the general-application date. Waiting until the first material change after the deadline is not an option — the duty is "prior to deploying" (Art 27(1)).
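On that reading, the binding date reduces to a comparison. A sketch, where `first_use` is the date of first deployment in a covered scenario:

```python
from datetime import date

GENERAL_APPLICATION = date(2026, 8, 2)  # high-risk regime application date

def fria_deadline(first_use: date) -> date:
    """Date by which the FRIA must be in place, on the reading above:
    before first use if that falls on or after 2 Aug 2026, otherwise by
    the general-application date for systems already in use."""
    return max(first_use, GENERAL_APPLICATION)
```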
Penalties for skipping a required FRIA
Failing to perform a required FRIA is a deployer-obligation breach. Fines fall under the operator-obligations tier: up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher [src]. SMEs and start-ups benefit from the inverted SME cap — the lower of the absolute figure or the percentage applies instead.
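The cap arithmetic is worth making concrete. A sketch of the operator-tier ceiling with the SME inversion described above; it assumes the EUR 15 million / 3% tier and nothing else about the case:

```python
def max_fine_eur(worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Operator-tier ceiling: EUR 15m or 3% of worldwide annual turnover,
    the higher of the two, inverted to the lower of the two for SMEs."""
    absolute = 15_000_000.0
    percentage = 0.03 * worldwide_turnover_eur
    return min(absolute, percentage) if is_sme else max(absolute, percentage)

# A deployer with EUR 600m turnover faces a cap of EUR 18m;
# an SME at the same turnover would instead cap at EUR 15m.
print(max_fine_eur(600_000_000, is_sme=False))  # 18000000.0
```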
The FRIA is also an evidentiary artefact. On a reasoned competent-authority request, you are expected to produce the completed assessment, the notification acknowledgment, and the review log showing any change-triggered updates.
Generate a FRIA tailored to your deployment
Our FRIA template walks you through the six statutory elements in under a minute — free, no credit card required.
This article explains Article 27 of the EU AI Act (Regulation 2024/1689). It is not legal advice. National-law classification of "body governed by public law" and "private entity providing public services" varies by Member State. Whether a completed DPIA already discharges elements of the FRIA depends on how closely that DPIA mapped the fundamental-rights impact of the AI system. Consult qualified counsel for formal applicability assessment.