High-Risk Deployer

Your EU AI Act Obligations

As a deployer (you use AI systems built by others) of a high-risk AI stack, you have 5 obligations under the EU AI Act.

Obligations: 5
Days until deadline: 102
Risk level: High
AIActStack.com

Your Obligations

Want the full picture? Read our complete deployer obligations guide.

Start here · Highest-priority obligation

Deployer Duties for High-Risk Systems

Article 26 →

Deployers of high-risk AI systems must:
- take appropriate technical and organisational measures to use the system in accordance with its instructions for use (Art. 26(1));
- assign human oversight to natural persons with the necessary competence, training, and authority (Art. 26(2));
- ensure input data is relevant and representative, to the extent they exercise control over it (Art. 26(4));
- monitor operation, suspend use where a serious incident is suspected, and inform the provider, distributor, and authorities (Art. 26(5));
- keep automatically generated logs for at least six months, unless otherwise required (Art. 26(6));
- inform workers' representatives and affected workers before deploying the system in the workplace (Art. 26(7));
- register the use of the system in the EU database (Art. 26(8)); and
- carry out a data protection impact assessment (DPIA) where required (Art. 26(9)).
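Of these duties, the log-retention rule (Art. 26(6)) maps most directly onto an engineering task. A minimal sketch of a retention job that never deletes logs younger than the six-month floor, assuming one JSONL file per day named YYYY-MM-DD.jsonl (our own layout convention, not anything the Act prescribes):

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # "at least six months" (Art. 26(6)), with a small margin


def prune_logs(log_dir: Path, now: datetime) -> list[str]:
    """Delete only logs older than the retention floor; return the names removed.

    Assumes daily JSONL files named YYYY-MM-DD.jsonl — an illustrative
    scheme; adapt the parsing to your real log layout.
    """
    removed = []
    for f in sorted(log_dir.glob("*.jsonl")):
        logged_on = datetime.strptime(f.stem, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if now - logged_on > RETENTION:
            f.unlink()
            removed.append(f.name)
    return removed
```

Note that Art. 26(6) sets a minimum, not a maximum: the job only removes files once they are safely past the floor, and other law (e.g. sectoral record-keeping rules) may require keeping them longer.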

~24h to implement · Critical

Disclose AI Interaction to Users

Article 50 →

Providers and deployers of certain AI systems must comply with the transparency obligations in Art. 50, including:
- informing natural persons that they are interacting with an AI system (Art. 50(1));
- marking synthetic audio, image, video, and text outputs in a machine-readable format (Art. 50(2));
- informing persons exposed to emotion-recognition or biometric-categorisation systems (Art. 50(3)); and
- disclosing AI-generated deep fakes and AI-generated text published on matters of public interest (Art. 50(4)).
The information must be given clearly, at the first interaction at the latest (Art. 50(5)).
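As a sketch of what Art. 50(1), (2), and (5) can look like in practice for a chat-style API: a machine-readable flag on every generated message, plus a human-readable notice shown once, at the first interaction of a session. The JSON envelope and its field names are our own illustration, not a format the Act mandates:

```python
import json

# Art. 50(1)/(5): human-readable notice, shown at the first interaction.
AI_DISCLOSURE = "You are interacting with an AI system."


def wrap_response(session: dict, text: str) -> str:
    """Wrap generated text in an envelope carrying transparency metadata.

    The envelope format and the "generator" identifier are illustrative
    assumptions; only the underlying duties come from Art. 50.
    """
    envelope = {
        "content": text,
        "generated_by_ai": True,          # Art. 50(2): machine-readable marking
        "generator": "example-model-v1",  # hypothetical model identifier
    }
    if not session.get("disclosed"):      # disclose once, at first interaction
        session["disclosed"] = True
        envelope["notice"] = AI_DISCLOSURE
    return json.dumps(envelope)
```

For media outputs (audio, image, video), machine-readable marking in the Art. 50(2) sense typically means embedded provenance metadata or watermarking rather than a JSON flag; the pattern above only covers text responses.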

~8h to implement · Critical

Value-Chain Actor Treated as Provider

Article 25 →

A distributor, importer, deployer, or any other third party is treated as the provider of a high-risk AI system, and becomes subject to all Art. 16 provider obligations, in any of three circumstances:
- they put their name or trademark on a high-risk AI system already on the market (absent a contractual allocation of obligations);
- they make a substantial modification to a high-risk AI system already on the market; or
- they modify the intended purpose of a non-high-risk AI system, including a GPAI system, such that it becomes high-risk.
When one of these happens, the original provider ceases to be the provider for that specific system but must cooperate, share documentation, and provide reasonably expected technical access to the new provider. Product manufacturers whose regulated product contains a high-risk AI safety component are treated as providers of that AI system under additional conditions.
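The three triggers lend themselves to a simple triage helper (a checklist aid, not legal advice; the function and its flag names are our own):

```python
def provider_triggers(rebranded: bool,
                      substantially_modified: bool,
                      repurposed_to_high_risk: bool) -> list[str]:
    """Return which Art. 25(1) grounds, if any, make a value-chain actor the provider.

    rebranded: own name/trademark put on a high-risk system already on the
        market, with no contractual allocation saying otherwise;
    substantially_modified: substantial modification of a high-risk system
        already on the market;
    repurposed_to_high_risk: intended purpose of a non-high-risk system
        (including GPAI) changed so that it becomes high-risk.
    """
    triggers = []
    if rebranded:
        triggers.append("Art. 25(1)(a): own name/trademark on a marketed high-risk system")
    if substantially_modified:
        triggers.append("Art. 25(1)(b): substantial modification of a marketed high-risk system")
    if repurposed_to_high_risk:
        triggers.append("Art. 25(1)(c): intended purpose changed so the system becomes high-risk")
    return triggers
```

Any non-empty result means the full Art. 16 provider obligation set applies to you for that system; whether a given change counts as "substantial modification" is a legal judgment the helper cannot make.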

~20h to implement · Critical

You have 5 obligations and 102 days left. Track your progress.


AI Literacy for Staff

Article 4 →

Providers and deployers must take measures to ensure a sufficient level of AI literacy among staff and any other persons dealing with the operation and use of AI systems on their behalf, having regard to their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used.

~8h to implement · Important

Fundamental Rights Impact Assessment (FRIA)

Article 27 →

Before deploying a high-risk AI system referred to in Art. 6(2), with the exception of Annex III point 2 systems, deployers that are bodies governed by public law or private entities providing public services, and deployers of Annex III points 5(b) and (c) systems, must perform a Fundamental Rights Impact Assessment (FRIA). The FRIA must describe:
- the deployer's processes in which the system will be used;
- the period and frequency of use;
- the categories of natural persons and groups likely to be affected;
- the specific risks of harm (taking into account the information the provider supplies under Art. 13);
- the implementation of human-oversight measures per the instructions for use; and
- the measures to be taken if those risks materialise, including internal governance and complaint mechanisms.
The FRIA applies to the first use, may be reused across similar cases (including by relying on the provider's own impact assessment), must be updated when any element changes, must be notified to the market surveillance authority using a template from the AI Office, and complements, rather than replaces, an Art. 35 GDPR DPIA or Art. 27 LED assessment where applicable.
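For internal tracking, the Art. 27(1) elements can be captured as a structured record. The field names below are our own shorthand for those elements; the official notification template will come from the AI Office:

```python
from dataclasses import dataclass


@dataclass
class FRIARecord:
    """Skeleton of the Art. 27(1) FRIA elements (illustrative field names)."""
    deployer_processes: str     # processes in which the system will be used
    period_and_frequency: str   # intended period and frequency of use
    affected_groups: list[str]  # categories of persons/groups likely affected
    specific_risks: list[str]   # risks of harm, drawing on the provider's Art. 13 info
    oversight_measures: str     # human-oversight measures per the instructions for use
    mitigation_measures: str    # steps if risks materialise, incl. governance/complaints

    def is_complete(self) -> bool:
        """True only when every element has been filled in."""
        return all([
            self.deployer_processes,
            self.period_and_frequency,
            self.affected_groups,
            self.specific_risks,
            self.oversight_measures,
            self.mitigation_measures,
        ])
```

A record like this is a working document, not the filing itself: it still has to be kept up to date when any element changes and transferred into the AI Office template for notification.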

~16h to implement · Important

Provider Request Emails

Send these to your AI providers to request the documentation you need for compliance.

Anthropic (Claude)

Subject: EU AI Act Compliance — Documentation Request for Anthropic (Claude)
Dear Anthropic Compliance Team,

We are writing to request documentation required under the EU AI Act (Regulation 2024/1689) for our use of your AI services. As a deployer of AI systems that incorporate your technology, we have specific compliance obligations that require information from you as the upstream provider.

Under the EU AI Act, we require the following:

1. TRANSPARENCY INFORMATION (Article 13 / Article 50)
   — Intended purpose and limitations of your AI models
   — Performance metrics and known biases
   — Information about training data characteristics

2. TECHNICAL DOCUMENTATION (Annex IV)
   — System architecture description
   — Design specifications and development methodology
   — Accuracy, robustness, and cybersecurity measures

3. CONFORMITY INFORMATION (Article 47)
   — Your EU Declaration of Conformity (if applicable)
   — CE marking status for high-risk AI system components
   — Any conformity assessment results

4. RISK MANAGEMENT (Article 9)
   — Known risks associated with your AI models
   — Recommended risk mitigation measures for deployers
   — Any usage restrictions or conditions

The EU AI Act enforcement deadline is August 2, 2026. We would appreciate receiving this documentation at your earliest convenience to ensure our compliance.

Please let us know if you have questions about this request or if there is a dedicated compliance contact we should work with.

Best regards,
[Your Name]
[Your Company]

---
Generated by AIActStack — EU AI Act compliance for AI-powered companies.
Scan your obligations free → https://aiactstack.com
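If you use several providers, the letter above can be parameterised rather than copied by hand. A minimal sketch using only the standard library, where the body is abridged and $provider is the only substitution:

```python
from string import Template

# Abridged version of the letter above; $provider is the only placeholder.
EMAIL = Template(
    "Subject: EU AI Act Compliance — Documentation Request for $provider\n"
    "Dear $provider Compliance Team,\n\n"
    "We are writing to request documentation required under the EU AI Act "
    "(Regulation 2024/1689) for our use of your AI services. "
    "[... numbered requests as above ...]\n\n"
    "Best regards,\n[Your Name]\n[Your Company]\n"
)


def request_email(provider: str) -> str:
    """Fill the template for one provider."""
    return EMAIL.substitute(provider=provider)
```

Keep copies of what you send and when: the replies (or the lack of them) are themselves evidence of your good-faith effort to obtain upstream documentation.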

Generate compliance documents automatically

Save your results to get AI-generated Article 50 transparency notices and DPIA templates, pre-filled with your specific AI stack details. Ready in seconds.

Track your compliance progress

Save these results to your dashboard. Generate compliance documents, track provider requests, and stay ahead of the deadline.

Create Account & Save

Related guides