EU AI Act Provider Guide

EU AI Act Compliance for OpenAI, Anthropic & Google AI Users

By AIActStack · Published April 4, 2026 · Last updated April 17, 2026

You call OpenAI's API. Anthropic's. Google's. That makes you a "deployer" under the EU AI Act: a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity (Article 3(4)). Your providers are "providers" under the same Regulation. You each have obligations. The problem: some of yours require documentation from them. This guide breaks down what you need, from whom, and gives you the emails to ask for it.

1. The supply chain problem nobody talks about

The EU AI Act assigns obligations based on roles. When you use a third-party AI API, the Act creates a supply chain with at least two parties:

Provider (upstream)

The company that developed or trained the AI model and placed it on the market. OpenAI, Anthropic, Google, Mistral, Cohere, HuggingFace.

Deployer (you)

The company that uses the AI system under its own authority. You integrate the API into your product. You serve it to your users. Your obligations exist independently of your provider's.

Here's where it gets complicated. Several of your deployer obligations depend on information that only your provider has:

  • You need to conduct a risk assessment — but you need to know the model's known risks and limitations (Article 9)
  • You need to implement human oversight — but you need the provider's guidance on capabilities and limitations to do this effectively (Article 26(2))
  • You need to write transparency disclosures — but you need details about the AI system to inform your disclosure to users (Article 50)
  • For high-risk systems, you need a DPIA — which requires the provider's technical documentation as input (Article 26(9))

The regulation assumes this information flows from provider to deployer. In practice, most providers haven't set up structured compliance documentation programs yet. The result: you have obligations you can't fully meet without information you don't have.

The fix is straightforward. Ask for it. This guide gives you the exact emails.

Which providers are in your stack?

Scan your AI stack in 2 minutes. Get your obligations and provider-specific email templates.

Scan Your AI Stack Free →

2. The four categories of documentation you need

Every documentation request to your AI provider should cover these four areas. They map directly to specific articles in the regulation.

1. Transparency Information (Article 13 / Article 50)

Intended purpose and limitations of the AI model. Performance metrics and known biases. Information about training data characteristics. You need this regardless of risk level — Article 50 transparency obligations apply to all chatbots and content generation systems.

2. Technical Documentation (Annex IV)

System architecture description. Design specifications and development methodology. Accuracy, robustness, and cybersecurity measures. Required for high-risk systems. Annex IV specifies nine sections the provider must cover.

3. Conformity Information (Article 47)

EU Declaration of Conformity (if applicable). CE marking status for high-risk AI system components. Conformity assessment results. Relevant only if your use case is high-risk (Annex III).

4. Risk Management (Article 9)

Known risks associated with the AI model. Recommended risk mitigation measures for deployers. Usage restrictions or conditions. Critical for high-risk systems, valuable for any deployment.
Key point: If your use case is limited-risk (chatbot, content generation), you still need category 1 (transparency). If your use case is high-risk (hiring, credit scoring, medical), you need all four.

3. OpenAI (GPT-4, ChatGPT API)

OpenAI is the most common AI provider in SaaS stacks. If you use any GPT model via API, you're a deployer of OpenAI's system.

What OpenAI owes you (as a provider)

  • Transparency info about GPT model capabilities, limitations, and known biases (Article 13)
  • Technical documentation covering Annex IV requirements (for high-risk deployments)
  • Risk management information — known failure modes, recommended safeguards (Article 9)
  • Instructions for use that enable you to meet your own oversight obligations (Article 14)

What OpenAI has published so far

OpenAI publishes model cards and system cards for flagship models (GPT-4, GPT-4o), covering high-level capabilities, limitations, and safety evaluations. They provide a usage policy and API terms of service. However, these documents were not written to satisfy EU AI Act Article 13 or Annex IV requirements specifically. There are gaps — particularly around training data characteristics, formal risk assessment outputs, and conformity declarations.

What you owe regardless of OpenAI

Your obligation                        Article      Risk level
Disclose AI interaction to users       Art. 50(1)   Limited+
Label AI-generated content             Art. 50(2)   Limited+
Implement human oversight              Art. 26(2)   High only
Data protection impact assessment      Art. 26(9)   High only
Monitor operation & report incidents   Art. 26(5)   High only
Keep automatically generated logs      Art. 26(6)   High only
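The first obligation in that list — disclosing AI interaction under Art. 50(1) — is something you can ship today, without waiting on OpenAI. A minimal sketch in Python; the disclosure wording, field names, and response shape here are our own illustrative choices, not text prescribed by the Act:

```python
def with_ai_disclosure(reply_text: str, provider: str = "a third-party AI provider") -> dict:
    """Wrap a model reply with a user-facing Art. 50(1)-style disclosure.

    Wording and structure are illustrative assumptions, not language
    mandated by Regulation 2024/1689.
    """
    return {
        "message": reply_text,
        "disclosure": f"You are interacting with an AI system operated via {provider}.",
        "ai_generated": True,  # machine-readable flag for downstream consumers
    }

resp = with_ai_disclosure("Your order ships Tuesday.", provider="OpenAI")
print(resp["disclosure"])
```

Rendering the `disclosure` string prominently in your chat UI, rather than burying it in terms of service, is the point of the exercise.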

Email template: request documentation from OpenAI

Copy this and send it to OpenAI's compliance team. It references the specific articles and documentation categories.

Dear OpenAI (GPT-4, ChatGPT) Compliance Team,

We are writing to request documentation required under the EU AI Act (Regulation 2024/1689) for our use of your AI services. As a deployer of AI systems that incorporate your technology, we have specific compliance obligations that require information from you as the upstream provider.

Under the EU AI Act, we require the following:

1. TRANSPARENCY INFORMATION (Article 13 / Article 50)
   — Intended purpose and limitations of your AI models
   — Performance metrics and known biases
   — Information about training data characteristics

2. TECHNICAL DOCUMENTATION (Annex IV)
   — System architecture description
   — Design specifications and development methodology
   — Accuracy, robustness, and cybersecurity measures

3. CONFORMITY INFORMATION (Article 47)
   — Your EU Declaration of Conformity (if applicable)
   — CE marking status for high-risk AI system components
   — Any conformity assessment results

4. RISK MANAGEMENT (Article 9)
   — Known risks associated with your AI models
   — Recommended risk mitigation measures for deployers
   — Any usage restrictions or conditions

The EU AI Act enforcement deadline is August 2, 2026. We would appreciate receiving this documentation at your earliest convenience to ensure our compliance. 

Please let us know if you have questions about this request or if there is a dedicated compliance contact we should work with.

Best regards,
[Your Name]
[Your Company]

---
Generated by AIActStack — EU AI Act compliance for AI-powered companies.
Scan your obligations free → https://aiactstack.com

Using multiple AI providers?

The scanner generates separate email templates for each provider in your stack. Plus your full obligation checklist.

Scan Your AI Stack Free →

4. Anthropic (Claude)

Anthropic positions itself as a safety-focused lab. If you use Claude via API, you're a deployer and Anthropic is your provider.

What Anthropic owes you

Same four documentation categories as any provider: transparency (Art. 13/50), technical documentation (Annex IV), conformity information (Art. 47), and risk management (Art. 9).

What Anthropic has published

Anthropic publishes model cards for Claude models, usage policies, and has been transparent about safety evaluations and red-teaming results. They publish a Responsible Scaling Policy. Like OpenAI, these documents predate the EU AI Act's enforcement timeline and weren't structured to satisfy Article 13 or Annex IV specifically. The gap remains: formal risk assessment outputs, training data documentation, and conformity declarations are not publicly available in the format the regulation expects.

Where Anthropic is commonly used in high-risk contexts

Claude is increasingly used for HR screening (resume analysis, candidate evaluation) and internal compliance workflows. If you use Claude for any Annex III use case — hiring, credit decisions, medical triage — you're deploying a high-risk AI system and need all four documentation categories.

Email template: request documentation from Anthropic

Dear Anthropic (Claude) Compliance Team,

We are writing to request documentation required under the EU AI Act (Regulation 2024/1689) for our use of your AI services. As a deployer of AI systems that incorporate your technology, we have specific compliance obligations that require information from you as the upstream provider.

Under the EU AI Act, we require the following:

1. TRANSPARENCY INFORMATION (Article 13 / Article 50)
   — Intended purpose and limitations of your AI models
   — Performance metrics and known biases
   — Information about training data characteristics

2. TECHNICAL DOCUMENTATION (Annex IV)
   — System architecture description
   — Design specifications and development methodology
   — Accuracy, robustness, and cybersecurity measures

3. CONFORMITY INFORMATION (Article 47)
   — Your EU Declaration of Conformity (if applicable)
   — CE marking status for high-risk AI system components
   — Any conformity assessment results

4. RISK MANAGEMENT (Article 9)
   — Known risks associated with your AI models
   — Recommended risk mitigation measures for deployers
   — Any usage restrictions or conditions

The EU AI Act enforcement deadline is August 2, 2026. We would appreciate receiving this documentation at your earliest convenience to ensure our compliance. 

Please let us know if you have questions about this request or if there is a dedicated compliance contact we should work with.

Best regards,
[Your Name]
[Your Company]

---
Generated by AIActStack — EU AI Act compliance for AI-powered companies.
Scan your obligations free → https://aiactstack.com

5. Google (Gemini)

Google's Gemini API (the successor to the PaLM API and the Bard chatbot) is used across SaaS for content generation, search augmentation, and recommendations. If you use Gemini via API, Google is the provider and you're the deployer.

What Google has published

Google publishes AI Principles, model cards for Gemini, and has a dedicated responsible AI team. They've been active in EU AI Act consultations. Google has published some documentation about Gemini's capabilities and safety evaluations, and has more infrastructure for compliance documentation than most providers. However, formal Article 13/Annex IV documentation packages for downstream deployers are still not publicly available as standardized deliverables.

Google-specific considerations

  • Google Cloud's AI terms may include some compliance provisions — review your enterprise agreement for any AI Act clauses
  • Vertex AI provides more control and logging than the consumer Gemini API — relevant for Article 26(6) log retention obligations
  • Google's GDPR Data Processing Addendum covers data processing terms but does not address AI Act-specific obligations — you need separate compliance documentation
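The Art. 26(6) log-retention point above can be handled with a thin wrapper around whatever SDK you call — it works the same whether the callable hits Vertex AI, OpenAI, or anything else. A sketch; the `fake_model` stub stands in for a real vendor SDK call, and the JSONL format and prompt hashing are our own choices, not requirements spelled out in the Act:

```python
import functools
import hashlib
import json
import time

def logged(model_call):
    """Append one JSON line of metadata per model invocation (Art. 26(6) sketch).

    Records timestamp, function name, and a SHA-256 of the prompt —
    hashing avoids retaining raw user content in the log itself.
    """
    @functools.wraps(model_call)
    def wrapper(prompt: str, **kwargs):
        record = {
            "ts": time.time(),
            "call": model_call.__name__,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        }
        with open("ai_call_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return model_call(prompt, **kwargs)
    return wrapper

@logged
def fake_model(prompt: str) -> str:
    return "stub reply"  # stand-in for a real vendor SDK call

print(fake_model("hello"))  # → stub reply (and one line appended to the log)
```

For high-risk deployments the Act expects at least six months of retention, so pair a wrapper like this with a real retention policy, not a file in the working directory.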

Email template: request documentation from Google

Dear Google (Gemini) Compliance Team,

We are writing to request documentation required under the EU AI Act (Regulation 2024/1689) for our use of your AI services. As a deployer of AI systems that incorporate your technology, we have specific compliance obligations that require information from you as the upstream provider.

Under the EU AI Act, we require the following:

1. TRANSPARENCY INFORMATION (Article 13 / Article 50)
   — Intended purpose and limitations of your AI models
   — Performance metrics and known biases
   — Information about training data characteristics

2. TECHNICAL DOCUMENTATION (Annex IV)
   — System architecture description
   — Design specifications and development methodology
   — Accuracy, robustness, and cybersecurity measures

3. CONFORMITY INFORMATION (Article 47)
   — Your EU Declaration of Conformity (if applicable)
   — CE marking status for high-risk AI system components
   — Any conformity assessment results

4. RISK MANAGEMENT (Article 9)
   — Known risks associated with your AI models
   — Recommended risk mitigation measures for deployers
   — Any usage restrictions or conditions

The EU AI Act enforcement deadline is August 2, 2026. We would appreciate receiving this documentation at your earliest convenience to ensure our compliance. 

Please let us know if you have questions about this request or if there is a dedicated compliance contact we should work with.

Best regards,
[Your Name]
[Your Company]

---
Generated by AIActStack — EU AI Act compliance for AI-powered companies.
Scan your obligations free → https://aiactstack.com

6. HuggingFace, Mistral & Cohere

HuggingFace

HuggingFace is both a model hub and an inference provider. If you use HuggingFace's hosted inference API, they are the provider. If you download a model from the Hub and self-host it, you may become the provider (depending on whether you modify or fine-tune the model). This distinction matters: providers have significantly more obligations than deployers.

Model cards on HuggingFace vary wildly in quality. Some include detailed bias evaluations and limitations. Many don't. Check the specific model card for the model you're using.

Mistral AI

Mistral is a French AI lab — headquartered in the EU and subject to the AI Act directly. This may mean Mistral is ahead of US providers on compliance documentation, since they face enforcement directly. Check their API documentation and terms for any AI Act-specific provisions.

Mistral publishes model information for Mistral and Mixtral models, but formal compliance documentation packages aren't publicly available yet.

Cohere

Cohere (Command, Embed models) publishes model cards and usage guidelines. Same documentation request applies — use the same four-category email template. Cohere's enterprise agreements may include compliance provisions worth reviewing.

The email template is the same for all providers. The four documentation categories (transparency, technical docs, conformity, risk management) come from the regulation itself. The scanner generates provider-specific emails for whichever services you select.

Using a provider not listed here?

The scanner supports OpenAI, Anthropic, Google, HuggingFace, Mistral, Cohere, and custom models.

Scan Your AI Stack Free →

7. What you owe regardless of your provider

Your provider's obligations don't reduce yours. Even if OpenAI hands you a perfect compliance package tomorrow, you still have deployer-specific obligations that can't be delegated.

If your use case is limited-risk (chatbot, content generation)

Obligation                         Article      Effort   Priority
Disclose AI interaction to users   Art. 50(1)   ~4h      Critical
Label AI-generated content         Art. 50(2)   ~8h      Critical
Disclose AI-generated deep fakes   Art. 50(4)   ~4h      Important
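Labeling under Art. 50(2) means marking output in a machine-readable way. One lightweight sketch for HTML output — the `data-ai-generated` attribute is our own convention, not a format the Act prescribes, and richer provenance schemes such as C2PA exist for images and video:

```python
def label_generated_html(html_fragment: str) -> str:
    """Wrap AI-generated HTML in a container carrying a machine-readable marker.

    Attribute and label text are illustrative conventions, not a format
    mandated by Regulation 2024/1689.
    """
    return (
        '<div data-ai-generated="true" '
        'aria-label="This content was generated by an AI system">'
        f"{html_fragment}</div>"
    )

print(label_generated_html("<p>Summary of your support ticket.</p>"))
```

The marker survives copy-paste of the surrounding markup and can be detected by crawlers and downstream tooling, which is the practical test of "machine-readable".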

If your use case is high-risk (hiring, credit scoring, medical)

All of the above, plus:

Obligation                             Article      Effort   Priority
Risk management system                 Art. 9       ~40h     Critical
Human oversight implementation         Art. 26(2)   ~16h     Critical
Monitor operation & report incidents   Art. 26(5)   ~8h      Critical
Data protection impact assessment      Art. 26(9)   ~16h     Critical
Keep automatically generated logs      Art. 26(6)   ~8h      Important
Register in EU database                Art. 49      ~4h      Important
Report serious incidents               Art. 73      ~8h      Critical

Total estimated effort for a high-risk deployer: ~100h+ across all obligations. This is why starting now matters — with only months until the August 2, 2026 deadline, waiting means compressing months of work into weeks.

8. What to do when providers don't respond

You sent the email. It's been three weeks. Nothing. This is a real scenario companies are facing right now. Here's how to handle it.

1. Send a follow-up with a deadline

Reference your original request, cite the August 2, 2026 enforcement date, and ask for a response within 14 days. Be specific about what you need. The scanner generates follow-up emails you can copy.

2. Document your attempts

Keep records of every request sent, including dates and content. If a national authority asks why your compliance documentation is incomplete, "we asked our provider and they didn't respond" is a defensible position — but only if you can prove you asked.
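The paper trail can be as simple as an append-only file with one row per request sent. A sketch — the file name and columns are our own choices, not anything the Act specifies:

```python
import csv
import datetime
import pathlib

LOG = pathlib.Path("provider_requests.csv")

def record_request(provider: str, channel: str, summary: str) -> None:
    """Append one row per documentation request, so you can later prove you asked."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "provider", "channel", "summary"])
        writer.writerow([datetime.date.today().isoformat(), provider, channel, summary])

record_request("OpenAI", "email", "Initial four-category documentation request")
```

Whatever tool you use, the fields that matter to an authority are the same: when you asked, whom you asked, and what you asked for.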

3. Use what's publicly available

Model cards, system cards, safety evaluations, usage policies — extract whatever you can from public documentation and note the gaps. A partial risk assessment that acknowledges "provider has not supplied formal risk documentation" is better than no risk assessment.

4. Consider your alternatives

If a provider refuses to engage on compliance documentation, factor that into your vendor evaluation. An EU-based provider like Mistral may be more responsive to AI Act documentation requests than a US provider with no EU enforcement exposure.

Important: Your obligations don't pause because your provider is slow. You are responsible for your own compliance. If you can't get documentation from a provider, you may need to conduct your own assessment of the AI system's risks based on what information is available — and document the gaps.

9. Action plan: this week

1. Scan your AI stack

Takes 2 minutes. Identifies your role, risk level, and exact obligations. Start here →

2. Send the documentation request emails

Copy the templates from this page or from your scanner results. Send one email per provider. Set a calendar reminder for 14 days.

3. Save your results and start tracking

Create an account (free) and save your scan results. Track which providers have responded, which obligations you've addressed, and what's still outstanding.

4. Start on your deployer obligations

Don't wait for providers to respond. Begin with the obligations you can fulfill independently: transparency disclosures (Art. 50), human oversight procedures (Art. 26), log retention setup (Art. 26(6)).

Key dates

Feb 2, 2025 Prohibited practices (Art. 5) + AI literacy (Art. 4) — already in effect
Aug 2, 2025 GPAI obligations + governance structures + penalty framework
Aug 2, 2026 High-risk + transparency obligations apply
Aug 2, 2027 Obligations for Annex I high-risk AI (regulated products)

Under COM(2025) 836, Annex III high-risk obligations would apply "latest by 2 December 2027" — sixteen months after the 2 August 2026 date currently set in Art. 113. The Council general approach (13 March 2026) and the IMCO+LIBE joint committee report (A-10-2026-0073, 18 March 2026) both converge on 2 December 2027 as a fixed date. This change has not yet been adopted as law.


Sources

All legal claims in this guide are cross-referenced against the official EUR-Lex Regulation text. Claims are verified and updated within 14 days of official guidance changes.

This guide provides general information based on the EU AI Act text (Regulation 2024/1689). It is not legal advice. Consult a qualified legal professional for formal compliance guidance specific to your situation.