The Supply Chain Problem

How to source the documentation OpenAI, Anthropic, and Google must give you under their GPAI obligations — and what to do when they don't respond.

Part of the AI literacy training (Article 4) curriculum · Sources: Regulation (EU) 2024/1689

4.0 GPAI Provider Obligations & What They Owe You ~25 min

Chapter V of the AI Act (Articles 51-56) creates a distinct set of obligations for providers of General-Purpose AI models. If you build on GPT-4, Claude, or Gemini, the companies behind those models owe you specific documentation. Understanding what they must provide (and what they have not yet provided) is the foundation of the supply chain problem.

What Every GPAI Provider Must Do (Art. 53)

Article 53 applies to all GPAI providers, regardless of model size. Four core obligations:

  1. Technical documentation: Providers must prepare and keep up-to-date technical documentation of the model, including its training and testing process and evaluation results. This must be made available to the AI Office and national authorities on request.
  2. Downstream information: Providers must supply information and documentation to downstream providers (and deployers who integrate the GPAI model into AI systems) so they can understand the model's capabilities and limitations and comply with their own obligations.
  3. Copyright compliance policy: Providers must establish a policy for respecting EU copyright law, in particular the text and data mining opt-out right under Article 4(3) of Directive (EU) 2019/790.
  4. Training data summary: Providers must draw up and make publicly available a sufficiently detailed summary of the content used for training, following a template provided by the AI Office.

Additional Obligations for Systemic Risk Models (Art. 55)

A GPAI model is presumed to have "systemic risk" if its cumulative training compute exceeds 10^25 floating-point operations (FLOPs). The European Commission can also designate models based on other criteria such as number of users, degree of autonomy, or impact on the internal market. Models in this category include GPT-4 and successors, likely Claude 3.5/4 Opus-class models, and Gemini Ultra.
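To make the threshold concrete, training compute is commonly estimated with the rule of thumb C ≈ 6 × N × D (roughly six FLOPs per parameter per training token). A minimal sketch of the arithmetic, using illustrative parameter and token counts rather than any provider's disclosed figures:

```python
# Back-of-envelope check against the 10^25 FLOPs presumption threshold.
# Uses the common approximation C ~ 6 * N * D. The parameter and token
# counts below are illustrative assumptions, not disclosed figures.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

# Hypothetical frontier-scale model: 1e12 parameters, 1e13 training tokens.
flops = estimated_training_flops(n_params=1e12, n_tokens=1e13)
print(f"Estimated compute: {flops:.2e} FLOPs")                     # 6.00e+25
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)  # True
```

Any model trained at this scale clears the presumption threshold several times over, which is why all current frontier models are assumed to fall in the systemic risk category.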

Article 55 imposes additional requirements on systemic risk models:

  • Model evaluation: Perform standardised evaluations, including adversarial testing (red-teaming), to identify and mitigate systemic risks
  • Systemic risk assessment and mitigation: Assess and mitigate possible systemic risks at Union level, including their sources
  • Serious incident tracking: Track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national authorities without undue delay
  • Cybersecurity protections: Ensure an adequate level of cybersecurity for the model and its physical infrastructure

What Providers Have NOT Yet Provided

As of early 2026, the gap between what the Act requires and what providers have delivered is significant:

  • Training data summaries: No major provider has published a summary that meets the AI Office template requirements. OpenAI, Anthropic, and Google have disclosed broad categories ("internet data," "books," "code") but not the "sufficiently detailed" summaries the Act demands.
  • Downstream deployer documentation: Providers have published model cards and system cards as voluntary measures, but these were not designed to satisfy Art. 53(1)(b). They often lack the specificity deployers need for their own conformity assessments — particularly around performance metrics for specific use cases, known failure modes, and interaction logging formats.
  • Copyright compliance: None have published a verifiable policy for respecting text and data mining opt-outs, and several face active litigation on this front.
  • Codes of practice: The AI Office's codes of practice for GPAI, which will flesh out how to comply with Arts. 53 and 55, were still in development through 2025. Providers are waiting for final guidance, creating a chicken-and-egg problem.

Why This Matters to You

Several of YOUR obligations as a deployer under Article 26 require information that only the provider can give you. You cannot complete a meaningful risk assessment, DPIA, or conformity assessment without understanding the model's capabilities, limitations, training data characteristics, and known biases. The provider's failure to deliver does not eliminate your obligation — it creates legal exposure for both parties.

GPAI providers owe you technical documentation, downstream information, copyright compliance details, and training data summaries. The 10^25 FLOPs threshold for systemic risk captures all frontier models. Most of this documentation does not yet exist in the form the Act requires — which is your problem as much as theirs.
4.1 The Documentation Gap ~25 min

The AI Act creates a documentation supply chain: providers must produce specific documents and pass them downstream to deployers. Two articles are central to this obligation. Article 13 requires that high-risk AI systems be designed with sufficient transparency to enable deployers to interpret outputs and use the system appropriately. Article 47 requires providers to draw up an EU declaration of conformity for each high-risk AI system. Together, these articles define the documentation you need but probably do not have.

What Providers Must Give Deployers

Under Art. 13, providers must supply "instructions for use" that include:

  • Identity and contact details of the provider, plus their authorised representative if applicable
  • The intended purpose of the AI system and the specific conditions of use it was designed for
  • Performance metrics: the level of accuracy, robustness, and cybersecurity the system was tested and validated against, and any known circumstances that could impact performance
  • Known limitations: foreseeable conditions of misuse, their consequences, and the groups of persons on whom the system was tested (demographics, contexts)
  • Technical specifications of input data, or any other relevant information in terms of training, validation, and testing data sets used
  • Human oversight measures the deployer must implement, including technical safeguards built into the system
  • Expected lifetime and maintenance/update schedules
  • Interaction logging format so deployers know what data they will collect and how to retain it per Art. 12
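If the provider has not documented a log format, you can still define one on your side. A minimal sketch of deployer-side interaction logging; the field names and JSON-lines layout are assumptions, not a format prescribed by the Act:

```python
# Minimal deployer-side interaction log, assuming no provider-supplied
# format. Field names are illustrative; adapt to your own systems.
import json
import uuid
from datetime import datetime, timezone

def log_interaction(log_path: str, model: str, prompt: str, response: str) -> str:
    """Append one model interaction as a JSON line; return the record ID."""
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,  # provider model identifier, e.g. the API model name
        "prompt": prompt,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["interaction_id"]

# Retain these files for at least six months (Art. 26(6)), longer if your
# DPIA or sector rules require it. "model-x" is a hypothetical identifier.
log_interaction("interactions.jsonl", "model-x", "Hello", "Hi there")
```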

Why Most Providers Have Not Sent This

There are several reasons this documentation has not materialised proactively:

  1. GPAI obligations only took effect on the Chapter V date. Chapter III Section 4 (notified bodies), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Art. 78 apply from 2 August 2025, with the exception of Art. 101. Providers had a legal argument for waiting until then, although many deployers needed the information well before that to prepare their own compliance.
  2. The model is general-purpose. GPAI providers argue their models were not designed for any specific "intended purpose" — which creates tension with Art. 13's requirement to describe the intended purpose. The provider designed a general model; you turned it into a specific application.
  3. No enforcement yet. Without enforcement action or clear codes of practice, the commercial incentive to produce this documentation is weak. Providers face cost and potential liability from detailed disclosures.
  4. Competitive sensitivity. Training data details, performance benchmarks on sensitive tasks, and known failure modes are commercially sensitive information that providers are reluctant to share broadly.

The CE Marking Question

For high-risk AI systems, the provider must affix a CE marking indicating conformity (Art. 48). No GPAI provider has issued CE markings for their models when used in high-risk applications because the conformity assessment process for these use cases has not been completed — and arguably cannot be completed by the GPAI provider alone, since they do not control how deployers use the model.

Articles 13 and 47 create a clear legal obligation for providers to supply deployers with instructions for use, performance data, known limitations, and conformity declarations. The gap exists because GPAI timelines, commercial incentives, and the general-purpose nature of the models create friction. But the obligation is unambiguous — and as a deployer, you need this documentation to fulfil your own obligations.
Use AIActStack's scanner to generate documentation request emails for your specific providers, citing the exact articles and document types you need.
4.2 What to Request from OpenAI ~20 min

OpenAI is the provider most deployers need to contact first. As the maker of GPT-4, GPT-4o, and the ChatGPT platform, OpenAI's GPAI obligations under Art. 53 are extensive, and their systemic risk obligations under Art. 55 apply to their frontier models. Here is what they have published, what is still missing, and how to structure your request.

What OpenAI Has Published

  • Model cards / system cards: OpenAI publishes system cards for major model releases (GPT-4, GPT-4o). These describe general capabilities, safety evaluations, and some limitations. However, they are framed as voluntary disclosures, not as compliance documents under the AI Act.
  • Usage policies: OpenAI maintains acceptable use policies that describe prohibited uses. These partially address "foreseeable misuse" but from a terms-of-service perspective, not a regulatory documentation perspective.
  • Safety research: Papers on red-teaming, alignment techniques, and evaluation results have been published. These contain useful performance data but are scattered across blog posts and academic papers, not consolidated as Art. 13 instructions for use.

What Is Still Missing

| Required Document | Article | Status |
| --- | --- | --- |
| Instructions for use (intended purpose, performance metrics, known limitations, human oversight guidance) | Art. 13 | Not provided in AI Act format |
| Sufficiently detailed training data summary | Art. 53(1)(d) | Not published |
| Copyright compliance policy | Art. 53(1)(c) | Not published (active litigation) |
| EU declaration of conformity (for high-risk deployments) | Art. 47 | Not issued |
| Downstream deployer documentation for compliance | Art. 53(1)(b) | Partial — system cards exist but lack deployer-specific guidance |
| Model evaluation results (adversarial testing) | Art. 55(1)(a) | Partial — some published in system cards, not comprehensive |

How to Structure the Request Email

Your email should be formal, cite specific articles, and create a paper trail. Key elements:

  1. Identify yourself: Company name, your role as a deployer under the AI Act, the specific OpenAI products you use (model name, API tier)
  2. State the legal basis: Reference Art. 53(1)(b) (GPAI downstream information), Art. 13 (transparency and instructions for use), Art. 47 (declaration of conformity)
  3. Be specific about what you need: List each document type individually — do not send a vague "please send AI Act documentation" request
  4. Explain why you need it: State that you require this information to comply with your deployer obligations under Art. 26, including your DPIA (Art. 26(9)) and conformity assessment
  5. Set a deadline: Request a response within 30 days. This is reasonable and creates urgency.
  6. Keep a copy: This email is evidence of your good-faith compliance effort if regulators ask why you deployed without full provider documentation
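You can assemble this email programmatically from the elements above. A sketch of what a generator might look like; the document list and wording are illustrative, and the output should be reviewed by counsel before sending:

```python
# Sketch of a documentation-request generator following the six elements
# above. Company and product names are placeholders; the cited articles
# come from the lesson text.
from datetime import date, timedelta

DOCUMENTS = [
    ("Instructions for use", "Art. 13"),
    ("Training data summary", "Art. 53(1)(d)"),
    ("Copyright compliance policy", "Art. 53(1)(c)"),
    ("EU declaration of conformity", "Art. 47"),
    ("Downstream deployer documentation", "Art. 53(1)(b)"),
]

def build_request(company: str, product: str, deadline_days: int = 30) -> str:
    """Assemble a formal documentation request with a response deadline."""
    deadline = date.today() + timedelta(days=deadline_days)
    doc_lines = "\n".join(f"  - {name} ({article})" for name, article in DOCUMENTS)
    return (
        f"Subject: AI Act documentation request - {product}\n\n"
        f"{company} deploys {product} in the EU as a deployer under "
        "Regulation (EU) 2024/1689. Pursuant to Art. 53(1)(b), Art. 13, and "
        "Art. 47, we request the following documents:\n"
        f"{doc_lines}\n"
        "We require this information to meet our obligations under Art. 26, "
        "including our DPIA (Art. 26(9)) and conformity assessment.\n"
        f"Please respond by {deadline.isoformat()}.\n"
    )

print(build_request("Example GmbH", "the GPT-4 API (enterprise tier)"))
```

Keep the generated text and the send date in your compliance file; the paper trail matters as much as the email itself.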
Use AIActStack's scanner to generate a pre-filled documentation request email for OpenAI, citing the exact articles and documents relevant to your specific use case and risk level.
Do not wait for OpenAI to proactively send you documentation. They have shown no indication of doing so at scale. You must initiate the request, and you must document that you did.
4.3 What to Request from Anthropic ~20 min

Anthropic, as the provider of Claude models, has the same GPAI obligations as OpenAI under Art. 53. Claude 3.5 Sonnet, Claude 3 Opus, and successor models likely cross the 10^25 FLOPs threshold, placing them in the systemic risk category under Art. 55. Anthropic's approach to transparency has been somewhat different from OpenAI's, which affects what you can reuse and what you still need to request.

What Anthropic Has Published

  • Model cards: Anthropic publishes model cards for Claude releases. These cover general capabilities, safety evaluations, and known limitations. The detail level is generally comparable to OpenAI's system cards.
  • Responsible Scaling Policy (RSP): Anthropic has published its internal framework for evaluating catastrophic risk before scaling models. This is a voluntary commitment, not an AI Act compliance document, but it partially addresses Art. 55 requirements around systemic risk assessment.
  • Usage policies: Terms of service and acceptable use policies describe prohibited uses, which partially map to "foreseeable misuse" documentation.
  • Safety research: Anthropic publishes technical reports on Constitutional AI, red-teaming results, and alignment research. These provide useful background but are not structured as deployer-facing documentation.

What Is Still Missing

| Required Document | Article | Status |
| --- | --- | --- |
| Instructions for use (intended purpose, performance metrics, known limitations, human oversight guidance) | Art. 13 | Not provided in AI Act format |
| Sufficiently detailed training data summary | Art. 53(1)(d) | Not published |
| Copyright compliance policy | Art. 53(1)(c) | Not published |
| EU declaration of conformity | Art. 47 | Not issued |
| Downstream deployer documentation for compliance | Art. 53(1)(b) | Partial — model cards exist but lack deployer-specific compliance guidance |
| Systemic risk evaluation results | Art. 55(1)(a) | Partial — RSP framework exists, specific evaluation results vary |

Anthropic-Specific Considerations

Two factors differentiate an Anthropic request from an OpenAI request:

  1. The RSP is an asset. Reference it in your request. Ask Anthropic to map their RSP commitments to specific AI Act obligations. This shows you have done your homework and makes it harder for them to deflect.
  2. Anthropic has been more vocal about EU engagement. They have published position papers on EU AI regulation and participated in consultation processes. This creates a reasonable expectation that they will respond constructively to formal deployer requests.

Structure your email the same way as the OpenAI request: identify yourself, cite Art. 53(1)(b), Art. 13, and Art. 47, list specific documents needed, explain why (your Art. 26 obligations, DPIA, conformity assessment), and set a 30-day deadline. Reference the RSP explicitly and ask how it maps to their Art. 55 systemic risk obligations.

Anthropic's RSP and model cards provide a stronger starting point than most providers, but they are voluntary disclosures, not AI Act compliance documents. You still need formal Art. 13 instructions for use, training data summaries, and copyright compliance documentation. Request it formally, reference their existing work, and document the exchange.
4.4 What to Request from Google ~15 min

Google DeepMind, as the provider of Gemini models, has the same GPAI obligations under Art. 53 as OpenAI and Anthropic. Gemini Ultra almost certainly exceeds the 10^25 FLOPs threshold. The request structure mirrors what you would send to OpenAI or Anthropic, with Google-specific considerations.

What Google Has Published

  • Model cards: Google publishes technical reports and model cards for Gemini releases with capability descriptions and safety evaluations
  • AI Principles: Google's published AI Principles (since 2018) outline ethical commitments but are corporate governance documents, not AI Act compliance materials
  • Responsible AI practices: Google has published guidelines on fairness, interpretability, and safety testing, spread across various research publications

What Is Still Missing

The gap mirrors OpenAI and Anthropic: no formal Art. 13 instructions for use, no training data summary per Art. 53(1)(d), no copyright compliance policy, and no EU declaration of conformity. Google's model cards are more detailed than most competitors on some technical benchmarks but still fall short of the structured deployer documentation the Act requires.

Google-Specific Considerations

  • EU presence: Google has a substantial EU legal and compliance infrastructure due to GDPR enforcement history. They are more likely to have internal teams working on AI Act compliance than smaller providers.
  • Multiple products: Be specific about which Google AI product you use — Gemini API, Vertex AI, Google Cloud AI services, or embedded AI features. Each may have different compliance paths.
  • GDPR precedent: Google has received billions in GDPR fines across EU member states. They understand EU regulatory enforcement is real. This context may make them more responsive to formal compliance requests than providers without this history.
Apply the same request template: identify yourself, cite Arts. 53(1)(b), 13, and 47, list specific documents, explain your deployer obligations under Art. 26, set a 30-day deadline. Google's EU enforcement history means they may be better prepared to respond — but you still need to ask.
4.5 What to Do When Providers Don't Respond ~25 min

You sent the request. Thirty days pass. No response, or a vague response that does not address your specific document requests. This is the scenario most deployers will face in 2025-2026. The question is: what is your legal exposure, and what can you do about it?

Your Legal Exposure as Deployer

The uncomfortable truth: the AI Act does not give deployers a blanket exemption when their provider fails to deliver documentation. Article 26 states your obligations unconditionally — there is no clause that says "unless your provider did not cooperate." If you deploy a high-risk AI system without adequate documentation, you bear liability for non-compliance with your deployer obligations, even if the root cause is provider silence.

However, a deployer who can demonstrate good-faith effort to obtain documentation will be in a fundamentally different position from one who never asked. Regulators enforcing the Act will consider the reasonableness of your compliance effort.

Can You Deploy Without Provider Documentation?

For limited-risk deployments (chatbots, content generation): yes, with appropriate transparency disclosures under Art. 50 and reasonable internal documentation of what you know about the model. The documentation gap is less critical here because your obligations are narrower.

For high-risk deployments (hiring, credit, healthcare): this is legally precarious. Your DPIA (Art. 26(9)) must assess the impact of the AI system, and you cannot do that rigorously without understanding the model's training data, biases, and failure modes. Deploying high-risk AI without adequate documentation is a significant risk.

Workarounds

  1. Use publicly available model cards as partial evidence. Treat published system cards, model cards, technical reports, and safety evaluations as your best available source. Document that you reviewed them, extract relevant information, and note where they fall short of Art. 13 requirements.
  2. Document the gap explicitly. Create a "Provider Documentation Gap Analysis" document listing every Art. 13 requirement, what information you have, where it came from, and what is missing. This demonstrates diligence.
  3. Implement additional monitoring. Where provider documentation is insufficient, compensate with enhanced monitoring: more extensive output logging, more frequent human review cycles, tighter usage boundaries, additional input/output filtering.
  4. Conduct your own evaluation. Run your own performance testing for your specific use case. Measure accuracy, bias indicators, and failure rates with your actual data (a minimal sketch follows this list). This cannot replace provider documentation but demonstrates that you took responsibility.
  5. Restrict the deployment scope. Narrow the use case to reduce risk. A hiring AI that only assists with initial sorting (with mandatory human review of every candidate) is lower risk than one that makes autonomous shortlisting decisions.
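Picking up workaround 4: a minimal sketch that computes per-group accuracy over your own labelled evaluation records. The record structure and group labels are illustrative assumptions, not a prescribed evaluation protocol:

```python
# Per-group accuracy over labelled records, to surface performance gaps
# across demographic groups. Record fields are illustrative assumptions.
from collections import defaultdict

def per_group_accuracy(records: list[dict]) -> dict:
    """Return accuracy per demographic group over labelled records."""
    correct: dict = defaultdict(int)
    total: dict = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

results = per_group_accuracy([
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
])
gap = max(results.values()) - min(results.values())
print(results, f"max accuracy gap between groups: {gap:.2f}")
```

A large gap between groups is exactly the kind of finding your DPIA must record, along with the mitigation you applied in response.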

Escalation: Formal Notice Under Art. 25

Article 25 addresses responsibilities along the AI value chain. If a provider fails to fulfil their obligations and this prevents you from meeting yours, you have grounds to send a formal notice documenting the provider's non-compliance. This notice should:

  • Reference the specific articles the provider has not complied with (Arts. 13, 47, 53)
  • State that their non-compliance is impeding your ability to meet Art. 26 deployer obligations
  • Request remediation within a specific timeframe (14-30 days)
  • State that you will notify the relevant national authority and/or the AI Office if they do not respond
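The escalation only helps if the paper trail exists. A sketch of a dated escalation log; the CSV fields are an assumption, so adapt them to whatever compliance tooling you already use:

```python
# Dated escalation log: one row per request, reminder, or formal notice.
# The field set is an illustrative assumption.
import csv
import os
from datetime import date

FIELDS = ["date", "action", "articles_cited", "deadline", "provider_response"]

def record_step(path: str, action: str, articles: str,
                deadline: str = "", response: str = "") -> None:
    """Append one escalation step to a CSV compliance log."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "action": action,
            "articles_cited": articles,
            "deadline": deadline,
            "provider_response": response,
        })

record_step("escalation_log.csv", "Initial documentation request",
            "Arts. 13, 47, 53(1)(b)", deadline="2026-03-15")
record_step("escalation_log.csv", "Formal Art. 25 notice",
            "Arts. 13, 47, 53", deadline="2026-04-01")
```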

This is a last resort, but it creates a legally defensible record. You can also file a complaint with the AI Office under Art. 89, which has investigative powers over GPAI providers.

Never assume that "my provider did not give me the documents" is a defence. It is a mitigating factor, not an exemption. Regulators will ask what you did about the gap — not just whether the gap existed.
4.6 Evaluating Provider Documentation ~20 min

Suppose your provider does respond — either with formal AI Act documentation or by pointing you to existing model cards and technical reports. How do you evaluate whether what they sent is sufficient for your compliance needs? Not all documentation is created equal, and a 20-page model card may still leave critical gaps.

Evaluation Checklist

Go through each item. If the provider documentation does not address it, flag it as a gap you must fill yourself or escalate.

| Requirement | Article | What to Look For |
| --- | --- | --- |
| Intended purpose | Art. 13(3)(a) | Does it describe what the model is designed for? Does it cover YOUR specific use case, or only generic capabilities? |
| Performance metrics | Art. 13(3)(b) | Are accuracy, precision, recall, or other relevant metrics provided? For what tasks and datasets? Are they relevant to your deployment context? |
| Known limitations | Art. 13(3)(b) | Are failure modes documented? Are there known demographic biases? Does it specify what the model should NOT be used for? |
| Foreseeable misuse | Art. 13(3)(b) | Are misuse scenarios described with their potential consequences? Is your use case close to any identified misuse pattern? |
| Training data characteristics | Art. 53(1)(d) | Is there a summary of training data sources, volume, time period, and geographic/linguistic coverage? Are any known data quality issues documented? |
| Known biases | Art. 10(2)(f) | Are bias evaluation results provided? For which protected characteristics (gender, race, age, disability)? Were mitigation measures applied? |
| Human oversight guidance | Art. 14 | Does the documentation describe how humans should supervise the system? What override mechanisms exist? What signals should trigger human intervention? |
| Interaction log format | Art. 12 | Does the documentation describe what data the system logs, in what format, and how to retain it? Can you comply with Art. 26(6) log retention based on this? |
| Contact information | Art. 13(3)(a) | Is there a designated compliance contact, EU authorised representative, or complaint channel? |

Is It Sufficient for Your DPIA?

Your DPIA under Art. 26(9) must assess the impact of the AI system on fundamental rights. To do this, you need to understand:

  • What data the model was trained on (could it encode biases against protected groups?)
  • How the model performs across different demographic groups
  • What happens when the model fails — what are the consequences for affected individuals?
  • Whether the model's outputs can be audited and explained

If the provider documentation does not answer these questions for your specific deployment, your DPIA has a blind spot. Document that blind spot, describe what compensating measures you have implemented, and note the outstanding information request to the provider.

Is It Sufficient for Your Conformity Assessment?

If your deployment is high-risk and requires a conformity assessment (Art. 43), you need the provider's documentation to demonstrate that the AI system meets the requirements in Section 2 of Chapter III. Without the provider's technical documentation covering risk management (Art. 9), data governance (Art. 10), and accuracy/robustness (Art. 15), your conformity assessment will have material gaps.

Create a "Provider Documentation Evaluation Matrix" — a spreadsheet mapping each Art. 13 requirement to the specific page/section of the provider documentation that addresses it, or marking it as "gap." This becomes part of your compliance file.
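A sketch of that matrix as a small script; the coverage entries are illustrative placeholders, not findings about any real provider's documentation:

```python
# Provider Documentation Evaluation Matrix as a CSV for the compliance
# file. Coverage entries are illustrative placeholders.
import csv

MATRIX = [
    # (requirement, article, where provider documentation covers it, or "GAP")
    ("Intended purpose", "Art. 13(3)(a)", "System card, sec. 2"),
    ("Performance metrics", "Art. 13(3)(b)", "System card, sec. 4 (generic benchmarks only)"),
    ("Known limitations", "Art. 13(3)(b)", "GAP"),
    ("Training data characteristics", "Art. 53(1)(d)", "GAP"),
    ("Human oversight guidance", "Art. 14", "GAP"),
    ("Interaction log format", "Art. 12", "GAP"),
]

with open("provider_doc_matrix.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["requirement", "article", "coverage"])
    writer.writerows(MATRIX)

gaps = [row[0] for row in MATRIX if row[2] == "GAP"]
print(f"{len(gaps)} of {len(MATRIX)} requirements unmet:", ", ".join(gaps))
```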
4.7 The Conformity Declaration Chain (Art. 47) ~20 min

Providers of high-risk AI systems must draw up a written, machine-readable, signed EU declaration of conformity per Art. 47, including the information listed in Annex V, keep it up to date, and keep a copy at the disposal of national competent authorities for 10 years after the AI system has been placed on the market or put into service.

The problem for deployers of third-party AI is what happens when this declaration does not exist — or when it cannot exist in the form the Act envisions.

The Pass-Through Problem

The AI Act was designed primarily for a straightforward supply chain: a provider builds an AI system, conducts a conformity assessment, issues a declaration, affixes CE marking, and places it on the market. A deployer then uses that assessed system according to the provider's instructions.

GPAI models break this pattern. OpenAI does not build a "hiring screening AI system" — they build GPT-4, a general-purpose model. You build the hiring screening system by integrating GPT-4 with your application logic, prompts, and data pipeline. This creates a fundamental question: who conducts the conformity assessment for the resulting high-risk system?

Three Scenarios

| Scenario | Who Does the Conformity Assessment? | Provider's Art. 47 Declaration? |
| --- | --- | --- |
| Provider sells a ready-to-use high-risk AI system (e.g., turnkey hiring AI product) | The provider | Yes — provider issues declaration before placing on market |
| Deployer uses a GPAI model as a component in a self-built high-risk system | The deployer (who is now arguably a provider under Art. 25 if the modification is substantial) | No — the GPAI provider's declaration covers the model, not your system |
| Deployer uses a GPAI model with minimal modification in a high-risk context | Unclear — this is the grey zone | The provider has not issued one for this use case |

Your Liability as Deployer

If you deploy a high-risk AI system and no valid EU declaration of conformity exists for it, you are operating a non-conforming AI system in the EU market. Under Art. 99(4), non-compliance with operator or notified body obligations (other than Art. 5) is subject to administrative fines of up to EUR 15 000 000 or, for an undertaking, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. This covers provider obligations (Art. 16), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), deployers (Art. 26), notified bodies (Arts. 31, 33, 34), and transparency obligations (Art. 50).

The practical reality is that most deployers using GPAI models in high-risk contexts will need to take responsibility for the conformity assessment of their integrated system — even though they do not control the underlying model. This means:

  • You need the provider's technical documentation (Art. 11, Annex IV) as input to your own assessment
  • You must document the integration — how you combined the GPAI model with your application logic, what constraints you applied, what testing you performed
  • You must assess whether your modifications qualify as "substantial" under Art. 25, which would make you a provider of the integrated system with full provider obligations
  • If you are a provider of the integrated system, you must conduct the conformity assessment yourself (or engage a notified body for biometric systems under Art. 43(1))
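The Art. 25(1) question in the third bullet can be framed as a simple self-check. A sketch in code; the three boolean triggers paraphrase Art. 25(1)(a)-(c), and whether each one applies to you is a legal judgment that no script can make for you:

```python
# Art. 25(1) self-check sketch. The triggers paraphrase Art. 25(1)(a)-(c);
# answering them truthfully is legal analysis, not computation.

def becomes_provider(puts_own_name_on_system: bool,
                     substantial_modification: bool,
                     changes_purpose_to_high_risk: bool) -> bool:
    """True if any Art. 25(1) trigger makes the deployer a provider."""
    return (puts_own_name_on_system
            or substantial_modification
            or changes_purpose_to_high_risk)

# Example: integrating a GPAI model into a hiring tool changes the intended
# purpose to a high-risk one (Annex III employment use case).
print(becomes_provider(False, False, True))  # True -> full provider obligations
```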

What the Declaration Must Contain

Per Annex V, the EU declaration of conformity must include: the AI system identification, provider name and address, a statement that the declaration is issued under the provider's sole responsibility, a description of the AI system, references to harmonised standards or common specifications applied, the conformity assessment procedure followed, and the date and signature. It must be kept up to date.

The conformity declaration chain breaks when deployers use general-purpose models in high-risk contexts. No GPAI provider has issued Art. 47 declarations for downstream high-risk deployments. If you are building a high-risk AI system on top of a GPAI model, you are very likely the entity responsible for the conformity assessment — and you need the provider's documentation to do it properly.
If you are using a GPAI model in a high-risk context and have not assessed whether Art. 25 makes you a "provider" of the combined system, do that assessment now. The answer determines whether you have deployer obligations or the much heavier provider obligations.
4.8 Module 4 Quiz ~15 min

Draft a Documentation Request

Your company uses OpenAI's GPT-4 API for a customer-facing chatbot deployed in the EU. Draft an email to OpenAI's compliance team requesting the documentation you need under Articles 13 and 53. Be specific about what documents and why you need them.

Supply Chain Analysis

Answer these without looking back at the lessons:

  1. Name the four core obligations every GPAI provider has under Art. 53.
  2. What is the FLOPs threshold for systemic risk? Which models likely exceed it?
  3. Your provider sends you their published model card and says "this satisfies our Art. 13 obligations." What three things do you check to verify this claim?
  4. You use Claude's API to power a hiring tool. Anthropic has not responded to your documentation request. Can you legally deploy the tool? What should you do?
  5. Who is responsible for the conformity assessment when you build a high-risk system using a GPAI model as a component?
Answers
  1. Technical documentation, downstream information to deployers, copyright compliance policy, and training data summary.
  2. 10^25 FLOPs. GPT-4 and successors, Claude 3/4 Opus-class models, and Gemini Ultra likely exceed it.
  3. Check whether it covers: (a) performance metrics relevant to your specific use case, (b) known limitations and foreseeable misuse scenarios, (c) human oversight guidance and interaction log format. A generic model card almost certainly does not cover all Art. 13 requirements for your deployment.
  4. Deploying a high-risk system without provider documentation is legally precarious. You should: (a) use publicly available model cards as partial evidence, (b) document the gap and your request attempts, (c) conduct your own performance evaluation, (d) implement enhanced monitoring, (e) consider sending a formal Art. 25 notice and/or filing with the AI Office. Simply deploying without action is not defensible.
  5. You are. As the entity building the integrated system, you likely qualify as a provider under Art. 25 (substantial modification of intended purpose). The GPAI provider's obligations cover the model; your conformity assessment must cover the system you built with it.