EU AI Act Guide
GPAI Code of Practice: What It Is and Whether to Adopt It
The General-Purpose AI Code of Practice is the voluntary compliance route for GPAI model providers under the EU AI Act. Adopting it is not mandatory, but not adopting it exposes the provider to closer scrutiny and to the full Article 101 fine exposure. This guide covers what the Code is, what it commits signatories to, and how to decide whether to sign.
What the Code of Practice is
The General-Purpose AI Code of Practice was drawn up by independent experts under the multi-stakeholder process set out in the AI Act and published in final form on 10 July 2025. It offers GPAI providers a documented way to demonstrate compliance with Article 53 (obligations for all GPAI providers) and Article 55 (additional obligations for GPAI models with systemic risk).
Adopting the Code is voluntary; the AI Act does not require it. But the Code is currently the only published mechanism for GPAI providers to demonstrate compliance in a structured way pending harmonised standards. Non-adoption exposes the provider to case-by-case scrutiny from the AI Office and to the Article 101 fine tier for breaches of the GPAI obligations.
Why the Code exists
The AI Act's obligations for GPAI providers (Art 53 and Art 55) are drafted at the level of duties: publish a training-data summary, cooperate with the AI Office, put risk-mitigation measures in place for systemic-risk models, and so on. The Regulation does not specify how. Harmonised standards under Article 40 will eventually fill the "how" gap; until they are published, the Code is the interim bridge.
A provider that signs the Code and operates in line with its measures can rely on it to demonstrate compliance with the corresponding duties. This is close to, but formally weaker than, the presumption of conformity that conformity with a harmonised standard creates: the Code is a recognised means of demonstrating compliance, not a statutory presumption.
Who this applies to
Providers of general-purpose AI models placed on the Union market or made available in the Union. "General-purpose AI model" has a specific statutory definition — it is not every large model, but models that display significant generality, can competently perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems [src].
Companies building on top of a third-party GPAI model (OpenAI GPT, Anthropic Claude, Google Gemini, Mistral, and so on) are not directly covered — the Code binds the GPAI model provider, not the downstream system provider. But downstream providers depend on the GPAI provider's Art 53/55 compliance for their own stack, so whether your upstream model provider has signed the Code is now a supply-chain-diligence question.
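The supply-chain-diligence point above can be sketched as a trivial check. Everything here is illustrative: the provider names are placeholders and the real signatory list lives at code-of-practice.ai, not in your codebase.

```python
# Hypothetical supply-chain diligence check: is each upstream GPAI model's
# provider a Code of Practice signatory? The signatory set below is
# placeholder data only -- consult code-of-practice.ai for the real list.
SIGNATORIES = {"ProviderA", "ProviderB"}  # illustrative names, not real data

def flag_unsigned_upstreams(stack: dict[str, str]) -> list[str]:
    """Return the models in `stack` (model name -> provider name)
    whose provider is not a known Code signatory."""
    return [model for model, provider in stack.items()
            if provider not in SIGNATORIES]

# One upstream provider has not signed, so its model is flagged for
# extra Art 53/55 evidence-gathering on the downstream side.
stack = {"model-x": "ProviderA", "model-y": "ProviderC"}
print(flag_unsigned_upstreams(stack))  # -> ['model-y']
```

The point of the sketch is that signatory status is a per-provider fact you can track mechanically; what to do about a flagged model remains a legal judgment.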
What the Code covers
As structured in the Code's 10 July 2025 publication, the document is organised around three chapters, each mapping to a cluster of AI Act GPAI duties. The Code itself is an external document not part of the Regulation's statutory text; verify the current chapter structure at code-of-practice.ai before signing, as the Commission may publish revised versions.
Transparency
Operationalises the Art 53(1)(a)–(b) duty to maintain and provide model documentation (including to the AI Office and to downstream providers) and the Art 53(1)(d) training-data-summary publication duty. Signatories commit to specific documentation templates and disclosure cadences.
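As a way of seeing what "documentation templates and disclosure cadences" means in practice, here is a hypothetical record structure. The field names are an illustrative subset invented for this sketch; they are not the Code's official Model Documentation Form.

```python
from dataclasses import dataclass

# Illustrative subset of model-documentation fields in the spirit of
# Art 53(1)(a)-(b) and (d). NOT the Code's official template.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    intended_tasks: list[str]
    training_data_summary_url: str       # Art 53(1)(d): public summary
    shared_with_ai_office: bool = False  # Art 53(1)(a) recipient
    shared_with_downstream: bool = False # Art 53(1)(b) recipient

    def disclosure_gaps(self) -> list[str]:
        """Recipients the documentation has not yet been provided to."""
        gaps = []
        if not self.shared_with_ai_office:
            gaps.append("AI Office")
        if not self.shared_with_downstream:
            gaps.append("downstream providers")
        return gaps

doc = ModelDocumentation("example-model", "ExampleCo",
                         ["chat"], "https://example.com/summary")
print(doc.disclosure_gaps())  # -> ['AI Office', 'downstream providers']
```

The structural takeaway is that Art 53(1)(a) and (b) name two distinct recipients, so a compliance record needs to track disclosure to each separately.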
Copyright
Operationalises the Art 53(1)(c) duty to put a policy in place to comply with EU copyright law, including the text-and-data-mining opt-out under Directive 2019/790. Signatories commit to specific respect-for-opt-out implementation and documentation practices.
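TDM opt-outs under Directive 2019/790 can be expressed through several machine-readable channels; robots.txt is one commonly used proxy, not the only one. A minimal sketch using Python's stdlib robots parser, with a hypothetical crawler user-agent:

```python
from urllib.robotparser import RobotFileParser

# robots.txt is one common (not the only) machine-readable opt-out channel.
# "ExampleTDMBot" is a hypothetical crawler user-agent invented for this sketch.
rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleTDMBot",   # the site opts this crawler out of /articles/
    "Disallow: /articles/",
])

def may_mine(url: str) -> bool:
    """True if the parsed robots.txt does not reserve the URL against our crawler."""
    return rp.can_fetch("ExampleTDMBot", url)

print(may_mine("https://example.com/articles/post-1"))  # False: opted out
print(may_mine("https://example.com/about"))            # True
```

A real respect-for-opt-out pipeline would also have to handle per-page signals and the documentation duties the Code adds on top; this only shows the path-matching core.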
Safety and security (GPAI with systemic risk only)
Operationalises the Art 55 additional duties for GPAI models with systemic risk — model evaluation, adversarial testing, incident reporting, cybersecurity protections. Signatories commit to risk-assessment frameworks, red-teaming disciplines, and incident-reporting flows. This chapter applies only to providers whose model is classified as systemic-risk under Article 51 [src].
When do the underlying obligations apply?
Chapter V of the AI Act (the GPAI rules) applies from 2 August 2025 [src]. GPAI-model providers that put their model on the Union market on or after that date must demonstrate compliance with Articles 53 and 55 from day one. The Code is the available mechanism to structure that demonstration.
For GPAI models placed on the market before 2 August 2025, Article 111(3) provides a transition period: providers must take the necessary steps to comply by 2 August 2027. Providers should still check the specific transition rules in Article 111 against their own timeline.
Who has signed the Code
Signatory status is a public fact maintained by the AI Office. Consult code-of-practice.ai for the current list before committing to a GPAI model as part of your stack — a non-signatory upstream provider increases the compliance burden on your downstream side, because you can no longer assume Art 53/55 are met.
Non-signatories are not automatically non-compliant; they simply face a higher evidentiary burden when asked to demonstrate Art 53/55 conformity case-by-case. Some large GPAI providers have declined to sign the Code on specific-chapter grounds; read their public statements before drawing conclusions.
If you are a GPAI provider: should you sign?
The decision is straightforward for most providers and hinges on one question: can you demonstrate Art 53 (and Art 55 if systemic-risk) compliance through some other structured mechanism?
- Sign the Code if you have no independent Art 53/55 compliance programme and no harmonised-standard conformity alternative. The structured demonstration route it provides is more efficient than bespoke case-by-case evidence.
- Decline the Code on a specific chapter if your internal Art 53/55 programme genuinely exceeds the Code's requirements (e.g. a more mature systemic-risk-evaluation programme than the Safety and Security chapter covers). Document the alternative in audit-ready form.
- Waiting for harmonised standards is not a strategy. Harmonised standards under Article 40 are years away for most GPAI duties; the Code is the current reality.
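The three bullets above reduce to one small decision rule. A sketch, with the caveat that both inputs are legal judgment calls, not values you can compute mechanically:

```python
# Hypothetical condensation of the sign/decline decision above.
# Both flags require legal review; this only encodes the branching logic.
def code_decision(has_structured_alternative: bool,
                  alternative_exceeds_code: bool) -> str:
    if not has_structured_alternative:
        return "sign the Code"          # no other structured mechanism exists
    if alternative_exceeds_code:
        # Decline only the relevant chapter; keep the evidence audit-ready.
        return "decline that chapter and document the alternative"
    return "sign the Code"              # an equivalent programme gains nothing by declining

print(code_decision(False, False))  # -> sign the Code
print(code_decision(True, True))    # -> decline that chapter and document the alternative
```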
For Union-established providers, the default path is signing the Code and operating under its templates. Third-country GPAI providers must additionally appoint an authorised representative under Art 54 before placing a model on the Union market [src].
Penalties for non-compliance with Art 53 / 55
Non-compliance with the GPAI obligations in Articles 53 and 55 exposes the provider to administrative fines of up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher [src]. Enforcement sits with the Commission directly (Art 101 is the fining power for GPAI providers), not with Member State market-surveillance authorities, and Commission decisions are reviewable by the Court of Justice of the European Union with unlimited jurisdiction to cancel, reduce, or increase the fine.
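Because the cap is "whichever is higher" of the two figures, the percentage limb starts to dominate once worldwide turnover exceeds EUR 500 million. The arithmetic, using only the figures stated above:

```python
# Fine ceiling from the text above: the higher of EUR 15 million or
# 3% of worldwide annual turnover. This is the statutory cap, not the
# fine the Commission would actually impose in a given case.
def gpai_fine_cap(worldwide_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * worldwide_turnover_eur)

print(gpai_fine_cap(100_000_000))    # 15000000.0 -- fixed floor applies
print(gpai_fine_cap(2_000_000_000))  # 60000000.0 -- 3% limb dominates
```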
Signing the Code is not a fine-avoidance shield; it is a compliance-demonstration mechanism. A signatory that fails to operate per the Code is still exposed to enforcement action. The practical upside of signing is structured evidence, not immunity.
Check your GPAI exposure
Scan your AI stack to determine whether you are building a GPAI model (provider), using one (deployer), or reselling a third-party GPAI system (downstream provider). Free, no signup.
This guide covers the General-Purpose AI Code of Practice as published on 10 July 2025 and the EU AI Act (Regulation 2024/1689) obligations it operationalises. It is not legal advice. Whether a specific model meets the statutory GPAI definition in Art 3(63), whether a GPAI model qualifies as systemic-risk under Art 51, and whether an alternative internal programme genuinely discharges Art 53/55 depend on facts specific to the provider and model. Consult qualified counsel for formal assessment.