Custom Medical Diagnosis Obligations (High Risk)
As a provider (you develop an AI system, or place one on the market under your own name) of high-risk AI systems, you have 25 obligations under the EU AI Act.
Your Obligations
Want the full picture? Read our complete provider obligations guide.
Post-Market Monitoring System
Article 72 →Providers of high-risk AI systems must establish and document a post-market monitoring system proportionate to the AI technologies and the risks. The system must actively and systematically collect, document, and analyse performance data (from deployers and other sources) across the system's lifetime to evaluate continuous compliance with the Chapter III Section 2 requirements. Monitoring is based on a post-market monitoring plan — part of the Annex IV technical documentation — that follows a Commission implementing-act template. Sectoral post-market monitoring under Annex I Union harmonisation legislation may be integrated where it offers equivalent protection.
Serious Incident Reporting
Article 73 →Providers must report serious incidents to the market surveillance authorities of the Member States where the incident occurred immediately after establishing a causal link, and in any event no later than 15 days after becoming aware. Shorter deadlines apply in aggravated cases: 2 days for widespread infringements or serious and irreversible disruption to critical infrastructure, and 10 days where the incident results in the death of a person.
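As a rough illustration of how the reporting deadlines above could be tracked internally, a minimal sketch (the function name and boolean flag are hypothetical; only the 2-day and 15-day figures come from the text):

```python
from datetime import date, timedelta

def reporting_deadline(awareness: date, widespread_or_critical: bool) -> date:
    """Latest Article 73 reporting date: 2 days for widespread
    infringements or critical-infrastructure disruption, otherwise
    15 days after the provider becomes aware of the serious incident.
    (Sketch only -- the Act also requires reporting immediately after
    a causal link is established, whichever comes first.)"""
    days = 2 if widespread_or_critical else 15
    return awareness + timedelta(days=days)

# A serious incident the provider became aware of on 1 March 2025:
print(reporting_deadline(date(2025, 3, 1), widespread_or_critical=False))  # 2025-03-16
print(reporting_deadline(date(2025, 3, 1), widespread_or_critical=True))   # 2025-03-03
```

In practice the earlier of the two triggers (causal link established vs. statutory maximum) governs, so a real tracker would record both dates.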
Disclose AI Interaction to Users
Article 50 →Providers and deployers of certain AI systems must comply with transparency obligations in Art. 50, including: informing natural persons that they are interacting with an AI system (Art. 50(1)); marking synthetic audio/image/video/text outputs in a machine-readable format (Art. 50(2)); informing persons exposed to emotion-recognition or biometric categorisation systems (Art. 50(3)); disclosing AI-generated deep fakes and AI-generated text published on matters of public interest (Art. 50(4)). Information must be given clearly at first interaction (Art. 50(5)).
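Article 50(2) requires a machine-readable marking but does not prescribe a schema (standards such as C2PA exist for media provenance). A purely illustrative sketch of attaching a disclosure field to a generated artefact, with hypothetical field names:

```python
import json

def mark_synthetic(payload: str, generator: str) -> str:
    """Wrap generated content with a machine-readable AI-disclosure
    field. Illustrative structure only -- Art. 50(2) mandates the
    marking, not this particular JSON layout or these field names."""
    return json.dumps({
        "content": payload,
        "ai_generated": True,   # the Art. 50(2) machine-readable flag
        "generator": generator, # hypothetical provenance field
    })

marked = mark_synthetic("Sample output text.", "example-model-v1")
print(marked)
```

A downstream consumer can then detect the flag with `json.loads(marked)["ai_generated"]` before republishing the content.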
GPAI Authorised Representative for Non-EU Providers
Article 54 →Providers of general-purpose AI models established in third countries must, before placing the model on the Union market, appoint an authorised representative (AR) established in the Union by written mandate. The AR must verify the technical documentation has been drawn up and the Art. 53 (and where applicable Art. 55) obligations have been fulfilled; keep a copy of the technical documentation at the disposal of the AI Office and national competent authorities for 10 years; respond to AI Office requests with information and documentation demonstrating compliance; and cooperate with the AI Office and competent authorities on any action concerning the model (including when integrated into downstream AI systems).
Accuracy, Robustness & Cybersecurity
Article 15 →High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle.
Compliance with the High-Risk Requirements
Article 8 →High-risk AI systems must comply with the requirements in Chapter III Section 2, taking account of their intended purpose and the generally acknowledged state of the art. The risk-management system under Art. 9 must be taken into account when ensuring compliance with those requirements. Where a product contains an AI system subject to both this Regulation and Union harmonisation legislation in Section A of Annex I, the provider is responsible for full compliance with both regimes, and may integrate the AI-Act documentation, testing, and reporting into the sectoral documentation already required, to avoid duplication and minimise burden.
Authority Cooperation & Document Production
Article 21 →Providers of high-risk AI systems must, on reasoned request from a competent authority, provide all information and documentation necessary to demonstrate conformity, in an official Union language indicated by the Member State concerned. Providers must also give the authority access to the automatically generated logs of the high-risk AI system where those logs are under their control. Information obtained by the authority is subject to the Regulation's confidentiality regime.
Data Governance
Article 10 →Training, validation, and testing data sets used for high-risk AI systems must meet specific quality criteria regarding relevance, representativeness, errors, completeness, data-governance practices, and bias mitigation measures.
Documentation Retention (10 Years)
Article 18 →Providers of high-risk AI systems must keep, at the disposal of national competent authorities for 10 years after the system is placed on the market or put into service, the technical documentation, the QMS documentation, any notified-body decisions and approved changes, and the EU declaration of conformity. The 10-year retention runs from placement/put-into-service, not from creation of the document.
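Because the 10-year clock runs from placement on the market rather than from a document's creation date, the expiry is a single date per system. A minimal sketch (ignoring leap-day edge cases; the function name is ours, not the Act's):

```python
from datetime import date

def retention_expiry(placed_on_market: date) -> date:
    """Article 18: documentation must stay at the disposal of
    national competent authorities for 10 years from the date the
    system was placed on the market or put into service -- not from
    the date each document was drawn up."""
    return placed_on_market.replace(year=placed_on_market.year + 10)

print(retention_expiry(date(2026, 8, 2)))  # 2036-08-02
```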
Conformity Assessment
Article 43 →Providers of high-risk AI systems must carry out a conformity assessment before placing the system on the market or putting it into service (Art. 43), using either the internal-control procedure in Annex VI or the notified-body procedure in Annex VII depending on the system type and whether harmonised standards were applied.
Log Retention (Six Months)
Article 19 →Providers of high-risk AI systems must keep the automatically generated logs referred to in Art. 12(1), to the extent such logs are under their control. Without prejudice to applicable Union or national law (in particular data-protection law), the logs must be kept for a period appropriate to the intended purpose of the system, of at least six months. Providers that are financial institutions subject to Union financial-services governance requirements must instead maintain these logs as part of the documentation kept under their sectoral law.
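The Article 19 floor ("at least six months") can be encoded as a pruning rule that never deletes below the minimum. A hypothetical sketch, using 183 days as a conservative stand-in for six calendar months:

```python
from datetime import datetime, timedelta

# At least six months (Art. 19); extend if the system's intended
# purpose, or Union/national law, warrants a longer period.
RETENTION = timedelta(days=183)

def prune(logs: list[dict], now: datetime) -> list[dict]:
    """Keep automatically generated log records still inside the
    retention window; records older than the cutoff may be pruned,
    records inside it must not be (sketch, not legal advice)."""
    cutoff = now - RETENTION
    return [rec for rec in logs if rec["timestamp"] >= cutoff]

logs = [
    {"timestamp": datetime(2024, 12, 1)},  # recent: must be kept
    {"timestamp": datetime(2024, 1, 1)},   # past the window: may go
]
print(len(prune(logs, datetime(2025, 1, 1))))  # 1
```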
Provider Obligations Umbrella
Article 16 →Providers of high-risk AI systems have the obligations listed in Art. 16, including ensuring their high-risk AI systems are compliant with the requirements, indicating name/address, having a QMS under Art. 17, keeping documentation (Art. 18) and logs (Art. 19), performing conformity assessment (Art. 43), drawing up an EU declaration of conformity (Art. 47), affixing CE marking (Art. 48), registering in the EU database (Art. 49), and taking corrective action when needed (Art. 20).
Authorised Representative for Non-EU Providers
Article 22 →Providers of high-risk AI systems established in third countries must, before making their systems available on the Union market, appoint an authorised representative (AR) established in the Union by written mandate. The AR, acting per the mandate, must: verify that the EU declaration of conformity and the technical documentation have been drawn up and the conformity assessment carried out; keep the contact details of the provider, a copy of the EU declaration of conformity, the technical documentation, and any notified-body certificate at the disposal of competent authorities for 10 years; respond to reasoned competent-authority requests (including providing access to the automatically generated logs under the provider's control); cooperate with competent authorities on risk-mitigation; and terminate the mandate, informing the market surveillance authority and the AI Office, if the provider acts contrary to its obligations.
Risk Management System
Article 9 →Providers of high-risk AI systems must establish, implement, document, and maintain a risk-management system as a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system.
Quality Management System (QMS)
Article 17 →Providers of high-risk AI systems must put in place a documented quality management system (QMS) — written policies, procedures, and instructions — covering at least: regulatory-compliance strategy including conformity-assessment and modification management; design, development, and quality-assurance procedures; examination, test, and validation procedures; technical specifications including standards; data-management systems and procedures; the risk-management system under Art. 9; the post-market monitoring system; communication procedures with authorities; record-keeping; resource management including security-of-supply measures; and an accountability framework.
Transparency & Instructions for Use
Article 13 →Providers must ensure high-risk AI systems are designed to enable deployers to interpret outputs and use them appropriately, including instructions for use containing the information listed in Art. 13(3).
GPAI Systemic-Risk Notification
Article 52 →Where a general-purpose AI model meets the Art. 51 systemic-risk condition (presumed under Art. 51(2) when cumulative training compute exceeds 10^25 floating-point operations, as the proxy for high-impact capability), its provider must notify the Commission without delay and within two weeks of the requirement being met or becoming known. The notification may include arguments that the model, exceptionally, does not present systemic risks; the Commission decides whether to accept them. The Commission may also designate a model ex officio, including on a qualified alert from the scientific panel. Designated providers may request reassessment after six months.
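The 10^25 FLOP threshold can be sanity-checked against the widely cited ~6·N·D heuristic for training compute (N parameters, D training tokens). This is a community rule of thumb, not a measure the Act prescribes, and the model sizes below are hypothetical:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * parameters
    * tokens approximation -- not the Act's definition of 'cumulative
    amount of computation used for training'."""
    return 6.0 * params * tokens

THRESHOLD = 1e25  # the Art. 51 presumption threshold cited above

# Hypothetical model: 100B parameters trained on 10T tokens
flops = estimated_training_flops(100e9, 10e12)
print(flops, flops >= THRESHOLD)  # 6e+24 False -> below the threshold
```

Under this heuristic, crossing 10^25 FLOPs takes roughly a 200B-parameter model trained on about 8.3T tokens, which is why only the largest training runs are presumed to carry systemic risk.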
Misclassification Procedure (Non-High-Risk Claim)
Article 80 →Where a market surveillance authority has sufficient reason to consider that an AI system classified by its provider as non-high-risk under Art. 6(3) is in fact high-risk, it must evaluate the classification against the Art. 6(3) conditions and Commission guidelines. If the system is high-risk, the authority requires the provider to bring it into compliance and take corrective action within a prescribed period. Providers that fail to comply, or that misclassified deliberately to circumvent the high-risk requirements, are subject to Art. 99 fines. Market surveillance authorities may perform checks taking into account information stored in the EU database.
EU Declaration of Conformity
Article 47 →Providers of high-risk AI systems must draw up a written, machine-readable, signed EU declaration of conformity per Art. 47 including the information listed in Annex V, keep it up to date, and keep a copy at the disposal of national competent authorities for 10 years after the AI system has been placed on the market or put into service.
Register in EU Database
Article 49 →Providers of high-risk AI systems listed in Annex III (except law-enforcement systems under Annex III point 1) must register themselves and their systems in the EU database established under Art. 71 before placing the system on the market or putting it into service (Art. 49).
AI Literacy for Staff
Article 4 →Providers and deployers must take measures to ensure a sufficient level of AI literacy among staff and any other persons dealing with the operation and use of AI systems on their behalf, having regard to their technical knowledge, experience, education, and training, and the context in which the AI systems are to be used.
Real-World Testing Outside Sandboxes
Article 60 →Providers or prospective providers of Annex III high-risk AI systems may conduct real-world testing outside AI regulatory sandboxes only when all of the following conditions are met: a real-world testing plan has been drawn up and submitted to the market surveillance authority in the Member State where testing will occur; the authority has approved the plan (or the statutory tacit-approval period has elapsed under national law); the provider is established in the Union or has appointed a Union legal representative for the testing; the testing does not exceed the duration necessary to meet its objectives and in no case more than six months (extendable by up to another six months on notification); additional subject-protection conditions for persons potentially affected by the testing are satisfied; and the testing is registered in the EU database before it begins.
National Risk-Procedure Cooperation
Article 79 →Where a Member State market surveillance authority has sufficient reason to consider that an AI system presents a risk to health, safety, or fundamental rights, it must evaluate the system against all AI Act requirements and obligations. If the system is non-compliant, the authority must, without undue delay, require the relevant operator to take corrective action (bring into compliance, withdraw, or recall) within a period it prescribes and in any event within the shorter of 15 working days or the period provided by relevant Union harmonisation legislation. Operators must cooperate. Cross-border non-compliance is notified to the Commission and other Member States. Where the operator fails to act, the authority takes provisional measures (prohibiting or restricting, withdrawing, recalling) and notifies the Commission and other Member States.
High-Risk Derogation Assessment
Article 6 →A provider that considers an AI system listed in Annex III not to be high-risk under the Art. 6(3) derogation (narrow procedural task; improving prior human activity; detecting decision-making patterns without replacing human assessment; or preparatory-to-use-case task) must document that assessment before the system is placed on the market or put into service, and register both the system and the assessment in the EU database under Art. 49. Profiling of natural persons always keeps the system high-risk. The Commission can amend the list of exempting conditions.
Compliant-But-Risky Procedure
Article 82 →Even where a high-risk AI system complies with the Regulation, if a market surveillance authority finds — after consulting the relevant national public authority under Art. 77(1) — that the system nevertheless presents a risk to health, safety, fundamental rights, or other aspects of public-interest protection, the authority must require the operator to take appropriate measures so the system no longer presents that risk within a prescribed period. The provider or relevant operator must ensure corrective action is taken across all affected systems on the Union market within the timeline fixed by the authority. Member States notify the Commission and the Commission evaluates whether the measure is justified.