The EU AI Act is the European Union’s comprehensive legal framework for governing artificial intelligence, introducing a risk-based approach to regulation and establishing new obligations for organizations that provide, deploy, or use AI in the EU market. Below are the essentials organizations need to know to navigate compliance under the Act.
Overview of the EU AI Act
The EU AI Act sets out to ensure that AI systems are safe, respect fundamental rights, and foster trustworthy innovation across the EU. It applies to a broad spectrum of operators in the AI value chain, including providers, deployers, importers, distributors, and product manufacturers, regardless of their physical location, if their AI systems or outputs are used within the EU.
Key Features:
- Risk-based classification: AI systems are categorized as unacceptable risk (banned), high risk (strict requirements), limited risk (transparency obligations), or minimal risk (few or no requirements).
- Prohibited practices: Social scoring, manipulative AI, certain biometric identification, and AI exploiting vulnerabilities are banned.
- High-risk AI: Includes systems impacting health, safety, fundamental rights, or essential services. These require conformity assessments, registration, and continuous oversight.
Who Is Impacted?
- Providers: Organizations or individuals developing or having an AI system developed for market placement under their branding.
- Deployers: Entities using AI systems, such as deploying chatbots for customer service.
- Importers: Entities within the EU bringing in AI systems from outside the region.
- Non-EU Operators: The Act applies to non-EU providers and deployers if their AI systems' outputs are used in the EU; non-EU providers must appoint authorized representatives in the EU to ensure compliance.
Requirements
High-Risk AI Systems
- Risk management: Implement continuous risk assessments and mitigation throughout the entire lifecycle.
- Data governance: Ensure quality, origin, and bias mitigation of training and testing data.
- Technical documentation: Maintain detailed documentation covering system design, capabilities, and compliance measures.
- Transparency: Clearly inform users when interacting with AI-powered systems and label AI-generated content (e.g., deepfakes).
- Logs and impact assessments: Maintain operational logs and conduct fundamental rights assessments when offering essential services.
General-Purpose AI (GPAI) Models
- Copyright compliance: Respect EU copyright laws and provide public summaries of training datasets.
- Systemic risk: GPAI models using significant computing resources or posing a broad societal risk face additional obligations, such as incident reporting, cybersecurity, and documentation.
Penalties for AI system operators
Tier 1: Non-compliance with the prohibitions
The heftiest fines apply to making available or using AI systems prohibited by the Act: up to €35,000,000 or up to 7% of annual worldwide turnover for companies, whichever is higher. This exceeds the maximum penalties under the GDPR, making the AI Act's fines among the highest for regulatory non-compliance in the EU.
Tier 2: Non-compliance with obligations
The second-highest fines apply to non-compliance with specific obligations for providers, representatives, importers, distributors, deployers, notified bodies, and users. Non-compliance with the relevant provisions is subject to fines of up to €15,000,000 or up to 3% of annual worldwide turnover for companies.
Specifically, these penalties can be issued for non-compliance with the following provisions on obligations:
- Obligations of the providers of HRAIs under Article 16
- Obligations of authorized representatives under Article 22
- Obligations of the importers of HRAIs under Article 23
- Obligations of the distributors of HRAIs under Article 24
- Obligations of the deployers of HRAIs under Article 26
- Requirements and obligations of notified bodies under Articles 29-34
- Transparency obligations for providers and deployers of certain AI systems under Article 50
Tier 3: Supplying incorrect, incomplete, or misleading information to the authorities
Supplying incorrect or incomplete information violates Article 21 of the AI Act, which requires cooperation with competent authorities. Upon request by a competent national authority, providers of HRAIs must provide the information and documentation necessary to demonstrate the conformity of the HRAI with the relevant requirements.
Replying with incorrect, incomplete, or misleading information to a request of national authorities or notified bodies is subject to fines of up to €7,500,000 or 1% of total annual worldwide turnover, whichever is higher.
Are there any considerations for SMEs?
In the case of SMEs, including start-ups, each fine is capped at whichever is lower of the percentage and the fixed amount, rather than whichever is higher.
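The cap logic across the three tiers can be summarized in a short sketch. This is an illustrative simplification only, not legal advice: the function name, tier labels, and company figures below are hypothetical, and actual fines are set case by case by the competent authorities.

```python
# Fine caps under the EU AI Act penalty tiers described above.
# Maps tier -> (fixed cap in EUR, percentage of annual worldwide turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 7),   # Tier 1: €35m or 7%
    "obligations": (15_000_000, 3),            # Tier 2: €15m or 3%
    "misleading_information": (7_500_000, 1),  # Tier 3: €7.5m or 1%
}

def fine_cap(tier: str, annual_turnover_eur: int, is_sme: bool = False) -> int:
    """Return the maximum fine (EUR) for a given tier and company.

    Large companies face the HIGHER of the fixed amount and the turnover
    percentage; SMEs and start-ups face the LOWER of the two.
    """
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = annual_turnover_eur * pct // 100  # integer euros
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large firm with €2bn turnover: 7% (€140m) exceeds the €35m fixed cap.
print(fine_cap("prohibited_practices", 2_000_000_000))            # 140000000
# An SME with €10m turnover: 7% (€700k) is below €35m, so the lower applies.
print(fine_cap("prohibited_practices", 10_000_000, is_sme=True))  # 700000
```

The SME carve-out is what flips `max` to `min`: for small companies the fixed amounts would otherwise dwarf any turnover-based figure.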
Administrative fines against providers of GPAI models
Under Article 101, providers of GPAI models can be issued fines of up to 3% of total annual worldwide turnover or €15,000,000, whichever is higher. Fines can be incurred if a provider of a GPAI model intentionally or negligently:
- Infringes the provisions of the EU AI Act that are relevant to GPAIs
- Fails to comply with a request for documents or information, or supplies incorrect, incomplete, or misleading information, under Article 91,
- Fails to comply with a measure requested under Article 93,
- Fails to make available to the Commission access to the GPAI model or GPAI with systemic risk with a view to conducting an evaluation under Article 92.
Administrative fines against Union bodies
According to Article 100, the European Data Protection Supervisor can also impose administrative fines on Union agencies, bodies, and institutions. Fines could be up to €1,500,000 for non-compliance with the prohibitions of the Act and €750,000 for non-compliance with obligations other than those laid down in Article 5.
How are penalties decided?
The general principle of the AI Act is that penalties shall be effective, dissuasive, and proportionate to the type of offense, previous actions, and profile of the offender. As such, the EU AI Act acknowledges that each case is individual and designates the fines as a maximum threshold, although lower penalties can be issued depending on the severity of the offense. Factors that may be considered when determining penalties include:
- The nature, gravity, and duration of the offense,
- The intentional or negligent character of infringements,
- Any actions to mitigate the effects,
- Previous fines,
- The size, annual turnover, and market share of the offender,
- Any financial gain or loss resulting from the offense,
- Whether the use of the system is for professional or personal activity.
As there is no Union-wide central authority that issues fines, penalty amounts, taking the above factors into account, are generally determined under the national legal systems of the Member States. For providers of GPAI models and for Union bodies, however, fines are imposed by the Commission and the European Data Protection Supervisor, respectively.
Timeline
- August 1, 2024: The law enters into force.
- February 2, 2025: Prohibitions on banned AI practices become enforceable.
- August 2, 2025: GPAI rules for new models take effect; legacy GPAI models must comply by August 2027.
- August 2, 2026: High-risk AI requirements are enforced.
- August 2, 2027: All AI systems regulated under other EU product safety laws must comply.
Practical Guidance
Certifyi helps organizations automate evidence collection, reporting, and compliance monitoring—streamlining the path to meeting the EU AI Act requirements. AI-driven risk identification and continuous monitoring build trust with stakeholders and support adherence to global standards such as SOC 2, ISO 27001, GDPR, and HIPAA. Certifyi’s scalable platform adapts to organizations of every size and stage.
Next Steps
- Request a demo to see how Certifyi simplifies EU AI Act compliance.
- Contact us for tailored support or learn more about our Partner Program and blog resources.
Engage with Certifyi to proactively manage risk, automate compliance, and secure stakeholder trust in a rapidly evolving regulatory landscape.