Quick Answer
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law, with most of its provisions applicable from 2 August 2026 and penalties reaching EUR 35 million or 7% of global annual turnover, whichever is higher.
- Risk-based tiers: Unacceptable, High, Limited, Minimal
- General-Purpose AI (GPAI) rules in force since 2 August 2025
- High-risk systems require conformity assessment, CE marking, and post-market monitoring
What Is the EU AI Act?
The EU AI Act is a binding regulation formally signed by the European Parliament and Council on 13 June 2024 and published in the Official Journal of the European Union on 12 July 2024 as Regulation (EU) 2024/1689. It entered into force on 1 August 2024 and creates a harmonised legal framework for placing AI systems on the EU market. The Act applies extraterritorially: any provider or deployer whose AI output is used in the EU must comply, even if the company is headquartered outside Europe.
The law is enforced nationally by Member State market-surveillance authorities and centrally by the European AI Office (housed within DG CNECT of the European Commission) for General-Purpose AI models.
Key Details / Requirements
| Risk Tier | Examples | Core Obligations | Applicable From |
|---|---|---|---|
| Unacceptable | Social scoring, manipulative AI, untargeted facial scraping | Banned | 2 February 2025 |
| High-risk | Recruitment, credit scoring, critical infrastructure, medical devices | Conformity assessment, risk management, CE marking, post-market monitoring | 2 August 2026 (Annex III); 2 August 2027 (Annex I) |
| General-Purpose AI | Foundation models (systemic risk presumed above 10^25 training FLOPs) | Transparency, copyright policy, systemic-risk mitigation | 2 August 2025 |
| Limited | Chatbots, deepfakes | Disclosure to users | 2 August 2026 |
| Minimal | Spam filters, AI in video games | No specific obligations | N/A |
Penalty Structure (Article 99)
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Art. 5) | EUR 35M or 7% of global annual turnover, whichever is higher |
| Non-compliance with high-risk or GPAI obligations | EUR 15M or 3% of global annual turnover |
| Supplying incorrect information to authorities | EUR 7.5M or 1% of global annual turnover |
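The "or" in the table means whichever amount is higher. A minimal sketch of that cap logic, using the Article 99 figures above (the category names and the `max_fine` helper are illustrative, not official terminology, and this is not legal advice):

```python
# Article 99 caps fines at whichever is HIGHER of a fixed amount or a
# percentage of worldwide annual turnover. Figures mirror the table above.
FINE_CAPS = {
    "prohibited_practice":   (35_000_000, 0.07),   # Art. 5 violations
    "high_risk_or_gpai":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation category."""
    fixed, pct = FINE_CAPS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates; this is one reason SMEs received special treatment in Article 99(6).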
Real-World Examples / Case Studies
OpenAI, Google DeepMind, Anthropic, and Mistral signed the EU Code of Practice for General-Purpose AI in July 2025 (Meta notably declined), committing to transparency summaries and copyright compliance under Article 53.
Clearview AI was previously fined EUR 20 million by the Italian Garante in 2022 for untargeted facial-image scraping — exactly the category now permanently banned under Article 5(1)(e) of the AI Act.
LinkedIn suspended AI training on UK user data in September 2024 after the UK ICO raised concerns, and does not train on EU/EEA user data at all — illustrating how the AI Act and data-protection law (GDPR) overlap in practice.
What This Means for Businesses
Any company deploying AI in the EU — from a US SaaS using an LLM to a Singaporean manufacturer shipping smart devices — must map each system to a risk tier before August 2026. High-risk providers need a Quality Management System (Article 17), technical documentation (Annex IV), automatic logs (Article 12), and human oversight (Article 14).
SMEs and startups benefit from lower fine caps (Article 99(6) applies the lower of the two amounts) and access to regulatory sandboxes (Article 57), which every Member State must operate by 2 August 2026.
Compliance Checklist
- Inventory every AI system, including vendor-provided models
- Classify each system against Annex I and Annex III
- As a high-risk deployer, run a Fundamental Rights Impact Assessment where Article 27 requires one
- Establish human oversight protocols and logging
- Register high-risk systems in the EU database (Article 71)
- Draft transparency notices for limited-risk AI
- For GPAI: publish a public training-data summary and a copyright compliance policy
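The classification step above can be sketched as a simple triage function. The tier names follow the Act, but the keyword sets and the `AISystem` shape are illustrative assumptions, not an official taxonomy — real classification requires checking the actual Annex I and Annex III definitions:

```python
from dataclasses import dataclass

# Illustrative use-case buckets drawn from the risk-tier table above.
PROHIBITED_USES = {"social_scoring", "untargeted_facial_scraping"}
HIGH_RISK_USES = {"recruitment", "credit_scoring",
                  "critical_infrastructure", "medical_device"}
LIMITED_RISK_USES = {"chatbot", "deepfake"}

@dataclass
class AISystem:
    name: str
    intended_use: str

def classify(system: AISystem) -> str:
    """Map a system's intended use to a draft risk tier for inventory triage."""
    if system.intended_use in PROHIBITED_USES:
        return "unacceptable"
    if system.intended_use in HIGH_RISK_USES:
        return "high-risk"
    if system.intended_use in LIMITED_RISK_USES:
        return "limited"
    return "minimal"  # default tier: no specific obligations

print(classify(AISystem("CV screener", "recruitment")))  # high-risk
```

A first-pass script like this helps surface which systems need the heavier workstreams (conformity assessment, registration, FRIA) before the August 2026 deadline.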
FAQs
Q: Does the EU AI Act apply to US companies?
Yes — Article 2 applies to any provider whose AI outputs are used in the EU, regardless of establishment.
Q: When do the GPAI rules start?
2 August 2025 for models placed on the market after that date; 2 August 2027 for pre-existing models.
Q: Is ChatGPT high-risk?
ChatGPT itself is a GPAI model. Its use can become high-risk depending on deployment (e.g., used for recruitment screening under Annex III).
Q: Who enforces the Act?
National market-surveillance authorities for AI systems; the European AI Office for GPAI.
Q: Does the Act override GDPR?
No — both apply concurrently. The AI Act is lex specialis for AI-specific obligations.
Q: Are open-source models exempt?
Partially. Open-source GPAI models get reduced obligations unless they pose systemic risk (Article 53(2)).
Q: What is a regulatory sandbox?
A controlled environment established under Article 57 where providers can test AI systems with regulatory guidance.
Conclusion
The EU AI Act is the global benchmark for AI regulation. With enforcement starting in 2026 and fines reaching 7% of global turnover, every organisation shipping AI into Europe needs a documented compliance programme today.
Start your compliance roadmap with Misar AI's AI governance toolkit — built to the EU AI Act and ISO/IEC 42001.