The EU AI Act
Learn the essentials of EU AI Act compliance, from risk management to transparency, and how our solutions support building trustworthy, compliant AI systems.
The EU AI Act is the world’s first comprehensive regulation for artificial intelligence. It establishes rules to ensure AI systems are safe, transparent, ethical, and respect fundamental rights, while fostering innovation. Like GDPR did for data privacy, it is expected to become a global benchmark for AI governance.
The Act applies to providers, deployers, importers, and distributors of AI systems in the EU, regardless of whether the system was developed inside or outside the EU. If the AI system’s output affects people within the EU, the regulation applies.
- Unacceptable Risk: Fully banned (e.g., social scoring by governments, manipulative AI).
- High Risk: AI in critical sectors (employment, education, healthcare, law enforcement, infrastructure) with strict compliance obligations.
- Limited Risk: Subject to transparency requirements (e.g., chatbots must disclose they are AI).
- Minimal Risk: Most AI systems, with no mandatory requirements, but encouraged to adopt voluntary codes of conduct.
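To make the tiers concrete, the sketch below shows how an organization might triage its internal AI inventory. The `RiskTier` enum, the keyword lists, and the `classify` helper are illustrative assumptions only; real classification must follow Annex III of the Act and legal review, not keyword matching.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """The four tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # voluntary codes of conduct

@dataclass
class AISystem:
    name: str
    use_case: str  # free-text description used for triage

# Hypothetical triage keywords; a real assessment requires
# legal analysis of Annex III use cases, not string matching.
HIGH_RISK_DOMAINS = {"employment", "education", "healthcare",
                     "law enforcement", "critical infrastructure"}
BANNED_PRACTICES = {"social scoring", "subliminal manipulation"}

def classify(system: AISystem) -> RiskTier:
    """First-pass triage of an AI system into a risk tier."""
    text = system.use_case.lower()
    if any(p in text for p in BANNED_PRACTICES):
        return RiskTier.UNACCEPTABLE
    if any(d in text for d in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if "chatbot" in text:  # limited risk: must disclose it is AI
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(AISystem("cv-screener", "employment candidate ranking")))
# RiskTier.HIGH
```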
High-risk AI systems must meet strict obligations before being placed on the market or put into service:
- Risk management systems.
- High-quality and unbiased training/testing data.
- Robust data governance.
- Detailed technical documentation.
- Logging and record-keeping (see the sketch after this list).
- Transparency for users.
- Human oversight.
- Accuracy, robustness, and cybersecurity controls.
- Conformity assessments before deployment.
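To make the logging duty concrete, here is a minimal sketch of automatic, append-only audit logging around model inference. The JSON-lines format and field names are illustrative assumptions; the Act requires that high-risk systems can automatically record events, but does not prescribe a format.

```python
import json
import time
import uuid

def log_inference(log_path: str, model_id: str,
                  inputs: dict, output: dict, operator: str) -> str:
    """Append one audit record per inference to a JSON-lines log.

    Returns the record ID so downstream processes (e.g., a human
    review queue) can reference this exact decision.
    """
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),  # when the decision was made
        "model_id": model_id,      # which model version ran
        "operator": operator,      # who or what triggered the run
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

rid = log_inference("audit.jsonl", "cv-screener-v2",
                    {"applicant_id": "A-1042"},
                    {"score": 0.81, "decision": "shortlist"},
                    operator="hr-portal")
```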
The Act sets tiered fines:
- Up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations (e.g., banned AI practices).
- Up to €15 million or 3% of global annual turnover for breaches of most other obligations (e.g., record-keeping failures).
- Up to €7.5 million or 1% of global annual turnover for supplying incorrect information to authorities.
These fines are comparable to, or higher than, GDPR penalties.
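Because each tier's cap is the higher of a fixed amount and a share of worldwide turnover, exposure grows with company size. A minimal sketch of the arithmetic, using a hypothetical turnover figure (the helper name is ours, not from the Act):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Upper bound of a fine tier: the higher of the fixed cap
    and the percentage of worldwide annual turnover."""
    return max(fixed_cap, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2B global annual turnover
print(max_fine(turnover, 35_000_000, 0.07))  # 140000000.0 (banned practices)
print(max_fine(turnover, 15_000_000, 0.03))  # 60000000.0 (other obligations)
```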
While the core obligations apply equally to companies of all sizes, the Act recognizes that SMEs and startups need proportionate support. The EU provides tools such as the AI Act Compliance Checker and tailored guidance to reduce the compliance burden. Partnerships with external advisors (like RT) can help SMEs integrate compliance efficiently and affordably.
- February 2025: Prohibitions on unacceptable-risk AI and AI literacy requirements for providers and deployers come into force.
- August 2025: Core obligations for General Purpose AI (GPAI) models start.
- August 2026 onward: Full enforcement of high-risk and conformity assessment requirements.
Organizations should start preparing now to avoid last-minute gaps.
Article 4 mandates that providers and deployers ensure sufficient AI literacy among staff and system operators. This includes understanding system capabilities, risks, limitations, and ethical implications. Training should be tailored to the role and technical background of the users.
ISO 42001 provides a structured framework for AI governance, risk management, transparency, and ethical deployment. Adopting ISO 42001 can serve as a “gateway compliance framework,” helping organizations meet many requirements of the EU AI Act efficiently.
High-risk AI systems must include mechanisms for human monitoring and intervention. This ensures humans can override or correct AI outputs, preventing harmful or discriminatory outcomes and maintaining accountability.
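One common implementation pattern is a human-in-the-loop gate: outputs below a confidence threshold, or in adverse categories, are held for human review instead of taking effect automatically. The sketch below uses assumed names (`Decision`, `needs_review`) and a made-up threshold; real oversight design depends on the system and its risk profile.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str        # who or what the decision concerns
    outcome: str        # the model's proposed outcome
    confidence: float   # the model's own confidence estimate

def needs_review(d: Decision, threshold: float = 0.9) -> bool:
    """Escalate low-confidence or adverse outcomes to a human."""
    return d.confidence < threshold or d.outcome == "reject"

def apply_decision(d: Decision, reviewer: Optional[str] = None) -> str:
    """Only act automatically when no human review is required."""
    if needs_review(d) and reviewer is None:
        return "queued for human review"  # nothing takes effect yet
    # A named reviewer has seen (and may have overridden) the outcome.
    return f"applied: {d.outcome}"

print(apply_decision(Decision("A-1042", "reject", 0.95)))
# queued for human review -- adverse outcomes always escalate
```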
Non-EU companies offering AI systems that affect people in the EU must comply. This makes the Act extraterritorial, similar to GDPR. Global AI providers will need to adapt their practices or risk losing access to the EU market.
- Assess AI systems: Identify risk categories and roles (provider, deployer, etc.); a minimal inventory sketch follows this list.
- Use EU tools: Leverage the EU AI Act Compliance Checker.
- Implement governance: Establish an AI Management System (ISO 42001 recommended).
- Train staff: Invest in AI literacy programs.
- Engage experts: Seek legal, technical, and compliance support to map obligations and implement frameworks.
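As an illustration of the first step, here is a hedged sketch of a simple compliance inventory: each system records its role and risk tier, and a gap check flags high-risk entries that still lack a conformity assessment. All field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    system: str
    role: str             # "provider", "deployer", "importer", "distributor"
    risk_tier: str        # "unacceptable", "high", "limited", "minimal"
    conformity_done: bool

def compliance_gaps(inventory: list[InventoryEntry]) -> list[str]:
    """Flag high-risk systems still missing a conformity assessment."""
    return [e.system for e in inventory
            if e.risk_tier == "high" and not e.conformity_done]

inventory = [
    InventoryEntry("cv-screener", "deployer", "high", False),
    InventoryEntry("support-chatbot", "provider", "limited", True),
]
print(compliance_gaps(inventory))  # ['cv-screener']
```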