
EU AI Act high-risk systems: applicable from August 2, 2026

On August 2, 2026, the EU AI Act enters its most consequential phase: the obligations for high-risk systems defined in Annex III. Here is the operational roadmap for providers and deployers, with the 7 mandatory technical pillars to have in production.


The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. Its application is staggered across 4 milestones, the most consequential being August 2, 2026, when the obligations for high-risk systems defined in Annex III start to apply.

By that date, every provider and deployer of an AI system on the European market that falls under one of the high-risk categories must demonstrate compliance with 7 technical requirements. Sanctions: up to €15 million or 3% of worldwide turnover, whichever is higher.

Does this apply to you?

Two cumulative conditions must be met:

Condition 1: AI system used in the European Union. Whether you are headquartered in Paris, San Francisco or Tel Aviv, if your AI system is used by people in the EU, the AI Act applies.

Condition 2: high-risk use case. Annex III defines 8 categories:

  1. Biometrics ("post" remote biometric identification, biometric categorization, emotion recognition).
  2. Critical infrastructure (road traffic, water, gas, electricity).
  3. Education and vocational training (admission, evaluation, ranking).
  4. Employment (recruitment, selection, performance evaluation, task assignment).
  5. Access to essential public and private services (credit scoring, insurance, social benefits).
  6. Law enforcement.
  7. Migration, asylum, border control.
  8. Justice and democratic processes.

Additionally, a system is high-risk if it is a safety component of an already-regulated product (toy, machinery, medical device, vehicle).

The 7 mandatory technical pillars

Pillar 1 — Risk management system (Article 9)

Documented and operational risk management system covering the entire lifecycle of the AI system: design, development, deployment, monitoring, retirement. Iterative methodology, clear roles, evidence of application.
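To make this concrete, here is a minimal sketch of a machine-readable risk register entry in Python; the field names, scoring scale and lifecycle stages are our own illustrative choices, not an Article 9 template.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class LifecycleStage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class RiskEntry:
    """One risk register entry; fields are illustrative, not a prescribed format."""
    risk_id: str
    description: str
    stage: LifecycleStage
    severity: int      # 1 (low) to 5 (critical) -- pick your own scale
    likelihood: int    # 1 (rare) to 5 (frequent)
    mitigation: str
    owner: str         # accountable role, not a person's name
    last_reviewed: date

    @property
    def score(self) -> int:
        # Simple severity x likelihood scoring; substitute your own methodology.
        return self.severity * self.likelihood

register = [
    RiskEntry("R-001", "Discriminatory outcomes for a protected group",
              LifecycleStage.MONITORING, severity=5, likelihood=2,
              mitigation="Quarterly bias audit on production data",
              owner="Head of Data Science", last_reviewed=date(2026, 1, 15)),
]
# Review the register from highest to lowest score at each iteration.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.risk_id, risk.score)
```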

Pillar 2 — Data governance (Article 10)

Quality of training, validation and test data: relevance, representativeness, accuracy. Traceability of data sources, identification of potential biases, mitigation measures, data sheets.
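As an illustration, a dataset record in the spirit of "datasheets for datasets" could look like the sketch below; the fields are assumptions on our part, not an Article 10 schema.

```python
# Minimal dataset datasheet; field names are illustrative, not prescribed.
datasheet = {
    "name": "loan_applications_2020_2024",
    "sources": ["internal CRM export", "credit bureau feed"],
    "collection_period": "2020-01 to 2024-12",
    "intended_use": "training and validation of credit-scoring model v3",
    "representativeness": "EU applicants only; under-represents ages 18-25",
    "known_biases": ["historical approval bias against thin-file applicants"],
    "mitigations": ["reweighting by age bracket", "bias metrics in CI pipeline"],
    "contains_personal_data": True,
}

# A simple completeness check before a dataset enters training.
missing = [k for k, v in datasheet.items() if v in (None, "", [])]
assert not missing, f"datasheet incomplete: {missing}"
```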

Pillar 3 — Technical documentation (Article 11 + Annex IV)

Living technical documentation: general system description, design choices, training data, performance metrics, accuracy and robustness measures, limits and known failure modes. Mandatory format defined in Annex IV.
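One practical approach is to keep the documentation skeleton as versioned data so gaps stay visible in review. The section names below paraphrase Annex IV's headings; check the regulation's exact wording before relying on them.

```python
# Annex IV documentation skeleton; section names are paraphrased and
# must be checked against the regulation's exact text.
technical_doc = {
    "general_description": "intended purpose, provider, versions, hardware",
    "detailed_description": "development process, design choices, architecture",
    "monitoring_and_control": "capabilities, limitations, oversight measures",
    "performance_metrics": "accuracy, robustness, cybersecurity results",
    "risk_management": "reference to the Article 9 risk management file",
    "lifecycle_changes": "log of substantial modifications",
    "standards_applied": "harmonised standards or alternative solutions",
    "declaration_of_conformity": "copy of the EU declaration",
    "post_market_plan": "post-market monitoring plan",
}

# Empty sections show up immediately in review.
todo = [section for section, content in technical_doc.items() if not content]
print(todo)
```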

Pillar 4 — Logging and traceability (Article 12)

Automatic logging of system events: input data, decisions made, errors. Retention must be sufficient to enable post-hoc audits; for high-risk systems, logs must be kept for at least 6 months, often longer.
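A minimal sketch of such logging with Python's standard library; the JSON event schema is an assumption, not a prescribed format, and production systems would ship these records to tamper-evident storage.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("decisions.log")   # in production: append-only storage
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(model_version: str, input_ref: str, output: dict, operator: str) -> None:
    """Append one auditable decision record; the schema is illustrative."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,    # pointer to the stored input, not raw personal data
        "output": output,
        "operator": operator,      # the human overseeing the decision (Article 14)
    }))

log_decision("credit-v3.2", "inputs/2026/abc123",
             {"score": 0.71, "decision": "refer"}, "analyst-42")
```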

Pillar 5 — Transparency for users (Article 13)

Clear information to users: nature of the AI system, capabilities, limits, intended uses, level of accuracy. Avoid the "black box" effect for the user.

Pillar 6 — Human oversight (Article 14)

Operational human oversight: designated humans capable of monitoring the system, understanding its capabilities and limits, intervening or stopping operation if necessary. Tools and information necessary to make oversight effective.
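A common implementation pattern is a confidence-gated human-in-the-loop: the system acts autonomously only within defined bounds and defers everything else to a designated reviewer. A minimal sketch, where the threshold and review mechanism are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelDecision:
    outcome: str
    confidence: float

def gated_decide(decision: ModelDecision,
                 escalate: Callable[[ModelDecision], str],
                 threshold: float = 0.90) -> str:
    """Defer to a human reviewer below the confidence threshold (illustrative value)."""
    if decision.confidence >= threshold:
        return decision.outcome    # autonomous path -- still logged and reviewable
    return escalate(decision)      # the designated human makes the final call

# Stub reviewer that routes everything to a manual queue.
print(gated_decide(ModelDecision("approve", 0.62), lambda d: "manual_review"))
```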

Pillar 7 — Accuracy, robustness and cybersecurity (Article 15)

Demonstrated accuracy and robustness levels (mandatory metrics in technical documentation). Resistance to errors, faults, inconsistencies. Cybersecurity measures appropriate to risks (adversarial testing, incident response).
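As one example of what "resistance to errors and inconsistencies" can mean in practice, the sketch below measures output stability under small random input perturbations; the model stub, noise level and tolerance are all assumptions to adapt to your system.

```python
import random

def model(features: list[float]) -> float:
    """Stub standing in for real inference; replace with your model call."""
    return sum(features) / len(features)

def perturbation_stability(features: list[float], n_trials: int = 100,
                           noise: float = 0.01, tol: float = 0.05) -> float:
    """Fraction of small perturbations that keep the output within tolerance."""
    baseline = model(features)
    stable = sum(
        abs(model([x + random.uniform(-noise, noise) for x in features]) - baseline) <= tol
        for _ in range(n_trials)
    )
    return stable / n_trials

# Report this kind of metric in the Annex IV technical documentation.
print(f"stability: {perturbation_stability([0.2, 0.8, 0.5]):.0%}")
```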

Roadmap to August 2, 2026

Phase 1 — Mapping and classification (1 month)

Inventory of AI systems used or provided. AI Act classification (prohibited / high-risk / limited risk / minimal risk). Role identification (provider / deployer). Deliverable: a classification matrix with documented reasoning for each system.
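The matrix itself can be as simple as one record per system, as in this sketch; the system names, categories and wording are illustrative.

```python
# One record per AI system; all values are illustrative.
classification_matrix = [
    {"system": "cv-screening-bot",
     "role": "deployer",
     "annex_iii_category": "employment",
     "risk_class": "high",
     "reasoning": "Ranks candidates for recruitment (Annex III, point 4)."},
    {"system": "internal-doc-search",
     "role": "deployer",
     "annex_iii_category": None,
     "risk_class": "minimal",
     "reasoning": "No Annex III use case; not a safety component."},
]

high_risk = [s["system"] for s in classification_matrix if s["risk_class"] == "high"]
print(high_risk)  # systems that move on to the gap analysis
```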

Phase 2 — Gap analysis (3 weeks)

For each high-risk system, measure the gap against the requirements of Articles 9-15. Compliance scoring per requirement. Action plan to close the gaps.
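Scoring can stay deliberately simple, as in the sketch below; the 0-to-3 scale and the article mapping are our own conventions, not prescribed ones.

```python
# Gap scores per requirement: 0 = absent, 3 = fully compliant (illustrative scale).
scores = {
    "art9_risk_management": 1,
    "art10_data_governance": 2,
    "art11_documentation": 0,
    "art12_logging": 1,
    "art13_transparency": 2,
    "art14_human_oversight": 1,
    "art15_robustness": 1,
}

overall = sum(scores.values()) / (3 * len(scores))   # 0..1 compliance ratio
worst_first = sorted(scores, key=scores.get)         # biggest gaps first
print(f"overall: {overall:.0%}; top gaps: {worst_first[:3]}")
```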

Phase 3 — Compliance implementation (3 to 6 months)

Quality system implementation (Article 17), Annex IV documentation drafting, monitoring and logging, robustness and cybersecurity controls, conformity assessment preparation.

Phase 4 — Market preparation (1 month)

Conformity assessment (self-assessment or notified body depending on case), CE marking, EU database registration, post-market surveillance plan.

ISO 42001 to facilitate the journey

ISO/IEC 42001:2023 provides a solid foundation for the quality system (Article 17) and risk management (Article 9), even if formal presumption of conformity will come through harmonised standards. For an organization already certified ISO 27001, ISO 42001 reuses 60-70% of the existing system thanks to the shared Annex SL structure.

Recommended sequence: ISO 27001 if not already in place, then ISO 42001 in 6-9 additional months, then EU AI Act compliance built on that shared base.

For deployers (using third-party AI)

If you use OpenAI, Anthropic, Google or Mistral in a high-risk use case, you are a deployer and must:

  • Audit your provider (due diligence questionnaire aligned with Article 53: technical documentation, training data summary, copyright compliance).
  • Implement human oversight on your side.
  • Log decisions and outputs.
  • Inform end users of the AI nature of the system.
  • Conduct a fundamental rights impact assessment (Article 27, required for certain deployers).

Sanctions and operational risks

EU AI Act sanctions are among the most severe in European law:

  • Prohibited practices (Article 5): up to €35M or 7% of worldwide turnover.
  • High-risk and GPAI obligations: up to €15M or 3% of worldwide turnover.
  • Incorrect information to authorities: up to €7.5M or 1%.

The commercial risk is just as concrete: enterprise clients will demand evidence of EU AI Act compliance before integrating your system. Non-compliance closes markets.

Frequently asked questions

When does the EU AI Act enter into force for high-risk systems?

The EU AI Act (Regulation 2024/1689) applies to high-risk systems defined in Annex III from August 2, 2026. Prohibitions (Article 5) have been in force since February 2, 2025, and General Purpose AI model obligations since August 2, 2025. Full application on August 2, 2027.

Is my AI system high-risk under the EU AI Act?

A system is high-risk if it falls under an Annex III use case (recruitment, credit scoring, biometrics, critical infrastructure, justice, education, essential services) OR if it is a safety component of an already-regulated product. A documented, case-by-case analysis is essential, because qualification entails heavy obligations (quality system, documentation, monitoring).

What obligations for a high-risk AI system?

Quality management system (Article 17), technical documentation (Annex IV), risk management (Article 9), training data governance, traceability (logs), user transparency, human oversight, robustness and cybersecurity. Conformity assessment before market placement, CE marking, registration in the EU database.

What is the difference between provider and deployer of an AI system?

The provider develops or has the system developed and places it on the market. The deployer uses the system under its own authority. Obligations differ: the provider bears technical compliance, the deployer bears operational implementation (monitoring, logging, user transparency, impact assessment).

What sanctions does the EU AI Act provide?

Up to €35M or 7% of worldwide turnover for violations of prohibitions (Article 5). Up to €15M or 3% for obligations applicable to high-risk systems and GPAI. Up to €7.5M or 1% for providing incorrect information to authorities.

Facing this topic at your company?

A 20-minute scoping call. No sales pitch.

Book on Calendly