Service · EU AI Act

EU AI Act compliance — audit, classification, documentation.

High-risk system obligations apply from 2 August 2026. For AI providers and deployers operating in Europe: scoping, classification, Annex IV documentation, ISO 42001 alignment. Delivered directly by an expert in AI security and cybersecurity.

Book a 20-min scope call

Does the EU AI Act apply to you?

The Act applies to you if you provide or deploy an AI system in the European Union. The key distinction:

  • Provider: you develop an AI system (or have one developed) and place it on the market under your name. Heavy obligations for high-risk systems (quality management system, Annex IV documentation, conformity assessment, CE marking).
  • Deployer: you use a third-party AI system under your own authority. Operational obligations (monitoring, logging, user transparency, impact assessment).

Application timeline:

  • 2 February 2025: prohibitions (Article 5 — cognitive manipulation, social scoring, certain biometrics).
  • 2 August 2025: obligations on General Purpose AI models (GPAI).
  • 2 August 2026: high-risk systems (Annex III) — the critical deadline.
  • 2 August 2027: full application.
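The phased timeline above can be expressed as data. A minimal illustrative helper (not legal advice; the milestone labels are the ones listed above):

```python
from datetime import date

# Phased application dates of the EU AI Act, as listed above.
MILESTONES = {
    date(2025, 2, 2): "prohibitions (Article 5)",
    date(2025, 8, 2): "GPAI model obligations",
    date(2026, 8, 2): "high-risk systems (Annex III)",
    date(2027, 8, 2): "full application",
}

def obligations_in_force(today: date) -> list:
    """Return the milestones already in force on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(obligations_in_force(date(2026, 9, 1)))
# ['prohibitions (Article 5)', 'GPAI model obligations', 'high-risk systems (Annex III)']
```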

Is your system high-risk?

Annex III defines eight broad high-risk use-case areas:

1. Biometrics (post-remote biometric identification, categorization, emotion recognition at work/school).

2. Critical infrastructure (road traffic, water, gas, electricity management).

3. Education and vocational training (admission, evaluation, ranking).

4. Employment (recruitment, selection, evaluation, task assignment).

5. Access to essential public and private services (credit, insurance, social benefits).

6. Law enforcement.

7. Migration, asylum, border control.

8. Justice and democratic processes.

Additionally, a system is high-risk if it is a safety component of an already-regulated product (toys, machinery, medical devices, vehicles). Classification is rarely trivial: it requires a reasoned case-by-case analysis.
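The decision logic described above can be sketched as a first-pass triage. This is an illustrative simplification only (real classification requires the reasoned case-by-case analysis mentioned above); the function name and boolean inputs are ours, not the Act's:

```python
# Illustrative first-pass triage of the EU AI Act risk tiers described above.
# Not legal advice: a real classification is a reasoned case-by-case analysis.

def classify_ai_system(prohibited_practice: bool,
                       annex_iii_use_case: bool,
                       safety_component_of_regulated_product: bool,
                       interacts_with_humans: bool = False) -> str:
    """Return a preliminary EU AI Act risk tier for triage purposes."""
    if prohibited_practice:              # Article 5 practices
        return "prohibited"
    if annex_iii_use_case or safety_component_of_regulated_product:
        return "high-risk"               # Annex III use case or safety component
    if interacts_with_humans:            # e.g. chatbots: transparency duties
        return "limited-risk"
    return "minimal-risk"

# A recruitment tool falls under Annex III (employment):
print(classify_ai_system(False, True, False))   # high-risk
```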

WeeSec methodology — 4 phases

Phase 1 — Mapping and classification (2 weeks). Inventory of AI systems, EU AI Act classification (prohibited / high-risk / limited risk / minimal risk), role identification (provider / deployer), preliminary impact assessment. Deliverable: AI Act register and reasoned classification matrix.

Phase 2 — Gap analysis (3 weeks). For each high-risk system, gap vs detailed requirements: Article 9 (risk management), Article 10 (data governance), Article 11 (Annex IV documentation), Article 13 (user transparency), Article 14 (human oversight), Article 15 (robustness, accuracy, cybersecurity). Deliverable: compliance report with score per requirement.

Phase 3 — Compliance implementation (3 to 6 months depending on scope). Setting up the quality management system (Article 17), drafting Annex IV technical documentation, deploying human oversight and logging, integrating robustness and cybersecurity controls, preparing conformity assessment.

Phase 4 — Market preparation (1 month). Conformity assessment (self-assessment or notified body depending on case), CE marking preparation, EU database registration, post-market surveillance plan.

Alignment with ISO 42001 and NIST AI RMF

ISO/IEC 42001:2023, published in December 2023, is the first international AI management system standard. It can serve as strong partial evidence of conformity for the quality management system (Article 17 EU AI Act) and risk management (Article 9). For an organization already certified to ISO 27001, ISO 42001 reuses 60-70% of the existing management system (shared Annex SL structure).

The NIST AI Risk Management Framework (AI RMF), with its Generative AI Profile (NIST AI 600-1, July 2024), provides an operational framework for AI risk management. Voluntary but globally recognized, it complements ISO 42001 well on technical aspects.

WeeSec combines all three (EU AI Act + ISO 42001 + NIST AI RMF) in a unified approach to avoid duplicated effort.

GPAI compliance (general-purpose models)

If you develop a general-purpose AI model (foundation model), specific obligations have applied since 2 August 2025:

  • Technical documentation compliant with Annex XI.
  • Copyright compliance policy.
  • Public summary of training content (Article 53).
  • Risk evaluation and mitigation.

For systemic-risk GPAI (cumulative training compute > 10^25 FLOPs, typically the largest models): reinforced obligations (documented adversarial testing, serious-incident tracking, model cybersecurity measures).
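A back-of-envelope check against the 10^25 FLOP threshold can use the common "6 × parameters × training tokens" approximation for transformer training compute. Note this rule of thumb comes from the scaling-law literature, not from the Act, and the example model size and token count below are hypothetical:

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold.
# "6 * params * tokens" is a common approximation for dense transformer
# training compute; it is not part of the Act itself.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                      # 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)     # False: below the threshold
```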

If you use a third-party GPAI (OpenAI, Anthropic, Google, Mistral) in production, you must assess the provider with a structured due-diligence questionnaire (DDQ). WeeSec has a DDQ template aligned with the EU AI Act.

Penalties — why you should not wait

EU AI Act penalties are among the most severe in European law; each ceiling is the fixed amount or the percentage of worldwide annual turnover, whichever is higher:

  • Prohibited practices (Article 5): up to €35M or 7% of annual worldwide turnover.
  • High-risk system and GPAI obligations: up to €15M or 3% of worldwide turnover.
  • Incorrect information to authorities: up to €7.5M or 1%.
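The "fixed amount or percentage of turnover, whichever is higher" rule for undertakings makes the ceiling easy to compute. A quick illustrative calculator (the turnover figure is hypothetical; SME-specific rules are ignored for simplicity):

```python
# Fine ceiling under the EU AI Act for undertakings: the fixed cap or the
# percentage of worldwide annual turnover, whichever is higher.
# Simplified: ignores the more favorable rules that apply to SMEs.

def fine_ceiling_eur(turnover_eur: float, fixed_cap: int, pct: float) -> int:
    return max(fixed_cap, round(pct * turnover_eur))

turnover = 2_000_000_000  # hypothetical €2B worldwide annual turnover

print(fine_ceiling_eur(turnover, 35_000_000, 0.07))  # Article 5: 140000000
print(fine_ceiling_eur(turnover, 15_000_000, 0.03))  # high-risk/GPAI: 60000000
```

For small turnovers the fixed cap dominates; for large groups the percentage does, which is why the exposure scales with company size.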

In France, competent authorities are CNIL (fundamental rights, personal data), DGCCRF (consumer protection), and Arcom (audiovisual and online platforms). These authorities can inspect, sanction, and require suspension of a non-compliant system.

The risk is also commercial: enterprise clients will require evidence of EU AI Act compliance before integrating your system. Non-compliance closes markets.

Direct scoping, no commitment.

A 20-minute scope call to qualify your need and provide a firm quote.

Book on Calendly
FAQ

Frequently asked questions.

How do I know if my AI system is high-risk under the EU AI Act?

A system is high-risk if it falls under an Annex III use case (biometrics, critical infrastructure, education, recruitment, essential services, law enforcement, migration, justice) or if it is a safety component of an already-regulated product. The classification must be reasoned in writing, ideally with a joint legal and technical analysis; that is the first phase of the WeeSec audit.

How much does EU AI Act compliance cost?

For a scale-up with 1-3 AI systems, full compliance costs €40-80K over 4 to 6 months (mapping, gap analysis, documentation, quality system implementation, conformity assessment preparation). For a provider of multiple high-risk systems or a GPAI actor: €80-200K. For a simple mapping / gap analysis: €12-25K.

Does ISO 42001 exempt from EU AI Act obligations?

No. ISO 42001 certification does, however, provide strong partial evidence of conformity for the quality management system (Article 17) and risk management (Article 9). It considerably facilitates the demonstration of EU AI Act compliance and is highly valued by enterprise clients attentive to responsible AI.

Who is the French authority for the EU AI Act?

The EU AI Act is implemented in France by several authorities depending on scope: CNIL (personal data, fundamental rights), DGCCRF (consumer protection), Arcom (audiovisual, platforms), and ANSSI (model cybersecurity for systemic-risk GPAI). Interministerial coordination is in place; the competent authority depends on the use case.

My SaaS uses OpenAI / Anthropic in backend — am I concerned?

You are a deployer of an AI system, and the Act applies to you if your use case falls under Annex III (recruitment, credit scoring, biometrics, etc.). You should also verify that your GPAI provider (OpenAI, Anthropic, Google, Mistral) complies with its Article 53 obligations: technical documentation, training-content summary, copyright policy. WeeSec has a DDQ template aligned with the EU AI Act to qualify these providers.

What is the difference between the EU AI Act and GDPR?

The two regulations are complementary: the GDPR governs personal data; the EU AI Act governs AI systems. If your AI system processes personal data, you are subject to both. The EU AI Act introduces requirements specific to AI systems (technical documentation, human oversight, robustness, transparency about the use of AI) that have no GDPR equivalent.