Does the EU AI Act apply to you?
It applies if you provide or deploy an AI system in the European Union. The key distinction:
- Provider: you develop an AI system (or have one developed) and place it on the market under your own name. Heavy obligations for high-risk systems (quality management system, Annex IV technical documentation, conformity assessment, CE marking).
- Deployer: you use a third-party AI system under your own authority. Lighter, operational obligations (monitoring, log retention, transparency toward affected persons, fundamental rights impact assessment where required).
Application timeline:
- 2 February 2025: prohibitions (Article 5 — cognitive manipulation, social scoring, certain biometrics).
- 2 August 2025: obligations on General Purpose AI models (GPAI).
- 2 August 2026: high-risk systems (Annex III) — the critical deadline.
- 2 August 2027: high-risk systems embedded in products regulated under Annex I; the Act then applies in full.
Is your system high-risk?
Annex III defines 8 major high-risk use cases:
1. Biometrics (remote biometric identification, biometric categorization, emotion recognition; note that emotion recognition at work or school falls under the Article 5 prohibitions).
2. Critical infrastructure (road traffic, water, gas, electricity management).
3. Education and vocational training (admission, evaluation, ranking).
4. Employment (recruitment, selection, evaluation, task assignment).
5. Access to essential public and private services (credit, insurance, social benefits).
6. Law enforcement.
7. Migration, asylum, border control.
8. Justice and democratic processes.
Additionally, a system is high-risk if it is a safety component of a product already covered by EU harmonisation legislation (toys, machinery, medical devices, vehicles). Qualification is rarely trivial: it requires a reasoned, case-by-case analysis.
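The triage logic above can be sketched in a few lines of code. This is a minimal illustration only: the tags, class names, and categories below are simplified assumptions, and any flagged system still needs the case-by-case legal analysis just described.

```python
from dataclasses import dataclass

# Simplified tags for the Annex III areas listed above (illustrative, not legal categories).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
# A few Article 5 prohibited practices (also illustrative tags).
PROHIBITED_PRACTICES = {"social_scoring", "cognitive_manipulation"}

@dataclass
class AISystem:
    name: str
    use_case: str           # simplified tag for the primary use case
    safety_component: bool  # safety component of an already-regulated product?

def screen(system: AISystem) -> str:
    """First-pass triage; anything flagged still needs legal review."""
    if system.use_case in PROHIBITED_PRACTICES:
        return "prohibited"
    if system.safety_component or system.use_case in ANNEX_III_AREAS:
        return "high-risk (confirm case by case)"
    return "limited/minimal risk (check transparency duties)"

print(screen(AISystem("CV screener", "employment", False)))
# high-risk (confirm case by case)
```

A register built this way is only a screening aid; it feeds the reasoned classification matrix, it does not replace it.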
WeeSec methodology — 4 phases
Phase 1 — Mapping and classification (2 weeks). Inventory of AI systems, EU AI Act classification (prohibited / high-risk / limited risk / minimal risk), role identification (provider / deployer), preliminary impact assessment. Deliverable: an AI Act register and a reasoned classification matrix.
Phase 2 — Gap analysis (3 weeks). For each high-risk system, a gap assessment against the detailed requirements: Article 9 (risk management), Article 10 (data governance), Article 11 (Annex IV documentation), Article 13 (user transparency), Article 14 (human oversight), Article 15 (robustness, accuracy, cybersecurity). Deliverable: compliance report with a score per requirement.
Phase 3 — Compliance implementation (3 to 6 months depending on scope). Setting up the quality management system (Article 17), drafting Annex IV technical documentation, deploying human oversight and logging, integrating robustness and cybersecurity controls, preparing conformity assessment.
Phase 4 — Market preparation (1 month). Conformity assessment (self-assessment or notified body depending on case), CE marking preparation, EU database registration, post-market surveillance plan.
How this fits with ISO 42001 and NIST AI RMF
ISO/IEC 42001:2023, published in December 2023, is the first international AI management system standard. It maps closely onto the quality management system (Article 17) and risk management (Article 9) requirements of the EU AI Act and can support a partial presumption of conformity. For an organization already certified to ISO 27001, ISO 42001 reuses 60-70% of the existing management system thanks to the shared Annex SL structure.
NIST AI RMF (Risk Management Framework) with its Generative AI Profile (NIST AI 600-1, July 2024) provides an operational framework for AI risk management. Voluntary but globally recognized, it complements ISO 42001 well on technical aspects.
WeeSec combines all three (EU AI Act + ISO 42001 + NIST AI RMF) in a unified approach to avoid duplicated effort.
GPAI compliance (general-purpose models)
If you develop a general-purpose AI model (foundation model), specific obligations have applied since 2 August 2025:
- Technical documentation compliant with Annex XI.
- Copyright respect policy.
- Sufficiently detailed public summary of the training content (Article 53).
- Risk evaluation and mitigation.
For GPAI with systemic risk (training compute above 10^25 FLOPs, in practice the largest models): reinforced obligations (documented adversarial testing, serious-incident reporting, model cybersecurity measures).
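The systemic-risk threshold is a simple numeric test on cumulative training compute. A one-line check (the FLOP figures below are illustrative, not real model data):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training-compute threshold (Article 51)

def is_systemic_risk(training_flops: float) -> bool:
    """True when a model's training compute exceeds the systemic-risk threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(is_systemic_risk(3.0e25))  # True: above the threshold
print(is_systemic_risk(5.0e24))  # False: below the threshold
```

Note that crossing the threshold triggers a presumption, and the Commission can also designate models on other criteria, so the number alone is not the whole story.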
If you use a third-party GPAI (OpenAI, Anthropic, Google, Mistral) in production, you should vet the provider with a structured due-diligence questionnaire (DDQ). WeeSec has a DDQ template aligned with the EU AI Act.
Sanctions — why not wait
EU AI Act sanctions are among the most severe in European law:
- Prohibited practices (Article 5): up to €35M or 7% of annual worldwide turnover, whichever is higher.
- Breaches of high-risk system and GPAI obligations: up to €15M or 3% of worldwide turnover.
- Supplying incorrect information to authorities: up to €7.5M or 1%.
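Because each ceiling is the fixed amount or the percentage of turnover, whichever is higher, exposure scales with company size. A quick illustration (the turnover figure is hypothetical):

```python
def fine_ceiling(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum fine: the higher of the fixed amount and pct of worldwide turnover."""
    return max(fixed_eur, turnover_eur * pct)

# Article 5 breach for a company with EUR 1bn annual turnover (hypothetical):
print(f"EUR {fine_ceiling(1_000_000_000, 35_000_000, 0.07):,.0f}")
# EUR 70,000,000
```

For that company, the 7% branch dominates the €35M floor, so the ceiling is €70M.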
In France, the competent authorities are the CNIL (fundamental rights, personal data), the DGCCRF (consumer protection), and Arcom (audiovisual and online platforms). These authorities can inspect, impose sanctions, and order the suspension of a non-compliant system.
The risk is also commercial: enterprise clients increasingly require proof of EU AI Act compliance before integrating your system, and non-compliance closes markets.