CRA Compliance  ·  AI Act  ·  European Union

Product & AI security
for European startups & scale-ups.

CRA and AI Act compliance, audit and hardening of your product and AI systems. For teams that take their security seriously, without slowing down the product roadmap.

Among our clients
Doctrine
Pretto
Axonaut
Mobile Club

01 · Context

Three pressures stacking up. Your customers, your investors and the regulator see all three.

i.

Regulatory deadlines

The Cyber Resilience Act and the AI Act roll out in parallel between August 2026 and December 2027, and each milestone brings binding obligations. Without demonstrated compliance: no more placing products on the EU market, no more updates to already-deployed products. The preparation window is closing.

ii.

AI security risks in production

Prompt injection, data leakage, model poisoning, shadow AI, vendor dependency. Attacks on AI systems don't show up in conventional pentests, and the cost of an incident far exceeds the technical remediation bill.

iii.

Due diligence pressure

Your enterprise customer security questionnaires are getting longer. Your investors ask precise questions about AI governance. A credible roadmap has become a commercial prerequisite, not a bonus.

02 · Services

Two engagements, clearly scoped.
Centred on what you must concretely demonstrate, and do.

Service 01

CRA audit, roadmap & implementation.

A complete journey, from exposure mapping to technical implementation, so you can meet the September 2026 and December 2027 milestones with confidence.

  • Mapping of your CRA exposure
  • Product & supply-chain security audit
  • CI/CD pipeline & infrastructure hardening
  • Compliance roadmap (milestones 09/2026 & 12/2027)
  • Technical implementation of the roadmap
  • Documentation ready for external audit
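One concrete piece of the CRA documentation duty is a software bill of materials (SBOM) for the product. As a hedged illustration of where such an inventory starts (a minimal sketch, not the engagement's actual tooling; real pipelines use dedicated SBOM generators and the full CycloneDX schema), a stdlib-only Python example that lists the packages installed in the current environment as a CycloneDX-style component inventory:

```python
# Minimal CycloneDX-style SBOM sketch (illustrative only).
# A real CRA-grade SBOM would come from a dedicated tool and
# carry the complete CycloneDX schema, not just name/version.
import json
import importlib.metadata


def build_sbom() -> dict:
    components = [
        {
            "type": "library",
            "name": dist.metadata["Name"],
            "version": dist.version,
        }
        for dist in importlib.metadata.distributions()
        if dist.metadata["Name"]  # skip entries with broken metadata
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": sorted(components, key=lambda c: c["name"].lower()),
    }


if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```

The point of even a toy inventory like this: the SBOM is regenerated on every build in CI, so the documentation an auditor sees always matches what actually ships.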
Service 02

Audit, roadmap & AI security hardening.

For software vendors running AI in production (AI-natives), as well as teams consuming third-party AI services (Bedrock, OpenAI, Mistral, copilots, embedded chatbots).

  • Governance scoping for models & AI use (including shadow AI & vendor risk)
  • AI Act compliance: documentation, evaluation, reporting
  • Threat modeling: prompt injection, data leakage, model manipulation, jailbreak, agent hijacking
  • Technical guardrails implementation: input sanitization, output classifiers, tool & secret isolation, execution sandbox
  • ML pipeline security: model signing, dataset validation, prompt & output observability
  • Direct implementation, or pairing with your teams. Code, not slides.
03 · Method

One method, four phases. Same for both engagements, adapted to your scope.

  1. Initial scoping

    20 minutes to understand your product, your stack, your regulatory exposure and the questions your customers or investors are already asking. We validate together whether the engagement makes sense, and at what scope.

  2. Technical diagnostic

    In-depth review: code, infrastructure, AI pipelines, internal processes. Identification of gaps vs. CRA, AI Act and best practices. Prioritization by business criticality, not dogma.

  3. Implementation & support

    Implementation of remediations, directly or pairing with your teams. Target architectures, controls, runbooks. Security is written in code, not in slides.

  4. Final deliverable & roadmap

    Complete dossier, audit-ready documentation, and a prioritized, budgeted roadmap for what comes next. Your teams know exactly what was done, why, and what remains to be maintained, autonomously.

04 · Founder's note

A note from the founder.

I founded WeeSec because product and AI security can't be solved with a checklist or an off-the-shelf tool. It's built with your teams, in your code, over time.

Three commitments, no more: scope what must be scoped, harden what deserves it, and transfer enough knowledge that your teams come out autonomous.

Experience
10+ years of experience. Large groups (BNP Paribas, Société Générale) then startups & scale-ups, 80+ CTOs supported.
Education
Doctor-engineer (PhD) in cybersecurity from Institut Mines-Télécom.
Applied AI
MIT Applied AI certified.
Posture
Vendor-neutral practice. No vendor commissions, no commercial ties to the tools we recommend.
Market reading
Demonstrated strategic reading of the European AI-for-cyber market.
05 · FAQ

Frequently asked questions.

How long does an engagement take?

It depends entirely on scope. An audit typically runs a few weeks; a complete implementation spans several months. The initial scoping call exists precisely to set a realistic range with you, with no commitment.

What concrete deliverables will I receive?

Depending on the engagement: exposure mapping, detailed audit report, prioritized and budgeted roadmap, documented target architecture, operational runbooks, audit-ready documentation, knowledge transfer workshops. Everything is delivered in a format your teams can take over and evolve.

Who actually performs the audit?

The founder personally leads engagements, and may bring in senior consultants when the mission calls for it.

Do you work under NDA?

Yes, always, before any detailed technical exchange. The NDA can be yours or a standard one we provide. Confidentiality is non-negotiable; it's also the baseline posture in this trade.

Which standards and frameworks do you cover?

Cyber Resilience Act, AI Act, ISO 27001, ISO 42001, NIST CSF, NIST AI RMF, OWASP (including LLM Top 10), SOC 2, depending on what serves your situation. The goal is to align with what your customers and the regulator actually expect, not to stack certifications.

How much does an engagement cost?

Each engagement is priced according to your exposure and perimeter. Pricing is set at initial scoping, after we understand your context, not before. Book a Calendly slot to discuss it in 20 minutes.

Next step: 20 minutes to scope.

No commitment, no cold sales pitch. We assess together whether an engagement makes sense.

Book a Calendly call