I do.
Penetration tests check if attackers can get in. They don't check if your AI agent has permissions it should never have had, or if anyone approved the actions it took last Tuesday. That's a different layer. That's what gets audited.
See what an AI infrastructure scan looks like with Agent Shield →
PRODUCTS
From standalone sessions to in-depth audits of your AI systems.
START HERE
Strategic · AI Infrastructure
90 min · Assessment with technical evidence · 7,500 DKK
Map your exposure. Who owns the decision when the system fails? Output: action card with three prioritised next steps.
See the product →
Open Source · Governance Framework
The umbrella framework that ties Airlock, Airlock CLI, and runtime policy enforcement into one coherent governance layer.
Agent Shield is the full governance stack for AI agents. It connects constraint files, compliance reporting, and runtime enforcement so nothing slips through the cracks.
View on GitHub →
Open Source · Governance Spec
A config file that defines what your AI agent is not allowed to do. If it's not in the file, the agent can't do it.
One YAML file. No code required. Drop it into your project and your agent is blocked from doing what it shouldn't.
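As a sketch, such a constraint file might look like the following. The keys and action names are illustrative only, not the actual Airlock schema:

```yaml
# Illustrative sketch -- see the Airlock repo for the real schema.
version: 1
default: deny            # anything not explicitly allowed is blocked
allow:
  - read_file
  - summarise_text
deny:
  - send_email
  - delete_file
  - outbound_http
```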
View on GitHub →
Read the rationale →
Open Source · Compliance Reporting
Run one command. Get a report showing which EU AI Act articles your system covers — and which it doesn't.
Compliance evidence you can hand to your DPO or auditor. Generated from your terminal in seconds.
View on GitHub →
GO DEEPER
Stress test · Pre-deployment
25,000 DKK · 5 business days
Simulated API failures against your agent endpoints. Fail-closed verification — does your agent break out? Binary EU AI Act exposure report.
See product →
Analysis · Deterministic
15,000 DKK · Deterministic analysis
Deterministic review of what your agents can and cannot do. We map actual capability boundaries, test whether your architecture stops on unknown input, and deliver concrete policy patches ready to deploy. Not recommendations. Enforceable rules.
See product →
Deterministic security
PII filtering before the LLM call
Removes national IDs, emails, API keys and credentials from prompts before they leave your network. Deterministic — not a system prompt.
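As an illustration of the deterministic approach (regex substitution before the call, not a prompt instruction), a minimal scrubber might look like this. The patterns are hypothetical and far from complete, and this is not Flux AI's shipped implementation:

```python
import re

# Illustrative patterns only; a real deployment needs far broader coverage.
PATTERNS = {
    "CPR": re.compile(r"\b\d{6}-\d{4}\b"),               # Danish national ID
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scrub(prompt: str) -> str:
    """Replace PII with typed placeholders before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub("Contact jens@example.dk, CPR 010190-1234"))
# -> Contact [EMAIL], CPR [CPR]
```

Because the substitution runs before the LLM call, the guarantee holds regardless of what the model is prompted to do.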
See demo →
Cross-model · Adversarial audit
15,000–25,000 DKK · Governance patch as deliverable
Four AI agents audit your session logs in two phases. Phase 1: PII-Attacker, EU-Validator, and Policy-Patcher scan in parallel via cross-model adversarial review. Phase 2: the Skeptic agent challenges each finding adversarially. Output is a governance-patched.yaml with concrete fixes — not a report. You review the diff like a pull request.
FULL DELIVERY
Full delivery · Governance
From 85,000 DKK · Incl. 3 months monitoring (10,000 DKK/mo)
The complete governance delivery. Agent Containment Wrapper, OPA rules, Honeypot stress test, Gateway Scrubber, Audit Ledger, DARMA mapping, and Governance Dashboard. Includes 3 months post-delivery monitoring — monthly semantic drift scan to verify your setup holds. Delivered in your environment — whether you use GitHub, Azure DevOps, or just a server.
See product →
Runtime governance
Immutable evidence
Append-only log that documents what your AI system actually did. System prompt hashing, drift detection and replay capability.
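As a sketch of the underlying technique, a hash-chained append-only log makes tampering detectable: each entry commits to the hash of the previous one. The class and field names below are hypothetical, not the shipped Audit Ledger:

```python
import hashlib
import json

class AuditLedger:
    """Append-only log: each entry carries the hash of the previous one,
    so any later modification breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> str:
        record = {"prev": self._last_hash, "event": event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry fails the check."""
        prev = "0" * 64
        for rec in self.entries:
            body = {"prev": rec["prev"], "event": rec["event"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The same chaining is what makes replay possible: the log fixes both the order and the content of every recorded action.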
See product →
Runtime · Multi-agent governance
Local inference · Fail-closed
Every handoff between your agents is validated against policy before data flows through. Fail-closed: unauthorised transfers are blocked. Local inference — your data never leaves your network.
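As a sketch, fail-closed validation can be as simple as an explicit allowlist: any handoff not listed raises, so unknown routes are blocked by default. The agent names and policy below are hypothetical, not Flux AI's implementation:

```python
# Hypothetical policy: only these agent-to-agent handoffs are permitted.
ALLOWED_HANDOFFS = {
    ("planner", "researcher"),
    ("researcher", "writer"),
}

def validate_handoff(source: str, target: str, payload: dict) -> dict:
    """Fail-closed: a handoff not explicitly allowed is blocked."""
    if (source, target) not in ALLOWED_HANDOFFS:
        raise PermissionError(f"handoff {source} -> {target} not in policy")
    return payload

# An allowed route passes the payload through unchanged.
validate_handoff("planner", "researcher", {"task": "find sources"})
```

The design point is the default: forgetting to list a route blocks it, rather than silently allowing it.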
See product →
Governance Framework
Five layers of governance for AI agents. The framework the market hasn't built.
Case Study · Substack
When two frontier AI models were asked to secure files, both invented security infrastructure that does not exist. Read the full case on Substack.

About Me
15 years in the Danish public sector — from sensitive personal data in child welfare to AI implementation in Fredensborg Municipality and KU Lighthouse. I know what compliance looks like when it has to work inside real organisations.
Good technology fails in organisations that lack governance. Access control, chains of responsibility, traceability. That is the layer most teams skip.
At KU Lighthouse I built Research Translator — 83% time savings on manual research work. At Fredensborg Municipality I implemented an AI invoicing solution in two months. Agent Shield scored 88/100 on my own system. I published the 12 errors it found.
Flux AI builds governance in as architecture. Deterministic policy rules decide what gets blocked and what gets through. Every session is logged immutably. I hold myself to that standard before I hold you to it.
WHAT I BELIEVE
AI governance is not advisory. It is infrastructure.
A prompt instruction is not access control. A system prompt is not a security policy.
If the architecture cannot deterministically prevent an unauthorised action, the system has no governance.
It has documented hope.
Book a call — find out where you stand.
I'm not sure where I stand
Start here. Pick what fits.
Free screening
30 min · No charge · No commitment
AI Governance Diagnostic
90 min · 7,500 DKK ex. VAT
Regulatory Scan
Self-service · Free
I suspect something is wrong
Honeypot Audit™
25,000 DKK · 5 business days
Agent Containment Review
15,000 DKK · Deterministic analysis
Swarm Auditor™
15,000–25,000 DKK · Cross-model adversarial audit
We need to act now
Full Governance Delivery
From 85,000 DKK · Scoped individually
All sessions are conducted remotely or on-site in Copenhagen.
Write to info@fluxai.dk...