Runtime Governance

Audit Ledger™

An immutable log that documents what your AI system actually did — not what it was configured to do. Evidence, not monitoring.

The problem

What most organisations have

  • Application logs showing requests and errors
  • Monitoring dashboards with latency and uptime
  • System prompts in a config file that can be changed
  • No documentation of what happened at 09:22

What a supervisory authority asks

  • What did the system decide on 14 February at 09:22?
  • Which rules were active at that time?
  • Can you prove the system would make the same decision today?
  • Who approved that configuration?

Monitoring answers "is the system running?" — Audit Ledger answers "can you prove what it did?"

What Audit Ledger captures

Each entry can stand alone without context. It can never be overwritten.

audit_ledger.jsonl (append-only)
{
  "session_id": "sess_example_003",
  "timestamp_iso": "2026-02-14T09:22:41Z",
  "operator": "org_operator",
  "system_prompt_hash": "sha256:a1b2c3d4e5f6...",
  "active_guardrails": [
    "pii_output_filter",
    "human_approval_gate",
    "data_access_logger"
  ],
  "verdict": "FAIL",
  "findings_count": 3,
  "model_used": "llm-provider",
  "tokens_input": 1847,
  "tokens_output": 612,
  "latency_ms": 2340
}

session_id + timestamp

When and which session

system_prompt_hash

SHA-256 of the active instructions — proof of configuration

active_guardrails

Which rules were active during the session

verdict + findings

What the system decided and what it found

model + tokens

Which model, input/output size

latency_ms

Response time — performance under governance

Audit Ledger in DARMA

Agent Shield finds gaps. Gateway Scrubber prevents data leaks. Audit Ledger documents that both worked. Without it, the other two are claims. With it, they are evidence.

Delegation: Config Snapshot
Authorization: System Prompt Hash
Runtime: Session Log
Model Integrity: Behavioral Baseline
Accountability: Immutable Ledger
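One common way to make an append-only file tamper-evident, so that "immutable" holds up to inspection, is to chain each entry's hash to the previous one. The record format below (`entry` plus `chain_hash`) is an illustrative assumption, not the product's actual layout.

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash the previous link together with a canonical form of the entry."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return "sha256:" + hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_chain(ledger_path: str):
    """Recompute the chain over the whole file.

    Returns None if intact, or the index of the first line whose stored
    chain hash no longer matches -- i.e. where tampering occurred.
    """
    prev = "sha256:genesis"
    with open(ledger_path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            record = json.loads(line)
            entry, stored = record["entry"], record["chain_hash"]
            if chain_hash(prev, entry) != stored:
                return i
            prev = stored
    return None
```

Editing any historical line changes its hash, which breaks every link after it, so rewriting the past is detectable even with file access.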

The full governance stack

Three layers covering before, during and after deployment.

Before deployment

Honeypot Audit™

Static analysis of your codebase. Finds gaps in delegation, access and traceability.

See product
During deployment

Gateway Scrubber™

Deterministic filtering of PII, credentials and sensitive data before they reach the LLM.

See demo
After deployment

Audit Ledger™

Immutable log with system prompt hashing, drift detection and replay capability.

You are here
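Drift detection of this kind can be sketched by recomputing the hash of the currently deployed system prompt and comparing it against the ledger; `detect_prompt_drift` below is a hypothetical helper for illustration, not the contents of `replay.py`.

```python
import hashlib
import json

def detect_prompt_drift(ledger_path: str, current_prompt: str) -> list:
    """Return the ledger entries recorded under a different configuration
    than the one currently deployed."""
    current_hash = "sha256:" + hashlib.sha256(
        current_prompt.encode("utf-8")).hexdigest()
    drifted = []
    with open(ledger_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            # A mismatched hash means the instructions changed at some point.
            if entry["system_prompt_hash"] != current_hash:
                drifted.append(entry)
    return drifted
```

An empty result is the replay guarantee in miniature: every logged session ran under exactly the instructions deployed today.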

Two deliverables

Audit Ledger is a layer in a unified governance delivery — not a standalone tool.

Honeypot Audit™

25,000 kr.

5 business days. Agent Shield scan of your codebase with report and recommendations.

  • Agent Shield static analysis
  • DARMA mapping of your setup
  • Report with prioritised recommendations

Governance Setup

From 85,000 kr.

8–10 business days. Everything in Honeypot Audit + Gateway Scrubber assessment + Audit Ledger configured for your system.

  • Agent Shield static analysis
  • Swarm Auditor runtime analysis
  • Gateway Scrubber assessment (architecture recommendation)
  • Audit Ledger configured for your agents
  • replay.py drift detection setup
  • DARMA mapping documented

Up to 3 agent endpoints and one LLM provider. Complexity beyond this is assessed in the initial session.

Next time a DPO, auditor or supervisory authority asks — you have the documentation ready.

Write to info@fluxai.dk

We will determine which delivery fits your setup.