Deterministic Security
See how a Gateway Scrubber removes personally identifiable data before it is sent to a language model. Deterministic — not probabilistic.
Everything runs in your browser. No data leaves your machine.
Click a step to see the detail
Masking happens in the user's browser. An attacker — or a malicious agent — can bypass it by talking directly to the API. The national ID still leaves your network.
Data still leaves your network unfiltered.
The scrubber sits in your backend proxy, before the API call to the LLM. The user has no access to the code. Data is sanitised on the server.
Data is filtered before it ever leaves your network.
A Gateway Scrubber filters three types of data before they leave your network: personal data, credentials and topics your AI must not address.
Automatic filters for national IDs, phone numbers, emails and names. Maps directly to GDPR Art. 9 and EU AI Act Art. 10.
Prevents users and agents from leaking API keys, passwords or tokens in the prompt. Blocks at the source.
Blocks specific topics your chatbot must never address — financial advice, medical diagnoses, internal project names.
Enter text containing personally identifiable data and watch the scrubber work in real time.
Enter text in the left field or click "Load sample"
DARMA is FluxAI's governance framework for agentic AI. Five layers: Delegation, Authorization, Runtime, Model Integrity, Accountability. Gateway Scrubber is not a standalone product — it is one layer in that architecture.
Personal data must be processed in a manner that ensures appropriate security. A Gateway Scrubber prevents PII from being sent to third parties.
Health data, biometric data and national IDs require heightened protection. The scrubber blocks them deterministically.
Data protection must be built into the system architecture. A system prompt is an instruction to the model — it can be bypassed and leaves no log. A Gateway Scrubber is an infrastructure control that runs before the API call. Art. 25 requires the latter.
Art. 32 requires technical measures proportionate to the risk. When an agent sends free-text prompts to an external LLM, the risk is data disclosure. The measure is a proxy that sanitises before sending — not an instruction asking the model to refrain.
Art. 10 requires data governance over what is sent into the AI system. A Gateway Scrubber enforces this at the source — it sanitises input deterministically before it becomes part of the model's context.
Art. 15 requires robustness against attempts to compromise the system. A system prompt can be manipulated via prompt injection. A backend scrubber operates outside the model's context window — it is not accessible to attacks via input.
A Gateway Scrubber requires backend access, not a browser demo. Book a session and get an assessment of where it fits in your architecture.
Write to info@fluxai.dkNot sure if it's relevant? Take the quiz first →