The AI Data Control Layer. We secure every AI interaction while enabling full model access. One interface. Full control. Every model. Plug & Play in 30 min.
10+ AI deployments · 100% GDPR compliant · 5.5 months avg. ROI
The AI dilemma
75% of knowledge workers already use AI. 8.5–13% of all prompts contain sensitive data. Legacy DLP and firewalls were not built for natural language.
You block ChatGPT and other models for all employees. Sounds safe — but your staff use them anyway, on private devices, through workarounds. Shadow IT grows, and the company falls behind.
You let employees use ChatGPT or other models without controls. Teams get more productive — but customer data, contracts, and HR records flow unchecked into US clouds. No audit, no traceability. A GDPR violation can cost up to 4% of your annual revenue.
Both options are bad.
There is a third way.
Not theory. Real incidents. Real companies.
An unsecured database of the popular AI app "Chat & Ask AI" exposed 300 million chat messages — including company internals, personal data, and confidential business information.
Confidential business data flows into AI platforms at scale and without controls. Most companies have no idea what their teams are typing in.
A hacker offered 50 million supposed Europcar records for sale. Analysis revealed the data had been generated by ChatGPT. AI itself becomes the weapon.
Questions are the new files.
Every prompt is a potential data leak.
Brane AIF makes sure sensitive data never leaves your company.
The third way
Brane AIF is an AI Interaction Firewall — on-premise, auditable, GDPR compliant. Employees work with the same models they'd use in the cloud. The data doesn't leave the company.
Brane AIF sits between your employees and AI. Safe Prompt analyzes every request semantically, protects sensitive data automatically, and routes intelligently — giving your teams unified access to all top AI models through a single interface, without the security risk.
One interface for every model. Like ChatGPT — just secure.
Sensitive? Process locally. Non-critical? Best cloud model, anonymized.
Full AI power. Data stays protected. Every request, every routing decision, and every data access logged.
How can I help?
This is what your employees see.
Safe Prompt protects every prompt.
"Write an email to Mr. Müller asking him to send the contract to 0171-555-3842. Budget: €240,000"
Detects PII in real time. When in doubt, always local.
Anonymized to the cloud, re-inserted in the response.
Every decision logged. GDPR export with one click.
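In code, the anonymize-then-rehydrate flow looks roughly like this. This is a minimal sketch using illustrative regex patterns and placeholder names; the actual Safe Prompt uses semantic detection, not simple regexes.

```python
import re

# Illustrative PII patterns only; real detection is semantic, not regex-based.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{4}-\d{3}-\d{4}\b"),
    "AMOUNT": re.compile(r"€[\d.,]+"),
}

def anonymize(prompt: str):
    """Replace detected PII with numbered placeholders; keep a mapping."""
    mapping = {}
    counter = 0
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(prompt):
            counter += 1
            placeholder = f"<{label}_{counter}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder, 1)
    return prompt, mapping

def rehydrate(response: str, mapping: dict) -> str:
    """Re-insert the original values into the model's response."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

safe, mapping = anonymize("Send the contract to 0171-555-3842. Budget: €240,000")
assert "0171-555-3842" not in safe          # PII masked before leaving
assert "0171-555-3842" in rehydrate(safe, mapping)  # restored in the answer
```

The cloud model only ever sees the placeholders; the mapping never leaves the Brane node.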
The gap
Every layer validates something. None of them validate the conversation between humans and AI models.
Category definition
Security controls that govern human-AI interactions by semantically inspecting prompts and enforcing data protection policies — before information reaches the model.
Blocks and redacts inline — before data leaves the company.
Understands intent, not just keywords.
Unified policy across all LLMs.
Bidirectional prompt and response flow.
Architecture
Every prompt goes through four stages before it reaches a model. All on-premise. All in real time.
Analyze prompt content for sensitive data.
Semantic classification of user intent.
Real-time enforcement of data rules and compliance.
Forwards to the right model endpoint based on sensitivity.
Data never leaves your perimeter. Processed on-premise.
Best performance, lowest cost. Anonymized before transmission.
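The four stages above can be sketched as a simple pipeline. All function bodies below are trivial stand-ins for illustration; names, keyword lists, and routing labels are assumptions, not the actual Guardian Core implementation.

```python
from dataclasses import dataclass

# Stand-ins for the four stages; keyword checks are placeholders only.
def detect_sensitive_data(prompt: str) -> bool:
    return any(k in prompt.lower() for k in ("contract", "salary", "iban"))

def classify_intent(prompt: str) -> str:
    return "drafting" if "email" in prompt.lower() else "general"

def enforce_policy(intent: str, sensitive: bool) -> bool:
    return True  # placeholder: real policies would consult compliance rules

@dataclass
class Verdict:
    sensitive: bool
    intent: str
    allowed: bool
    endpoint: str

def process(prompt: str) -> Verdict:
    sensitive = detect_sensitive_data(prompt)    # Stage 1: content analysis
    intent = classify_intent(prompt)             # Stage 2: semantic classification
    allowed = enforce_policy(intent, sensitive)  # Stage 3: policy enforcement
    endpoint = "local-model" if sensitive else "cloud-model"  # Stage 4: routing
    return Verdict(sensitive, intent, allowed, endpoint)
```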
The honest comparison
Cloud AI, an in-house build, or Brane AIF: every option has strengths. What matters is where your priorities lie.
ChatGPT Enterprise, Copilot, etc.
GPU servers, ML team, in-house build.
On-premise. Managed. All models.
Why not DLP?
DLP was built for rigid file patterns. GenAI prompts are fluid, context-dependent, and semantic. A fundamentally different problem needs a fundamentally different solution.
"GenAI without interaction security is not innovation — it's liability."
Open-source AI
The local models in Brane AIF are not static. They continuously get more powerful — and your system receives every update automatically. No in-house build, no deployment, no stress.

Source: Artificial Analysis, Apr 2026
The system
Brane AIF ships as a turnkey system — with all models, Safe Prompt, and Audit Trail pre-installed. Plug in, power on, go.


Models run directly on the device. No cloud, no dependency.
Why Brane AIF
You keep full control
Open standards, open APIs. Switch between AI providers at any time: no ecosystem lock-in, no dependency. Your data stays in your network, under your control.
Local processing instead of cloud fees
Sensitive requests run locally at no per-request cost. Only non-critical prompts go anonymized to the cloud. The more you process locally, the less you pay: predictable fixed costs instead of variable token bills.
Zero outbound connection — if you want
All AI models run fully local, without internet. Not a single byte leaves your network — guaranteed. Ideal for KRITIS, government, and high-security environments. Cloud connectivity is optional and can be enabled at any time.
Continuous updates — no downtime
New AI models, Safe Prompt improvements, and security patches arrive automatically. For air-gapped systems, updates are delivered via USB. The hardware stays; the software grows. And all at a fraction of the energy consumption of cloud AI.

"Every prompt is a potential data leak. Firewalls protect networks, DLP protects files — but nobody protects what employees type into AI chatbots. That's exactly why we built Brane AIF."
Every cloud AI request consumes energy for compute, cooling, and networking. Data centers need 30–60% extra energy for infrastructure (PUE). Brane AIF routes every request intelligently — from fully local to hybrid with cloud. You set the mix, Guardian Core enforces it.
4 modes: Local-Only, Local-Preferred, Cloud-Preferred, Cloud-Only — Guardian Core routes every request per your policy.
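A minimal sketch of how the four modes could resolve a destination per request. The mode names come from the text; the decision logic is an assumption, not Guardian Core's actual policy engine, and cloud-bound prompts are assumed to be anonymized first, as described above.

```python
# Illustrative routing decision; logic is an assumption, not the real engine.
def route(mode: str, sensitive: bool, local_available: bool = True) -> str:
    if mode == "Local-Only":
        return "local"                 # nothing ever leaves the network
    if mode == "Cloud-Only":
        return "cloud"                 # everything goes out, anonymized
    if sensitive:
        return "local"                 # hybrid modes: sensitive stays local
    if mode == "Local-Preferred":
        return "local" if local_available else "cloud"
    return "cloud"                     # Cloud-Preferred: non-sensitive to cloud
```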
Sources: IEA — Energy and AI (2026) · PUE data: Uptime Institute (2025) · Hardware: NVIDIA
Operations
Brane Admin runs directly on the device. You see latencies, model health, active sessions, and the full audit log in real time — no cloud round-trip.
Economics
Cloud AI costs scale linearly with employee count. Brane AIF inverts the unit economics.
Per-seat, per-token pricing. Costs scale linearly with adoption. The more employees use AI, the higher the bill.
Sensitive requests run locally at near-zero marginal cost. Only non-critical prompts go anonymized to the cloud. Predictable cost base instead of variable token exposure.
Controlled AI with predictable costs vs. cloud-only. The larger the rollout, the bigger the advantage.
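The inversion can be made concrete with a toy cost model. Every number and function name below is a placeholder for illustration, not Brane pricing.

```python
# Toy model of the unit economics described above; all figures are placeholders.
def monthly_cost_cloud(users: int, price_per_seat: float) -> float:
    """Per-seat cloud AI: cost scales linearly with adoption."""
    return users * price_per_seat

def monthly_cost_hybrid(users: int, fixed: float, cloud_share: float,
                        price_per_seat: float) -> float:
    """Fixed appliance cost plus cloud fees for only the non-sensitive share."""
    return fixed + users * cloud_share * price_per_seat

# With illustrative figures, the gap widens as the rollout grows:
# monthly_cost_cloud(1000, 30.0)                 -> 30000.0
# monthly_cost_hybrid(1000, 1000.0, 0.2, 30.0)   -> 7000.0
```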
"The larger the rollout, the bigger the advantage. This is infrastructure economics, not SaaS economics."
Roadmap
Brane AIF grows with your requirements. Firewall today, your complete AI infrastructure tomorrow.
Guardian Core with 42 routing decisions, Presidio-based PII detection, Smart Rehydration. RAG with reranker and citations. Image, video, and document generation. M365 integration. Audit trail and EU/US data residency.
Connector ecosystem, cost alerts and anomaly detection, multi-site management, fleet agent for box orchestration, ISO 27001 and SOC 2 certification, industry policy templates.
Autonomous multi-step agents inside the customer network, ERP chatbot (SAP/Boxsoft), model and agent marketplace, white-label for partners. US compliance layer (SOC 2 Type II, CCPA).
FAQ
ChatGPT Enterprise and Copilot run on servers in the US or an EU cloud. Brane AIF runs on your hardware, in your network. No data leaves the building. We anonymize automatically via Smart Rehydration before anything reaches a cloud — including Microsoft.
No — and you don't have to choose. Brane AIF uses all the major models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro) AND powerful local models like DeepSeek V3.2, GLM-4.7, and Qwen3. Safe Prompt routes automatically: sensitive data locally, non-critical anonymized to the cloud.
Under 30 minutes. You receive a turnkey system. Plug in, power on, start. No Kubernetes, no DevOps team required.
Brane AIF is a compact, turnkey system — roughly the size of a small desktop PC. No server room needed. A network port and a power outlet are enough.
No — no vendor lock-in. Brane AIF uses open standards. You can switch between cloud models (OpenAI, Claude, Gemini) at any time or go fully local open-source.
DeepSeek V3.2, GLM-4.7, Qwen3, the Llama-4 family for language, Mistral-Embed for retrieval. Customer-specific models on request. New models are delivered via signed updates.
Ethernet port, REST API compatible with the OpenAI format, Active Directory integration, optionally an on-prem identity provider.
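Because the REST API follows the OpenAI format, an integration can assemble a standard chat-completion request body and POST it to the Brane node instead of the cloud. The model alias and endpoint path below are placeholders, not documented Brane values.

```python
import json

# Assemble an OpenAI-format chat-completion body; "local-default" is a
# placeholder model alias, not a documented Brane identifier.
def build_chat_request(prompt: str) -> dict:
    return {
        "model": "local-default",
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_chat_request("Summarize the Q3 report."))
# POST `body` to e.g. http://<brane-node>/v1/chat/completions
# with your existing OpenAI-compatible HTTP client.
```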
All requests are versioned on local disk with masked PII. Logs can be exported to SIEM (syslog, JSON).
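One possible shape for an exported JSON record follows. Field names are illustrative, derived from what the text says is logged (routing decision, masked PII); they are not Brane AIF's actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-record shape; field names are assumptions for this sketch.
def audit_record(user: str, route: str, pii_masked: int) -> str:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "route": route,            # "local" or "cloud"
        "pii_masked": pii_masked,  # count of redacted PII entities
    }
    return json.dumps(record)      # one object per line: SIEM-friendly
```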
GDPR, EU AI Act risk-class readiness, BSI-compliant encryption, TISAX-ready, ISO 27001. Data processing stays on the Brane node.
Standard SLA with 24h response time. Premium SLA with 4h response and a replacement node shipped within Germany.
We demonstrate the device remotely or on-site and discuss integration with your existing infrastructure.