
Incident Response

Four severity levels, defined response targets, and kill switches for every autonomous component.

Last reviewed 2026-04 · Engineering · Owned by AI Safety Officer

On this page

  1. Overview
  2. Severity ladder
  3. Detection signals
  4. Kill switches
  5. Multi-provider failover
  6. Operator access
  7. Customer comms
  8. Related documents

Overview

Votriz maintains a documented IR plan covering both general security incidents and AI-specific scenarios. Every action is logged in the immutable security_audit_log with full traceability. The 72-hour breach-notification clock for personal data (GDPR Article 33) starts at the moment Votriz becomes “aware” of the breach — that's the trigger documented in our DPA.

Severity ladder

P1 — Critical

Cross-tenant data exposure via AI output, PII to an unauthorized third party, lead-generator returning rows for the wrong org.

| Step | Target |
| --- | --- |
| AI Safety Officer paged | 15 min |
| Affected component disabled | 15 min |
| Scope assessment via audit log | 1 h |
| Affected customers notified | 4 h |
| Root cause + fix deployed | 24 h |
| Public post-mortem published | 72 h |

P2 — High

Prompt injection that produces harmful content (caught at the approval gate), system-prompt extraction, brand-monitor safety guard bypass.

P2 doesn't require customer notification by default — the human approval gate caught the problem. If content went out, escalate to P1.

P3 — Medium

Quality regression: Brand-DNA scores drop, user-reported issue rate spikes, hallucinated facts.

P4 — Low

AI provider timeout, rate limiting, latency outside SLA.

Detection signals

Five signals page the on-call engineer:

Kill switches

Three layers, each progressively more drastic.

1. Per-agent disable (preferred)

Comment out the agent's cron registration in votriz-worker/main.py and rebuild. In-flight jobs finish; the agent stops being scheduled.

cd ~/votriz/docker
docker compose -f docker-compose.votriz.yml up -d --build votriz-worker
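The registration being commented out might look like this (a hypothetical sketch — the agent names, cron expressions, and the layout of votriz-worker/main.py are assumptions for illustration):

```python
# Hypothetical shape of the agent schedule in votriz-worker/main.py.
# Commenting an entry out removes the agent from scheduling at the next
# rebuild; jobs already in flight are unaffected.
AGENT_SCHEDULE = {
    "lead_generator": "*/15 * * * *",   # every 15 minutes
    "brand_monitor":  "0 * * * *",      # hourly
    # "content_writer": "0 6 * * *",    # disabled during incident response
}

def register_agents(scheduler) -> list[str]:
    """Register every uncommented agent with the cron scheduler."""
    for name, cron_expr in AGENT_SCHEDULE.items():
        scheduler.add_job(name, cron_expr)
    return sorted(AGENT_SCHEDULE)
```

Because disabling is a code change plus rebuild, it goes through the normal deployment pipeline and leaves a traceable history of who disabled what.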

2. Global AI disable

Set VOTRIZ_AI_DISABLED=true in the worker's environment (the environment block of docker-compose.votriz.yml — a variable set via docker exec would not survive the restart), then restart the worker:

docker restart votriz-worker

Every agent that calls services/llm.py short-circuits with a "degraded mode" log line and skips its tick. Manual content + publishing flows still work.
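The short-circuit could be sketched like this (hypothetical — the flag name comes from the command above, but the function names and log format in services/llm.py are assumptions):

```python
import os

def ai_disabled() -> bool:
    # Worker-wide kill flag, set via the container environment.
    return os.environ.get("VOTRIZ_AI_DISABLED", "").lower() in ("1", "true")

def agent_tick(agent_name: str, do_work):
    """Run one scheduled tick, skipping LLM work when the kill flag is set."""
    if ai_disabled():
        # The "degraded mode" log line; the agent's tick is skipped entirely.
        print(f"degraded mode: {agent_name} skipped (VOTRIZ_AI_DISABLED set)")
        return None
    return do_work()
```

Manual flows never pass through this gate, which is why content editing and publishing keep working while the flag is set.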

3. Network block (nuclear)

Last resort — only when confirmed exfiltration is in progress and we need to physically prevent more LLM round-trips:

sudo iptables -A OUTPUT -d api.anthropic.com -j DROP
sudo iptables -A OUTPUT -d api.openai.com -j DROP

The rule is timestamped in the incident channel and removed when the issue is resolved.

Multi-provider failover

If the primary AI provider (Anthropic) is unavailable:
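A minimal failover sketch (hypothetical — the provider order, exception type, and per-provider callables are illustrative assumptions, not Votriz's actual services/llm.py):

```python
class ProviderUnavailable(Exception):
    """Raised when a provider times out, rate-limits, or returns a 5xx."""

def complete(prompt: str, providers: dict) -> str:
    """Try each provider in order; fall through to the next on failure."""
    last_err = None
    for name, call in providers.items():
        try:
            return call(prompt)
        except ProviderUnavailable as err:
            last_err = err  # remember the failure, try the next provider
    raise last_err or ProviderUnavailable("no providers configured")
```

Each fallback attempt would presumably land in security_audit_log like any other AI action, so a P4 provider outage stays traceable after the fact.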

Operator access

| Access type | Requirement |
| --- | --- |
| Production shell | Encrypted VPN tunnel; tagged identity required |
| Database direct | Encrypted VPN tunnel + isolated container network |
| Container management | Encrypted VPN tunnel + container CLI |
| Deployment | Automated deployment pipeline; no manual production shell |

All operator access is logged. No customer data is accessed without a documented reason that lands in the audit log.

Customer comms

Customer notification template (P1)

“We identified a security incident affecting our systems on [DATE]. We immediately disabled the affected component and are conducting a thorough investigation. Your data remains isolated and protected. We will provide a full incident report within 72 hours. For questions, contact [email protected].”

Status page

votriz.com/status — 30-second polling, per-service health indicators, push updates on subscribe.

Related documents

Questions or a custom security review?

Enterprise customers receive dedicated security reviews and direct access to our security team. Reach us anytime at [email protected].

Talk to security →