From Detection to Board Report — a complete command framework for organisations that operate AI and need to lead when it fails. Built on the V-AIM methodology. Practitioner-built, executive-ready.
When a server goes down, the failure is visible and bounded. When an AI system fails, it often keeps running, silently producing wrong outputs at scale, with nobody noticing until the damage has compounded.
The blast radius of an AI incident is different. The accountability questions are different. The regulatory obligations are different. And the reputational consequences move faster.
Most organisations have IT incident response playbooks. Almost none have AI-specific ones. That is the gap this playbook closes.
This isn't a theoretical framework. Every section has been built from real incident patterns and practical CAIO experience, designed to be used under pressure, not studied in advance.
Five-level V-SEV severity scale — V1 (Irregularity) through V5 (Systemic Trust Event) — with real-world examples, response SLAs, escalation paths, and the R3AI incident categories: Reliability, Resilience, and Responsibility.
Six command roles defined with full responsibilities and decision authority: Executive Sponsor, Incident Lead, Technical Containment Lead, Legal & Compliance Lead, Communications Lead, and Business Owner.
Minute-by-minute action map for Hours 0–24: how to confirm, declare, contain, escalate, and begin investigation inside the V-AIM six-stage process without making decisions you'll regret.
When EU AI Act, GDPR, FCA, and sector-specific notification obligations apply, with deadlines and required actions. Covers the 72-hour GDPR window and AI Act Article 73 serious incident reporting.
Three rules, four stakeholder sequences, and the most common errors that destroy trust. What to say, what not to say, and in what order — for internal, customer, regulatory, and media audiences.
The twelve readiness prerequisites that must be in place before an incident occurs. These are the governance conditions that determine whether you respond or simply react — your pre-incident preparation checklist.
The structured post-incident review framework: Trust, Root Cause, Accountability, Correction, and Evolution. Applied to every V-AIM stage so findings become governance improvements, not just action plans nobody completes.
A comprehensive guide to what good AI incident performance looks like at leadership level: KPIs, board reporting standards, and the metrics that demonstrate governance maturity — not just technical recovery.
Pre-approve these with your Legal Counsel before any incident occurs. Having them ready saves hours of drafting under pressure, and creates a legally defensible record from the first moment of awareness.
Each case study applies the playbook to a real-world AI incident — not to document what went wrong, but to show what an effective response would have looked like at every stage.
A finance employee was deceived by a deepfake video call impersonating the CFO, resulting in $25M in fraudulent transfers. The playbook maps the V-AIM response from detection through governance overhaul, and identifies the single pre-incident control — out-of-band verification — that would have prevented it entirely.
McKinsey's internal AI assistant "Lilli" produced inaccurate client-facing outputs, raising questions about governance oversight, validation processes, and how a major consulting firm manages AI quality at enterprise scale. A high-profile case study in AI reliability failure and the reputational consequences of inadequate pre-deployment governance.
A credit approval AI found to be replicating historical lending bias six months post-deployment. The V-AIM response — system suspension, retrospective audit, proactive FCA engagement — shows how organisations can handle bias incidents in a way that demonstrates governance responsibility rather than regulatory vulnerability.
The AI Incident Command Playbook delivers the V-AIM framework, templates, and case studies your leadership team needs — for less than the cost of an hour of consulting time.
PDF, delivered instantly via Gumroad on purchase. The six templates are embedded in the PDF and designed to be printed or filled digitally. A Word/Google Doc version of the templates is available on request at no extra charge — email james@velinor.io after purchase.
V-AIM (Velinor AI Incident Management) is the structured six-stage methodology at the core of this playbook: Prepare → Detect → Contain → Govern → Recover → Learn. It provides a consistent command language so every member of your response team — technical and non-technical — is working from the same model.
Both. The regulatory references cover the EU AI Act, UK GDPR, and FCA context. The governance framework and templates are jurisdiction-agnostic — they apply to any organisation operating AI anywhere.
Yes, if you operate AI. Standard IT incident response processes do not account for the characteristics that make AI incidents different — silent detection, accumulating blast radius, ambiguous accountability, and AI-specific regulatory obligations. The V-AIM framework sits above your IT runbooks, not in place of them.
That's exactly how it's designed to be used. The 12 Non-Negotiables section is structured for pre-incident preparation, the appendix templates include a note on pre-approval, and the regulatory section supports a legal review conversation before anything goes wrong.
Yes. To deploy this across a leadership team or include it in an internal governance programme, get in touch at james@velinor.io.
The playbook is designed to be self-sufficient. If you're working through an active incident or want James to review your organisation's incident command posture, a CAIO Discovery Call is the right starting point.