From Detection to Board Report — a step-by-step framework for organisations that operate AI and need to respond when it fails. Practitioner-built, executive-ready.
When a server goes down, the failure is visible and bounded. When an AI system fails, it often keeps running — silently producing wrong outputs at scale, with nobody noticing until the damage has compounded.
The blast radius of an AI incident is different. The accountability questions are different. The regulatory obligations are different. And the reputational consequences move faster.
Most organisations have IT incident response playbooks. Almost none have AI-specific ones. That is the gap this playbook closes.
This isn't a theoretical framework. Every section has been built from real incident patterns and practical CAIO experience — designed to be used under pressure, not studied in advance.
SEV-1 to SEV-4 definitions with examples, response SLAs, and escalation paths — so everyone on the team is working from the same understanding of urgency.
Six roles defined with responsibilities and typical ownership — Incident Commander, Technical Lead, Legal Counsel, Communications Lead, Business Impact Lead, Board Liaison.
Step-by-step actions for the first 24 hours: how to confirm, declare, contain, escalate, and begin investigation without making decisions you'll regret.
When EU AI Act, GDPR, FCA, and sector-specific notification obligations apply — with deadlines and required actions. Built to help Legal Counsel move fast.
Three rules, four stakeholder sequences, and the most common errors that destroy trust. What to say, what not to say, and in what order — for internal, customer, regulatory, and media audiences.
Eight non-negotiable gates that must be passed before restoring an AI system — so you don't restart a broken system because of operational pressure.
The Five Whys methodology applied to AI failure patterns. Root cause categories, facilitation structure, and how to turn findings into governance improvements — not just action plans nobody completes.
What good board AI incident reporting looks like, what boards should never receive, and the board report template that delivers the right level of information to non-technical directors.
Pre-approve these with your Legal Counsel before any incident occurs. Having them ready saves hours of drafting under pressure — and creates a legally defensible record from the first moment of awareness.
Each case study applies the playbook to a real-world AI incident — not to document what went wrong, but to show what an effective response would have looked like at every stage.
An AI-generated synthetic media attack impersonated senior leadership on a live video call, resulting in $77M in fraudulent transfers. The playbook maps the response from detection to governance overhaul — and identifies the single governance control that would have prevented it.
A single individual filed 180 fraudulent claims through an AI identity system by exploiting the fact that it assessed each claim in isolation. The case illustrates how AI systems can be "working correctly" while enabling fraud at scale — and what the cross-claim detection failure means for governance design.
A credit approval AI was discovered to be replicating historical lending bias six months post-deployment. The response — system suspension, retrospective audit, proactive FCA engagement — shows how organisations can handle bias incidents in a way that signals governance responsibility rather than regulatory vulnerability.
The AI Incident Response Playbook delivers the framework, templates, and case studies your leadership team needs — for less than the cost of an hour of consulting time.
PDF, delivered instantly via Gumroad on purchase. The six templates are embedded in the PDF and designed to be printed or filled digitally. A Word/Google Doc version of the templates is available on request at no extra charge — email james@velinor.io after purchase.
Both. The regulatory references cover the EU AI Act, UK GDPR, and FCA context. The governance framework and templates are jurisdiction-agnostic — they apply to any organisation operating AI anywhere.
Yes, if you operate AI. Standard IT incident response processes do not account for the characteristics that make AI incidents different — silent failure modes, an accumulating blast radius, ambiguous accountability, and AI-specific regulatory obligations. This playbook sits above your IT runbooks, not in place of them.
That's exactly how it's designed to be used. The appendix templates include a note on pre-approval, and the regulatory section is structured to support a legal review conversation before anything goes wrong.
Yes — if you'd like to deploy this across a leadership team or include it in an internal governance programme, get in touch at james@velinor.io.
The playbook is designed to be self-sufficient. If you're working through an active incident or want James to review your organisation's incident response posture, a CAIO Discovery Call is the right starting point.