EU AI Act now enforcing. Is your organisation ready for an AI incident?  Get the Playbook →
Fractional CAIO · Executive Playbook Series · Vol. 1

The AI Incident Response Playbook

From Detection to Board Report — a step-by-step framework for organisations that operate AI and need to respond when it fails. Practitioner-built, executive-ready.

30-Page Playbook · 6 Editable Templates · 3 Case Studies · EU AI Act Aligned · Instant Download
Get the Playbook — £49 See what's inside ↓
One-time purchase
£49
Instant PDF download
Buy Now →
Secure checkout via Gumroad
PDF + templates delivered instantly
The Problem

AI incidents are not IT incidents. Most organisations are treating them like they are.

When a server goes down, the failure is visible and bounded. When an AI system fails, it often keeps running — silently producing wrong outputs at scale, with nobody noticing until the damage has compounded.

The blast radius of an AI incident is different. The accountability questions are different. The regulatory obligations are different. And the reputational consequences move faster.

Most organisations have IT incident response playbooks. Almost none have AI-specific ones. That is the gap this playbook closes.

The Stakes

EU AI Act penalties are now real

Since February 2025, the EU AI Act's first obligations have been enforceable, with penalties of up to €35M or 7% of global turnover for the most serious violations — including failures in how incidents are handled. How you respond is now as regulated as what you deploy.

The Gap

Only 29% of organisations have comprehensive AI governance plans

According to Diligent Institute's 2025 Business Risk Index, 60% of leaders now cite technology as their top risk concern — but fewer than 1 in 3 have a plan for when things go wrong.

The Cost

$77 million lost in a single deepfake incident

A government organisation lost $77M because there was no out-of-band verification protocol for AI-mediated communications. A governance gap — not a technology failure.

What's Inside

Everything your leadership team needs, when it matters most.

This isn't a theoretical framework. Every section has been built from real incident patterns and practical CAIO experience — designed to be used under pressure, not studied in advance.

🔴

AI Incident Classification Matrix

SEV-1 to SEV-4 definitions with examples, response SLAs, and escalation paths — so everyone on the team is working from the same understanding of urgency.

👥

Response Team Roles

Six roles defined with responsibilities and typical ownership — Incident Commander, Technical Lead, Legal Counsel, Communications Lead, Business Impact Lead, Board Liaison.

⏱

Hours 0–24 Framework

Step-by-step actions for the first 24 hours: how to confirm, declare, contain, escalate, and begin investigation without making decisions you'll regret.

📋

Regulatory Notification Guide

When EU AI Act, GDPR, FCA, and sector-specific notification obligations apply — with deadlines and required actions. Built to help Legal Counsel move fast.

📢

Communication Playbook

Three rules, four stakeholder sequences, and the most common errors that destroy trust. What to say, what not to say, and in what order — for internal, customer, regulatory, and media audiences.

🔄

Recovery Criteria Checklist

Eight non-negotiable gates that must be passed before restoring an AI system — so you don't restart a broken system because of operational pressure.

🔎

Post-Incident Review Framework

The Five Whys methodology applied to AI failure patterns. Root cause categories, facilitation structure, and how to turn findings into governance improvements — not just action plans nobody completes.

📊

Board Reporting Guide

What good board AI incident reporting looks like, what boards should never receive, and the board report template that delivers the right level of information to non-technical directors.

6 Editable Templates

Ready to adapt. Ready to use in hours.

Pre-approve these with your Legal Counsel before any incident occurs. Having them ready saves hours of drafting under pressure — and creates a legally defensible record from the first moment of awareness.

Case Studies

Three real incidents. Applied to the framework.

Each case study applies the playbook to a real-world AI security incident — not to document what went wrong, but to show what an effective response would have looked like at every stage.

Case Study 01 · SEV-1

The $77M Deepfake Fraud

An AI-generated synthetic media attack impersonated senior leadership on a live video call, resulting in $77M in fraudulent transfers. The playbook maps the response from detection to governance overhaul — and identifies the single governance control that would have prevented it.

Case Study 02 · SEV-2

Identity Verification Bypass — $3.4M

A single individual filed 180 fraudulent claims through an AI identity system by exploiting its single-claim design. The case illustrates how AI systems can be "working correctly" while enabling fraud at scale — and what the cross-claim detection failure means for governance design.

Case Study 03 · SEV-2

Model Bias in a Production Lending System

A credit approval AI was discovered to be replicating historical lending bias six months post-deployment. The response — system suspension, retrospective audit, proactive FCA engagement — shows how an organisation can turn a bias incident into a demonstration of governance responsibility rather than regulatory vulnerability.

Who This Is For

Built for the people who have to govern AI incidents — not just fix them.

Chief AI Officers
The governance layer you need when an incident escalates past the technical team. The board reporting and regulatory sections are written specifically for your context.

CTOs & Technology Directors
Responsible for AI systems but without an AI-specific incident framework? This is the executive-layer playbook that sits above your technical runbooks.

Risk & Compliance Leaders
The regulatory mapping, notification obligations, and post-incident review framework are built for the governance and compliance lens — including EU AI Act Article 73 context.

CEOs & COOs
Accountable for AI incidents even if not technical? This playbook tells you exactly which decisions you need to make, and when — and what your leadership team should be doing.

Legal Counsel
The notification obligation tables, evidence preservation guidance, and pre-approval templates are written for legal teams that need to move fast under regulatory deadlines.

Communications & PR Leaders
The communication sequencing, the three rules, and the pre-approved templates give communications teams what they need before the first journalist calls.
About the Author
James A Lang — Fractional CAIO

James is a Fractional Chief AI Officer who works with organisations to build and govern AI with the discipline it demands. He has written extensively on AI security, governance, and the leadership questions that actually matter when AI is operating at scale. This playbook distils the frameworks James uses with client organisations — made available to any leadership team that needs them.

About James →  ·  Read the Ideas →

Get the Playbook

One purchase. No subscription.

The AI Incident Response Playbook delivers the framework, templates, and case studies your leadership team needs — for less than the cost of an hour of consulting time.

Executive Playbook · Vol. 1
£49
One-time payment · Instant PDF download
Buy Now — £49 →
✓ 30-page playbook   ✓ 6 editable templates
✓ 3 case studies   ✓ Lifetime access
Questions

Frequently asked

What format does the playbook come in?

PDF, delivered instantly via Gumroad on purchase. The six templates are embedded in the PDF and designed to be printed or filled digitally. A Word/Google Doc version of the templates is available on request at no extra charge — email james@velinor.io after purchase.

Is this relevant to UK organisations or EU organisations specifically?

Both. The regulatory references cover the EU AI Act, UK GDPR, and FCA context. The governance framework and templates are jurisdiction-agnostic — they apply to any organisation operating AI anywhere.

We already have IT incident response processes. Do we need this?

Yes, if you operate AI. Standard IT incident response processes do not account for the characteristics that make AI incidents different — silent failure modes, an accumulating blast radius, ambiguous accountability, and AI-specific regulatory obligations. This playbook sits above your IT runbooks, not in place of them.

Can I use this with my legal team before any incident occurs?

That's exactly how it's designed to be used. The appendix templates include a note on pre-approval, and the regulatory section is structured to support a legal review conversation before anything goes wrong.

Do you offer team or enterprise licensing?

Yes — if you'd like to deploy this across a leadership team or include it in an internal governance programme, get in touch at james@velinor.io.

Is there support available after purchase?

The playbook is designed to be self-sufficient. If you're working through an active incident or want James to review your organisation's incident response posture, a CAIO Discovery Call is the right starting point.

Need more than a playbook?

If your organisation needs hands-on AI governance leadership — incident readiness, governance architecture, or board-level AI advisory — that's the Fractional CAIO engagement.

Book a CAIO Discovery Call

30 minutes. No obligation. No pitch.
