Velinor Framework Library

Resources.

Six interlocking AI governance frameworks, built from direct experience operating AI at scale. Each is available as a free 2-pager or a full document, designed to be adapted to your organisation within minutes.

Best Value · Complete Library

Get All Six Frameworks, £49

The complete Velinor AI governance library: VTF, STRIKE, AIBlindspot™, Fractional CAIO Playbook, AI Incident Management, and AI Ethics. Full documents, editable templates, and board-ready frameworks. Instant download.

  • VTF, Velinor Trust Framework (36 metrics, 4 pillars)
  • STRIKE, Six-pillar AI adoption operating model
  • AIBlindspot™, 42 risks across 8 organisational categories
  • Fractional CAIO Playbook, 180-day engagement programme
  • AI Incident Management Playbook, classification & 6 templates
  • AI Ethics Framework, Executive decision-making principles
£49
All 6 full documents
Coming Soon · Join the Waitlist →

Launching soon. Join the waitlist for early-access pricing.

Six Frameworks

One integrated governance system.

Each framework stands alone, and each interlocks with the others. Together they form a complete operating model for organisations governing AI at executive level.

01 Incident Response

AI Incident Command

When AI fails, know exactly what to do.

Most organisations are deploying AI with no plan for when it goes wrong. No clear owner. No regulatory checklist. No way to explain it to the board. When the crisis hits, they're making it up as they go. This playbook, written for executive officers, changes that. 49 pages covering every type of AI failure, from a minor system error to a full public breakdown, with clear steps on who acts, what they do, and how fast. Regulatory deadlines, board communication guides, a 12-point readiness checklist, and six ready-to-use response documents included. When a crisis lands on your desk, you'll either have a plan or you won't.

Free preview available · £9.99 one-time purchase · Instant PDF download via Gumroad

What's inside
  • V-SEV Classification Matrix, V1 Irregularity → V5 Systemic Trust Event
  • V-AIM Command Roles, 6 roles with decision authority
  • The 12 Non-Negotiables readiness prerequisites
  • First 24 Hours timeline — Detect & Contain stages
  • Regulatory notification, EU AI Act Article 73, GDPR, FCA
  • AI TRACE post-incident review methodology
  • Leadership Metrics Guide & board reporting standards
  • 6 editable response templates (A–F)
Scope 49-page playbook · 6 editable templates
Audience CAIO, Risk, Legal & Board
02 Governance & Audit

VTF, Velinor Trust Framework

Measure and communicate AI trustworthiness at board level.

The VTF provides organisations with a structured, evidence-based framework for measuring AI trustworthiness across four pillars, twelve measurement areas, and thirty-six metrics. It produces a normalised Trust Score (0.00–1.00) that translates technical AI performance into language boards, regulators, and executives understand.

Free 2-pager · Full document coming soon · Join the waitlist for early access

What's inside
  • Trusted Purpose, alignment, compliance & societal benefit
  • Trusted Leadership, governance structures & accountability
  • Trusted Capabilities, reliability, security & transparency
  • Trusted Outcomes, measurable impact & harm prevention
  • VTF Trust Score, 36 metrics, normalised 0.00–1.00 scale
  • VTF–STRIKE alignment mapping
  • VTF–AIBlindspot™ risk overlay
  • Board-ready trust reporting structure
Pillars 4 Pillars · 12 Areas · 36 Metrics
Audience CAIO, Board, Risk & Legal
03 AI Adoption

STRIKE

A six-pillar operating model for responsible AI adoption.

STRIKE gives organisations a complete, structured approach to implementing AI, from initial strategy through to evaluation and continuous improvement. Six interlocking pillars cover every dimension of AI adoption, ensuring nothing critical is missed and accountability is clear at every stage.

Free 2-pager · Full document coming soon · Join the waitlist for early access

The six pillars
  • S, Strategic Alignment: AI anchored to business objectives
  • T, Technical Integration: infrastructure, data & architecture
  • R, Risk Awareness: governance, compliance & bias management
  • I, Implementation: deployment, change management & adoption
  • K, Knowledge Management: skills, learning & institutional memory
  • E, Evaluation & Feedback: measurement, iteration & ROI
Structure 6 Pillars · End-to-end model
Audience CAIO, CTO, CEO & Executive Team
04 Risk Intelligence

AIBlindspot™

42 AI risks organisations ignore, until it's too late.

AIBlindspot™ catalogues forty-two repeat-offender AI risks across eight organisational categories. The Expose → Align → Trust methodology moves organisations from risk awareness to governance confidence, surfacing what standard IT risk frameworks miss and mapping every risk against the R³AI standard: Reliable, Resilient, Responsible.

Free 2-pager · Full document coming soon · Join the waitlist for early access

8 risk categories
  • BUS, Business & Strategic risks
  • OPS, Operational & Process risks
  • HUM, Human & Cultural risks
  • GOV, Governance & Compliance risks
  • TEC, Technical & Infrastructure risks
  • DAT, Data Quality & Privacy risks
  • SEC, Security & Adversarial risks
  • ENV, Environmental & Societal risks
Scope 42 risks · Lifecycle × R³AI matrix
Method Expose → Align → Trust
05 Executive Playbook

Fractional CAIO Playbook

The 180-day AI leadership engagement, from diagnostic to institutionalisation.

The complete operating model for a Fractional Chief AI Officer engagement. Five structured phases take organisations from an AI Trust Diagnostic through governance foundations, capabilities assessment, outcomes verification, and full institutionalisation, building the internal structures to sustain AI governance without ongoing external support.

Free 2-pager · Full document coming soon · Join the waitlist for early access

5-phase programme
  • Phase 1, AI Trust Diagnostic (Days 1–30)
  • Phase 2, Governance & Purpose Foundations (Days 31–60)
  • Phase 3, Capabilities Assessment (Days 61–90)
  • Phase 4, Outcomes Verification (Days 91–150)
  • Phase 5, Institutionalisation (Days 151–180)
  • Board reporting system & KPI framework
  • R³AI integration throughout
Duration 180-day structured engagement
Audience CAIO, CEO & Board
06 Coming Soon

AI Ethics Framework

Principled AI decision-making for executive teams.

This framework will give executive teams a structured approach to ethical AI decision-making, covering fairness, transparency, accountability, human oversight, and societal impact. Built for the CAIO, Risk, and Legal functions that must make principled decisions under real-world commercial pressure.

In development. Register interest for early access.

Planned coverage
  • Fairness & bias, detection, testing and mitigation
  • Transparency & explainability standards
  • Accountability structures for AI decisions
  • Human oversight requirements and thresholds
  • Societal impact assessment framework
  • Regulatory alignment (EU AI Act, GDPR)
  • Ethics review board structure
Status In development, 2026
Audience CAIO, Risk, Legal & Board
Stay Informed

Be the first to know when these launch.

The full framework documents and bundle are coming soon. Subscribe below to get notified the moment they go live — including early-access pricing for subscribers.

No spam. Unsubscribe any time. Early subscribers get exclusive pricing.

Course · Starting May 2026 · Limited places

AI Incident Management:
From Response to Resilience

A structured online course for AI leaders, Risk, and Legal teams. Build your organisation's incident readiness from the ground up: classification, response protocols, regulatory obligations, board reporting, and a culture of continuous improvement. Based on the Velinor AI Incident Management framework with live case studies.

  • V-SEV classification and severity escalation
  • Building your V-AIM Command Team (6 roles)
  • Regulatory obligations, EU AI Act Article 73, GDPR & FCA
  • Board communication and stakeholder management
  • AI TRACE post-incident review and systemic improvement
  • Live case studies: McKinsey Lilli, $25M deepfake, and more

No payment now. We'll confirm your place and send joining details before May.

Course Price
£249
per person
  • Full online course access
  • All framework documents
  • 6 editable response templates
  • Live Q&A session with James
  • Certificate of completion
  • 12 months access to materials
Join Waiting List →

Starting May 2026 · Limited places

Course · Starting 2026 · Limited places

AIBlindspot™:
Expose, Align, Trust

The executive programme for AI leaders who need to surface hidden risks before they become Black Swan events. Learn to map 42 repeat-offender risks across 8 categories, align findings to board KPIs and regulatory frameworks, and build the evidence auditors need, using the AIBlindspot™ Cards and STRIKE governance model.

  • 42 AIBlindspot™ Cards across 8 risk categories
  • Expose → Align → Trust methodology in practice
  • R³AI pillars, Reliable, Resilient, Responsible
  • Regulatory mapping: EU AI Act, ISO 42001, NIST AI-RMF
  • Kill-switch drills and audit-ready evidence packs
  • AI Lifecycle risk matrix (Design → Develop → Deploy → Operate)

No payment now. We'll confirm your place and send joining details before the course opens.

Course Price
£249
per person
  • Full AIBlindspot™ framework
  • 42 AIBlindspot™ Cards
  • STRIKE governance model
  • R³AI pillar workbooks
  • Kill-switch drill templates
  • Live Q&A session with James
  • Certificate of completion
  • 12 months access to materials
Join Waiting List →

Starting 2026 · Limited places

Free Reading

Ideas & insights, no paywall.

James publishes frameworks, leadership questions, and commentary on what it actually means to govern AI at the executive level. All free. No sign-up required.

AI Leadership  ·  December 2025

AI in 2026 Will Expose Leadership, Not Technology

The era of experimentation is over. What matters now is discipline: clear governance, executive accountability, and measurable ROI.

Read Article →
AI Security  ·  April 2024

AI Security (AISec), A Threat Capability Matrix

A structured framework for separating fear from reality in AI security, mapping attack types against organisational readiness.

Read Article →
AI Trends  ·  December 2025

The Top 10 AI Trends of 2025: What Leaders Need to Know

From agentic systems to governance frameworks, the structural shifts that redefined what AI leadership means.

Read Article →
Browse All Articles →

Want bespoke governance tools
for your organisation?

If your AI programme needs more than a playbook (embedded AI leadership, governance design, or board-ready strategy), the CAIO Discovery Call is where to start.

Book Your Discovery Call

30 minutes. No obligation. No pitch.

Chat on WhatsApp