Velinor Framework Library

Resources.

Six interlocking AI governance frameworks — built from direct experience operating AI at scale. Each is available as a free 2-pager or a full document, designed to be adapted to your organisation within minutes.

Best Value · Complete Library

Get All Six Frameworks — £49

The complete Velinor AI governance library: VTF, STRIKE, AIBlindspot™, Fractional CAIO Playbook, AI Incident Management, and AI Ethics. Full documents, editable templates, and board-ready frameworks — instant download.

  • VTF — Velinor Trust Framework (36 metrics, 4 pillars)
  • STRIKE — Six-pillar AI adoption operating model
  • AIBlindspot™ — 42 risks across 8 organisational categories
  • Fractional CAIO Playbook — 180-day engagement programme
  • AI Incident Management — Playbook, classification & 6 templates
  • AI Ethics Framework — Executive decision-making principles
£49
All 6 full documents
Get the Bundle →

Instant download. Print-ready PDFs.

Six Frameworks

One integrated governance system.

Each framework is standalone — and interlocking. Together they form a complete operating model for organisations governing AI at executive level.

01 Governance & Audit

VTF — Velinor Trust Framework

Measure and communicate AI trustworthiness at board level.

The VTF provides organisations with a structured, evidence-based framework for measuring AI trustworthiness across four pillars, twelve measurement areas, and thirty-six metrics. It produces a normalised Trust Score (0.00–1.00) that translates technical AI performance into language boards, regulators, and executives understand.

Free 2-pager · Full document £9.99 · Print-ready PDF

What's inside
  • Trusted Purpose — alignment, compliance & societal benefit
  • Trusted Leadership — governance structures & accountability
  • Trusted Capabilities — reliability, security & transparency
  • Trusted Outcomes — measurable impact & harm prevention
  • VTF Trust Score — 36 metrics, normalised 0.00–1.00 scale
  • VTF–STRIKE alignment mapping
  • VTF–AIBlindspot™ risk overlay
  • Board-ready trust reporting structure
Structure 4 Pillars · 12 Areas · 36 Metrics
Audience CAIO, Board, Risk & Legal

02 AI Adoption

STRIKE

A six-pillar operating model for responsible AI adoption.

STRIKE gives organisations a complete, structured approach to implementing AI — from initial strategy through to evaluation and continuous improvement. Six interlocking pillars cover every dimension of AI adoption, ensuring nothing critical is missed and accountability is clear at every stage.

Free 2-pager · Full document £9.99 · Print-ready PDF

The six pillars
  • S — Strategic Alignment: AI anchored to business objectives
  • T — Technical Integration: infrastructure, data & architecture
  • R — Risk Awareness: governance, compliance & bias management
  • I — Implementation: deployment, change management & adoption
  • K — Knowledge Management: skills, learning & institutional memory
  • E — Evaluation & Feedback: measurement, iteration & ROI
Structure 6 Pillars · End-to-end model
Audience CAIO, CTO, CEO & Executive Team

03 Risk Intelligence

AIBlindspot™

42 AI risks organisations ignore — until it's too late.

AIBlindspot™ catalogues forty-two repeat-offender AI risks across eight organisational categories. The Expose → Align → Trust methodology moves organisations from risk awareness to governance confidence — surfacing what standard IT risk frameworks miss and mapping every risk against the R³AI standard: Reliable, Resilient, Responsible.

Free 2-pager · Full document £9.99 · Print-ready PDF

8 risk categories
  • BUS — Business & Strategic risks
  • OPS — Operational & Process risks
  • HUM — Human & Cultural risks
  • GOV — Governance & Compliance risks
  • TEC — Technical & Infrastructure risks
  • DAT — Data Quality & Privacy risks
  • SEC — Security & Adversarial risks
  • ENV — Environmental & Societal risks
Scope 42 risks · Lifecycle × R³AI matrix
Method Expose → Align → Trust

04 Executive Playbook

Fractional CAIO Playbook

The 180-day AI leadership engagement — from diagnostic to institutionalisation.

The complete operating model for a Fractional Chief AI Officer engagement. Five structured phases take organisations from an AI Trust Diagnostic through governance foundations, capabilities assessment, outcomes verification, and full institutionalisation — building the internal structures to sustain AI governance without ongoing external support.

Free 2-pager · Full document £9.99 · Print-ready PDF

5-phase programme
  • Phase 1 — AI Trust Diagnostic (Days 1–30)
  • Phase 2 — Governance & Purpose Foundations (Days 31–60)
  • Phase 3 — Capabilities Assessment (Days 61–90)
  • Phase 4 — Outcomes Verification (Days 91–150)
  • Phase 5 — Institutionalisation (Days 151–180)
  • Board reporting system & KPI framework
  • R³AI integration throughout
Duration 180-day structured engagement
Audience CAIO, CEO & Board

05 Incident Response

AI Incident Management

When AI fails — know exactly what to do.

A complete playbook for detecting, containing, and recovering from AI incidents. It covers incident classification from SEV-1 critical failures to SEV-4 minor anomalies, regulatory notification timelines under the EU AI Act, GDPR, and FCA rules, board communication protocols, and a structured post-incident review — built for the CAIO and senior leadership team. Includes six editable response templates, ready to deploy.

Free preview · Full playbook £9.99 · 6 editable templates included

What's inside
  • AI Incident Classification Matrix — SEV-1 to SEV-4
  • Incident Response Team — roles & responsibilities
  • Hours 0–24: Detection, containment & evidence
  • Regulatory notification — EU AI Act, GDPR, FCA
  • Hours 24–72: Communication & recovery criteria
  • Post-incident review — Five Whys structure
  • Board Incident Report template & quality criteria
  • 6 editable response templates (A–F)
Scope Full playbook · 6 editable templates
Audience CAIO, Risk, Legal & Board

06 Coming Soon

AI Ethics Framework

Principled AI decision-making for executive teams.

This framework will give executive teams a structured approach to ethical AI decision-making — covering fairness, transparency, accountability, human oversight, and societal impact. Built for the CAIO, Risk, and Legal functions that must make principled decisions under real-world commercial pressure.

In development. Register interest for early access.

Planned coverage
  • Fairness & bias — detection, testing and mitigation
  • Transparency & explainability standards
  • Accountability structures for AI decisions
  • Human oversight requirements and thresholds
  • Societal impact assessment framework
  • Regulatory alignment (EU AI Act, GDPR)
  • Ethics review board structure
Status In development — 2026
Audience CAIO, Risk, Legal & Board

Course · Starting May 2026 · Limited places

AI Incident Management:
From Response to Resilience

A structured online course for AI leaders, Risk, and Legal teams. Build your organisation's incident readiness from the ground up — classification, response protocols, regulatory obligations, board reporting, and a culture of continuous improvement. Based on the Velinor AI Incident Management framework with live case studies.

  • Incident classification and severity escalation
  • Building your AI Incident Response Team
  • Regulatory obligations — EU AI Act, GDPR & FCA
  • Board communication and stakeholder management
  • Post-incident review and systemic improvement
  • Live case studies from real AI failures

No payment now. We'll confirm your place and send joining details before May.

Course Price
£249
per person
  • Full online course access
  • All framework documents
  • 6 editable response templates
  • Live Q&A session with James
  • Certificate of completion
  • 12 months access to materials
Join Waiting List →

Starting May 2026 · Limited places

Free Reading

Ideas & insights — no paywall.

James publishes frameworks, leadership questions, and commentary on what it actually means to govern AI at the executive level. All free. No sign-up required.

AI Leadership  ·  December 2025

AI in 2026 Will Expose Leadership — Not Technology

The era of experimentation is over. What matters now is discipline: clear governance, executive accountability, and measurable ROI.

Read Article →
AI Security  ·  April 2024

AI Security (AISec) — A Threat Capability Matrix

A structured framework for separating fear from reality in AI security — mapping attack types against organisational readiness.

Read Article →
AI Trends  ·  December 2025

The Top 10 AI Trends of 2025: What Leaders Need to Know

From agentic systems to governance frameworks — the structural shifts that redefined what AI leadership means.

Read Article →
Browse All Articles →

Want bespoke governance tools for your organisation?

If your AI programme needs more than a playbook — embedded AI leadership, governance design, or board-ready strategy — the CAIO Discovery Call is where to start.

Book Your Discovery Call

30 minutes. No obligation. No pitch.