Six interlocking AI governance frameworks, built from direct experience operating AI at scale. Each is available as a free 2-pager or a full document, designed to be adapted to your organisation within minutes.
The complete Velinor AI governance library: VTF, STRIKE, AIBlindspot™, Fractional CAIO Playbook, AI Incident Management, and AI Ethics. Full documents, editable templates, and board-ready frameworks, all available for instant download.
Launching soon. Join the waitlist for early-access pricing.
Each framework stands alone, yet they interlock. Together they form a complete operating model for organisations governing AI at executive level.
When AI fails, know exactly what to do.
Most organisations are deploying AI with no plan for when it goes wrong. No clear owner. No regulatory checklist. No way to explain it to the board. When the crisis hits, they're making it up as they go. This playbook, written for executive officers, changes that. 49 pages covering every type of AI failure, from a minor system error to a full public breakdown, with clear steps on who acts, what they do, and how fast. Regulatory deadlines, board communication guides, a 12-point readiness checklist, and six ready-to-use response documents included. When the crisis lands on your desk, you'll either have a plan or you won't.
Free preview available · £9.99 one-time purchase · Instant PDF download via Gumroad
Measure and communicate AI trustworthiness at board level.
The VTF provides organisations with a structured, evidence-based framework for measuring AI trustworthiness across four pillars, twelve measurement areas, and thirty-six metrics. It produces a normalised Trust Score (0.00–1.00) that translates technical AI performance into language boards, regulators, and executives understand.
Free 2-pager · Full document coming soon · Join the waitlist for early access
A six-pillar operating model for responsible AI adoption.
STRIKE gives organisations a complete, structured approach to implementing AI, from initial strategy through to evaluation and continuous improvement. Six interlocking pillars cover every dimension of AI adoption, ensuring nothing critical is missed and accountability is clear at every stage.
Free 2-pager · Full document coming soon · Join the waitlist for early access
42 AI risks organisations ignore, until it's too late.
AIBlindspot™ catalogues forty-two repeat-offender AI risks across eight organisational categories. The Expose → Align → Trust methodology moves organisations from risk awareness to governance confidence, surfacing what standard IT risk frameworks miss and mapping every risk against the R³AI standard: Reliable, Resilient, Responsible.
Free 2-pager · Full document coming soon · Join the waitlist for early access
The 180-day AI leadership engagement, from diagnostic to institutionalisation.
The complete operating model for a Fractional Chief AI Officer engagement. Five structured phases take organisations from an AI Trust Diagnostic through governance foundations, capabilities assessment, outcomes verification, and full institutionalisation, building the internal structures to sustain AI governance without ongoing external support.
Free 2-pager · Full document coming soon · Join the waitlist for early access
Principled AI decision-making for executive teams.
This framework will give executive teams a structured approach to ethical AI decision-making, covering fairness, transparency, accountability, human oversight, and societal impact. Built for the CAIO, Risk, and Legal functions that must make principled decisions under real-world commercial pressure.
In development. Register interest for early access.
The full framework documents and bundle are coming soon. Subscribe below to get notified the moment they go live — including early-access pricing for subscribers.
No spam. Unsubscribe any time. Early subscribers get exclusive pricing.
A structured online course for AI leaders, Risk, and Legal teams. Build your organisation's incident readiness from the ground up: classification, response protocols, regulatory obligations, board reporting, and a culture of continuous improvement. Based on the Velinor AI Incident Management framework, with live case studies.
No payment now. We'll confirm your place and send joining details before May.
Starting May 2026 · Limited places
The executive programme for AI leaders who need to surface hidden risks before they become Black Swan events. Learn to map 42 repeat-offender risks across 8 categories, align findings to board KPIs and regulatory frameworks, and build the evidence auditors need, using the AIBlindspot™ Cards and STRIKE governance model.
No payment now. We'll confirm your place and send joining details before the course opens.
Starting 2026 · Limited places
James publishes frameworks, leadership questions, and commentary on what it actually means to govern AI at the executive level. All free. No sign-up required.
The era of experimentation is over. What matters now is discipline: clear governance, executive accountability, and measurable ROI.
A structured framework for separating fear from reality in AI security, mapping attack types against organisational readiness.
From agentic systems to governance frameworks, the structural shifts that redefined what AI leadership means.