Thought Leadership

The Insights.

Frameworks, leadership questions, and commentary on what it actually means to govern AI at the executive level, from someone who's built it.

Articles & Insights

Writing on AI leadership.

James A Lang · March 2026

What is AI Governance?

AI governance is the leadership discipline ensuring AI systems operate safely, transparently, and accountably. The core components, why it matters, and how to implement it in your organisation.

Read Article →
James A Lang · March 2026

What is a Fractional Chief AI Officer (CAIO)?

A Fractional CAIO provides senior AI leadership on a part-time basis — delivering AI governance, strategy for boards, and risk management without the full-time overhead.

Read Article →
James A Lang · March 2026

What is AI Incident Response?

AI incident response is the structured capability to detect, contain, and recover from AI failures. The V-AIM six-stage lifecycle, V-SEV severity scale, AI TRACE review, and how to prepare before incidents occur.

Read Article →
James A Lang · December 2025

AI in 2026 Will Expose Leadership, Not Technology

AI will not fail in 2026 because of technology. It will fail because of leadership. The era of experimentation is over; what matters now is discipline: clear governance, executive accountability, and measurable return on investment.

Read Article →
James A Lang · December 2025

The Top 10 AI Trends of 2025: What Leaders Need to Know

2025 marked a turning point. AI moved from experimentation to operation, and with it, a new question emerged for leaders: not what can AI do, but who is accountable when it acts. From agentic systems to governance frameworks, here's what shifted.

Read Article →
James A Lang · April 2024

Cybersecurity Alert: New Insights from Microsoft's MTAC-East Asia Report

Microsoft Threat Intelligence's MTAC-East Asia Report provides an in-depth analysis of cyber and influence operations conducted by East Asian actors, and what AI's role in those operations means for enterprise security posture.

Read Article →
James A Lang · April 2024

AISec Case Study: Compromised PyTorch Dependency Chain

Malicious binaries masquerading as PyTorch dependencies on PyPI compromised sensitive data on numerous Linux systems, exposing the dangers of dependency confusion in software supply chains and in the AI tools that depend on them.

Read Article →
James A Lang · April 2024

AI Security Case Study: Bypassing ID.me Identity Verification, $3.4 Million Lost

An individual in California exploited ID.me's identity verification flaws to file 180 fraudulent unemployment claims, obtaining over $3.4 million. A lesson in what happens when trust in AI verification outpaces its actual robustness.

Read Article →
James A Lang · April 2024

AI Security Case Studies

A curated reference of AI security incidents across sectors, from identity fraud to model poisoning. Structured to help executive teams understand the real-world attack surface their AI programmes are operating in.

Read Article →
James A Lang · April 2024

Introducing KATO AI, Kaze's Decision Engineering Platform

KATO AI enables human-machine teaming products and services through decision science, data science, and AI capabilities, removing uncertainty for business leaders. An exploration of decision engineering as a discipline for AI governance.

Read Article →
James A Lang · April 2024

AI Security Case Study: Deepfake Results in the Theft of $77 Million

An AI security vulnerability was exploited to defraud a government organisation of $77 million. This case study examines what broke down, technically and structurally, and what a governance-first approach would have caught earlier.

Read Article →
James A Lang · April 2024

AI Security (AISec), A Threat Capability Matrix

A structured approach to separating fear from reality in AI security. The Threat Capability Matrix maps attack types against organisational readiness, giving leaders a practical framework for prioritising AI security investment.

Read Article →
James A Lang · April 2024

The Top AI Security (AISec) Attack Vectors

A clear-eyed breakdown of the most significant attack vectors targeting AI systems in production, from prompt injection to model inversion. Intended for leaders who need to understand risk without getting lost in technical detail.

Read Article →
James A Lang · March 2024

Useful Resources for AI Security

A curated set of tools and reference guides to help organisations secure their AI value from cyberattacks. Practical, annotated, and kept current, because AI security literacy starts with knowing where to look.

Read Article →
James A Lang · March 2024

Characteristics of a Future AI-Enabled Cyberattack

AI is set to amplify the scale and impact of cyberattacks, enabling a broader spectrum of malicious actors to execute more sophisticated operations. What does that future look like, and how should executive teams prepare for it now?

Read Article →
James A Lang · March 2024

Reference Architecture for an AI Cyber Agent

Intelligent agents are emerging as a promising approach to advanced cybersecurity, leveraging AI to respond at machine speed. This reference architecture explores how organisations can deploy AI-driven cyber defence without creating new governance blind spots.

Read Article →
YouTube

Latest videos.

Short-form commentary on AI governance, leadership, and accountability. Subscribe at @jameslang.velinorAI

Why AI Projects Fail: The Leadership Problem No One Talks About

Building Systems That Survive Real-World Shocks

Why Resilience Is the New Leadership Test

4 Must-Ask Questions for AI Governance

Build AI That Fits Your Culture

Explainable AI Starts with Leadership

Why Most AI Fails And How Leaders Can Fix It

AI Governance 101: The Real Reason Organisations Get AI Wrong

View all videos on YouTube →

Topics

James writes on the questions that matter to executive AI leadership.

These articles draw on direct experience building and governing AI at scale, not borrowed frameworks or academic theory. What gets published is what James is actually working through with client organisations.

New articles are added as thinking evolves. If there's a question you'd like James to write about, reach out directly.

Suggest a Topic →

Writing Themes

AI Governance & Accountability

Leadership Under AI Pressure

AI Security (AISec)

Board-Level AI Literacy

AI Maturity Frameworks

Trust-by-Design

Agentic AI & Risk

Commercial AI Strategy

Want to work together,
not just read about it?

If you're an organisation looking for AI leadership, not just useful articles, the CAIO Discovery Call is your starting point.

Book Your Discovery Call

30 minutes. No obligation. No pitch.

Chat on WhatsApp