AI Security Case Studies

The following case studies document real-world incidents where AI security vulnerabilities were exploited. Each case is structured to provide executive teams with a clear understanding of the threat, the method of attack, and the leadership response required.

Case Study Number: AISec-0001/24

Deepfake Results in the Theft of $77 Million

This case study documents the exploitation of an AI security vulnerability at a cost of $77 million to a government organisation. Two individuals in China used a camera hijack technique to bypass facial recognition authentication, establishing a fraudulent shell company and stealing the funds through the tax system.

Read the full case study: Deepfake Results in the Theft of $77 Million
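
Camera hijacking defeats facial recognition because a pipeline that only matches faces in incoming frames cannot tell injected or replayed video from a live camera feed. The sketch below is a minimal, hypothetical illustration of one common countermeasure, a randomised challenge-response liveness check; it is not a description of the affected system, and the challenge names and time window are assumptions.

```python
# Minimal conceptual sketch, not the affected system's design: a verification
# flow that only matches faces in whatever frames the client's "camera"
# supplies cannot distinguish injected or replayed video from a live feed.
# A randomised challenge raises the bar, because a pre-recorded clip cannot
# anticipate which action will be requested. Challenge names are hypothetical.

import secrets
import time

CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "nod"]


def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable liveness action and record when it was issued."""
    return secrets.choice(CHALLENGES), time.monotonic()


def verify_response(challenge: str, issued_at: float, observed_action: str,
                    max_window_s: float = 10.0) -> bool:
    """Accept only if the observed action matches the challenge and arrives
    within the allowed window, so feed injection alone no longer suffices."""
    within_window = (time.monotonic() - issued_at) <= max_window_s
    return within_window and observed_action == challenge


if __name__ == "__main__":
    challenge, issued_at = issue_challenge()
    # A replayed clip showing an action that was never requested fails the
    # freshly issued challenge.
    print(verify_response(challenge, issued_at, observed_action="smile"))
```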

Case Study Number: AISec-0002/24

Compromised PyTorch Dependency Chain

Malicious binaries masquerading as PyTorch dependencies were distributed through PyPI and compromised sensitive data on numerous Linux systems, demonstrating the dangers of dependency confusion in software supply chains.

Read the full case study: Compromised PyTorch Dependency Chain
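
Dependency confusion arises when a package name intended to resolve only to a private or project-specific index is also registered on a public index such as PyPI, and the installer prefers the public copy. The sketch below is a minimal, hypothetical detection heuristic rather than part of the PyTorch remediation: it flags installed distributions whose names collide with a reserved internal namespace so their install source can be reviewed. The internal package names are placeholders.

```python
# Minimal, hypothetical detection sketch: flag installed packages whose names
# collide with a namespace that should only ever be served from a private
# index, so their install source can be checked manually. The names below are
# placeholders, not the packages involved in the real incident.

from importlib.metadata import distributions

# Package names reserved for a private index (hypothetical examples).
INTERNAL_NAMES = {"acme-internal-utils", "acme-data-pipeline"}


def find_shadow_candidates() -> list[str]:
    """Return installed distributions whose names match internal-only names."""
    suspects = []
    for dist in distributions():
        name = (dist.metadata.get("Name") or "").lower()
        if name in INTERNAL_NAMES:
            suspects.append(f"{name}=={dist.version}")
    return suspects


if __name__ == "__main__":
    hits = find_shadow_candidates()
    if hits:
        print("Verify the install source of the following packages:")
        for hit in hits:
            print(f"  {hit}")
    else:
        print("No internal-only package names found among installed packages.")
```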

Case Study Number: AISec-0003/24

Bypassing ID.me AI Identity Verification: $3.4 Million

An individual in California exploited flaws in ID.me's identity verification to file 180 fraudulent unemployment claims, obtaining over $3.4 million by using fake IDs and wigs to pass verification checks. The individual was eventually sentenced to nearly seven years in prison for wire fraud and aggravated identity theft.

Read the full case study: Bypassing ID.me Identity Verification

Case Study Number: AISec-0004/24

Additional AI security case studies are published regularly. Each follows the same structured format to help organisations understand threat capability levels and appropriate mitigations.