The following resources are curated to help organisations, leaders, and security practitioners develop and maintain AI security capability. Each resource is annotated to help you identify what is relevant to your organisation's needs.

AI Security Development

Principles for the Security of Machine Learning, NCSC's detailed guidance on developing, deploying or operating a system with an ML component.

OWASP AI Exchange, Community content that feeds into standards including the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10, the OWASP LLM Top 10, and OpenCRE.

OWASP ML Top 10, The OWASP Machine Learning Security Top 10 project delivers an overview of the top 10 security issues affecting machine learning systems.

OWASP LLM Top 10, The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organisations about the potential security risks when deploying and managing LLMs.

Secure by Design, Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software. Co-authored by CISA and international cybersecurity agencies.

AI Governance Frameworks

NIST AI RMF, The NIST AI Risk Management Framework is a voluntary, flexible framework that helps organisations manage the risks AI poses to individuals, organisations, and society.

EU AI Act, The European Union's landmark regulation on AI, establishing requirements based on risk levels for AI systems placed on the EU market.

Singapore AI Governance Testing Framework, A software toolkit that validates the performance of AI systems against a set of internationally recognised principles through standardised tests.

Multilayer Framework for Good Cybersecurity Practices for AI, ENISA guidance to National Competent Authorities and AI stakeholders on the steps they need to follow to secure their AI systems, operations and processes.

ISO/IEC 5338: AI system life cycle processes, A set of processes and associated concepts for describing the life cycle of AI systems based on machine learning and heuristic systems.

AI Cloud Service Compliance Criteria Catalogue (AIC4), BSI's AI-specific criteria enabling evaluation of the security of an AI service across its lifecycle.

NIST IR 8269, A Taxonomy and Terminology of Adversarial Machine Learning.

An Overview of Catastrophic AI Risks (2023), Comprehensive review of the most significant catastrophic risks associated with advanced AI systems.

Threat Intelligence and Attack Frameworks

MITRE ATLAS, A knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups.

MITRE ATT&CK, A globally accessible knowledge base of adversary tactics and techniques based on real-world observations.

CVE Database, Common Vulnerabilities and Exposures database for tracking known vulnerabilities across software and hardware.

Testing and Tooling

Open-source projects that help users security-test AI models include:

  • Adversarial Robustness Toolbox (IBM)
  • CleverHans (University of Toronto)
  • TextAttack (University of Virginia)
  • PromptBench (Microsoft)
  • Counterfit (Microsoft)
  • AI Verify (Infocomm Media Development Authority, Singapore)
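To give a flavour of what these toolkits automate, the sketch below shows the core of a classic evasion attack, the Fast Gradient Sign Method (FGSM), implemented from scratch against a simple logistic classifier. This is an illustrative example only, not code from any of the listed projects, and all model parameters are hypothetical.

```python
# Minimal FGSM-style evasion attack on a hand-rolled logistic classifier.
# Toolkits such as the Adversarial Robustness Toolbox and CleverHans
# implement this family of attacks against real ML frameworks.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to increase the loss of a logistic classifier.

    x, w : feature and weight vectors; b : bias; y : true label (0 or 1);
    eps : maximum per-feature perturbation (the L-infinity budget).
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    # Gradient of the binary cross-entropy loss w.r.t. the input is (p - y) * w.
    grad = [(p - y) * wi for wi in w]
    # FGSM step: shift each feature by eps in the sign of its gradient.
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Hypothetical model and input: the perturbed point crosses the
# decision boundary while each feature moves by at most eps.
x = [1.0, 2.0]
w = [2.0, -1.0]
x_adv = fgsm_perturb(x, w, b=0.0, y=1, eps=0.5)
```

In practice the listed toolkits apply the same idea to neural networks, computing input gradients automatically and benchmarking a model's robustness across many attack budgets.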

Cybersecurity Frameworks

NCSC Cyber Assessment Framework (CAF), The CAF provides guidance for organisations responsible for vitally important services and activities.

NIST CSF, The NIST Cybersecurity Framework (version 2.0) provides guidance for managing cybersecurity risks, applicable to organisations of all sizes and sectors.

NIST SP 800-161 Rev. 1, Cybersecurity Supply Chain Risk Management Practices for Systems and Organisations.

CISA's Cybersecurity Performance Goals, A common set of protections that all critical infrastructure entities should implement to meaningfully reduce the likelihood and impact of known risks and adversary techniques.

MITRE System of Trust Framework, MITRE's framework for evaluating the trustworthiness of suppliers and service providers within the supply chain.

Risk Management

ISO/IEC 27001: Information security, cybersecurity and privacy protection, This standard provides organisations with guidance on establishing, implementing and maintaining an information security management system.

ISO 31000: Risk Management, An international standard that provides principles and guidelines for managing risk within organisations.

NCSC Risk Management Guidance, This guidance helps cyber security risk practitioners to better understand and manage the cyber security risks affecting their organisations.

Vendor Frameworks

Databricks AI Security Framework (DASF), An actionable framework to manage AI security across the Databricks platform.

AWS AI/ML Security, Amazon's guidance on AI and machine learning security capabilities built into AWS services.

Microsoft Azure AI Security Baseline, A comprehensive list of Azure service security recommendations for AI services.

Google Secure AI Framework (SAIF), Google's conceptual framework to secure AI systems, introducing standards for building and deploying AI responsibly.

AI Security (AISec) Assessment

AISec Assessment, An audit tool that provides insights on your organisation's AI infrastructure cybersecurity maturity. Use this to establish a baseline and identify priority areas for improvement.