Link Search Menu Expand Document

AI Cybersecurity

Help Wanted: This is a new section in this guide and it needs your help to make it more comprehensive and definitive! For example, rather than just list resources, we would also like to provide a useful summary of the most important ideas. Please join us!

AI Systems must support all the standard practices and techniques for conventional Cybersecurity, plus handle new threats.

Here are some resources focused on AI cybersecurity.

Berryville Institute of Machine Learning

Berryville Institute of Machine Learning (BIML) is a group of cybersecurity experts exploring the security implications for ML/AI.

BIML resources include the following:

Coalition for Secure AI

The Coalition for Secure AI (CoSAI) is a relatively-new initiative of the Oasis Open Projects. CoSAI is an open ecosystem of AI and security experts from industry-leading organizations dedicated to sharing best practices for secure AI deployment and collaborating on AI security research and product development.

Specific work groups are focused on these areas:

  • Software supply chain security for AI systems
  • Preparing defenders for a changing security landscape
  • AI security risk governance
  • Secure design patterns for agentic systems

Resources will be published by the work groups as they become available. See, for example, their CoSAI Principles for Secure-by-Design Agentic Systems

Agent Security

Agents are potentially-complex AI Systems. Evaluating an individual LLM for trustworthiness is hard enough; an agent introduces potentially more than one LLM, plus other services, that work together to both return results and sometimes invoke actions on behalf of a user. This increases the challenges of satisfying security concerns.

Here are some resources for more information specifically about agent security:


Now that we have explored different perspectives on trust and safety, the next section provides some guidance on how to approach trust and safety in your projects.