
Exploring AI Trust and Safety

What is required for us to trust AI? There are many ways to understand and approach risk. Let’s review a few ways of categorizing risks and mitigation strategies developed by experts in the field, many of whom are AI Alliance members. We will start with some broad perspectives on risk management, then explore more detailed taxonomies of risks and mitigation tools. Finally, we offer specific information about cybersecurity in AI systems.

Here are some additional sources of information that we won’t cover in more detail here, although we may expand coverage of them at a later date:

  • International AI Safety Report 2025: A report on the state of advanced AI capabilities and risks – written by 100 AI experts including representatives nominated by 33 countries and intergovernmental organizations (January 29, 2025).
  • ACM Public Policy Products: Comments in Response to the European Commission Call for Evidence Survey on “Artificial Intelligence - Implementing Regulation Establishing a Scientific Panel of Independent Experts” (PDF), published by the ACM Europe Technology Policy Committee (November 15, 2024).
  • The AI inflection point: Adobe’s recommendations for responsible AI in organizations (published December 2024).
  • ClairBot, from the Responsible AI Team at Ekimetrics, is a research project that compares responses from several models to domain-specific questions, where each model has been tuned for a particular domain: in this case, ad serving, laws and regulations, and social sciences and ethics.

See also the References.


After reviewing the exploration here, consider our specific recommendations for successful, safe use of AI.