
Exploring AI Trust and Safety
What is required for us to trust AI? There are many ways to understand and approach risk. Let’s review a few ways of categorizing risks and mitigation strategies developed by experts in the field, many of whom are AI Alliance members. We will start with some broad perspectives on risk management, then explore more detailed taxonomies of risks and mitigation tools.
- NIST Artificial Intelligence Risk Management Framework: A framework developed by the National Institute of Standards and Technology, under the United States Department of Commerce.
- Trust and Safety at Meta.
- Mozilla Foundation’s guidance on Trustworthy AI.
- MLCommons Taxonomy of Hazards.
- The Trusted AI (TAI) Frameworks Project.
Here are some additional sources of information that we won’t cover in more detail here (but we may expand coverage later):
- International AI Safety Report 2025: A report on the state of advanced AI capabilities and risks – written by 100 AI experts including representatives nominated by 33 countries and intergovernmental organizations (January 29, 2025).
- ACM Public Policy Products, Comments in Response to European Commission Call for Evidence Survey on “Artificial Intelligence - Implementing Regulation Establishing a Scientific Panel of Independent Experts” (PDF), published by the ACM Europe Technology Policy Committee (November 15, 2024).
- The AI inflection point: Adobe’s recommendations for responsible AI in organizations (December 2024).
See also the References for additional resources.
After reviewing this exploration, see our specific recommendations for the successful, safe use of AI.