Trust and Safety Evaluations
| | |
| :--- | :--- |
| **Authors** | The AI Alliance Trust and Safety Work Group |
| **History** | V0.1.0, 2024-10-12 |
Welcome to The AI Alliance project for Trust and Safety Evaluations.
Tips:
- Use the search box at the top of this page to find specific content.
- Capitalized, italicized terms link to a glossary of terms.
Much like other software, generative AI (“GenAI”) Models and the AI Systems that use them need to be trusted and useful to their users.
Evaluation aims to provide the evidence needed to earn users’ trust in models and systems. More specifically, evaluation refers to the ability to measure and quantify how a model or system responds to inputs. Are the responses within acceptable bounds, for example free of hate speech and Hallucinations? Are they useful to users? Are they cost-effective?
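To make the idea concrete, the sketch below shows what a single evaluation might look like in code: a function that scores one model response against one criterion and checks the score against a threshold. This is a minimal, hypothetical illustration only; the names (`evaluate_response`, `BLOCKLIST`, `EvalResult`) and the toy keyword check are assumptions for this example, not part of any AI Alliance tool or any specific evaluation harness.

```python
# Hypothetical sketch of a single evaluation: score a model response for one
# criterion and compare the score against a pass/fail threshold.
# A real safety evaluator would use a trained classifier or a curated
# evaluation harness rather than this toy keyword check.

from dataclasses import dataclass

# Toy stand-in for a hate-speech detector (placeholder terms only).
BLOCKLIST = {"blocked_term_1", "blocked_term_2"}

@dataclass
class EvalResult:
    score: float   # 1.0 = fully acceptable, 0.0 = unacceptable
    passed: bool   # True if the score clears the threshold

def evaluate_response(response: str, threshold: float = 0.99) -> EvalResult:
    """Score one model response for the presence of blocklisted terms."""
    tokens = response.lower().split()
    hits = sum(1 for t in tokens if t in BLOCKLIST)
    score = 1.0 if hits == 0 else max(0.0, 1.0 - hits / max(len(tokens), 1))
    return EvalResult(score=score, passed=score >= threshold)

if __name__ == "__main__":
    result = evaluate_response("The model replied politely and helpfully.")
    print(result)  # EvalResult(score=1.0, passed=True)
```

In practice, many such evaluators, each covering a different risk or quality dimension, are run over many prompts and aggregated into the Benchmarks and Leaderboards discussed below.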
Many organizations are working on evaluations for safety, broadly defined, as well as other kinds of measurements. They also publish Benchmarks that aggregate evaluations and Leaderboards that show how models and systems perform against those benchmarks, without requiring you to run the benchmarks yourself.
The Trust and Safety Evaluations project addresses several under-served needs:
- While very good Taxonomies of evaluation for risks and harms have emerged, there are other areas of interest where a standard taxonomy, with corresponding evaluations, would be useful. See taxonomy.
- Evaluators that implement evaluations in the taxonomy are needed. Some areas are well-covered, while others have no available evaluators. These evaluators can be aggregated into benchmarks. See evaluators.
- Leaderboards are needed that provide unique, user-configurable views of different benchmark combinations, helping users focus on the benchmarks most relevant to their needs. See leaderboards.
- Users need a reference stack of industry-standard OSS tools for evaluation, especially at Inference time. See Evaluation Platform Reference Stack.
This website provides the documentation for this project, with links to other resources, including code and leaderboards, as they become available.
Are you interested in contributing? If so, please see the contributing page for information on how you can get involved.
This site is organized into the following sections:
- Glossary of Terms
- User Personae and Their Needs
- Taxonomy
- Evaluators
- Leaderboards
- Evaluation Platform Reference Stack
Additional links:
- Contributing: We welcome your contributions! Here’s how you can contribute.
- About Us: More about the AI Alliance and this project.
- The AI Alliance: The AI Alliance website.
- Project GitHub Repo