
Testing Agents

Table of contents
  1. Testing Agents
    1. Other Tools for Testing Agents
      1. DoomArena
      2. Google’s Agent Development Kit
      3. LastMile AI’s MCP Eval
      4. IBM’s AssetOpsBench
    2. Experiments to Try
    3. What’s Next?

Agents are inherently more complex than application patterns that wrap LLM invocations in "conventional" code. As agents become increasingly autonomous, they require special approaches to testing. This chapter explores those requirements and the available approaches.

TODO: This chapter needs contributions from experts. See this issue and Contributing if you would like to help.

Highlights:

TODO

TODOs:

  1. Research the work of experts in this area. For example, DoomArena.
  2. Catalog the unique requirements for agent testing.
  3. Provide specific examples of how to use those concepts.

Other Tools for Testing Agents

DoomArena

DoomArena from ServiceNow is a framework for testing AI Agents against evolving security threats. It offers a modular, configurable, plug-in framework for testing the security of AI agents across multiple attack scenarios.

DoomArena enables detailed threat modeling, adaptive testing, and fine-grained security evaluations through real-world case studies, such as τ-Bench and BrowserGym. These case studies showcase how DoomArena evaluates vulnerabilities in AI agents interacting in airline customer service and e-commerce contexts.

Furthermore, DoomArena serves as a laboratory for AI agent security research, yielding insights into agent vulnerabilities, the effectiveness of defenses, and interactions between attacks.

Google’s Agent Development Kit

Google’s Agent Development Kit has a chapter called Why Evaluate Agents?, which provides tips for writing evaluations specifically tailored for agents.

LastMile AI’s MCP Eval

MCP Eval is an evaluation framework for testing Model Context Protocol (MCP) servers and the agents that use them. Unlike traditional testing approaches that mock interactions or test components in isolation, it evaluates agents against live MCP servers end to end. It is built on MCP Agent, LastMile AI's agent framework that emphasizes MCP as the communication protocol.

IBM’s AssetOpsBench

AssetOpsBench is a unified framework for developing, orchestrating, and evaluating domain-specific AI agents in industrial asset operations and maintenance. Designed for maintenance engineers, reliability specialists, and facility planners, it allows reproducible evaluation of multi-step workflows in simulated industrial environments.

Experiments to Try

TODO: We will expand this section once more content is provided above.

What’s Next?

Review the highlights summarized above, then proceed to Future Ideas.