The Open Agent Hub Projects
Collaborate, experiment, and build production-ready, open-source agents.
See the Open Agent Hub website
The Open Agent Hub is a collaborative community of open-source AI projects and domain-specific work groups that seek to make AI Agents successful in the real world through fast experimentation and distillation of learning into reusable reference architectures and enterprise-quality implementations.
We welcome your feedback, and we encourage you to help us build these projects. We also welcome your suggestions for projects we should add.
Our focus on domain-specific projects helps surface and address the challenges faced in those domains, challenges that often apply to other domains as well. Our work groups include engineers, AI researchers, and subject matter experts from industry-leading organizations. Here is some of our initial work:
Industrial AI / Semiconductors: See SemiKong, a foundation model for semiconductor process agents.
Expert knowledge / Legal: Try Bartlebot if you want to work with case law and other legal topics.
Finance: See Deep Research Agent for Finance, a new collaboration between finance and AI experts in the Alliance to explore the practical challenges of building and running trustworthy, production-quality AI-based finance applications.
Geospatial: See projects like GeoBench and TerraTorch.
Chemistry and Materials: See the new science foundation models for molecular analysis.
Model Context Protocol (MCP) Ecosystem and Related Projects
The Model Context Protocol (MCP) from Anthropic is quickly becoming an industry standard for communications between models, agents, data repositories, and other tools. The AI Alliance seeks to advance this protocol and foster a robust suite of tools around it to enable broad, trusted, and high-value use in production.
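To make the protocol concrete, here is a minimal sketch of an MCP server written with the official MCP Python SDK. The tool and resource names are illustrative assumptions; the SDK's FastMCP quickstart API is used as documented.

```python
# pip install "mcp[cli]"  -- the official MCP Python SDK
from mcp.server.fastmcp import FastMCP

# Create a named MCP server; the name and tools below are illustrative only.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers; exposed to MCP clients as a callable tool."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A simple parameterized resource an agent can read."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport; other transports are also supported
```

Any MCP-compatible agent or LLM application can then discover and call these tools without application-specific glue code.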
| Links | Description |
|---|---|
| MCP in the Enterprise: A User Guide | MCP has enormous potential to accelerate AI adoption in enterprises. This "living" user guide features chapters written by experts on various aspects of deploying, managing, and using MCP successfully in enterprise settings. It contains the first several chapters, with many more coming soon. (Contributions are welcome!) |
| Context Forge | Context Forge is an MCP gateway that also supports A2A and REST. It serves as a central management point for tools, resources, and prompts, and it can be accessed by MCP-compatible LLM applications. It converts REST API endpoints to MCP, composes virtual MCP servers with added security and observability, and converts between protocols (e.g., stdio, SSE). (Principal developer: IBM) |
| Deep Research Agent for Finance | The Deep Research Agent for Finance project demonstrates MCP in action for a common design pattern, the deep research agent. This example shows how a financial analyst can use a deep research agent to find, aggregate, and analyze information about a public company (or other potential investment). Many other applications are possible. The app is built on MCP Agent, developed by LastMile AI, discussed next. |
| LastMile AI MCP Agent | Build effective agents using the Model Context Protocol and simple to sophisticated workflow patterns. See the Deep Research Agent for Finance, discussed in the previous row, which is built with this toolkit. See also the recent Alliance blog post on lessons learned developing the orchestration feature for deep research and related use cases. Highly informative! (Principal developer: LastMile AI) |
1 Indicates an Alliance core project.
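Complementing the server sketch above, the following is a minimal MCP client sketch, again using the official Python SDK. The server command, file name, and tool name are assumptions carried over from that sketch.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the example server from the earlier sketch as a subprocess over stdio.
    # The command and file name are assumptions for illustration.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            result = await session.call_tool("add", {"a": 2, "b": 3})
            print("Result:", result.content)

asyncio.run(main())
```

Gateways like Context Forge and toolkits like MCP Agent build on these same primitives, adding routing, security, observability, and workflow orchestration.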
The NLIP Project
The NLIP project is facilitating the development of an open-source protocol for intelligent agents to communicate with each other and with humans using natural language. The MCP in the Enterprise: A User Guide, discussed above, has a chapter on NLIP.
| Links | Description |
|---|---|
| NLIP Project | NLIP is designed to perform the role of a meta-protocol that allows agents from different ecosystems to communicate with one another, including interfaces to other protocols such as A2A, ACP, AGNTCY, MCP, NANDA, etc. One outcome will be a new ECMA standard, TC-56 NLIP, the Natural Language Interaction Protocol (draft). The organization is also developing reference implementations of the protocol and endpoints. See the GitHub organization for details on these implementations. |
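To give a feel for natural-language agent-to-agent messaging, here is a hypothetical sketch in Python. The field names are assumptions chosen for readability, not the NLIP specification; consult the TC-56 draft and the project's reference implementations for the actual message schema.

```python
import json

# Hypothetical NLIP-style message: the field names below are illustrative
# assumptions, not taken from the TC-56 draft specification.
message = {
    "format": "text",        # assumed: a natural-language payload
    "subformat": "english",  # assumed: language tag for the content
    "content": "Summarize the most recent annual filing for ACME Corp.",
}

# In practice, an agent would send this JSON to another agent's NLIP endpoint
# (for example, over HTTP) and receive a natural-language reply in the same shape.
print(json.dumps(message, indent=2))
```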
AI-Powered Programming Language for Agents
| Links | Description |
|---|---|
| Dana — The Agent-Native Evolution of AI Development | Dana is based on the question, “What if your agents could learn, adapt, and improve themselves in production—without you?” Dana bridges the gap between AI coding assistance and autonomous agents through agent-native programming: native agent primitives, context-aware reason() calls that adapt output types automatically, self-improving pipelines with compositional \| (“pipe”) operators, and functions that evolve through POET feedback loops (an automated prompt improvement technique). A simple Python analogy of the pipe-composition idea follows this table. (Principal developer: Aitomatic) |
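As noted above, here is a small Python analogy of composing agent steps with a pipe operator. It is not Dana syntax; the reason() stub below merely stands in for Dana's context-aware, model-backed reason() call.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Step:
    """One stage in a pipeline; composed left-to-right with the | operator."""
    fn: Callable[[Any], Any]

    def __or__(self, other: "Step") -> "Step":
        # Compose two steps into a single step (self runs first, then other).
        return Step(lambda x: other.fn(self.fn(x)))

    def __call__(self, x: Any) -> Any:
        return self.fn(x)

def reason(prompt: str) -> Callable[[Any], str]:
    # Placeholder for a context-aware, LLM-backed reasoning call (assumption).
    return lambda data: f"[model response to '{prompt}' given: {data!r}]"

# Build and run a three-stage pipeline, analogous to Dana's compositional pipelines.
pipeline = Step(str.strip) | Step(str.lower) | Step(reason("classify this request"))
print(pipeline("  Reset my password  "))
```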
Agent Knowledge and Tool Foundations
See also the Deep Research Agent for Finance, which is discussed in the MCP Ecosystem section above.
| Links | Description |
|---|---|
| Gofannon | A repository of functions consumable by other agent frameworks. |
| Semiont | Wiki-like knowledge base supporting graph retrieval, where humans and agents co-create knowledge. Includes an MCP server. |
| Proscenium | Collaborative, asynchronous human/agent interactions. |
| Lapidarist | Document enrichment and knowledge structure (e.g., knowledge graph) extraction and resolution. |
| AllyCat | (Beginner friendly!) Get started with a simple and fun end-to-end RAG application that scrapes your website so you can ask it questions. A minimal sketch of the RAG pattern follows this table. |
| Bartlebot | Bartlebot is a demonstration of an AI agent for the legal domain with a Slack integration. It is in early development. |
| The Living Guide to Applying AI | Tips from experts on using AI for various applications, including popular design patterns. |
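As promised in the AllyCat entry, here is a minimal sketch of the retrieval-augmented generation (RAG) pattern. It uses a toy bag-of-words retriever so it runs without external services; the documents, question, and scoring are illustrative assumptions, and a real application like AllyCat would use embeddings, a vector store, and an LLM call for the final answer.

```python
import math
from collections import Counter

# Toy corpus standing in for scraped website content (illustrative only).
documents = [
    "The AI Alliance builds open-source AI projects and reference architectures.",
    "AllyCat scrapes a website and answers questions about its content.",
    "Llama Stack standardizes building blocks for AI application development.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a real system would use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = vectorize(question)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

question = "What does AllyCat do?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # In a full RAG app, this prompt would be sent to an LLM to generate the answer.
```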
Llama Stack and Llama Stack Agents
The Llama Stack project standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem, integrates with other open-source tools and managed services, and provides APIs for inference, evaluation, agents, MCP, and deployment requirements like observability. It is designed to support both on-premise and cloud deployments. The ecosystem provides many example applications to help developers build and deploy AI applications quickly and effectively.
AI Alliance members are contributing directly to Llama Stack development, as well as building example applications that illustrate its use in various enterprise scenarios. The llama-stack-examples project has two initial example applications, described in the table below. The first app is a simple getting-started chatbot that shows you the basics of creating an app with Llama Stack and how to run it. The second app (in development) is a deep research application, a popular class of AI applications, which will demonstrate Llama Stack support for technologies like agents and MCP. Other examples under consideration will be chosen to cover other common application patterns seen in several industries. Please join us!
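As a taste of what the getting-started example covers, here is a minimal sketch using the Llama Stack Python client against a locally running Llama Stack server. The base URL, model id, and exact method names are assumptions and vary across Llama Stack versions; check the client documentation and the example apps for the current API.

```python
from llama_stack_client import LlamaStackClient

# Connect to a locally running Llama Stack server (URL and port are assumptions).
client = LlamaStackClient(base_url="http://localhost:8321")

# Ask a registered model a question via the inference API
# (the model id is an assumption; list registered models with client.models.list()).
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "What building blocks does Llama Stack provide?"}],
)
print(response.completion_message.content)
```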
| Links | Description |
|---|---|
| Llama Stack | The Llama Stack project itself. See also the Llama Stack Python Client. |
| Llama Stack Example Apps | A growing suite of example applications for Llama Stack that demonstrate how to build applications that use the RAG pattern and agents. See also the Llama Stack Demos for OpenShift and Kubernetes. |
| AI Alliance Llama Stack Example Apps | A growing suite of example applications for Llama Stack that demonstrate various stack features and common application patterns. |
| CCVec - Common Crawl to Vector Stores | Search, analyze, and index Common Crawl data into vector stores for RAG applications, with three interfaces: CLI, Python library, and an MCP server. (Principal developers: Common Crawl Foundation and Meta) |
| Red Hat Lightspeed | An end-to-end system management tool that predicts risks across Red Hat platforms, recommends actions, and tracks costs. Red Hat Lightspeed uses AI-powered package recommendations and planning capabilities to provide targeted guidance on increasing your systems’ day-to-day efficiency. (Principal developer: Red Hat) |
Deployment and Scaling
Deploying and scaling AI systems is critical for cost-effective use of AI. There is a growing diversity of hardware accelerators for AI, not only for servers but also for edge devices. Developers want the ability to write AI applications that efficiently and transparently scale across different deployment scenarios, from PoCs and single-node deployments on development laptops and edge devices up to large-scale clustered deployments supporting many users.
| Links | Description |
|---|---|
| The AI Accelerator Software Ecosystem Guide | A guide to the most common AI accelerators and the software stacks they use to integrate with tools you know, like PyTorch. |