by AdamStrojek
Simplifies the creation of AI agents in Rust by providing strong static typing, robust error handling, and seamless integration with multiple LLM providers.
AgentAI offers a Rust‑native framework for building AI agents that can interact with a wide range of Large Language Models. It abstracts the underlying LLM APIs, supplies a toolbox for custom tools, and includes optional support for tools exposed by MCP (Model Context Protocol) servers.
Enable optional features with Cargo feature flags, e.g., -F mcp-client to activate MCP support. The macros, tools-buildin, tools-web, and mcp-client features enable fine‑grained functionality.
Q: Is AgentAI production‑ready?
A: The library is under heavy development; interfaces may change without notice.
Q: How do I enable the MCP client?
A: Add the mcp-client feature (enabled by default) or explicitly specify it with cargo add agentai -F mcp-client.
Q: Can I use AgentAI with async runtimes other than Tokio?
A: Yes, as long as the runtime implements the required async traits used by the underlying genai crate.
Q: Where can I find more examples?
A: The repository includes an examples directory; run any example with cargo run --example <example_name>.
Q: How do I contribute?
A: See the CONTRIBUTING.md file in the repository for guidelines.
AgentAI is a Rust library designed to simplify the creation of AI agents. It leverages the GenAI library to interface with a wide range of popular Large Language Models (LLMs), making it versatile and powerful. Written in Rust, AgentAI benefits from strong static typing and robust error handling, ensuring reliable and maintainable code. Whether you're developing simple or complex AI agents, AgentAI provides a streamlined and efficient development process.
This library is under heavy development. The interface may change at any time without notice.
ToolBox (version 0.1.5)
This release introduces the ToolBox, a new feature providing an easy-to-use interface for supplying tools to AI agents.
We are continuously working to improve AgentAI, with more features planned for the near future.
To add the AgentAI crate to your project, run the following command in your project's root directory:
cargo add agentai
This command adds the crate and its dependencies to your project.
Available features for the agentai crate.
To enable any of these features, pass them to cargo add with the -F flag, for example:
cargo add agentai -F mcp-client
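All of the features listed below are enabled by default, so the -F flag matters mostly when you opt out of the defaults. As a minimal sketch (assuming the features can be selected independently of one another), you could disable the default set and re-add only what you need:
cargo add agentai --no-default-features -F macros,tools-buildin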
Features list:
mcp-client (enabled by default) — Enables experimental support for Agent Tools based on MCP Servers
macros (enabled by default) — Enables support for the #[toolbox] macro
tools-buildin (enabled by default) — Enables support for built-in tools
tools-web (enabled by default) — Enables support for web tools
Here is a basic example of how to create an AI agent using AgentAI:
use agentai::Agent;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create an agent with a system prompt describing its role.
    let mut agent = Agent::new("You are a useful assistant");
    // Send a single request: the first argument selects the model, the second is
    // the user question, and the last is an optional argument left as None here.
    let answer: String = agent.run("gpt-4o", "Why is the sky blue?", None).await?;
    println!("Answer: {}", answer);
    Ok(())
}
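Because the agent is bound to a mutable variable, the same instance can be reused for further requests. As a small sketch that only repeats the run call from the example above (these lines would go before Ok(()); whether earlier exchanges are carried over as conversation context is not covered here):
    let follow_up: String = agent.run("gpt-4o", "And why are sunsets red?", None).await?;
    println!("Follow-up: {}", follow_up);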
For more examples, check out the examples directory. To run an example, use the following command, replacing <example_name> with the name of the example file (without the .rs extension):
cargo run --example <example_name>
For instance, to run the simple example:
cargo run --example simple
Full documentation is available on docs.rs.
Contributions are welcome! Please see our CONTRIBUTING.md for more details.
This project is licensed under the MIT License. See the LICENSE file for details.
Special thanks to the creators of the GenAI library for providing a robust framework for interfacing with various LLMs.