Easily create and interact with MCP servers using custom agents, supporting any LLM with tool calling and offering multi‑server, sandboxed, and streaming capabilities.
mcp-use provides a Python SDK that lets developers connect any LLM to MCP servers, build custom agents, and access tools such as web browsing, file operations, 3D modeling, and more without relying on closed‑source clients.
pip install mcp-use
# optional E2B sandbox support
pip install "mcp-use[e2b]"
Create an MCPClient from the configuration:
client = MCPClient.from_config_file("browser_mcp.json")
# or MCPClient.from_dict(config)
Create an MCPAgent with your LLM and run a query:
llm = ChatOpenAI(model="gpt-4o")
agent = MCPAgent(llm=llm, client=client, max_steps=30)
result = await agent.run("Find the best restaurant in San Francisco")
# streaming
async for chunk in agent.astream("Find restaurants"):
    print(chunk["messages"], end="")
Q: Which LLMs can I use?
A: Any model supported by LangChain that offers tool/function calling (e.g., OpenAI GPT‑4o, Anthropic Claude‑3.5, Groq Llama‑3).
Q: Do I need to run MCP servers locally?
A: You can run them locally via npx, use HTTP endpoints, or enable the E2B sandbox to run them in the cloud.
Q: How do I add a new MCP server?
A: Add an entry under mcpServers in a JSON config file or a Python dict, then recreate the MCPClient.
Q: Can I limit the tools an agent can use?
A: Yes, pass disallowed_tools when creating the MCPAgent.
Q: How do I enable streaming output?
A: Call agent.astream(query) and iterate over the async generator.
Q: What is the simplest way to start?
A: Install the package, copy the example browser_mcp.json config, and run the quick‑start script from the README.
🌐 MCP-Use is the open source way to connect any LLM to any MCP server and build custom MCP agents that have tool access, without using closed source or application clients.
💡 Let developers easily connect any LLM to tools like web browsing, file operations, and more.
Supports: MCP primitives and transports (see the support matrix in the project README).
With pip:
pip install mcp-use
Or install from source:
git clone https://github.com/mcp-use/mcp-use.git
cd mcp-use
pip install -e .
mcp_use works with various LLM providers through LangChain. You'll need to install the appropriate LangChain provider package for your chosen LLM. For example:
# For OpenAI
pip install langchain-openai
# For Anthropic
pip install langchain-anthropic
For other providers, check the LangChain chat models documentation and add your API keys for the provider you want to use to your .env file.
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
Important: Only models with tool calling capabilities can be used with mcp_use. Make sure your chosen model supports function calling or tool use.
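If you are unsure whether a model qualifies, a quick sanity check is to try binding a trivial tool to it: LangChain chat models that lack tool support typically raise NotImplementedError from bind_tools. A minimal sketch (the echo tool is purely illustrative):

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

# Raises NotImplementedError for chat models without tool-calling support.
llm_with_tools = ChatOpenAI(model="gpt-4o").bind_tools([echo])

The quickstart below assumes such a tool-calling model: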
import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Load environment variables
    load_dotenv()

    # Create configuration dictionary
    config = {
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"],
                "env": {
                    "DISPLAY": ":1"
                }
            }
        }
    }

    # Create MCPClient from configuration dictionary
    client = MCPClient.from_dict(config)

    # Create LLM
    llm = ChatOpenAI(model="gpt-4o")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query
    result = await agent.run(
        "Find the best restaurant in San Francisco",
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())
You can also add the servers configuration from a config file like this:
client = MCPClient.from_config_file("browser_mcp.json")
Example configuration file (browser_mcp.json):
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}
For other settings, models, and more, check out the documentation.
MCP-Use supports asynchronous streaming of agent output using the astream method on MCPAgent. This allows you to receive incremental results, tool actions, and intermediate steps as they are generated by the agent, enabling real-time feedback and progress reporting.
Call agent.astream(query) and iterate over the results asynchronously:
async for chunk in agent.astream("Find the best restaurant in San Francisco"):
    print(chunk["messages"], end="", flush=True)
Each chunk is a dictionary containing keys such as actions, steps, messages, and (on the last chunk) output. This enables you to build responsive UIs or log agent progress in real time.
import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    load_dotenv()
    client = MCPClient.from_config_file("browser_mcp.json")
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    async for chunk in agent.astream("Look for a machine learning engineer job at NVIDIA."):
        print(chunk["messages"], end="", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
This streaming interface is ideal for applications that require real-time updates, such as chatbots, dashboards, or interactive notebooks.
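If you need more than the raw message stream, you can branch on which keys are present in each chunk. A minimal sketch, assuming the agent object from the example above and the chunk layout described earlier in this section:

async for chunk in agent.astream("Find the best restaurant in San Francisco"):
    if "actions" in chunk:  # tool calls the agent decided to make
        for action in chunk["actions"]:
            print(f"[action] {action}")
    if "steps" in chunk:  # completed intermediate steps (action + observation)
        for step in chunk["steps"]:
            print(f"[step] {step}")
    if "messages" in chunk:  # incremental message output
        for message in chunk["messages"]:
            print(message, end="", flush=True)
    if "output" in chunk:  # final answer, present only on the last chunk
        print(f"\n[final] {chunk['output']}")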
import asyncio
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Load environment variables
    load_dotenv()

    # Create MCPClient from config file
    client = MCPClient.from_config_file(
        os.path.join(os.path.dirname(__file__), "browser_mcp.json")
    )

    # Create LLM
    llm = ChatOpenAI(model="gpt-4o")
    # Alternative models:
    # llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
    # llm = ChatGroq(model="llama3-8b-8192")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query
    result = await agent.run(
        "Find the best restaurant in San Francisco USING GOOGLE SEARCH",
        max_steps=30,
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())
import asyncio
import os
from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from mcp_use import MCPAgent, MCPClient

async def run_airbnb_example():
    # Load environment variables
    load_dotenv()

    # Create MCPClient with Airbnb configuration
    client = MCPClient.from_config_file(
        os.path.join(os.path.dirname(__file__), "airbnb_mcp.json")
    )

    # Create LLM - you can choose between different models
    llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    try:
        # Run a query to search for accommodations
        result = await agent.run(
            "Find me a nice place to stay in Barcelona for 2 adults "
            "for a week in August. I prefer places with a pool and "
            "good reviews. Show me the top 3 options.",
            max_steps=30,
        )
        print(f"\nResult: {result}")
    finally:
        # Ensure we clean up resources properly
        if client.sessions:
            await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(run_airbnb_example())
Example configuration file (airbnb_mcp.json):
{
  "mcpServers": {
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb"]
    }
  }
}
import asyncio
from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
from mcp_use import MCPAgent, MCPClient

async def run_blender_example():
    # Load environment variables
    load_dotenv()

    # Create MCPClient with Blender MCP configuration
    config = {"mcpServers": {"blender": {"command": "uvx", "args": ["blender-mcp"]}}}
    client = MCPClient.from_dict(config)

    # Create LLM
    llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    try:
        # Run the query
        result = await agent.run(
            "Create an inflatable cube with soft material and a plane as ground.",
            max_steps=30,
        )
        print(f"\nResult: {result}")
    finally:
        # Ensure we clean up resources properly
        if client.sessions:
            await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(run_blender_example())
MCP-Use supports HTTP connections, allowing you to connect to MCP servers running on specific HTTP ports. This feature is particularly useful for integrating with web-based MCP servers.
Here's an example of how to use the HTTP connection feature:
import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    """Run the example using a configuration dictionary."""
    # Load environment variables
    load_dotenv()

    config = {
        "mcpServers": {
            "http": {
                "url": "http://localhost:8931/sse"
            }
        }
    }

    # Create MCPClient from configuration dictionary
    client = MCPClient.from_dict(config)

    # Create LLM
    llm = ChatOpenAI(model="gpt-4o")

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query
    result = await agent.run(
        "Find the best restaurant in San Francisco USING GOOGLE SEARCH",
        max_steps=30,
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())
This example demonstrates how to connect to an MCP server running on a specific HTTP port. Make sure to start your MCP server before running this example.
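For example, the Playwright MCP server used elsewhere in this guide can be started in HTTP/SSE mode via its --port flag (a sketch; consult the server's own documentation for the exact options it supports):

npx @playwright/mcp@latest --port 8931

This matches the http://localhost:8931/sse URL in the configuration above.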
MCP-Use allows configuring and connecting to multiple MCP servers simultaneously using the MCPClient. This enables complex workflows that require tools from different servers, such as web browsing combined with file operations or 3D modeling.
You can configure multiple servers in your configuration file:
{
  "mcpServers": {
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}
The MCPClient class provides methods for managing connections to multiple servers. When creating an MCPAgent, you can provide an MCPClient configured with multiple servers.

By default, the agent will have access to tools from all configured servers. If you need to target a specific server for a particular task, you can specify the server_name when calling the agent.run() method.
# Example: Manually selecting a server for a specific task
result = await agent.run(
    "Search for Airbnb listings in Barcelona",
    server_name="airbnb"  # Explicitly use the airbnb server
)

result_google = await agent.run(
    "Find restaurants near the first result using Google Search",
    server_name="playwright"  # Explicitly use the playwright server
)
For enhanced efficiency, and to reduce potential agent confusion when dealing with many tools from different servers, you can enable the Server Manager by setting use_server_manager=True during MCPAgent initialization.
When enabled, the agent intelligently selects the correct MCP server based on the tool chosen by the LLM for a specific step. This minimizes unnecessary connections and ensures the agent uses the appropriate tools for the task.
import asyncio
from mcp_use import MCPClient, MCPAgent
from langchain_anthropic import ChatAnthropic

async def main():
    # Create client with multiple servers
    client = MCPClient.from_config_file("multi_server_config.json")

    # Create agent with the client
    agent = MCPAgent(
        llm=ChatAnthropic(model="claude-3-5-sonnet-20240620"),
        client=client,
        use_server_manager=True  # Enable the Server Manager
    )

    try:
        # Run a query that uses tools from multiple servers
        result = await agent.run(
            "Search for a nice place to stay in Barcelona on Airbnb, "
            "then use Google to find nearby restaurants and attractions."
        )
        print(result)
    finally:
        # Clean up all sessions
        await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(main())
MCP-Use allows you to restrict which tools are available to the agent, providing better security and control over agent capabilities:
import asyncio
from mcp_use import MCPAgent, MCPClient
from langchain_openai import ChatOpenAI

async def main():
    # Create client
    client = MCPClient.from_config_file("config.json")

    # Create agent with restricted tools
    agent = MCPAgent(
        llm=ChatOpenAI(model="gpt-4"),
        client=client,
        disallowed_tools=["file_system", "network"]  # Restrict potentially dangerous tools
    )

    # Run a query with restricted tool access
    result = await agent.run(
        "Find the best restaurant in San Francisco"
    )
    print(result)

    # Clean up
    await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(main())
MCP-Use supports running MCP servers in a sandboxed environment using E2B's cloud infrastructure. This allows you to run MCP servers without having to install dependencies locally, making it easier to use tools that might have complex setups or system requirements.
To use sandboxed execution, you need to install the E2B dependency:
# Install mcp-use with E2B support
pip install "mcp-use[e2b]"
# Or install the dependency directly
pip install e2b-code-interpreter
You'll also need an E2B API key. You can sign up at e2b.dev to get your API key.
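For example, add it to your .env file (placeholder value shown):

E2B_API_KEY=your-e2b-api-key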
To enable sandboxed execution, use the sandbox parameter when creating your MCPClient:
import asyncio
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient
from mcp_use.types.sandbox import SandboxOptions

async def main():
    # Load environment variables (needs E2B_API_KEY)
    load_dotenv()

    # Define MCP server configuration
    server_config = {
        "mcpServers": {
            "everything": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-everything"],
            }
        }
    }

    # Define sandbox options
    sandbox_options: SandboxOptions = {
        "api_key": os.getenv("E2B_API_KEY"),  # API key can also be provided directly
        "sandbox_template_id": "base",  # Use base template
    }

    # Create client with sandboxed mode enabled
    client = MCPClient(
        config=server_config,
        sandbox=True,
        sandbox_options=sandbox_options,
    )

    # Create agent with the sandboxed client
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client)

    # Run your agent
    result = await agent.run("Use the command line tools to help me add 1+1")
    print(result)

    # Clean up
    await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(main())
The SandboxOptions type provides configuration for the sandbox environment:
Option | Description | Default
---|---|---
api_key | E2B API key. Required; can be provided directly or via the E2B_API_KEY environment variable | None
sandbox_template_id | Template ID for the sandbox environment | "base"
supergateway_command | Command to run supergateway | "npx -y supergateway"
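Putting the table together, a minimal sketch of a fully specified options dict; the supergateway_command value simply restates the documented default:

import os
from mcp_use.types.sandbox import SandboxOptions

sandbox_options: SandboxOptions = {
    "api_key": os.getenv("E2B_API_KEY"),  # or pass the key string directly
    "sandbox_template_id": "base",  # default template
    "supergateway_command": "npx -y supergateway",  # documented default
}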
You can call MCP server tools directly without an LLM when you need programmatic control:
import asyncio
from mcp_use import MCPClient

async def call_tool_example():
    config = {
        "mcpServers": {
            "everything": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-everything"],
            }
        }
    }
    client = MCPClient.from_dict(config)

    try:
        await client.create_all_sessions()
        session = client.get_session("everything")

        # Call tool directly
        result = await session.call_tool(
            name="add",
            arguments={"a": 1, "b": 2}
        )
        print(f"Result: {result.content[0].text}")  # Output: 3
    finally:
        await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(call_tool_example())
See the complete example: examples/direct_tool_call.py
You can also build your own custom agent using the LangChain adapter:
import asyncio
from langchain_openai import ChatOpenAI
from mcp_use.client import MCPClient
from mcp_use.adapters.langchain_adapter import LangChainAdapter
from dotenv import load_dotenv

load_dotenv()

async def main():
    # Initialize MCP client
    client = MCPClient.from_config_file("examples/browser_mcp.json")
    llm = ChatOpenAI(model="gpt-4o")

    # Create adapter instance
    adapter = LangChainAdapter()
    # Get LangChain tools with a single line
    tools = await adapter.create_tools(client)

    # Create a custom LangChain agent
    llm_with_tools = llm.bind_tools(tools)
    result = await llm_with_tools.ainvoke("What tools do you have available?")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
MCP-Use provides a built-in debug mode that increases log verbosity and helps diagnose issues in your agent implementation.
There are two primary ways to enable debug mode:
Run your script with the DEBUG environment variable set to the desired level:
# Level 1: Show INFO level messages
DEBUG=1 python3.11 examples/browser_use.py
# Level 2: Show DEBUG level messages (full verbose output)
DEBUG=2 python3.11 examples/browser_use.py
This sets the debug level only for the duration of that specific Python process.
Alternatively, you can set the following environment variable to the desired logging level:
export MCP_USE_DEBUG=1 # or 2
You can set the global debug flag directly in your code:
import mcp_use
mcp_use.set_debug(1) # INFO level
# or
mcp_use.set_debug(2) # DEBUG level (full verbose output)
If you only want to see debug information from the agent without enabling full debug logging, you can set the verbose parameter when creating an MCPAgent:
# Create agent with increased verbosity
agent = MCPAgent(
    llm=your_llm,
    client=your_client,
    verbose=True  # Only shows debug messages from the agent
)
This is useful when you only need to see the agent's steps and decision-making process without all the low-level debug information from other components.
We love contributions! Feel free to open issues for bugs or feature requests. Look at CONTRIBUTING.md for guidelines.
Thanks to all our amazing contributors!
License: MIT
If you use MCP-Use in your research or project, please cite:
@software{mcp_use2025,
  author = {Zullo, Pietro},
  title = {MCP-Use: MCP Library for Python},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/pietrozullo/mcp-use}
}
{ "mcpServers": { "playwright": { "command": "npx", "args": [ "@playwright/mcp@latest" ], "env": { "DISPLAY": ":1" } } } }
Discover more MCP servers with similar functionality and use cases
by danny-avila
Provides a customizable ChatGPT‑like web UI that integrates dozens of AI models, agents, code execution, image generation, web search, speech capabilities, and secure multi‑user authentication, all open‑source and ready for self‑hosting.
by ahujasid
BlenderMCP integrates Blender with Claude AI via the Model Context Protocol (MCP), enabling AI-driven 3D scene creation, modeling, and manipulation. This project allows users to control Blender directly through natural language prompts, streamlining the 3D design workflow.
by pydantic
Enables building production‑grade generative AI applications using Pydantic validation, offering a FastAPI‑like developer experience.
by GLips
Figma-Context-MCP is a Model Context Protocol (MCP) server that provides Figma layout information to AI coding agents. It bridges design and development by enabling AI tools to directly access and interpret Figma design data for more accurate and efficient code generation.
by sonnylazuardi
This project implements a Model Context Protocol (MCP) integration between Cursor AI and Figma, allowing Cursor to communicate with Figma for reading designs and modifying them programmatically.
by lharries
WhatsApp MCP Server is a Model Context Protocol (MCP) server for WhatsApp that allows users to search, read, and send WhatsApp messages (including media) through AI models like Claude. It connects directly to your personal WhatsApp account via the WhatsApp web multi-device API and stores messages locally in a SQLite database.
by idosal
GitMCP is a free, open-source remote Model Context Protocol (MCP) server that transforms any GitHub project into a documentation hub, enabling AI tools to access up-to-date documentation and code directly from the source to eliminate "code hallucinations."
by Klavis-AI
Klavis AI provides open-source Model Context Protocol (MCP) integrations and a hosted API for AI applications. It simplifies connecting AI to various third-party services by managing secure MCP servers and authentication.
by zcaceres
Markdownify is a Model Context Protocol (MCP) server that converts various file types and web content to Markdown format, providing tools to transform PDFs, images, audio files, web pages, and more into easily readable and shareable Markdown text.