by co-browser

browser-use-mcp-server is an MCP (Model Context Protocol) server that enables AI agents to control web browsers. It leverages Playwright for browser automation and provides real-time VNC streaming of browser activity. This project is part of the browser-use ecosystem, facilitating AI interaction with web interfaces.
To use browser-use-mcp-server, you need to install prerequisites such as uv (a fast Python package manager) and Playwright. The server can be run in two modes: SSE (Server-Sent Events) or stdio (standard input/output).
Installation:

- uv: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- mcp-proxy: `uv tool install mcp-proxy`
- Dependencies: `uv sync`, `uv pip install playwright`, `uv run playwright install --with-deps --no-shell chromium`

Running the Server:

- SSE mode: `uv run server --port 8000`
- stdio mode: build and install the package globally (`uv build`, then `uv tool install dist/browser_use_mcp_server-*.whl`), then run `browser-use-mcp-server run server --port 8000 --stdio --proxy-port 9000`
Client Configuration:

Clients (such as Cursor, Windsurf, and Claude) need to be configured to connect to the MCP server. This involves setting the `mcpServers` configuration with the appropriate URL for SSE mode, or the command and arguments for stdio mode. Configuration file locations vary by client.
Docker:

For a consistent environment, you can build and run the Docker image:

```shell
docker build -t browser-use-mcp-server .
docker run --rm -p 8000:8000 -p 5900:5900 browser-use-mcp-server
```
Requirements: uv (Python package manager), Playwright, and mcp-proxy (for stdio mode). Add the `mcpServers` configuration in your client's settings, specifying the server URL for SSE mode or the command and arguments for stdio mode.

An MCP server that enables AI agents to control web browsers using browser-use.
🔗 Managing multiple MCP servers? Simplify your development workflow with agent-browser
```shell
# Install prerequisites
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install mcp-proxy
uv tool update-shell
```
Create a `.env` file:

```
OPENAI_API_KEY=your-api-key
CHROME_PATH=optional/path/to/chrome
PATIENT=false  # Set to true if API calls should wait for task completion
```
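The `.env` step can also be scripted. A minimal sketch using a heredoc, with the variable names taken from the example above (`CHROME_PATH` is optional and omitted here); it assumes the server reads these values from the process environment:

```shell
# Write the .env file with the variables the server expects
cat > .env <<'EOF'
OPENAI_API_KEY=your-api-key
PATIENT=false
EOF

# Export every variable defined in .env into the current shell session
set -a
. ./.env
set +a

echo "PATIENT is set to: $PATIENT"
```

`set -a` marks every variable assigned while sourcing `.env` for export, so child processes (like the server) inherit them.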
```shell
# Install dependencies
uv sync
uv pip install playwright
uv run playwright install --with-deps --no-shell chromium
```
```shell
# Run directly from source
uv run server --port 8000
```
```shell
# 1. Build and install globally
uv build
uv tool uninstall browser-use-mcp-server 2>/dev/null || true
uv tool install dist/browser_use_mcp_server-*.whl

# 2. Run with stdio transport
browser-use-mcp-server run server --port 8000 --stdio --proxy-port 9000
```
For SSE mode:

```json
{
  "mcpServers": {
    "browser-use-mcp-server": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```
For stdio mode:

```json
{
  "mcpServers": {
    "browser-server": {
      "command": "browser-use-mcp-server",
      "args": [
        "run",
        "server",
        "--port",
        "8000",
        "--stdio",
        "--proxy-port",
        "9000"
      ],
      "env": {
        "OPENAI_API_KEY": "your-api-key"
      }
    }
  }
}
```
| Client | Configuration Path |
|---|---|
| Cursor | `./.cursor/mcp.json` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` |
| Claude (Mac) | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| Claude (Windows) | `%APPDATA%\Claude\claude_desktop_config.json` |
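As a sketch, the SSE configuration from above can be written to Cursor's project-level path (`./.cursor/mcp.json`, per the table) from the shell; the JSON content is identical to the SSE example:

```shell
# Create the project-level Cursor config directory
mkdir -p .cursor

# Write the SSE configuration shown above
cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "browser-use-mcp-server": {
      "url": "http://localhost:8000/sse"
    }
  }
}
EOF

# Quick sanity check that the key made it into the file
grep -q '"url"' .cursor/mcp.json && echo "config written"
```

The other clients follow the same pattern with their own paths from the table.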
To develop and test the package locally:
Build a distributable wheel:

```shell
# From the project root directory
uv build
```
Install it as a global tool:

```shell
uv tool uninstall browser-use-mcp-server 2>/dev/null || true
uv tool install dist/browser_use_mcp_server-*.whl
```
Run from any directory:

```shell
# Set your OpenAI API key for the current session
export OPENAI_API_KEY=your-api-key-here

# Or provide it inline for a one-time run
OPENAI_API_KEY=your-api-key-here browser-use-mcp-server run server --port 8000 --stdio --proxy-port 9000
```
After making changes, rebuild and reinstall:

```shell
uv build
uv tool uninstall browser-use-mcp-server
uv tool install dist/browser_use_mcp_server-*.whl
```
Using Docker provides a consistent and isolated environment for running the server.
```shell
# Build the Docker image
docker build -t browser-use-mcp-server .

# Run the container with the default VNC password ("browser-use")
# --rm ensures the container is automatically removed when it stops
# -p 8000:8000 maps the server port
# -p 5900:5900 maps the VNC port
docker run --rm -p 8000:8000 -p 5900:5900 browser-use-mcp-server

# Run with a custom VNC password read from a file
# Create a file (e.g., vnc_password.txt) containing only your desired password
echo "your-secure-password" > vnc_password.txt

# Mount the password file as a secret inside the container
docker run --rm -p 8000:8000 -p 5900:5900 \
  -v $(pwd)/vnc_password.txt:/run/secrets/vnc_password:ro \
  browser-use-mcp-server
```
Note: The `:ro` flag in the volume mount (`-v`) makes the password file read-only inside the container for added security.
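Rather than hard-coding a password as in the `echo` example above, you can generate a random one before mounting it. A sketch (note that classic VNC authentication typically only uses the first 8 characters of the password):

```shell
# Generate a random 12-character alphanumeric password file
head -c 32 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 12 > vnc_password.txt

# Restrict the file to the current user before mounting it into the container
chmod 600 vnc_password.txt
```

Mount the resulting `vnc_password.txt` exactly as shown in the `docker run` example above.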
```shell
# Browser-based viewer
git clone https://github.com/novnc/noVNC
cd noVNC
./utils/novnc_proxy --vnc localhost:5900
```
Default password: `browser-use` (unless overridden using the custom password method).
Try asking your AI: `open https://news.ycombinator.com and return the top ranked article`
For issues or inquiries: cobrowser.xyz