by baryhuang
An MCP server that enables Claude to discover and call any API endpoint through semantic search, designed for integrating private APIs with Claude Desktop.
mcp-server-any-openapi is a Model Context Protocol (MCP) server that allows Claude to semantically search and interact with any API endpoint defined in an OpenAPI specification. It addresses the challenge of integrating large API documentation with Claude Desktop by intelligently chunking OpenAPI specifications and providing built-in request execution capabilities. This makes it ideal for connecting private or extensive APIs to Claude Desktop.
To use mcp-server-any-openapi, you can install it via pip, use the pre-built Docker image, or run it from source. The server is configured primarily through environment variables such as OPENAPI_JSON_DOCS_URL (the OpenAPI spec URL), MCP_API_PREFIX (for customizing the tool namespace), and GLOBAL_TOOL_PROMPT (for guiding Claude's tool selection). Once running, you integrate it with Claude Desktop by configuring the MCP server in your Claude Desktop settings, providing the command and arguments for the Docker container or local execution. Claude can then use the {prefix}_api_request_schema tool to discover endpoints and the {prefix}_make_request tool to execute API calls.
Q: Why was this project created? A: The project was created to address the challenge of Claude's inability to process large OpenAPI documentation files, which often resulted in errors. The goal was to enable Claude to effectively discover and call API endpoints from extensive specifications.
Q: How does it handle large OpenAPI files? A: It uses in-memory semantic search and endpoint-based chunking. Instead of processing the entire document, it indexes individual endpoints, allowing Claude to find relevant APIs by natural language and receive full endpoint details.
Q: What are the main limitations? A: Current limitations include a cold start penalty (around 15 seconds for model loading if not using Docker), and past issues with Docker image size and reliance on Hugging Face for model downloads (though the latest images embed pre-downloaded models).
Q: Can I customize the tool names?
A: Yes, you can customize the tool namespace by setting the MCP_API_PREFIX environment variable when running the Docker container or the server from source.
Q: How does it ensure Claude selects the correct tool?
A: The GLOBAL_TOOL_PROMPT environment variable is crucial. It allows you to prepend optional text to all tool descriptions, helping Claude accurately select or deselect the appropriate tool based on the context of the conversation.
Customize through environment variables. GLOBAL_TOOL_PROMPT is IMPORTANT!
OPENAPI_JSON_DOCS_URL: URL to the OpenAPI specification JSON (defaults to https://api.staging.readymojo.com/openapi.json)
MCP_API_PREFIX: Customizable tool namespace (default "any_openapi"):
# Creates tools: custom_api_request_schema and custom_make_request
docker run -e MCP_API_PREFIX=custom ...
GLOBAL_TOOL_PROMPT: Optional text to prepend to all tool descriptions. This is crucial for helping Claude accurately decide when to select (and when not to select) your tool.
# Adds "Access to insights apis for ACME Financial Services abc.com . " to the beginning of all tool descriptions
docker run -e GLOBAL_TOOL_PROMPT="Access to insights apis for ACME Financial Services abc.com ." ...
Why I created this: I wanted to serve my private API, whose Swagger/OpenAPI docs are a few hundred KB in size.
Eventually I landed on this solution:
Boom, Claude now knows what API to call, with the full parameters!
Wait, I have to create another tool in this server to make the actual RESTful request, because the "fetch" server simply doesn't work, and I don't want to debug why.
https://github.com/user-attachments/assets/484790d2-b5a7-475d-a64d-157e839ad9b0
Technical highlights:
query -> [Embedding] -> FAISS TopK -> OpenAPI docs -> MCP Client (Claude Desktop)
MCP Client -> Construct OpenAPI Request -> Execute Request -> Return Response
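As a rough illustration of the first flow, here is a minimal Python sketch assuming sentence-transformers and faiss-cpu; the model name, data structures, and variable names are illustrative and not the server's actual internals.
# Minimal sketch: query -> [Embedding] -> FAISS TopK -> matching OpenAPI endpoint docs.
# Assumes sentence-transformers and faiss-cpu are installed; everything here is
# illustrative, not the real implementation.
import faiss
from sentence_transformers import SentenceTransformer

# One text chunk per endpoint (built from the OpenAPI spec).
endpoint_docs = [
    "GET /stocks/prices - List current prices for all stocks",
    "GET /users/{id} - Get user profile information",
    "POST /jobs - Create a new job posting",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Index the endpoint descriptions once, in memory.
embeddings = model.encode(endpoint_docs, convert_to_numpy=True).astype("float32")
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

# At query time, embed the natural-language intent and return the top-k endpoints.
query_vec = model.encode(["Get prices for all stocks"], convert_to_numpy=True).astype("float32")
distances, ids = index.search(query_vec, 2)
for doc_id in ids[0]:
    print(endpoint_docs[doc_id])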
Here is a multi-instance config example. I designed it so it can be used more flexibly with multiple sets of APIs:
{
  "mcpServers": {
    "finance_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.finance.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=finance",
        "-e",
        "GLOBAL_TOOL_PROMPT='Access to insights apis for ACME Financial Services abc.com .'",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    },
    "healthcare_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.healthcare.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=healthcare",
        "-e",
        "GLOBAL_TOOL_PROMPT='Access to insights apis for Healthcare API services efg.com .'",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
In this example:
- https://api.finance.com for finance APIs
- https://api.healthcare.com for healthcare APIs
To send the actual API requests to a different base URL than the one in the spec, set the API_REQUEST_BASE_URL environment variable:
{
  "mcpServers": {
    "finance_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.finance.com/openapi.json",
        "-e",
        "API_REQUEST_BASE_URL=https://api.finance.staging.com",
        "-e",
        "MCP_API_PREFIX=finance",
        "-e",
        "GLOBAL_TOOL_PROMPT='Access to insights apis for ACME Financial Services abc.com .'",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
Claude Desktop Project Prompt:
You should get the API spec details from the finance_api_request_schema tool.
Your task is to use the finance_make_request tool to make the requests and get the response. You should follow the API spec and add the authorization header:
Authorization: Bearer <xxxxxxxxx>
Note: The base URL will be returned in the api_request_schema response; you don't need to specify it manually.
In chat, you can do:
Get prices for all stocks
To install Scalable OpenAPI Endpoint Discovery and API Request Tool for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @baryhuang/mcp-server-any-openapi --client claude
Or install from PyPI with pip:
pip install mcp-server-any-openapi
The server provides the following tools (where {prefix} is determined by MCP_API_PREFIX):
{prefix}_api_request_schema: Get API endpoint schemas that match your intent. Returns endpoint details including path, method, parameters, and response formats.
Input Schema:
{
  "query": {
    "type": "string",
    "description": "Describe what you want to do with the API (e.g., 'Get user profile information', 'Create a new job posting')"
  }
}
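For example, assuming MCP_API_PREFIX=finance, a call to this tool might carry arguments like the following (values are illustrative):
# Illustrative arguments for finance_api_request_schema (values are examples only).
arguments = {
    "query": "Get prices for all stocks",
}
# The response lists the matching endpoints (path, method, parameters, response
# formats) together with the base URL to use for the follow-up request.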
{prefix}_make_request: Execute actual REST API requests. Essential for reliable execution with complex APIs where simplified implementations fail.
Input Schema:
{
  "method": {
    "type": "string",
    "description": "HTTP method (GET, POST, PUT, DELETE, PATCH)",
    "enum": ["GET", "POST", "PUT", "DELETE", "PATCH"]
  },
  "url": {
    "type": "string",
    "description": "Fully qualified API URL (e.g., https://api.example.com/users/123)"
  },
  "headers": {
    "type": "object",
    "description": "Request headers (optional)",
    "additionalProperties": {
      "type": "string"
    }
  },
  "query_params": {
    "type": "object",
    "description": "Query parameters (optional)",
    "additionalProperties": {
      "type": "string"
    }
  },
  "body": {
    "type": "object",
    "description": "Request body for POST, PUT, PATCH (optional)"
  }
}
Response Format:
{
  "status_code": 200,
  "headers": {
    "content-type": "application/json",
    ...
  },
  "body": {
    // Response data
  }
}
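Conceptually, executing such a request boils down to one HTTP call. Here is a minimal sketch with the Python requests library, shaped to match the input schema and response format above; it is an illustration under those assumptions, not the server's actual code.
# Minimal sketch of what a {prefix}_make_request call performs, using the
# requests library. Field names mirror the input schema above; the real
# server's implementation and error handling may differ.
import requests

def make_request(method, url, headers=None, query_params=None, body=None):
    response = requests.request(
        method=method,
        url=url,
        headers=headers,
        params=query_params,
        json=body,  # request body for POST/PUT/PATCH; None otherwise
    )
    content_type = response.headers.get("content-type", "")
    return {
        "status_code": response.status_code,
        "headers": dict(response.headers),
        "body": response.json() if "application/json" in content_type else response.text,
    }

# Example (illustrative values):
# make_request("GET", "https://api.example.com/users/123",
#              headers={"Authorization": "Bearer <xxxxxxxxx>"})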
Official images are built for two platforms: linux/amd64 and linux/arm64.
# Build and push using buildx
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
-t buryhuang/mcp-server-any-openapi:latest \
--push .
Control tool names through MCP_API_PREFIX:
# Produces tools: finance_api_request_schema and finance_make_request
docker run -e MCP_API_PREFIX=finance ...
docker pull buryhuang/mcp-server-any-openapi:latest
docker build -t mcp-server-any-openapi .
docker run \
-e OPENAPI_JSON_DOCS_URL=https://api.example.com/openapi.json \
-e MCP_API_PREFIX=finance \
buryhuang/mcp-server-any-openapi:latest
EndpointSearcher: Core class that handles OpenAPI specification parsing, endpoint indexing, and semantic search over endpoint documentation.
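As an illustration of the endpoint-based chunking idea (a hypothetical helper, not the real EndpointSearcher code), each path+method pair in the OpenAPI document can be flattened into one searchable text chunk:
# Hypothetical helper showing endpoint-based chunking: each path+method in the
# OpenAPI spec becomes one searchable text chunk. Not the actual EndpointSearcher.
import json
import urllib.request

HTTP_METHODS = {"get", "post", "put", "delete", "patch", "options", "head"}

def load_endpoint_docs(openapi_url):
    with urllib.request.urlopen(openapi_url) as resp:
        spec = json.load(resp)
    docs = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip path-level keys such as "parameters"
            summary = op.get("summary") or op.get("description") or ""
            docs.append(f"{method.upper()} {path} - {summary}")
    return docs  # these chunks are what get embedded and indexed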
Server Implementation:
python -m mcp_server_any_openapi
Configure the MCP server in your Claude Desktop settings:
{
  "mcpServers": {
    "any_openapi": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "OPENAPI_JSON_DOCS_URL=https://api.example.com/openapi.json",
        "-e",
        "MCP_API_PREFIX=finance",
        "-e",
        "GLOBAL_TOOL_PROMPT='Access to insights apis for ACME Financial Services abc.com .'",
        "buryhuang/mcp-server-any-openapi:latest"
      ]
    }
  }
}
Contributing:
1. Create your feature branch (git checkout -b feature/amazing-feature)
2. Commit your changes (git commit -m 'Add some amazing feature')
3. Push to the branch (git push origin feature/amazing-feature)
This project is licensed under the terms included in the LICENSE file.
Discover more MCP servers with similar functionality and use cases
by danny-avila
Provides a customizable ChatGPT‑like web UI that integrates dozens of AI models, agents, code execution, image generation, web search, speech capabilities, and secure multi‑user authentication, all open‑source and ready for self‑hosting.
by ahujasid
BlenderMCP integrates Blender with Claude AI via the Model Context Protocol (MCP), enabling AI-driven 3D scene creation, modeling, and manipulation. This project allows users to control Blender directly through natural language prompts, streamlining the 3D design workflow.
by pydantic
Enables building production‑grade generative AI applications using Pydantic validation, offering a FastAPI‑like developer experience.
by GLips
Figma-Context-MCP is a Model Context Protocol (MCP) server that provides Figma layout information to AI coding agents. It bridges design and development by enabling AI tools to directly access and interpret Figma design data for more accurate and efficient code generation.
by mcp-use
Easily create and interact with MCP servers using custom agents, supporting any LLM with tool calling and offering multi‑server, sandboxed, and streaming capabilities.
by sonnylazuardi
This project implements a Model Context Protocol (MCP) integration between Cursor AI and Figma, allowing Cursor to communicate with Figma for reading designs and modifying them programmatically.
by lharries
WhatsApp MCP Server is a Model Context Protocol (MCP) server for WhatsApp that allows users to search, read, and send WhatsApp messages (including media) through AI models like Claude. It connects directly to your personal WhatsApp account via the WhatsApp web multi-device API and stores messages locally in a SQLite database.
by idosal
GitMCP is a free, open-source remote Model Context Protocol (MCP) server that transforms any GitHub project into a documentation hub, enabling AI tools to access up-to-date documentation and code directly from the source to eliminate "code hallucinations."
by Klavis-AI
Klavis AI provides open-source Multi-platform Control Protocol (MCP) integrations and a hosted API for AI applications. It simplifies connecting AI to various third-party services by managing secure MCP servers and authentication.