by bazinga012
Execute Python code within specified Conda or virtual environments, supporting incremental generation and dynamic dependency management for LLMs.
MCP Code Executor enables large language models to run Python code in a chosen environment (Conda, standard virtualenv, or UV-based virtualenv). It creates and stores code files, installs required packages, and executes the code while keeping the environment isolated.
For Docker setups, replace the node command with docker and provide the appropriate args, as shown in the README below. LLMs invoke the provided tools (execute_code, install_dependencies, initialize_code_file, etc.) from a prompt, and the server handles file creation, dependency installation, and execution; configure_environment switches environments at runtime.

Q: Which environments are supported?
A: Conda, standard Python virtualenv, and UV-based virtualenv. Choose by setting ENV_TYPE and the corresponding path variables.

Q: Do I need internet access for dependency installation?
A: Yes, the server uses pip (or conda) to fetch packages from public repositories.

Q: How does the server handle large code blocks?
A: Use initialize_code_file to create a base file, then append_to_code_file for additional sections, and finally execute_code_file to run the complete script.

Q: Can I change the environment after the server has started?
A: Yes, call configure_environment with the new settings; the server will reload the configuration.

Q: Where are the generated scripts stored?
A: In the directory defined by the CODE_STORAGE_DIR environment variable.
The MCP Code Executor is an MCP server that allows LLMs to execute Python code within a specified Python environment. This enables LLMs to run code with access to libraries and dependencies defined in the environment. It also supports incremental code generation for handling large code blocks that may exceed token limits.
git clone https://github.com/bazinga012/mcp_code_executor.git
cd mcp_code_executor
npm install
npm run build
To configure the MCP Code Executor server, add the following to your MCP servers configuration file:
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
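If you use a standard virtualenv or a UV virtualenv instead of Conda, only the env block changes. A minimal sketch, with placeholder paths you should adjust to your setup (see the environment variable reference below):

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "venv",
        "VENV_PATH": "/path/to/venv"
      }
    }
  }
}

For UV, set "ENV_TYPE": "venv-uv" and supply "UV_VENV_PATH" instead of "VENV_PATH".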
For Docker, use the following configuration instead:

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
Note: The Dockerfile has been tested with the venv-uv environment type only. Other environment types may require additional configuration.
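The container cannot read your host settings, so the environment configuration has to be passed in explicitly. A sketch using standard docker run flags to set environment variables and mount the storage directory; the in-container paths (/app/storage, /app/.venv) are assumptions, not documented values:

{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e", "CODE_STORAGE_DIR=/app/storage",
        "-e", "ENV_TYPE=venv-uv",
        "-e", "UV_VENV_PATH=/app/.venv",
        "-v", "/path/to/code/storage:/app/storage",
        "mcp-code-executor"
      ]
    }
  }
}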
The following environment variables configure the server:

CODE_STORAGE_DIR: Directory where the generated code will be stored

For Conda:
ENV_TYPE: Set to conda
CONDA_ENV_NAME: Name of the Conda environment to use

For Standard Virtualenv:
ENV_TYPE: Set to venv
VENV_PATH: Path to the virtualenv directory

For UV Virtualenv:
ENV_TYPE: Set to venv-uv
UV_VENV_PATH: Path to the UV virtualenv directory

The MCP Code Executor provides the following tools to LLMs:
execute_code
Executes Python code in the configured environment. Best for short code snippets.
{
  "name": "execute_code",
  "arguments": {
    "code": "import numpy as np\nprint(np.random.rand(3,3))",
    "filename": "matrix_gen"
  }
}
install_dependencies
Installs Python packages in the environment.
{
  "name": "install_dependencies",
  "arguments": {
    "packages": ["numpy", "pandas", "matplotlib"]
  }
}
check_installed_packages
Checks if packages are already installed in the environment.
{
  "name": "check_installed_packages",
  "arguments": {
    "packages": ["numpy", "pandas", "non_existent_package"]
  }
}
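A common pattern is to check first and install only what is missing, rather than reinstalling everything. A sketch of the two-step sequence (the package names are illustrative):

{
  "name": "check_installed_packages",
  "arguments": {
    "packages": ["numpy", "scipy"]
  }
}

If the response reports scipy as missing, follow up with:

{
  "name": "install_dependencies",
  "arguments": {
    "packages": ["scipy"]
  }
}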
configure_environment
Dynamically changes the environment configuration.
{
  "name": "configure_environment",
  "arguments": {
    "type": "conda",
    "conda_name": "new_env_name"
  }
}
get_environment_config
Gets the current environment configuration.
{
  "name": "get_environment_config",
  "arguments": {}
}
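To confirm that a runtime switch took effect, you can pair the two calls: reconfigure, then read the configuration back. A sketch using the documented Conda arguments (the environment name is illustrative):

{
  "name": "configure_environment",
  "arguments": {
    "type": "conda",
    "conda_name": "data-science-env"
  }
}

{
  "name": "get_environment_config",
  "arguments": {}
}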
initialize_code_file
Creates a new Python file with initial content. Use this as the first step for longer code that may exceed token limits.
{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    print('Hello, world!')\n\nif __name__ == '__main__':\n    main()",
    "filename": "my_script"
  }
}
append_to_code_file
Appends content to an existing Python code file. Use this to add more code to a file created with initialize_code_file.
{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py",
    "content": "\ndef another_function():\n    print('This was appended to the file')\n"
  }
}
execute_code_file
Executes an existing Python file. Use this as the final step after building up code with initialize_code_file and append_to_code_file.
{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
read_code_file
Reads the content of an existing Python code file. Use this to verify the current state of a file before appending more content or executing it.
{
  "name": "read_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/my_script_abc123.py"
  }
}
Once configured, the MCP Code Executor will allow LLMs to execute Python code by generating a file in the specified CODE_STORAGE_DIR
and running it within the configured environment.
LLMs can generate and execute code by referencing this MCP server in their prompts.
For larger code blocks that might exceed LLM token limits, use the incremental code generation approach:
1. initialize_code_file to create a base file with the first section of code
2. append_to_code_file to add further sections in chunks
3. read_code_file to verify the file's current state
4. execute_code_file to run the complete script
This approach allows LLMs to write complex, multi-part code without running into token limitations.
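For illustration, a full incremental session might look like the following sequence of calls. The _abc123 suffix mirrors the examples above; in practice, use whatever file path the server reports after the file is created:

{
  "name": "initialize_code_file",
  "arguments": {
    "content": "def main():\n    print('part one')\n",
    "filename": "pipeline"
  }
}

{
  "name": "append_to_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/pipeline_abc123.py",
    "content": "\nif __name__ == '__main__':\n    main()\n"
  }
}

{
  "name": "read_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/pipeline_abc123.py"
  }
}

{
  "name": "execute_code_file",
  "arguments": {
    "file_path": "/path/to/code/storage/pipeline_abc123.py"
  }
}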
This package maintains backward compatibility with earlier versions. Users of previous versions who only specified a Conda environment will continue to work without any changes to their configuration.
Contributions are welcome! Please open an issue or submit a pull request.
This project is licensed under the MIT License.
Discover more MCP servers with similar functionality and use cases
by danny-avila
Provides a customizable ChatGPT‑like web UI that integrates dozens of AI models, agents, code execution, image generation, web search, speech capabilities, and secure multi‑user authentication, all open‑source and ready for self‑hosting.
by ahujasid
BlenderMCP integrates Blender with Claude AI via the Model Context Protocol (MCP), enabling AI-driven 3D scene creation, modeling, and manipulation. This project allows users to control Blender directly through natural language prompts, streamlining the 3D design workflow.
by pydantic
Enables building production‑grade generative AI applications using Pydantic validation, offering a FastAPI‑like developer experience.
by GLips
Figma-Context-MCP is a Model Context Protocol (MCP) server that provides Figma layout information to AI coding agents. It bridges design and development by enabling AI tools to directly access and interpret Figma design data for more accurate and efficient code generation.
by mcp-use
Easily create and interact with MCP servers using custom agents, supporting any LLM with tool calling and offering multi‑server, sandboxed, and streaming capabilities.
by sonnylazuardi
This project implements a Model Context Protocol (MCP) integration between Cursor AI and Figma, allowing Cursor to communicate with Figma for reading designs and modifying them programmatically.
by lharries
WhatsApp MCP Server is a Model Context Protocol (MCP) server for WhatsApp that allows users to search, read, and send WhatsApp messages (including media) through AI models like Claude. It connects directly to your personal WhatsApp account via the WhatsApp web multi-device API and stores messages locally in a SQLite database.
by idosal
GitMCP is a free, open-source remote Model Context Protocol (MCP) server that transforms any GitHub project into a documentation hub, enabling AI tools to access up-to-date documentation and code directly from the source to eliminate "code hallucinations."
by Klavis-AI
Klavis AI provides open-source Multi-platform Control Protocol (MCP) integrations and a hosted API for AI applications. It simplifies connecting AI to various third-party services by managing secure MCP servers and authentication.