mcp-code-execution-mode
An MCP server that executes Python code in isolated rootless containers, with optional proxying to other MCP servers. An implementation of Anthropic's and Cloudflare's ideas for reducing the context bloat caused by MCP tool definitions.
```bash
claude mcp add --transport stdio elusznik-mcp-server-code-execution-mode python -m mcp_code_execution_server \
  --env MCP_PORT="Port to run the MCP server (default depends on implementation)" \
  --env MCP_ROOT="Path to the root of MCP tool repository or workspace" \
  --env MCP_LOG_LEVEL="Log verbosity (e.g., INFO, DEBUG)"
```
How to use
This MCP server implements the Code Execution with MCP pattern. It exposes a single primary capability to LLM agents: writing Python code that discovers, inspects, and orchestrates a wide variety of MCP tools without preloading their entire schemas into the prompt. The agent uses the provided runtime APIs to discover available servers, fetch tool descriptions on demand, and proxy execution into a rootless, sandboxed Python process.

The result is a discovery-first workflow: the agent can locate tools across many servers, load only the schemas it needs, and execute code under strict isolation and timeouts. Tools like search_tool_docs() and discovered_servers() enable fuzzy search across servers and selective schema hydration, while run_python lets the agent compose complex logic in a single Python script that the bridge then executes via the proxied stdio channel.
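The discovery-first workflow can be sketched as follows. The names `discovered_servers`, `search_tool_docs`, and `run_python` come from the description above, but their exact signatures and return shapes are assumptions here, so stub implementations stand in for the real runtime-injected APIs:

```python
# Hypothetical sketch of the discovery-first workflow. In the real server
# these APIs are provided by the runtime; the stubs below only illustrate
# the call pattern, not the actual implementation.

def discovered_servers():
    # Would return the MCP servers the bridge has discovered (stub data).
    return ["filesystem", "github"]

def search_tool_docs(query, limit=5):
    # Would fuzzy-search tool docs across servers and hydrate only the
    # matching schemas, instead of preloading everything (stub data).
    return [{"server": "filesystem", "tool": "read_file",
             "schema": {"path": "string"}}][:limit]

def run_python(code):
    # Would proxy the script into the sandboxed, rootless Python process
    # over the stdio channel (stubbed here).
    return {"exit_code": 0, "stdout": "done"}

# Agent-side orchestration: discover servers, hydrate one tool schema on
# demand, then compose and execute a single script.
servers = discovered_servers()
docs = search_tool_docs("read file", limit=3)
result = run_python("print('done')")
print(servers, docs[0]["tool"], result["exit_code"])
```

Only the one matching schema is pulled into context; everything else stays behind the search API until it is needed.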
How to install
Prerequisites:
- Python 3.8+ (3.9+ recommended), with venv support
- Internet access to install dependencies

1) Create a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate   # Linux/macOS
.venv\Scripts\activate      # Windows
```

2) Upgrade pip and install the MCP package (example name used; replace with the actual package if different):

```bash
pip install --upgrade pip
pip install mcp-code-execution-server  # replace with actual package name
```

3) Verify the installation:

```bash
python -m mcp_code_execution_server --version
```

4) Run the MCP server:

```bash
python -m mcp_code_execution_server
```

Notes:
- If you install from a source distribution or a Git checkout, ensure dependencies are installed (requirements.txt or pyproject.toml).
- Ensure the required ports are available and not blocked by firewalls.
Additional notes
Tips:
- Use environment variables to tune security, timeouts, and logging (MCP_PORT, MCP_LOG_LEVEL, etc.).
- If you encounter token bloat, rely on the runtime discovery APIs (discovered_servers, search_tool_docs) to hydrate only the needed schemas.
- Confirm that the Python sandbox is properly isolated in your runtime environment to enforce rootless execution and read-only constraints where applicable.
- For troubleshooting, raise MCP_LOG_LEVEL to DEBUG to inspect the discovery and proxying handshake between the LLM and the stdio-connected subprocess.
- When integrating with orchestration pipelines, consider pre-configuring common tool groups and caching search results to reduce latency on repeated queries.
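As an illustration of the first tip, a typical local run might set the environment variables before launching the server. The values below are illustrative only; defaults depend on your deployment, and only MCP_PORT and MCP_LOG_LEVEL are named in the install command above:

```shell
# Illustrative configuration; adjust values for your environment.
export MCP_PORT=8080          # port the MCP server listens on (example value)
export MCP_LOG_LEVEL=DEBUG    # verbose logging for troubleshooting
python -m mcp_code_execution_server
```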
Related MCP Servers
diy-tools
An MCP server that allows users to dynamically add custom tools/functions at runtime
claude-code
MCP Server connects with claude code local command.
docmole
Dig through any documentation with AI - MCP server for Claude, Cursor, and other AI assistants
chromadb-remote
Remote MCP server for ChromaDB
yougile
MCP server for integration with YouGile
aiquila
Connect Claude AI to your Nextcloud via the Model Context Protocol. Browse, search, and manage files through natural conversation.