node-code-sandbox
Secure Node.js execution sandbox for AI. Allows coding agents & LLMs to dynamically run JavaScript, install NPM packages, and retrieve results, facilitating code generation, testing, and interactive assistance. MCP-compatible.
claude mcp add --transport stdio ssdeanx-node-code-sandbox-mcp npx -y node-code-sandbox-mcp \
  --env FILES_DIR="<path-to-host-output-directory>" \
  --env SANDBOX_CPU_LIMIT="<optional, e.g. 0.75>" \
  --env SANDBOX_MEMORY_LIMIT="<optional, e.g. 512m>"
How to use
This MCP server provides a Node.js sandbox that runs JavaScript in ephemeral Docker containers with on-the-fly npm dependency installation. You can spawn isolated sandboxes, execute JS snippets (including ES modules), install dependencies per job, and capture stdout and any files saved during execution. The tool supports both ephemeral one-off runs and long-running, detached execution where the container remains alive after a script finishes. Typical workflows involve initializing a sandbox, running your JS code (with optional npm dependencies), and then tearing down or reusing the sandbox as needed. Use the provided APIs to run code, install dependencies, and (optionally) retrieve produced files from the mounted host directory.
How to install
Prerequisites:
- Docker must be installed and running on your machine.
- Node.js (with npm/npx) must be available if you plan to use the NPX-based startup.
Installation options:
Option A: NPX (recommended for quick start)
- Ensure Node.js is installed (Node.js 14+ recommended).
- Run the MCP server via NPX:
npx -y node-code-sandbox-mcp
- When prompted, configure environment variables as needed (e.g., FILES_DIR for mounted output, optional SANDBOX memory/CPU limits).
Option B: Docker (self-contained environment)
- Build or pull the image:
# Build locally if you have a Dockerfile
# docker build -t alfonsograziano/node-code-sandbox-mcp .
- Run the container, mounting an output directory and exposing environment vars:
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$HOME/Desktop/sandbox-output":/root \
-e FILES_DIR="$HOME/Desktop/sandbox-output" \
-e SANDBOX_MEMORY_LIMIT="512m" \
-e SANDBOX_CPU_LIMIT="0.5" \
alfonsograziano/node-code-sandbox-mcp stdio
- The container starts in stdio mode and listens for MCP requests over stdin/stdout.
Prerequisites summary: Docker installed and running; Node.js/npm for NPX usage; optionally pre-pull necessary Docker images for faster startup.
Additional notes
Tips and notes:
- The SANDBOX_MEMORY_LIMIT and SANDBOX_CPU_LIMIT environment variables are optional but recommended to prevent resource exhaustion when running untrusted code.
- When using the Docker route, the server typically binds the host output directory to /root (as FILES_DIR) so any files saved by the JS code can be retrieved on the host.
- For NPX usage, no local install is needed; npx fetches the package on demand.
- If you plan to reuse a sandbox for long-running tasks, consider enabling Detached Mode to keep the container alive after script execution.
- Common issues: missing Docker socket permissions, insufficient host directory permissions, or network restrictions preventing npm install inside the sandbox.
- If you encounter permission errors, ensure the user running Docker has appropriate access to the Docker socket and the mounted host directory.
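The common issues above can be caught before starting the server. The following is an illustrative pre-flight check, not part of the package itself; the `FILES_DIR` variable and the Docker socket path match the docker run example earlier:

```shell
# Pre-flight checks for the two most common failure modes:
# an unwritable output directory and a missing Docker socket.
FILES_DIR="${FILES_DIR:-$PWD/sandbox-output}"

# 1. Host output directory must exist and be writable by your user.
mkdir -p "$FILES_DIR"
if [ -w "$FILES_DIR" ]; then
  echo "ok: FILES_DIR is writable ($FILES_DIR)"
else
  echo "error: FILES_DIR is not writable ($FILES_DIR)" >&2
fi

# 2. Docker socket must be accessible, or sandbox containers cannot start.
if [ -S /var/run/docker.sock ]; then
  echo "ok: docker socket present"
else
  echo "warn: /var/run/docker.sock not found or not a socket" >&2
fi
```

If the socket check fails on Linux, adding your user to the `docker` group (`sudo usermod -aG docker "$USER"`, then re-login) is the usual fix.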
Related MCP Servers
ipybox
Python code execution sandbox with programmatic MCP tool calling (PTC)
browserai
A powerful Model Context Protocol (MCP) server that provides AI agents and apps with access to a serverless browser
midnight
Midnight MCP server giving AI assistants access to Midnight blockchain — search contracts, analyze code, explore docs
CyberSecurity
Model Context Protocol server for cyber security
pega-dx
Pega DX MCP Server - Enabling conversational interaction with Pega Infinity™ applications. This MCP Server transforms Pega Infinity™ interactions into intuitive, conversational experiences through the Model Context Protocol.
glassbox-ai
Autonomous coding agent that ships tested PRs from GitHub issues. Trust-scored multi-agent pipeline - every decision transparent. Trust is earned, not assumed. 💎