CyberChef
[CyberChef-MCP] Model Context Protocol Server for CyberChef ... exposing GCHQ's "Cyber Swiss Army Knife" as 463+ executable AI agent tools spanning encryption, encoding, compression, and forensic data analysis
claude mcp add --transport stdio doublegate-cyberchef-mcp node src/node/mcp-server.mjs \
  --env PORT="3000" \
  --env CYBERCHEF_ENABLE_WORKERS="true"
How to use
This MCP server exposes CyberChef's extensive data manipulation capabilities as MCP tools. Clients (typically AI assistants) can invoke CyberChef operations directly through the MCP interface, including the omni-tool cyberchef_bake, which runs a complete CyberChef recipe, and a large suite of 463 atomic operations such as cyberchef_to_base64, cyberchef_aes_decrypt, cyberchef_sha2, and cyberchef_yara_rules. Additional management tools are available, such as cyberchef_search to discover operations and the cyberchef_recipe_* tools to save, organize, and execute multi-operation workflows. Newer features include worker-thread-based processing (with cyberchef_worker_stats for monitoring) and enterprise-oriented utilities (batch processing via cyberchef_batch, telemetry exports, and quota/stats tools). The server runs in a Node.js environment and can be exposed over standard HTTP transports for MCP clients. To begin, start the MCP server and connect your agent to its endpoint to enumerate the available tools and execute CyberChef recipes or individual operations.
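As a concrete sketch of what a tool invocation looks like on the wire, the snippet below prints an MCP tools/call request for cyberchef_bake. The argument names (input, recipe) and the recipe entry shape are assumptions modelled on CyberChef's recipe format, not confirmed from this server's tool schema.

```shell
# Build a JSON-RPC 2.0 "tools/call" request for the cyberchef_bake tool.
# NOTE: the "input"/"recipe" argument names are assumptions; check the
# schema reported by the server's tool listing before relying on them.
payload=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "cyberchef_bake",
    "arguments": {
      "input": "hello",
      "recipe": [
        { "op": "To Base64", "args": [] },
        { "op": "SHA2", "args": ["256"] }
      ]
    }
  }
}
EOF
)
echo "$payload"
```

When the server is running over an HTTP transport, a payload like this can be POSTed to its MCP endpoint; the exact endpoint path depends on the transport configuration, so take it from the repository's documentation.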
How to install
Prerequisites:
- Node.js v22 or newer installed on the host
- Internet access to install dependencies
Recommended installation steps:
1. Clone the repository:
   git clone https://github.com/doublegate/CyberChef-MCP.git
   cd CyberChef-MCP
2. Install dependencies:
   npm install
3. Start the MCP server:
   - If a start script is defined: npm run start
   - If no script is defined, run directly: node src/node/mcp-server.mjs
4. Verify the server is running by hitting the MCP endpoint (default port 3000 as configured): http://localhost:3000/
Note: If you customize the environment (e.g., different port or enabling worker threads), adjust the environment variables accordingly before starting the server.
Optional: Build a Docker image if you prefer containerized deployment, following the Docker-based deployment guide in the repository.
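The verification step above can be scripted. This assumes the default port 3000 and only checks that something answers on the MCP endpoint; the exact response body depends on the transport in use.

```shell
# Print the HTTP status code from the MCP endpoint; a connection
# failure (nonzero curl exit code) means the server is not running.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/ \
  || echo "server not reachable on port 3000"
```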
Additional notes
Tips and common considerations:
- Environment variables: CYBERCHEF_ENABLE_WORKERS enables the worker thread pool; adjust pool size and routing through additional config if needed. CYBERCHEF_TRANSPORT can be used to switch transport mode if you’re integrating with clients that require HTTP streaming.
- Ensure Node.js v22+ compatibility to avoid runtime issues with newer CyberChef features.
- If you upgrade CyberChef components, review the MCP-related code paths under src/node/mcp-server.mjs to ensure compatibility with new operation definitions or recipe management features.
- Use cyberchef_search to discover available operations and cyberchef_recipe_* tools to manage complex multi-step workflows efficiently.
- For production, consider enabling structured logging (Pino) and setting appropriate timeouts and input size limits via environment variables to prevent resource exhaustion.
- If you encounter port conflicts, change the PORT environment variable and update the MCP client endpoint accordingly.
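Putting the tips above together, a customized launch might look like the following. PORT, CYBERCHEF_ENABLE_WORKERS, and CYBERCHEF_TRANSPORT are named in this document, but the "http" value and any further tuning variables (e.g. for timeouts or input size limits) are assumptions to verify against the repository's configuration reference.

```shell
export PORT="8080"                      # changed from the default 3000 to avoid a conflict
export CYBERCHEF_ENABLE_WORKERS="true"  # enable the worker thread pool
export CYBERCHEF_TRANSPORT="http"       # assumed value; confirm supported modes
node src/node/mcp-server.mjs
```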
Related MCP Servers
llm-functions
Easily create LLM tools and agents using plain Bash/JavaScript/Python functions.
cheatengine-bridge
Connect Cursor, Copilot & Claude directly to Cheat Engine via MCP. Automate reverse engineering, pointer scanning, and memory analysis using natural language.
langchainjs-adapters
** THIS REPO HAS MOVED TO https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-mcp-adapters ** Adapters for integrating Model Context Protocol (MCP) tools with LangChain.js applications, supporting both stdio and SSE transports.
mcp-batchit
🚀 MCP aggregator for batching multiple tool calls into a single request. Reduces overhead, saves tokens, and simplifies complex operations in AI agent workflows.
local_faiss_mcp
Local FAISS vector store as an MCP server – drop-in local RAG for Claude / Copilot / Agents.
SchemaPin
The SchemaPin protocol for cryptographically signing and verifying AI agent tool schemas to prevent supply-chain attacks.