model-context-shell
Unix-style pipelines for MCP. Deterministic tool calls.
Add the server to Claude Code as a stdio MCP server (everything after `--` is the command Claude runs to start the server):

claude mcp add --transport stdio stackloklabs-model-context-shell -- docker run -i ghcr.io/stackloklabs/model-context-shell:latest --foreground --transport streamable-http
How to use
Model Context Shell provides a safe, MCP-native environment for orchestrating multiple MCP tool calls as a single, Unix-style pipeline. It exposes four core tools to the agent:

- execute_pipeline: run a full pipeline of tool and shell stages
- list_all_tools: discover tools available from MCP servers via ToolHive
- get_tool_details: fetch the full schema and description for a specific tool
- list_available_shell_commands: show the allowed CLI commands

Agents construct pipelines as JSON arrays in which each stage is a tool call, a pre-processing command, or a preview, with optional per-item processing via for_each. Data is streamed through the pipeline and only the final output is returned to the agent, which reduces intermediate data exposure in the context. This makes complex workflows, such as fetching data from multiple MCP tools, transforming it with safe shell commands, and returning a concise summary, both scalable and auditable.
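As a rough sketch of what an agent might pass to execute_pipeline, the snippet below builds a three-stage pipeline as a JSON array. The stage field names ("type", "tool", "command") and the tool name fetch_issues are illustrative assumptions, not the server's actual schema; consult get_tool_details for the real shape.

```python
import json

# Hypothetical pipeline: call an MCP tool, narrow its output with an
# allowed shell command, then preview the result. Field names and the
# "fetch_issues" tool are assumptions for illustration only.
pipeline = [
    {"type": "tool", "tool": "fetch_issues", "arguments": {"repo": "example/repo"}},
    {"type": "command", "command": ["jq", "-c", ".[] | {title, state}"]},
    {"type": "preview"},
]

# Serialize the pipeline for the execute_pipeline call.
payload = json.dumps(pipeline)
print(payload)
```

Note that the shell stage is expressed as an argv list rather than a single command string, matching the server's model of running allowed commands without a shell interpreter.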
How to install
Prerequisites:
- Docker installed and running (or another container runtime supported by your environment)
- Access to ToolHive (thv) if you plan to discover and manage MCP tools
Installation steps:
1. Pull the pre-built Model Context Shell image:
   - If using Docker directly: docker pull ghcr.io/stackloklabs/model-context-shell:latest
2. Run the server via a compatible runtime (from the MCP client perspective, ToolHive is used for orchestration). Example using Docker with foreground transport:
   - docker run -i ghcr.io/stackloklabs/model-context-shell:latest --foreground --transport streamable-http
3. Ensure ToolHive is configured if you plan to discover tools:
   - Follow the ToolHive setup: install thv and use thv login/setup as per the ToolHive quickstart.
4. Connect your MCP client to the running server following the MCP specification. Once the server is up, you should be able to invoke the four tools it exposes.
Additional notes
Tips and caveats:
- The server runs inside a container; all data flowing through pipelines is JSON-based. No shell injection occurs because commands are not passed through a shell interpreter.
- Use the preview stage to inspect data structures before applying transformations.
- If you use for_each, ensure previous stages emit one JSON object per line (JSONL) to enable per-item processing.
- When using Docker, ensure the container has network access to any MCP servers or ToolHive endpoints you intend to query.
- If you encounter authentication or access issues with ToolHive, verify your thv configuration and credentials as per the ToolHive documentation.
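The JSONL requirement for for_each can be sketched as follows: a stage that feeds a per-item step should emit one compact JSON object per line, and each line is then parsed independently. The record shapes here are invented for illustration.

```python
import json

# Example records a pipeline stage might produce (illustrative data).
records = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

# Emit JSONL: exactly one compact JSON object per line, no pretty-printing,
# so that a for_each step can split on newlines.
jsonl = "\n".join(json.dumps(r, separators=(",", ":")) for r in records)
print(jsonl)

# A per-item (for_each-style) step parses each line on its own.
items = [json.loads(line) for line in jsonl.splitlines()]
assert [i["name"] for i in items] == ["alpha", "beta"]
```

Pretty-printed JSON (one object spread across several lines) breaks this contract, which is why the preview stage is useful for checking the exact shape a stage emits before wiring up for_each.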
Related MCP Servers
web-agent-protocol
🌐Web Agent Protocol (WAP) - Record and replay user interactions in the browser with MCP support
google_ads_mcp
The Google Ads MCP Server is an implementation of the Model Context Protocol (MCP) that enables Large Language Models (LLMs), such as Gemini, to interact directly with the Google Ads API.
AI-SOC-Agent
Blackhat 2025 presentation and codebase: AI SOC agent & MCP server for automated security investigation, alert triage, and incident response. Integrates with ELK, IRIS, and other platforms.
ultrasync
MCP server from darvid/ultrasync
zotero-lite
Zotero MCP Lite: Fast, Customizable & Light Zotero MCP server for AI research assistants
rasdaman
An MCP server for querying rasdaman with natural language.