pctx
pctx is the execution layer for agentic tool calls. It auto-converts agent tools and MCP servers into code that runs in secure sandboxes for token-efficient workflows.
Example: registering an MCP server with Claude Code over stdio (here launching the memory server via npx):
claude mcp add --transport stdio portofcontext-pctx npx -y @modelcontextprotocol/server-memory
How to use
pctx acts as a bridge between AI agents and multiple upstream MCP servers, offering a Code Mode interface where tools are exposed as code functions. This lets agents execute multi-step tasks by calling tools through a sandboxed code execution environment, reducing prompt churn and token usage. The unified MCP capability lets you connect upstream MCP servers (like Stripe or custom MCP endpoints) and expose their tools via a single, standardized interface. You can start Code Mode sessions for Python or produce a unified MCP server that aggregates multiple upstreams, with authentication handled centrally through the pctx config.
To use the MCP capabilities, initialize the unified MCP configuration, add your upstream MCP servers or tool servers, and then run the MCP server in dev or production mode. Tools from connected MCP servers become available to agents as code-mode functions. This enables agents to compose complex tool workflows in code, rather than issuing sequential API calls, which improves latency and token efficiency. The README provides examples for adding a memory-backed MCP server via npx and demonstrates how Code Mode can be exercised from Python or through the CLI.
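The difference between issuing sequential tool calls and composing them in Code Mode can be sketched in plain Python. The tool functions below are hypothetical stand-ins for what a sandbox would expose, not the actual pctx API:

```python
# Illustrative sketch only: search() and summarize() are hypothetical
# stand-ins for upstream tools exposed as callable functions in Code Mode.

def search(query: str) -> list[str]:
    """Pretend upstream tool: returns matching document titles."""
    corpus = {"mcp": ["Intro to MCP", "MCP transports"], "pctx": ["pctx quickstart"]}
    return corpus.get(query, [])

def summarize(titles: list[str]) -> str:
    """Pretend upstream tool: collapses results into one line."""
    return "; ".join(titles) if titles else "no results"

# In Code Mode the agent writes this composition once and runs it in the
# sandbox, instead of one model round-trip per tool call:
result = summarize(search("mcp"))
print(result)  # -> Intro to MCP; MCP transports
```

Each intermediate value stays inside the sandbox, so only the final result needs to travel back through the model's context, which is where the token savings come from.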
How to install
Prerequisites:
- Ensure you have Rust, Node.js, and npm installed if you plan to build or run components locally.
- Install the MCP server tooling as described in the README of pctx or the MCP server repository.
Install the MCP server (example using pctx):
- Install via Homebrew (macOS):
  brew install portofcontext/tap/pctx
- Install via cURL/script:
  curl --proto '=https' --tlsv1.2 -LsSf https://raw.githubusercontent.com/portofcontext/pctx/main/install.sh | sh
- Install via npm (global):
  npm i -g @portofcontext/pctx
Setup configuration:
- Create and customize a pctx.json or equivalent config file to define your upstream MCP servers and tools.
- See examples in the README for adding servers and configuring authentication.
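As a rough illustration, a pctx.json might look like the sketch below. The exact schema is defined by pctx; the field names here ("mcpServers", "command", "args", "endpoint", "env") are assumptions for illustration, and secrets are referenced via environment variables rather than hard-coded:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "stripe": {
      "endpoint": "https://mcp.stripe.com",
      "env": { "STRIPE_API_KEY": "${STRIPE_API_KEY}" }
    }
  }
}
```

Consult the pctx README for the authoritative schema and authentication options before adopting this shape.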
Run:
- Initialize a unified MCP config:
  pctx mcp init
- Add upstream MCP server(s):
  pctx mcp add <name> <endpoint> (e.g., pctx mcp add memory --command "npx -y @modelcontextprotocol/server-memory")
- Start the MCP server:
  pctx mcp dev or pctx mcp start --stdio, depending on your deployment needs
Optional: install Python SDKs or client libraries if you plan to drive the MCP server from Python:
- Python:
pip install pctx-client
Additional notes
- The mcp_config example uses npx to launch MCP-related tool servers. You can replace these with other launch methods supported by your deployment (e.g., dockerized MCP tools).
- Environment variables in mcp_config are optional but recommended for endpoints, secrets, and authentication. Avoid hard-coding secrets in the config.
- When using Code Mode, ensure your agents have appropriate tool schemas and access permissions to the upstream MCP servers.
- For security, pctx runs code in a sandboxed environment and limits network access to configured hosts; adjust your config to allow only trusted hosts.
- If you encounter connection or authentication issues, verify that endpoint URLs are reachable from the host running pctx and that any required API keys or tokens are supplied via env vars or secure key storage.
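For the last point, a few lines of standard-library Python can confirm that a host and port are reachable from the machine running pctx before you start debugging authentication. The helper name below is ours, not part of pctx:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures.
        return False

# Example: verify a local endpoint before pointing pctx at it.
# can_connect("127.0.0.1", 8080)
```

If this returns False, the problem is network-level (firewall, DNS, wrong port) rather than a missing API key or token.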
Related MCP Servers
openfang
Open-source Agent Operating System
MCPJungle
Self-hosted MCP Gateway for AI agents
flyto-core
The open-source execution engine for AI agents. 412 modules, MCP-native, triggers, queue, versioning, metering.
shinzo-ts
TypeScript SDK for MCP server observability, built on OpenTelemetry. Gain insight into agent usage patterns, contextualize tool calls, and analyze server performance across platforms. Integrate with any OpenTelemetry ingest service including the Shinzo platform.
ollama-bridge
Extend the Ollama API with dynamic AI tool integration from multiple MCP (Model Context Protocol) servers. Fully compatible, transparent, and developer-friendly, ideal for building powerful local LLM applications, AI agents, and custom chatbots
opnsense
Modular MCP server for OPNsense firewall management - 88 tools providing access to 2000+ methods through AI assistants