
pctx

pctx is the execution layer for agentic tool calls. It auto-converts agent tools and MCP servers into code that runs in secure sandboxes for token-efficient workflows.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

  claude mcp add --transport stdio portofcontext-pctx npx -y @modelcontextprotocol/server-memory

How to use

pctx acts as a bridge between AI agents and multiple upstream MCP servers, exposing their tools as code functions through a Code Mode interface. Agents carry out multi-step tasks by calling those functions inside a sandboxed code execution environment, which reduces prompt churn and token usage. The unified MCP capability connects upstream MCP servers (such as Stripe or custom MCP endpoints) and exposes their tools behind a single, standardized interface. You can start Code Mode sessions for Python, or run a unified MCP server that aggregates multiple upstreams, with authentication handled centrally through the pctx config.

To use the MCP capabilities, initialize the unified MCP configuration, add your upstream MCP servers or tool servers, and run the MCP server in dev or production mode. Tools from connected MCP servers then become available to agents as Code Mode functions, so agents can compose complex tool workflows in code rather than issuing sequential API calls, improving latency and token efficiency. The README includes an example of adding a memory-backed MCP server via npx and shows how Code Mode can be exercised from Python or through the CLI.

How to install

  • Prerequisites:

    • Ensure you have Rust, Node.js, and npm installed if you plan to build or run components locally.
    • Install the MCP server tooling as described in the README of pctx or the MCP server repository.
  • Install the MCP server (example using pctx):

    1. Install via Homebrew (macOS):
      brew install portofcontext/tap/pctx
      
    2. Install via cURL/script:
      curl --proto '=https' --tlsv1.2 -LsSf https://raw.githubusercontent.com/portofcontext/pctx/main/install.sh | sh
      
    3. Install via npm (global):
      npm i -g @portofcontext/pctx
      
  • Setup configuration:

    • Create and customize a pctx.json or equivalent config file to define your upstream MCP servers and tools.
    • See examples in the README for adding servers and configuring authentication.
  • Run:

    • Initialize a unified MCP config: pctx mcp init
    • Add upstream MCP server(s): pctx mcp add <name> <endpoint> (for a command-launched server, e.g., pctx mcp add memory --command "npx -y @modelcontextprotocol/server-memory")
    • Start the MCP server: pctx mcp dev for development, or pctx mcp start --stdio for production, depending on your deployment
  • Optional: install Python SDKs or client libraries if you plan to drive the MCP server from Python:

    • Python: pip install pctx-client
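The configuration step above references a pctx.json file. Its exact schema is defined by the pctx README; the shape below is an assumption inferred from the CLI usage shown above (a named server entry with either a launch command or an endpoint, plus env vars for authentication), not a verbatim schema.

```json
{
  "servers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "stripe": {
      "endpoint": "https://mcp.stripe.com",
      "env": { "STRIPE_API_KEY": "${STRIPE_API_KEY}" }
    }
  }
}
```

Note the `${STRIPE_API_KEY}` placeholder: secrets are referenced from the environment rather than hard-coded, matching the guidance in the notes below.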

Additional notes

  • The mcp_config example uses npx to launch MCP-related tool servers. You can replace these with other launch methods supported by your deployment (e.g., dockerized MCP tools).
  • Environment variables in mcp_config are optional but recommended for endpoints, secrets, and authentication. Avoid hard-coding secrets in the config.
  • When using Code Mode, ensure your agents have appropriate tool schemas and access permissions to the upstream MCP servers.
  • For security, pctx runs code in a sandboxed environment and limits network access to configured hosts; adjust your config to allow only trusted hosts.
  • If you encounter connection or authentication issues, verify that endpoint URLs are reachable from the host running pctx and that any required API keys or tokens are supplied via env vars or secure key storage.
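For the troubleshooting note above, a small preflight script can rule out the two most common causes before starting pctx: missing credentials and unreachable endpoints. This is a generic sketch using only the Python standard library; the variable and endpoint names are illustrative.

```python
# Preflight checks before launching pctx: verify required auth variables
# are set and that an upstream endpoint's host accepts TCP connections.
# Names like STRIPE_API_KEY are examples, not requirements of pctx.
import os
import socket
from urllib.parse import urlparse


def missing_env(names: list[str]) -> list[str]:
    """Return the environment variable names that are unset or empty."""
    return [n for n in names if not os.environ.get(n)]


def host_reachable(endpoint: str, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to the endpoint's host and port."""
    parsed = urlparse(endpoint)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False


if __name__ == "__main__":
    print("missing:", missing_env(["STRIPE_API_KEY"]))
    print("reachable:", host_reachable("https://mcp.stripe.com"))
```

If an endpoint fails the reachability check from the host running pctx, fix network access or the URL before debugging authentication.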
