
pprof-mcp-agent

A Go agent that provides runtime profiling data through the Model Context Protocol (MCP).

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio yudppp-pprof-mcp-agent -- docker run -i --rm -p 1239:1239 \
  --env MCP_PORT=1239 \
  --env MCP_LOG_LEVEL=info \
  ghcr.io/yudppp/pprof-mcp-agent

MCP_PORT optionally overrides the port the MCP server listens on inside the container; set MCP_LOG_LEVEL=debug for verbose logs.

How to use

The pprof-mcp-agent is a Go-based MCP server that exposes Go runtime profiling data over the Model Context Protocol (MCP). It allows you to request CPU, memory, goroutine, block, allocation, and thread-creation profiles from a running Go application and view them in flat, cumulative, or graph views. Once started, you can connect to the MCP endpoint and request different profile types with configurable sampling durations and limits to inspect performance characteristics in real time. This is useful for diagnosing CPU bottlenecks, memory leaks, goroutine leaks, and synchronization issues in production systems without intrusive instrumentation.

How to install

Prerequisites:

  • Docker (recommended) or a compatible container runtime
  • Optional: Go toolchain if you prefer building from source

Installation steps (Docker):

  1. Ensure Docker is running on your machine.
  2. Pull and run the MCP agent container (example image assumed to be ghcr.io/yudppp/pprof-mcp-agent): docker run -p 1239:1239 ghcr.io/yudppp/pprof-mcp-agent
  3. The agent will start and listen on port 1239 for MCP connections. Adjust port if needed via environment variables or docker run flags as documented by the image maintainers.
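For example, to run the agent on a non-default port, the published port and the MCP_PORT environment variable (assuming the image honors it, as suggested by the install command above) can be changed together:

```shell
# Run the agent on port 9000 instead of 1239 (MCP_PORT support assumed).
docker run --rm \
  -p 9000:9000 \
  --env MCP_PORT=9000 \
  --env MCP_LOG_LEVEL=debug \
  ghcr.io/yudppp/pprof-mcp-agent
```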

Alternative (from source, if you build a native binary):

  1. Ensure Go is installed (https://golang.org/dl/).
  2. Install the binary with the Go toolchain (module path assumed): go install github.com/yudppp/pprof-mcp-agent@latest
  3. Run the binary (the exact binary name may vary by build): pprof-mcp-agent --port 1239
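The source-build steps above can be sketched as a single shell session (module path, binary name, and the --port flag are assumptions carried over from the steps):

```shell
# Install the binary into GOPATH/bin and start it on port 1239.
go install github.com/yudppp/pprof-mcp-agent@latest
"$(go env GOPATH)/bin/pprof-mcp-agent" --port 1239
```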

Prerequisites recap:

  • Docker or Go toolchain
  • Network access to reach the target MCP client
  • Optional: knowledge of which profile types you want to collect (CPU, Heap, Goroutine, Block, Allocation, Thread Creation)

Additional notes

Notes and tips:

  • CPU profiling duration defaults to 10 seconds and can be adjusted via configuration.
  • Profiles include: CPU, Heap, Goroutine, Block, Allocation, and Thread Creation. Each supports three view modes: flat, cumulative, and graph.
  • The Graph view shows a call graph with the top 5 children per function, which helps identify hot paths in complex call trees.
  • When using Docker, ensure the container image matches the architecture of your host (amd64 vs arm64).
  • If you encounter connectivity issues, verify that port 1239 (or the configured port) is open and that MCP clients can reach the host.
  • Environment variables can be used to adjust logging level and port bindings without modifying the container image or binary.
  • If you are integrating into an existing Go application, you can use ServeSSE(ctx, ":1239") as shown in the usage example to expose the MCP endpoint.
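A minimal sketch of that in-process integration, assuming the module exposes a ServeSSE(ctx, addr) function at the import path below (the exact package path and signature may differ from the published module):

```go
package main

import (
	"context"
	"log"

	pprofmcpagent "github.com/yudppp/pprof-mcp-agent" // import path assumed
)

func main() {
	ctx := context.Background()

	// Expose the MCP profiling endpoint alongside the application,
	// so MCP clients can request runtime profiles while it runs.
	go func() {
		if err := pprofmcpagent.ServeSSE(ctx, ":1239"); err != nil {
			log.Fatal(err)
		}
	}()

	// ... the rest of the application runs as usual ...
	select {}
}
```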
