headroom
The Context Optimization Layer for LLM Applications
claude mcp add --transport stdio chopratejas-headroom \
  --env HEADROOM_MCP_BIND=0.0.0.0 \
  --env HEADROOM_MCP_PORT=8080 \
  -- python -m headroom.mcp
HEADROOM_MCP_BIND defaults to 0.0.0.0 and HEADROOM_MCP_PORT defaults to 8080; override them as needed.
How to use
Headroom exposes an MCP server that wraps the Python Headroom toolchain as a proxy for LLM tool usage. The server enables Claude Code integration and applies Headroom's compression and learning capabilities to optimize tool calls and responses. Connect Claude Code (or any MCP-compatible client) to the Headroom MCP endpoint and it will route tool outputs through Headroom's compression, learning, and context-optimization pipeline. Typical usage routes tool invocations through the MCP server to gain automatic token savings, improved context management, and corrective feedback when tools underperform, all while preserving exact recoverability of omitted information when needed. The server exposes capabilities such as compress, learn, and integration helpers that your MCP client can invoke to enhance LLM interactions with external tools.
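For example, you can register the server with the Claude Code CLI and confirm it is reachable (a minimal sketch; the server name chopratejas-headroom matches the command above, but any name works):
  # Register the Headroom MCP server over stdio
  claude mcp add --transport stdio chopratejas-headroom -- python -m headroom.mcp
  # List configured servers to confirm registration
  claude mcp list
Once registered, Claude Code discovers the server's capabilities automatically and can route tool outputs through them.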
How to install
Prerequisites:
- Python 3.8+ (recommended 3.9+)
- pip (or pipx) available in your environment
- Optional: a virtual environment to isolate dependencies
Installation steps:
- Create and activate a Python environment (optional but recommended):
  python -m venv venv
  source venv/bin/activate    # on macOS/Linux
  .\venv\Scripts\activate     # on Windows
- Install Headroom with all optional dependencies (this includes the MCP integrations and tooling):
  pip install "headroom-ai[all]"
- Verify installation and availability of the MCP module:
  python -m pip show headroom-ai
  Confirm the package is installed and that the mcp entrypoint is accessible.
- Run the MCP server (as defined in mcp_config):
  python -m headroom.mcp
  Or invoke it via your container/orchestrator using the command and args from mcp_config.
- Optional: configure environment variables and ports as needed for your deployment (see the combined example after this list):
  HEADROOM_MCP_PORT=8080
  HEADROOM_MCP_BIND=0.0.0.0
- If you're deploying in Docker, adapt the command to a container run aligned with mcp_config:
  docker run -i headroom-mcp:latest
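Putting the steps together, a minimal end-to-end session might look like the following (a sketch assuming the headroom.mcp entrypoint and the default bind/port values named above; the image tag headroom-mcp:latest is taken from the Docker step):
  # Verify the package and that the entrypoint is importable
  python -m pip show headroom-ai
  python -c "import headroom.mcp"
  # Run the server with explicit bind/port settings
  HEADROOM_MCP_BIND=0.0.0.0 HEADROOM_MCP_PORT=8080 python -m headroom.mcp
  # Or the containerized equivalent; publish the port if your client connects over the network
  docker run -i -e HEADROOM_MCP_BIND=0.0.0.0 -e HEADROOM_MCP_PORT=8080 -p 8080:8080 headroom-mcp:latest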
Note:
- The exact module path may vary depending on how you structure your runtime environment. If your installation uses a different entrypoint, adjust the -m module name accordingly (e.g., headroom.mcp or headroom.mcp_server).
- For production, consider running behind a reverse proxy with TLS termination, and configure authentication as appropriate for your MCP client ecosystem (one sketch follows this note).
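As one sketch of that reverse-proxy setup (assuming Caddy is installed, headroom.example.com is a hypothetical domain resolving to the host, and the server listens on the default port 8080; none of this is part of Headroom itself):
  # Terminate TLS with Caddy and forward traffic to the local MCP port
  caddy reverse-proxy --from headroom.example.com --to localhost:8080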
Additional notes
Tips and common issues:
- Ensure Python 3.8+ is used; older interpreters may fail due to typing or dependency requirements.
- When updating headroom-ai, re-check the MCP module path in case the entrypoint changes between releases.
- If you see port binding issues, check that HEADROOM_MCP_BIND is set to a routable interface and that the port is not already in use by another process (see the check after this list).
- Environment variables can customize behavior like port, binding address, and runtime options; document and version them for reproducibility.
- The MCP server integrates with Claude Code workflows via the MCP interface; configure your MCP client to target the Headroom MCP endpoint and to handle any compression/learning callbacks per Headroom's guidance.
- If you run into import or dependency errors, verify that your environment’s PATH and PYTHONPATH include the site-packages directory where headroom-ai is installed.
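For the port-binding tip above, standard tooling shows whether another process already holds the port (assuming the default 8080):
  # Linux: list the process listening on the port
  ss -ltnp | grep :8080
  # macOS: equivalent check
  lsof -iTCP:8080 -sTCP:LISTEN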
Related MCP Servers
AstrBot
Agentic IM chatbot infrastructure that integrates many IM platforms, LLMs, plugins, and AI features, and can be your openclaw alternative. ✨
SearChat
Search + Chat = SearChat (AI chat with search): an AI conversational search engine supporting the OpenAI/Anthropic/VertexAI/Gemini APIs, DeepResearch, the SearXNG metasearch engine, and one-click Docker deployment.
aser
Aser is a lightweight, self-assembling AI agent framework.
zypher-agent
A minimal yet powerful framework for creating AI agents with full control over tools, providers, and execution flow.
codemesh
The Self-Improving MCP Server - Agents write code to orchestrate multiple MCP servers with intelligent TypeScript execution and auto-augmentation
local-skills
Universal MCP server enabling any LLM or AI agent to utilize expert skills from your local filesystem. Reduces context consumption through lazy loading. Works with Claude, Cline, and any MCP-compatible client.