
chunkhound

Local-first codebase intelligence

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio chunkhound-chunkhound uvx chunkhound
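For reference, the command above registers a stdio server named chunkhound-chunkhound that is launched via uvx. The resulting entry in your MCP configuration looks roughly like the sketch below (the exact schema can vary by Claude Code version, so treat field names as an approximation):

```json
{
  "mcpServers": {
    "chunkhound-chunkhound": {
      "type": "stdio",
      "command": "uvx",
      "args": ["chunkhound"]
    }
  }
}
```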

How to use

ChunkHound provides local-first codebase intelligence: it indexes and analyzes your codebase to extract architectural patterns, code relationships, and institutional knowledge. It supports semantic search, regex search, and code research, all running locally on your machine so your code is never sent to cloud services. Through MCP, you can access ChunkHound's indexing and search features from MCP-compatible clients such as Claude, VS Code, Cursor, Windsurf, and Zed. This enables targeted queries like finding authentication code, tracing code relationships across a monorepo, or running multi-hop semantic searches that go beyond simple keyword matching. The server exposes tooling to index your project, configure embedding and LLM providers, and query the indexed code with natural language or regex patterns.

To use it, first ensure your environment has Python 3.10+ and the uv package manager. Install ChunkHound via uv, then run the server through your chosen MCP client. Once indexed, you can perform semantic searches across languages and configuration formats (JSON, YAML, TOML, HCL, Markdown, etc.) and use the built-in code research capabilities to retrieve contextual information from your codebase.
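Under the hood, an MCP client invokes the server's tools over JSON-RPC using the protocol's standard tools/call method. The sketch below shows what such a request could look like; the tool name search_semantic and its argument shape are assumptions for illustration, not confirmed ChunkHound API (inspect the server's tool listing in your client to see the real names):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_semantic",
    "arguments": { "query": "where is authentication handled?" }
  }
}
```

In practice your MCP client constructs these requests for you; you simply ask the question in natural language.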

How to install

Prerequisites:

  • Python 3.10 or newer
  • The uv package manager (https://docs.astral.sh/uv/)
  • Optional: API keys for embeddings and LLMs if you want to enable those features (VoyageAI recommended for embeddings; Claude/OpenAI/Codex options for LLMs)
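Before installing, you can sanity-check the prerequisites from Python. This is a minimal sketch (not part of ChunkHound itself) that verifies the interpreter version and whether uv is on your PATH:

```python
import shutil
import sys


def check_prereqs() -> tuple[bool, bool]:
    """Return (python_ok, uv_ok) for the ChunkHound prerequisites."""
    # ChunkHound requires Python 3.10 or newer.
    python_ok = sys.version_info >= (3, 10)
    # uv must be installed and discoverable on PATH.
    uv_ok = shutil.which("uv") is not None
    return python_ok, uv_ok


if __name__ == "__main__":
    python_ok, uv_ok = check_prereqs()
    print(f"Python 3.10+: {python_ok}, uv on PATH: {uv_ok}")
```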
  1. Install uv if needed
curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Install ChunkHound
uv tool install chunkhound
  3. Run ChunkHound (via an MCP-compatible client)
  • This step may be performed by your MCP plugin/adapter. Ensure the MCP client is configured to launch the ChunkHound server (for example, via uvx chunkhound as in the Installation command above). If you need to start a local service directly, you can typically invoke the installed tool via uv in the environment where your MCP client is running. See your MCP client's documentation for the exact run command.
  4. Initialize a project for indexing
  • Create a .chunkhound.json at your project root with your embedding and LLM provider settings (as shown in the Quick Start section of the README).
{
  "embedding": {
    "provider": "voyageai",
    "api_key": "your-voyageai-key"
  },
  "llm": {
    "provider": "claude-code-cli"
  }
}
  5. Index your codebase
chunkhound index

Notes:

  • If you prefer Codex, you can replace the llm provider with codex-cli and omit the API key.
  • The embedding and LLM API keys can be configured later via environment variables or configuration files, as supported by your MCP client.
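Applying the Codex variant described above, the .chunkhound.json would look like this (the embedding key stays; only the LLM provider's API key is omitted):

```json
{
  "embedding": {
    "provider": "voyageai",
    "api_key": "your-voyageai-key"
  },
  "llm": {
    "provider": "codex-cli"
  }
}
```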

Additional notes

Tips and common issues:

  • Ensure Python 3.10+ is installed and that uv is properly installed before attempting to install ChunkHound.
  • If you are using embeddings, provide a valid VoyageAI API key (or other supported providers) in the configuration file or environment variables.
  • ChunkHound runs locally; ensure your environment has access to the codebase you want to index and search.
  • MCP clients like Claude, VS Code, Cursor, Windsurf, and Zed can connect to the ChunkHound MCP server to perform natural language queries or regex searches against the indexed code.
  • If you encounter indexing performance issues on very large repositories, consider indexing in smaller submodules or limiting the scope during initial runs, then progressively expanding coverage.
