
repo-graphrag

An MCP server that uses LightRAG and Tree-sitter to build a repository knowledge graph from code and docs, for Q&A and implementation planning.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio yumeiriowl-repo-graphrag-mcp \
  --env GEMINI_API_KEY="your_gemini_api_key" \
  --env OPENAI_API_KEY="your_openai_api_key" \
  --env OPENAI_BASE_URL="http://localhost:1234/v1" \
  --env ANTHROPIC_API_KEY="your_anthropic_api_key" \
  --env AZURE_API_VERSION="azure_openai_api_version" \
  --env AZURE_OPENAI_API_KEY="your_azure_openai_api_key" \
  --env AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/" \
  --env GRAPH_CREATE_PROVIDER="your_llm_provider_for_graph_creation" \
  --env GRAPH_ANALYSIS_PROVIDER="your_llm_provider_for_planning_and_qa" \
  --env GRAPH_CREATE_MODEL_NAME="claude-haiku-4-5" \
  --env GRAPH_ANALYSIS_MODEL_NAME="claude-sonnet-4-5" \
  -- uv --directory /absolute/path/to/repo-graphrag-mcp run server.py

How to use

Repo GraphRAG builds and queries a knowledge graph from the code and text documents in a repository or directory. It uses Tree-sitter to extract code structure and LightRAG to construct the graph, then builds an embedding index for fast retrieval. The server exposes three MCP tools: graph_create builds or updates the graph, graph_plan generates implementation plans with concrete steps, and graph_query answers questions grounded in the graph. Any MCP-compatible client can connect, including Claude Code, VS Code extensions, and generic MCP libraries. To use it, start the server with the uv command shown above, then issue commands that start with graph: and specify the target directory and a storage name (the default storage name is storage unless you specify another).

  • graph_create analyzes the target directory to build or incrementally update the knowledge graph and embeddings. It supports incremental updates by reanalyzing only changes when the same storage name is used. You can adjust extraction settings and embeddings by changing environment variables and model names.
  • graph_plan uses the knowledge graph to generate a detailed implementation plan and actionable steps for a requested modification or enhancement, optionally combining results with vector search for improved accuracy.
  • graph_query answers questions about the repository based on the graph and, if configured, vector search. It can explain architecture, endpoints, or design decisions derived from the graph content.
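Once the server is registered and a graph exists, you interact with these tools through your MCP client's chat. The exact graph: command syntax is defined in the project README; purely as an illustrative sketch (paths and the storage name myproject are placeholders), a session might look like:

```
graph: build a graph for /absolute/path/to/your/repo with storage name myproject
graph: plan how to add rate limiting to the API layer
graph: where is request authentication handled?
```

The first line maps to graph_create, the second to graph_plan, and the third to graph_query.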

How to install

Prerequisites

  • Python 3.11+
  • uv package manager
  • Access/keys for your preferred LLM provider (Anthropic, OpenAI, Gemini, or Azure OpenAI)
  • Git

Install and run locally

  1. Clone the repository
# Clone from GitHub
git clone https://github.com/yumeiriowl/repo-graphrag-mcp.git
cd repo-graphrag-mcp
  2. Install dependencies with uv
uv sync
  3. Prepare the environment
# Copy the settings file (adjust paths as needed)
cp .env.example .env

# Edit the .env file to configure providers, models, and keys
nano .env
  4. Run the MCP server
uv --directory /absolute/path/to/repo-graphrag-mcp run server.py
  5. Optional: register the server with an MCP client (Claude Code, VS Code, etc.) using the examples in the README, then start issuing graph: commands.
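As an illustration of step 3, a minimal .env for an Anthropic-backed setup might look like the following. The variable names come from the registration command above; the values (including the provider identifier "anthropic") are placeholders, and .env.example is the authoritative reference for accepted values:

```shell
# Providers for graph creation and for planning/Q&A (placeholder values —
# check .env.example for the identifiers this project actually accepts)
GRAPH_CREATE_PROVIDER="anthropic"
GRAPH_ANALYSIS_PROVIDER="anthropic"

# Models used for each role
GRAPH_CREATE_MODEL_NAME="claude-haiku-4-5"
GRAPH_ANALYSIS_MODEL_NAME="claude-sonnet-4-5"

# API key for the chosen provider
ANTHROPIC_API_KEY="your_anthropic_api_key"
```

Only the variables for your chosen provider need to be set; the Azure, Gemini, and OpenAI variables can be omitted if unused.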

Additional notes


  • If you change the embedding model or document extraction settings, you may need to rebuild the storage: either delete the existing storage or pass a new storage name to graph_create.
  • The first graph creation can take longer as large repos are processed; for very large repos, consider targeting a smaller subset or directory.
  • Embedding models are downloaded and cached automatically on first use; ensure network access and any required authentication tokens are available.
  • For local testing, you can point OPENAI_BASE_URL at any OpenAI-compatible endpoint, such as a local LM Studio server.
  • The MCP supports both incremental updates and full rebuilds; use storage naming to control whether a rebuild overwrites or extends existing data.
  • If issues arise, verify that the .env file is valid, the API keys are correct, and your MCP client can reach the server process.
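The rebuild tip above can be scripted. This is a sketch under one assumption: the storage directory location (STORAGE_PATH below) defaults to ./storage here for illustration, but storages live wherever graph_create writes them in your setup, so adjust the path accordingly.

```shell
# Remove an existing storage so the next graph_create does a full rebuild.
# STORAGE_PATH is an assumed default — point it at your actual storage.
STORAGE_PATH="${STORAGE_PATH:-./storage}"
if [ -d "$STORAGE_PATH" ]; then
  rm -rf "$STORAGE_PATH"
  echo "removed $STORAGE_PATH; next graph_create will rebuild from scratch"
else
  echo "no storage at $STORAGE_PATH; nothing to remove"
fi
```

The alternative, which preserves the old data, is simply to pass a new storage name to graph_create instead of deleting anything.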
