grepai
Semantic Search & Call Graphs for AI Agents (100% Local)
claude mcp add --transport stdio yoanbernabeu-grepai -- docker run -i \
  --env INDEX_DIR="path to store local index (optional)" \
  --env WATCH_PATHS="comma-separated paths to index (optional)" \
  --env EMBEDDING_PROVIDER="set to the embedding provider you use (e.g., ollama, lmstudio, openai) or leave default if supported" \
  yoanbernabeu/grepai:latest
How to use
grepai is a privacy-first semantic code search tool that can be exposed as an MCP tool, letting AI agents run semantic queries directly against your codebase. It uses vector embeddings to understand code meaning, so natural-language queries such as 'authentication logic' or 'trace who calls handleUserSession' return relevant code snippets or references. Through the MCP integration, an agent can invoke grepai as a tool within a broader workflow to search, index, and trace code usage from an automated session: run the grepai server in a container, and your MCP-enabled agent sends commands to initialize, index, and search your repository. The tool supports indexing (with change watching), semantic search, and call-graph tracing, all locally if you prefer privacy.
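As an alternative to the claude mcp add one-liner, the same server can be declared in an MCP client configuration file. A minimal sketch, assuming the standard Claude Code .mcp.json schema; the server name, mounted path, and provider value are placeholders, not documented grepai defaults:

```json
{
  "mcpServers": {
    "grepai": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/path/to/your/code:/workspace",
        "yoanbernabeu/grepai:latest"
      ],
      "env": {
        "EMBEDDING_PROVIDER": "ollama"
      }
    }
  }
}
```

The -i flag keeps stdin open, which the stdio transport requires; adjust the bind-mount to point at the codebase you want indexed.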
How to install
Prerequisites:
- Docker installed on the host where the MCP server will run
- Optional: an embedding provider (e.g., Ollama, LM Studio, or OpenAI) configured and accessible
Installation steps:
- Ensure Docker is running on your machine
- Pull and run the grepai MCP service (as a container):
# Run the grepai MCP server container (interactive)
docker run -it --rm \
  -v /path/to/your/code:/workspace \
  -e EMBEDDING_PROVIDER=ollama \
  yoanbernabeu/grepai:latest
- If your environment requires a specific embedding provider, install and configure it per its documentation (for example Ollama) and ensure the provider is reachable from the container.
- In your MCP configuration, reference the server as shown in the mcp_config example and start the MCP workflow that includes the grepai tool.
Notes:
- If you need to customize paths to index, use environment variables or bind-mounts to expose your codebase to the container.
- Ensure the container can reach your embedding provider over the network if the provider runs outside the container (for example, a host-local Ollama instance).
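The bind-mount and networking notes above can be combined in a small Compose file. A sketch under stated assumptions: the /workspace mount point, the WATCH_PATHS value, and a host-local Ollama are illustrative choices, not documented grepai defaults.

```yaml
services:
  grepai:
    image: yoanbernabeu/grepai:latest
    stdin_open: true                         # MCP stdio transport needs an open stdin
    volumes:
      - /path/to/your/code:/workspace        # expose the codebase to the container
    environment:
      EMBEDDING_PROVIDER: ollama
      WATCH_PATHS: /workspace                # assumed: index the mounted code
    extra_hosts:
      - "host.docker.internal:host-gateway"  # lets the container reach a host-local Ollama on Linux
```

On macOS and Windows, host.docker.internal resolves without the extra_hosts entry; the mapping shown is the usual workaround for Linux hosts.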
Additional notes
Tips and common considerations:
- When using a local embedding provider, keep the index up to date with your codebase (watch mode or periodic re-indexing) so search results stay relevant.
- If you encounter performance issues, adjust the indexing cadence (watch vs. manual indexing) and consider narrowing the indexed scope to relevant packages.
- Ensure the MCP client and grepai tool agree on data formats for search queries and results.
- For privacy-sensitive codebases, prefer local embeddings and local indexing to avoid sending code to external services.
- The env var EMBEDDING_PROVIDER is optional depending on how you configure grepai; set it to ollama, lmstudio, or openai as needed.
Related MCP Servers
mcp-local-rag
Local-first RAG server for developers using MCP. Semantic + keyword search for code and technical docs. Fully private, zero setup.
claude-code-open
Open source AI coding platform with Web IDE, multi-agent system, 37+ tools, MCP protocol. MIT licensed.
CodeMCP
Code intelligence for AI assistants - MCP server, CLI, and HTTP API with symbol navigation, impact analysis, and architecture mapping
mcp-ragex
MCP server for intelligent code search: semantic (RAG), symbolic (tree-sitter), and regex (ripgrep) search modes. Built for Claude Code and AI coding assistants.
devtap
Bridge build/dev process output to AI coding sessions via MCP — supports Claude Code, Codex, OpenCode, Gemini CLI, and aider
docrag
AI-powered documentation RAG system with MCP server for Claude Code. Search and retrieve technical documentation on-demand with vector embeddings and smart web scraping.