CodeCompass

CodeCompass: AI-powered Vibe Coding with MCP. Connects Git repositories to AI assistants like Claude, using Ollama for privacy or OpenAI for cloud. Integrates with VSCode, Cursor, and more.

Installation
Run this command in your terminal to add the MCP server to Claude Code.
```bash
claude mcp add alvinveroy-codecompass \
  --transport stdio \
  --env HTTP_PORT=3001 \
  --env OBSERVABILITY=optional \
  --env OLLAMA_MODEL_EMBEDDING=nomic-embed-text:v1.5 \
  --env OLLAMA_MODEL_SUGGESTION=llama3.1:8b \
  -- npx -y @alvinveroy/codecompass@latest
```

How to use

CodeCompass analyzes your codebase and provides AI-assisted coding guidance by indexing the repository into a vector store (Qdrant) and powering suggestions with local or cloud LLMs. It uses an Agentic RAG approach: the central agent_query tool orchestrates internal capabilities to gather context, summarize large diffs or file lists, and retrieve relevant code snippets for informed suggestions.

Run CodeCompass in server mode to index a repository, then use the CLI client to invoke agent_query or individual capability tools to search code, fetch full file content, list directories, and fetch adjacent chunks. Local LLMs are supported through Ollama (e.g., the nomic-embed-text:v1.5 embedding model and the llama3.1:8b suggestion model), as are cloud options such as DeepSeek when configured with an API key.
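
As an alternative to the bundled CLI client, any MCP client can invoke these tools. Below is a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the tool name agent_query comes from the description above, but the `query` argument name is an assumption, so check the repository docs for the actual input schema.

```typescript
// Minimal sketch: connect to the CodeCompass MCP server over stdio and call
// agent_query. The `query` argument name is an assumption; consult the
// CodeCompass docs for the tool's actual input schema.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@alvinveroy/codecompass@latest"],
});

const client = new Client(
  { name: "codecompass-example", version: "0.1.0" },
  { capabilities: {} },
);
await client.connect(transport); // spawns the server and performs the MCP handshake

// Discover the exposed tools (agent_query, capability_* tools, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Ask the agent to orchestrate its capabilities for a repository question.
const result = await client.callTool({
  name: "agent_query",
  arguments: { query: "Where is the HTTP utility server configured?" }, // assumed field
});
console.log(result.content);

await client.close();
```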

How to install

Prerequisites:

  • Node.js v20 or newer
  • Docker (for Qdrant)
  • Ollama (for local LLM/embedding capabilities) and models
  • Optional: DeepSeek API key for cloud-based suggestions

Installation steps:

  1. Install Ollama and pull the default embedding and suggestion models:

```bash
# Install Ollama (via the installer or your OS package manager), then pull the default models
ollama pull nomic-embed-text:v1.5
ollama pull llama3.1:8b
```

  2. Install and run Qdrant (the vector store):

```bash
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
```

Verify Qdrant is up at http://localhost:6333/dashboard. (A preflight sketch after this list checks both services programmatically.)

  3. Install CodeCompass globally so the codecompass command is available from any directory:

```bash
npm install -g @alvinveroy/codecompass@latest
```

To run it ad hoc without a global install, use npx -y @alvinveroy/codecompass@latest instead.

  4. Start the CodeCompass server (in your repository root or a dedicated workspace):

```bash
codecompass [repoPath] [--port <number>]
```

  • If you omit [repoPath], the current directory is used.
  • Optional: --port overrides the default HTTP port (3001).
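
Before starting the server, it can help to confirm that both backing services are reachable and the models are pulled. The sketch below is optional and not part of CodeCompass itself; it uses Qdrant's standard /healthz endpoint and Ollama's /api/tags endpoint on their default ports.

```typescript
// Preflight check (optional): confirm Qdrant and Ollama are up and the
// default models are pulled before starting CodeCompass.
// Ports assume the defaults from the steps above: Qdrant 6333, Ollama 11434.

async function checkQdrant(): Promise<void> {
  const res = await fetch("http://localhost:6333/healthz"); // Qdrant's built-in health endpoint
  if (!res.ok) throw new Error(`Qdrant not healthy: HTTP ${res.status}`);
  console.log("Qdrant: OK");
}

async function checkOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/tags"); // lists locally pulled models
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  for (const required of ["nomic-embed-text:v1.5", "llama3.1:8b"]) {
    if (!models.some((m) => m.name.startsWith(required))) {
      throw new Error(`Missing Ollama model ${required}; run: ollama pull ${required}`);
    }
  }
  console.log("Ollama: OK, default models present");
}

await checkQdrant();
await checkOllama();
```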

Additional notes

  • The server communicates primarily over stdio MCP, with a lightweight HTTP utility server on the configured port for health checks and indexing status. If you run multiple instances, give each one a distinct HTTP port to avoid conflicts.
  • You can configure environment variables to fine-tune indexing, agent behavior, and which LLM models are used. Common env vars include those for embedding models, suggestion models, and DeepSeek API keys.
  • When using Ollama, ensure the specified embedding and suggestion models are available locally and correctly configured via environment variables or defaults.
  • If you rely on DeepSeek for cloud-based responses, set the DeepSeek API key in the environment and verify network access. Monitor indexing status and repository diffs through the utility HTTP endpoints.
  • For CLI tool usage, the available tools include agent_query (to orchestrate capabilities), capability_searchCodeSnippets, capability_getFullFileContent, capability_listDirectory, capability_getAdjacentFileChunks, capability_getRepositoryOverview, capability_fetchMoreSearchResults, and more as described in the repository docs.
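
For example, a client-side poll of the utility server might look like the sketch below. The /status path is a placeholder assumption, not a documented route; substitute the actual endpoints from the repository docs.

```typescript
// Sketch: poll the CodeCompass HTTP utility server for indexing status.
// The port matches HTTP_PORT (default 3001). The "/status" path is an
// assumption for illustration; check the repository docs for real routes.
const port = process.env.HTTP_PORT ?? "3001";

const res = await fetch(`http://localhost:${port}/status`);
if (!res.ok) throw new Error(`Utility server returned HTTP ${res.status}`);
console.log(await res.text()); // indexing status / health payload
```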
