VectorCode
A code repository indexing tool to supercharge your LLM experience.
Add the server to Claude Code over stdio:
claude mcp add --transport stdio davidyz-vectorcode python -m vectorcode
How to use
VectorCode is a Python-based code repository indexing tool that helps you build better prompts for coding-focused LLMs. It indexes your project and exposes a structured API for querying code metadata, documents, and chunked file content. A companion Neovim plugin brings the same indexing and retrieval capabilities into your editor, enabling context-aware prompts and code navigation.
The CLI lets you index a local project, then search and retrieve relevant chunks of code or documentation to inform your prompts or to build richer AI-assisted tooling. Available capabilities include:
- an indexing workflow that scans and chunks files
- metadata tagging for files
- configurable chunk sizes
- a query engine that retrieves matching code snippets or documentation for a prompt or question
The Neovim integration exposes the same capabilities from within your editor, providing live prompts and insights as you navigate a repository.
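The index-then-query flow described above can be illustrated with a minimal in-memory sketch. This is not VectorCode's actual API; the class and method names here are hypothetical, and a real tool would rank chunks by embedding similarity rather than keyword match:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    path: str   # file the chunk came from
    start: int  # 1-based starting line number
    text: str   # chunk contents

@dataclass
class ToyIndex:
    """Illustrative stand-in for a chunked code index (not VectorCode's API)."""
    chunk_lines: int = 4
    chunks: list = field(default_factory=list)

    def add_file(self, path: str, content: str) -> None:
        # Split the file into fixed-size line chunks, keeping line offsets
        # so a hit can point back to its location in the source file.
        lines = content.splitlines()
        for i in range(0, len(lines), self.chunk_lines):
            self.chunks.append(
                Chunk(path, i + 1, "\n".join(lines[i : i + self.chunk_lines]))
            )

    def query(self, term: str) -> list:
        # Plain substring match stands in for semantic retrieval here.
        return [c for c in self.chunks if term in c.text]

index = ToyIndex()
index.add_file(
    "app.py",
    "def handler(req):\n    return db.lookup(req.key)\n\ndef health():\n    return 'ok'\n",
)
hits = index.query("db.lookup")
print(hits[0].path, hits[0].start)  # app.py 1
```

The point of the sketch is the shape of the workflow: files are chunked once at index time, and queries return chunks with enough location metadata to build a prompt or jump to the code.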
How to install
Prerequisites:
- Python 3.8+ (recommended)
- Access to pip (Python package manager)
Installation steps:
- Ensure Python and pip are installed and accessible in your shell.
- Install VectorCode from PyPI: python3 -m pip install vectorcode
- Run the CLI/module to index or interact with a repository: python3 -m vectorcode
Optional (Neovim integration):
- Install and configure the VectorCode Neovim plugin as documented in the project docs to use the editor integration.
Notes:
- If you manage multiple Python environments, consider using a virtual environment (python -m venv .venv; source .venv/bin/activate) before installing.
Additional notes
Environment and configuration tips:
- After installation, you can configure which parts of your project are indexed and how files are chunked to optimize search relevance. See the project docs for the chunk-size, metadata, and project-root detection options.
- If you encounter indexing slowdowns on large repos, adjust the chunk size or enable incremental indexing if supported.
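To see how the chunk-size setting trades off index size against match granularity, here is a generic fixed-size chunker with overlap. It illustrates the trade-off only; it is not VectorCode's chunking code:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into character chunks. Overlap keeps matches that would
    otherwise be cut at a chunk boundary discoverable."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start : start + size])
    return chunks

doc = "x" * 500
print(len(chunk_text(doc, size=200, overlap=40)))  # 4 -- fewer, larger chunks
print(len(chunk_text(doc, size=50, overlap=10)))   # 13 -- more, smaller chunks
```

Larger chunks mean fewer entries to index and search (faster on big repos) but coarser retrieval; smaller chunks give sharper matches at the cost of a bigger index.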
- The MCP integration lets you expose the indexed data to other MCP-enabled tools. When running VectorCode within a broader MCP workflow, make sure any required environment variables are set as described in the VectorCode documentation.
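As an illustration, an MCP client that reads a JSON server configuration could register VectorCode with an entry like the following, mirroring the claude mcp add command above. The exact config file location and schema depend on your MCP client, so check its documentation:

```json
{
  "mcpServers": {
    "davidyz-vectorcode": {
      "command": "python",
      "args": ["-m", "vectorcode"]
    }
  }
}
```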
Related MCP Servers
haiku.rag
Opinionated agentic RAG powered by LanceDB, Pydantic AI, and Docling
mcp-pinecone
Model Context Protocol server to allow for reading and writing from Pinecone. Rudimentary RAG
Archive-Agent
Find your files with natural language and ask questions.
RiMCP_hybrid
Rimworld Coding RAG MCP server
code-memory
MCP server with local vector search for your codebase. Smart indexing, semantic search, Git history — all offline.
srag
Semantic code search and RAG system written in Rust with tree-sitter chunking, MCP server for IDE integration, prompt injection detection, and secret redaction