
coco-search

Local-first hybrid semantic code search tool. Indexes codebases into PostgreSQL with pgvector embeddings via Ollama, and combines vector similarity with keyword search using RRF fusion. Supports 30+ languages. Features a CLI, an MCP server, a web dashboard, and an interactive REPL.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add violetcranberry-coco-search

How to use

CocoSearch is a local-first hybrid semantic code search tool exposed as an MCP server. It indexes your codebase using CocoIndex and Tree-sitter to preserve syntactic structure, stores embeddings in PostgreSQL with pgvector, and generates local embeddings via Ollama by default. The server provides multiple access modes: a web dashboard for visual exploration, a CLI for quick searches, and an interactive REPL for ad-hoc queries. You can also enable optional remote embedding providers (such as OpenAI or OpenRouter) if your workflow requires managed embedding services; when remote providers are used, only text chunks are sent for embedding, so the codebase as a whole never leaves your machine. Available tools include incremental indexing of repositories, semantic search based on vector similarity, and keyword search, with RRF fusion blending the semantic and lexical signals.
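To make the fusion step concrete, here is a minimal sketch of Reciprocal Rank Fusion (RRF), the generic technique named above, merging a vector-similarity ranking with a keyword ranking. The chunk IDs are hypothetical, and k=60 is the conventional smoothing constant; CocoSearch's internal implementation may differ in detail.

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc IDs (best first) via RRF.

    Each document scores 1 / (k + rank) per list it appears in;
    documents ranked well in multiple lists rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["chunk_a", "chunk_b", "chunk_c"]  # vector-similarity order
keyword = ["chunk_c", "chunk_a", "chunk_d"]   # keyword-match order
print(rrf_fuse([semantic, keyword]))
# chunk_a and chunk_c lead: each appears near the top of both lists
```

Note that RRF needs only ranks, not raw scores, which is why it blends cosine similarities and keyword relevance cleanly without normalization.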

How to install

Prerequisites:
- Python >= 3.11 installed on your system.
- PostgreSQL server running with the pgvector extension installed.
- Optional: Ollama for local embeddings (recommended default).

Installation steps:

1) Create a Python virtual environment (optional but recommended):

   python -m venv venv
   source venv/bin/activate   # on Unix/macOS
   venv\Scripts\activate      # on Windows

2) Install the cocosearch package from PyPI:

   python -m pip install --upgrade pip
   pip install cocosearch

3) Ensure PostgreSQL is running and pgvector is enabled:

   - Create the target database for CocoSearch.
   - Enable pgvector as an extension in that database:

     CREATE EXTENSION IF NOT EXISTS vector;

4) (Optional) Install Ollama for local embeddings and start the Ollama daemon.

5) Configure environment variables (see mcp_config example):

   export DATABASE_URL="postgresql://user:password@host:5432/dbname"
   export COCOSEARCH_HOME="/path/to/data"
   # If using remote embeddings, set the appropriate API keys or endpoints.

6) Start the MCP server using the recommended MCP runner command for your environment:

   uvx cocosearch   # using the uvx runner, per MCP convention for Python packages
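The environment variables above typically live in the MCP client configuration rather than your shell profile. A minimal sketch of a `.mcp.json` entry for Claude Code, assuming the server name `cocosearch` and placeholder credentials (substitute your actual database URL and data path):

```json
{
  "mcpServers": {
    "cocosearch": {
      "command": "uvx",
      "args": ["cocosearch"],
      "env": {
        "DATABASE_URL": "postgresql://user:password@localhost:5432/cocosearch",
        "COCOSEARCH_HOME": "/path/to/data"
      }
    }
  }
}
```

Setting the variables here keeps the database credentials scoped to the MCP server process instead of your whole shell session.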

Additional notes

Notes and tips:
- Ensure the PostgreSQL user has permission to create tables and extensions.
- If you're indexing large codebases, enable incremental indexing and monitor disk usage for embeddings.
- When using remote embedding providers, be mindful of data privacy: only the text of the chunks being embedded is sent, never the repository as a whole.
- The COCOSEARCH_HOME path stores local indexes and caches; adjust permissions accordingly.
- If you encounter connection issues with PostgreSQL, verify that the host, port, and credentials are correct and that the database accepts connections from the MCP server host.
- For development, you can run a local Ollama instance to keep embeddings on-device.
- Review logs for any schema migrations or optimizer recommendations related to pgvector.
