
context-lens

Semantic search knowledge base for MCP-enabled AI assistants. Index local files or GitHub repos, query with natural language. Built on LanceDB vector storage. Works with Claude Desktop, Cursor, and other MCP clients.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio cornelcroi-context-lens uvx context-lens

How to use

Context Lens provides a self-contained MCP server that builds a local, searchable knowledge base from your content, using LanceDB as the embedded vector store. It supports semantic search across code, docs, contracts, and plain text by converting content into vector representations and retrieving by concept rather than by keyword match. You can interact with it through any MCP client (for example Claude Desktop, Continue.dev, or a custom MCP integration) by pointing the client at the context-lens server with the uvx-based command in your MCP configuration. The server exposes capabilities for indexing content, running semantic queries, and retrieving the relevant chunks that ground answers, all while keeping data local on your machine. Embeddings are generated by a local model pipeline, so content stays private and no external API calls are made.
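The concept-based retrieval described above can be sketched with a toy example: embed each chunk and the query as vectors, then rank chunks by cosine similarity. Here a tiny hand-made concept lexicon stands in for the real local embedding model, and the chunk texts are hypothetical; this is an illustration of the general technique, not Context Lens's actual pipeline:

```python
import math

# Toy "concept" lexicon standing in for a learned embedding model
# (Context Lens uses a real local embedding model instead).
CONCEPTS = {
    "auth": {"auth", "authentication", "login", "password", "credentials", "token", "session"},
    "ui": {"css", "layout", "grid", "page", "button", "style"},
    "data": {"database", "index", "query", "vector", "storage"},
}

def embed(text: str) -> list[int]:
    # Map a text to a vector of concept activations.
    words = set(text.lower().split())
    return [len(words & vocab) for vocab in CONCEPTS.values()]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(chunks, query, k=1):
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), qv), reverse=True)[:k]

chunks = [
    "the auth flow verifies credentials before granting a session token",
    "css grid layout examples for the landing page",
    "vector storage and index maintenance in the database",
]
# The query shares no literal words with the first chunk, but its
# "auth" concept matches, so concept-based retrieval still finds it.
print(search(chunks, "how does authentication work"))
```

A keyword search would miss the first chunk entirely; ranking in concept space is what lets questions like "how does authentication work" surface related code and docs.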

To use it, connect via an MCP client with the appropriate server specification. For example, in Kiro IDE or Cursor, you’ll configure the mcpServers entry for context-lens with command uvx and the argument context-lens. Once connected, you can add content to the knowledge base, perform semantic searches, and have the agent retrieve relevant chunks and respond with contextual answers. The system supports asking questions like how a particular authentication flow works across code and docs, and it will surface meaningfully related content rather than just exact word matches.
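Concretely, the mcpServers entry described above might look like this in a client configuration file such as .mcp.json (the exact file name and location depend on your client):

```json
{
  "mcpServers": {
    "context-lens": {
      "command": "uvx",
      "args": ["context-lens"]
    }
  }
}
```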

How to install

Prerequisites:

  • Python 3.11 or higher installed on your system
  • Access to the uvx command (uvx ships with uv, Astral's Python package and tool manager), or the ability to install it via your preferred method

Installation steps:

  1. Install Python 3.11+ from https://www.python.org/downloads/

  2. Install uv, which provides the uvx command, if you don’t already have it. The exact method varies by environment; common options are the official install script or pipx. Example (adjust to your environment):

    • curl -LsSf https://astral.sh/uv/install.sh | sh # official uv installer
    • pipx install uv # or: pip install uv
    • Ensure uvx is available in your PATH

  3. Install Context Lens so that uvx can run the context-lens package. With uvx this step is usually optional, since uvx fetches and runs packages from PyPI in an isolated environment on demand; to install the PyPI package explicitly:

    pip install context-lens

  4. Start the MCP server using uvx for the context-lens package:

    uvx context-lens

  5. Verify the server is reachable from your MCP client. With the stdio transport there is no network port; the client launches the server process and communicates over stdin/stdout. Consult your MCP client's documentation for how to point it at the context-lens server (e.g., through the .mcp.json files used by Kiro, Cursor, or Claude Desktop).

Optional configuration (recommended):

  • Create or edit your MCP client configuration (e.g., .mcp.json) to include:

    {
      "mcpServers": {
        "context-lens": {
          "command": "uvx",
          "args": ["context-lens"]
        }
      }
    }

  • If your setup supports autoApprove actions, you can enable them by adding, e.g., "autoApprove": ["list_documents", "search_documents"] to the context-lens server entry.

Note: The exact installation steps for uv and uvx can vary by environment. If your system provides a packaged uv or a containerized option, prefer that method and adapt the CLI commands accordingly.

Additional notes

Tips and common issues:

  • Data locality: Context Lens stores embeddings and the LanceDB database on disk. Ensure the storage path is writable and has enough space for your content and embeddings.
  • Model and performance: The embedding model is 384-dimensional and designed for local use. If you experience performance issues, consider adjusting chunk size (default 1000 chars) and overlap (default 200 chars) to suit your content and hardware.
  • Content parsing: The system uses language-aware parsers (e.g., AST for Python, structural parsing for JSON, header-based splitting for Markdown). Ensure your content is in a readable format for best results.
  • Content indexing workflow: You can index local files or public GitHub repositories. To index a new repository, add its path/URL to the LanceDB-backed knowledge base using your MCP client tools.
  • Security and privacy: All data remains local. If you share a machine or repository, consider encrypting the LanceDB store or restricting access to the host.
  • Environment variables: If you rely on tokens or environment-based configuration, you can pass these via your MCP client or environment; the default setup shown here uses no required environment variables beyond what your client needs to connect.
  • Registry and verification: Context Lens is published in the MCP registry as io.github.cornelcroi/context-lens. See REGISTRY.md for guidance on verifying the installation.
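The chunking defaults noted above (1000-character chunks with a 200-character overlap) can be sketched as a simple sliding window. This illustrates the general technique only; Context Lens's real splitter is language-aware (AST for Python, headers for Markdown), not a fixed-width window:

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks with overlap, so content that
    spans a chunk boundary still appears intact within one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # the last chunk already reaches the end of the text
    return chunks

doc = "x" * 2500
parts = chunk_text(doc)
print([len(p) for p in parts])  # → [1000, 1000, 900]
```

Larger chunks give each embedding more context but make retrieval coarser; the overlap is what keeps a sentence straddling a boundary searchable, at the cost of some duplicated storage.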
