
mcp-context

MCP Context Server — a FastMCP-based server providing persistent multimodal context storage for LLM agents.

Installation
Run this command in your terminal to add the MCP server to Claude Code.
Run in terminal:
Command
claude mcp add --transport stdio alex-feel-mcp-context-server \
  --env DB_PATH="${DB_PATH:-~/.mcp/context_storage.db}" \
  --env LOG_LEVEL="INFO" \
  -- uvx --python 3.12 --with "mcp-context-server[embeddings-ollama,reranking]" mcp-context-server
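After adding the server, you can confirm it was registered. A minimal check, assuming the `claude` CLI is on your PATH (the guard below only avoids a hard failure when it is not):

```shell
# List registered MCP servers if the claude CLI is available.
if command -v claude >/dev/null 2>&1; then
  result=$(claude mcp list)   # output should include alex-feel-mcp-context-server
else
  result="claude CLI not found on PATH - install Claude Code first"
fi
echo "$result"
```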

How to use

The MCP Context Server provides a persistent multimodal context store for MCP-compatible clients. It can store and retrieve both text and image data, and it supports thread-based scoping, so multiple agents working on the same task share a common context.

Search is available in full-text, semantic, and hybrid modes, with optional cross-encoder reranking to refine results. Features such as semantic search and chunking can be enabled or disabled through environment variables and configuration. By default, the server uses a SQLite database, but it can be configured to use PostgreSQL for high-concurrency production setups.

To use the server with Claude Code or another MCP client, add it as an MCP backend via the provided CLI command or by editing your .mcp.json file, pointing to the uvx-based stdio invocation of the mcp-context-server package.
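If you prefer editing .mcp.json directly rather than using the CLI, a sketch of the stdio entry might look like the following. The `mcpServers` layout follows Claude Code's standard project-config shape; the DB_PATH and LOG_LEVEL values here are examples, not requirements:

```shell
# Write a project-scoped .mcp.json entry for the context server.
# Quoting the heredoc delimiter keeps ${...} and ~ literal in the file.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "alex-feel-mcp-context-server": {
      "command": "uvx",
      "args": [
        "--python", "3.12",
        "--with", "mcp-context-server[embeddings-ollama,reranking]",
        "mcp-context-server"
      ],
      "env": {
        "DB_PATH": "~/.mcp/context_storage.db",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}
EOF
```

Project-scoped .mcp.json files are picked up by Claude Code from the repository root, which makes the server configuration shareable with collaborators.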

How to install

Prerequisites:

  • uv (used to run the server via uvx)
  • Python 3.12
  • Optional: Ollama with the configured embedding model (for semantic search and reranking)

Installation steps:

  1. Install uv (if not already installed) following the official guide.
  2. Ensure Python 3.12 is available in your environment.
  3. Install the MCP Context Server package from PyPI:
    • pip install mcp-context-server
  4. Verify installation by running a basic startup command (example using uvx as in the README):
    • uvx --python 3.12 --with "mcp-context-server[embeddings-ollama,reranking]" mcp-context-server
  5. Create an MCP config (see mcp_config below) to connect your MCP clients (e.g., Claude Code) to this server.
  6. Start the server using your preferred orchestration (CLI, docker, or directly via your MCP manager).
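The environment variables referenced in the steps above can be set in the shell that launches the server; a minimal sketch, with illustrative values rather than required settings:

```shell
# Illustrative environment for a local run; values are examples only.
export DB_PATH="${DB_PATH:-$HOME/.mcp/context_storage.db}"
export LOG_LEVEL="INFO"

# Make sure the directory for the SQLite database exists before startup.
mkdir -p "$(dirname "$DB_PATH")"
echo "Database will be stored at: $DB_PATH"
```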

Notes:

  • If you want to use a persistent PostgreSQL backend, configure STORAGE_BACKEND and DB connection details via environment variables in your MCP configuration.
  • For embedding and reranking features, ensure Ollama and the specified embedding model are installed and ready.
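For the PostgreSQL note above, STORAGE_BACKEND is the selector named in this document; the connection-detail variable names below (POSTGRES_HOST and friends) are hypothetical placeholders, so check the server's configuration reference for the exact names:

```shell
# STORAGE_BACKEND comes from the notes above; the POSTGRES_* names are
# hypothetical placeholders for whatever connection variables the server expects.
export STORAGE_BACKEND="postgresql"
export POSTGRES_HOST="localhost"   # hypothetical variable name
export POSTGRES_PORT="5432"        # hypothetical variable name
export POSTGRES_DB="mcp_context"   # hypothetical variable name
echo "Backend: $STORAGE_BACKEND"
```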

Additional notes

Tips and common issues:

  • Environment variables: You can tune core and search settings via environment variables (e.g., LOG_LEVEL, DB_PATH, MAX_IMAGE_SIZE_MB, MAX_TOTAL_SIZE_MB). Use the ${VAR:-default} syntax in your MCP config to provide defaults.
  • CHUNKING and RERANKING: Enabling chunking (ENABLE_CHUNKING) and cross-encoder reranking (ENABLE_RERANKING) can improve semantic search quality, but may increase latency. Adjust CHUNK_SIZE, CHUNK_OVERLAP, and RERANKING_OVERFETCH to balance performance and accuracy.
  • Semantic vs. Full-Text: If you enable semantic search (ENABLE_SEMANTIC_SEARCH), ensure your embedding provider (Ollama, OpenAI, etc.) is correctly configured and that the embedding model's output dimensions match EMBEDDING_DIM.
  • Backends: For production, consider PostgreSQL (high concurrency) over SQLite. Ensure backends are properly migrated when changing EMBEDDING_DIM or schema.
  • Troubleshooting: If MCP tools are not available in your CLI, verify the uvx invocation in your mcp.json and ensure the mcp-context-server package is installed in the same Python environment referenced by --python.
  • Licensing and compatibility: This server aims to be MCP-standard compliant and works with Claude Code, LangGraph, and other MCP clients. Check compatibility with your MCP version if upgrading.
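A quick preflight for the troubleshooting point above, sketched as a script that checks the pieces the uvx invocation depends on (the check names are illustrative; adapt them to your setup):

```shell
# Preflight for the uvx-based invocation in mcp.json: verify that uv/uvx
# is installed and that a Python 3.12 interpreter is reachable on PATH.
problems=""
command -v uvx >/dev/null 2>&1 || problems="$problems uvx-missing"
command -v python3.12 >/dev/null 2>&1 || problems="$problems python3.12-missing"

if [ -z "$problems" ]; then
  status="preflight OK"
else
  status="preflight issues:$problems"
fi
echo "$status"
```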
