kb-bridge
MCP server for enhancing knowledge base search and retrieval
claude mcp add --transport stdio egpivo-kb-bridge python -m kbbridge.server --host 0.0.0.0 --port 5210 \
  --env RETRIEVAL_ENDPOINT="Retrieval backend endpoint (e.g., https://api.dify.ai/v1)" \
  --env RETRIEVAL_API_KEY="Retrieval backend API key" \
  --env LLM_API_URL="LLM service URL" \
  --env LLM_MODEL="LLM model name (e.g., gpt-4o)" \
  --env LLM_API_TOKEN="LLM API token" \
  --env RERANK_URL="Optional rerank service URL" \
  --env RERANK_MODEL="Optional rerank model name"
How to use
KB-Bridge is a Python-based MCP server that provides intelligent knowledge base search and retrieval with support for multiple backends. It exposes an MCP endpoint that clients can query to perform hybrid, semantic, keyword, and full-text search across configured knowledge sources, then synthesize and refine answers with optional quality checks. The server integrates with backends such as Dify and can be extended to additional providers via its retrieval and re-ranking tooling.
You can call the MCP tools (notably the default 'assistant') to perform searches and retrieve structured results suitable for downstream agents (e.g., Dify workflows or custom MCP clients). The available tooling includes modules for assistant (answer extraction), file_discover (semantic file discovery), file_lister (dataset inspection), keyword_generator (LLM-based keyword expansion), retriever (various search methods), and file_count (dataset metrics), enabling an end-to-end workflow from query understanding to refined answers.
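As a minimal sketch of how a client would invoke a tool such as 'assistant', the following builds a JSON-RPC 2.0 "tools/call" request body of the kind MCP clients send; the argument name "query" is illustrative, not confirmed by the project's schema.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request body for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Example: call the default 'assistant' tool (argument names are illustrative).
body = build_tool_call("assistant", {"query": "Where is the Q3 report?"})
print(body)
```

An MCP client library normally handles this framing for you; the sketch only shows the shape of the request that reaches the server.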
How to install
Prerequisites:
- Python 3.8+ installed on the host
- Pip available in PATH
Step 1: Create and activate a virtual environment (recommended)
- python -m venv venv
- source venv/bin/activate # on macOS/Linux
- venv\Scripts\activate # on Windows
Step 2: Install the KB-Bridge package
- pip install kbbridge
Step 3: Prepare environment/configuration
- Create a .env file with your credentials and endpoints as described in the configuration section of the README. Example:
RETRIEVAL_ENDPOINT=https://api.dify.ai/v1
RETRIEVAL_API_KEY=your-retrieval-api-key
LLM_API_URL=https://your-llm-service.com/v1
LLM_MODEL=gpt-4o
LLM_API_TOKEN=your-token-here
Optional:
RERANK_URL=https://your-rerank-api.com
RERANK_MODEL=your-rerank-model
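To catch configuration mistakes before starting the server, you can validate the .env file yourself. This is a minimal sketch, not the package's own config loader; it assumes the simple KEY=VALUE format shown above and the variable names from this section.

```python
# Required and optional variables, as listed in the configuration section above.
REQUIRED_VARS = ["RETRIEVAL_ENDPOINT", "RETRIEVAL_API_KEY",
                 "LLM_API_URL", "LLM_MODEL", "LLM_API_TOKEN"]
OPTIONAL_VARS = ["RERANK_URL", "RERANK_MODEL"]

def load_env_file(path: str) -> dict:
    """Parse simple KEY=VALUE lines from a .env file, skipping blanks and comments."""
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

def missing_required(config: dict) -> list:
    """Return required variable names that are absent or empty."""
    return [name for name in REQUIRED_VARS if not config.get(name)]
```

If missing_required() returns a non-empty list, fix the .env file before moving on to Step 4.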
Step 4: Run the server
- python -m kbbridge.server --host 0.0.0.0 --port 5210
Optional: If you prefer Docker for local development, see the deployment notes in the README and use a Dockerfile to build kbbridge:latest and run with environment variables loaded from a .env file.
Additional notes
Tips and caveats:
- Ensure the environment variables for retrieval and LLM services are correctly set; the server will not start correctly without required credentials for your chosen backends.
- The MCP endpoint is exposed at http://<host>:5210/mcp; use this URL from MCP clients or Dify agent workflows.
- If you update any dependencies or switch backends, re-run installation steps or rebuild the Docker image as needed.
- For development and testing, install development dependencies via: pip install -e ".[dev]" and run tests with pytest.
- When using Docker, map port 5210 to the host and provide the same environment variables via --env-file .env for parity with local runs.
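When pointing an MCP client at the server, note that the 0.0.0.0 bind address is not itself reachable by clients. A small sketch of deriving the client-facing endpoint URL from the host and port used above (the localhost substitution is a convention for local runs, not project-specific behavior):

```python
def mcp_endpoint(host: str, port: int = 5210) -> str:
    """Build the MCP endpoint URL that clients should point at."""
    # 0.0.0.0 is a bind address, not a destination; use localhost for local runs.
    client_host = "localhost" if host == "0.0.0.0" else host
    return f"http://{client_host}:{port}/mcp"

print(mcp_endpoint("0.0.0.0"))  # http://localhost:5210/mcp
```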