mcp-documentation
MCP Documentation Server - Bridge the AI Knowledge Gap. ✨ Features: Document management • Gemini integration • AI-powered semantic search • File uploads • Smart chunking • Multilingual support • Zero-setup 🎯 Perfect for: New frameworks • API docs • Internal guides
claude mcp add --transport stdio andrea9293-mcp-documentation-server npx -y @andrea9293/mcp-documentation-server \
  --env WEB_PORT="3080" \
  --env MCP_BASE_DIR="path/to/workspace" \
  --env START_WEB_UI="true" \
  --env GEMINI_API_KEY="your-api-key-here" \
  --env MCP_EMBEDDING_MODEL="Xenova/all-MiniLM-L6-v2"
(MCP_BASE_DIR defaults to ~/.mcp-documentation-server; set START_WEB_UI="false" to disable the built-in web UI; GEMINI_API_KEY is optional and enables Gemini-powered search; MCP_EMBEDDING_MODEL defaults to Xenova/all-MiniLM-L6-v2.)
How to use
This MCP Documentation Server provides a local-first document management system with semantic search and optional AI-powered analysis. Documents are stored in an embedded Orama vector database, enabling hybrid search (full-text plus vector similarity) without requiring external services. A built-in web UI is enabled by default, offering dashboards, document management, file uploads, and context-window exploration. You can run the server and interact with tools such as add_document, list_documents, search_all_documents, get_context_window, and more. If you supply a GEMINI_API_KEY, you can enable Gemini-powered AI search for deeper contextual insights. Everything runs locally, with data stored under ~/.mcp-documentation-server by default (configurable via MCP_BASE_DIR).
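The chunking and context-window behavior described above can be sketched in plain Python. This is a simplified illustration, not the server's actual implementation: chunk_text and context_window below are hypothetical helpers, and the real chunk sizes and overlap are internal details of the server.

```python
# Hypothetical sketch: fixed-size chunking plus a context window,
# illustrating conceptually what get_context_window returns.

def chunk_text(text: str, size: int = 20) -> list[str]:
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def context_window(chunks: list[str], index: int,
                   before: int = 1, after: int = 1) -> list[str]:
    """Return the chunk at `index` plus its neighboring chunks."""
    start = max(0, index - before)
    return chunks[start:index + after + 1]

doc = "MCP servers expose tools; documents are split into chunks for search."
chunks = chunk_text(doc, size=20)
window = context_window(chunks, index=1)  # middle chunk plus its neighbors
print(len(chunks), len(window))
```

In the real server, asking for a context window around a matching chunk serves the same purpose: the surrounding chunks give the model enough context to interpret the hit.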
How to install
Prerequisites:
- Node.js and npm installed on your system
- Internet access to install the MCP server package from npm
Step 1: Install Node.js (if not already installed)
- On macOS: install via Homebrew: brew install node
- On Windows: download and run the Node.js installer from nodejs.org
- On Linux: use your distribution's package manager (e.g., apt, dnf) to install node and npm
Step 2: Install and run the MCP Documentation Server using npx
- The server is published as @andrea9293/mcp-documentation-server. Run it with npx, which downloads the package automatically:
npx -y @andrea9293/mcp-documentation-server
This will start the MCP server with the default quick-start configuration (documentation channel) and set up the embedded Orama DB for local storage.
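Conceptually, the hybrid search that Orama provides blends a full-text score with vector similarity. The sketch below uses toy scoring functions (a term-overlap stand-in for full-text ranking and a plain cosine similarity), not Orama's actual algorithm, to show how the two signals combine:

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms found in the document (toy full-text score)."""
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in doc.lower())
    return hits / len(terms) if terms else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, doc: str,
                 q_vec: list[float], d_vec: list[float],
                 alpha: float = 0.5) -> float:
    """Weighted blend of full-text and vector-similarity scores."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)

score = hybrid_score("orama search", "Orama is a search database",
                     [1.0, 0.0], [0.8, 0.6])
print(round(score, 3))  # → 0.9
```

The blend lets exact keyword matches and semantically similar (but differently worded) passages both rank well, which is why no external search service is needed.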
Step 3: Optional customization
- To customize the workspace path or enable Gemini AI, set the environment variables shown in the configuration above. For example, in a shell:
export MCP_BASE_DIR="/path/to/workspace"
export GEMINI_API_KEY="your-api-key"
export MCP_EMBEDDING_MODEL="Xenova/all-MiniLM-L6-v2"
export START_WEB_UI="true"
export WEB_PORT="3080"
npx -y @andrea9293/mcp-documentation-server
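If you use Claude Desktop rather than the claude CLI, the equivalent entry in claude_desktop_config.json would look roughly like the following. The server name and paths here are illustrative; adjust them to your setup:

```json
{
  "mcpServers": {
    "mcp-documentation-server": {
      "command": "npx",
      "args": ["-y", "@andrea9293/mcp-documentation-server"],
      "env": {
        "MCP_BASE_DIR": "/path/to/workspace",
        "GEMINI_API_KEY": "your-api-key",
        "START_WEB_UI": "true",
        "WEB_PORT": "3080"
      }
    }
  }
}
```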
Step 4: Access the web UI
- By default, the built-in web UI starts on http://localhost:3080 (adjust WEB_PORT if you changed it).
Note: Without a GEMINI_API_KEY, only the local embedding-based search tools are available. The server will migrate legacy JSON documents to Orama on first startup automatically.
Additional notes
Tips and considerations:
- Data is stored locally under MCP_BASE_DIR (default: ~/.mcp-documentation-server). Change this to a persistent workspace if needed.
- The web UI can be disabled by setting START_WEB_UI=false. If disabled, you can still interact with the server via its MCP tools programmatically.
- Gemini AI search is optional and requires GEMINI_API_KEY to be set. Without it, semantic search will rely on local embeddings.
- Ensure your environment variables are defined in the same session or via a .env file in the project root for convenience.
- The server supports uploading .txt, .md, and .pdf files to populate the knowledge base; you can also use add_document for manual entries.
- If you’re running behind a firewall or on a non-default port, adjust WEB_PORT accordingly and ensure the port is open.
- The module exposes common MCP tools such as add_document, list_documents, get_document, delete_document, process_uploads, get_uploads_path, list_uploads_files, get_ui_url, search_documents, search_all_documents, and get_context_window.
Related MCP Servers
gemini-cli
An open-source AI agent that brings the power of Gemini directly into your terminal.
obsidian-tools
Add Obsidian integrations like semantic search and custom Templater prompts to Claude or any MCP client.
mcp
Model Context Protocol (MCP) server for the Webflow Data API.
create-app
A CLI tool for quickly scaffolding Model Context Protocol (MCP) server applications with TypeScript support and modern development tooling
akyn-sdk
Turn any data source into an MCP server in 5 minutes. Build AI-agents-ready knowledge bases.
mcp-json-yaml-toml
A structured data reader and writer like 'jq' and 'yq' for AI Agents