elastic-semantic-search
MCP server to search up-to-date Elasticsearch docs
claude mcp add --transport stdio jedrazb-elastic-semantic-search-mcp-server python -m elastic_semantic_search \
  --env ES_URL="Elasticsearch URL (e.g., http://localhost:9200)" \
  --env ES_API_KEY="Encoded API key for Elasticsearch with crawler permissions"
How to use
This MCP server provides a Python-based semantic search interface over blog posts indexed in Elasticsearch. It exposes tooling to crawl content, configure Elasticsearch for semantic search, and run the server under the MCP Inspector.

Once running, you can issue semantic queries against the Search Labs posts through the semantic-enabled index. The server relies on Elasticsearch's semantic capabilities (ELSER) to return semantically relevant results from the blog content crawled and indexed under the search-labs-posts index.

To integrate with Claude Desktop, register the server so Claude can discover and invoke its tools, enabling natural-language querying of the blog posts from Claude's interface.
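As a sketch of the kind of request the server could issue under the hood, here is a semantic query body for the search-labs-posts index. The field name `body` and the returned `_source` fields are assumptions for illustration, not taken from the repository; Elasticsearch's `semantic` query type targets a `semantic_text` field backed by an inference endpoint such as ELSER.

```python
def build_semantic_query(question: str, field: str = "body", size: int = 5) -> dict:
    """Build a request body for a semantic search over blog posts.

    Sketch only: the field name and _source list are illustrative
    assumptions; check the repository for the exact mapping it uses.
    """
    return {
        "size": size,
        "query": {
            "semantic": {
                "field": field,   # must be a semantic_text field
                "query": question,
            }
        },
        "_source": ["title", "url"],  # return only lightweight fields
    }

# The server would POST a body like this to /search-labs-posts/_search.
body = build_semantic_query("How do I tune ELSER relevance?")
print(body["query"]["semantic"]["query"])
```

Elasticsearch resolves the `semantic` query against the inference endpoint attached to the field, so the caller never handles embeddings directly.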
How to install
Prerequisites: Python 3.8+ and a running Elasticsearch cluster with the required permissions. You will also need Docker if you plan to verify crawling steps locally, and an MCP-compatible environment for the MCP Inspector integration.
- Clone the repository:
git clone <repo-url>
cd <repository-directory>
- Create and activate a Python virtual environment:
python -m venv venv
source venv/bin/activate # on macOS/Linux
venv\Scripts\activate # on Windows
- Install dependencies (adjust if a requirements file exists):
pip install -r requirements.txt
- Prepare environment variables for Elasticsearch access. Create a .env file or export the variables:
export ES_URL=http://localhost:9200
export ES_API_KEY=YOUR_ENCODED_API_KEY
- Run the MCP server in development mode (via MCP Inspector interface or equivalent command):
# If using a script or make target provided by the repo
make dev
- Access the MCP Inspector at http://localhost:5173 and verify the elastic-semantic-search server is loaded. For Claude Desktop integration, run the Claude config installer as described in the README:
make install-claude-config
Note: If you intend to replicate the crawling and indexing steps exactly as shown in the README, you will need Docker and Elastic Open Crawler configured with the appropriate crawler configs and permissions.
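The ES_URL and ES_API_KEY variables exported above can be read in Python before constructing the Elasticsearch client. A minimal sketch, assuming the variable names from this README and a local default URL (the helper and the demo key are illustrative, not part of the repository):

```python
import os

def load_es_settings() -> dict:
    """Read Elasticsearch connection settings from the environment.

    ES_URL falls back to a local default; ES_API_KEY is required
    because the server authenticates with an encoded API key.
    """
    url = os.environ.get("ES_URL", "http://localhost:9200")
    api_key = os.environ.get("ES_API_KEY")
    if not api_key:
        raise RuntimeError("ES_API_KEY is not set; export it or add it to .env")
    return {"hosts": [url], "api_key": api_key}

# For illustration only: seed a placeholder key so the sketch runs standalone.
os.environ.setdefault("ES_API_KEY", "demo-key")
print(load_es_settings()["hosts"][0])
```

Failing fast on a missing ES_API_KEY keeps misconfiguration visible at startup instead of surfacing later as opaque 401 responses from Elasticsearch.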
Additional notes
- Environment variables: ES_URL should point to your Elasticsearch instance, and ES_API_KEY should be an API key with at least monitor privileges on the cluster and all privileges on the search-labs-posts index.
- If you see semantic indexing issues, ensure the index exists with the proper mapping (semantic_text fields) as outlined in the README.
- When running locally, start Elasticsearch first and wait for the semantic model (ELSER) to initialize before crawling.
- The Claude Desktop integration will only appear after the server is detectable and the Claude config has been updated.
- If you encounter connectivity problems, verify network access between the MCP server, the MCP Inspector, and Elasticsearch, and confirm that the API key has not expired or been rotated.
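If the search-labs-posts index is missing the semantic mapping, a mapping along these lines restores the semantic_text field that the notes above refer to. This is a sketch under assumptions: the field names and the default inference endpoint id are illustrative, so check the repository's README for the exact mapping it creates.

```python
def search_labs_mapping(inference_id: str = ".elser-2-elasticsearch") -> dict:
    """Build an index mapping with a semantic_text field for ELSER-backed search.

    Sketch only: "body", "title", "url" and the inference_id default are
    illustrative assumptions, not taken from the repository.
    """
    return {
        "mappings": {
            "properties": {
                "title": {"type": "text"},
                "url": {"type": "keyword"},
                "body": {
                    "type": "semantic_text",        # chunked + embedded at index time
                    "inference_id": inference_id,   # endpoint serving the ELSER model
                },
            }
        }
    }

# This body would be sent with a PUT /search-labs-posts request.
print(search_labs_mapping()["mappings"]["properties"]["body"]["type"])
```

With a semantic_text field in place, documents are chunked and embedded automatically on ingest, which is why crawling should wait until the ELSER endpoint has finished initializing.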
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers, written using FastMCP.