ols4
The EMBL-EBI Ontology Lookup Service (OLS)
Start the stack with Docker Compose, then register the MCP endpoint with Claude Code (the endpoint is served over streamable HTTP; adjust the URL to your deployment):
docker compose up -d ols4-solr ols4-neo4j ols4-backend ols4-frontend
claude mcp add --transport http ebispot-ols4 http://localhost:8080/api/mcp
How to use
OLS4 is a scalable Ontology Lookup Service stack that combines a Solr search index, a Neo4j graph store, and a Spring Boot API backend with a React frontend. For programmatic access, the backend exposes an MCP endpoint under /api/mcp that supports streamable HTTP interactions for ontology concept lookup, term suggestions, and graph-based queries. To try it locally, start the full stack with Docker Compose as described in the repository; this spins up Solr, Neo4j, the backend API, and the frontend. Once the services are running, you can query the MCP endpoint to retrieve ontology terms and their relationships as part of data workflows or integrative analyses. Solr provides fast full-text search and Neo4j provides graph traversal, so the stack handles both keyword lookups and complex relationship queries over ontologies.
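The "query the MCP endpoint" step can be sketched as a JSON-RPC 2.0 request body, since MCP speaks JSON-RPC over streamable HTTP. This is a minimal illustration, not the OLS4 wire format: the endpoint URL is an assumed default from this page, and no specific tool names are claimed — `tools/list` asks the server what it actually offers.

```python
import json

# Assumed default endpoint from this page; adjust to your deployment.
MCP_URL = "http://localhost:8080/api/mcp"

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body for an MCP method."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return body

# Ask the server which tools it advertises before calling any of them.
payload = json.dumps(mcp_request("tools/list"))

# To send it, POST with any HTTP client, e.g. (sketch):
#   import urllib.request
#   req = urllib.request.Request(MCP_URL, data=payload.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
print(payload)
```

In practice you would use an MCP client library (or Claude Code itself) rather than raw HTTP; the sketch only shows the shape of the request.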
How to install
Prerequisites:
- Docker Desktop (or compatible environment) with Docker Compose support
- Git (optional, for cloning the repository)
Install and run locally:
- Ensure Docker is running and that docker and docker compose are available from your shell.
- Clone the repository and navigate to the project root.
- Build and start the required components via Docker Compose (as an example, the MCP stack includes Solr, Neo4j, the backend, and the frontend):
  docker compose up --build --detach ols4-solr ols4-neo4j ols4-backend ols4-frontend
- Wait for the services to come up. The frontend should be accessible at http://localhost:8081 (per the defaults in the repo), and MCP API calls can be directed to the backend endpoint (typically http://localhost:8080/api/mcp, or as configured in your deployment).
- If you need to re-create data, follow the repository's dataload steps, which typically involve preparing configs and running the dataload script inside the dataload directory (Docker-based or local) as described in the docs.
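The dataload step above needs a config describing which ontologies to load. The sketch below writes one; the field names (`ontologies`, `id`, `ontology_purl`) follow the OBO-registry style that OLS config files use, but they are an assumption here — check the example configs shipped in the repository's dataload directory before relying on them.

```python
import json

# Illustrative dataload config: field names are assumed from the
# OBO-registry style; verify against the repo's example configs.
config = {
    "ontologies": [
        {
            "id": "efo",  # short ontology id (illustrative)
            "ontology_purl": "http://www.ebi.ac.uk/efo/efo.owl",
        }
    ]
}

# Write the config where the dataload script can pick it up.
with open("my-dataload-config.json", "w") as fh:
    json.dump(config, fh, indent=2)
```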
Note: If you’re deploying to Kubernetes or using prebuilt images, follow the Kubernetes deployment notes in the README and replace the docker commands with the appropriate helm or kubectl commands.
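Because Solr and Neo4j can take a while to become healthy after `docker compose up`, the "wait for the services" step is easy to script. This is a generic readiness poll, assuming the default local ports mentioned above; it treats any HTTP answer (even an error status) as "the service is up".

```python
import time
import urllib.error
import urllib.request

def wait_for(url, timeout=120.0, interval=2.0):
    """Poll `url` until it answers with any HTTP status, or time out.

    Returns True once the service responds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True
        except urllib.error.HTTPError:
            return True  # server answered, even with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry
    return False

# Example, assuming the default ports from the repo:
# wait_for("http://localhost:8081/")   # frontend
# wait_for("http://localhost:8080/")   # backend API
```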
Additional notes
Tips and common issues:
- Ensure your environment has enough memory for Solr and Neo4j when running locally; allocate at least 4-8 GB RAM across containers if possible.
- The MCP endpoint is streamable HTTP, so you can implement long-lived HTTP connections for real-time ontology queries.
- When loading new ontologies, update the dataload configurations and re-run the dataload script to populate Solr and Neo4j with the new data.
- If the frontend cannot reach the API, verify that the backend service is up and that any reverse proxy or network configuration allows API traffic to pass through.
- For Kubernetes deployments, the imageTag value (e.g. dev or stable) selects the appropriate container images; ensure KUBECONFIG is set when applying manifests.
- Environment variables mentioned in the docs (for example, runtime paths and config file locations) should be set according to your local setup or cluster configuration.
Related MCP Servers
mcp-memory-libsql
🧠 High-performance persistent memory system for Model Context Protocol (MCP) powered by libSQL. Features vector search, semantic knowledge storage, and efficient relationship management - perfect for AI agents and knowledge graph applications.
mie
Persistent memory graph for AI agents. Facts, decisions, entities, and relationships that survive across sessions, tools, and providers. MCP server — works with Claude, Cursor, ChatGPT, and any MCP client.
mem0-selfhosted
Self-hosted mem0 MCP server for Claude Code. Run a complete memory server against self-hosted Qdrant + Neo4j + Ollama while using Claude as the main LLM.
heuristic
Enhanced MCP server for semantic code search with call-graph proximity, recency ranking, and find-similar-code. Built for AI coding assistants.
code-memory
MCP server with local vector search for your codebase. Smart indexing, semantic search, Git history — all offline.
post-cortex
Post-Cortex provides durable memory infrastructure with automatic knowledge graph construction, intelligent entity extraction, and semantic search powered by local transformer models.