
ols4

The EMBL-EBI Ontology Lookup Service (OLS)

Installation
Run the following in your terminal to start the OLS4 stack and register the MCP server with Claude Code. The MCP endpoint is served over streamable HTTP, so it is added with the http transport rather than stdio (the URL below assumes the repository defaults; adjust it to your deployment).
Run in terminal:
Command
docker compose up -d ols4-solr ols4-neo4j ols4-backend ols4-frontend
claude mcp add --transport http ebispot-ols4 http://localhost:8080/api/mcp

How to use

OLS4 is a scalable Ontology Lookup Service stack that combines a Solr search index, a Neo4j graph store, a Spring Boot API backend, and a React frontend. The backend exposes an MCP endpoint at /api/mcp that supports streamable HTTP interactions for ontology concept lookup, term suggestions, and graph-based queries.

To try it locally, start the full stack with Docker Compose as described in the repository; this brings up Solr, Neo4j, the backend API, and the frontend. Once running, you can query the MCP endpoint to retrieve ontology terms and relationships as part of your data workflows or integrative analyses. Solr provides fast full-text search and Neo4j provides graph traversal, which makes the stack well suited to complex ontology queries and API-driven lookups.
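As a rough illustration of talking to the endpoint, the sketch below builds JSON-RPC 2.0 requests (the wire format MCP uses over streamable HTTP) and POSTs them with the standard library. The URL, the Accept headers, and the tools/list method call reflect common MCP conventions rather than anything specific documented for this server, so treat them as assumptions to verify against your deployment.

```python
import json
import urllib.request

# Hypothetical default endpoint; adjust to match your deployment.
MCP_URL = "http://localhost:8080/api/mcp"

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body, the envelope MCP messages use."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return body

def call_mcp(method, params=None):
    """POST one JSON-RPC request to the MCP endpoint and return the parsed reply."""
    data = json.dumps(mcp_request(method, params)).encode("utf-8")
    req = urllib.request.Request(
        MCP_URL,
        data=data,
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP servers may answer with JSON or an event stream.
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the stack to be running):
#   print(call_mcp("tools/list"))
```

The same `call_mcp` helper can then be reused for term lookups or graph queries once you know which tools the server advertises.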

How to install

Prerequisites:

  • Docker Desktop (or compatible environment) with Docker Compose support
  • Git (optional, for cloning the repository)

Install and run locally:

  1. Ensure Docker is running and you have access to docker and docker compose from your shell.

  2. Clone the repository and navigate to the project root.

  3. Build and start the required components via Docker Compose (the core stack comprises Solr, Neo4j, the backend API, and the frontend):

    docker compose up --build --detach ols4-solr ols4-neo4j ols4-backend ols4-frontend

  4. Wait for the services to come up. The frontend should be accessible at http://localhost:8081 (as per the defaults in the repo), and MCP API calls can be directed to the backend endpoint (the exact URL is typically http://localhost:8080/api/mcp or as configured in your deployment).

  5. If you need to re-create data, follow the repository's dataload steps which typically involve preparing configs and running the dataload script inside the dataload directory (Docker-based or local) as described in the docs.
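For step 4, rather than watching logs, a small polling helper can confirm that the services are answering HTTP before you start issuing queries. This is a minimal sketch; the two URLs in the example assume the default ports mentioned above and should be adjusted to your configuration.

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url, timeout=120.0, interval=2.0):
    """Poll `url` until it answers with any HTTP status, or the timeout elapses.

    Returns True once the server responds (even a 4xx/5xx proves the process
    is up); returns False if nothing is listening before `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except urllib.error.HTTPError:
            return True  # the server is up; it just rejected this request
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not listening yet; retry
    return False

# Example (assumes the default ports from the repository):
#   wait_for_http("http://localhost:8080/")  # backend API
#   wait_for_http("http://localhost:8081/")  # frontend
```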

Note: If you’re deploying to Kubernetes or using prebuilt images, follow the Kubernetes deployment notes in the README and replace the docker commands with the appropriate helm or kubectl commands.

Additional notes

Tips and common issues:

  • Ensure your environment has enough memory for Solr and Neo4j when running locally; allocate at least 4 to 8 GB of RAM across the containers if possible.
  • The MCP endpoint is streamable HTTP, so you can implement long-lived HTTP connections for real-time ontology queries.
  • When loading new ontologies, update the dataload configurations and re-run the dataload script to populate Solr and Neo4j with the new data.
  • If the frontend cannot reach the API, verify that the backend service is up and that any reverse proxy or network configuration allows API traffic to pass through.
  • For Kubernetes deployments, the imageTag value (for example, dev or stable) selects which container images are deployed; ensure KUBECONFIG is set before applying manifests.
  • Environment variables mentioned in the docs (for example, runtime paths and config file locations) should be set according to your local setup or cluster configuration.
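The memory advice above can be made explicit with a Compose override file. The service names follow the compose invocation shown earlier, but the limits themselves are illustrative and should be tuned to your machine:

```yaml
# docker-compose.override.yml -- illustrative memory caps, adjust as needed
services:
  ols4-solr:
    mem_limit: 4g
  ols4-neo4j:
    mem_limit: 4g
  ols4-backend:
    mem_limit: 2g
```

Docker Compose merges this file automatically when it sits next to docker-compose.yml, so the `docker compose up` commands above need no extra flags.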
