lifesciences-research
AI Agent wrappers for Life Sciences APIs (Open Targets, ChEMBL, UniProt). Accelerating drug discovery with Model Context Protocol (MCP) and FastMCP.
claude mcp add --transport stdio \
  --env PORT=5000 \
  --env LOG_LEVEL=info \
  donbr-lifesciences-research -- node server.js
How to use
The Life Sciences MCP server collection exposes a suite of micro-services that ground biology-related queries in canonical, verifiable facts. Each API wrapper handles a specific data source (e.g., gene nomenclature, protein identifiers, chemical data, pathways, and clinical trials) and implements a Fuzzy-to-Fact workflow: search for a concept, generate candidate matches, then perform a strict lookup to resolve to canonical IDs and cross-references. Agents can query these MCP servers to resolve synonyms, fetch evidence, verify mappings, and surface recovery hints when identifiers change. This enables safe, updatable grounding for downstream reasoning systems without requiring a monolithic knowledge graph.
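The Fuzzy-to-Fact workflow above can be sketched in a few lines of Python. This is an illustrative local mock, not the server's actual implementation: the candidate table, function names, and matching cutoff are assumptions, though the HGNC and UniProt identifiers shown are real.

```python
import difflib

# Toy canonical table standing in for an upstream source such as HGNC.
# The table shape is hypothetical; the IDs themselves are genuine.
CANONICAL = {
    "TP53": {"hgnc_id": "HGNC:11998", "uniprot": "P04637"},
    "BRCA1": {"hgnc_id": "HGNC:1100", "uniprot": "P38398"},
}

def fuzzy_search(query, cutoff=0.6):
    """Step 1: generate candidate symbols for a possibly inexact query."""
    return difflib.get_close_matches(query.upper(), CANONICAL, cutoff=cutoff)

def strict_lookup(symbol):
    """Step 2: resolve an exact symbol to canonical IDs, or fail loudly."""
    if symbol not in CANONICAL:
        raise KeyError(f"no canonical record for {symbol!r}")
    return CANONICAL[symbol]

def resolve(query):
    """Fuzzy-to-Fact: search, take the best candidate, then verify strictly."""
    candidates = fuzzy_search(query)
    if not candidates:
        return None
    return strict_lookup(candidates[0])

print(resolve("tp53"))    # -> {'hgnc_id': 'HGNC:11998', 'uniprot': 'P04637'}
print(resolve("BRCA-1"))  # near miss resolved to the BRCA1 record
```

The two-step split is the point: the fuzzy pass is allowed to guess, but only the strict lookup is allowed to assert a fact, so every answer an agent receives traces back to a canonical record.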
How to install
Prerequisites:
- Docker or a Node.js runtime (depending on your deployment preference)
- Git
- Internet access to fetch upstream data sources
Option A — Run as a Node.js server (recommended for development):
- Clone the repository: git clone https://github.com/your-org/lifesciences-research.git
- Navigate to the project and install dependencies: cd lifesciences-research && npm install
- Start the MCP server: npm run start
Option B — Run as a Docker container:
- Build the image (adjust image name as needed): docker build -t lifesciences-research:latest .
- Run the container (exposing port 5000 by default): docker run -p 5000:5000 lifesciences-research:latest
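The build step above assumes a Dockerfile at the repository root. If you need to author one, a minimal sketch for a Node.js layout might look like the following; the base image, file layout, and port are assumptions, so adjust them to match the repository.

```dockerfile
# Minimal sketch; adjust base image and entrypoint to match the repo.
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
ENV PORT=5000
EXPOSE 5000
CMD ["npm", "run", "start"]
```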
Option C — Run via uvx (Python) if the project provides a Python entrypoint:
- Create and activate a virtual environment: python -m venv venv && source venv/bin/activate
- Install Python dependencies: pip install -r requirements.txt
- Run the service: uvx lifesciences_research (note: uvx runs the package in its own isolated environment, so the virtual environment above is only needed for a pip-based install)
Notes:
- Ensure your environment has network access to upstream APIs (ChEMBL, UniProt, Open Targets, etc.).
- If you self-host multiple MCP servers, consider orchestrating them with your preferred gateway to expose a unified interface.
Additional notes
- Environment variables: configure PORT, LOG_LEVEL, and any API keys required by upstream services (e.g., OPEN_TARGETS_API_KEY, CHEMBL_API_KEY) as needed by your deployment.
- If you encounter deprecated identifier mappings, use the server's recovery hints to retry with the new canonical IDs or updated schema.
- Tests: this collection targets 12 operational servers with comprehensive test coverage; run the provided unit/integration suites whenever you modify a server.
- Security: gate external access to MCP endpoints in production and rotate credentials for any API keys used by upstream data sources.
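The environment variables listed in the notes above might be read at startup as follows. This is a sketch, not the server's actual configuration code; the key names beyond PORT and LOG_LEVEL are assumptions, and most of the upstream sources here are public, so keys may simply be absent.

```python
import os

def load_config():
    """Read server settings from the environment, with safe defaults."""
    return {
        "port": int(os.environ.get("PORT", "5000")),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
        # Hypothetical key names; leave unset for keyless public APIs.
        "open_targets_api_key": os.environ.get("OPEN_TARGETS_API_KEY"),
        "chembl_api_key": os.environ.get("CHEMBL_API_KEY"),
    }

config = load_config()
print(config["port"], config["log_level"])
```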
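The recovery-hint retry mentioned in the notes above follows a common pattern: a strict lookup that fails on a deprecated identifier returns a pointer to its successor, and the client follows that pointer. The record shapes and field names below are hypothetical; the real server's hint format may differ.

```python
def lookup(identifier, table):
    """Strict lookup that surfaces a recovery hint for deprecated IDs."""
    record = table.get(identifier)
    if record and "superseded_by" in record:
        return {"error": "deprecated", "hint": record["superseded_by"]}
    return record

def lookup_with_recovery(identifier, table, max_hops=3):
    """Follow recovery hints until a live record is found, or give up."""
    for _ in range(max_hops):
        result = lookup(identifier, table)
        if result is None or "error" not in result:
            return result
        identifier = result["hint"]  # retry with the new canonical ID
    return None  # hint chain too long or cyclic

# Hypothetical example: OLD:1 was retired in favor of NEW:1.
TABLE = {
    "OLD:1": {"superseded_by": "NEW:1"},
    "NEW:1": {"name": "resolved record"},
}
print(lookup_with_recovery("OLD:1", TABLE))  # follows the hint to NEW:1
```

Bounding the number of hops (max_hops) keeps a stale or cyclic hint chain from looping forever, which matters when identifiers are remapped across schema versions.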