
rdkit

MCP server that enables language models to interact with RDKit through natural language

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio tandemai-inc-rdkit-mcp-server python run_server.py

How to use

The RDKit MCP Server provides agent-level access to RDKit's toolkit through the Model Context Protocol (MCP). This lets any MCP-compliant language model invoke RDKit functions through natural language, effectively allowing LLMs to perform cheminformatics tasks such as molecule parsing, property calculations, substructure searches, and SMILES/InChI manipulations as if RDKit were an external API. The server also ships a CLI client for rapid experimentation, so you can test commands and tool availability without building a full integration.

To get started, run the server and connect an MCP-compatible LLM. You can list available RDKit tools, issue tool invocations via the MCP protocol, and receive structured results that your agent can reason over. The CLI client exposes an OpenAI-backed interface to prototype prompts and tool calls locally before integrating into a larger system. Tools exposed by the server cover core RDKit functionalities such as molecule parsing, descriptor calculations, substructure matching, and common chemistry transformations, all accessible through natural language prompts.
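Under the hood, MCP tool invocations are JSON-RPC 2.0 messages. As a rough sketch of the client-side framing (the tool name `mol_from_smiles` and its argument are hypothetical here; enumerate the server's real tools before building prompts):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and argument; the server's actual tool
# names must be discovered via tools/list.
request = make_tool_call(1, "mol_from_smiles", {"smiles": "CCO"})
print(json.dumps(request, indent=2))
```

An MCP client library normally handles this framing for you; the point is that the structured `params` object is what your agent's natural-language request ultimately gets translated into.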

How to install

Prerequisites:

  • Python 3.9+ installed on your system
  • pip available on the PATH

Step-by-step installation:

  1. Create or navigate to your project environment
    • Optional: use a virtual environment
  2. Install the RDKit MCP server package from the repository root:
pip install .
  3. (Optional) Install evaluation tooling if you plan to run the eval suite:
pip install ".[evals]"
  4. (Optional) Prepare settings
    • If you have a settings.example.yaml, copy it to settings.yaml and customize as needed
  5. Run the server to verify installation:
python run_server.py
  6. (Optional but recommended) Test the CLI client:
export OPENAI_API_KEY="your-api-key"
python run_client.py
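The server-verification step above can also be smoke-tested from Python. A minimal sketch (the `server_starts` helper is illustrative, not part of this repository) that launches the server script and checks whether the process stays up:

```python
import subprocess
import sys
import time

def server_starts(script="run_server.py", wait=2.0):
    """Launch a server script and report whether it is still
    running after `wait` seconds (a crude liveness check)."""
    proc = subprocess.Popen(
        [sys.executable, script],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    time.sleep(wait)
    alive = proc.poll() is None  # None means the process has not exited
    proc.terminate()
    proc.wait()
    return alive
```

This only confirms the process did not crash on startup; a real health check would open an MCP session and issue a `tools/list` request.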

Notes:

  • The server exposes functionality via MCP; no code changes are required in your LLM prompts beyond following the MCP protocol for tool invocations.

Additional notes

Tips and common considerations:

  • RDKit version: The server targets RDKit 2025.3.1; ensure this or a compatible version is installed in your environment.
  • Settings: Use the provided settings.example.yaml as a starting point for configuring timeouts, model context limits, and logging.
  • Environment variables: If your deployment requires OpenAI or other API access from the CLI client, set OPENAI_API_KEY in your environment.
  • Tool discovery: Use the list_tools.py utility to enumerate available RDKit tools exposed by the MCP server before building prompts for your LLM.
  • Debugging: If tool calls fail, check server logs for tool invocation errors, ensure RDKit imports succeed, and verify that the MCP connection is established by your LLM client.
  • Security: Restrict access to the MCP endpoint in production and consider authentication or network ACLs to prevent unauthorized tool usage.
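For context on the tool-discovery tip above: MCP clients enumerate tools with a `tools/list` request, and the server replies with each tool's name, description, and input schema. A minimal sketch of the message shapes (the response content here is illustrative; `calc_descriptors` is a hypothetical name, and list_tools.py reports the real ones):

```python
# JSON-RPC 2.0 request an MCP client sends to enumerate server tools.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Illustrative response shape; tool names here are hypothetical.
example_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "calc_descriptors",
                "description": "Compute RDKit molecular descriptors",
                "inputSchema": {
                    "type": "object",
                    "properties": {"smiles": {"type": "string"}},
                    "required": ["smiles"],
                },
            }
        ]
    },
}

# An agent typically extracts the names and schemas to build its prompt.
tool_names = [tool["name"] for tool in example_response["result"]["tools"]]
print(tool_names)
```

The `inputSchema` field is a JSON Schema describing each tool's arguments, which is what lets an LLM produce well-formed `tools/call` payloads.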
