
m3

🏥🤖 Query MIMIC-IV medical data using natural language through the Model Context Protocol (MCP). Transform healthcare research with AI-powered database interactions, with support for both the local MIMIC-IV SQLite demo dataset and full BigQuery datasets.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio rafiattrach-m3 \
  --env M3_DEMO="true" \
  --env M3_BACKEND="duckdb" \
  -- uvx m3-mcp

(The `--` separator ensures the `--env` flags are parsed by `claude mcp add` rather than passed through to `uvx`.)

How to use

M3 provides a natural language interface to MIMIC-IV data via MCP clients. You ask questions in plain English, and the MCP server translates them into SQL queries against either a local DuckDB-based demo dataset or a larger backend such as BigQuery. The project supports multiple MCP clients (Claude Desktop, Cursor, Goose, and others), which pass your questions to the server and render results with minimal setup.

Typical interactions include asking for patient demographics, distributions over admissions, or specific clinical metrics; results come back as structured data that can be visualized. To get started, launch the MCP server via your chosen runtime (uvx in the recommended setup) and connect your MCP client by pasting the server's generated MCP configuration JSON into the client's config.
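As an illustration of the translation step described above, here is a minimal sketch of the kind of SQL a question like "How many patients are there per gender?" might map to. The table layout mirrors MIMIC-IV's `patients` table (`subject_id`, `gender`, `anchor_age`), but the rows here are fabricated for the example, and this uses plain sqlite3 rather than the server's actual query path:

```python
import sqlite3

# Build a tiny in-memory stand-in for MIMIC-IV's `patients` table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE patients (subject_id INTEGER, gender TEXT, anchor_age INTEGER)"
)
conn.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, "F", 52), (2, "M", 47), (3, "F", 61)],  # fabricated demo rows
)

# The sort of read-only aggregate the MCP server might generate for
# "How many patients are there per gender?"
rows = conn.execute(
    "SELECT gender, COUNT(*) AS n FROM patients GROUP BY gender ORDER BY gender"
).fetchall()
print(rows)  # → [('F', 2), ('M', 1)]
```

The real server validates and restricts queries to read-only operations; this sketch only shows the shape of the SQL involved.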

How to install

Prerequisites:

  • Python 3.10+ (recommended)
  • uv (which provides uvx) installed on your system to run the MCP server
  • Optional: Docker if you prefer containerized deployment

Install steps (recommended path using uvx):

  1. Install uv as described in the README (macOS/Linux: brew install uv or the provided install script; Windows: PowerShell install script).
  2. Make the m3 MCP components available: with uvx, the m3-mcp package is fetched and run on demand; alternatively, clone the repo and ensure the m3-mcp module is installed.
  3. Ensure you have a dataset available: a local DuckDB demo (default) or a connection to BigQuery. For the demo, prepare the m3_data as described in the docs; for BigQuery, configure credentials and project access as shown.
  4. Start the MCP server using uvx with the m3-mcp target, e.g.:

     uv init
     uv add m3-mcp
     uv run m3 init DATASET_NAME
     uv run m3 config --quick
  5. Copy and paste the resulting MCP configuration JSON into your MCP client configuration.
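The configuration JSON produced in the last step typically looks something like the following (the server name and exact values here are illustrative assumptions, not the literal output; `M3_DEMO` and `M3_BACKEND` are the environment variables used in this guide):

```json
{
  "mcpServers": {
    "m3": {
      "command": "uvx",
      "args": ["m3-mcp"],
      "env": {
        "M3_DEMO": "true",
        "M3_BACKEND": "duckdb"
      }
    }
  }
}
```

`mcpServers` is the standard top-level key MCP clients such as Claude Desktop read their server definitions from.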

If you prefer Docker or pip-based setups, see the Alternative Installation Methods section of the README for the exact docker run and pip install commands, including how to build the image, start the container, and wire in the MCP config for a consistent client experience.

Additional notes

  • The recommended workflow uses uvx to run the MCP server with the 'm3-mcp' module. You can switch backends later (duckdb demo vs. BigQuery) by adjusting the M3_BACKEND environment variable and related credentials.
  • For BigQuery, ensure you have proper Google Cloud credentials and, if needed, PhysioNet access. Update M3_PROJECT_ID in the MCP config to your GCP project.
  • The MCP config examples show both a direct Python/m3-mcp approach and a docker-based approach. Choose the path that matches your environment (local development vs. production).
  • When running locally, you can use the DuckDB demo to experiment and then switch to the full dataset or BigQuery by updating the environment variable and reinitializing the MCP server config.
  • If you encounter connection or client capability issues, verify your MCP client’s version compatibility with the MCP server and ensure the correct backend is selected (duckdb vs bigquery).
  • The project emphasizes read-only queries with validation to protect against SQL injection; still, validate outputs in early experimentation.
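The backend switch described in the notes above can be sketched as follows. The variable names `M3_DEMO`, `M3_BACKEND`, and `M3_PROJECT_ID` come from this guide; the project ID and credentials path are placeholder assumptions you must replace with your own:

```shell
# Switch the MCP server from the local DuckDB demo to BigQuery.
export M3_DEMO="false"
export M3_BACKEND="bigquery"
export M3_PROJECT_ID="your-gcp-project"   # placeholder: your GCP project ID

# BigQuery access assumes Google Cloud credentials are available, e.g.:
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/gcp-key.json"   # placeholder path
```

After changing these variables, reinitialize the MCP server config (and restart your client) so the new backend takes effect.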
