databricks
A Model Context Protocol (MCP) server for interacting with Databricks services
claude mcp add --transport stdio justtryai-databricks-mcp-server uvx databricks-mcp-server \
  --env DATABRICKS_HOST="https://<your-databricks-instance>.azuredatabricks.net" \
  --env DATABRICKS_TOKEN="<your-personal-access-token>"
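The command above registers the server with Claude Code. For MCP clients configured through a JSON file (for example Claude Desktop), a roughly equivalent entry would look like the sketch below; the server name and placeholder values are illustrative:

```json
{
  "mcpServers": {
    "justtryai-databricks-mcp-server": {
      "command": "uvx",
      "args": ["databricks-mcp-server"],
      "env": {
        "DATABRICKS_HOST": "https://<your-databricks-instance>.azuredatabricks.net",
        "DATABRICKS_TOKEN": "<your-personal-access-token>"
      }
    }
  }
}
```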
How to use
This MCP server provides a Databricks integration exposed via the Model Context Protocol (MCP). It translates MCP tool calls into Databricks REST API requests, allowing LLM-powered workflows to manage clusters, jobs, and notebooks, query workspace files, and run SQL statements against Databricks data. Available tools include:
- list_clusters, get_cluster, create_cluster, start_cluster, and terminate_cluster to enumerate clusters and manage their lifecycle
- list_jobs and run_job to manage and execute Databricks jobs
- list_notebooks and export_notebook to interact with notebooks
- list_files to inspect DBFS directories
- execute_sql to run SQL statements
Tools are designed to be invoked from MCP clients and return structured results suitable for downstream reasoning by an AI agent. Async operation is supported, enabling multiple Databricks actions to be performed concurrently where appropriate.
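Under the hood, each tool call resolves to a Databricks REST endpoint. As an illustrative sketch (not the server's actual implementation), a list_clusters call corresponds to GET /api/2.1/clusters/list authenticated with a bearer token:

```python
import os
import urllib.request


def build_list_clusters_request(host: str, token: str) -> urllib.request.Request:
    """Build the Databricks REST call that a list_clusters tool invocation
    maps to: GET /api/2.1/clusters/list with a bearer token."""
    return urllib.request.Request(
        url=f"{host}/api/2.1/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )


if __name__ == "__main__":
    # Uses the same environment variables the server itself reads.
    req = build_list_clusters_request(
        os.environ.get("DATABRICKS_HOST", "https://example.azuredatabricks.net"),
        os.environ.get("DATABRICKS_TOKEN", "dapi-example"),
    )
    print(req.full_url)
```

The other tools follow the same pattern against their respective Jobs, Workspace, DBFS, and SQL Statement Execution endpoints.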
How to install
Prerequisites
- Python 3.10 or higher
- uv (recommended Python package and environment manager)
Setup
- Install uv if you don't have it already:
macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows (PowerShell)
irm https://astral.sh/uv/install.ps1 | iex
Restart your terminal after installation.
- Clone the repository:
git clone https://github.com/JustTryAI/databricks-mcp-server.git
cd databricks-mcp-server
- Set up the project with uv:
Create and activate virtual environment
uv venv
On Windows
.venv\Scripts\activate
On Linux/Mac
source .venv/bin/activate
Install dependencies in development mode
uv pip install -e .
Install development dependencies
uv pip install -e ".[dev]"
- Set up environment variables (example):
Windows
set DATABRICKS_HOST=https://your-databricks-instance.azuredatabricks.net
set DATABRICKS_TOKEN=your-personal-access-token
Linux/Mac
export DATABRICKS_HOST=https://your-databricks-instance.azuredatabricks.net
export DATABRICKS_TOKEN=your-personal-access-token
You can also create a .env file based on the .env.example template.
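A quick way to confirm both variables are visible before starting the server is a small check like the one below (a hypothetical helper, not part of the package):

```python
import os


def check_databricks_env() -> list[str]:
    """Return the names of required Databricks variables that are
    missing or empty in the current environment."""
    required = ("DATABRICKS_HOST", "DATABRICKS_TOKEN")
    return [name for name in required if not os.environ.get(name)]


if __name__ == "__main__":
    missing = check_databricks_env()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("Databricks credentials are set.")
```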
- Run the MCP server (via uvx configuration):
uvx databricks-mcp-server
The server will start and be ready to accept MCP protocol connections.
Additional notes
Tips and common issues:
- Ensure DATABRICKS_HOST and DATABRICKS_TOKEN are set correctly; missing credentials will result in authentication errors from Databricks APIs.
- If you encounter dependency issues, reinstall in a clean virtual environment and ensure you are using Python 3.10+.
- The MCP server is designed to run asynchronously; for high throughput, consider triggering multiple tool calls concurrently where the MCP client supports it.
- The recommended startup path is using uv with the package name as shown in the configuration; you can also run the included scripts directly for local testing.
- If you need to test specific tools, verify Databricks permissions for the token scope (clusters, jobs, notebooks, workspace, and DBFS as required).
- For local development, refer to the scripts/show_clusters.py and scripts/show_notebooks.py utilities to quickly inspect resources before integrating them into your MCP workflows.
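The concurrency tip above can be sketched with asyncio; the tool-call coroutine here is a stand-in for whatever awaitable call interface your MCP client exposes:

```python
import asyncio


async def call_tool(name: str, delay: float) -> str:
    """Stand-in for an MCP tool call; a real client would await the
    server's response instead of sleeping."""
    await asyncio.sleep(delay)
    return f"{name}: done"


async def main() -> list[str]:
    # Fire several independent Databricks actions concurrently
    # instead of awaiting them one at a time.
    return await asyncio.gather(
        call_tool("list_clusters", 0.01),
        call_tool("list_jobs", 0.01),
        call_tool("list_notebooks", 0.01),
    )


if __name__ == "__main__":
    print(asyncio.run(main()))
```

This only helps for calls that are independent of one another; sequence calls that depend on a prior result (for example, start_cluster before run_job).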
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP