
dat

Ask questions of your data in natural language, backed by pre-modeling (data models and semantic models).

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio hexinfo-dat -- docker run -i -e DAT_CONFIG=default ghcr.io/hexinfo/dat:latest

How to use

This MCP server wraps the DAT (Data Ask Tool) project, letting users interact with enterprise data in natural language. It exposes an API-driven querying workflow that combines semantic modeling, LLM-driven interpretation, and vector-based retrieval to turn natural language questions into precise SQL or data retrieval actions across supported databases. You can deploy this server to expose OpenAPI/HTTP access to the DAT engine, so clients or UIs can send natural language queries and receive structured data responses. Use cases include data analytics prompts, ad-hoc reporting, and guided data exploration for business users.

Once running, you can access the API endpoints to submit questions, receive a parsed semantic plan, and obtain results. The system supports integration with various data sources (e.g., MySQL, PostgreSQL, Oracle, DuckDB) and embedding/vector stores for enhanced semantic search. If you’re building a frontend, you can pair this MCP server with a UI that collects user questions, displays the generated semantic SQL, and shows results in charts or tables. The tooling is designed to be extendable via SPI components and supports deploying a server-side OpenAPI service for programmatic access.
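Once the service is reachable, a client can wrap the question-and-answer round trip in a few lines. Below is a minimal Python sketch; the endpoint path (`/api/ask`) and the payload fields (`question`, `datasource`) are assumptions for illustration, so replace them with the paths and fields from your deployment's OpenAPI spec:

```python
import json
from urllib import request

# NOTE: the endpoint path (/api/ask) and payload fields ("question",
# "datasource") are illustrative assumptions -- check your DAT deployment's
# OpenAPI spec for the actual contract.

def build_ask_payload(question: str, datasource: str = "default") -> bytes:
    """Serialize a natural-language question as a JSON request body."""
    return json.dumps({"question": question, "datasource": datasource}).encode("utf-8")

def ask_dat(base_url: str, question: str) -> dict:
    """POST the question to the DAT server and return the parsed JSON reply."""
    req = request.Request(
        f"{base_url}/api/ask",
        data=build_ask_payload(question),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

In a UI, the returned structure might include the generated semantic SQL alongside the result rows, which you could render as a table or chart.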

How to install

Prerequisites:

  • Docker installed and running
  • Optional: if you prefer a non-Docker setup, you can adapt this to a Java/Maven deployment from the DAT project repository

Installation steps (Docker-based):

  1. Pull and run the DAT Docker image (as an MCP server):
# Run the DAT MCP server container (interactive)
docker run -i ghcr.io/hexinfo/dat:latest
  2. Verify the container starts and exposes its API (adjust port mappings if needed in your environment).

  3. If you need to customize configuration, set environment variables or mount a config directory into the container as appropriate for your deployment workflow.

  4. (Optional) For local development without Docker, follow the DAT project’s Java/Maven build instructions from the repository and run the server with Java 17+:

# Example (depends on the built artifact coordinates):
mvn -q -DskipTests package
java -jar target/dat-server-*.jar
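For a longer-running Docker deployment, you may want to map the API port and mount a config directory. The port number and mount path below are assumptions, not values documented by the DAT image, so adjust them to your environment:

```shell
# Hypothetical port and mount point -- adjust to your DAT image's defaults.
docker run -d \
  --name dat-server \
  -p 8080:8080 \
  -v "$PWD/dat-config:/app/config" \
  ghcr.io/hexinfo/dat:latest

# Tail the logs to confirm the service started cleanly
docker logs -f dat-server
```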

Prerequisites for a non-Docker setup:

  • JDK 17+ (OpenJDK recommended)
  • Maven 3.6+ for building
  • Access to a database (MySQL, PostgreSQL, Oracle, or embedded DuckDB)
  • An LLM provider configuration (OpenAI, Anthropic, Ollama, etc.) if you intend to use AI features

Additional notes

Tips and considerations:

  • Docker-based deployment is the simplest way to containerize the MCP server; ensure network access to your data sources and any embedding/vector stores used by DAT.
  • If using Docker, consider mapping a local configuration directory into the container to manage database connections, LLM credentials, and embedding settings externally.
  • For non-container deployments, ensure Java 17+ is installed and that all required dependencies (dat-sdk, SPI components) are available on the classpath.
  • Environment variables you may encounter include database connection details (URL, username, password), LLM API keys, and embedding model configurations. Keep credentials secure and use a secrets manager in production.
  • If you encounter issues starting the service, check container logs for startup errors related to database connectivity or network configuration. Common issues include incorrect JDBC URLs, firewall blocks, or missing embeddings models.
  • This guide takes a Docker-based approach, but you can adapt it to other runtimes (e.g., npx/Node, Python uv) if you containerize or port the server differently.
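Following the environment-variable tip above, credentials can be kept out of the command line with an env file. The variable names below are illustrative placeholders, not DAT's actual configuration keys; consult the project's documentation for the real ones:

```shell
# .env -- illustrative variable names; consult the DAT docs for the real keys
# DAT_DB_URL=jdbc:postgresql://db-host:5432/analytics
# DAT_DB_USER=dat_user
# DAT_DB_PASSWORD=change-me
# OPENAI_API_KEY=sk-your-key

docker run -i --env-file .env ghcr.io/hexinfo/dat:latest
```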
