semantic-metrics-modeling-assistant
MCP agent for semantic metrics governance with built-in trust scoring, lineage visualization, and conversational metric definition. Designed for data teams working with dbt, LookML, and modern semantic layers.
claude mcp add --transport stdio jkelleman-semantic-metrics-modeling-assistant python -m semantic_metrics.server
How to use
The Semantic Metrics Modeling Assistant is a Python-based MCP agent that helps data teams define, validate, and visualize semantic metrics while tracking lineage, usage, and trust scores. It provides a conversational interface for creating metrics, validating definitions, and exporting metrics to BI and data modeling tools. Core capabilities include defining metrics with natural language prompts, validating metric definitions through tests, visualizing lineage and dependencies via ASCII trees or Mermaid diagrams, and assessing quality with a multi-dimensional trust score. It also supports exporting metrics to Looker (LookML), Tableau (TDS), and dbt YAML definitions for integration into existing data stacks.
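To make the export path concrete, here is a minimal sketch of what a dbt-style YAML rendering of a metric could look like. The `to_dbt_yaml` helper and its field layout are illustrative assumptions, not the project's actual exporter:

```python
# Hypothetical sketch: render a metric definition as dbt-style YAML.
# The helper name and field layout are assumptions, not the real exporter.

def to_dbt_yaml(metric: dict) -> str:
    """Render a metric dict as a minimal dbt metrics YAML block."""
    lines = [
        "metrics:",
        f"  - name: {metric['name']}",
        f"    label: {metric['label']}",
        f"    type: {metric['type']}",
        f"    sql: {metric['sql']}",
    ]
    return "\n".join(lines) + "\n"

revenue = {
    "name": "total_revenue",
    "label": "Total Revenue",
    "type": "sum",
    "sql": "amount_usd",
}
print(to_dbt_yaml(revenue))
```

The real exporters presumably handle richer metadata (ownership, tests, lineage), but the shape of the output is the same idea: one declarative artifact per metric, ready to drop into an existing dbt project.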
To use the server, start the MCP entry point and call the tools exposed by the semantic_metrics.server module. A typical workflow defines a metric with define_metric(...), validates it with validate_metric(...), visualizes its lineage with visualize_lineage(...), checks its trust score with check_trust_score(...), and exports artifacts to BI tools with export_to_looker(...), export_to_tableau(...), or export_to_dbt(...). The system persists data in a SQLite database (metrics.db) with a five-table schema covering metrics, history, tests, usage, and trust scores, enabling change tracking and governance automation over time.
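The five-table layout described above can be sketched as follows. The exact table and column names here are assumptions for illustration; the README only names the five categories (metrics, history, tests, usage, trust scores):

```python
import sqlite3

# Sketch of a five-table schema matching the categories the README lists.
# Column names are illustrative assumptions, not the project's real schema.
SCHEMA = """
CREATE TABLE metrics      (id INTEGER PRIMARY KEY, name TEXT UNIQUE, definition TEXT, owner TEXT);
CREATE TABLE history      (id INTEGER PRIMARY KEY, metric_id INTEGER REFERENCES metrics(id), changed_at TEXT, change TEXT);
CREATE TABLE tests        (id INTEGER PRIMARY KEY, metric_id INTEGER REFERENCES metrics(id), name TEXT, passed INTEGER);
CREATE TABLE usage        (id INTEGER PRIMARY KEY, metric_id INTEGER REFERENCES metrics(id), used_at TEXT, tool TEXT);
CREATE TABLE trust_scores (id INTEGER PRIMARY KEY, metric_id INTEGER REFERENCES metrics(id), score REAL, computed_at TEXT);
"""

conn = sqlite3.connect(":memory:")  # the real server writes metrics.db on disk
conn.executescript(SCHEMA)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Keeping history, tests, and usage in separate tables keyed to a metric is what makes the audit-trail and trust-scoring features cheap to query.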
How to install
Prerequisites:
- Python 3.10+ (per project README)
- Git
- Internet access to install dependencies
Step-by-step installation:
- Clone the repository:
  git clone https://github.com/jkelleman/semantic-metrics-modeling-assistant.git
  cd semantic-metrics-modeling-assistant
- (Optional) Create and activate a virtual environment:
  python -m venv venv
  source venv/bin/activate  # On Windows use: venv\Scripts\activate
- Install dependencies in editable mode (preferred for development):
  pip install -e .
- Run the MCP server:
  python -m semantic_metrics.server
- (Optional) Run tests:
  pytest tests/
Notes:
- The server uses SQLite by default and will create metrics.db on first run.
- Ensure network access if you plan to export metrics to external BI tools or Git integrations.
Additional notes
Tips and common considerations:
- Environment variables: The project stores data locally in SQLite (metrics.db). If deploying in a container or cloud environment, consider mounting a persistent volume for the database.
- Export options: Looker, Tableau, and dbt exporters generate production-ready artifacts (LookML, TDS XML, and YAML, respectively). Ensure appropriate permissions and paths when writing these files in your deployment environment.
- Change tracking and governance: The five-table schema enables audit trails and rollback capabilities. Use the history and trust score metadata to explain metric changes to stakeholders.
- Testing and quality: Leverage the trust scoring components (Tests, Usage, Freshness, Documentation, Ownership, Trend Analysis) to monitor metric health over time and guide improvements.
- Troubleshooting: If metrics.db is not created, verify write permissions in the working directory and ensure the server process has sufficient filesystem access. If Python dependencies fail to install, ensure pip is up to date and that you are using Python 3.10+ as recommended.
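The trust-score components listed above (Tests, Usage, Freshness, Documentation, Ownership, Trend Analysis) suggest a weighted aggregate. Here is a hypothetical sketch of such a score; the weights and the 0-to-1 component scale are arbitrary assumptions for illustration, not the project's actual formula:

```python
# Hypothetical trust-score aggregation over the six components the README
# names. Weights and the 0-1 component scale are illustrative assumptions.
WEIGHTS = {
    "tests": 0.25,
    "usage": 0.20,
    "freshness": 0.20,
    "documentation": 0.15,
    "ownership": 0.10,
    "trend": 0.10,
}

def trust_score(components: dict) -> float:
    """Weighted average of component scores, each expected in [0, 1]."""
    total = sum(WEIGHTS[k] * components.get(k, 0.0) for k in WEIGHTS)
    return round(total, 3)

score = trust_score({
    "tests": 1.0, "usage": 0.8, "freshness": 0.6,
    "documentation": 0.9, "ownership": 1.0, "trend": 0.7,
})
print(score)  # 0.835
```

A multi-dimensional score like this is most useful for trend analysis: a metric whose tests still pass but whose freshness and usage components decay is a governance signal before anything visibly breaks.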
Related MCP Servers
OpenMetadata
OpenMetadata is a unified metadata platform for data discovery, data observability, and data governance powered by a central metadata repository, in-depth column level lineage, and seamless team collaboration.
marmot
Marmot helps teams discover, understand, and leverage their data with powerful search and lineage visualisation tools. It's designed to make data accessible for everyone.
dbt
An MCP (Model Context Protocol) server for interacting with dbt.
one
A centralized reverse-proxy platform for MCP servers — manage, group, and export as Skills from a single endpoint.
RivalSearchMCP
Deep Research & Competitor Analysis MCP for Claude & Cursor. No API Keys. Features: Web Search, Social Media (Reddit/HN), Trends & OCR.
metabase
The Metabase MCP server integrates with the Metabase API, enabling MCP-capable LLMs to interact directly with your analytics data. The server acts as a bridge between your analytics platform and conversational AI.