data-context-extractor
npx machina-cli add skill anthropics/knowledge-work-plugins/data-context-extractor --openclaw
Data Context Extractor
A meta-skill that extracts company-specific data knowledge from analysts and generates tailored data analysis skills.
How It Works
This skill has two modes:
- Bootstrap Mode: Create a new data analysis skill from scratch
- Iteration Mode: Improve an existing skill by adding domain-specific reference files
Bootstrap Mode
Use when: User wants to create a new data context skill for their warehouse.
Phase 1: Database Connection & Discovery
Step 1: Identify the database type
Ask: "What data warehouse are you using?"
Common options:
- BigQuery
- Snowflake
- PostgreSQL/Redshift
- Databricks
Use data warehouse tools (query and schema) to connect. If unclear, check available MCP tools in the current session.
Step 2: Explore the schema
Use data warehouse schema tools to:
- List available datasets/schemas
- Identify the most important tables (ask user: "Which 3-5 tables do analysts query most often?")
- Pull schema details for those key tables
Sample exploration queries by dialect:
-- BigQuery: List datasets
SELECT schema_name FROM INFORMATION_SCHEMA.SCHEMATA
-- BigQuery: List tables in a dataset
SELECT table_name FROM `project.dataset.INFORMATION_SCHEMA.TABLES`
-- Snowflake: List schemas
SHOW SCHEMAS IN DATABASE my_database
-- Snowflake: List tables
SHOW TABLES IN SCHEMA my_schema
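For the "pull schema details" step, a column-level query against INFORMATION_SCHEMA works in both dialects; `project.dataset`, `my_database`, and the table name are placeholders:
-- BigQuery: column details for one key table
SELECT column_name, data_type, is_nullable
FROM `project.dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = 'orders'
-- Snowflake: equivalent column listing (unquoted identifiers are uppercase)
SELECT column_name, data_type, is_nullable
FROM my_database.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'ORDERS'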
Phase 2: Core Questions (Ask These)
After schema discovery, ask these questions conversationally (not all at once):
Entity Disambiguation (Critical)
"When people here say 'user' or 'customer', what exactly do they mean? Are there different types?"
Listen for:
- Multiple entity types (user vs account vs organization)
- Relationships between them (1:1, 1:many, many:many)
- Which ID fields link them together
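A disambiguation answer usually implies a join chain worth recording in the skill. A sketch, assuming hypothetical users, accounts, and organizations tables with the ID columns named below:
-- Hypothetical entity hierarchy: user -> account -> organization (many:1 upward)
SELECT
  u.user_id,         -- individual login
  a.account_id,      -- billing entity
  o.organization_id  -- top-level company
FROM users u
JOIN accounts a ON u.account_id = a.account_id
JOIN organizations o ON a.organization_id = o.organization_id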
Primary Identifiers
"What's the main identifier for a [customer/user/account]? Are there multiple IDs for the same entity?"
Listen for:
- Primary keys vs business keys
- UUID vs integer IDs
- Legacy ID systems
Key Metrics
"What are the 2-3 metrics people ask about most? How is each one calculated?"
Listen for:
- Exact formulas (ARR = monthly_revenue × 12)
- Which tables/columns feed each metric
- Time period conventions (trailing 7 days, calendar month, etc.)
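Capturing the formula as a query makes the definition unambiguous. Using the ARR example above (ARR = monthly_revenue × 12), with an assumed subscriptions table:
-- ARR per the formula above; table, columns, and status filter are illustrative
SELECT SUM(monthly_revenue) * 12 AS arr
FROM subscriptions
WHERE status = 'active'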
Data Hygiene
"What should ALWAYS be filtered out of queries? (test data, fraud, internal users, etc.)"
Listen for:
- Standard WHERE clauses to always include
- Flag columns that indicate exclusions (is_test, is_internal, is_fraud)
- Specific values to exclude (status = 'deleted')
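The answers typically become a standard WHERE block that every generated sample query should include. A sketch using the flag columns mentioned above (names are illustrative):
-- Standard exclusions; bake these into every sample query in the skill
SELECT user_id, created_at
FROM users
WHERE is_test = FALSE
  AND is_internal = FALSE
  AND is_fraud = FALSE
  AND status != 'deleted'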
Common Gotchas
"What mistakes do new analysts typically make with this data?"
Listen for:
- Confusing column names
- Timezone issues
- NULL handling quirks
- Historical vs current state tables
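Gotchas are most useful when documented as a wrong/right pair. For example, a timezone quirk might be recorded like this (BigQuery syntax; the column and zone are assumptions):
-- Wrong: DATE(created_at) truncates in UTC, pushing late-evening events into the next day
-- Right: convert to the business timezone first
SELECT DATE(created_at, 'America/Los_Angeles') AS event_date, COUNT(*) AS events
FROM events
GROUP BY event_date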
Phase 3: Generate the Skill
Create a skill with this structure:
[company]-data-analyst/
├── SKILL.md
└── references/
├── entities.md # Entity definitions and relationships
├── metrics.md # KPI calculations
├── tables/ # One file per domain
│ ├── [domain1].md
│ └── [domain2].md
└── dashboards.json # Optional: existing dashboards catalog
SKILL.md Template: See references/skill-template.md
SQL Dialect Section: See references/sql-dialects.md and include the appropriate dialect notes.
Reference File Template: See references/domain-template.md
Phase 4: Package and Deliver
- Create all files in the skill directory
- Package as a zip file
- Present to user with summary of what was captured
Iteration Mode
Use when: User has an existing skill but needs to add more context.
Step 1: Load Existing Skill
Ask user to upload their existing skill (zip or folder), or locate it if already in the session.
Read the current SKILL.md and reference files to understand what's already documented.
Step 2: Identify the Gap
Ask: "What domain or topic needs more context? What queries are failing or producing wrong results?"
Common gaps:
- A new data domain (marketing, finance, product, etc.)
- Missing metric definitions
- Undocumented table relationships
- New terminology
Step 3: Targeted Discovery
For the identified domain:
- Explore relevant tables: Use data warehouse schema tools to find tables in that domain
- Ask domain-specific questions:
  - "What tables are used for [domain] analysis?"
  - "What are the key metrics for [domain]?"
  - "Any special filters or gotchas for [domain] data?"
- Generate new reference file: Create references/[domain].md using the domain template
Step 4: Update and Repackage
- Add the new reference file
- Update SKILL.md's "Knowledge Base Navigation" section to include the new domain
- Repackage the skill
- Present the updated skill to user
Reference File Standards
Each reference file should include:
For Table Documentation
- Location: Full table path
- Description: What this table contains, when to use it
- Primary Key: How to uniquely identify rows
- Update Frequency: How often data refreshes
- Key Columns: Table with column name, type, description, notes
- Relationships: How this table joins to others
- Sample Queries: 2-3 common query patterns
For Metrics Documentation
- Metric Name: Human-readable name
- Definition: Plain English explanation
- Formula: Exact calculation with column references
- Source Table(s): Where the data comes from
- Caveats: Edge cases, exclusions, gotchas
For Entity Documentation
- Entity Name: What it's called
- Definition: What it represents in the business
- Primary Table: Where to find this entity
- ID Field(s): How to identify it
- Relationships: How it relates to other entities
- Common Filters: Standard exclusions (internal, test, etc.)
Quality Checklist
Before delivering a generated skill, verify:
- SKILL.md has complete frontmatter (name, description)
- Entity disambiguation section is clear
- Key terminology is defined
- Standard filters/exclusions are documented
- At least 2-3 sample queries per domain
- SQL uses correct dialect syntax
- Reference files are linked from SKILL.md navigation section
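For the frontmatter check, a minimal sketch of complete SKILL.md frontmatter (the name and description are placeholders):
---
name: acme-data-analyst
description: Data analysis context for Acme's warehouse - entities, metrics, standard filters, and sample queries.
---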
Source
git clone https://github.com/anthropics/knowledge-work-plugins
Skill file: data/skills/data-context-extractor/SKILL.md
Overview
The data-context-extractor is a meta-skill that pulls company-specific data knowledge from analysts to generate tailored data analysis skills. It supports Bootstrap mode for creating new data context skills and Iteration mode for enriching existing skills with domain references. This enables Claude to understand your warehouse schemas, terminology, metrics definitions, and common query patterns.
How This Skill Works
The skill operates in two modes: Bootstrap creates a new data analysis skill from scratch, while Iteration loads and updates an existing skill with domain-specific references. It follows three phases: Phase 1—Database Connection & Discovery to identify the warehouse type and key schemas; Phase 2—Core Questions (entity disambiguation, primary identifiers, key metrics, data hygiene, and common gotchas); Phase 3—Generate the skill with a company-specific structure and references.
When to Use It
- Bootstrap a new data context skill for your warehouse to establish foundational schemas and metrics.
- Set up data analysis for your data warehouse by identifying the database type and key tables.
- Add domain context to an existing skill by updating references and terminology.
- Update metrics, tables, or terminology to reflect changes in the data environment.
- Refine entity definitions and data hygiene rules based on analyst feedback.
Quick Start
- Step 1: Identify the database type (BigQuery, Snowflake, PostgreSQL/Redshift, Databricks).
- Step 2: Use data warehouse tools to discover datasets, schemas, and the top 3-5 tables your analysts query most.
- Step 3: Ask Phase 2 core questions and generate the new skill with company-specific references.
Best Practices
- Identify the data warehouse type early (BigQuery, Snowflake, PostgreSQL/Redshift, Databricks).
- Focus schema exploration on 3-5 key tables that analysts query most often.
- Capture clear entity definitions and relationships (e.g., user vs. customer) and how they link with IDs.
- Document exact metric formulas and the tables/columns that feed each metric, including time period conventions.
- Keep reference files up to date through ongoing iteration and analyst feedback.
Example Use Cases
- Bootstrapping a new data context skill for a retail warehouse to capture top tables and KPI definitions.
- Iterating a data skill to add a KPI like churn rate with precise time windows and source tables.
- Documenting data hygiene rules to exclude test and internal data from queries.
- Disambiguating entities (user vs customer) to resolve 1:many relationships and improve joins.
- Adding domain-specific terminology and common query patterns used by analysts for consistent reporting.