
Snowflake Automation

Install:

  npx machina-cli add skill ComposioHQ/awesome-claude-skills/snowflake-automation --openclaw

Automate your Snowflake data warehouse workflows -- discover databases, browse schemas and tables, execute arbitrary SQL (SELECT, DDL, DML), and integrate Snowflake data operations into cross-app pipelines.

Toolkit docs: composio.dev/toolkits/snowflake


Setup

  1. Add the Composio MCP server to your client: https://rube.app/mcp
  2. Connect your Snowflake account when prompted (account credentials or key-pair authentication)
  3. Start using the workflows below

Core Workflows

1. List Databases

Use SNOWFLAKE_SHOW_DATABASES to discover available databases with optional filtering and Time Travel support.

Tool: SNOWFLAKE_SHOW_DATABASES
Inputs:
  - like_pattern: string (SQL wildcard, e.g., "%test%") -- case-insensitive
  - starts_with: string (e.g., "PROD") -- case-sensitive
  - limit: integer (max 10000)
  - history: boolean (include dropped databases within Time Travel retention)
  - terse: boolean (return subset of columns: created_on, name, kind, database_name, schema_name)
  - role: string (role to use for execution)
  - warehouse: string (optional, not required for SHOW DATABASES)
  - timeout: integer (seconds)
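
As an illustration, a SNOWFLAKE_SHOW_DATABASES call might carry an input payload like the one below. This is a sketch: the helper function is hypothetical, and it assumes the MCP client accepts tool inputs as a plain mapping of the fields listed above.

```python
# Sketch: build a SNOWFLAKE_SHOW_DATABASES input dict, omitting unset
# optional filters so only meaningful fields travel with the call.
def show_databases_payload(like_pattern=None, starts_with=None,
                           limit=100, history=False, terse=False):
    """Assemble the input mapping for the tool (hypothetical helper)."""
    payload = {"limit": limit, "history": history, "terse": terse}
    if like_pattern is not None:
        payload["like_pattern"] = like_pattern  # SQL wildcard, case-insensitive
    if starts_with is not None:
        payload["starts_with"] = starts_with    # case-sensitive prefix
    return payload

# e.g. find databases whose names contain "test", including dropped
# databases still within the Time Travel retention period
payload = show_databases_payload(like_pattern="%test%", history=True)
```

Note that no warehouse is passed: SHOW commands do not need a running warehouse.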

2. Browse Schemas

Use SNOWFLAKE_SHOW_SCHEMAS to list schemas within a database or across the account.

Tool: SNOWFLAKE_SHOW_SCHEMAS
Inputs:
  - database: string (database context)
  - in_scope: "ACCOUNT" | "DATABASE" | "<specific_database_name>"
  - like_pattern: string (SQL wildcard filter)
  - starts_with: string (case-sensitive prefix)
  - limit: integer (max 10000)
  - history: boolean (include dropped schemas)
  - terse: boolean (subset columns only)
  - role, warehouse, timeout: string/integer (optional)
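
For example, listing schemas inside one database with a prefix filter corresponds to an input like the following. The database name is an example, and the flat-dict shape is an assumption about how the MCP client passes these fields.

```python
# Hypothetical SNOWFLAKE_SHOW_SCHEMAS input: schemas in the ANALYTICS
# database whose names start with "RAW", capped at 50 results.
schemas_input = {
    "database": "ANALYTICS",  # example database context
    "in_scope": "DATABASE",   # restrict the listing to one database
    "starts_with": "RAW",     # case-sensitive prefix filter
    "limit": 50,
}
```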

3. List Tables

Use SNOWFLAKE_SHOW_TABLES to discover tables with metadata including row counts, sizes, and clustering keys.

Tool: SNOWFLAKE_SHOW_TABLES
Inputs:
  - database: string (database context)
  - schema: string (schema context)
  - in_scope: "ACCOUNT" | "DATABASE" | "SCHEMA" | "<specific_name>"
  - like_pattern: string (e.g., "%customer%")
  - starts_with: string (e.g., "FACT", "DIM", "TEMP")
  - limit: integer (max 10000)
  - history: boolean (include dropped tables)
  - terse: boolean (subset columns only)
  - role, warehouse, timeout: string/integer (optional)
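
A typical table-discovery call combines a scope with a wildcard filter, as sketched below. Database and schema names are examples; `terse` trims the output to the subset of metadata columns described above.

```python
# Hypothetical SNOWFLAKE_SHOW_TABLES input: customer-related tables in one
# schema, with terse (subset-column) output.
tables_input = {
    "database": "ANALYTICS",       # example database
    "schema": "MARTS",             # example schema
    "in_scope": "SCHEMA",          # look only inside this schema
    "like_pattern": "%customer%",  # case-insensitive SQL wildcard
    "terse": True,
    "limit": 1000,
}
```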

4. Execute SQL Statements

Use SNOWFLAKE_EXECUTE_SQL for SELECT queries, DDL (CREATE/ALTER/DROP), and DML (INSERT/UPDATE/DELETE) with parameterized bindings.

Tool: SNOWFLAKE_EXECUTE_SQL
Inputs:
  - statement: string (required) -- SQL statement(s), semicolon-separated for multi-statement
  - database: string (case-sensitive, falls back to DEFAULT_NAMESPACE)
  - schema_name: string (case-sensitive)
  - warehouse: string (case-sensitive, required for compute-bound queries)
  - role: string (case-sensitive, falls back to DEFAULT_ROLE)
  - bindings: object (parameterized query values to prevent SQL injection)
  - parameters: object (Snowflake session-level parameters)
  - timeout: integer (seconds; 0 applies the maximum of 604800 seconds, i.e., 7 days)

Examples:

  • "SELECT * FROM my_table LIMIT 100;"
  • "CREATE TABLE test (id INT, name STRING);"
  • "ALTER SESSION SET QUERY_TAG='mytag'; SELECT COUNT(*) FROM my_table;"
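
Putting the pieces together, a parameterized call might look like the sketch below. The user-supplied value travels in `bindings`, never in the SQL text, so it cannot alter the query. The "?" placeholder style and the exact bindings shape are assumptions here; check the toolkit docs for the format your client expects.

```python
# Sketch of a parameterized SNOWFLAKE_EXECUTE_SQL input.
customer_id = "CUST-042"  # imagine this came from user input

execute_input = {
    "statement": "SELECT * FROM orders WHERE customer_id = ? LIMIT 100;",
    "database": "ANALYTICS",    # case-sensitive (example name)
    "warehouse": "COMPUTE_WH",  # required: SELECT needs a running warehouse
    "bindings": {"1": {"type": "TEXT", "value": customer_id}},  # assumed shape
    "timeout": 60,
}

# The unsafe alternative that bindings exist to avoid -- interpolating the
# value directly into the statement string:
unsafe = f"SELECT * FROM orders WHERE customer_id = '{customer_id}'"  # don't
```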

Known Pitfalls

  • Case sensitivity -- Database, schema, warehouse, and role names are case-sensitive in SNOWFLAKE_EXECUTE_SQL.
  • Warehouse required for compute -- SELECT and DML queries require a running warehouse. SHOW commands do not.
  • Multi-statement execution -- Multiple statements separated by semicolons execute in sequence automatically.
  • SQL injection prevention -- Always use the bindings parameter for user-supplied values to prevent injection attacks.
  • Pagination with LIMIT -- SHOW commands support limit (max 10000) and from_name for cursor-based pagination.
  • Time Travel -- Set history: true to include dropped objects still within the retention period.
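
The `limit` plus `from_name` pagination pattern can be sketched as a loop that uses the last name seen as the cursor for the next page. `call_tool` is a hypothetical stand-in for your MCP client's tool-invocation function, assumed to return a list of row dicts with a "name" key.

```python
# Sketch: page through SHOW TABLES results with limit + from_name.
def list_all_tables(call_tool, database, schema, page_size=1000):
    cursor = None
    tables = []
    while True:
        inputs = {"database": database, "schema": schema,
                  "in_scope": "SCHEMA", "limit": page_size}
        if cursor is not None:
            inputs["from_name"] = cursor  # resume after the last name seen
        page = call_tool("SNOWFLAKE_SHOW_TABLES", inputs)
        tables.extend(page)
        if len(page) < page_size:  # short page means we reached the end
            return tables
        cursor = page[-1]["name"]
```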

Quick Reference

  • SNOWFLAKE_SHOW_DATABASES -- List databases with filtering and Time Travel support
  • SNOWFLAKE_SHOW_SCHEMAS -- List schemas within a database or account-wide
  • SNOWFLAKE_SHOW_TABLES -- List tables with metadata (row count, size, clustering)
  • SNOWFLAKE_EXECUTE_SQL -- Execute SQL: SELECT, DDL, DML with parameterized bindings

Powered by Composio

Source

git clone https://github.com/ComposioHQ/awesome-claude-skills.git

Skill file: composio-skills/snowflake-automation/SKILL.md

Overview

Snowflake Automation lets you discover databases, browse schemas and tables, and execute SQL against Snowflake from Composio MCP-enabled workflows. It enables cross-app data operations by integrating Snowflake actions into pipelines.

How This Skill Works

Connect to Snowflake using MCP (credentials or key-pair) and expose tooling to list databases, schemas, and tables, or execute parameterized SQL. Tools like SNOWFLAKE_SHOW_DATABASES, SNOWFLAKE_SHOW_SCHEMAS, SNOWFLAKE_SHOW_TABLES, and SNOWFLAKE_EXECUTE_SQL run inside your workflows, with inputs for database, schema, role, warehouse, bindings, and timeouts.

When to Use It

  • Inventory Snowflake assets by listing databases, schemas, and tables across your account or within a database.
  • Automate arbitrary SQL in pipelines using parameterized inputs to prevent injection (SELECT, DDL, DML).
  • Capture historical results using Time Travel-enabled SHOW commands to include dropped objects.
  • Integrate Snowflake operations into cross-app MCP pipelines for end-to-end data workflows.
  • Execute focused data operations with explicit role/warehouse context for secure compute.

Quick Start

  1. Add the Composio MCP server and connect your Snowflake account (credentials or key-pair).
  2. Choose a SNOWFLAKE_SHOW_* tool or SNOWFLAKE_EXECUTE_SQL and set inputs such as database, schema, and bindings.
  3. Run the workflow, review the results in the MCP outputs, and iterate on the inputs as needed.

Best Practices

  • Always use bindings for parameterized queries to prevent SQL injection.
  • Pass role, warehouse, and timeout inputs to control access and compute resources.
  • Leverage multi-statement execution by separating statements with semicolons.
  • Be mindful of case sensitivity for database, schema, warehouse, and role names in execute SQL.
  • Use the Time Travel and history options to control visibility of past objects when needed.

Example Use Cases

  • List all databases whose names match a pattern across the Snowflake account.
  • Show schemas within a specific database to understand available structures.
  • Show tables in a schema and inspect row counts, sizes, and clustering keys.
  • Execute a parameterized SELECT using bindings to fetch orders for a given customer_id.
  • Create a new table via SNOWFLAKE_EXECUTE_SQL and validate via a subsequent SELECT.
