
scrapegraph-ai-automation

npx machina-cli add skill ComposioHQ/awesome-claude-skills/scrapegraph-ai-automation --openclaw
Files (1): SKILL.md (3.0 KB)

Scrapegraph AI Automation via Rube MCP

Automate Scrapegraph AI operations through Composio's Scrapegraph AI toolkit via Rube MCP.

Toolkit docs: composio.dev/toolkits/scrapegraph_ai

Prerequisites

  • Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
  • Active Scrapegraph AI connection via RUBE_MANAGE_CONNECTIONS with toolkit scrapegraph_ai
  • Always call RUBE_SEARCH_TOOLS first to get current tool schemas

Setup

Get Rube MCP: Add https://rube.app/mcp as an MCP server in your client configuration. No API keys are needed; add the endpoint and it works.

  1. Verify Rube MCP is available by confirming RUBE_SEARCH_TOOLS responds
  2. Call RUBE_MANAGE_CONNECTIONS with toolkit scrapegraph_ai
  3. If connection is not ACTIVE, follow the returned auth link to complete setup
  4. Confirm connection status shows ACTIVE before running any workflows
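A minimal sketch of the client configuration for step 1. The exact shape varies by client; the `mcpServers` key and the `rube` server name below follow a common MCP client convention and are assumptions, not a fixed requirement:

```json
{
  "mcpServers": {
    "rube": {
      "url": "https://rube.app/mcp"
    }
  }
}
```

Check your client's MCP documentation for whether remote servers are declared with a `url`, a `type`, or a launcher command.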

Tool Discovery

Always discover available tools before executing workflows:

```
RUBE_SEARCH_TOOLS
queries: [{use_case: "Scrapegraph AI operations", known_fields: ""}]
session: {generate_id: true}
```

This returns available tool slugs, input schemas, recommended execution plans, and known pitfalls.

Core Workflow Pattern

Step 1: Discover Available Tools

```
RUBE_SEARCH_TOOLS
queries: [{use_case: "your specific Scrapegraph AI task"}]
session: {id: "existing_session_id"}
```

Step 2: Check Connection

```
RUBE_MANAGE_CONNECTIONS
toolkits: ["scrapegraph_ai"]
session_id: "your_session_id"
```

Step 3: Execute Tools

```
RUBE_MULTI_EXECUTE_TOOL
tools: [{
  tool_slug: "TOOL_SLUG_FROM_SEARCH",
  arguments: {/* schema-compliant args from search results */}
}]
memory: {}
session_id: "your_session_id"
```

Known Pitfalls

  • Always search first: Tool schemas change. Never hardcode tool slugs or arguments without calling RUBE_SEARCH_TOOLS
  • Check connection: Verify RUBE_MANAGE_CONNECTIONS shows ACTIVE status before executing tools
  • Schema compliance: Use exact field names and types from the search results
  • Memory parameter: Always include memory in RUBE_MULTI_EXECUTE_TOOL calls, even if empty ({})
  • Session reuse: Reuse session IDs within a workflow. Generate new ones for new workflows
  • Pagination: Check responses for pagination tokens and continue fetching until complete
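The pagination pitfall above can be sketched as a drain loop. `execute_tool` is a hypothetical stand-in for however your client dispatches RUBE_MULTI_EXECUTE_TOOL, and the `items`/`next_page_token` field names are assumptions; read the real token field from each tool's schema:

```python
# Keep fetching until no continuation token remains in the response.

def fetch_all_pages(execute_tool, tool_slug: str, arguments: dict,
                    token_field: str = "next_page_token") -> list:
    items, args = [], dict(arguments)
    while True:
        page = execute_tool(tool_slug, args)
        items.extend(page.get("items", []))
        token = page.get(token_field)
        if not token:
            return items
        args[token_field] = token  # resume from where the last page stopped
```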

Quick Reference

| Operation | Approach |
| --- | --- |
| Find tools | RUBE_SEARCH_TOOLS with a Scrapegraph AI-specific use case |
| Connect | RUBE_MANAGE_CONNECTIONS with toolkit scrapegraph_ai |
| Execute | RUBE_MULTI_EXECUTE_TOOL with discovered tool slugs |
| Bulk ops | RUBE_REMOTE_WORKBENCH with run_composio_tool() |
| Full schema | RUBE_GET_TOOL_SCHEMAS for tools with schemaRef |

Powered by Composio

Source

git clone https://github.com/ComposioHQ/awesome-claude-skills.git

Skill file: composio-skills/scrapegraph-ai-automation/SKILL.md

Overview

This skill automates Scrapegraph AI operations through Composio's Scrapegraph AI toolkit via Rube MCP. It emphasizes calling RUBE_SEARCH_TOOLS for current tool schemas before every execution, and it ensures connections are ACTIVE and arguments are schema-compliant. The setup, discovery, and execution flows below are designed to minimize manual rework when tool schemas change.

How This Skill Works

Configure an MCP endpoint for Rube MCP, verify tool availability with RUBE_SEARCH_TOOLS, and manage your Scrapegraph AI connection with RUBE_MANAGE_CONNECTIONS. Then execute discovered tools using RUBE_MULTI_EXECUTE_TOOL with the correct memory payload and session management to complete workflows.

When to Use It

  • When starting a new Scrapegraph AI workflow and you need to discover available tools before running.
  • When you must ensure the Scrapegraph AI connection via Rube MCP is ACTIVE before executing tools.
  • When you need to run a discovered tool with schema-compliant arguments using RUBE_MULTI_EXECUTE_TOOL.
  • When tool schemas change and you should avoid hardcoding slugs or arguments.
  • When handling multiple tools in a single session, including pagination and memory management.

Quick Start

  1. Add https://rube.app/mcp as an MCP server in your client configuration and ensure RUBE_SEARCH_TOOLS responds.
  2. Call RUBE_MANAGE_CONNECTIONS with toolkits: ["scrapegraph_ai"] and verify the connection becomes ACTIVE.
  3. Discover tools via RUBE_SEARCH_TOOLS, then run a selected tool with RUBE_MULTI_EXECUTE_TOOL, including the memory parameter.
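The quick-start flow can be sketched end to end. The `call` dispatcher and the response shapes (`status`, `tools`, `tool_slug` fields) are hypothetical stand-ins for your MCP client and the actual search results; tool names and the ACTIVE status follow this document:

```python
# End-to-end shape of the quick-start flow: check the connection,
# discover a tool, then execute it within one session.

def run_workflow(call, session_id: str):
    conns = call("RUBE_MANAGE_CONNECTIONS",
                 {"toolkits": ["scrapegraph_ai"], "session_id": session_id})
    if conns.get("status") != "ACTIVE":
        raise RuntimeError("Complete auth via the returned link first")

    found = call("RUBE_SEARCH_TOOLS",
                 {"queries": [{"use_case": "scrape a page"}],
                  "session": {"id": session_id}})
    slug = found["tools"][0]["tool_slug"]  # pick a slug from search results

    return call("RUBE_MULTI_EXECUTE_TOOL",
                {"tools": [{"tool_slug": slug, "arguments": {}}],
                 "memory": {},              # always present, even if empty
                 "session_id": session_id})
```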

Best Practices

  • Always call RUBE_SEARCH_TOOLS before running any workflow to get current tool schemas.
  • Verify RUBE_MANAGE_CONNECTIONS shows ACTIVE before executing tools.
  • Use exact field names and types from the search results; avoid hardcoding slugs.
  • Always include memory in RUBE_MULTI_EXECUTE_TOOL calls, even if empty ({}).
  • Reuse session IDs within a workflow and generate new ones only for separate workflows.

Example Use Cases

  • Discover a Scrapegraph AI operation tool, select its slug from RUBE_SEARCH_TOOLS, and run it with a memory payload in a single session.
  • Reconnect to Scrapegraph AI via RUBE_MANAGE_CONNECTIONS and confirm ACTIVE status before subsequent tool executions.
  • Handle a multi-tool Scrapegraph AI task by discovering tools, validating schemas, and executing with RUBE_MULTI_EXECUTE_TOOL across tools in one workflow.
  • Adapt to schema changes by re-running RUBE_SEARCH_TOOLS to fetch updated tool slugs and input schemas.
  • Perform bulk tool operations using RUBE_REMOTE_WORKBENCH with run_composio_tool() for Scrapegraph AI tasks.

