
drip-jobs-automation

npx machina-cli add skill ComposioHQ/awesome-claude-skills/drip-jobs-automation --openclaw

Drip Jobs Automation via Rube MCP

Automate Drip Jobs operations through Composio's Drip Jobs toolkit via Rube MCP.

Toolkit docs: composio.dev/toolkits/drip_jobs

Prerequisites

  • Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
  • Active Drip Jobs connection via RUBE_MANAGE_CONNECTIONS with toolkit drip_jobs
  • Always call RUBE_SEARCH_TOOLS first to get current tool schemas

Setup

Get Rube MCP: Add https://rube.app/mcp as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

  1. Verify Rube MCP is available by confirming RUBE_SEARCH_TOOLS responds
  2. Call RUBE_MANAGE_CONNECTIONS with toolkit drip_jobs
  3. If connection is not ACTIVE, follow the returned auth link to complete setup
  4. Confirm connection status shows ACTIVE before running any workflows
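The setup check above can be sketched in Python, assuming a generic MCP client that exposes a `call_tool(name, arguments) -> dict` helper; that helper and the `status`/`auth_link` field names are assumptions for illustration, not part of a documented schema:

```python
# Hypothetical sketch of the setup check. `call_tool` is an assumed
# MCP-client helper; the RUBE_* tool name comes from this document.

def ensure_drip_jobs_connection(call_tool):
    """Return an ACTIVE drip_jobs connection, surfacing the auth link if not."""
    result = call_tool("RUBE_MANAGE_CONNECTIONS", {"toolkits": ["drip_jobs"]})
    if result.get("status") != "ACTIVE":
        # The response is expected to carry an auth link to complete setup.
        raise RuntimeError(f"Complete auth first: {result.get('auth_link')}")
    return result
```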

Tool Discovery

Always discover available tools before executing workflows:

RUBE_SEARCH_TOOLS
queries: [{use_case: "Drip Jobs operations", known_fields: ""}]
session: {generate_id: true}

This returns available tool slugs, input schemas, recommended execution plans, and known pitfalls.
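As a concrete illustration, the discovery call above could be expressed as the following request payload; the field names follow the sketch in this document, not an authoritative schema:

```python
# Illustrative RUBE_SEARCH_TOOLS payload, mirroring the call shown above.
search_request = {
    "tool": "RUBE_SEARCH_TOOLS",
    "arguments": {
        "queries": [{"use_case": "Drip Jobs operations", "known_fields": ""}],
        "session": {"generate_id": True},  # start a fresh session
    },
}
```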

Core Workflow Pattern

Step 1: Discover Available Tools

RUBE_SEARCH_TOOLS
queries: [{use_case: "your specific Drip Jobs task"}]
session: {id: "existing_session_id"}

Step 2: Check Connection

RUBE_MANAGE_CONNECTIONS
toolkits: ["drip_jobs"]
session_id: "your_session_id"

Step 3: Execute Tools

RUBE_MULTI_EXECUTE_TOOL
tools: [{
  tool_slug: "TOOL_SLUG_FROM_SEARCH",
  arguments: {/* schema-compliant args from search results */}
}]
memory: {}
session_id: "your_session_id"
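The three steps above can be chained into one sketch. This assumes the same hypothetical `call_tool(name, arguments) -> dict` helper; the response field names (`session_id`, `tools`, `status`) are assumptions made for illustration:

```python
# End-to-end sketch of the core workflow pattern:
# discover -> check connection -> execute.

def run_drip_jobs_workflow(call_tool, use_case):
    # Step 1: discover current tool slugs and schemas.
    found = call_tool("RUBE_SEARCH_TOOLS", {
        "queries": [{"use_case": use_case}],
        "session": {"generate_id": True},
    })
    session_id = found["session_id"]
    tool_slug = found["tools"][0]["slug"]

    # Step 2: confirm the drip_jobs connection is ACTIVE.
    conn = call_tool("RUBE_MANAGE_CONNECTIONS", {
        "toolkits": ["drip_jobs"],
        "session_id": session_id,
    })
    assert conn["status"] == "ACTIVE", "complete auth before executing"

    # Step 3: execute with schema-compliant args and a memory object,
    # even if empty, reusing the same session_id.
    return call_tool("RUBE_MULTI_EXECUTE_TOOL", {
        "tools": [{"tool_slug": tool_slug, "arguments": {}}],
        "memory": {},
        "session_id": session_id,
    })
```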

Known Pitfalls

  • Always search first: Tool schemas change. Never hardcode tool slugs or arguments without calling RUBE_SEARCH_TOOLS
  • Check connection: Verify RUBE_MANAGE_CONNECTIONS shows ACTIVE status before executing tools
  • Schema compliance: Use exact field names and types from the search results
  • Memory parameter: Always include memory in RUBE_MULTI_EXECUTE_TOOL calls, even if empty ({})
  • Session reuse: Reuse session IDs within a workflow. Generate new ones for new workflows
  • Pagination: Check responses for pagination tokens and continue fetching until complete
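The pagination pitfall can be handled with a simple loop: keep executing while the response carries a next-page token. The `items` and `next_page_token` field names are assumed for illustration; use whatever the discovered schema actually returns:

```python
# Sketch of the pagination advice above. `execute_page(token)` stands in
# for one RUBE_MULTI_EXECUTE_TOOL call with the page token in its arguments.

def fetch_all(execute_page):
    items, token = [], None
    while True:
        page = execute_page(token)
        items.extend(page.get("items", []))
        token = page.get("next_page_token")
        if not token:            # no token -> all pages fetched
            return items
```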

Quick Reference

| Operation   | Approach                                              |
| ----------- | ----------------------------------------------------- |
| Find tools  | RUBE_SEARCH_TOOLS with Drip Jobs-specific use case    |
| Connect     | RUBE_MANAGE_CONNECTIONS with toolkit drip_jobs        |
| Execute     | RUBE_MULTI_EXECUTE_TOOL with discovered tool slugs    |
| Bulk ops    | RUBE_REMOTE_WORKBENCH with run_composio_tool()        |
| Full schema | RUBE_GET_TOOL_SCHEMAS for tools with schemaRef        |

Powered by Composio

Source

git clone https://github.com/ComposioHQ/awesome-claude-skills.git
# skill file: composio-skills/drip-jobs-automation/SKILL.md

Overview

This skill automates Drip Jobs operations using Composio's Drip Jobs toolkit via Rube MCP. It emphasizes discovering current tool schemas with RUBE_SEARCH_TOOLS before executing and ensures a proper ACTIVE connection via RUBE_MANAGE_CONNECTIONS. It enables end-to-end automation from discovery to execution, with careful handling of memory and session management.

How This Skill Works

You first verify Rube MCP connectivity and fetch current tool schemas using RUBE_SEARCH_TOOLS. Then you establish a connection to the drip_jobs toolkit with RUBE_MANAGE_CONNECTIONS and confirm it is ACTIVE. Finally you execute the selected tool via RUBE_MULTI_EXECUTE_TOOL, passing schema-compliant arguments and a memory object, using a session_id for continuity.

When to Use It

  • Automating routine Drip Jobs tasks (scheduling, management, or orchestration) via Rube MCP.
  • Ensuring you always use up-to-date tool schemas before executing any workflow.
  • Setting up and validating an ACTIVE drip_jobs connection before running workflows.
  • Discovering available tools and required arguments prior to execution.
  • Performing bulk or remote operations using multiple discovered tools in a single workflow.

Quick Start

  1. Step 1: RUBE_SEARCH_TOOLS with use_case: "Drip Jobs operations" to discover current tool slugs and schemas.
  2. Step 2: RUBE_MANAGE_CONNECTIONS with toolkits: ["drip_jobs"] and confirm ACTIVE status.
  3. Step 3: RUBE_MULTI_EXECUTE_TOOL with tool_slug from search results, proper arguments, memory: {} and session_id: <your_session_id>.

Best Practices

  • Always run RUBE_SEARCH_TOOLS first to fetch current tool schemas and avoid hardcoding tool slugs.
  • Verify the drip_jobs connection is ACTIVE using RUBE_MANAGE_CONNECTIONS before execution.
  • Use exact field names and types from the search results; align arguments to the discovered schema.
  • Always include a memory object in RUBE_MULTI_EXECUTE_TOOL calls, even if empty ({}).
  • Reuse session IDs within a workflow and handle pagination tokens when fetching tool data.

Example Use Cases

  • Orchestrate a weekly Drip Jobs sync by discovering tools, establishing a connection, and executing a sequence of drip operations.
  • Run a multi-step onboarding drip workflow by discovering tools, validating schemas, and executing tools sequentially.
  • Perform bulk Drip Jobs tasks by executing multiple tools via RUBE_MULTI_EXECUTE_TOOL with a single session.
  • Adapt to schema updates by re-searching tools before updating arguments in an automated run.
  • Schedule recurring runs that always verify an ACTIVE connection before each execution.

