# Parallel Automation via Rube MCP

Install: `npx machina-cli add skill ComposioHQ/awesome-claude-skills/parallel-automation --openclaw`
Automate Parallel operations through Composio's Parallel toolkit via Rube MCP.
Toolkit docs: composio.dev/toolkits/parallel
## Prerequisites

- Rube MCP must be connected (`RUBE_SEARCH_TOOLS` available)
- Active Parallel connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `parallel`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas
## Setup

Get Rube MCP: add https://rube.app/mcp as an MCP server in your client configuration. No API keys are needed; just add the endpoint and it works.

- Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
- Call `RUBE_MANAGE_CONNECTIONS` with toolkit `parallel`
- If the connection is not ACTIVE, follow the returned auth link to complete setup
- Confirm the connection status shows ACTIVE before running any workflows
## Tool Discovery

Always discover available tools before executing workflows:

```
RUBE_SEARCH_TOOLS
  queries: [{use_case: "Parallel operations", known_fields: ""}]
  session: {generate_id: true}
```
This returns available tool slugs, input schemas, recommended execution plans, and known pitfalls.
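The discovery call above is just a structured payload. As a minimal sketch, assuming a hypothetical helper `build_search_payload` (not part of any real Rube API), the shape of the arguments looks like this:

```python
# Hypothetical sketch: constructing the arguments for a RUBE_SEARCH_TOOLS
# discovery call. `build_search_payload` is an illustrative helper, not a
# real Rube or Composio function.

def build_search_payload(use_case: str, known_fields: str = "") -> dict:
    """Build the RUBE_SEARCH_TOOLS arguments for a new session."""
    return {
        "queries": [{"use_case": use_case, "known_fields": known_fields}],
        "session": {"generate_id": True},
    }

payload = build_search_payload("Parallel operations")
```

Your MCP client would then send this payload when invoking `RUBE_SEARCH_TOOLS`; the exact transport depends on the client.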
## Core Workflow Pattern

### Step 1: Discover Available Tools

```
RUBE_SEARCH_TOOLS
  queries: [{use_case: "your specific Parallel task"}]
  session: {id: "existing_session_id"}
```

### Step 2: Check Connection

```
RUBE_MANAGE_CONNECTIONS
  toolkits: ["parallel"]
  session_id: "your_session_id"
```

### Step 3: Execute Tools

```
RUBE_MULTI_EXECUTE_TOOL
  tools: [{
    tool_slug: "TOOL_SLUG_FROM_SEARCH",
    arguments: {/* schema-compliant args from search results */}
  }]
  memory: {}
  session_id: "your_session_id"
```
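The three-step pattern above can be sketched end to end. In this sketch, `mcp_call` is a stubbed stand-in for your MCP client's tool-invocation method, and the response shapes are illustrative assumptions, not documented Rube schemas:

```python
# Sketch of the discover -> check connection -> execute flow.
# `mcp_call` and all response shapes below are assumptions for illustration;
# a real MCP client sends these calls over its own transport.

def mcp_call(tool: str, args: dict) -> dict:
    # Stub responses standing in for real Rube MCP replies.
    stub_responses = {
        "RUBE_SEARCH_TOOLS": {
            "session_id": "sess-1",
            "tools": [{"tool_slug": "PARALLEL_EXAMPLE_TOOL"}],
        },
        "RUBE_MANAGE_CONNECTIONS": {"parallel": "ACTIVE"},
        "RUBE_MULTI_EXECUTE_TOOL": {"results": [{"ok": True}]},
    }
    return stub_responses[tool]

# Step 1: discover tools and capture the session id for reuse.
search = mcp_call("RUBE_SEARCH_TOOLS", {
    "queries": [{"use_case": "your specific Parallel task"}],
    "session": {"generate_id": True},
})
session_id = search["session_id"]

# Step 2: confirm the parallel connection is ACTIVE before executing.
conns = mcp_call("RUBE_MANAGE_CONNECTIONS",
                 {"toolkits": ["parallel"], "session_id": session_id})
assert conns["parallel"] == "ACTIVE", "follow the auth link before executing"

# Step 3: execute a discovered tool slug with schema-compliant arguments.
result = mcp_call("RUBE_MULTI_EXECUTE_TOOL", {
    "tools": [{"tool_slug": search["tools"][0]["tool_slug"], "arguments": {}}],
    "memory": {},          # always include memory, even if empty
    "session_id": session_id,
})
```

Note how the session id from Step 1 is threaded through the later calls; that continuity is what lets Rube reuse context within one workflow.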
## Known Pitfalls

- Always search first: tool schemas change, so never hardcode tool slugs or arguments without calling `RUBE_SEARCH_TOOLS`
- Check connection: verify `RUBE_MANAGE_CONNECTIONS` shows ACTIVE status before executing tools
- Schema compliance: use exact field names and types from the search results
- Memory parameter: always include `memory` in `RUBE_MULTI_EXECUTE_TOOL` calls, even if empty (`{}`)
- Session reuse: reuse session IDs within a workflow; generate new ones for new workflows
- Pagination: check responses for pagination tokens and continue fetching until complete
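The pagination pitfall boils down to looping on a cursor token until none remains. A minimal sketch, where `fetch_page` is a hypothetical stand-in for one paginated tool execution and the token field name is an assumption:

```python
# Sketch: drain a paginated response until no pagination token remains.
# `fetch_page`, PAGES, and the "next_token" field are illustrative; check
# the actual field name in the schemas returned by RUBE_SEARCH_TOOLS.

PAGES = {
    None: {"items": [1, 2], "next_token": "t1"},
    "t1": {"items": [3], "next_token": None},
}

def fetch_page(token):
    """Stand-in for one tool execution returning a page of results."""
    return PAGES[token]

def fetch_all() -> list:
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page["items"])
        token = page.get("next_token")
        if not token:          # no token left: all pages fetched
            return items

all_items = fetch_all()   # -> [1, 2, 3]
```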
## Quick Reference
| Operation | Approach |
|---|---|
| Find tools | RUBE_SEARCH_TOOLS with Parallel-specific use case |
| Connect | RUBE_MANAGE_CONNECTIONS with toolkit parallel |
| Execute | RUBE_MULTI_EXECUTE_TOOL with discovered tool slugs |
| Bulk ops | RUBE_REMOTE_WORKBENCH with run_composio_tool() |
| Full schema | RUBE_GET_TOOL_SCHEMAS for tools with schemaRef |
Powered by Composio
## Source

View on GitHub: https://github.com/ComposioHQ/awesome-claude-skills/blob/master/composio-skills/parallel-automation/SKILL.md

## Overview
This skill automates parallel operations using Composio's Parallel toolkit through Rube MCP. It emphasizes discovering current tool schemas with RUBE_SEARCH_TOOLS and ensuring an ACTIVE connection via RUBE_MANAGE_CONNECTIONS before executing parallel tools.
## How This Skill Works
First, verify RUBE_SEARCH_TOOLS is available to fetch current tool slugs and input schemas. Then establish or confirm an ACTIVE RUBE_MANAGE_CONNECTIONS session for the 'parallel' toolkit. Finally, execute the chosen tool with RUBE_MULTI_EXECUTE_TOOL, including the discovered slug, exact schema-compliant arguments, a memory payload, and the session_id for continuity.
## When to Use It
- Orchestrating multiple operations in parallel within a single workflow
- When tool schemas change and you must fetch current slugs before running
- Setting up a new parallel automation integration with Rube MCP
- Running bulk parallel tasks across multiple tools in one pass
- Validating and reusing session IDs to optimize parallel runs
## Quick Start
- Step 1: Add MCP server https://rube.app/mcp to your client configuration and confirm RUBE_SEARCH_TOOLS responds
- Step 2: Call RUBE_MANAGE_CONNECTIONS with toolkit 'parallel' and ensure status is ACTIVE
- Step 3: Run RUBE_MULTI_EXECUTE_TOOL with a discovered tool_slug, memory: {}, session_id, and arguments from the search results
## Best Practices

- Always call RUBE_SEARCH_TOOLS before executing to fetch current tool slugs
- Verify RUBE_MANAGE_CONNECTIONS shows ACTIVE before executing any tools
- Use exact field names and types from the search results; avoid hardcoding
- Include memory in RUBE_MULTI_EXECUTE_TOOL calls (even if empty {})
- Reuse session IDs within a workflow; generate new ones for new workflows
## Example Use Cases
- Parallelize API calls to multiple endpoints to speed up data collection
- Orchestrate batch processing where tools perform distinct data transformations in parallel
- Run concurrent validations across datasets using different tools
- Coordinate multiple data enrichment steps in parallel and aggregate results
- Bulk execute a set of compliance checks in parallel for a large dataset