
Replicate Automation

npx machina-cli add skill ComposioHQ/awesome-claude-skills/replicate-automation --openclaw

Automate your Replicate AI model workflows -- run predictions on any public model (image generation, LLMs, audio, video), upload input files, inspect model schemas and documentation, list model versions, and track prediction history.

Toolkit docs: composio.dev/toolkits/replicate


Setup

  1. Add the Composio MCP server to your client: https://rube.app/mcp
  2. Connect your Replicate account when prompted (API token authentication)
  3. Start using the workflows below

Core Workflows

1. Get Model Details and Schema

Use REPLICATE_MODELS_GET to inspect a model's input/output schema before running predictions.

Tool: REPLICATE_MODELS_GET
Inputs:
  - model_owner: string (required) -- e.g., "meta", "black-forest-labs", "stability-ai"
  - model_name: string (required) -- e.g., "meta-llama-3-8b-instruct", "flux-1.1-pro"

Important: Each model has unique input keys and types. Always check the openapi_schema from this response before constructing prediction inputs.
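A minimal sketch of how the schema check might look. The schema excerpt below is hypothetical (modeled on Replicate's published OpenAPI format, where inputs live under components.schemas.Input); the real response from REPLICATE_MODELS_GET varies per model.

```python
# Hypothetical excerpt of the openapi_schema returned by REPLICATE_MODELS_GET.
# The exact keys and constraints differ for every model.
schema = {
    "components": {
        "schemas": {
            "Input": {
                "type": "object",
                "required": ["prompt"],
                "properties": {
                    "prompt": {"type": "string"},
                    "width": {"type": "integer", "default": 1024},
                    "aspect_ratio": {"type": "string", "default": "1:1"},
                },
            }
        }
    }
}

def input_keys(openapi_schema: dict) -> tuple[list[str], list[str]]:
    """Split a model's input properties into required and optional keys."""
    inp = openapi_schema["components"]["schemas"]["Input"]
    required = inp.get("required", [])
    optional = [k for k in inp.get("properties", {}) if k not in required]
    return required, optional

req, opt = input_keys(schema)
print("required:", req)   # required: ['prompt']
print("optional:", opt)   # optional: ['width', 'aspect_ratio']
```

Run this check before every first call to a new model; the required list tells you exactly which keys the prediction input object must contain.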

2. Run a Prediction

Use REPLICATE_MODELS_PREDICTIONS_CREATE to run inference on any model with optional synchronous waiting and webhooks.

Tool: REPLICATE_MODELS_PREDICTIONS_CREATE
Inputs:
  - model_owner: string (required) -- e.g., "meta", "black-forest-labs"
  - model_name: string (required) -- e.g., "flux-1.1-pro", "sdxl"
  - input: object (required) -- model-specific inputs, e.g., { "prompt": "A sunset over mountains" }
  - wait_for: integer (1-60 seconds, optional) -- synchronous wait for completion
  - cancel_after: string (optional) -- max execution time, e.g., "300s", "5m"
  - webhook: string (optional) -- HTTPS URL for async completion notifications
  - webhook_events_filter: array (optional) -- ["start", "output", "logs", "completed"]

Sync vs Async: Use wait_for (1-60s) for fast models. For long-running jobs, omit it and use webhooks or poll via REPLICATE_PREDICTIONS_LIST.
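The parameters above can be assembled with a small helper. This is a hypothetical sketch (the helper name and validation are mine, not part of the toolkit); it builds the argument object for REPLICATE_MODELS_PREDICTIONS_CREATE and enforces the documented wait_for range and HTTPS webhook rule.

```python
def build_prediction_request(model_owner: str, model_name: str, model_input: dict,
                             wait_for: int = None, webhook: str = None) -> dict:
    # Hypothetical helper: assembles the arguments for
    # REPLICATE_MODELS_PREDICTIONS_CREATE before invoking the MCP tool.
    if wait_for is not None and not (1 <= wait_for <= 60):
        raise ValueError("wait_for must be 1-60 seconds; use webhooks for longer jobs")
    if webhook is not None and not webhook.startswith("https://"):
        raise ValueError("webhook must be an HTTPS URL")
    req = {"model_owner": model_owner, "model_name": model_name, "input": model_input}
    if wait_for is not None:
        req["wait_for"] = wait_for
    if webhook is not None:
        req["webhook"] = webhook
        req["webhook_events_filter"] = ["completed"]
    return req

# Synchronous example: wait up to 30 seconds for a fast image model.
req = build_prediction_request(
    "black-forest-labs", "flux-1.1-pro",
    {"prompt": "A sunset over mountains"}, wait_for=30)
```

For long-running jobs, call the helper with webhook=... and no wait_for, then handle the completion notification asynchronously.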

3. Upload Files for Model Input

Use REPLICATE_CREATE_FILE to upload images, documents, or other binary inputs that models need.

Tool: REPLICATE_CREATE_FILE
Inputs:
  - content: string (required) -- base64-encoded file content
  - filename: string (required) -- e.g., "input.png", "audio.wav" (max 255 bytes UTF-8)
  - content_type: string (default "application/octet-stream") -- MIME type
  - metadata: object (optional) -- custom JSON metadata
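Preparing these arguments from a local file is where most mistakes happen, so here is a minimal sketch (the helper name is mine). It base64-encodes the raw bytes, checks the 255-byte filename limit, and guesses a MIME type, falling back to the documented default.

```python
import base64
import mimetypes

def encode_file_args(path: str, metadata: dict = None) -> dict:
    """Build the arguments for REPLICATE_CREATE_FILE from a local file.

    Reads the file as bytes and base64-encodes it; passing raw binary
    as UTF-8 text fails with decode errors.
    """
    with open(path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    filename = path.replace("\\", "/").rsplit("/", 1)[-1]
    if len(filename.encode("utf-8")) > 255:
        raise ValueError("filename exceeds 255 UTF-8 bytes")
    args = {
        "content": content,
        "filename": filename,
        "content_type": mimetypes.guess_type(filename)[0]
                        or "application/octet-stream",
    }
    if metadata:
        args["metadata"] = metadata
    return args
```

Pass the returned dict straight to REPLICATE_CREATE_FILE; the file ID in the response can then be referenced from prediction inputs.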

4. Read Model Documentation

Use REPLICATE_MODELS_README_GET to access a model's README in Markdown format for detailed usage instructions.

Tool: REPLICATE_MODELS_README_GET
Inputs:
  - model_owner: string (required)
  - model_name: string (required)

5. List Model Versions

Use REPLICATE_MODELS_VERSIONS_LIST to see all available versions of a model, sorted newest first.

Tool: REPLICATE_MODELS_VERSIONS_LIST
Inputs:
  - model_owner: string (required)
  - model_name: string (required)

6. Track Prediction History and Files

Use REPLICATE_PREDICTIONS_LIST to retrieve prediction history, and REPLICATE_FILES_GET/REPLICATE_FILES_LIST to manage uploaded files.

Tool: REPLICATE_PREDICTIONS_LIST
  - Lists all predictions for the authenticated user with pagination

Tool: REPLICATE_FILES_LIST
  - Lists uploaded files, most recent first

Tool: REPLICATE_FILES_GET
  - Get details of a specific file by ID

Known Pitfalls

  • Model-specific input keys -- Each model has unique input keys and types. Using the wrong key causes validation errors. Always call REPLICATE_MODELS_GET first to check the openapi_schema.
  • File upload encoding -- REPLICATE_CREATE_FILE requires base64-encoded content. Binary files treated as text (UTF-8) will fail with decode errors.
  • Public vs deployment paths -- Public models must be run via REPLICATE_MODELS_PREDICTIONS_CREATE. Using deployment-oriented paths causes HTTP 404 failures.
  • Sync wait limits -- wait_for supports 1-60 seconds only. Long-running jobs need async handling via webhooks or polling REPLICATE_PREDICTIONS_LIST.
  • Image model constraints -- Image models like flux-1.1-pro have specific constraints (e.g., max width/height 1440px, valid aspect ratios). Check the model schema first.
  • Stale file references -- Heavy usage creates many uploads. Routinely check REPLICATE_FILES_LIST to avoid using stale file_id references.
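A pre-flight check can catch the image-constraint pitfall before a prediction is sent. The limits below are the illustrative ones from the table above (1440px max, a sample set of aspect ratios); always confirm the real constraints from the model's openapi_schema.

```python
# Illustrative limits only -- confirm against the model's openapi_schema.
MAX_DIM = 1440
VALID_ASPECT_RATIOS = {"1:1", "16:9", "9:16", "4:3", "3:4"}  # sample subset

def check_image_input(width: int, height: int, aspect_ratio: str) -> list:
    """Return a list of constraint violations; an empty list means the input looks OK."""
    problems = []
    if width > MAX_DIM or height > MAX_DIM:
        problems.append(f"width/height exceeds {MAX_DIM}px")
    if aspect_ratio not in VALID_ASPECT_RATIOS:
        problems.append(f"unsupported aspect_ratio {aspect_ratio!r}")
    return problems
```

Failing fast locally is cheaper than a rejected prediction, and the error strings make the fix obvious.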

Quick Reference

  • REPLICATE_MODELS_GET -- Get model details, schema, and metadata
  • REPLICATE_MODELS_PREDICTIONS_CREATE -- Run a prediction on a model
  • REPLICATE_CREATE_FILE -- Upload a file for model input
  • REPLICATE_MODELS_README_GET -- Get model README documentation
  • REPLICATE_MODELS_VERSIONS_LIST -- List all versions of a model
  • REPLICATE_PREDICTIONS_LIST -- List prediction history with pagination
  • REPLICATE_FILES_LIST -- List uploaded files
  • REPLICATE_FILES_GET -- Get file details by ID

Powered by Composio

Source

git clone https://github.com/ComposioHQ/awesome-claude-skills

The skill file is at composio-skills/replicate-automation/SKILL.md in the cloned repository.

Overview

This skill automates Replicate AI model workflows through the Composio MCP integration. It enables running predictions on public models, uploading input files, inspecting model schemas, listing model versions, and tracking prediction history from a unified interface.

How This Skill Works

Connect to the MCP server at https://rube.app/mcp and authenticate with your Replicate API token. Use the dedicated tools (e.g., REPLICATE_MODELS_GET, REPLICATE_MODELS_PREDICTIONS_CREATE, REPLICATE_CREATE_FILE, REPLICATE_MODELS_README_GET, REPLICATE_MODELS_VERSIONS_LIST, REPLICATE_PREDICTIONS_LIST, REPLICATE_FILES_LIST/GET) to inspect schemas, run predictions (with optional wait_for or webhooks), upload inputs, read model docs, and manage history and files.

When to Use It

  • When you need to inspect a model's input/output schema before running predictions (REPLICATE_MODELS_GET).
  • When you want to run an inference on a public model and optionally wait for completion (REPLICATE_MODELS_PREDICTIONS_CREATE).
  • When you must upload binary inputs like images or documents for a model (REPLICATE_CREATE_FILE).
  • When you need the model's own documentation or usage guidance (REPLICATE_MODELS_README_GET).
  • When you want to review available model versions or audit prediction history (REPLICATE_MODELS_VERSIONS_LIST, REPLICATE_PREDICTIONS_LIST, REPLICATE_FILES_LIST/GET).

Quick Start

  1. Add the MCP server at https://rube.app/mcp and connect your Replicate account with API token authentication.
  2. Get model details with REPLICATE_MODELS_GET to learn the required inputs (model_owner, model_name, etc.).
  3. Run a prediction with REPLICATE_MODELS_PREDICTIONS_CREATE, optionally uploading inputs via REPLICATE_CREATE_FILE first, and choose wait_for for synchronous results or a webhook for async delivery.

Best Practices

  • Always call REPLICATE_MODELS_GET first to learn the exact input keys via the openapi_schema.
  • For quick results, use wait_for (1-60s); for long-running jobs, omit it and rely on webhooks or polling via REPLICATE_PREDICTIONS_LIST.
  • If uploading files, encode content as base64 when using REPLICATE_CREATE_FILE and provide a descriptive filename.
  • Ensure model_owner and model_name are correct and match the target model (e.g., stability-ai, meta).
  • Monitor prediction history and uploaded files with REPLICATE_PREDICTIONS_LIST and REPLICATE_FILES_LIST to audit usage.
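The polling side of these practices can be sketched as a small loop. This is a hypothetical helper (the function and its callable argument are stand-ins, not part of the toolkit): it repeatedly consults a REPLICATE_PREDICTIONS_LIST-style source until the target prediction reaches a terminal status.

```python
import time

def poll_for_prediction(list_predictions, prediction_id: str,
                        timeout: float = 300, interval: float = 5) -> dict:
    """Poll until a prediction finishes or the timeout expires.

    `list_predictions` is a stand-in for the REPLICATE_PREDICTIONS_LIST tool
    call; it should return a list of {"id": ..., "status": ...} dicts.
    Terminal statuses follow Replicate's model: succeeded, failed, canceled.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for p in list_predictions():
            if p["id"] == prediction_id and p["status"] in ("succeeded", "failed", "canceled"):
                return p
        time.sleep(interval)
    raise TimeoutError(f"prediction {prediction_id} did not finish within {timeout}s")
```

Prefer webhooks where your environment can receive them; polling like this is the fallback when no HTTPS callback endpoint is available.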

Example Use Cases

  • Inspect the input schema of a public model before sending a prompt to ensure correct keys are provided.
  • Run a prediction on a model and wait for the result within 30 seconds, receiving a synchronous response.
  • Upload a local image as input for a model that requires binary assets using REPLICATE_CREATE_FILE.
  • Open the model's README to follow specific usage instructions and constraints.
  • List all versions of a model to confirm you’re using the latest release and review your prediction history for auditing.
