
mock-authoring

npx machina-cli add skill mpuig/raw/mock-authoring --openclaw
Files (1)
SKILL.md
7.1 KB

Mock Authoring Skill

Use this skill to write dry_run.py files that enable workflows to run without external dependencies.

Purpose of dry_run.py

The dry run file provides mock implementations of external operations so workflows can be tested without:

  • Network calls (APIs, databases, web scraping)
  • File system writes outside the workflow directory
  • Environment variables or credentials
  • Long-running operations
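
A minimal dry_run.py following this pattern might look like the sketch below. The `DryRunContext` stub stands in for the real `raw_runtime` import purely so the example is self-contained; `mock_fetch_user` is an illustrative name, not part of the skill.

```python
# Minimal dry_run.py sketch. In a real workflow you would use
# `from raw_runtime import DryRunContext`; the stub below exists only
# so this example runs standalone.
from dataclasses import dataclass, field

@dataclass
class DryRunContext:  # stand-in for raw_runtime.DryRunContext
    messages: list = field(default_factory=list)

    def log(self, msg: str) -> None:
        self.messages.append(msg)

def mock_fetch_user(ctx: DryRunContext, user_id: int) -> dict:
    """Mock a user-service lookup: no network, no credentials."""
    ctx.log(f"[MOCK] Fetching user {user_id}")
    return {"id": user_id, "name": "Alice", "active": True}

if __name__ == "__main__":
    ctx = DryRunContext()
    print(mock_fetch_user(ctx, 42))
```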

Mock Patterns

1. API Response Mocking

#!/usr/bin/env python3
"""Dry run with mock data."""

from raw_runtime import DryRunContext

def mock_fetch_stock_price(ctx: DryRunContext, symbol: str) -> dict:
    """Mock Yahoo Finance API response."""
    ctx.log(f"[MOCK] Fetching stock price for {symbol}")

    # Return realistic mock data
    return {
        "symbol": symbol,
        "price": 150.23,
        "change": 2.45,
        "change_percent": 1.65,
        "volume": 1234567,
        "timestamp": "2024-01-15T16:00:00Z"
    }

def mock_fetch_news(ctx: DryRunContext, query: str) -> list[dict]:
    """Mock news API response."""
    ctx.log(f"[MOCK] Fetching news for query: {query}")

    return [
        {
            "title": "Sample News Article 1",
            "url": "https://example.com/article1",
            "published": "2024-01-15T10:00:00Z",
            "summary": "This is a mock news article summary."
        },
        {
            "title": "Sample News Article 2",
            "url": "https://example.com/article2",
            "published": "2024-01-15T09:00:00Z",
            "summary": "Another mock article summary."
        }
    ]

2. Database Operation Mocking

def mock_query_database(ctx: DryRunContext, sql: str) -> list[dict]:
    """Mock database query."""
    ctx.log(f"[MOCK] Executing SQL: {sql[:50]}...")

    # Return mock rows
    return [
        {"id": 1, "name": "Alice", "email": "alice@example.com"},
        {"id": 2, "name": "Bob", "email": "bob@example.com"},
    ]

def mock_insert_record(ctx: DryRunContext, table: str, data: dict) -> int:
    """Mock database insert."""
    ctx.log(f"[MOCK] Inserting into {table}: {data}")
    return 123  # Mock generated ID

3. File Operation Mocking

def mock_write_file(ctx: DryRunContext, path: str, content: str) -> bool:
    """Mock file write operation."""
    ctx.log(f"[MOCK] Would write {len(content)} bytes to {path}")
    # Don't actually write - just log
    return True

def mock_upload_to_s3(ctx: DryRunContext, bucket: str, key: str, data: bytes) -> str:
    """Mock S3 upload."""
    ctx.log(f"[MOCK] Would upload {len(data)} bytes to s3://{bucket}/{key}")
    return f"https://s3.amazonaws.com/{bucket}/{key}"  # Mock URL

4. Long-Running Operation Mocking

def mock_train_model(ctx: DryRunContext, dataset_path: str) -> dict:
    """Mock ML model training (skip actual training)."""
    ctx.log(f"[MOCK] Would train model on {dataset_path}")
    ctx.log("[MOCK] Training skipped in dry run")

    # Return mock metrics
    return {
        "accuracy": 0.92,
        "precision": 0.89,
        "recall": 0.91,
        "model_path": "/tmp/mock_model.pkl"
    }

DryRunContext Usage

from raw_runtime import DryRunContext

def mock_operation(ctx: DryRunContext) -> str:
    # Log what would happen
    ctx.log("Starting mock operation")
    ctx.log("Step 1: Connect to API")
    ctx.log("Step 2: Fetch data")
    ctx.log("Step 3: Process results")

    # Return mock result
    return "mock_result"

Realistic Mock Data

Mock data should be realistic enough to test workflow logic:

# Good: Realistic structure and values
def mock_api_response(ctx: DryRunContext) -> dict:
    return {
        "status": "success",
        "data": [
            {"id": "abc123", "value": 42.5, "timestamp": "2024-01-15T10:00:00Z"},
            {"id": "def456", "value": 38.2, "timestamp": "2024-01-15T11:00:00Z"},
        ],
        "pagination": {"page": 1, "total_pages": 5}
    }

# Bad: Oversimplified mock
def mock_api_response(ctx: DryRunContext) -> dict:
    return {"result": "ok"}  # Too simple - doesn't match real API

Error Case Mocking

Include mock error scenarios for testing error handling:

def mock_api_with_error(ctx: DryRunContext, should_fail: bool = False) -> dict:
    """Mock API that can simulate failures."""
    if should_fail:
        ctx.log("[MOCK] Simulating API error")
        raise ConnectionError("Mock API connection failed")

    ctx.log("[MOCK] API call succeeded")
    return {"status": "success", "data": []}
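
A quick sketch of exercising both branches of such a mock (again with a minimal `DryRunContext` stand-in, since `raw_runtime` is assumed to be unavailable outside the workflow environment):

```python
# Exercising both the success and failure paths of an error-capable mock.
# The DryRunContext stub replaces raw_runtime.DryRunContext so the
# sketch runs standalone.
class DryRunContext:
    def __init__(self):
        self.messages = []

    def log(self, msg: str) -> None:
        self.messages.append(msg)

def mock_api_with_error(ctx: DryRunContext, should_fail: bool = False) -> dict:
    if should_fail:
        ctx.log("[MOCK] Simulating API error")
        raise ConnectionError("Mock API connection failed")
    ctx.log("[MOCK] API call succeeded")
    return {"status": "success", "data": []}

ctx = DryRunContext()
assert mock_api_with_error(ctx)["status"] == "success"

try:
    mock_api_with_error(ctx, should_fail=True)
except ConnectionError as exc:
    print(f"handled: {exc}")
```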

Integration with Workflow

In your workflow run.py:

from pathlib import Path

class MyWorkflow(BaseWorkflow[MyParams]):
    def __init__(self, params: MyParams, workflow_dir: Path):
        super().__init__(params, workflow_dir)

        # Check if running in dry mode
        if self.is_dry_run():
            # Import mocks
            from dry_run import mock_fetch_stock_price, mock_write_file
            self.fetch_stock_price = mock_fetch_stock_price
            self.write_file = mock_write_file
        else:
            # Use real implementations
            from tools.yahoo_finance import fetch_stock_price
            from tools.file_writer import write_file
            self.fetch_stock_price = fetch_stock_price
            self.write_file = write_file

Testing Your Mocks

# Run workflow in dry mode
raw run <workflow-id> --dry

# Should complete without:
# - Network errors
# - Missing credentials
# - File system permission errors
# - Long wait times

Common Mistakes

  1. Mocks that call real APIs

    # Bad: Still makes real network call
    def mock_fetch(ctx: DryRunContext):
        import requests
        return requests.get("https://api.example.com")  # Don't do this!
    
    # Good: Pure mock
    def mock_fetch(ctx: DryRunContext):
        ctx.log("[MOCK] Fetching data")
        return {"data": "mock_value"}
    
  2. Accessing environment variables

    # Bad: Requires env vars in dry run
    def mock_auth(ctx: DryRunContext):
        api_key = os.environ["API_KEY"]  # Will fail without env var
    
    # Good: No env var dependency
    def mock_auth(ctx: DryRunContext):
        ctx.log("[MOCK] Using mock credentials")
        return "mock_token"
    
  3. File system writes outside workflow directory

    # Bad: Writes to filesystem
    def mock_save(ctx: DryRunContext, data: str):
        with open("/tmp/output.txt", "w") as f:
            f.write(data)
    
    # Good: Just logs
    def mock_save(ctx: DryRunContext, data: str):
        ctx.log(f"[MOCK] Would save {len(data)} bytes to /tmp/output.txt")
    

Checklist for Good Mocks

  • No network calls (HTTP, database, etc.)
  • No environment variable dependencies
  • No file system writes outside workflow directory
  • Realistic data structures matching real API responses
  • Appropriate use of ctx.log() for observability
  • Fast execution (no sleep or long operations)
  • Include error cases where workflow handles errors
  • Return types match real implementation

Source

https://github.com/mpuig/raw/blob/main/builder/skills/mock-authoring/SKILL.md

Overview

Mock Authoring helps you write dry_run.py files so workflows can run without external dependencies. It provides patterns for mocking API responses, database operations, file interactions, and long-running tasks so tests can simulate real conditions without network calls or credentials.

How This Skill Works

Each mock uses a DryRunContext to log actions and return representative data structures. Implement lightweight functions that mirror real interfaces (e.g., API calls, SQL queries, file writes) and return deterministic, realistic data so the workflow logic behaves the same in dry runs.
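
The key idea is that the real and mock functions share a signature, so the workflow can bind either one at startup. A minimal sketch (all names here are hypothetical, and the `ctx` parameter is omitted for brevity):

```python
# Real and mock implementations share the same signature, so the
# workflow can swap them freely. Names are illustrative only.

def fetch_price(symbol: str) -> float:
    # Real implementation: would hit the network (omitted in this sketch).
    raise NotImplementedError("network call omitted in this sketch")

def mock_fetch_price(symbol: str) -> float:
    # Mock: deterministic stand-in, no I/O.
    return 100.0

def run(fetch, symbol: str) -> str:
    # Workflow logic behaves the same regardless of which fetch is bound.
    price = fetch(symbol)
    return f"{symbol}: {price:.2f}"

print(run(mock_fetch_price, "AAPL"))
```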

When to Use It

  • Testing workflow steps that call external APIs without making real network requests
  • Validating database interactions using mocked query and insert results
  • Executing workflows that would write files or upload assets outside the working directory
  • Running CI or local tests where credentials or environments are not available
  • Simulating long-running tasks (e.g., model training) without incurring real compute time

Quick Start

  1. Import DryRunContext from the dry-run runtime
  2. Implement a mock function that accepts ctx: DryRunContext and returns realistic data while logging actions
  3. Wire your mock into the workflow and run in dry-run mode to observe logs and results

Best Practices

  • Keep mocks deterministic: return stable data to ensure repeatable tests
  • Use realistic data shapes and timestamps that match real APIs and schemas
  • Log mock actions with DryRunContext to trace workflow execution
  • Name mocks clearly (e.g., mock_fetch_stock_price) and colocate with the workflow
  • Validate that mocks do not perform real I/O or network calls during dry runs
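
The determinism point above can be sketched as follows (illustrative names, not part of the skill):

```python
import random

# Bad: nondeterministic mock - assertions on its output vary between runs.
def mock_latency_bad() -> float:
    return random.uniform(10, 500)

# Good: deterministic mock - same value every run, so tests are repeatable.
def mock_latency_good() -> float:
    return 42.0

assert mock_latency_good() == mock_latency_good()
```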

Example Use Cases

  • API Response Mocking: mock_fetch_stock_price and mock_fetch_news return structured dicts/lists with realistic fields
  • Database Operation Mocking: mock_query_database returns mock rows and mock_insert_record returns a generated ID
  • File Operation Mocking: mock_write_file logs intended writes and returns success without touching the filesystem
  • Long-Running Operation Mocking: mock_train_model logs progress and returns mock metrics without training
  • DryRunContext Usage: mock_operation demonstrates logging steps and returning a mock result via DryRunContext

