
mcp-chaining

npx machina-cli add skill parcadei/Continuous-Claude-v3/mcp-chaining --openclaw

MCP Chaining Pipeline

A research-to-implement pipeline that chains 5 MCP tools for end-to-end workflows.

When to Use

  • Building multi-tool MCP pipelines
  • Understanding how to chain MCP calls with graceful degradation
  • Debugging MCP environment variable issues
  • Learning the tool naming conventions for different MCP servers

What We Built

A pipeline that chains these tools:

Step  Server    Tool ID                          Purpose
1     nia       nia__search                      Search library documentation
2     ast-grep  ast-grep__find_code              Find AST code patterns
3     morph     morph__warpgrep_codebase_search  Fast codebase search
4     qlty      qlty__qlty_check                 Code quality validation
5     git       git__git_status                  Git operations
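All tool IDs above follow a `server__tool` naming convention, with a double underscore separating the server name from the tool name. A minimal sketch of splitting such an ID (the helper name `parse_tool_id` is illustrative, not part of the skill):

```python
def parse_tool_id(tool_id: str) -> tuple[str, str]:
    """Split an MCP tool ID like 'nia__search' into (server, tool)."""
    server, _, tool = tool_id.partition("__")
    return server, tool

server, tool = parse_tool_id("morph__warpgrep_codebase_search")
# → ("morph", "warpgrep_codebase_search")
```

Using `partition` (rather than `split`) keeps any later double underscores inside the tool name intact, e.g. for hyphenated servers like `ast-grep`.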

Key Files

  • scripts/research_implement_pipeline.py - Main pipeline implementation
  • scripts/test_research_pipeline.py - Test harness with isolated sandbox
  • workspace/pipeline-test/sample_code.py - Test sample code

Usage Examples

# Dry-run pipeline (preview plan without changes)
uv run python -m runtime.harness scripts/research_implement_pipeline.py \
    --topic "async error handling python" \
    --target-dir "./workspace/pipeline-test" \
    --dry-run --verbose

# Run tests
uv run python -m runtime.harness scripts/test_research_pipeline.py --test all

# View the pipeline script
cat scripts/research_implement_pipeline.py

Critical Fix: Environment Variables

The MCP SDK's get_default_environment() includes only a minimal set of variables (PATH, HOME, etc.), not the full os.environ. We fixed src/runtime/mcp_client.py to pass the complete environment:

# In _connect_stdio method:
full_env = {**os.environ, **(resolved_env or {})}

This ensures API keys from ~/.claude/.env reach subprocesses.

Graceful Degradation Pattern

Each tool is optional: if one is unavailable (disabled, missing API key, etc.), the pipeline skips that step and continues:

async def check_tool_available(tool_id: str) -> bool:
    """Check if an MCP tool is available."""
    server_name = tool_id.split("__")[0]
    # `manager` is the pipeline's MCP client manager, which holds server config.
    server_config = manager._config.get_server(server_name)
    if not server_config or server_config.disabled:
        return False
    return True

# In step function:
if not await check_tool_available("nia__search"):
    return StepResult(status=StepStatus.SKIPPED, message="Nia not available")
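The excerpt above uses `StepResult` and `StepStatus` without showing their definitions. A plausible minimal shape, assuming an enum plus a dataclass with optional payload, message, and error fields (field names beyond those used in the excerpts are guesses):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class StepStatus(Enum):
    SUCCESS = "success"
    FAILED = "failed"
    SKIPPED = "skipped"

@dataclass
class StepResult:
    status: StepStatus
    data: Any = None      # tool output on success
    message: str = ""     # human-readable note, e.g. why a step was skipped
    error: str = ""       # error text on failure

result = StepResult(status=StepStatus.SKIPPED, message="Nia not available")
```

Modeling SKIPPED as a first-class status (rather than reusing FAILED) is what lets the summary distinguish "tool unavailable" from "tool ran and broke".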

Tool Name Reference

nia (Documentation Search)

nia__search              - Universal documentation search
nia__nia_research        - Research with sources
nia__nia_grep            - Grep-style doc search
nia__nia_explore         - Explore package structure

ast-grep (Structural Code Search)

ast-grep__find_code      - Find code by AST pattern
ast-grep__find_code_by_rule - Find by YAML rule
ast-grep__scan_code      - Scan with multiple patterns

morph (Fast Text Search + Edit)

morph__warpgrep_codebase_search  - 20x faster grep
morph__edit_file                 - Smart file editing

qlty (Code Quality)

qlty__qlty_check         - Run quality checks
qlty__qlty_fmt           - Auto-format code
qlty__qlty_metrics       - Get code metrics
qlty__smells             - Detect code smells

git (Version Control)

git__git_status          - Get repo status
git__git_diff            - Show differences
git__git_log             - View commit history
git__git_add             - Stage files

Pipeline Architecture

                    +----------------+
                    |   CLI Args     |
                    | (topic, dir)   |
                    +-------+--------+
                            |
                    +-------v--------+
                    | PipelineContext|
                    | (shared state) |
                    +-------+--------+
                            |
        +---------+---------+---------+---------+
        |         |         |         |         |
    +---v---+ +---v---+ +---v---+ +---v---+ +---v---+
    |  nia  | |ast-grp| | morph | | qlty  | |  git  |
    |search | |pattern| |search | |check  | |status |
    +---+---+ +---+---+ +---+---+ +---+---+ +---+---+
        |         |         |         |         |
        +---------+---------+---------+---------+
                            |
                    +-------v--------+
                    | StepResult[]   |
                    | (aggregated)   |
                    +----------------+
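The shared-state hub in the middle of the diagram can be sketched as a simple dataclass that each step reads from and writes to. Field names here are illustrative (only `errors` appears in the excerpts above), not the skill's actual API:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class PipelineContext:
    """Shared state threaded through every pipeline step."""
    topic: str
    target_dir: str
    results: dict[str, Any] = field(default_factory=dict)  # per-step outputs
    errors: list[str] = field(default_factory=list)        # non-fatal failures

ctx = PipelineContext(topic="async error handling python",
                      target_dir="./workspace/pipeline-test")
ctx.results["nia"] = {"docs": []}          # step 1 output feeds later steps
ctx.errors.append("qlty: binary not found")  # recorded, pipeline continues
```

Using `default_factory` (rather than mutable defaults) gives each pipeline run its own fresh `results` and `errors` containers.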

Error Handling

The pipeline captures errors without failing the entire run:

try:
    result = await call_mcp_tool("nia__search", {"query": topic})
    return StepResult(status=StepStatus.SUCCESS, data=result)
except Exception as e:
    ctx.errors.append(f"nia: {e}")
    return StepResult(status=StepStatus.FAILED, error=str(e))

Creating Your Own Pipeline

  1. Copy the pattern from scripts/research_implement_pipeline.py
  2. Define your steps as async functions
  3. Use check_tool_available() for graceful degradation
  4. Chain results through PipelineContext
  5. Aggregate with print_summary()
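The five numbered points can be sketched as a skeleton like the following. All names here (`Context`, `tool_available`, `step_search`, `run_pipeline`) are illustrative stand-ins for the real helpers in scripts/research_implement_pipeline.py, and the availability check is stubbed out:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Context:
    topic: str
    results: dict = field(default_factory=dict)
    skipped: list = field(default_factory=list)

async def tool_available(tool_id: str) -> bool:
    # Stand-in for the real check_tool_available(); always True here.
    return True

async def step_search(ctx: Context) -> None:
    if not await tool_available("nia__search"):
        ctx.skipped.append("nia__search")   # graceful degradation
        return
    ctx.results["search"] = ["doc hit"]     # would be an MCP call

async def run_pipeline(topic: str) -> Context:
    ctx = Context(topic=topic)
    for step in (step_search,):             # append more steps here
        await step(ctx)
    return ctx

ctx = asyncio.run(run_pipeline("async error handling python"))
```

Each step takes only the shared context, so steps can be reordered, swapped, or disabled without touching their neighbors.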

Source

git clone https://github.com/parcadei/Continuous-Claude-v3
# Skill file: .claude/skills/mcp-chaining/SKILL.md

Overview

An end-to-end research-to-implement pipeline that chains five MCP tools to orchestrate multi-tool workflows. It demonstrates graceful degradation so the pipeline continues even when a tool is unavailable, and includes environment-variable fixes to ensure API keys reach subprocesses. This setup helps debug MCP environments and understand tool naming conventions across MCP servers.

How This Skill Works

The pipeline is implemented in scripts/research_implement_pipeline.py and runs five steps using these tools in order: nia__search, ast-grep__find_code, morph__warpgrep_codebase_search, qlty__qlty_check, and git__git_status. Each step checks availability and can be skipped if the tool is disabled, enabling robust end-to-end workflows. A critical fix propagates the full environment to subprocesses so API keys from ~/.claude/.env reach tools.

When to Use It

  • Building multi-tool MCP pipelines
  • Understanding how to chain MCP calls with graceful degradation
  • Debugging MCP environment variable issues
  • Learning the tool naming conventions for different MCP servers
  • Testing end-to-end MCP workflows in a sandbox

Quick Start

  1. Dry-run the pipeline to preview the plan
  2. Run the tests to validate in an isolated sandbox
  3. View the pipeline script with cat scripts/research_implement_pipeline.py

Best Practices

  • Start with a dry-run to preview the plan before changes
  • Explicitly check tool availability and allow SKIPPED steps
  • Ensure full environment variables are propagated to subprocesses
  • Use the provided test harness and sample_code.py for isolated testing
  • Modularize steps so you can swap or disable tools without breaking the pipeline

Example Use Cases

  • Chain nia__search to find docs, then morph__warpgrep_codebase_search to search the codebase, followed by qlty__qlty_check and git__git_status in a single workflow
  • Debug a failing tool due to missing API key by testing environment-variable propagation
  • Extend the pipeline to include a new MCP server tool, preserving graceful degradation
  • Run dry-run and test modes to validate changes before deployment
  • Use the test workspace workspace/pipeline-test/sample_code.py as a sandbox example
