
Research Implement

npx machina-cli add skill Axect/magi-researchers/research-implement --openclaw

Research Implement Skill

Description

Implements research code based on an existing research plan. Requires a research_plan.md to be present in the active research output directory.

Usage

/research-implement [path/to/research_plan.md]

Arguments

  • $ARGUMENTS — Optional path to the research plan. If not provided, searches for the most recent outputs/*/plan/research_plan.md.

Instructions

Claude-Only Mode

When --claude-only is active (passed from the parent /research pipeline), all Gemini/Codex MCP calls in this skill are replaced with Claude Agent subagents (subagent_type: general-purpose). Subagents use the Read tool to access files instead of @filepath. Output filenames remain unchanged; each output starts with > Source: Claude Agent subagent (claude-only mode, {style}).

MCP Tool Rules

  • Context7: Use mcp__plugin_context7_context7__query-docs for library documentation lookups. Call resolve-library-id first to get the library ID.
  • File References: Use @filepath in the prompt parameter to pass saved artifacts (e.g., @plan/research_plan.md) instead of pasting file content inline. The CLI tools read files directly, preventing context truncation.
  • Web Search: Use web search freely whenever implementation requires checking library APIs, usage patterns, or recent best practices:
    • Claude: Use the WebSearch tool directly
    • When to search: library API changes, implementation examples, algorithm details, dependency compatibility, debugging known issues

Step 0: Locate Research Plan

  1. If a path is provided in $ARGUMENTS, use it directly.
  2. Otherwise, find the most recent research plan:
    • Glob for outputs/*/plan/research_plan.md
    • Select the most recently modified one.
  3. If no plan is found, inform the user and suggest running /magi-researchers:research-brainstorm first, or creating a plan manually.
  4. Read the research plan and identify:
    • The output base directory (parent of plan/)
    • Required algorithms/models to implement
    • Programming language and framework choices
    • Expected inputs and outputs
    • Dependencies needed

Step 1: Environment Setup

  1. Create the src/ directory under the output base if it doesn't exist.
  2. Check if any additional Python dependencies are needed beyond what's in pyproject.toml.
    • If so, inform the user and suggest adding them via uv add.

Step 2: Implementation

  1. Follow the research plan's implementation section strictly.
  2. Write modular, well-structured code in src/:
    • Main entry point (e.g., src/main.py)
    • Separate modules for distinct components (e.g., src/model.py, src/data.py, src/utils.py)
  3. Use Context7 (mcp__plugin_context7_context7__query-docs) to look up library APIs when needed.
  4. Include docstrings for all public functions explaining:
    • Purpose
    • Parameters and return values
    • Any assumptions or limitations
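A docstring meeting these requirements might look like the following; the function itself is a hypothetical placeholder, not part of the skill:

```python
def fit_model(data, n_epochs: int = 10):
    """Train the model specified in the research plan.

    Parameters
    ----------
    data : array-like
        Preprocessed training samples produced by src/data.py.
    n_epochs : int
        Number of training passes over the data.

    Returns
    -------
    The trained model object.

    Notes
    -----
    Assumes `data` has already been normalized; see the plan's
    preprocessing section for details.
    """
    ...
```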

Step 3: Validation

  1. After implementation, do a basic sanity check:
    • Ensure all files are syntactically valid (e.g., uv run python -c "import src.main" or equivalent)
    • Check for obvious issues (unused imports, undefined variables)
  2. Present the implementation summary to the user:
    • List of files created with brief descriptions
    • Any deviations from the research plan and why
    • Known limitations or TODOs
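The syntax check in step 1 can be automated with the stdlib `py_compile` module; this is one possible sketch, equivalent in spirit to the `import src.main` probe:

```python
import py_compile
from pathlib import Path

def check_syntax(src_dir: Path = Path("src")) -> list[str]:
    """Compile every .py file under src/; return a list of error messages."""
    errors = []
    for f in sorted(src_dir.rglob("*.py")):
        try:
            py_compile.compile(str(f), doraise=True)
        except py_compile.PyCompileError as e:
            errors.append(f"{f}: {e}")
    return errors  # empty list means all files are syntactically valid
```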

Step 4: Phase Gate

Before presenting to the user, execute a lightweight quality checkpoint:

  1. Self-assessment: Evaluate the implementation against the following checklist and assign a confidence level (High, Medium, or Low):
    • Code correctness: All files are syntactically valid; key functions produce expected output types
    • Alignment with plan: Implementation matches the research plan's specification; deviations are documented
    • Error handling: Edge cases and invalid inputs are handled gracefully
    • Dependency management: All required libraries are listed; no undeclared imports
  2. Conditional MAGI mini-review (if confidence is Medium or Low):

    • Send the implementation summary + source code to Codex for a focused review targeting the low-scoring checklist items:
    mcp__codex-cli__ask-codex(
      prompt: "Review this research implementation for correctness, plan alignment, error handling, and dependency management. Focus on: {low_scoring_items}\n\n@{output_dir}/plan/research_plan.md\n@{output_dir}/src/*.py"
    )
    

    If --claude-only: Replace the Codex call above with:

    Agent(
      subagent_type: "general-purpose",
      prompt: "You are an Analytical-Convergent code reviewer. Focus on correctness, practical constraints, and implementation quality.
    
    Use the Read tool to read:
    - {output_dir}/plan/research_plan.md
    - All .py files in {output_dir}/src/
    
    Review this research implementation for correctness, plan alignment, error handling, and dependency management. Focus on: {low_scoring_items}
    
    Return your review as structured text (do not save to a file)."
    )
    
  3. Go/No-Go synthesis: Write a brief gate report with:

    • Confidence level and justification
    • Checklist scores (pass/partial/fail for each item)
    • Issues found (if any) and applied fixes
    • Go/No-Go decision
  4. Save the report to src/phase_gate.md.

If the gate returns No-Go, fix the identified issues before presenting to the user. Maximum 1 fix iteration.

Step 5: User Review

Present the implementation for user review:

  • Highlight key design decisions
  • Note any areas where alternative approaches were considered
  • Include the phase gate result summary
  • Ask if modifications are needed before proceeding to testing

Notes

  • Prefer simple, readable code over clever optimizations
  • Match the coding style to the research domain conventions
  • If the plan is ambiguous, make reasonable choices and document them
  • Do not over-engineer — implement exactly what the plan specifies

Source

https://github.com/Axect/magi-researchers/blob/main/skills/research-implement/SKILL.md

Overview

This skill turns a research plan into runnable code within a clean src/ structure. It requires a research_plan.md in the active research output directory and scaffolds a modular Python project (src/main.py, src/model.py, src/data.py, src/utils.py). It enforces documentation, basic validation, and a Phase Gate before presenting an implementation summary.

How This Skill Works

The skill locates the plan from the provided path, or by scanning for the most recent outputs/*/plan/research_plan.md. It then creates a src/ directory under the output base and implements the code in modular components according to the plan. Along the way it uses Context7 for library API lookups when needed, adds docstrings to public functions, and performs a lightweight syntax check and sanity validation as part of a Phase Gate.

When to Use It

  • You have an explicit research plan in plan/research_plan.md and need runnable code scaffolding.
  • You want to convert a plan into modular Python modules (main.py, model.py, data.py, utils.py) under src/.
  • You want basic validation and a Phase Gate report before sharing results with stakeholders.
  • You need library API lookups or dependencies clarified via Context7 during implementation.
  • No plan is present and you need guidance to locate one or brainstorm options.

Quick Start

  1. Provide a path to the research_plan.md, e.g., /research-implement [path/to/research_plan.md], or omit the path to use the most recent plan.
  2. The tool scaffolds a src/ directory under the output base and implements code per the plan.
  3. Validate syntax with uv run python -c "import src.main" and review the implementation summary, including created files and any deviations.

Best Practices

  • Follow the algorithms, models, inputs, and outputs specified in the research plan closely.
  • Keep code modular with a clear main entry point (src/main.py) and separate components (src/model.py, src/data.py, src/utils.py).
  • Document public functions with clear docstrings describing purpose, parameters, return values, and limitations.
  • Validate syntax and perform a basic runtime sanity check (e.g., import src.main) before review.
  • Record deviations from the plan and any TODOs or known limitations in an implementation summary.

Example Use Cases

  • Implement a machine learning experiment plan into a PyTorch project with a runnable src/ structure and outputs.
  • Translate a data extraction and cleaning plan into a Python pipeline using src/data.py and src/utils.py.
  • Turn an NLP research plan into a runnable script that processes data and outputs results to outputs/.
  • Create a reinforcement learning study scaffold with modular components and a simple validation harness.
  • Provide a lightweight validation harness that can be tested by importing src.main and verifying basic outputs.
