# Research Implement

Install: `npx machina-cli add skill Axect/magi-researchers/research-implement --openclaw`
## Description

Implements research code based on an existing research plan. Requires a `research_plan.md` to be present in the active research output directory.
## Usage

```
/research-implement [path/to/research_plan.md]
```
## Arguments

`$ARGUMENTS` — Optional path to the research plan. If not provided, the skill searches for the most recent `outputs/*/plan/research_plan.md`.
## Instructions

### Claude-Only Mode

When `--claude-only` is active (passed from the parent `/research` pipeline), all Gemini/Codex MCP calls in this skill are replaced with Claude Agent subagents (`subagent_type: general-purpose`). Subagents use the Read tool to access files instead of `@filepath`. Output filenames remain unchanged; each output starts with `> Source: Claude Agent subagent (claude-only mode, {style})`.
### MCP Tool Rules

- Context7: Use `mcp__plugin_context7_context7__query-docs` for library documentation lookups. Call `resolve-library-id` first to get the library ID.
- File References: Use `@filepath` in the prompt parameter to pass saved artifacts (e.g., `@plan/research_plan.md`) instead of pasting file content inline. The CLI tools read files directly, preventing context truncation.
- Web Search: Use web search freely whenever implementation requires checking library APIs, usage patterns, or recent best practices:
  - Claude: Use the `WebSearch` tool directly
  - When to search: library API changes, implementation examples, algorithm details, dependency compatibility, debugging known issues
## Step 0: Locate Research Plan

- If a path is provided in `$ARGUMENTS`, use it directly.
- Otherwise, find the most recent research plan:
  - Glob for `outputs/*/plan/research_plan.md`
  - Select the most recently modified one.
- If no plan is found, inform the user and suggest running `/magi-researchers:research-brainstorm` first, or creating a plan manually.
- Read the research plan and identify:
  - The output base directory (parent of `plan/`)
  - Required algorithms/models to implement
  - Programming language and framework choices
  - Expected inputs and outputs
  - Dependencies needed
## Step 1: Environment Setup

- Create the `src/` directory under the output base if it doesn't exist.
- Check if any additional Python dependencies are needed beyond what's in `pyproject.toml`. If so, inform the user and suggest adding them via `uv add`.
## Step 2: Implementation

- Follow the research plan's implementation section strictly.
- Write modular, well-structured code in `src/`:
  - Main entry point (e.g., `src/main.py`)
  - Separate modules for distinct components (e.g., `src/model.py`, `src/data.py`, `src/utils.py`)
- Use Context7 (`mcp__plugin_context7_context7__query-docs`) to look up library APIs when needed.
- Include docstrings for all public functions explaining:
  - Purpose
  - Parameters and return values
  - Any assumptions or limitations
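A hypothetical `src/main.py` skeleton showing the expected docstring coverage; the function and its placeholder logic are illustrative, since real contents come from the research plan:

```python
"""Entry point for the research implementation (illustrative skeleton)."""


def run_experiment(config: dict) -> dict:
    """Run the experiment described by the research plan.

    Parameters:
        config: experiment settings (e.g., paths, hyperparameters).

    Returns:
        A dict of result metrics.

    Limitations:
        Placeholder logic only; a real implementation would load data
        and fit the model specified in research_plan.md.
    """
    # Echo back what was configured so the entry point is trivially testable.
    return {"status": "ok", "config_keys": sorted(config)}


if __name__ == "__main__":
    print(run_experiment({"seed": 42}))
```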
## Step 3: Validation

- After implementation, do a basic sanity check:
  - Ensure all files are syntactically valid (e.g., `uv run python -c "import src.main"` or equivalent)
  - Check for obvious issues (unused imports, undefined variables)
- Present the implementation summary to the user:
  - List of files created with brief descriptions
  - Any deviations from the research plan and why
  - Known limitations or TODOs
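The syntax portion of the sanity check can also be approximated without importing the code (useful when imports have side effects) by parsing each file with `ast`; this helper is a sketch, not part of the skill:

```python
import ast
from pathlib import Path


def check_syntax(src_dir: Path) -> list[str]:
    """Return '<file>: <error>' entries for any .py file that fails to parse."""
    errors = []
    for py in sorted(src_dir.rglob("*.py")):
        try:
            ast.parse(py.read_text(), filename=str(py))
        except SyntaxError as exc:
            errors.append(f"{py.name}: {exc.msg} (line {exc.lineno})")
    return errors
```

An empty list means every file at least compiles; deeper checks (unused imports, undefined names) still need a linter or an actual import.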
## Step 4: Phase Gate

Before presenting to the user, execute a lightweight quality checkpoint:

- Self-assessment: Evaluate the implementation against the following checklist and assign a confidence level (`High`, `Medium`, or `Low`):
| Checklist Item | Criteria |
|---|---|
| Code correctness | All files are syntactically valid; key functions produce expected output types |
| Alignment with plan | Implementation matches the research plan's specification; deviations are documented |
| Error handling | Edge cases and invalid inputs are handled gracefully |
| Dependency management | All required libraries are listed; no undeclared imports |
- Conditional MAGI mini-review (if confidence is `Medium` or `Low`): Send the implementation summary + source code to Codex for a focused review targeting the low-scoring checklist items:

  ```
  mcp__codex-cli__ask-codex(
    prompt: "Review this research implementation for correctness, plan alignment, error handling, and dependency management. Focus on: {low_scoring_items}\n\n@{output_dir}/plan/research_plan.md\n@{output_dir}/src/*.py"
  )
  ```

  If `--claude-only`, replace the Codex call above with:

  ```
  Agent(
    subagent_type: "general-purpose",
    prompt: "You are an Analytical-Convergent code reviewer. Focus on correctness, practical constraints, and implementation quality.
      Use the Read tool to read:
      - {output_dir}/plan/research_plan.md
      - All .py files in {output_dir}/src/
      Review this research implementation for correctness, plan alignment, error handling, and dependency management. Focus on: {low_scoring_items}
      Return your review as structured text (do not save to a file)."
  )
  ```
- Go/No-Go synthesis: Write a brief gate report with:
  - Confidence level and justification
  - Checklist scores (pass/partial/fail for each item)
  - Issues found (if any) and applied fixes
  - Go/No-Go decision
- Save to `src/phase_gate.md`.
If the gate returns No-Go, fix the identified issues before presenting to the user. Maximum 1 fix iteration.
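One way to sketch the gate synthesis in Python; note that the aggregation rule mapping checklist scores to a confidence level is an assumption, since the skill does not mandate one:

```python
from pathlib import Path


def gate_report(scores: dict[str, str], out_dir: Path) -> str:
    """Write phase_gate.md from per-item scores ('pass'/'partial'/'fail').

    Aggregation rule (an assumption, not specified by the skill): any 'fail'
    gives Low confidence, any 'partial' gives Medium, otherwise High.
    """
    if "fail" in scores.values():
        confidence = "Low"
    elif "partial" in scores.values():
        confidence = "Medium"
    else:
        confidence = "High"
    decision = "Go" if confidence != "Low" else "No-Go"
    lines = ["# Phase Gate", f"Confidence: {confidence}", ""]
    lines += [f"- {item}: {score}" for item, score in scores.items()]
    lines += ["", f"Decision: {decision}"]
    (out_dir / "phase_gate.md").write_text("\n".join(lines))
    return decision
```

A `No-Go` return would trigger the single fix iteration described above before anything is shown to the user.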
## Step 5: User Review
Present the implementation for user review:
- Highlight key design decisions
- Note any areas where alternative approaches were considered
- Include the phase gate result summary
- Ask if modifications are needed before proceeding to testing
## Notes
- Prefer simple, readable code over clever optimizations
- Match the coding style to the research domain conventions
- If the plan is ambiguous, make reasonable choices and document them
- Do not over-engineer — implement exactly what the plan specifies
## Source

```
git clone https://github.com/Axect/magi-researchers
```

The skill definition lives at `skills/research-implement/SKILL.md` in that repository ([View on GitHub](https://github.com/Axect/magi-researchers/blob/main/skills/research-implement/SKILL.md)).

## Overview
This skill turns a research plan into runnable code within a clean src/ structure. It requires a research_plan.md in the active research output directory and scaffolds a modular Python project (src/main.py, src/model.py, src/data.py, src/utils.py). It enforces documentation, basic validation, and a Phase Gate before presenting an implementation summary.
## How This Skill Works
It locates the plan from the provided path or by scanning the most recent outputs/*/plan/research_plan.md, then creates a src/ directory under the output base and implements code according to the plan in modular components. It uses Context7 for library API lookups when needed, adds docstrings to public functions, and performs a lightweight syntax check and sanity validation as part of a Phase Gate.
## When to Use It
- You have an explicit research plan in plan/research_plan.md and need production-ready code scaffolding.
- You want to convert a plan into modular Python modules (main.py, model.py, data.py, utils.py) under src/.
- You require an end-to-end validation and Phase Gate before sharing results with stakeholders.
- You need library API lookups or dependencies clarified via Context7 during implementation.
- No plan is present and you need guidance to locate one or brainstorm options.
## Quick Start
- Step 1: Provide a path to the research_plan.md, e.g., /research-implement [path/to/research_plan.md].
- Step 2: The tool scaffolds a src/ directory under the output base and starts implementing per the plan.
- Step 3: Validate syntax with uv run python -c "import src.main" and review the implementation summary, including created files and any deviations.
## Best Practices
- Follow the algorithms, models, inputs, and outputs specified in the research plan closely.
- Keep code modular with a clear main entry point (src/main.py) and separate components (src/model.py, src/data.py, src/utils.py).
- Document public functions with clear docstrings describing purpose, parameters, return values, and limitations.
- Validate syntax and perform a basic runtime sanity check (e.g., import src.main) before review.
- Record deviations from the plan and any TODOs or known limitations in an implementation summary.
## Example Use Cases
- Implement a machine learning experiment plan into a PyTorch project with a runnable src/ structure and outputs.
- Translate a data extraction and cleaning plan into a Python pipeline using src/data.py and src/utils.py.
- Turn an NLP research plan into a runnable script that processes data and outputs results to outputs/.
- Create a reinforcement learning study scaffold with modular components and a simple validation harness.
- Provide a lightweight validation harness that can be tested by importing src.main and verifying basic outputs.