
survey

npx machina-cli add skill Borda/.home/survey --openclaw
<objective>

Survey the literature on an AI/ML topic and return actionable findings: what SOTA methods exist, which fits best for the current use case, and a concrete implementation plan. This skill is an orchestrator — it gathers codebase context, delegates literature search and analysis to the ai-researcher agent, and packages results into a structured report.

This skill is NOT for doing research or designing experiments — use the ai-researcher agent directly for hypothesis generation, ablation design, and experiment validation.

</objective>

<inputs>
  • $ARGUMENTS: topic, method name, or problem description (e.g. "object detection for small objects", "efficient transformers", "self-supervised pretraining for medical images").
</inputs>

<workflow>

Step 1: Understand the codebase context

Before searching, read the current project to extract constraints:

  • Framework in use (PyTorch, JAX, TensorFlow, scikit-learn)?
  • Task being solved (classification, detection, generation, regression)?
  • Constraints (latency, memory, dataset size, compute budget)?
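These constraints can often be read straight from the project's dependency files. A minimal sketch in Python (the file names scanned and the framework list are illustrative assumptions, not part of the skill):

```python
from pathlib import Path

# Package names to look for in dependency declarations (assumed list).
FRAMEWORKS = ["torch", "jax", "tensorflow", "scikit-learn"]

def detect_frameworks(project_root: str = ".") -> list[str]:
    """Scan common dependency files for known ML frameworks."""
    text = ""
    for name in ("requirements.txt", "pyproject.toml", "setup.py"):
        path = Path(project_root) / name
        if path.exists():
            text += path.read_text(encoding="utf-8").lower()
    return [fw for fw in FRAMEWORKS if fw in text]
```

In practice the skill reads the codebase directly, but a helper like this captures the intent: constraints come from the project, not from the user's prompt.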

Step 2: Research & codebase check (run in parallel)

Issue both 2a and 2b in the same response — they are independent and must run simultaneously, not sequentially.

2a: Spawn ai-researcher agent (parallel subagent via Task tool)

Task the ai-researcher with a single objective: find the top 5 papers for $ARGUMENTS, produce a comparison table (method, key idea, benchmark results, compute, code availability), and recommend the single best method given the codebase constraints in Step 1 — with a brief implementation plan. The agent's own workflow handles the research and experiment design details.

Use this prompt scaffold (adapt the constraints from Step 1):

Survey the literature on: <$ARGUMENTS>
Codebase constraints: <framework, Python version, compute budget, existing dependencies from Step 1>
Deliver: comparison table (method, key idea, benchmarks, compute, code available), recommendation for best method, and a 3-step implementation plan for this codebase.
Include a ## Confidence block at the end.
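Filling that scaffold from the Step 1 findings is plain string templating; a hedged sketch (the function name and argument shapes are assumptions, not part of the skill):

```python
def build_research_prompt(topic: str, constraints: str) -> str:
    """Fill the ai-researcher prompt scaffold with topic and constraints."""
    return (
        f"Survey the literature on: {topic}\n"
        f"Codebase constraints: {constraints}\n"
        "Deliver: comparison table (method, key idea, benchmarks, compute, "
        "code available), recommendation for best method, and a 3-step "
        "implementation plan for this codebase.\n"
        "Include a ## Confidence block at the end."
    )
```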

2b: Check for existing implementations (main context)

Use the Grep tool to search the codebase for any existing related code:

  • Pattern: $ARGUMENTS (literal)
  • Glob: **/*.py
  • Output mode: files_with_matches
  • Limit to 10 results
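Outside of the Grep tool, the same files-with-matches search can be approximated in plain Python (a sketch only; the real tool's matching and result ordering may differ):

```python
from pathlib import Path

def find_related_files(pattern: str, root: str = ".", limit: int = 10) -> list[str]:
    """Literal substring search over **/*.py, files-with-matches mode."""
    matches = []
    for path in sorted(Path(root).rglob("*.py")):
        try:
            if pattern in path.read_text(encoding="utf-8", errors="ignore"):
                matches.append(str(path))
        except OSError:
            continue  # unreadable file: skip rather than abort the search
        if len(matches) >= limit:
            break
    return matches
```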

Step 3: Report

## Survey: $ARGUMENTS

### SOTA Overview
[2-3 sentence summary of the current state of the field]

### Method Comparison
| Method | Key Idea | SOTA Result | Compute | Code Available |
|--------|----------|-------------|---------|----------------|
| ...    | ...      | ...         | ...     | Yes/No + link  |

### Recommendation
**Use [method]** because [specific reason matching the current codebase constraints].

### Implementation Plan
1. [step with file/component to change]
2. [step]
3. [step]

### Key Hyperparameters
- [param]: [typical range] — [what it controls]

### Gotchas
- [common failure mode and how to avoid it]

### Integration with Current Codebase
- Files to modify: [list with file:line references]
- New dependencies needed: [package versions]
- Estimated effort: [hours/days]
- Risk assessment: [what could go wrong during integration]

### References
- [Paper title] ([year]) — [link]

### Agent Confidence
| Agent | Score | Gaps |
|---|---|---|
| ai-researcher | [score] | [gaps] |

After printing the report above, write the full content to tasks/output-survey-$(date +%Y-%m-%d).md using the Write tool and notify: → saved to tasks/output-survey-$(date +%Y-%m-%d).md

End your response with a ## Confidence block per CLAUDE.md output standards.

</workflow>

<notes>
  • This skill orchestrates — it gathers context and delegates research to ai-researcher. For direct hypothesis/experiment work, use the agent directly.
  • Link integrity: All URLs cited in the survey report must be fetched and verified before inclusion. Use WebFetch to confirm each URL exists and says what you claim.
  • Follow-up chains:
    • Survey recommends a method for implementation → /feature for TDD-first implementation of the chosen approach
    • Survey integrates into existing code → /refactor first to prepare the module, then /feature
    • Survey reveals security concerns with a dependency → /security for deep audit
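The link-integrity note above can be sketched as two small helpers: extract candidate URLs from the draft report, then probe each one (shown here with stdlib `urllib` as a stand-in for WebFetch; the regex and helper names are assumptions, and reachability alone does not confirm the page says what you claim):

```python
import re
import urllib.request

URL_RE = re.compile(r"https?://[^\s)\]>]+")

def extract_urls(markdown: str) -> list[str]:
    """Pull candidate URLs out of a markdown report, deduplicated."""
    return sorted({u.rstrip(".,") for u in URL_RE.findall(markdown)})

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Best-effort reachability check; content must still be verified by reading it."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False
```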
</notes>

Source

https://github.com/Borda/.home/blob/main/.claude/skills/survey/SKILL.md

Overview

This skill surveys the literature on an AI/ML topic to identify top SOTA methods, compare them in a structured table, and recommend an actionable implementation plan tailored to your codebase. It acts as an orchestrator, delegating deep analysis to the ai-researcher agent and grounding recommendations in your project constraints.

How This Skill Works

First, it reads the codebase to extract framework, task, and resource constraints. Then it runs parallel research: ai-researcher delivers a top-5 paper comparison and a recommended method with a 3-step implementation plan, while Grep checks for any existing related code. Finally, it compiles a structured survey report suitable for stakeholders and stores the results.

When to Use It

  • When you need an evidence-based SOTA survey for a topic and a concrete implementation plan
  • When selecting a method or architecture that must fit your codebase constraints (framework, Python version, compute budget)
  • When you want a structured, repeatable literature review that can be shared with teammates
  • When preparing a stakeholder-friendly report with a recommended path and risk assessment
  • When you want a quick-start, production-oriented plan rather than hypothesis-driven research

Quick Start

  1. Define the topic/method and capture codebase constraints (framework, Python version, budget)
  2. Spawn ai-researcher for the top-5 papers and run a codebase grep for related implementations
  3. Review the generated survey, finalize the recommendation, and save the report to tasks/output-survey-YYYY-MM-DD.md

Best Practices

  • Clarify the topic, method, or problem precisely before starting
  • Document and maintain codebase constraints (framework, versions, dependencies)
  • Insist on top-5 papers with clear benchmarks and available code
  • Cross-check recommendations against your environment and data
  • Annotate the 3-step implementation plan with file-by-file changes and integration notes

Example Use Cases

  • Survey SOTA object detection methods for small objects and propose a tailored integration plan for a PyTorch project
  • Compare efficient transformer variants for a limited-compute deployment in a PyTorch or JAX codebase
  • Evaluate self-supervised pretraining strategies for medical images within TensorFlow code, and select a practical route
  • Assess attention mechanisms for real-time video inference under latency constraints and outline an upgrade path
  • Benchmark data-efficient vision models for a production pipeline and draft an incremental rollout plan
