
research

npx machina-cli add skill tslateman/duet/research --openclaw
Files (1): SKILL.md (4.2 KB)

Technical Research

Overview

Systematic technical research for staff-level software engineering decisions. Gather evidence, synthesize findings, and present actionable recommendations.

Research Workflow

1. Scope the Question

Before searching, clarify:

  • What decision does this research inform?
  • What constraints exist (language, framework, team expertise)?
  • What does "good enough" look like? Avoid rabbit holes.

2. Gather Evidence

Use multiple sources in parallel:

Web search — current state, recent changes, community sentiment

WebSearch: "[topic] 2026" or "[library] vs [alternative]"

Documentation — authoritative specs and APIs

Context7: resolve-library-id then query-docs
WebFetch: official docs, RFCs, specifications

Codebase — existing patterns and constraints

Grep/Glob: how similar problems are solved today

3. Evaluate Sources

Weight sources by reliability:

  1. Official documentation, specs, RFCs
  2. Maintainer statements, changelogs, release notes
  3. Reputable tech blogs, conference talks
  4. Community discussions (HN, Reddit, Discord)
  5. AI-generated content, outdated tutorials

Red flags: no date, no author, SEO-heavy content, claims that contradict official docs
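The reliability tiers and red flags above can be encoded as a simple lookup. A minimal TypeScript sketch; the category names are illustrative assumptions, not part of this skill:

```typescript
// Map a source category to its reliability tier (1 = most reliable).
// Category names are illustrative; adapt them to your own taxonomy.
type SourceCategory =
  | "official-docs" | "spec" | "rfc"
  | "maintainer" | "changelog"
  | "tech-blog" | "conference-talk"
  | "community"
  | "ai-generated" | "outdated-tutorial";

const TIER: Record<SourceCategory, number> = {
  "official-docs": 1, "spec": 1, "rfc": 1,
  "maintainer": 2, "changelog": 2,
  "tech-blog": 3, "conference-talk": 3,
  "community": 4,
  "ai-generated": 5, "outdated-tutorial": 5,
};

// Red-flag handling: a source with no date or no author is down-weighted
// by one tier per missing attribute, capped at the least-reliable tier.
function weight(category: SourceCategory, hasDate: boolean, hasAuthor: boolean): number {
  const penalty = (hasDate ? 0 : 1) + (hasAuthor ? 0 : 1);
  return Math.min(5, TIER[category] + penalty);
}
```

The penalty scheme is one possible design; the point is that red flags should demote a source, never promote it.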

4. Synthesize Findings

Structure output for decision-making:

## Summary

[1-2 sentence answer to the core question]

## Key Findings

- Finding 1 (source)
- Finding 2 (source)
- Finding 3 (source)

## Comparison (if applicable)

| Criterion    | Option A | Option B |
| ------------ | -------- | -------- |
| [Key factor] | ...      | ...      |

## Recommendation

[Clear recommendation with rationale]

## Open Questions

[What remains uncertain, what to monitor]
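The template above is straightforward to fill programmatically once findings are structured. A hypothetical TypeScript sketch (the `Finding` shape is an assumption; the optional comparison table is omitted):

```typescript
interface Finding { text: string; source: string; }

// Render structured findings into the decision-ready report shown above.
function renderReport(
  summary: string,
  findings: Finding[],
  recommendation: string,
  openQuestions: string[],
): string {
  const lines = [
    "## Summary", "", summary, "",
    "## Key Findings", "",
    ...findings.map(f => `- ${f.text} (${f.source})`), "",
    "## Recommendation", "", recommendation, "",
    "## Open Questions", "",
    ...openQuestions.map(q => `- ${q}`),
  ];
  return lines.join("\n");
}
```

Keeping findings as data (text plus source) rather than prose makes the "cite sources" step automatic.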

5. Cite Sources

Always include sources:

Sources:

- [Official Docs](url)
- [Relevant Article](url)

Research Patterns

Library/Framework Evaluation

Investigate:

  1. Maintenance — Last release, commit frequency, issue response time
  2. Adoption — npm downloads, GitHub stars, production users
  3. Documentation — Quality, examples, migration guides
  4. Bundle size — For frontend, check bundlephobia
  5. TypeScript — Native support or @types package quality
  6. Breaking changes — Major version history, upgrade difficulty
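The checklist above can be condensed into a rough health score for side-by-side comparison. A toy TypeScript sketch; the thresholds and weights are illustrative assumptions, not a standard metric:

```typescript
// Toy health score combining the library-evaluation signals above.
// Thresholds and weights are illustrative, not an established benchmark.
interface LibrarySignals {
  daysSinceLastRelease: number;
  weeklyDownloads: number;
  hasTypes: boolean;          // native TypeScript or solid @types package
  hasMigrationGuides: boolean;
}

function healthScore(s: LibrarySignals): number {
  let score = 0;
  if (s.daysSinceLastRelease < 180) score += 2; // released within ~6 months
  if (s.weeklyDownloads > 100_000) score += 2;  // broad adoption
  if (s.hasTypes) score += 1;
  if (s.hasMigrationGuides) score += 1;
  return score; // 0 (avoid) .. 6 (healthy)
}
```

A score like this should rank candidates for deeper review, not replace reading the changelog and issue tracker.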

API/Service Comparison

Investigate:

  1. Pricing — Free tier limits, scaling costs
  2. Rate limits — Requests/second, daily quotas
  3. Latency — P50/P99, geographic distribution
  4. Reliability — SLA, status page history
  5. Auth — OAuth, API keys, complexity
  6. SDK quality — Official vs community, maintenance
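When comparing latency, P50/P99 can be estimated from sampled response times. A minimal nearest-rank sketch in TypeScript (the sample data is made up for illustration):

```typescript
// Nearest-rank percentile: p in (0, 100]; samples need not be pre-sorted.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative response times in milliseconds.
const latenciesMs = [12, 15, 11, 90, 14, 13, 250, 16, 12, 14];
const p50 = percentile(latenciesMs, 50); // typical request
const p99 = percentile(latenciesMs, 99); // tail latency
```

Note how a few slow outliers dominate P99 while barely moving P50, which is why both matter when comparing services.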

Architectural Decisions

Investigate:

  1. Prior art — How do similar systems solve this?
  2. Trade-offs — What does each approach sacrifice?
  3. Reversibility — How hard to change later?
  4. Team fit — Existing expertise, learning curve
  5. Operational cost — Monitoring, debugging, scaling

Tool Usage

Parallel searches — Launch multiple WebSearch calls for different angles simultaneously

Context7 for libraries — Always resolve-library-id first, then query-docs for specific questions

WebFetch for docs — Fetch official documentation pages directly when you need authoritative details

Codebase search — Check how the codebase already handles similar problems before recommending external solutions
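The parallel-search pattern above can be sketched with Promise.all; the `search` function here is a stand-in for whatever search tool your agent actually exposes:

```typescript
// Fan out several research queries at once and collect the results.
// `search` is a placeholder for a real web-search call.
async function gatherEvidence(
  queries: string[],
  search: (q: string) => Promise<string>,
): Promise<string[]> {
  return Promise.all(queries.map(q => search(q)));
}

// Example: a stubbed search that echoes the query.
gatherEvidence(
  ["express vs fastify 2026", "fastify benchmarks", "express maintenance status"],
  async (q) => `results for: ${q}`,
).then((results) => console.log(results.length)); // prints 3
```

Because the queries are independent, firing them concurrently costs nothing extra and bounds total wait time by the slowest single search.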

Output Quality

Research output should:

  • Answer the original question directly
  • Provide evidence, not assertions
  • Acknowledge uncertainty explicitly
  • Include actionable next steps
  • Cite all sources

Reference Material

For detailed research patterns and techniques, see:

  • references/patterns.md — Common research scenarios with examples

See Also

  • /adr — Research informs the decision; ADR captures it
  • skills/FRAMEWORKS.md — Full framework index
  • RECIPE.md — Agent recipe for parallel decomposition (2 workers)

Source

git clone https://github.com/tslateman/duet.git

View on GitHub: https://github.com/tslateman/duet/blob/main/skills/research/SKILL.md


How This Skill Works

Follow a formal workflow: scope the question, gather evidence from web searches, official docs, and the codebase, then evaluate sources by reliability. Synthesize findings into a decision-ready report and cite all sources.

When to Use It

  • Research a topic, library, or tool to inform a decision
  • Investigate a library, API, or framework to understand capabilities and trade-offs
  • Look into architectural approaches or design patterns
  • Compare two or more options (e.g., library A vs library B)
  • Analyze how a technology works and how it would integrate with your system

Quick Start

  1. Scope the Question — Define the decision, the constraints, and what would count as "good enough".
  2. Gather Evidence — Run parallel WebSearch calls, consult official docs, and inspect the codebase.
  3. Synthesize & Report — Produce a concise output with summary, findings, recommendation, and sources.

Best Practices

  • Clearly scope the question and constraints before searching
  • Gather evidence in parallel from web, documentation, and codebase
  • Evaluate sources by reliability and note red flags
  • Synthesize findings into a decision-ready format (summary, key findings, recommendation, open questions)
  • Always cite sources and surface uncertainties

Example Use Cases

  • Decide between REST and GraphQL for an API
  • Compare React vs Svelte for a frontend project
  • Evaluate logging libraries for a Node.js service
  • Assess OAuth vs API keys for service-to-service authentication
  • Compare PostgreSQL vs MySQL for a data-heavy application
