research

    npx machina-cli add skill tslateman/duet/research --openclaw

# Technical Research
## Overview
Systematic technical research for staff-level software engineering decisions. Gather evidence, synthesize findings, and present actionable recommendations.
## Research Workflow

### 1. Scope the Question
Before searching, clarify:
- What decision does this research inform?
- What constraints exist (language, framework, team expertise)?
- What does "good enough" look like? Define it up front to avoid rabbit holes
### 2. Gather Evidence

Use multiple sources in parallel:

**Web search** — current state, recent changes, community sentiment

    WebSearch: "[topic] 2026" or "[library] vs [alternative]"

**Documentation** — authoritative specs and APIs

    Context7: resolve-library-id then query-docs
    WebFetch: official docs, RFCs, specifications

**Codebase** — existing patterns and constraints

    Grep/Glob: how similar problems are solved today
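The parallel-gathering step can be sketched in TypeScript. The fetcher names below (`searchWeb`, `fetchDocs`, `grepCodebase`) are hypothetical stand-ins for whatever search tools your agent actually exposes:

```typescript
// Hypothetical evidence fetchers; in practice these would call your
// agent's WebSearch, WebFetch, and codebase-search tools.
async function searchWeb(query: string): Promise<string> {
  return `web results for: ${query}`;
}
async function fetchDocs(library: string): Promise<string> {
  return `official docs for: ${library}`;
}
async function grepCodebase(pattern: string): Promise<string> {
  return `codebase matches for: ${pattern}`;
}

// Launch all three angles at once instead of serially; allSettled keeps
// the partial evidence even if one source fails.
async function gatherEvidence(topic: string): Promise<string[]> {
  const results = await Promise.allSettled([
    searchWeb(`${topic} 2026`),
    fetchDocs(topic),
    grepCodebase(topic),
  ]);
  return results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
}
```

The point is the shape, not the helpers: fan out once, collect whatever succeeded, and synthesize from there.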
### 3. Evaluate Sources

Weight sources by reliability, from most to least trustworthy:

1. Official documentation, specs, RFCs
2. Maintainer statements, changelogs, release notes
3. Reputable tech blogs, conference talks
4. Community discussions (HN, Reddit, Discord)
5. AI-generated content, outdated tutorials

**Red flags:** no date, no author, SEO-heavy content, contradicts official docs
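As a rough illustration, this weighting can be mechanized. The tiers, weights, and penalties below are this sketch's own invention, not a standard scale:

```typescript
type SourceTier =
  | "official-docs" // specs, RFCs, reference documentation
  | "maintainer"    // changelogs, release notes, maintainer statements
  | "tech-blog"     // reputable blogs, conference talks
  | "community"     // HN, Reddit, Discord threads
  | "unvetted";     // AI-generated content, outdated tutorials

interface Source {
  tier: SourceTier;
  hasDate: boolean;
  hasAuthor: boolean;
  contradictsOfficialDocs: boolean;
}

// Illustrative weights only: higher means more trustworthy.
const TIER_WEIGHT: Record<SourceTier, number> = {
  "official-docs": 1.0,
  maintainer: 0.9,
  "tech-blog": 0.6,
  community: 0.4,
  unvetted: 0.1,
};

// Start from the tier weight, then subtract a penalty per red flag;
// never go below zero.
function reliability(s: Source): number {
  let score = TIER_WEIGHT[s.tier];
  if (!s.hasDate) score -= 0.2;
  if (!s.hasAuthor) score -= 0.2;
  if (s.contradictsOfficialDocs) score -= 0.5;
  return Math.max(0, score);
}
```

In practice the ranking matters more than the exact numbers: a dated, attributed maintainer post should outweigh an anonymous SEO article every time.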
### 4. Synthesize Findings

Structure output for decision-making:

    ## Summary

    [1-2 sentence answer to the core question]

    ## Key Findings

    - Finding 1 (source)
    - Finding 2 (source)
    - Finding 3 (source)

    ## Comparison (if applicable)

    | Criterion    | Option A | Option B |
    | ------------ | -------- | -------- |
    | [Key factor] | ...      | ...      |

    ## Recommendation

    [Clear recommendation with rationale]

    ## Open Questions

    [What remains uncertain, what to monitor]
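If findings are collected as structured data first, rendering that skeleton is mechanical. A minimal sketch (the `Report` shape and field names are this example's own, chosen to mirror the template above):

```typescript
interface Finding {
  text: string;
  source: string;
}

interface Report {
  summary: string;
  findings: Finding[];
  recommendation: string;
  openQuestions: string[];
}

// Render structured research output into the decision-ready markdown skeleton.
function renderReport(r: Report): string {
  return [
    "## Summary",
    r.summary,
    "",
    "## Key Findings",
    ...r.findings.map((f) => `- ${f.text} (${f.source})`),
    "",
    "## Recommendation",
    r.recommendation,
    "",
    "## Open Questions",
    ...r.openQuestions.map((q) => `- ${q}`),
  ].join("\n");
}
```

Keeping findings as data until the last step also makes it easy to drop or reorder them as the picture sharpens.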
### 5. Cite Sources

Always include sources:

    Sources:

    - [Official Docs](url)
    - [Relevant Article](url)
## Research Patterns

### Library/Framework Evaluation

Investigate:

- **Maintenance** — Last release, commit frequency, issue response time
- **Adoption** — npm downloads, GitHub stars, production users
- **Documentation** — Quality, examples, migration guides
- **Bundle size** — For frontend, check bundlephobia
- **TypeScript** — Native support or @types package quality
- **Breaking changes** — Major version history, upgrade difficulty
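Several of these signals can be pulled programmatically: the public npm registry exposes release history at `https://registry.npmjs.org/<name>` (the `time` field) and weekly downloads at `https://api.npmjs.org/downloads/point/last-week/<name>`. A sketch of turning two such signals into a quick health check — the thresholds here are arbitrary illustrations, not community norms:

```typescript
interface PackageSignals {
  lastReleaseDaysAgo: number; // e.g. derived from the registry's "time" field
  weeklyDownloads: number;    // e.g. from the npm downloads endpoint
}

// Arbitrary illustrative thresholds: flag packages that look stale
// or barely used so a human can dig deeper.
function healthCheck(p: PackageSignals): string[] {
  const warnings: string[] = [];
  if (p.lastReleaseDaysAgo > 365) {
    warnings.push("no release in over a year");
  }
  if (p.weeklyDownloads < 1000) {
    warnings.push("low adoption (<1k weekly downloads)");
  }
  return warnings;
}
```

A clean health check is a starting point, not a verdict — small, finished libraries can be perfectly healthy with no releases for a year.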
### API/Service Comparison

Investigate:

- **Pricing** — Free tier limits, scaling costs
- **Rate limits** — Requests/second, daily quotas
- **Latency** — P50/P99, geographic distribution
- **Reliability** — SLA, status page history
- **Auth** — OAuth, API keys, complexity
- **SDK quality** — Official vs community, maintenance
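When a vendor doesn't publish latency numbers, you can measure them yourself: P50/P99 are just percentiles over a sample of request timings. A minimal helper using the nearest-rank method (one of several common percentile definitions):

```typescript
// Nearest-rank percentile: sort the samples ascending, then take the
// value at rank ceil(p/100 * n), clamped to the first element.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: latencies in milliseconds from 10 probe requests.
const latencies = [12, 15, 11, 90, 14, 13, 12, 16, 250, 14];
const p50 = percentile(latencies, 50); // 14 ms
const p99 = percentile(latencies, 99); // 250 ms — tail dominated by one slow call
```

The gap between P50 and P99 is often the interesting number: a service with a fine median but a brutal tail will hurt exactly the requests users notice.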
### Architectural Decisions

Investigate:

- **Prior art** — How do similar systems solve this?
- **Trade-offs** — What does each approach sacrifice?
- **Reversibility** — How hard is it to change later?
- **Team fit** — Existing expertise, learning curve
- **Operational cost** — Monitoring, debugging, scaling
## Tool Usage

- **Parallel searches** — Launch multiple WebSearch calls for different angles simultaneously
- **Context7 for libraries** — Always resolve-library-id first, then query-docs for specific questions
- **WebFetch for docs** — Fetch official documentation pages directly when you need authoritative details
- **Codebase search** — Check how the codebase already handles similar problems before recommending external solutions
## Output Quality
Research output should:
- Answer the original question directly
- Provide evidence, not assertions
- Acknowledge uncertainty explicitly
- Include actionable next steps
- Cite all sources
## Reference Material

For detailed research patterns and techniques, see:

- references/patterns.md — Common research scenarios with examples
## See Also

- /adr — Research informs the decision; the ADR captures it
- skills/FRAMEWORKS.md — Full framework index
- RECIPE.md — Agent recipe for parallel decomposition (2 workers)
## Overview
Systematic technical research for staff-level software engineering decisions. Collect evidence, weigh options, and deliver actionable recommendations to inform choices about libraries, APIs, frameworks, or architectural approaches.
## How This Skill Works
Follow a formal workflow: scope the question, gather evidence from web searches, official docs, and the codebase, then evaluate sources by reliability. Synthesize findings into a decision-ready report and cite all sources.
## When to Use It
- Research a topic, library, or tool to inform a decision
- Investigate a library, API, or framework to understand capabilities and trade-offs
- Look into architectural approaches or design patterns
- Compare two or more options (e.g., library A vs library B)
- Analyze how a technology works and how it would integrate with your system
## Quick Start

1. **Scope the Question** — Define the decision, constraints, and what would count as "good enough".
2. **Gather Evidence** — Run parallel WebSearch, consult official docs, and inspect the codebase.
3. **Synthesize & Report** — Create a concise output with summary, findings, recommendation, and sources.
## Best Practices
- Clearly scope the question and constraints before searching
- Gather evidence in parallel from web, documentation, and codebase
- Evaluate sources by reliability and note red flags
- Synthesize findings into a decision-ready format (summary, key findings, recommendation, open questions)
- Always cite sources and surface uncertainties
## Example Use Cases
- Decide between REST and GraphQL for an API
- Compare React vs Svelte for a frontend project
- Evaluate logging libraries for a Node.js service
- Assess OAuth vs API keys for service-to-service authentication
- Compare PostgreSQL vs MySQL for a data-heavy application