researching-on-the-internet

npx machina-cli add skill ed3dai/ed3d-plugins/researching-on-the-internet --openclaw

Researching on the Internet
Overview
Gather accurate, current, well-sourced information from the internet to inform planning and design decisions. Test hypotheses, verify claims, and find authoritative sources for APIs, libraries, and best practices.
When to Use
Use for:
- Finding current API documentation before integration design
- Testing hypotheses ("Is library X faster than Y?", "Does approach Z work with version N?")
- Verifying technical claims or assumptions
- Researching library comparisons and alternatives
- Finding best practices and current community consensus
Don't use for:
- Information already in codebase (use codebase search)
- General knowledge within Claude's training (just answer directly)
- Project-specific conventions (check CLAUDE.md)
Core Research Workflow
1. Define question clearly - specific beats vague
2. Search official sources first - docs, release notes, changelogs
3. Cross-reference - verify claims across multiple sources
4. Evaluate quality - tier sources (official → verified → community)
5. Report concisely - lead with answer, provide links and evidence
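The workflow above can be sketched in code. This is a minimal illustration, not part of the skill's definition: the source list, the field names, and the "at least two agreeing sources" rule are all assumptions made for the example.

```python
# Sketch of the five-step workflow. The input format and the
# two-source agreement rule are illustrative assumptions.
def research(question, sources):
    """sources: dicts with 'url', 'tier' (1-3), and 'supports' (bool)."""
    # 1. Define the question clearly: specific beats vague.
    if not question.strip():
        raise ValueError("define a specific question first")
    # 2-3. Search official sources first, then cross-reference the rest.
    ordered = sorted(sources, key=lambda s: s["tier"])  # tier 1 first
    # 4. Evaluate quality: here, require agreement from two or more sources.
    agreeing = [s for s in ordered if s["supports"]]
    verified = len(agreeing) >= 2
    # 5. Report concisely: lead with the answer, then links as evidence.
    return {
        "answer": "verified" if verified else "unverified",
        "evidence": [s["url"] for s in agreeing],
    }
```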
Hypothesis Testing
When given a hypothesis to test:
1. Identify falsifiable claims - break hypothesis into testable parts
2. Search for supporting evidence - what confirms this?
3. Search for disproving evidence - what contradicts this?
4. Evaluate source quality - weight evidence by tier
5. Report findings - supported/contradicted/inconclusive with evidence
6. Note confidence level - strong consensus vs single source vs conflicting info
Example:
Hypothesis: "Library X is faster than Y for large datasets"
Search for:
✓ Benchmarks comparing X and Y
✓ Performance documentation for both
✓ GitHub issues mentioning performance
✓ Real-world case studies
Report:
- Supported: [evidence with links]
- Contradicted: [evidence with links]
- Conclusion: [supported/contradicted/mixed] with [confidence level]
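The conclusion labels in the example report can be sketched as a small mapping from evidence counts to a verdict. The function and its thresholds are an illustrative sketch, not a prescribed rule:

```python
# Map counts of supporting vs. contradicting evidence to the
# conclusion labels used in the report format above.
def verdict(supporting: int, contradicting: int) -> str:
    if supporting and not contradicting:
        return "supported"
    if contradicting and not supporting:
        return "contradicted"
    if supporting and contradicting:
        return "mixed"
    return "inconclusive"
```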
Quick Reference
| Task | Strategy |
|---|---|
| API docs | Official docs → GitHub README → Recent tutorials |
| Library comparison | Official sites → npm/PyPI stats → GitHub activity |
| Best practices | Official guides → Recent posts → Stack Overflow |
| Troubleshooting | Error search → GitHub issues → Stack Overflow |
| Current state | Release notes → Changelog → Recent announcements |
| Hypothesis testing | Define claims → Search both sides → Weight evidence |
Source Evaluation Tiers
| Tier | Sources | Usage |
|---|---|---|
| 1 - Most reliable | Official docs, release notes, changelogs | Primary evidence |
| 2 - Generally reliable | Verified tutorials, maintained examples, reputable blogs | Supporting evidence |
| 3 - Use with caution | Stack Overflow, forums, old tutorials | Check dates, cross-verify |
Always note source tier in findings.
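One way to act on the tier table is to weight evidence numerically when tallying support for a claim. The specific weights below are an arbitrary illustration, not part of this skill:

```python
# Illustrative weights: tier 1 counts most, tier 3 least.
# The exact numbers are an assumption made for this sketch.
TIER_WEIGHT = {1: 3, 2: 2, 3: 1}

def weighted_support(findings):
    """findings: (tier, supports) pairs. Positive score = net support."""
    return sum(TIER_WEIGHT[tier] * (1 if supports else -1)
               for tier, supports in findings)
```

A single tier 3 post contradicting a claim then cannot outweigh one tier 1 doc supporting it, which matches the guidance to treat community sources with caution.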
Search Strategies
Multiple approaches:
- WebSearch for overview and current information
- WebFetch for specific documentation pages
- Check MCP servers (Context7, search tools) if available
- Follow links to authoritative sources
- Search official documentation before community resources
Cross-reference:
- Verify claims across multiple sources
- Check publication dates - prefer recent
- Flag breaking changes or deprecations
- Note when information might be outdated
- Distinguish stable APIs from experimental features
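The date check in the list above can be sketched as a tiny helper. The two-year cutoff is an arbitrary illustration, not a rule from this skill:

```python
from datetime import date

# Flag sources older than a cutoff so they get cross-verified
# or discounted. 730 days (~2 years) is an illustrative default.
def is_stale(published: date, today: date, max_age_days: int = 730) -> bool:
    return (today - published).days > max_age_days
```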
Reporting Findings
Lead with answer:
- Direct answer to question first
- Supporting details with source links second
- Code examples when relevant (with attribution)
Include metadata:
- Version numbers and compatibility requirements
- Publication dates for time-sensitive topics
- Security considerations or best practices
- Common gotchas or migration issues
- Confidence level based on source consensus
Handle uncertainty clearly:
- "No official documentation found for [topic]" is valid
- Explain what you searched and where you looked
- Distinguish "doesn't exist" from "couldn't find reliable information"
- Present what you found with appropriate caveats
- Suggest alternative search terms or approaches
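The "lead with answer" structure above can be sketched as a report formatter. The field names and layout are an illustrative sketch, not a required format:

```python
# Build a findings report: answer first, evidence links second,
# then confidence and caveats. Layout is an assumption for this sketch.
def format_report(answer, evidence_links, confidence, caveats=None):
    lines = [f"Answer: {answer}", "Evidence:"]
    lines += [f"- {link}" for link in evidence_links]
    lines.append(f"Confidence: {confidence}")
    if caveats:
        lines.append(f"Caveats: {caveats}")
    return "\n".join(lines)
```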
Common Mistakes
| Mistake | Fix |
|---|---|
| Searching only one source | Cross-reference minimum 2-3 sources |
| Ignoring publication dates | Check dates, flag outdated information |
| Treating all sources equally | Use tier system, weight accordingly |
| Reporting before verification | Verify claims across sources first |
| Vague hypothesis testing | Break into specific falsifiable claims |
| Skipping official docs | Always start with tier 1 sources |
| Over-confident with single source | Note source tier and look for consensus |
Source
https://github.com/ed3dai/ed3d-plugins/blob/main/plugins/ed3d-research-agents/skills/researching-on-the-internet/SKILL.md
Overview
Researching on the Internet gathers accurate, current, well-sourced information from the web to inform planning and design decisions. It helps test hypotheses, verify claims, and locate authoritative sources for APIs, libraries, and best practices.
How This Skill Works
Follow a core workflow: define the question clearly, search official sources first (docs, release notes, changelogs), and cross-reference across multiple sources to verify claims. Evaluate source quality using a tier system (official → verified → community) and report findings concisely with links and evidence.
When to Use It
- Finding current API documentation before integration design
- Testing hypotheses (for example, performance or compatibility claims)
- Verifying technical claims or assumptions
- Researching library comparisons and alternatives
- Finding best practices and current community consensus
Quick Start
- Step 1: Define the research question clearly
- Step 2: Search official sources first (docs, release notes, changelogs)
- Step 3: Cross-reference, assess quality, and report with links and evidence
Best Practices
- Define the research question clearly before starting
- Prioritize official sources (docs, release notes) first
- Cross-check across multiple sources and cite links
- Evaluate source quality using the tier system and note dates
- Report findings succinctly with evidence and links
Example Use Cases
- Benchmarking: compare Library X vs Y using official benchmarks and docs
- Assessing API deprecation and migration guides for a planned upgrade
- Comparing authentication patterns across frameworks with current best practices
- Verifying performance claims with multiple sources before choosing a tech stack
- Listing alternative libraries with pros/cons and community consensus