
researching-on-the-internet

Install:

npx machina-cli add skill ed3dai/ed3d-plugins/researching-on-the-internet --openclaw

Researching on the Internet

Overview

Gather accurate, current, well-sourced information from the internet to inform planning and design decisions. Test hypotheses, verify claims, and find authoritative sources for APIs, libraries, and best practices.

When to Use

Use for:

  • Finding current API documentation before integration design
  • Testing hypotheses ("Is library X faster than Y?", "Does approach Z work with version N?")
  • Verifying technical claims or assumptions
  • Researching library comparisons and alternatives
  • Finding best practices and current community consensus

Don't use for:

  • Information already in codebase (use codebase search)
  • General knowledge within Claude's training (just answer directly)
  • Project-specific conventions (check CLAUDE.md)

Core Research Workflow

  1. Define question clearly - specific beats vague
  2. Search official sources first - docs, release notes, changelogs
  3. Cross-reference - verify claims across multiple sources
  4. Evaluate quality - tier sources (official → verified → community)
  5. Report concisely - lead with answer, provide links and evidence

Hypothesis Testing

When given a hypothesis to test:

  1. Identify falsifiable claims - break hypothesis into testable parts
  2. Search for supporting evidence - what confirms this?
  3. Search for disproving evidence - what contradicts this?
  4. Evaluate source quality - weight evidence by tier
  5. Report findings - supported/contradicted/inconclusive with evidence
  6. Note confidence level - strong consensus vs single source vs conflicting info

Example:

Hypothesis: "Library X is faster than Y for large datasets"

Search for:
✓ Benchmarks comparing X and Y
✓ Performance documentation for both
✓ GitHub issues mentioning performance
✓ Real-world case studies

Report:
- Supported: [evidence with links]
- Contradicted: [evidence with links]
- Conclusion: [supported/contradicted/mixed] with [confidence level]
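The final classification in the report above can be sketched as a small helper. The function name and coarse categories are hypothetical, chosen to mirror the supported/contradicted/mixed/inconclusive outcomes described in the steps.

```python
def verdict(supporting: int, contradicting: int) -> str:
    """Classify a hypothesis from counts of supporting and
    contradicting pieces of evidence."""
    if supporting and not contradicting:
        return "supported"
    if contradicting and not supporting:
        return "contradicted"
    if supporting and contradicting:
        return "mixed"
    return "inconclusive"
```

A "mixed" or "inconclusive" verdict still belongs in the report; it should simply carry a lower confidence level.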

Quick Reference

Task                  Strategy
API docs              Official docs → GitHub README → Recent tutorials
Library comparison    Official sites → npm/PyPI stats → GitHub activity
Best practices        Official guides → Recent posts → Stack Overflow
Troubleshooting       Error search → GitHub issues → Stack Overflow
Current state         Release notes → Changelog → Recent announcements
Hypothesis testing    Define claims → Search both sides → Weight evidence

Source Evaluation Tiers

Tier                    Sources                                                   Usage
1 - Most reliable       Official docs, release notes, changelogs                  Primary evidence
2 - Generally reliable  Verified tutorials, maintained examples, reputable blogs  Supporting evidence
3 - Use with caution    Stack Overflow, forums, old tutorials                     Check dates, cross-verify

Always note source tier in findings.
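One way to apply the tiers is to weight evidence numerically, so a single official source can outweigh several community posts. The specific weights below are hypothetical, not prescribed by the skill.

```python
# Assumed example weights: tier 1 counts far more than tier 3.
TIER_WEIGHT = {1: 3.0, 2: 1.5, 3: 0.5}

def evidence_score(tiers: list[int]) -> float:
    """Sum reliability weights for a list of source tiers."""
    return sum(TIER_WEIGHT[t] for t in tiers)
```

Under these weights, one official doc (`evidence_score([1])`) scores higher than two forum posts (`evidence_score([3, 3])`).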

Search Strategies

Multiple approaches:

  • WebSearch for overview and current information
  • WebFetch for specific documentation pages
  • Check MCP servers (Context7, search tools) if available
  • Follow links to authoritative sources
  • Search official documentation before community resources

Cross-reference:

  • Verify claims across multiple sources
  • Check publication dates - prefer recent
  • Flag breaking changes or deprecations
  • Note when information might be outdated
  • Distinguish stable APIs from experimental features

Reporting Findings

Lead with answer:

  • Direct answer to question first
  • Supporting details with source links second
  • Code examples when relevant (with attribution)

Include metadata:

  • Version numbers and compatibility requirements
  • Publication dates for time-sensitive topics
  • Security considerations or best practices
  • Common gotchas or migration issues
  • Confidence level based on source consensus

Handle uncertainty clearly:

  • "No official documentation found for [topic]" is valid
  • Explain what you searched and where you looked
  • Distinguish "doesn't exist" from "couldn't find reliable information"
  • Present what you found with appropriate caveats
  • Suggest alternative search terms or approaches
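The reporting structure above (answer first, then evidence, then caveats) can be sketched as a simple template. The function name and layout are illustrative assumptions, not a required format.

```python
def format_finding(answer: str, evidence: list[str], caveats: list[str]) -> str:
    """Render a finding that leads with the answer, then lists
    source-linked evidence, then any caveats or uncertainty."""
    lines = [f"Answer: {answer}", "Evidence:"]
    lines += [f"  - {item}" for item in evidence]
    if caveats:
        lines.append("Caveats:")
        lines += [f"  - {item}" for item in caveats]
    return "\n".join(lines)

report = format_finding(
    answer="Yes, streaming is supported in v2.1+",
    evidence=["https://example.com/changelog (tier 1)"],
    caveats=["Only one source found; cross-verify before relying on it"],
)
```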

Common Mistakes

Mistake                              Fix
Searching only one source            Cross-reference a minimum of 2-3 sources
Ignoring publication dates           Check dates, flag outdated information
Treating all sources equally         Use the tier system, weight accordingly
Reporting before verification        Verify claims across sources first
Vague hypothesis testing             Break into specific falsifiable claims
Skipping official docs               Always start with tier 1 sources
Over-confidence in a single source   Note the source tier and look for consensus

Source

git clone https://github.com/ed3dai/ed3d-plugins

Skill file: plugins/ed3d-research-agents/skills/researching-on-the-internet/SKILL.md


Example Use Cases

  • Benchmarking: compare Library X vs Y using official benchmarks and docs
  • Assessing API deprecation and migration guides for a planned upgrade
  • Comparing authentication patterns across frameworks with current best practices
  • Verifying performance claims with multiple sources before choosing a tech stack
  • Listing alternative libraries with pros/cons and community consensus
