research
npx machina-cli add skill damionrashford/RivalSearch-Plugin/research --openclaw
Conduct comprehensive research on: $ARGUMENTS
Follow these steps precisely using RivalSearchMCP tools. Report progress after each step.
Step 1: Web Discovery
Use web_search for broad discovery:
- query: "$ARGUMENTS", num_results: 15, extract_content: true, follow_links: true, max_depth: 2
Identify the top 3-5 most relevant URLs. Note key themes and recurring sources.
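The Step 1 call can be sketched as a plain tool invocation. The `call_tool` helper below is a hypothetical stand-in for whatever MCP client issues the request — it is not part of RivalSearchMCP — and it simply echoes the payload so the parameter shape is visible.

```python
# Hypothetical helper standing in for an MCP client; it echoes the
# request so the documented payload shape is visible.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Step 1: broad web discovery with content extraction and link following.
step1 = call_tool(
    "web_search",
    query="$ARGUMENTS",
    num_results=15,
    extract_content=True,
    follow_links=True,
    max_depth=2,
)
```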
Step 2: Social & Community Pulse
Use social_search to gauge community discussions:
- query: "$ARGUMENTS", platforms: ["reddit", "hackernews", "devto", "producthunt", "medium"], max_results_per_platform: 10, time_filter: "year"
Analyze what practitioners are saying. Note consensus, debate, and emerging opinions.
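Under the same assumption of a hypothetical `call_tool` echo helper (not part of RivalSearchMCP), the Step 2 payload looks like this:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Step 2: community pulse across five platforms, limited to the past year.
step2 = call_tool(
    "social_search",
    query="$ARGUMENTS",
    platforms=["reddit", "hackernews", "devto", "producthunt", "medium"],
    max_results_per_platform=10,
    time_filter="year",
)
```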
Step 3: News Coverage
Use news_aggregation for recent developments:
- query: "$ARGUMENTS", max_results: 15, time_range: "month"
Identify breaking news, announcements, and trend shifts.
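The Step 3 call follows the same shape; again, `call_tool` is a hypothetical stand-in for the MCP client, not a RivalSearchMCP function:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Step 3: recent news coverage, limited to the past month.
step3 = call_tool(
    "news_aggregation",
    query="$ARGUMENTS",
    max_results=15,
    time_range="month",
)
```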
Step 4: Academic Literature
Use scientific_research twice for peer-reviewed sources:
- operation: "academic_search", query: "$ARGUMENTS", max_results: 15, sources: ["semantic_scholar", "arxiv"]
- operation: "academic_search", query: "$ARGUMENTS survey OR review OR overview", max_results: 5
Identify the most cited papers, recent publications, key authors, and methodologies. Look for surveys that summarize the field.
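The two Step 4 queries can be sketched as a pair of calls — one broad, one targeting surveys. As before, `call_tool` is a hypothetical echo helper, not part of RivalSearchMCP:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Broad academic search across Semantic Scholar and arXiv.
broad = call_tool(
    "scientific_research",
    operation="academic_search",
    query="$ARGUMENTS",
    max_results=15,
    sources=["semantic_scholar", "arxiv"],
)

# Narrower search for surveys and reviews that summarize the field.
surveys = call_tool(
    "scientific_research",
    operation="academic_search",
    query="$ARGUMENTS survey OR review OR overview",
    max_results=5,
)
```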
Step 5: Datasets & Implementations
Use scientific_research for datasets:
- operation: "dataset_discovery", query: "$ARGUMENTS", max_results: 10
Use github_search for open source implementations:
- query: "$ARGUMENTS", sort: "stars", max_results: 10, include_readme: true
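Step 5 pairs a dataset query with a code search. The `call_tool` helper is again a hypothetical stand-in for the MCP client:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Dataset discovery via the scientific_research tool.
datasets = call_tool(
    "scientific_research",
    operation="dataset_discovery",
    query="$ARGUMENTS",
    max_results=10,
)

# Open-source implementations, sorted by stars, with READMEs included.
repos = call_tool(
    "github_search",
    query="$ARGUMENTS",
    sort="stars",
    max_results=10,
    include_readme=True,
)
```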
Step 6: Deep Content Retrieval
For the 3-5 most important sources from previous steps:
Use content_operations:
- operation: "retrieve", url: <selected_url>, extraction_method: "markdown"
For any key papers with accessible PDFs, use document_analysis:
- url: <paper_pdf_url>, max_pages: 10, extract_metadata: true, summary_length: 1000
Then analyze the most critical content:
- operation: "analyze", content: <retrieved>, analysis_type: "general", extract_key_points: true, summarize: true
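The three Step 6 calls chain retrieval, PDF analysis, and content analysis. `call_tool` remains a hypothetical echo helper, and the angle-bracket URLs are placeholders to be filled from earlier steps:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

# Retrieve one selected source as markdown ("<selected_url>" is a placeholder).
page = call_tool(
    "content_operations",
    operation="retrieve",
    url="<selected_url>",
    extraction_method="markdown",
)

# Summarize an accessible paper PDF ("<paper_pdf_url>" is a placeholder).
paper = call_tool(
    "document_analysis",
    url="<paper_pdf_url>",
    max_pages=10,
    extract_metadata=True,
    summary_length=1000,
)

# Analyze the retrieved content for key points and a summary.
analysis = call_tool(
    "content_operations",
    operation="analyze",
    content="<retrieved content>",
    analysis_type="general",
    extract_key_points=True,
    summarize=True,
)
```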
Step 7: Compile Report
- Executive Summary — 2-3 paragraph overview of key takeaways
- Key Findings — Numbered list of the most important discoveries
- Web Intelligence — What mainstream sources reveal
- Community Sentiment — Practitioner discussions and opinions
- Recent Developments — News, announcements, trend shifts
- Academic Foundation — Key papers, methodologies, research threads, and key authors
- Datasets & Tools — Available datasets and open source implementations
- Deep Dive Insights — Detailed analysis from primary sources
- Contradictions & Gaps — Where sources disagree or information is missing
- Sources — Complete list of all URLs consulted
Use clean markdown. Cite sources inline in [Source Name](URL) format.
Source
git clone https://github.com/damionrashford/RivalSearch-Plugin.git
The skill definition lives at skills/research/SKILL.md in the repository.
Overview
The research skill conducts thorough investigations across web, social platforms, news, academic databases, and GitHub. It uncovers papers, datasets, and open-source implementations to support literature reviews and topic investigations. The workflow emphasizes identifying consensus, debates, and emerging trends while organizing sources for citation.
How This Skill Works
The skill follows a structured seven-step workflow using RivalSearchMCP tools: web discovery, social pulse, news coverage, academic literature, datasets and implementations, deep content retrieval, and final compilation. It then retrieves the 3–5 most important sources in full, analyzes their content for key points, and flags consensus and gaps. When PDFs are available, it uses document_analysis to extract metadata and summaries, and it cites sources inline in the final report.
When to Use It
- During a literature review to map state-of-the-art on a topic
- Investigating a research question with evidence from multiple sources
- Discovering datasets and open-source implementations for a project
- Tracking recent developments and announcements in a field
- Comparing methodological approaches across papers and reports
Quick Start
- Step 1: Define your topic and run web discovery with web_search.
- Step 2: Pull relevant papers, datasets, and code with scientific_research and github_search.
- Step 3: Retrieve 3–5 key sources with content_operations, analyze content, and compile the final report.
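The three quick-start steps above can be chained in one sketch. As throughout, `call_tool` is a hypothetical echo helper (not part of RivalSearchMCP), and `"<selected_url>"` is a placeholder that a real run would fill from the discovery results:

```python
# Hypothetical helper standing in for an MCP client.
def call_tool(name: str, **params: object) -> dict:
    return {"tool": name, "params": params}

def quick_start(topic: str) -> list[dict]:
    """Minimal sketch of the three quick-start steps for a given topic."""
    discovery = call_tool("web_search", query=topic,
                          num_results=15, extract_content=True)
    papers = call_tool("scientific_research", operation="academic_search",
                       query=topic, max_results=15)
    repos = call_tool("github_search", query=topic,
                      sort="stars", max_results=10)
    # A real run would pick URLs from the results above.
    deep_dive = call_tool("content_operations", operation="retrieve",
                          url="<selected_url>", extraction_method="markdown")
    return [discovery, papers, repos, deep_dive]

calls = quick_start("transformer architectures")
```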
Best Practices
- Define a precise topic and research question before starting
- Query multiple domains (web, social, news, academia, code) to avoid bias
- Record sources with clear citations and preserve metadata
- Prioritize high-quality, peer-reviewed sources and date-sensitive results
- Iterate with follow-up queries and cross-check claims across sources
Example Use Cases
- Perform a literature review on transformer architectures and summarize key papers and datasets
- Assess open-source implementations for a novel ML technique and compare performance claims
- Survey recent AI regulation news and practitioner debates
- Compile a dataset inventory for a computer vision task
- Map consensus and gaps in explainable AI literature