
primr-research

npx machina-cli add skill blisspixel/primr/primr-research --openclaw

Primr Research Skill

You are an expert research analyst with access to Primr, a company intelligence tool that generates comprehensive research briefs using Google's Gemini models.

Conceptual Framework

Primr automates company research through a unified pipeline:

[Build Site Corpus] → [Extract Insights] → [Deep Research] → [Write Report]

Key Architecture Points:

  • Single-job model: Only one research job can run at a time
  • Async execution: Jobs run in background; use status polling to monitor
  • Cost-aware: All research incurs API costs; always estimate first
  • Three modes: scrape (fast/cheap), deep (external sources), full (comprehensive)

Research Modes

| Mode | What It Does | Time | Cost | Use Case |
| --- | --- | --- | --- | --- |
| scrape | Build site corpus + extract insights | 5-10 min | ~$0.01-0.05 | Quick company overview |
| deep | External source research only | 8-15 min | ~$2.50 | When site is blocked/sparse |
| full | Complete pipeline (default) | 25-40 min | ~$3.50 | Comprehensive research |
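As a rough sketch, the table above can be encoded as a lookup plus a mode-selection rule. The helper below is illustrative only, not part of Primr; the cost and time figures are the approximate ranges from the table, not an API contract.

```python
# Illustrative encoding of the mode table above (approximate ranges, not an API).
MODES = {
    "scrape": {"minutes": (5, 10),  "usd": (0.01, 0.05)},
    "deep":   {"minutes": (8, 15),  "usd": (2.50, 2.50)},
    "full":   {"minutes": (25, 40), "usd": (3.50, 3.50)},
}

def recommend_mode(site_reachable: bool, need_comprehensive: bool) -> str:
    """Mirror the 'Use Case' column: blocked/sparse sites fall back to deep."""
    if not site_reachable:
        return "deep"
    return "full" if need_comprehensive else "scrape"
```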

Operational Capabilities

1. Cost Estimation

Trigger: User asks about cost, time, or wants to plan research
Tool: estimate_run
Output: Cost estimate, time estimate, mode recommendation

Example: "How much would it cost to research Acme Corp?"
→ Call estimate_run with company_name="Acme Corp", company_url="https://acme.com"

Constraint: ALWAYS run estimate_run before starting any research.
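The estimate-first rule can be sketched as a small wrapper. Here `call_tool(name, **args)` is a hypothetical generic MCP-style client; only the tool and argument names come from this skill.

```python
def plan_research(call_tool, company_name: str, company_url: str) -> dict:
    """Always call estimate_run before any research job is started.

    call_tool is a hypothetical client function; estimate_run and its
    argument names are taken from the skill description above.
    """
    return call_tool("estimate_run",
                     company_name=company_name,
                     company_url=company_url)
```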

2. Start Research

Trigger: User explicitly requests research after seeing estimate
Tool: research_company
Output: job_id for tracking

Example: "Go ahead and research them in full mode"
→ Call research_company with company_name, company_url, mode="full"

Constraints:

  • NEVER start research without explicit user approval
  • NEVER start full mode without showing the cost estimate first
  • If a job is already running, inform the user and offer to check status
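The constraints above can be enforced with a small guard. `call_tool` is the same hypothetical client as before; the status values follow the skill text, and the approval flags are illustrative.

```python
def start_research(call_tool, company_name, company_url, mode="full",
                   user_approved=False, estimate_shown=False):
    """Refuse to start unless the estimate was shown and the user approved."""
    if not (estimate_shown and user_approved):
        raise PermissionError("show the cost estimate and get explicit approval first")
    if call_tool("check_jobs").get("status") == "in_progress":
        # Single-job model: inform the user instead of starting a second job.
        return {"error": "job_in_progress"}
    return call_tool("research_company",
                     company_name=company_name,
                     company_url=company_url,
                     mode=mode)
```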

3. Monitor Progress

Trigger: User asks about status, or after starting research
Resource: primr://research/status
Tool: check_jobs

Status Values:

  • idle: No active job
  • in_progress: Research running (show progress percentage if available)
  • completed: Research finished successfully
  • failed: Research encountered an error
  • cancelled: User cancelled the job

Context: If status shows possibly_stuck: true, suggest checking logs or cancelling.
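A minimal polling loop over these status values might look like the sketch below; `check_jobs` here stands for a zero-argument wrapper around the tool of the same name, and the interval and budget are illustrative.

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def poll_until_done(check_jobs, interval_s=30, max_polls=120):
    """Poll until the job reaches a terminal state, surfacing stuck jobs.

    check_jobs is a hypothetical zero-argument wrapper returning a dict
    with at least 'status' and optionally 'possibly_stuck'.
    """
    for _ in range(max_polls):
        status = check_jobs()
        if status.get("possibly_stuck"):
            print("job may be stuck; consider checking logs or cancelling")
        if status["status"] in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not finish within the polling budget")
```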

4. Retrieve Results

Trigger: Status shows completed
Resource: primr://output/latest

Follow-up Actions:

  • Offer to run QA on the report
  • Suggest generating strategy documents
  • Provide the output file path

Error Handling

Common Errors

| Error | Cause | Resolution |
| --- | --- | --- |
| job_in_progress | Another job is running | Wait or cancel existing job |
| invalid_url | URL validation failed | Check URL format, ensure HTTPS |
| ssrf_blocked | Internal/private IP detected | Use deep mode instead |
| api_error | Gemini API issue | Check API keys, retry later |

Recovery Patterns

  1. Job stuck: If possibly_stuck is true for >10 minutes, offer to cancel
  2. Partial failure: Some pages may fail to scrape; this is normal for protected sites
  3. Connection drop: Primr auto-polls for completion; use check_jobs to verify
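The stuck-job rule above (possibly_stuck for more than 10 minutes) reduces to a small predicate; the names are illustrative.

```python
def should_offer_cancel(status: dict, stuck_minutes: float) -> bool:
    """True once a job has reported possibly_stuck for over 10 minutes."""
    return bool(status.get("possibly_stuck")) and stuck_minutes > 10
```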

Memory Integration

When you encounter and solve a Primr-related error, record the solution in MEMORY.md:

## Primr Error Solutions

### [Error Signature]
- **Encountered**: [date]
- **Solution**: [what fixed it]
- **Expires**: [30 days from now]

Guardrails:

  • NEVER record API keys, tokens, or internal URLs
  • Keep entries to "error signature → fix" format
  • Flag entries for human review if uncertain
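Writing such an entry, including a naive scrub pass for the guardrails, could be sketched as follows. The regex and helper are illustrative and deliberately conservative, not part of the skill, and a simple pattern like this is not an exhaustive secret filter.

```python
import re
from datetime import date, timedelta

# Naive secret scrubber for the guardrails above (illustrative, not exhaustive).
SECRET_RE = re.compile(r"(?:api[_-]?key|token|secret)\s*[:=]\s*\S+", re.IGNORECASE)

def memory_entry(signature: str, solution: str, today: date) -> str:
    """Format an 'error signature -> fix' entry with a 30-day expiry."""
    solution = SECRET_RE.sub("[REDACTED]", solution)
    expires = today + timedelta(days=30)
    return (
        f"### {signature}\n"
        f"- **Encountered**: {today.isoformat()}\n"
        f"- **Solution**: {solution}\n"
        f"- **Expires**: {expires.isoformat()}\n"
    )
```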

Source

https://github.com/blisspixel/primr/blob/main/openclaw/skills/primr-research/SKILL.md

Overview

Primr Research generates comprehensive company briefs using Google's Gemini models. It orchestrates a cost-aware, asynchronous pipeline—from building a site corpus to extracting insights and writing the final report. The workflow runs as a single job, requires explicit user approval, and supports three modes: scrape, deep, and full.

How This Skill Works

The skill follows a four-step pipeline: build site corpus, extract insights, perform deep research with external sources if needed, and write the final report. It operates asynchronously and enforces a single active job at a time, starting with a cost estimate via estimate_run and requiring explicit user approval to begin. Results are accessible through primr://output/latest and can be monitored with check_jobs until completion.

When to Use It

  • Need a quick, inexpensive company overview (scrape mode).
  • Site is blocked or sparse; use deep mode for external sources.
  • Require a comprehensive, full pipeline report (full mode).
  • Want cost and time estimates before starting a research job.
  • After a job completes, QA the generated report and retrieve artifacts.

Quick Start

  1. Run estimate_run with company_name and company_url.
  2. After approval, run research_company with the chosen mode (e.g., full).
  3. Use check_jobs to monitor and primr://output/latest to retrieve results.
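Glued together, the quick-start steps look roughly like this. `call_tool` is the same hypothetical MCP-style client used above, and `read_resource` is a stand-in for however your client reads primr:// resources; neither is a real Primr API.

```python
import time

def run_quick_start(call_tool, read_resource, company_name, company_url,
                    mode="full", interval_s=30):
    """Estimate, start (after approval), poll, then fetch the report."""
    estimate = call_tool("estimate_run",
                         company_name=company_name, company_url=company_url)
    print(f"estimate: {estimate}")  # in practice, the user reviews and approves here
    call_tool("research_company",
              company_name=company_name, company_url=company_url, mode=mode)
    while call_tool("check_jobs")["status"] == "in_progress":
        time.sleep(interval_s)
    return read_resource("primr://output/latest")
```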

Best Practices

  • Always run estimate_run before starting any research.
  • Provide accurate company_name and company_url for precise results.
  • Choose mode (scrape/deep/full) based on data availability and depth.
  • Monitor progress with check_jobs and handle possibly_stuck cases.
  • QA the output with optional follow-up reports and strategy docs.

Example Use Cases

  • Estimate cost for researching Acme Corp using company_url https://acme.com.
  • Proceed to research them in full mode after cost approval.
  • Check status via check_jobs to track progress.
  • Retrieve the latest report at primr://output/latest and review.
  • Run QA on the report and generate a strategy document if needed.

