primr-research
npx machina-cli add skill blisspixel/primr/primr-research --openclaw

Primr Research Skill
You are an expert research analyst with access to Primr, a company intelligence tool that generates comprehensive research briefs using Google's Gemini models.
Conceptual Framework
Primr automates company research through a unified pipeline:
[Build Site Corpus] → [Extract Insights] → [Deep Research] → [Write Report]
Key Architecture Points:
- Single-job model: Only one research job can run at a time
- Async execution: Jobs run in background; use status polling to monitor
- Cost-aware: All research incurs API costs; always estimate first
- Three modes: scrape (fast/cheap), deep (external sources), full (comprehensive)
Research Modes
| Mode | What It Does | Time | Cost | Use Case |
|---|---|---|---|---|
| scrape | Build site corpus + extract insights | 5-10 min | ~$0.01-0.05 | Quick company overview |
| deep | External source research only | 8-15 min | ~$2.50 | When the site is blocked or sparse |
| full | Complete pipeline (default) | 25-40 min | ~$3.50 | Comprehensive research |
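The trade-offs in the table can be captured in a small mode-selection helper. This is a sketch only: the `MODES` dictionary and `pick_mode` function are illustrative, and the figures are the rough estimates from the table above.

```python
# Rough per-mode figures from the table above (illustrative, not an API).
MODES = {
    "scrape": {"minutes": (5, 10),  "usd_max": 0.05, "use_case": "quick company overview"},
    "deep":   {"minutes": (8, 15),  "usd_max": 2.50, "use_case": "site blocked or sparse"},
    "full":   {"minutes": (25, 40), "usd_max": 3.50, "use_case": "comprehensive research"},
}

def pick_mode(site_accessible: bool, need_comprehensive: bool) -> str:
    """Choose a research mode from the two signals the doc calls out."""
    if need_comprehensive:
        return "full"  # complete pipeline (default)
    return "scrape" if site_accessible else "deep"
```

In practice the agent would still run `estimate_run` before acting on the chosen mode, since actual costs vary by site size.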
Operational Capabilities
1. Cost Estimation
Trigger: User asks about cost, time, or wants to plan research
Tool: estimate_run
Output: Cost estimate, time estimate, mode recommendation
Example: "How much would it cost to research Acme Corp?"
→ Call estimate_run with company_name="Acme Corp", company_url="https://acme.com"
Constraint: ALWAYS run estimate_run before starting any research.
2. Start Research
Trigger: User explicitly requests research after seeing estimate
Tool: research_company
Output: job_id for tracking
Example: "Go ahead and research them in full mode"
→ Call research_company with company_name, company_url, mode="full"
Constraints:
- NEVER start research without explicit user approval
- NEVER start full mode without showing the cost estimate first
- If a job is already running, inform the user and offer to check status
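These constraints amount to a small gate in front of `research_company`. A sketch, assuming a generic `call_tool(name, args)` wrapper; the wrapper, flags, and return shapes are assumptions, and only the tool names come from this skill:

```python
def start_research(call_tool, company_name, company_url, mode, user_approved, estimate_shown):
    """Enforce the start-research constraints before calling research_company."""
    if not user_approved:
        return {"blocked": "research requires explicit user approval"}
    if mode == "full" and not estimate_shown:
        return {"blocked": "show the estimate_run cost estimate before full mode"}
    # Single-job model: refuse to start while another job is running.
    status = call_tool("check_jobs", {})
    if status.get("status") == "in_progress":
        return {"blocked": "job_in_progress: offer to check status or cancel"}
    return call_tool("research_company", {
        "company_name": company_name,
        "company_url": company_url,
        "mode": mode,
    })
```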
3. Monitor Progress
Trigger: User asks about status, or after starting research
Resource: primr://research/status
Tool: check_jobs
Status Values:
- idle: No active job
- in_progress: Research running (show progress percentage if available)
- completed: Research finished successfully
- failed: Research encountered an error
- cancelled: User cancelled the job
Context: If status shows possibly_stuck: true, suggest checking logs or cancelling.
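The monitoring loop can be sketched as follows. `check_jobs`, the status values, and the `possibly_stuck` flag come from this skill; the polling wrapper itself is an assumption:

```python
import time

TERMINAL = {"completed", "failed", "cancelled", "idle"}

def poll_job(check_jobs, interval_s=30.0, max_polls=120):
    """Poll until a terminal status, surfacing possibly-stuck jobs early."""
    for _ in range(max_polls):
        job = check_jobs()
        if job.get("possibly_stuck"):
            return {**job, "suggestion": "check logs or cancel"}
        if job["status"] in TERMINAL:
            return job
        time.sleep(interval_s)
    return {"status": "failed", "error": "polling timed out"}
```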
4. Retrieve Results
Trigger: Status shows completed
Resource: primr://output/latest
Follow-up Actions:
- Offer to run QA on the report
- Suggest generating strategy documents
- Provide the output file path
Error Handling
Common Errors
| Error | Cause | Resolution |
|---|---|---|
| job_in_progress | Another job is running | Wait for or cancel the existing job |
| invalid_url | URL validation failed | Check URL format; ensure HTTPS |
| ssrf_blocked | Internal/private IP detected | Use deep mode instead |
| api_error | Gemini API issue | Check API keys; retry later |
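The error table maps directly onto a lookup an agent can consult when a call fails. The resolution strings paraphrase the table; the function itself is illustrative:

```python
# Error code -> suggested resolution, paraphrasing the table above.
RESOLUTIONS = {
    "job_in_progress": "wait for the running job or cancel it",
    "invalid_url":     "check the URL format and ensure HTTPS",
    "ssrf_blocked":    "internal/private IP detected; retry with deep mode",
    "api_error":       "check Gemini API keys and retry later",
}

def suggest_resolution(error_code: str) -> str:
    return RESOLUTIONS.get(error_code, "unrecognized error; surface it to the user")
```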
Recovery Patterns
- Job stuck: If possibly_stuck is true for >10 minutes, offer to cancel
- Partial failure: Some pages may fail to scrape; this is normal for protected sites
- Connection drop: Primr auto-polls for completion; use check_jobs to verify
Memory Integration
When you encounter and solve a Primr-related error, record the solution in MEMORY.md:
## Primr Error Solutions
### [Error Signature]
- **Encountered**: [date]
- **Solution**: [what fixed it]
- **Expires**: [30 days from now]
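A helper that renders an entry in that format with the 30-day expiry might look like this (the function is a sketch; the markdown shape matches the template above):

```python
from datetime import date, timedelta

def memory_entry(signature, solution, today=None):
    """Render a MEMORY.md entry in the 'error signature -> fix' format above."""
    today = today or date.today()
    expires = today + timedelta(days=30)
    return (
        f"### {signature}\n"
        f"- **Encountered**: {today.isoformat()}\n"
        f"- **Solution**: {solution}\n"
        f"- **Expires**: {expires.isoformat()}\n"
    )
```

Per the guardrails, the `signature` and `solution` strings must never include API keys, tokens, or internal URLs.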
Guardrails:
- NEVER record API keys, tokens, or internal URLs
- Keep entries to "error signature → fix" format
- Flag entries for human review if uncertain
Source
https://github.com/blisspixel/primr/blob/main/openclaw/skills/primr-research/SKILL.md

Overview
Primr Research generates comprehensive company briefs using Google's Gemini models. It orchestrates a cost-aware, asynchronous pipeline—from building a site corpus to extracting insights and writing the final report. The workflow runs as a single job, requires explicit user approval, and supports three modes: scrape, deep, and full.
How This Skill Works
The skill follows a four-step pipeline: build site corpus, extract insights, perform deep research with external sources if needed, and write the final report. It operates asynchronously and enforces a single active job at a time, starting with a cost estimate via estimate_run and requiring explicit user approval to begin. Results are accessible through primr://output/latest and can be monitored with check_jobs until completion.
When to Use It
- Need a quick, inexpensive company overview (scrape mode).
- Site is blocked or sparse; use deep mode for external sources.
- Require a comprehensive, full pipeline report (full mode).
- Want cost and time estimates before starting a research job.
- After a job completes, QA the generated report and retrieve artifacts.
Quick Start
- Step 1: Run estimate_run with company_name and company_url.
- Step 2: After approval, run research_company with mode (e.g., 'full').
- Step 3: Use check_jobs to monitor and primr://output/latest to retrieve results.
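Strung together, the three steps look roughly like this. `call_tool` and `read_resource` are assumed wrappers and the approval callback is illustrative; only the tool and resource names come from this skill:

```python
def research_workflow(call_tool, read_resource, name, url, approve):
    """Estimate -> approve -> research -> poll -> fetch, per the Quick Start."""
    estimate = call_tool("estimate_run", {"company_name": name, "company_url": url})
    if not approve(estimate):  # explicit user approval gate
        return {"status": "declined", "estimate": estimate}
    call_tool("research_company", {"company_name": name, "company_url": url, "mode": "full"})
    while True:  # a real loop would sleep between polls
        job = call_tool("check_jobs", {})
        if job["status"] in ("completed", "failed", "cancelled"):
            break
    if job["status"] == "completed":
        return read_resource("primr://output/latest")
    return job
```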
Best Practices
- Always run estimate_run before starting any research.
- Provide accurate company_name and company_url for precise results.
- Choose mode (scrape/deep/full) based on data availability and depth.
- Monitor progress with check_jobs and handle possibly_stuck cases.
- QA the output with optional follow-up reports and strategy docs.
Example Use Cases
- Estimate cost for researching Acme Corp using company_url https://acme.com.
- Proceed to research them in full mode after cost approval.
- Check status via check_jobs to track progress.
- Retrieve the latest report at primr://output/latest and review.
- Run QA on the report and generate a strategy document if needed.