
Classify Review

npx machina-cli add skill jackhendon/ecom-feedback-intelligence/classify-review --openclaw
Files (1)
SKILL.md
2.1 KB

Skill: classify-review

Deep-dive analysis of a single review. Useful for spot-checking, edge cases, or understanding how the classifier handles ambiguous reviews.

Usage

/classify-review

After running the command, paste the review text (or provide a rating plus the text). If you don't supply a review, Claude will prompt for one.

You can also pass everything inline: /classify-review rating=2 "The product was lovely but delivery was terrible"


Steps

Step 1: Accept input

If the user hasn't provided a review, ask:

Paste the review text (include the star rating if you have it):

Parse:

  • rating: integer 1–5 (if not provided, ask or infer from language)
  • text: the review body
  • date: optional (default to today if not provided — affects recency in priority score)
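The parsing step above can be sketched in a few lines; the `ParsedReview` container and the `rating=N` prefix handling are illustrative assumptions, not part of the skill spec:

```python
import re
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ParsedReview:
    text: str
    rating: Optional[int] = None            # 1-5; ask or infer if missing
    review_date: date = field(default_factory=date.today)  # affects recency

def parse_input(raw: str) -> ParsedReview:
    """Pull an optional 'rating=N' prefix out of pasted review input."""
    match = re.search(r"rating\s*=\s*([1-5])", raw)
    rating = int(match.group(1)) if match else None
    # Strip the rating token and any surrounding quotes from the body
    text = re.sub(r"rating\s*=\s*[1-5]\s*", "", raw).strip().strip('"')
    return ParsedReview(text=text, rating=rating)
```

If no `rating=` token is present, `rating` stays `None` and the skill asks for it or infers it from the language, per the rules above.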

Step 2: Classify

Apply .claude/rules/analysis-standards.md and .claude/rules/theme-taxonomy.md.

Sentiment analysis:

  • Label + confidence + 1–3 drivers
  • Explain the reasoning briefly (1–2 sentences)

Theme classification:

  • 1–3 themes (primary first)
  • For each theme: quote the specific phrase that justified inclusion

Priority score:

  • Calculate using the formula from .claude/rules/analysis-standards.md
  • Show the working: (6 - rating) × weight × confidence × recency = score
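The score itself is plain arithmetic. The theme weight and recency factor come from .claude/rules/analysis-standards.md, which isn't shown here, so the values in this sketch are illustrative assumptions:

```python
def priority_score(rating: int, theme_weight: float,
                   confidence: float, recency: float) -> float:
    """(6 - rating) x weight x confidence x recency, per the skill's formula.

    Lower star ratings push the score up; the card displays the
    result against a 12.0 ceiling and a HIGH PRIORITY threshold of 6.0.
    """
    return (6 - rating) * theme_weight * confidence * recency

# Worked example matching the card's "Working:" line (weights assumed):
score = priority_score(rating=2, theme_weight=1.5, confidence=0.9, recency=1.0)
# (6 - 2) * 1.5 * 0.9 * 1.0 = 5.4 -> below the HIGH PRIORITY >= 6.0 threshold
```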

Step 3: Output analysis card

REVIEW ANALYSIS
───────────────────────────────────────
Rating:    ★N
Text:      "[full review text]"

SENTIMENT: POSITIVE/NEUTRAL/NEGATIVE (confidence: 0.X)
  Drivers: driver1, driver2

THEMES
  Primary:   theme_slug — "[quoted evidence]"
  Secondary: theme_slug — "[quoted evidence]"

PRIORITY SCORE: N.N / 12.0
  Working: (6 - N) × N.N × N.N × N.N = N.N
  Threshold: [HIGH PRIORITY ≥ 6.0 | below threshold]

PM NOTE
  [One sentence: what a PM should do with this review, if anything]
───────────────────────────────────────
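If you want to reproduce the card programmatically (for example, to log spot-checks), a minimal renderer might look like the following; all parameter names are illustrative, not defined by the skill:

```python
def render_card(rating, text, sentiment, confidence, drivers,
                themes, score, working):
    """Format a single review analysis in the card layout shown above."""
    rule = "─" * 39
    lines = [
        "REVIEW ANALYSIS", rule,
        f"Rating:    {'★' * rating}",
        f'Text:      "{text}"', "",
        f"SENTIMENT: {sentiment} (confidence: {confidence})",
        f"  Drivers: {', '.join(drivers)}", "",
        "THEMES",
    ]
    # themes maps a label ("Primary", "Secondary") to (slug, quoted evidence)
    for label, (slug, evidence) in themes.items():
        lines.append(f'  {label}: {slug} — "{evidence}"')
    status = "HIGH PRIORITY ≥ 6.0" if score >= 6.0 else "below threshold"
    lines += [
        "",
        f"PRIORITY SCORE: {score:.1f} / 12.0",
        f"  Working: {working}",
        f"  Threshold: {status}",
        rule,
    ]
    return "\n".join(lines)
```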

Step 4: Offer follow-up

Run /analyze-reviews to process all reviews, or run /classify-review again to analyze another.

Source

git clone https://github.com/jackhendon/ecom-feedback-intelligence

View on GitHub: https://github.com/jackhendon/ecom-feedback-intelligence/blob/main/.claude/skills/classify-review/SKILL.md

Overview

Classify Review performs a deep-dive analysis of a single customer review. It extracts sentiment with drivers, identifies primary and secondary themes with quoted evidence, and computes a priority score to help prioritize responses or training data. This focused approach helps spot-check edge cases and understand how the classifier handles ambiguous wording.

How This Skill Works

You provide the review text (and rating if available). The system applies the defined analysis standards and the theme taxonomy to output a sentiment label with confidence and 1–3 drivers, tag 1–3 themes with quoted justification, and compute a priority score using the specified formula. The result is an analysis card you can review or hand to product management for follow-up.

When to Use It

  • Spot-check ambiguous reviews to understand how the classifier handles tricky wording
  • QA reviews where rating conflicts with the language or where drivers are unclear
  • Prioritize reviews for response, moderation, or product fixes based on potential impact
  • Validate theme coverage when labeling data for training or evaluation
  • Perform a quick single-review sanity check before bulk analysis

Quick Start

  1. Paste the review text (and rating if you have it) into /classify-review
  2. Let the tool apply the analysis standards and theme taxonomy to produce sentiment, drivers, and themes
  3. Review the analysis card and use it for prioritization, PM notes, or training data labeling

Best Practices

  • Always provide the rating if you have it; if not, let the system infer from language
  • Quote the exact phrase that justifies each theme to maintain traceability
  • List primary and secondary themes in order of relevance with evidence
  • Show the working for the priority score using the standard formula (6 − rating) × weight × confidence × recency
  • Preserve the original review text and date to keep context for future audits

Example Use Cases

  • Example 1: 'Love the camera quality, but delivery was delayed' — POSITIVE sentiment; drivers: 'love the camera quality' (positive), 'delivery was delayed' (negative)
  • Example 2: 'The product is good, but battery life is short' — NEGATIVE sentiment; drivers: 'battery life is short'
  • Example 3: 'It's okay, nothing special' — NEUTRAL sentiment; drivers: 'okay, nothing special'
  • Example 4: 'Price is high, but quality is excellent' — POSITIVE sentiment; drivers: 'quality is excellent' (positive), 'price is high' (negative); themes: price, quality
  • Example 5: 'Exceeded expectations on performance and design' — POSITIVE sentiment; drivers: 'performance', 'design'
