
Recommend

Verified

@ivangdavila

npx machina-cli add skill @ivangdavila/recommend --openclaw
Files (1)
SKILL.md
2.6 KB

Core Loop

Context → Preferences → Research → Match → Recommend

Every recommendation requires: knowing the user + knowing the options.

Check sources.md for where to find user context. Check categories.md for domain-specific factors.


Step 1: Context Gathering

Before recommending, search user context. See sources.md for full source list.

Minimum output: 3-5 relevant user signals before proceeding. If insufficient, ask targeted questions.
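The gating rule above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself; the function name, the signal shape, and the 3-5 thresholds mirroring the text are all assumptions:

```python
# Hypothetical context gate: proceed only with 3-5 relevant signals,
# otherwise fall back to asking targeted questions (per Step 1 above).
def gather_context(signals, min_signals=3, max_signals=5):
    relevant = [s for s in signals if s.get("relevant")]
    if len(relevant) < min_signals:
        # Not enough signal to recommend responsibly.
        return {"ready": False, "action": "ask targeted questions"}
    return {"ready": True, "signals": relevant[:max_signals]}
```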


Step 2: Preference Extraction

From gathered context, extract:

| Dimension | Question |
| --- | --- |
| Values | What matters most? (Quality, price, speed, novelty, safety) |
| Constraints | Hard limits? (Budget, time, dietary, ethical) |
| History | What worked? What disappointed? |
| Mood | Adventurous or safe? Exploring or comfort? |

Output: 3-5 bullet preference profile for this request.
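One way to hold the extracted profile is a simple record type. A minimal sketch, assuming nothing about the skill's actual storage; the class and field names simply mirror the four dimensions above:

```python
from dataclasses import dataclass

# Hypothetical preference profile; fields mirror the dimension table above.
@dataclass
class PreferenceProfile:
    values: list[str]        # what matters most, ordered by priority
    constraints: list[str]   # hard limits; violating any disqualifies a candidate
    history: list[str]       # past wins and disappointments
    mood: str                # e.g. "adventurous" or "safe"

profile = PreferenceProfile(
    values=["quality", "price"],
    constraints=["budget <= 1200 USD"],
    history=["liked ThinkPad keyboards", "disappointed by short battery life"],
    mood="safe",
)
```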


Step 3: Research Options

Now—and only now—research candidates:

  • Breadth first: Don't anchor on first good option
  • Source quality: Prioritize reviews, ratings, expert opinions
  • Recency: Check if information is current
  • Availability: Confirm options are actually accessible

Output: Shortlist of 3-7 viable candidates with key attributes.


Step 4: Match & Rank

Score each candidate against the preference profile:

Candidate → Values alignment + Constraint fit + History match + Mood fit

Disqualify anything that violates hard constraints.

Rank by total alignment, not just one dimension.
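The match-and-rank step can be sketched as follows, assuming each candidate carries per-dimension scores in [0, 1] and a precomputed hard-constraint flag (both assumptions, not part of the skill spec):

```python
# Hypothetical match-and-rank sketch: disqualify hard-constraint violations
# outright, then rank by total alignment across all dimensions, never by
# a single dimension alone.
def rank_candidates(candidates):
    viable = [c for c in candidates if not c["violates_constraint"]]
    return sorted(
        viable,
        key=lambda c: c["values"] + c["constraint_fit"] + c["history"] + c["mood"],
        reverse=True,
    )

shortlist = [
    {"name": "A", "values": 0.9, "constraint_fit": 0.8, "history": 0.8,
     "mood": 0.7, "violates_constraint": False},
    {"name": "B", "values": 1.0, "constraint_fit": 0.9, "history": 0.9,
     "mood": 0.9, "violates_constraint": True},   # best scores, but disqualified
    {"name": "C", "values": 0.6, "constraint_fit": 0.9, "history": 0.8,
     "mood": 0.8, "violates_constraint": False},
]
ranked = rank_candidates(shortlist)
```

Note that B is dropped despite the highest raw scores: a hard-constraint violation is a veto, not a penalty.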


Step 5: Recommend

Present 1-3 recommendations:

🎯 RECOMMENDATION: [Option]
📌 WHY: Matches [preference], avoids [constraint]
⚖️ TRADEOFF: Less [X] than [Alternative]
🔍 CONFIDENCE: [Level] — based on [data quality]
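The card above could be rendered with a small formatter. A sketch only; the function name and argument list are assumptions:

```python
# Hypothetical renderer for the recommendation template above.
def format_recommendation(option, why, tradeoff, confidence, basis):
    return (
        f"🎯 RECOMMENDATION: {option}\n"
        f"📌 WHY: {why}\n"
        f"⚖️ TRADEOFF: {tradeoff}\n"
        f"🔍 CONFIDENCE: {confidence} — based on {basis}"
    )

card = format_recommendation(
    "Laptop A",
    "Matches quality focus, stays under budget",
    "Less battery life than Laptop C",
    "High",
    "recent expert reviews",
)
```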

Adaptive Learning

After each recommendation:

  • Track outcome: Accepted? Modified? Rejected?
  • Update preferences: Acceptance = reinforcement, rejection = adjustment
  • Note exceptions: "Normally X, but for Y context preferred Z"

Store learnings in memory for future recommendations.
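The feedback loop above can be sketched as a weight update: shift dimension weights toward (on acceptance) or away from (on rejection) the dimensions where the chosen candidate scored highly, and record exceptions as plain notes. All names and the update rule are illustrative assumptions:

```python
# Hypothetical adaptive-learning sketch for the outcome-tracking loop above.
def update_weights(weights, candidate_scores, accepted, rate=0.1):
    # Acceptance reinforces the dimensions that drove the pick;
    # rejection dampens them. Weights never go below zero.
    sign = rate if accepted else -rate
    return {dim: max(0.0, w + sign * candidate_scores[dim])
            for dim, w in weights.items()}

memory = {
    "weights": {"values": 1.0, "constraint_fit": 1.0, "history": 1.0, "mood": 1.0},
    "exceptions": [],
}
# User accepted a pick that scored high on values, low on mood:
memory["weights"] = update_weights(
    memory["weights"],
    {"values": 0.9, "constraint_fit": 0.8, "history": 0.7, "mood": 0.2},
    accepted=True,
)
memory["exceptions"].append(
    "Normally quiet spots, but for team events preferred lively ones"
)
```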


Traps

  • Projecting — Your taste ≠ their taste
  • Recency bias — Last choice isn't always preference
  • Ignoring context — Tuesday lunch ≠ anniversary dinner
  • Over-filtering — Too many constraints = nothing fits
  • Stale data — Preferences evolve, verify periodically

Recommendations are predictions. More context = better predictions.

Source

git clone https://clawhub.ai/ivangdavila/recommend

View on GitHub

Overview

Recommend delivers personalized options by gathering user signals, extracting preferences, and researching viable candidates. It then matches candidates against hard constraints, past history, and current mood to anticipate what the user actually expects.

How This Skill Works

Follows the core loop: Context → Preferences → Research → Match → Recommend. It relies on concrete user signals (3-5 minimum) and a pool of options to produce a 3-7 candidate shortlist, scored across values alignment, constraint fit, history, and mood, before presenting 1-3 picks.

When to Use It

  • When you need tailored product or content recommendations with explicit preferences
  • When hard constraints (budget, time, dietary, ethical) must be respected
  • When options are numerous and require a curated shortlist
  • When user context is incomplete and targeted clarifying questions are needed
  • When you want to improve future recommendations via adaptive learning from feedback

Quick Start

  1. Gather 3-5 relevant user signals from available sources
  2. Extract a 3-5 item preference profile (values, constraints, history, mood)
  3. Research 3-7 candidates, score them, and present 1-3 recommendations with rationale and confidence

Best Practices

  • Gather 3-5 relevant user signals before proceeding
  • Extract a 3-5 bullet preference profile (values, constraints, history, mood)
  • Research 3-7 candidates with breadth, verifying quality, recency, and availability
  • Score and rank candidates on multi-criteria alignment; disqualify any that violate hard constraints
  • Present 1-3 recommendations with rationale and a confidence rating; learn from outcomes

Example Use Cases

  • Recommend laptop options within a budget, aligned to performance, battery life, and brand preferences
  • Suggest recipes or meal plans that match dietary constraints and mood (adventurous vs. safe)
  • Propose travel itineraries that fit past experiences, timing, and trip goals
  • Curate streaming or reading picks aligned with taste, reviews, and current trends
  • Recommend professional tools or software, updated with user feedback and usage history
