
prompt-coach

npx machina-cli add skill aroyburman-codes/compound-product-management/prompt-coach --openclaw
Files (1): SKILL.md (5.9 KB)

Prompt Coach

Coach the user toward better prompts. Intervene only when it matters. Show, don't lecture.

When to Nudge

Fire coaching only when one of these conditions is true:

  1. Missing goal — the prompt says what but not why or what success looks like
  2. No constraints — no mention of language, framework, style, or boundaries
  3. Overloaded request — multiple unrelated tasks bundled into one prompt
  4. Dump-and-hope — code or content pasted with no explicit question
  5. Repeated failure — the same type of request has failed or needed correction 2+ times this session
  6. Undefined success — "make this better" or "fix this" with no measurable target
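The six nudge conditions above can be sketched as a simple trigger check. This is a minimal illustration, not part of the skill itself; the `PromptSignals` fields are hypothetical names for signals an agent might extract from a prompt.

```python
from dataclasses import dataclass

@dataclass
class PromptSignals:
    """Hypothetical signals extracted from a prompt before coaching."""
    has_goal: bool              # says why / what success looks like
    has_constraints: bool       # language, framework, style, boundaries
    unrelated_tasks: int        # distinct unrelated tasks bundled in
    has_explicit_question: bool # pasted content comes with a question
    repeated_failures: int      # same request type corrected this session
    has_success_criteria: bool  # measurable target defined

def coaching_triggers(s: PromptSignals) -> list[str]:
    """Return the names of any nudge conditions the prompt fires."""
    triggers = []
    if not s.has_goal:
        triggers.append("missing goal")
    if not s.has_constraints:
        triggers.append("no constraints")
    if s.unrelated_tasks > 1:
        triggers.append("overloaded request")
    if not s.has_explicit_question:
        triggers.append("dump-and-hope")
    if s.repeated_failures >= 2:
        triggers.append("repeated failure")
    if not s.has_success_criteria:
        triggers.append("undefined success")
    return triggers
```

Coaching fires when the returned list is non-empty; an empty list means the prompt passes through untouched.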

When to Stay Silent

  • Conversation history already fills in the ambiguity
  • The user gave a clear, specific instruction
  • Slash commands and memorization prompts (intent is already clear)
  • The user has bypassed coaching before on similar prompts

How to Coach

In-Session (Real-Time)

When a prompt triggers coaching:

  1. Research first, ask second. Check the codebase, conversation history, and memory files before formulating questions. Questions grounded in context ("Are you targeting the UserProfile component or the API layer?") beat generic ones ("Can you be more specific?").

  2. Ask 1-3 targeted questions. Never more than 3. Use the AskUserQuestion tool when available. Frame as quick clarifications, not interrogations.

  3. Show the rewrite. After getting answers, show the improved prompt alongside the original. Concrete demonstration transfers faster than abstract guidance.

Example:

Your prompt: "Fix the login bug"

Improved: "Fix the login bug where users get a 401 after OAuth redirect.
The issue is in auth/callback.ts — the session token isn't persisted
before the redirect completes. Expected: user lands on /dashboard
after Google OAuth. Actual: user sees 'Unauthorized' and loops back
to /login."

Retrospective (Session Review)

At session breakpoints, scan the conversation for prompting patterns:

Patterns to detect:

| Pattern | Signal | Coaching |
|---|---|---|
| Serial refinement | User needed 3+ follow-ups to get the right output | "Try frontloading constraints: language, framework, file scope, success criteria" |
| Context drip | User provided critical context only after the first attempt failed | "Include the 'why' upfront — it changes the approach" |
| Scope creep | Started with one task, gradually expanded to five | "Split compound requests into individual prompts" |
| Missing examples | User described output format in words when an example would be faster | "Paste an example of what good output looks like" |
| Repeated corrections | Same type of correction across multiple prompts | Surface the pattern, suggest adding it to CLAUDE.md |
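A retrospective scan like the one above could be sketched as follows. The shape of `prompt_log` and its thresholds are assumptions for illustration only; an actual agent would derive these signals from the conversation itself.

```python
from collections import Counter

def detect_patterns(prompt_log: list[dict]) -> list[str]:
    """Scan a session's per-prompt annotations for coaching patterns.

    Each entry is a hypothetical record:
      followups: int          - follow-ups needed to get the right output
      late_context: bool      - critical context arrived only after a failure
      tasks_added: int        - tasks added after the initial request
      correction_type: str | None - category of correction, if any
    """
    findings = []
    if any(p["followups"] >= 3 for p in prompt_log):
        findings.append("serial refinement")
    if any(p["late_context"] for p in prompt_log):
        findings.append("context drip")
    if any(p["tasks_added"] >= 4 for p in prompt_log):
        findings.append("scope creep")
    # Repeated corrections: the same correction category seen 2+ times
    corrections = Counter(p["correction_type"] for p in prompt_log
                          if p["correction_type"])
    if any(count >= 2 for count in corrections.values()):
        findings.append("repeated corrections")
    return findings
```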

Retrospective output format:

## Prompt Patterns — [date]

This session had 12 prompts. 3 patterns worth noting:

1. **Serial refinement on API design** (prompts 3-6)
   You refined 4 times before landing on the right interface.
   Next time, try: "Design a REST endpoint for X. Must support
   pagination, return JSON, follow existing patterns in routes/."

2. **Missing test criteria** (prompts 8, 11)
   Both test-related prompts needed follow-up on what to assert.
   Next time, include: expected inputs, expected outputs, edge cases.

Prompts 1-2, 7, 9-10, 12 were clear and specific. No notes.

Coaching Principles

  1. Most prompts pass through unchanged. Coaching on every prompt trains the user to ignore you.
  2. Research before asking. Generic questions ("What do you mean?") are lazy. Grounded questions ("Do you want this in the existing UserService or a new module?") are useful.
  3. Show the improved version. A before/after comparison teaches more than explaining what's wrong.
  4. Cap questions at 3. More than 3 questions means you don't understand the user's context well enough.
  5. Bypass must exist. If the user prefixes with ! or says "just do it," skip coaching entirely. Respect the override.
  6. Surface patterns, not individual mistakes. One vague prompt is fine. Three vague prompts in the same category is a pattern worth naming.
  7. Praise specificity when you see it. Reinforce good prompts briefly: "Clear prompt — working on it." One line, not a speech.

Prompt Quality Dimensions

Use these to evaluate (internally, not shown to user unless asked):

| Dimension | What It Measures |
|---|---|
| Clarity | Can the prompt mean only one thing? |
| Specificity | Are files, functions, or components named? |
| Context | Is the environment, framework, or codebase referenced? |
| Success criteria | Is the expected outcome defined? |
| Scope | Is this one task, not five? |

Score each 1-5 internally. Below 2.5 average = nudge. Above 3.5 = pass through silently.
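The threshold rule above can be sketched in a few lines. The dimension keys are taken from the table; note the source leaves the 2.5-3.5 band unspecified, so it is labeled a judgment call here.

```python
DIMENSIONS = ("clarity", "specificity", "context", "success_criteria", "scope")

def quality_decision(scores: dict[str, float]) -> str:
    """Average the five 1-5 dimension scores and apply the nudge thresholds.

    Below 2.5 average -> nudge; above 3.5 -> pass through silently.
    The band in between is left to the coach's judgment (unspecified in the skill).
    """
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    if avg < 2.5:
        return "nudge"
    if avg > 3.5:
        return "pass"
    return "judgment call"
```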

What Good Prompts Look Like

Provide these as examples when coaching:

Weak: "Add authentication"
Strong: "Add JWT authentication to the Express API. Use the existing User model in models/user.ts. Protect all /api/v2/ routes. Store refresh tokens in Redis (already configured in lib/redis.ts). Return 401 with { error: 'unauthorized' } for invalid tokens."

Weak: "Make the tests pass"
Strong: "Fix the failing test in tests/checkout.test.ts:47. The CartTotal calculation returns 99.99 but expects 100.00. Likely a floating-point rounding issue in calculateTotal() at lib/cart.ts:23."

Weak: "Refactor this"
Strong: "Extract the email-sending logic from OrderController.create() into a separate EmailService class. Keep the same interface. Add error handling so a failed email doesn't roll back the order. Write a unit test for the new service."

Source

git clone https://github.com/aroyburman-codes/compound-product-management

Skill file: skills/prompt-coach/SKILL.md

Overview

Prompt Coach helps users refine vague or overloaded prompts into clear, actionable guidance. It intervenes only when it matters, showing a rewritten prompt and a concrete rationale. It also runs retrospective checks at session breaks to surface prompting patterns and improve future prompts.

How This Skill Works

When a prompt triggers coaching (missing goal, no constraints, overloaded request, dump-and-hope content, repeated failure, or undefined success), the coach first researches context by reviewing the codebase, conversation history, and memory files before asking questions. It then poses up to three targeted clarifications, framed as quick questions rather than an interrogation. After receiving answers, it presents the improved prompt next to the original to demonstrate the gains.

When to Use It

  • Missing goal: prompt states what to do but not why or how success is measured.
  • No constraints: lacks language, framework, style, or boundaries.
  • Overloaded request: bundles multiple unrelated tasks into one prompt.
  • Dump-and-hope: code or content pasted with no explicit question.
  • Repeated failure: the same request needed corrections 2+ times this session.

Quick Start

  1. Detect triggers and assess context for potential coaching needs.
  2. Ask up to 3 targeted clarifying questions to surface goals, constraints, and success criteria.
  3. Present the improved prompt alongside the original and explain the concrete benefits.

Best Practices

  • Research before asking: check history and context before coaching.
  • Ask up to 3 targeted clarifying questions.
  • Show the rewritten prompt alongside the original.
  • Frontload constraints: language, framework, scope, and success criteria.
  • Use retrospective prompts at session breaks to surface patterns for future prompts.

Example Use Cases

  • Example 1 — Prompt with missing goal: Original: 'Fix the login bug' → Improved: 'Fix the login bug where users get a 401 after OAuth redirect. The issue is in auth/callback.ts — the session token isn’t persisted before the redirect completes. Expected: user lands on /dashboard after Google OAuth. Actual: user sees ‘Unauthorized’ and loops back to /login.'
  • Example 2 — Prompt with no constraints: Original: 'Improve this API'. Improved: 'Improve the /api/login endpoint to reduce latency by 40% with <=2% memory impact; return JSON with fields {success, token, expiration} and document any breaking changes.'
  • Example 3 — Overloaded request: Original: 'Build login flow, OAuth, UI, and tests'. Improved: '1) Implement OAuth flow; 2) Create login UI; 3) Write end-to-end tests for login; 4) Document API surface.'
  • Example 4 — Repeated corrections: Original: 'Make this better' with no targets. Coach frontloads: add explicit metrics like target latency and error rate to align expectations.
  • Example 5 — Context drip: Initial prompt lacks key context; after failure, user provides why and success criteria, clarifying approach (target component, API layer, etc.).
