refine-prompt

npx machina-cli add skill PaulRBerg/agent-skills/refine-prompt --openclaw
Files (1)
SKILL.md
1.5 KB

Context

  • Working directory: !pwd
  • Request: $ARGUMENTS

Task

You are an expert prompt engineer. Create an optimized prompt based on $ARGUMENTS.

1. Craft the Prompt

Apply relevant techniques:

  • Few-shot examples (when helpful)
  • Chain-of-thought reasoning
  • Role/perspective setting
  • Output format specification
  • Constraints and boundaries
  • Self-consistency checks

Structure with:

  • Clear role definition (if applicable)
  • Explicit task description
  • Expected output format
  • Constraints and guidelines
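
Applied to a concrete request, the structure above might look like the following sketch. It is purely illustrative: the topic, wording, and parenthetical labels are hypothetical, not part of the skill.

```markdown
You are a senior technical writer. (role definition)

Summarize the article below in exactly three bullet points. (task)

Output format: a Markdown list, one sentence per bullet,
with no introduction or conclusion. (output format)

Constraints: do not quote the article verbatim; keep each
bullet under 25 words. (constraints and boundaries)
```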

2. Display the Result

Show the complete prompt in a code block, ready to copy:

[Complete prompt text]

Briefly note which techniques you applied and why.

3. Save to .ai/PROMPT.md

First ensure the directory exists: mkdir -p .ai

If .ai/PROMPT.md exists:

Read current contents and append:

---

## [Brief title from $ARGUMENTS]

[The optimized prompt]

If .ai/PROMPT.md does not exist:

Create with:

# Optimized Prompts

## [Brief title from $ARGUMENTS]

[The optimized prompt]

Confirm: "Saved to .ai/PROMPT.md"
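
The save step above can be sketched as a small shell script. This is only an illustration of the create-or-append logic, assuming it runs from the project root; the title and body variables stand in for text the assistant would actually produce.

```shell
# Placeholders for content the assistant generates.
PROMPT_TITLE="Example title"
PROMPT_BODY="[The optimized prompt]"

# Ensure the directory exists.
mkdir -p .ai

if [ -f .ai/PROMPT.md ]; then
  # File exists: append a separator and the new entry.
  printf '\n---\n\n## %s\n\n%s\n' "$PROMPT_TITLE" "$PROMPT_BODY" >> .ai/PROMPT.md
else
  # File missing: create it with the top-level heading first.
  printf '# Optimized Prompts\n\n## %s\n\n%s\n' "$PROMPT_TITLE" "$PROMPT_BODY" > .ai/PROMPT.md
fi

echo "Saved to .ai/PROMPT.md"
```

Appending rather than overwriting keeps earlier entries intact, so `.ai/PROMPT.md` grows into a reusable prompt library over time.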

Source

git clone https://github.com/PaulRBerg/agent-skills

The skill file lives at skills/refine-prompt/SKILL.md, viewable on GitHub at https://github.com/PaulRBerg/agent-skills/blob/main/skills/refine-prompt/SKILL.md.

Overview

This skill acts as an expert prompt engineer, refining user prompts for LLMs to improve clarity, constraints, and results. It applies techniques like few-shot examples, chain-of-thought reasoning, role/perspective setting, and explicit output formats to produce a ready-to-use prompt and an updated PROMPT.md entry.

How This Skill Works

The skill takes $ARGUMENTS and crafts an optimized prompt from it. It structures the prompt with a clear role, explicit task, defined output format, and constraints, then displays the complete prompt in a code block and notes the techniques used. Finally, it saves the result to .ai/PROMPT.md, creating the file if it is missing or appending to it if it already exists.

When to Use It

  • You want to optimize or rewrite a prompt for an LLM to improve clarity and results
  • You need explicit output formats, constraints, or role-based prompts
  • You want to add few-shot examples or chain-of-thought guidance to improve accuracy
  • You want to append the optimized prompt to .ai/PROMPT.md for reuse

Quick Start

  1. Provide the prompt you want refined as $ARGUMENTS
  2. The assistant crafts an optimized prompt with a clear role, task, output format, constraints, and self-checks, then displays it in a code block
  3. The assistant saves or appends the result to .ai/PROMPT.md per the SKILL rules

Best Practices

  • Define a clear role and task for the model
  • Specify explicit output format and required fields
  • Embed constraints and boundaries, plus validation steps
  • Include few-shot examples when helpful
  • Document which techniques were used and why

Example Use Cases

  • Rewrite a user prompt to produce a JSON summary with fields: title, date, author, and key_points
  • Create a prompt that instructs the model to compare two products and output results as YAML with pros/cons
  • Ask the model to extract structured data from customer messages, e.g., email, order_id, date
  • Provide a role-based prompt: act as a senior developer debugging Python code and output a concise plan
  • Request an outline for a blog post on prompt engineering with clear H2 sections and a final CTA
