
claude-orator

npx machina-cli add skill Vvkmnn/claude-emporium/claude-orator --openclaw
Files (1)
SKILL.md
2.9 KB

Orator Plugin

Prompt optimization. Scores prompts across 7 dimensions and restructures them using 8 Anthropic techniques. Deterministic — no LLM calls, no network, in-memory only.

Hooks

| Hook | When | Action |
| --- | --- | --- |
| PreToolUse(Task) | Subagent prompt lacks structure | Suggests orator_optimize before dispatching |

Token cost: 0 for well-structured prompts (XML tags, markdown headers, action verbs); roughly 50-80 tokens for vague prompts. The hook never blocks; it only suggests.
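The structure check that makes well-formed prompts free can be plain pattern matching. A minimal sketch in Python, with hypothetical marker rules standing in for the plugin's actual (unpublished) heuristics:

```python
import re

# Hypothetical structural markers; illustrative only, not the plugin's real rules.
STRUCTURE_MARKERS = [
    re.compile(r"<\w+>.*</\w+>", re.DOTALL),    # XML tags
    re.compile(r"^#{1,6}\s", re.MULTILINE),     # markdown headers
    re.compile(r"^(write|list|summarize|refactor|explain)\b", re.IGNORECASE),  # action verbs
]

def looks_structured(prompt: str) -> bool:
    """True when the prompt contains at least one structural marker."""
    return any(marker.search(prompt) for marker in STRUCTURE_MARKERS)

print(looks_structured("<task>Refactor the parser</task>"))  # True
print(looks_structured("make it better"))                    # False
```

A prompt that passes a check like this would incur no extra tokens; only prompts that fail it would trigger the suggestion.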

Commands

| Command | Description |
| --- | --- |
| /reprompt-orator <prompt> | Optimize a prompt using Anthropic best practices |

Workflows

Optimize (standalone)

  1. /reprompt-orator "your prompt here" or call orator_optimize(prompt: "...")
  2. Review score breakdown (7 dimensions, 1-10 each)
  3. Use the restructured prompt with applied techniques

Optimize (with siblings)

  1. If Historian is active: search_conversations("prompt optimization") to find past well-scored prompts
  2. orator_optimize(prompt: "...") to score and restructure
  3. If Praetorian is active: save_context(type: "decisions", ...) to preserve the optimized prompt's rationale
  4. If Gladiator is active: observe(summary: "xml-tags improved code prompts by +3.2") to track what works

Batch review

  1. Review subagent prompts across a session
  2. orator_optimize on each under-specified prompt
  3. If Vigil is active: vigil_save("before-rewrite") before applying changes
  4. Apply restructured prompts

Sibling Synergy

| Sibling | Value | How |
| --- | --- | --- |
| Historian | Past well-scored prompts as examples | search_conversations("prompt patterns") finds effective prompts from history |
| Praetorian | Preserve optimization rationale | Compact optimized prompts and their scores for future reference |
| Gladiator | Track what techniques work best | observe() records which techniques improve scores most |
| Oracle | Find prompt engineering tools | search("prompt patterns") discovers relevant community tools |
| Vigil | Checkpoint before batch rewrites | vigil_save() before applying optimized prompts across files |

MCP Tools Reference

| Tool | Purpose |
| --- | --- |
| orator_optimize | Score prompt across 7 dimensions, apply techniques, return restructured version |

Scoring Dimensions

Clarity · Specificity · Structure · Context · Examples · Constraints · Tone (each 1-10)

Techniques

System prompts · XML tags · Chain of thought · Few-shot · Prefill · Long context · Extended thinking · Tool use
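As an illustration of the XML-tags technique from the list above, here is a hedged sketch; the helper name and tag set are hypothetical, not the plugin's actual output format:

```python
def apply_xml_tags(task: str, context: str = "", constraints: str = "") -> str:
    """Wrap the parts of a prompt in XML tags (one of the techniques above)."""
    parts = [f"<task>{task}</task>"]
    if context:
        parts.append(f"<context>{context}</context>")
    if constraints:
        parts.append(f"<constraints>{constraints}</constraints>")
    return "\n".join(parts)

print(apply_xml_tags("Summarize the incident report",
                     constraints="Under 200 words; plain language"))
```

Tagging the task, context, and constraints separately is what lifts the Structure dimension of an otherwise free-form prompt.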

Storage

In-memory only. Zero disk storage. No databases, no external services.

Requires

claude mcp add orator -- npx claude-orator-mcp

Source

git clone https://github.com/Vvkmnn/claude-emporium

The skill file lives at plugins/claude-orator/skills/claude-orator/SKILL.md.

Overview

Claude-orator is a rhetoric coach for prompts. It deterministically scores prompts across seven dimensions and restructures them using Anthropic best practices, without external API calls. The in-memory workflow ensures consistent results with no network dependency.

How This Skill Works

Orator scores a given prompt across seven dimensions (Clarity, Specificity, Structure, Context, Examples, Constraints, Tone) on a 1-10 scale, then applies eight Anthropic techniques to produce a restructured prompt. All processing happens in memory, with zero disk storage and no LLM or network calls; the optimized prompt is returned via orator_optimize or /reprompt-orator.
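A toy sketch of what a deterministic per-dimension scorer could look like; every heuristic below is an invented placeholder, not the skill's actual rules:

```python
DIMENSIONS = ("clarity", "specificity", "structure", "context",
              "examples", "constraints", "tone")

def score_prompt(prompt: str) -> dict:
    """Score a prompt 1-10 on each dimension using stand-in heuristics."""
    scores = dict.fromkeys(DIMENSIONS, 5)      # neutral baseline
    if "<" in prompt and "</" in prompt:
        scores["structure"] = 9                # XML tags present
    if any(ch.isdigit() for ch in prompt):
        scores["specificity"] += 2             # concrete numbers
    low = prompt.lower()
    if "e.g." in low or "example" in low:
        scores["examples"] += 3                # inline examples
    return {dim: min(score, 10) for dim, score in scores.items()}

print(score_prompt("<task>Return the top 3 errors, e.g. timeouts</task>"))
```

Because the rules are fixed string checks rather than model calls, the same prompt always yields the same breakdown, which is what makes the skill deterministic and network-free.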

When to Use It

  • You have a vague or under-structured prompt and want a guided rewrite before tool use.
  • You want to compare past prompts to reuse effective patterns using Historian.
  • You need to preserve the optimized prompt rationale for future reference (Praetorian).
  • You want to track which techniques improve scores over time (Gladiator).
  • You are batch rewriting multiple under-specified prompts in a session.

Quick Start

  1. /reprompt-orator "your prompt here" or call orator_optimize(prompt: "...")
  2. Review the 7-dimension score breakdown and the restructured prompt
  3. Use the optimized prompt with applied techniques in your task

Best Practices

  • Draft prompts with clear intent and structure them with XML tags and action verbs to minimize token cost.
  • Invoke /reprompt-orator or orator_optimize to run the analysis.
  • Review the 7 dimension scores (1-10 each) and target the weakest areas.
  • Use the restructured prompt and applied techniques in subsequent prompts.
  • Leverage Historian, Praetorian, and Gladiator synergies to improve and track results.

Example Use Cases

  • A vague product brief is transformed into a specific, actionable instruction with clear constraints.
  • A code-generation prompt containing XML tags is enhanced for readability and accuracy.
  • A multi-step QA prompt is restructured to guide few-shot reasoning effectively.
  • A batch of under-specified prompts is rewritten consistently in a single session.
  • Past well-scored prompts are used as templates to seed new optimizations via Historian.
