claude-orator
npx machina-cli add skill Vvkmnn/claude-emporium/claude-orator --openclaw

Orator Plugin
Prompt optimization. Scores prompts across 7 dimensions and restructures them using 8 Anthropic techniques. Deterministic — no LLM calls, no network, in-memory only.
Hooks
| Hook | When | Action |
|---|---|---|
| PreToolUse(Task) | Subagent prompt lacks structure | Suggests orator_optimize before dispatching |
Token cost: 0 on well-structured prompts (XML tags, markdown headers, action verbs). ~50-80 on vague prompts. Never blocks — suggestion only.
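The zero-cost path implies a cheap deterministic structure check before any suggestion is emitted. The sketch below is illustrative only — the plugin's actual heuristics are not documented here; the XML-tag, markdown-header, and action-verb checks are assumptions drawn from the description above.

```python
import re

# Hypothetical action-verb list; the real plugin's vocabulary is unknown.
ACTION_VERBS = {"analyze", "write", "summarize", "refactor", "list", "explain"}

def looks_structured(prompt: str) -> bool:
    """Cheap deterministic check: XML tags, markdown headers, or a leading action verb."""
    has_xml = bool(re.search(r"<\w+>.*</\w+>", prompt, re.DOTALL))
    has_header = bool(re.search(r"^#{1,6}\s", prompt, re.MULTILINE))
    first_word = prompt.strip().split()[0].lower() if prompt.strip() else ""
    return has_xml or has_header or first_word in ACTION_VERBS

# Structured prompts pass silently (0 tokens); vague ones trigger a suggestion.
print(looks_structured("<task>Summarize the log</task>"))  # True
print(looks_structured("maybe do something with this?"))   # False
```

A check like this runs in microseconds, which is why well-structured prompts incur no overhead at all.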
Commands
| Command | Description |
|---|---|
| /reprompt-orator <prompt> | Optimize a prompt using Anthropic best practices |
Workflows
Optimize (standalone)
- /reprompt-orator "your prompt here" or call orator_optimize(prompt: "...")
- Review the score breakdown (7 dimensions, 1-10 each)
- Use the restructured prompt with applied techniques
Optimize (with siblings)
- If historian active: search_conversations("prompt optimization") to find past well-scored prompts
- orator_optimize(prompt: "...") — score and restructure
- If praetorian active: save_context(type: "decisions", ...) to preserve the optimized prompt rationale
- If gladiator active: observe(summary: "xml-tags improved code prompts by +3.2") to track what works
Batch review
- Review subagent prompts across a session
- orator_optimize on each under-specified prompt
- If vigil active: vigil_save("before-rewrite") before applying changes
- Apply restructured prompts
Sibling Synergy
| Sibling | Value | How |
|---|---|---|
| Historian | Past well-scored prompts as examples | search_conversations("prompt patterns") finds effective prompts from history |
| Praetorian | Preserve optimization rationale | Compact optimized prompts and their scores for future reference |
| Gladiator | Track what techniques work best | observe() records which techniques improve scores most |
| Oracle | Find prompt engineering tools | search("prompt patterns") discovers relevant community tools |
| Vigil | Checkpoint before batch rewrites | vigil_save() before applying optimized prompts across files |
MCP Tools Reference
| Tool | Purpose |
|---|---|
| orator_optimize | Score prompt across 7 dimensions, apply techniques, return restructured version |
Scoring Dimensions
Clarity · Specificity · Structure · Context · Examples · Constraints · Tone (each 1-10)
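A seven-dimension breakdown like the one above could be represented as a simple record; this is a hypothetical sketch — the plugin's real data shape and scoring rubric are not specified in this document.

```python
from dataclasses import dataclass, asdict

DIMENSIONS = ("clarity", "specificity", "structure", "context",
              "examples", "constraints", "tone")

@dataclass
class PromptScore:
    """Seven dimensions, each scored 1-10."""
    clarity: int
    specificity: int
    structure: int
    context: int
    examples: int
    constraints: int
    tone: int

    def weakest(self) -> str:
        """Name the lowest-scoring dimension — the natural target for a rewrite."""
        scores = asdict(self)
        return min(DIMENSIONS, key=lambda d: scores[d])

score = PromptScore(clarity=7, specificity=4, structure=8,
                    context=6, examples=3, constraints=5, tone=9)
print(score.weakest())  # examples
```

Targeting the weakest dimension first is the same advice given under Best Practices below.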
Techniques
System prompts · XML tags · Chain of thought · Few-shot · Prefill · Long context · Extended thinking · Tool use
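To illustrate one of these techniques (XML tags), here is a hypothetical restructuring helper. The function name, parameters, and tag choices are illustrative assumptions, not the plugin's actual output format.

```python
def apply_xml_tags(task: str, context: str = "", constraints: str = "") -> str:
    """Wrap prompt sections in XML tags — one of the Anthropic structuring techniques."""
    parts = [f"<task>{task}</task>"]
    if context:
        parts.append(f"<context>{context}</context>")
    if constraints:
        parts.append(f"<constraints>{constraints}</constraints>")
    return "\n".join(parts)

print(apply_xml_tags("Summarize the incident report",
                     context="Production outage, example date only",
                     constraints="Under 100 words; no speculation"))
```

Tagged sections let the model (and the scorer) distinguish the task from its context and constraints, which is why the hook treats XML tags as a signal of a well-structured prompt.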
Storage
In-memory only. Zero disk storage. No databases, no external services.
Requires
claude mcp add orator -- npx claude-orator-mcp
Source
git clone https://github.com/Vvkmnn/claude-emporium

Overview
Claude-orator is a rhetoric coach for prompts. It deterministically scores prompts across seven dimensions and restructures them using Anthropic best practices, without external API calls. The in-memory workflow ensures consistent results with no network dependency.
How This Skill Works
Orator analyzes a given prompt by scoring it across seven dimensions (CLARITY, SPECIFICITY, STRUCTURE, CONTEXT, EXAMPLES, CONSTRAINTS, TONE) on a 1-10 scale, then applies eight Anthropic techniques to produce a restructured prompt. All processing is in-memory with zero disk storage and no LLM or network calls, returning the optimized prompt via orator_optimize or /reprompt-orator.
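The flow described above (score, apply techniques, return the restructured prompt) can be sketched as a single deterministic function. Everything below is a toy illustration under stated assumptions — the real rubric, technique selection, and return shape are not documented here, and only a subset of the seven dimensions is shown for brevity.

```python
def optimize(prompt: str) -> dict:
    """Toy deterministic optimizer: score a prompt, then restructure if unstructured."""
    words = prompt.split()
    scores = {
        "clarity": 8 if len(words) >= 5 else 3,          # enough words to be unambiguous?
        "structure": 9 if "<" in prompt else 2,          # any XML-style tagging present?
        "constraints": 7 if "must" in prompt.lower() else 3,
    }
    # Apply the XML-tags technique only when no tagging is already present.
    restructured = prompt if "<" in prompt else f"<task>{prompt}</task>"
    return {"scores": scores, "restructured": restructured}

result = optimize("Summarize the quarterly report in two paragraphs")
print(result["restructured"])  # <task>Summarize the quarterly report in two paragraphs</task>
```

Because nothing here calls a model or the network, the same prompt always yields the same scores and the same rewrite — the property the skill calls deterministic.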
When to Use It
- You have a vague or under-structured prompt and want a guided rewrite before tool use.
- You want to compare past prompts to reuse effective patterns using Historian.
- You need to preserve the optimized prompt rationale for future reference (Praetorian).
- You want to track which techniques improve scores over time (Gladiator).
- You are batch rewriting multiple under-specified prompts in a session.
Quick Start
- Step 1: /reprompt-orator "your prompt here" or call orator_optimize(prompt: "...")
- Step 2: Review the 7-dimension score breakdown and the restructured prompt
- Step 3: Use the optimized prompt with applied techniques in your task
Best Practices
- Draft prompts with clear intent and structure them with XML tags and action verbs to minimize token cost.
- Invoke /reprompt-orator or orator_optimize to run the analysis.
- Review the 7 dimension scores (1-10 each) and target the weakest areas.
- Use the restructured prompt and applied techniques in subsequent prompts.
- Leverage Historian, Praetorian, Gladiator synergies to improve and track results.
Example Use Cases
- A vague product brief is transformed into a specific, actionable instruction with clear constraints.
- A code-generation prompt containing XML tags is enhanced for readability and accuracy.
- A multi-step QA prompt is restructured to guide few-shot reasoning effectively.
- A batch of under-specified prompts is rewritten consistently in a single session.
- Past well-scored prompts are used as templates to seed new optimizations via Historian.