
text-optimizer

`npx machina-cli add skill kochetkov-ma/claude-brewcode/text-optimizer --openclaw`
Files (1)
SKILL.md
6.7 KB

Text Optimizer

Reduces token count in prompts, docs, and agent instructions by 20–40% without losing meaning. Applies 41 research-backed rules across 6 categories: Claude behavior, token efficiency, structure, reference integrity, perception, LLM comprehension.

Benefits: cheaper API calls · faster model responses · clearer LLM instructions · fewer hallucinations

Examples:

/text-optimize prompt.md          # single file, medium mode (default)
/text-optimize -d agents/         # deep mode — all .md files in directory

Skill text is written for LLM consumption and optimized for token efficiency.


Text & File Optimizer

Step 0: Load Rules

REQUIRED: Read references/rules-review.md before ANY optimization. If file not found -> ERROR + STOP. Do not proceed without rules reference.

Modes

Parse $ARGUMENTS: -l/--light | -d/--deep | no flag -> medium (default).

| Mode | Flag | Scope |
|------|------|-------|
| Light | `-l`, `--light` | Text cleanup only — structure, lists, flow untouched |
| Medium | (default) | Balanced restructuring — all standard transformations |
| Deep | `-d`, `--deep` | Max density — rephrase, merge, compress aggressively |
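The flag-to-mode selection above can be sketched as follows; `parse_mode` and its handling of `$ARGUMENTS` are an illustrative assumption, not the skill's actual implementation:

```python
def parse_mode(arguments: list[str]) -> str:
    """Map $ARGUMENTS flags to a mode; no flag means the medium default."""
    if "-l" in arguments or "--light" in arguments:
        return "light"
    if "-d" in arguments or "--deep" in arguments:
        return "deep"
    return "medium"

parse_mode(["-d", "agents/"])  # → "deep"
parse_mode(["prompt.md"])      # → "medium"
```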

Rule ID Quick Reference

| Category | Rule IDs | Scope |
|----------|----------|-------|
| Claude behavior | C.1-C.6 | Literal following, avoid "think", positive framing, match style, descriptive instructions, overengineering |
| Token efficiency | T.1-T.8 | Tables, bullets, one-liners, inline code, abbreviations, filler, comma lists, arrows |
| Structure | S.1-S.8 | XML tags, imperative, single source, context/motivation, blockquotes, progressive disclosure, consistent terminology, ref depth |
| Reference integrity | R.1-R.3 | Verify file paths, check URLs, linearize circular refs |
| Perception | P.1-P.6 | Examples near rules, hierarchy, bold keywords, standard symbols, instruction order, default over options |

ID-to-Rule Mapping

| ID | Rule | ID | Rule |
|----|------|----|------|
| C.1 | Literal instruction following | C.2 | Avoid "think" word |
| C.3 | Positive framing (do Y, not don't X) | C.4 | Match prompt style to output |
| C.5 | Descriptive over emphatic instructions | C.6 | Overengineering prevention |
| T.1 | Tables over prose (multi-column) | T.2 | Bullets over numbered (~5-10%) |
| T.3 | One-liners for rules | T.4 | Inline code over blocks |
| T.5 | Standard abbreviations (tables only) | T.6 | Remove filler words |
| T.7 | Comma-separated inline lists | T.8 | Arrows for flow notation |
| S.1 | XML tags for sections | S.2 | Imperative form |
| S.3 | Single source of truth | S.4 | Add context/motivation |
| S.5 | Blockquotes for critical | S.6 | Progressive disclosure |
| S.7 | Consistent terminology | S.8 | One-level reference depth |
| R.1 | Verify file paths | R.2 | Check URLs |
| R.3 | Linearize circular refs | P.1 | Examples near rules |
| P.2 | Hierarchy via headers (max 3-4) | P.3 | Bold for keywords (max 2-3/100 lines) |
| P.4 | Standard symbols (→ + / ✅❌⚠️) | P.5 | Instruction order (anchoring) |
| P.6 | Default over options | | |
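For illustration, a hypothetical before/after applying T.6 (remove filler words) and T.8 (arrows for flow notation); the token counts are rough estimates, not tokenizer output:

```markdown
<!-- Before (~25 tokens) -->
In order to process the file, you should first read it, and then
you can proceed to apply each of the rules one at a time.

<!-- After (~8 tokens) -->
Process file: read → apply rules sequentially.
```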

Mode-to-Rules Mapping

| Mode | Applies | Notes |
|------|---------|-------|
| Light | C.1-C.6, T.6, R.1-R.3, P.1-P.4 | Text cleanup only — no restructuring |
| Medium | All rules (C + T + S + R + P) | Balanced transformations |
| Deep | All rules + aggressive rephrasing | Merge sections, max compression |

Usage

| Input | Action |
|-------|--------|
| No args | Prompt user for file or folder path |
| Single path | Process file directly |
| `path1, path2` | Process files sequentially |
| `-l file.md` | Light mode — text cleanup only |
| `-d file.md` | Deep mode — max compression |
| `folder/` | All .md files in directory |

File Processing
Execution Flow

  1. Read references/rules-review.md — load all optimization rules
  2. Read target file(s)
  3. Analyze: identify type (prompt, docs, agent, skill), note critical info and cross-references
  4. Apply rules by mode (see Mode-to-Rules Mapping)
  5. Edit file with optimized content
  6. Generate optimization report
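The flow above can be sketched roughly like this; `apply_rules` implements only T.6 as a stand-in for the full rule set, and all names and the filler list are illustrative assumptions:

```python
import re
from pathlib import Path

# T.6 stand-in: a small, assumed list of filler words.
FILLER = re.compile(r"\b(in order to|basically|simply|really|very)\b\s*", re.I)

def apply_rules(text: str, mode: str) -> str:
    # Filler removal (T.6) runs in every mode; other rules are elided here.
    return FILLER.sub("", text)

def optimize(target: Path, mode: str = "medium") -> dict:
    rules = Path("references/rules-review.md")
    if not rules.exists():                                  # Step 0: mandatory
        raise FileNotFoundError("rules-review.md not found — stop")
    files = [target] if target.is_file() else sorted(target.glob("*.md"))
    report = {}
    for f in files:                                         # Steps 2-6
        before = f.read_text()
        after = apply_rules(before, mode)
        f.write_text(after)
        report[f.name] = (len(before), len(after))
    return report
```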

Quality Checklist

Before

  • Read entire text
  • Identify type (prompt, docs, agent, skill)
  • Note critical info and cross-references

During — Apply by Mode

| Check | Light | Med | Deep |
|-------|-------|-----|------|
| C.1-C.6 (Claude behavior) | Yes | Yes | Yes |
| T.6 (filler removal) | Yes | Yes | Yes |
| T.1-T.5, T.7-T.8 (token compression) | - | Yes | Yes |
| S.1-S.8 (structure/clarity) | - | Yes | Yes |
| R.1-R.3 (reference integrity) | Yes | Yes | Yes |
| P.1-P.4 (LLM perception) | Yes | Yes | Yes |
| Aggressive rephrasing | - | - | Yes |
| No information loss | Yes | Yes | Yes |

After

  • All facts preserved
  • Logic consistent
  • References valid (R.1-R.3)
  • Tokens reduced

Output Format

## Optimization Report: [filename]

| Metric | Before | After | Change |
|--------|--------|-------|--------|
| Lines  | X      | Y     | -Z%    |
| Tokens | ~X     | ~Y    | -Z%    |

### Rules Applied
- [Rule IDs]: [Description of changes]

### Issues Found & Fixed
- [Issue]: [Resolution]

### Cross-Reference Verification
- [x] All file refs valid (R.1)
- [x] All URLs checked (R.2)
- [x] No circular refs (R.3)
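The Before/After metrics could be computed along these lines; the 4-characters-per-token ratio is a common rough heuristic, not a real tokenizer count:

```python
def report_row(before: str, after: str) -> dict:
    """Line and approximate token counts for the optimization report."""
    b_lines, a_lines = before.count("\n") + 1, after.count("\n") + 1
    b_tok, a_tok = max(len(before) // 4, 1), max(len(after) // 4, 1)
    pct = lambda b, a: round(100 * (b - a) / b)
    return {
        "Lines": (b_lines, a_lines, f"-{pct(b_lines, a_lines)}%"),
        "Tokens": (b_tok, a_tok, f"-{pct(b_tok, a_tok)}%"),
    }
```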

---

**Full development platform:** [brewcode](https://github.com/kochetkov-ma/claude-brewcode) — infinite focus tasks, 14 agents, quorum reviews, knowledge persistence.
Install: `claude plugin marketplace add https://github.com/kochetkov-ma/claude-brewcode && claude plugin install brewcode@claude-brewcode`

Anti-Patterns

| Avoid | Why |
|-------|-----|
| Remove all examples | Hurts generalization (P.1) |
| Over-abbreviate | Reduces readability (T.5 caveat) |
| Generic compression | Domain terms matter |
| Over-aggressive language | Opus 4.5 overtriggers (C.5) |
| Flatten hierarchy | Loses structure (P.2) |
| "Don't do X" framing | Less effective than "Do Y" (C.3) |
| Overengineer prompts | Opus 4.5 follows literally (C.6) |
| Overload single prompts | Divided attention, hallucinations (S.3) |
| Over-focus on wording | Structure > word choice (T.1) |
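As a hypothetical C.3 rewrite, negative framing turned into positive framing:

```markdown
<!-- Avoid (negative framing) -->
Don't use vague variable names.

<!-- Prefer (C.3 positive framing) -->
Use descriptive variable names, e.g. `retry_count` instead of `n`.
```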

Source

`git clone https://github.com/kochetkov-ma/claude-brewcode`
[View on GitHub](https://github.com/kochetkov-ma/claude-brewcode/blob/main/skills/text-optimizer/SKILL.md)

Overview

Text-optimizer reduces token counts in prompts, docs, and agent instructions by 20–40% without sacrificing meaning. It relies on 41 research-backed rules across six categories—Claude behavior, token efficiency, structure, reference integrity, perception, and LLM comprehension—to improve instruction quality and lower API costs.

How This Skill Works

The tool loads rules from references/rules-review.md and applies mode-based transformations (Light, Medium, Deep) that implement the 41 rules across six categories. It rewrites text for density and clarity, prioritizing structured outputs (tables, bullets, single-source truth, consistent terminology) to maximize token efficiency while preserving meaning.

When to Use It

  • Shrink prompt or document length by 20–40% without changing meaning.
  • Improve clarity and alignment before deploying or sharing agent instructions.
  • Compress verbose docs or knowledge bases for faster model consumption.
  • Reduce API costs by decreasing token counts in prompts and responses.
  • Standardize prompts with consistent terminology and structure across projects.

Quick Start

  1. Read references/rules-review.md and understand the 41 rules.
  2. Choose a mode (`-l` for Light, `-d` for Deep, or default Medium) and run the optimizer on your targets.
  3. Review the transformed files and verify meaning and instruction quality.

Best Practices

  • Always load and reference references/rules-review.md before optimizing.
  • Start with Light mode for cleanup; progress to Medium or Deep only if density is needed.
  • Prefer 1–2 sentence outputs and bullet lists; use tables for multi-column data.
  • Maintain consistent terminology and single source of truth (S.3/S.7).
  • Review results to ensure meaning is preserved and avoid reference loops.

Example Use Cases

  • Reduce a long Claude prompt from 600 tokens to around 360 tokens without changing intent.
  • Compress a verbose wiki-style documentation page into concise steps and bullets.
  • Rewrite an instruction set to remove 'think' phrasing and favor descriptive directives.
  • Convert a policy file with nested sections into a single-source, clear guide.
  • Refactor a library of prompts to use inline code and standard abbreviations.
