markdown-token-optimizer
npx machina-cli add skill microsoft/GitHub-Copilot-for-Azure/markdown-token-optimizer --openclaw
Markdown Token Optimizer
This skill analyzes markdown files and suggests optimizations to reduce token consumption while maintaining clarity.
When to Use
- Optimize markdown files for token efficiency
- Reduce SKILL.md file size or check for bloat
- Make documentation more concise for AI consumption
Workflow
- Count - Calculate tokens (~4 chars = 1 token), report totals
- Scan - Find patterns: emojis, verbosity, duplication, large blocks
- Suggest - Table with location, issue, fix, savings estimate
- Summary - Current/potential/savings with top recommendations
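The Count and Summary steps above can be sketched with the stated ~4-characters-per-token heuristic. This is a minimal illustration, not the skill's actual implementation; real tokenizer counts will differ.

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~4 chars = 1 token heuristic."""
    return len(text) // 4

# Summary step: report current tokens, a potential target, and the savings.
def summarize(current_text: str, optimized_text: str) -> dict:
    current = estimate_tokens(current_text)
    potential = estimate_tokens(optimized_text)
    return {"current": current, "potential": potential, "savings": current - potential}
```

For example, a 2,000-character file estimates to about 500 tokens, right at the SKILL.md target below.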
See ANTI-PATTERNS.md for detection patterns and OPTIMIZATION-PATTERNS.md for techniques.
Rules
- Suggest only (no auto-modification)
- Preserve clarity in all optimizations
- SKILL.md target: <500 tokens, references: <1000 tokens
References
- OPTIMIZATION-PATTERNS.md - Optimization techniques
- ANTI-PATTERNS.md - Token-wasting patterns
Source
https://github.com/microsoft/GitHub-Copilot-for-Azure/blob/main/.github/skills/markdown-token-optimizer/SKILL.md
Overview
Markdown Token Optimizer analyzes markdown files to identify token-wasting patterns and suggests concise edits. It helps reduce token usage while preserving clarity, benefiting AI ingestion and faster processing.
How This Skill Works
It estimates tokens with a simple heuristic (about 4 characters per token), then scans for patterns such as emojis, verbosity, duplication, and large blocks. Finally, it outputs a Suggest table listing location, issue, fix, and estimated savings, plus a Summary of potential gains.
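The scan step could look something like the sketch below. The emoji character ranges and the duplicate-line check are assumptions for illustration; the skill's own detection patterns live in ANTI-PATTERNS.md.

```python
import re
from collections import Counter

# Simplified emoji match covering common pictograph and symbol ranges.
EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def scan(text: str) -> list[tuple[str, str]]:
    """Return (location, issue) findings for emojis and verbatim duplicate lines."""
    findings = []
    lines = text.splitlines()
    for i, line in enumerate(lines, 1):
        if EMOJI.search(line):
            findings.append((f"line {i}", "emoji"))
    # Flag any non-empty line that appears more than once verbatim.
    counts = Counter(line for line in lines if line.strip())
    for line, n in counts.items():
        if n > 1:
            findings.append((repr(line), "duplicate line"))
    return findings
```

Each finding would then feed one row of the Suggest table.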
When to Use It
- Optimize markdown files for token efficiency
- Reduce SKILL.md file size or check for bloat
- Make documentation more concise for AI consumption
- Prepare docs for AI ingestion or integration
- Audit for token-wasting patterns and opportunities
Quick Start
- Step 1: Run Count to estimate total tokens
- Step 2: Run Scan to detect verbosity, duplication, and blocks
- Step 3: Review Suggest table and Summary, then apply chosen edits
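The Suggest table reviewed in Step 3 could be rendered as plain markdown. The column names follow the workflow description above; the row fields are assumed for illustration.

```python
def suggest_table(rows: list[dict]) -> str:
    """Render findings as a markdown Suggest table with savings estimates."""
    header = "| Location | Issue | Fix | Est. savings |\n|---|---|---|---|"
    body = "\n".join(
        f"| {r['location']} | {r['issue']} | {r['fix']} | ~{r['savings']} tokens |"
        for r in rows
    )
    return header + "\n" + body

print(suggest_table([
    {"location": "L12", "issue": "verbose phrasing", "fix": "shorten", "savings": 15},
]))
```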
Best Practices
- Only suggest changes, never auto-modify the source
- Preserve meaning and readability while trimming
- Keep SKILL.md under 500 tokens and reference files under 1000
- Prioritize high-impact savings in long blocks and repeated phrases
- Review emoji and formatting usage for token impact
Example Use Cases
- Trim a long README by removing redundant phrases
- Consolidate repeated boilerplate sentences
- Replace verbose bullet points with concise equivalents
- Reduce emoji usage if not essential to clarity
- Highlight top savings with a compact summary