skill-review-optimizer
Install: npx machina-cli add skill pantheon-org/tekhne/skill-review-optimizer --openclaw
Automate the process of iteratively improving skills using tessl skill review feedback until they achieve target quality scores.
Output Format
Displays baseline scores, suggestions, and score progression directly to stdout for immediate review and action.
Workflow
- Setup: Verify tessl is installed (auto-install if needed via npm or brew)
- Baseline: Run an initial tessl skill review to get starting scores
- Analyze: Parse review output to extract scores, warnings, and suggestions
- Improve: Apply suggested improvements to skill files based on feedback
- Validate: Re-run review to verify improvements and measure progress
- Iterate: Repeat steps 4-5 until target score reached or max iterations hit
- Summarize: Generate final report with all changes and score progression
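The analyze step above can be sketched in Python. Note that the score-line format used here is an assumption; tessl's actual review output may differ, so the regex would need adjusting against real output:

```python
import re

def parse_scores(review_output: str) -> dict:
    """Extract percentage scores from review output.

    Assumes lines like 'Description score: 85%'; this format is
    illustrative, not taken from the tessl CLI itself.
    """
    scores = {}
    for label in ("Description", "Content"):
        match = re.search(rf"{label} score:\s*(\d+)%", review_output)
        if match:
            scores[label.lower()] = int(match.group(1))
    return scores

sample = "Description score: 85%\nContent score: 72%"
print(parse_scores(sample))  # {'description': 85, 'content': 72}
```

Parsing into a plain dict like this makes the iterate step easy to drive: compare successive dicts to measure progress between runs.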
Prerequisites
Tessl CLI (auto-installs via npm or brew if missing)
Quick Start
Step 1: Run Baseline Review
Run scripts/optimize_skill.py from this skill directory:
python3 scripts/optimize_skill.py /path/to/skill [--max-iterations N]
Target criteria: No validation errors, Description score 100%, Content score ≥ 90%
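The target criteria reduce to a simple predicate; a minimal sketch, with parameter names chosen for illustration:

```python
def meets_target(validation_errors: int, description: int, content: int) -> bool:
    """True when the skill hits the stated targets:
    no validation errors, Description at 100%, Content at 90% or above."""
    return validation_errors == 0 and description == 100 and content >= 90

print(meets_target(0, 100, 92))  # True
print(meets_target(0, 100, 85))  # False: Content below 90%
```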
Step 2: Apply Improvements Based on Suggestions
The script identifies improvement opportunities but does not auto-apply edits. Review suggestions and make targeted changes:
Metadata fields: Add missing frontmatter entries
metadata:
version: "1.0.0"
category: "your-category"
Description improvements: Add concrete action verbs and trigger terms
description: Automate X by doing Y. Use when user needs Z. Performs A, B, and C.
Content actionability: Replace vague guidance with executable commands
# Instead of: "Run the build"
# Write: "npm run build"
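A quick way to find such vague guidance before a review run is a phrase scan. The phrase list below is hypothetical and not part of optimize_skill.py:

```python
VAGUE_PHRASES = ("run the build", "set up the environment", "as needed")

def find_vague_lines(skill_text: str) -> list:
    """Return (line_number, line) pairs containing known vague phrasing."""
    return [(n, line) for n, line in enumerate(skill_text.splitlines(), 1)
            if any(p in line.lower() for p in VAGUE_PHRASES)]

doc = "Step 1: Run the build\nStep 2: npm test"
print(find_vague_lines(doc))  # [(1, 'Step 1: Run the build')]
```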
See STRATEGIES.md for comprehensive optimization patterns.
Step 3: Iterate Until Target Reached
After making improvements, re-run the script to measure progress. Continue the improve → review cycle until target criteria are met.
python3 scripts/optimize_skill.py /path/to/skill
Validation checkpoint: If score decreased or unchanged after 3 iterations, review STRATEGIES.md for alternative approaches. Focus on the first 2-3 suggestions in review output—these typically have highest impact on scores.
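The checkpoint above can be expressed as a stall check over the score history. Tracking a per-iteration score list is an assumption here; the script's actual bookkeeping may differ:

```python
def is_stalled(score_history: list, window: int = 3) -> bool:
    """True if the last `window` iterations failed to beat the score
    recorded just before them."""
    if len(score_history) < window + 1:
        return False
    return max(score_history[-window:]) <= score_history[-(window + 1)]

print(is_stalled([70, 80, 80, 80, 80]))  # True: three iterations, no gain
print(is_stalled([70, 75, 82, 88]))      # False: still improving
```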
Troubleshooting
Scores not improving: Review suggestions in output, focus on highest-impact items first. See STRATEGIES.md for proven optimization patterns.
Understanding scores: See SCORING_GUIDE.md for how tessl evaluates description and content quality.
Validation errors: Fix YAML frontmatter, ensure required fields (name, description) exist.
Source
Repository: https://github.com/pantheon-org/tekhne (skill source: .tessl/tiles/tessl-labs/skill-review-optimizer/SKILL.md)
Overview
The Skill Review Optimizer automates the iterative improvement cycle for skills using tessl skill reviews. It runs baselines, parses scores and suggestions, identifies missing metadata fields, and rewrites descriptions and content with concrete actions until target scores are achieved.
How This Skill Works
It executes tessl skill reviews, parses baseline scores and actionable suggestions, fills metadata gaps, rewrites descriptions with explicit verbs and triggers, restructures content sections, adjusts frontmatter fields, and iterates refinement steps until the desired description and content scores are met.
When to Use It
- Optimizing a skill's quality scores and overall content
- Iterating skill design based on tessl feedback to improve usability and guidance
- Systematically enhancing skill descriptions and content for clarity and actionability
- Ensuring all required metadata/frontmatter fields are present and accurate
- Guiding incremental refinement with score-tracking until targets are reached
Quick Start
- Step 1: Run a baseline tessl skill review to capture scores and suggestions
- Step 2: Apply suggested improvements to metadata, description, and content
- Step 3: Re-run reviews and iterate until target scores are met
Best Practices
- Ensure a complete frontmatter block with required fields (name, description, metadata version, category)
- Use concrete action verbs and trigger terms in all descriptions
- Replace vague guidance with executable commands and examples
- Run a baseline, then an iterative review-improve loop until scores meet targets
- Document score progression and changes in a changelog or summary report
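The required-fields check from the first practice can be sketched without a YAML parser. This naive line scan only inspects unindented keys, and fields beyond name and description are treated as optional here by assumption:

```python
REQUIRED_FIELDS = ("name", "description")

def missing_fields(frontmatter: str) -> list:
    """Return required top-level keys absent from a YAML frontmatter block.

    A line scan, not a full YAML parse: only unindented 'key:' lines count.
    """
    present = {line.split(":", 1)[0].strip()
               for line in frontmatter.splitlines()
               if ":" in line and not line.startswith((" ", "\t"))}
    return [f for f in REQUIRED_FIELDS if f not in present]

fm = "name: my-skill\nmetadata:\n  version: '1.0.0'"
print(missing_fields(fm))  # ['description']
```

For production use, a real YAML parser would be safer; the scan above is only a pre-flight hint before running the full review.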
Example Use Cases
- Improve a skill's description to achieve Description score of 100% and Content score ≥ 90%
- Fill in missing metadata fields (version, category) to satisfy frontmatter requirements
- Restructure content sections for clearer guidance and actionable steps
- Replace vague steps like 'Run the build' with explicit commands (e.g., npm run build)
- Produce a final optimization report showing score progression and changes