writing-skills
Install: `npx machina-cli add skill a5c-ai/babysitter/writing-skills --openclaw`
Overview
Writing skills is Test-Driven Development applied to process documentation: write pressure tests, watch agents fail, write the skill, watch them pass, then close loopholes.
Core principle: If you did not watch an agent fail without the skill, you do not know if the skill teaches the right thing.
TDD for Skills
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| RED | Agent violates rule without skill |
| GREEN | Agent complies with skill present |
| REFACTOR | Close loopholes |
Skill Structure
- YAML frontmatter: `name` and `description` only
- Description: "Use when..." (triggering conditions only; never summarize the workflow)
- Flat namespace; separate files only for heavy reference material or reusable tools
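The frontmatter rules above can be sketched as follows; the `name` and `description` values here are illustrative, not taken from the actual skill:

```yaml
---
name: writing-skills
description: Use when creating a new skill, editing an existing skill, or pressure-testing a skill before deployment
---
```

Note the description states only the triggering conditions ("Use when...") and says nothing about the workflow itself.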
Tool Use
Meta-skill for creating new skills within the methodology.
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/superpowers/skills/writing-skills/SKILL.md
Overview
Writing skills apply Test-Driven Development to process documentation. You write pressure tests, observe agent failures, craft the skill, then validate that agents pass with the skill and fail without it, closing loopholes. The core principle is that you must witness failure without the skill to prove the skill teaches the right behavior.
How This Skill Works
First, write a pressure test as a subagent scenario and treat the SKILL.md document as the production artifact. Then run RED-GREEN-REFACTOR: without the skill, the agent must break the rule; with the skill, the agent must comply; refactoring closes the loopholes the tests expose.
When to Use It
- When creating a new skill
- When editing an existing skill
- Before deploying a skill to production to verify behavior
- To close loopholes by testing edge cases
- During regression testing after updates to the skill or related processes
Quick Start
- Step 1: Write a pressure test scenario and create the SKILL.md frontmatter with name and description
- Step 2: Run the RED-GREEN-REFACTOR cycle to verify failure without the skill and success with it
- Step 3: Validate readiness for deployment and record verification results
Best Practices
- Write a pressure test scenario before drafting SKILL.md
- Keep YAML frontmatter to only name and description
- Validate the red, green, and refactor cycles with real agent behavior
- Iterate to close loopholes and edge cases
- Document the justification and verification criteria for the skill
Example Use Cases
- Create a new skill to enforce a data handling policy and validate with a pressure test
- Edit an existing skill to tighten its triggering conditions
- Run a pre-deployment test confirming that agents violate the rule without the skill and comply once it is present
- Refactor after testing to close loopholes
- Document a complex skill with subagents and reusable tools using the TDD approach