garden

Use Caution

npx machina-cli add skill rana/yogananda-skills/garden --openclaw

Read all project markdown documents (CLAUDE.md, CONTEXT.md, DESIGN.md, DECISIONS.md, ROADMAP.md, and any others present) to ground in the project's actual state.
Document Identifier Gardening
Phase 1: Discover Identifier Conventions
Before analyzing, inventory what exists:
- What PREFIX-NNN patterns are in use? (ADR-NNN, DES-NNN, Phase N, or others)
- Where is each identifier scheme's canonical home? (e.g., ADRs in DECISIONS.md, DES in DESIGN.md)
- What is the total count per scheme? What is the highest number?
- Are there numbering gaps? Are gaps intentional (reserved ranges) or accidental?
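The Phase 1 inventory above can be sketched as a small script. This is a minimal sketch, not part of the skill itself; the regex (2+ uppercase letters, up to 4 digits) and the sample documents are assumptions:

```python
# Hypothetical sketch of the Phase 1 inventory: scan document text for
# PREFIX-NNN identifiers, then report count, highest number, and gaps per scheme.
import re
from collections import defaultdict

ID_PATTERN = re.compile(r"\b([A-Z]{2,})-(\d{1,4})\b")  # assumed identifier shape

def inventory_schemes(texts):
    """Map each prefix to the set of numbers allocated across all documents."""
    schemes = defaultdict(set)
    for text in texts:
        for prefix, num in ID_PATTERN.findall(text):
            schemes[prefix].add(int(num))
    return dict(schemes)

def gaps(numbers):
    """Numbers missing between the lowest and highest allocated identifier."""
    return sorted(set(range(min(numbers), max(numbers) + 1)) - set(numbers))

docs = ["ADR-001 chose SQLite. ADR-003 reversed ADR-001.", "DES-010 builds on DES-009."]
for prefix, nums in sorted(inventory_schemes(docs).items()):
    print(f"{prefix}: count={len(nums)} highest={max(nums)} gaps={gaps(nums)}")
```

Whether a reported gap is intentional (a reserved range) or accidental still requires the judgment call described above; the script only surfaces candidates.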
Phase 2: Apply the Requested Lens
If $ARGUMENTS specifies a mode, run only that mode. If no mode specified, run Safe Deletion and Merge Candidates first (highest cognitive load reduction per finding).
Safe Deletion Audit (prune)
For each identifier, evaluate deletion safety:
- Superseded — Is this fully replaced by a later identifier that captures all essential content? (e.g., "ADR-013 amended by ADR-117" — is ADR-013's original content still needed?)
- Reversed — Was this decision reversed and the reversal is the operative state?
- Absorbed — Was this content absorbed into a broader identifier? Does the broader one stand alone?
- Orphaned — Is this identifier referenced by anything outside its own document? No inbound references + no unique content = candidate for deletion.
- Vestigial — Does this describe a state or decision that no longer applies and has no historical value as context?
- Reconstructible — Could a developer or AI session reconstruct this identifier's essential content from DESIGN.md, the code, and domain knowledge? Distinguish between reconstructible-fact (the decision is visible in the codebase — low reconstruction cost) and irreplaceable-reasoning (the why behind the decision would require a design session to re-derive — high reconstruction cost). Only low-cost reconstructibility qualifies. If keeping the identifier costs more scan-time than reconstructing its content would cost, it's weight.
Before confirming safe deletion: Check whether the identifier is referenced in other documents, cross-reference chains, or commit messages. An identifier with zero inbound references and no unique surviving content is safe. An identifier with inbound references requires updating those references first.
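The inbound-reference check can be approximated mechanically before applying judgment. A minimal sketch, assuming documents are available as a filename-to-text mapping; the identifier and file contents below are illustrative:

```python
# Hypothetical sketch: list inbound references to an identifier from outside
# its canonical home document. Zero hits + no unique content = deletion candidate.
import re

def inbound_refs(identifier, home_doc, docs):
    """docs maps filename -> text. Returns (file, line_no, line) for each outside mention."""
    pat = re.compile(rf"\b{re.escape(identifier)}\b")
    return [
        (name, n, line.strip())
        for name, text in docs.items()
        if name != home_doc  # mentions inside the home doc are not inbound
        for n, line in enumerate(text.splitlines(), 1)
        if pat.search(line)
    ]

docs = {
    "DECISIONS.md": "ADR-013: Use SQLite.\nADR-117: Amends ADR-013.",
    "DESIGN.md": "Storage follows ADR-013.",
}
print(inbound_refs("ADR-013", "DECISIONS.md", docs))
```

Each hit is a reference that must be updated before the identifier can be safely deleted.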
Merge Candidates (merge)
Identify identifiers that cover overlapping territory:
- Same topic split across multiple identifiers (e.g., three ADRs about search that could be one)
- Progressive refinement chains (ADR-X → amended by ADR-Y → updated by ADR-Z) that could collapse into a single canonical version
- Identifiers that exist only to cross-reference each other
For each merge candidate group: what would the merged identifier contain? What would its number be? What cross-references need updating?
Category Coherence (reorder)
Evaluate the grouping structure:
- Are identifiers in the right categories? Would moving any reduce scan cost?
- Are categories balanced? (A category with 15 items and another with 2 suggests restructuring)
- Does the category ordering match the reader's likely priority? (Foundational first, distant-future last)
- Within categories, are identifiers in a logical order? (By dependency, by phase, or by topic cluster)
Cross-Reference Repair (refs)
Check reference health:
- Dangling references — Identifier mentioned but doesn't exist (typo, deleted without cleanup)
- Stale amendment chains — "Updated by ADR-X" where ADR-X doesn't reference back, or the chain is broken
- Missing bidirectional links — A references B but B doesn't reference A
- Implicit references — Content that clearly relates to another identifier but has no explicit cross-reference
- Index-body mismatch — Index entry differs from the actual heading (title drift after edits)
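Two of these checks lend themselves to mechanical detection. A sketch under the same assumed identifier shape as before; the `refs` structure (each identifier mapped to the set it mentions) is a hypothetical intermediate, not something the skill prescribes:

```python
# Hypothetical sketch of two reference-health checks: dangling references
# (mentioned but never defined) and missing backlinks (A -> B without B -> A).
import re

REF = re.compile(r"\b([A-Z]{2,}-\d{1,4})\b")

def dangling(defined, all_text):
    """Identifiers mentioned anywhere in the corpus but defined nowhere."""
    return sorted(set(REF.findall(all_text)) - set(defined))

def missing_backlinks(refs):
    """refs maps id -> set of ids it mentions; yields (a, b) where b never points back."""
    return sorted(
        (a, b)
        for a, targets in refs.items()
        for b in targets
        if a not in refs.get(b, set())
    )

print(dangling({"ADR-001", "ADR-002"}, "ADR-001 supersedes ADR-002; see also ADR-019"))
print(missing_backlinks({"ADR-001": {"ADR-002"}, "ADR-002": set()}))
```

Implicit references and index-body mismatches remain judgment calls; only the explicit-link checks above automate cleanly.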
Greenfield Audit (greenfield)
For each identifier, apply one test: if this identifier disappeared entirely, would a developer or AI session with access to the codebase lose knowledge they cannot reconstruct from DESIGN.md, the code, and domain conventions?
Categorize each identifier as:
- Irreplaceable — contains unique reasoning, constraints, trade-off analysis, or context not derivable from other sources. The why behind the decision would require a design session to re-derive.
- Reconstructible — content is derivable from the codebase or other documents at low cost. The identifier adds scan-time overhead without informational value. State what source makes it reconstructible (specific code file, DESIGN.md section, domain convention).
- Borderline — the fact is reconstructible but the reasoning holds nuance that might be lost. Worth flagging but not a deletion candidate without judgment.
This mode assumes a technical audience (developers and AI sessions with code access). For identifiers that serve non-technical stakeholders who can't read the codebase, note the audience dependency rather than marking as reconstructible.
Phase 3: Sequence Health (always runs)
Brief assessment regardless of mode:
- Numbering gaps and whether they matter
- Highest allocated number vs. total count (gap density)
- Whether renumbering would help or hurt (default recommendation: don't renumber — it breaks external references in git history, conversations, and notes. Instead, annotate gaps or use a mapping note.)
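Gap density can be reduced to a single number. A hypothetical metric, not defined by the skill, useful only as a rough signal:

```python
# Hypothetical gap-density metric: fraction of the allocated number range that
# is unused. A high value with no reserved-range note suggests accidental gaps.
def gap_density(numbers):
    span = max(numbers) - min(numbers) + 1
    return 1 - len(set(numbers)) / span

print(round(gap_density({1, 2, 3, 7, 12}), 2))
```

A density near zero means the sequence is effectively contiguous and needs no annotation.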
Plan Mode (--plan)
If --plan is specified, produce a concrete execution plan instead of a findings list:
- Ordered list of edits to perform
- For each edit: which file, which section, what changes
- Dependency ordering (update references before deleting the target)
- What to verify after each step
Without --plan, present findings as an action list (standard format).
For every finding:
- The specific identifier(s) involved
- What the issue is (with exact document locations)
- The proposed action (delete, merge into X, move to category Y, add reference, etc.)
- Risk level (safe / needs verification / requires judgment call)
Present as a prioritized action list. No changes to files — document only.
Output Management
Hard constraints:
- Segment output into groups of up to 8 findings, ordered by cognitive load reduction impact.
- If no $ARGUMENTS mode is given, run Safe Deletion and Merge Candidates only — the two modes with highest signal-to-noise.
- Write each segment incrementally. Do not accumulate a single large response.
- After completing each segment, continue immediately to the next. Do not wait for user input.
- Continue until ALL findings across all requested modes are reported. State the total count when complete.
- If the analysis surface is too large to complete in one session, state what was covered and what remains.
Document reading strategy:
- Read document indexes and tables of contents first. These reveal structure, counts, and categories without reading every identifier's full text.
- Only drill into specific identifier text when evaluating merge candidates or safe deletion (need to compare content).
- For cross-reference repair, grep for identifier patterns across all documents rather than reading linearly.
What identifiers are carrying weight they shouldn't?
What structure has accumulated rather than been composed?
What would a newcomer find hardest to navigate — and would gardening fix that or is the problem deeper?
Source
git clone https://github.com/rana/yogananda-skills

The skill definition lives at skills/garden/SKILL.md in that repository.

Overview
Garden is the practice of Document Identifier Gardening to keep identifier systems lean and navigable. It audits ADR-NNN, DES-NNN, and similar schemes for safe deletion, merge candidates, category coherence, cross-reference integrity, and cognitive load reduction. Use it when identifier systems accumulate weight, after greenfield analysis questions, or when documents feel harder to navigate than they should.
How This Skill Works
Phase 1 discovers conventions by inventorying prefixes, canonical homes, counts, and gaps across CLAUDE.md, CONTEXT.md, DESIGN.md, DECISIONS.md, ROADMAP.md, and related docs. Phase 2 applies the lens: start with Safe Deletion and Merge if no specific mode is provided, then perform Safe Deletion Audit with six criteria (Superseded, Reversed, Absorbed, Orphaned, Vestigial, Reconstructible). Next, identify Merge Candidates, then check Category Coherence, and finally repair Cross-References to ensure references remain valid.
When to Use It
- When identifier systems start to feel heavy or unwieldy.
- After greenfield analysis questions have been asked.
- When navigating documents becomes harder than it should be.
- Before major refactors that touch identifiers.
- During regular maintenance audits to reduce cognitive load.
Quick Start
- Step 1: Read all project markdown documents (CLAUDE.md, CONTEXT.md, DESIGN.md, DECISIONS.md, ROADMAP.md, etc.) to ground in the current state.
- Step 2: Inventory identifier patterns, canonical homes, counts, and gaps; decide on modes.
- Step 3: Run Safe Deletion and Merge, then reorder categories and repair cross-references; update docs accordingly.
Best Practices
- Inventory existing PREFIX-NNN patterns and their canonical homes.
- Define a single source of truth for each scheme (e.g., ADRs in DECISIONS.md, DES in DESIGN.md).
- Prioritize Safe Deletion and Merge early to reduce cognitive load.
- Document planned merges and category reorganizations before applying changes.
- Update cross-references and commit messages to reflect the new structure.
Example Use Cases
- ADR-013 amended by ADR-117: prune ADR-013 after confirming no unique content remains.
- DES-NNN clusters collapsed into a single canonical DES entry after detecting overlap.
- Category reordering to place foundational identifiers before dependent or future-oriented ones.
- Orphaned ADRs identified by zero inbound references and deleted after cross-reference updates.
- Numbering gaps addressed by annotating reserved ranges and planned future identifiers rather than renumbering.