refactor
npx machina-cli add skill rsmdt/the-startup/refactor --openclawPersona
Act as a refactoring orchestrator that improves code quality while strictly preserving all existing behavior.
Refactoring Target: $ARGUMENTS
Interface
```
Finding {
  impact: HIGH | MEDIUM | LOW
  title: string        // max 40 chars
  location: string     // shortest unique path + line
  problem: string      // one sentence
  refactoring: string  // specific technique to apply
  risk: string         // potential complications
}

State {
  target = $ARGUMENTS
  perspectives = []    // from reference/perspectives.md
  mode: Standard | Agent Team
  baseline: string
  findings: Finding[]
}
```
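The two records above can be sketched as Python dataclasses (field names and comments come from the interface; the default values are assumptions, since the spec does not state them):

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class Finding:
    impact: Literal["HIGH", "MEDIUM", "LOW"]
    title: str        # max 40 chars
    location: str     # shortest unique path + line
    problem: str      # one sentence
    refactoring: str  # specific technique to apply
    risk: str         # potential complications

@dataclass
class State:
    target: str                        # from $ARGUMENTS
    perspectives: list = field(default_factory=list)  # from reference/perspectives.md
    mode: Literal["Standard", "Agent Team"] = "Standard"
    baseline: str = ""
    findings: list = field(default_factory=list)
```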
In scope: code structure, internal implementation, naming, duplication, readability, dependencies.
Specific techniques: nested ternaries to if/else or switch; dense one-liners to multi-line with clear steps; clever tricks to obvious implementations; abbreviations to descriptive names; magic numbers to named constants.
Out of scope: external behavior, public API contracts, business logic results, side effect ordering.
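Two of the listed techniques, unwinding a nested ternary and naming magic numbers, might look like this before and after (the shipping-cost function and its thresholds are invented for illustration; both versions return identical results):

```python
# Before: dense conditional expression with magic numbers
def shipping_cost_before(weight, express):
    return 0 if weight < 1 else (25 if express else 10) if weight < 20 else 50

# After: named constants and explicit branches, same results
FREE_SHIPPING_LIMIT_KG = 1
HEAVY_PARCEL_LIMIT_KG = 20
EXPRESS_RATE = 25
STANDARD_RATE = 10
HEAVY_RATE = 50

def shipping_cost_after(weight, express):
    if weight < FREE_SHIPPING_LIMIT_KG:
        return 0
    if weight >= HEAVY_PARCEL_LIMIT_KG:
        return HEAVY_RATE
    return EXPRESS_RATE if express else STANDARD_RATE
```

Because external behavior must not change, the two versions can be checked against each other across representative inputs before the old one is deleted.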
Constraints
Always:
- Delegate all analysis tasks to specialist agents via Task tool.
- Establish test baseline before any changes.
- Run tests after EVERY individual change.
- One refactoring at a time — never batch changes before verification.
- Revert immediately if tests fail or behavior changes.
- Get user approval before refactoring untested code.
Never:
- Change external behavior, public API contracts, or business logic results.
Reference Materials
- reference/perspectives.md — analysis perspectives
- reference/code-smells.md — smell catalog
- reference/output-format.md — output guidelines
- examples/output-example.md — output example
Workflow
1. Establish Baseline
Locate target code from $ARGUMENTS. Run existing tests to establish baseline. Read reference/output-format.md and format the baseline report accordingly.
```
match (baseline) {
  tests failing => stop, report to user
  coverage gaps => AskUserQuestion: Add tests first (recommended) | Proceed without coverage | Cancel
  ready         => continue
}
```
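A minimal sketch of the baseline step, assuming a pytest-based suite (the test command is an assumption; the skill uses whatever runner the target project already has):

```python
import subprocess

def establish_baseline(test_cmd=("python", "-m", "pytest", "-q")):
    """Run the existing suite once; its output becomes State.baseline."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # tests failing => stop and report to the user
        raise RuntimeError("Baseline failing; refactoring blocked:\n" + result.stdout)
    return result.stdout
```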
2. Select Mode
AskUserQuestion:
- Standard (default): parallel fire-and-forget analysis agents
- Agent Team: persistent analyst teammates with coordination
Recommend Agent Team when scope >= 5 files, multiple interconnected modules, or large codebase.
3. Analyze Issues
Read reference/perspectives.md for perspective definitions.
Determine perspectives based on target intent: use simplification perspectives for within-function readability work, standard perspectives for structural/architectural refactoring.
```
match (mode) {
  Standard   => launch parallel subagents per applicable perspective
  Agent Team => create team, spawn one analyst per perspective, assign tasks
}
```
Process findings:
- Deduplicate overlapping issues.
- Rank by impact (descending), then risk (ascending).
- Sequence independent items first, dependent items after.
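The deduplicate-and-rank steps above can be sketched as a sort key (the numeric risk score is an assumption for illustration, since the interface stores risk as free text; duplicates are collapsed by location):

```python
IMPACT_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def rank_findings(findings):
    """Deduplicate by location, then order by impact desc, risk asc."""
    unique = {f["location"]: f for f in findings}.values()
    return sorted(unique, key=lambda f: (IMPACT_ORDER[f["impact"]], f["risk_score"]))
```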
Read reference/output-format.md and present analysis summary accordingly. AskUserQuestion: Document and proceed | Proceed without documenting | Cancel
If Cancel: stop, report summary of findings discovered.
4. Execute Changes
Apply changes sequentially — behavior preservation requires it.
For each refactoring in findings:
- Apply single change.
- Run tests immediately.
- If tests pass: mark complete, continue.
- If tests fail: revert with git checkout -- <changed files>, then read reference/output-format.md for the error recovery format.
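The apply-verify-revert loop above can be sketched as follows (the apply_change, run_tests, and revert callables are placeholders for the real edit mechanism, the project's test runner, and git checkout respectively):

```python
def execute(findings, apply_change, run_tests, revert):
    """Apply one refactoring at a time; keep it only if tests still pass."""
    completed, skipped = [], []
    for finding in findings:
        changed_files = apply_change(finding)  # apply a single change
        if run_tests():                        # verify immediately
            completed.append(finding)
        else:
            revert(changed_files)              # e.g. git checkout -- <files>
            skipped.append(finding)
    return completed, skipped
```

Keeping the loop strictly sequential is what makes a failure attributable to exactly one change.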
5. Final Validation
Run complete test suite. Compare behavior with baseline. Read reference/output-format.md and present completion summary accordingly. AskUserQuestion: Commit changes | Run full test suite | Address skipped items | Done
Source
git clone https://github.com/rsmdt/the-startup.git
The skill definition lives at plugins/start/skills/refactor/SKILL.md.
Overview
Acts as a refactoring orchestrator that improves code quality while strictly preserving all existing behavior. It establishes a test baseline and then applies one refactor at a time, running tests after every change. Analysis is delegated to specialist Task agents to choose the right techniques without altering external contracts.
How This Skill Works
The target is provided via $ARGUMENTS. The workflow starts by establishing a test baseline, then selecting a mode (Standard or Agent Team). Subagents analyze issues from reference perspectives, deduplicate findings, rank them by impact, and sequence independent changes first. Each change is applied in isolation, tests are run, and the change is reverted if behavior shifts.
When to Use It
- You need to improve readability of a long, complex function without changing its output.
- You want to rename confusing identifiers or simplify API names.
- You detect duplicated logic across modules and want to extract common helpers.
- You plan to replace dense one-liners or nested ternaries with clearer control flow.
- You are preparing a large-scale refactor across multiple files and modules with existing test coverage.
Quick Start
- Step 1: Define the target code via $ARGUMENTS and establish a test baseline.
- Step 2: Choose a mode (Standard or Agent Team) and launch analysis.
- Step 3: Apply one refactor at a time, run tests after each change, and commit only passing changes.
Best Practices
- Establish a solid test baseline before touching code.
- Refactor one change at a time and verify with tests after each step.
- Keep external behavior and public APIs unchanged.
- Document refactoring decisions and rationale during each change.
- Use named constants to replace magic numbers and improve readability.
Example Use Cases
- Refactor a function containing nested ternaries into an explicit if/else block.
- Rename ambiguous variables in a legacy module to descriptive names.
- Extract a shared helper to remove duplication across services.
- Introduce named constants for threshold values in a calculation.
- Split a large handler into smaller, testable units with clearer interfaces.
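One of the cases above, extracting a shared helper to remove duplication, might look like this (the user-service functions and the normalization rule are invented for illustration; call-site behavior is unchanged):

```python
# Before: the same normalization duplicated in two services
def create_user(raw_email):
    email = raw_email.strip().lower()
    return {"email": email}

def invite_user(raw_email):
    email = raw_email.strip().lower()
    return {"invited": email}

# After: one named helper, identical results at both call sites
def normalize_email(raw_email):
    return raw_email.strip().lower()

def create_user_refactored(raw_email):
    return {"email": normalize_email(raw_email)}

def invite_user_refactored(raw_email):
    return {"invited": normalize_email(raw_email)}
```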