discover-codebase-enhancements

npx machina-cli add skill kasperjunge/agent-resources/discover-codebase-enhancements --openclaw
Files (1)
SKILL.md
2.5 KB

Discover Codebase Enhancements

Overview

Spend significant time crawling and analyzing the codebase to surface high-impact improvements. Center findings on the jobs-to-be-done of the codebase, developers, end users, and AI agents working in the repo.

Inputs (ask if missing, max 5)

  • Target area or scope (whole repo or specific modules)
  • Primary user jobs-to-be-done and business goals
  • Known pain points or incidents
  • Constraints (time, risk tolerance, release window)
  • Evidence sources allowed (tests, metrics, logs)

Jobs-to-Be-Done Lens

  • Codebase: reliability, simplicity, maintainability
  • Developers: speed, clarity, safe changes
  • End users: correctness, performance, usability
  • AI agents: discoverability, consistency, explicit patterns

Workflow

  1. Deep crawl
    • Read architecture docs, READMEs, key modules, and tests.
    • Search for hotspots (TODO/FIXME, large files, duplication, complex flows).
  2. Evidence gathering
    • Note error-prone areas, missing tests, performance risks, and coupling.
    • Capture references to files/functions and concrete symptoms.
  3. Opportunity synthesis
    • Group findings by theme: correctness, performance, DX, architecture, tests, tooling.
  4. Impact scoring
    • Rate impact, effort, risk, and evidence strength.
  5. Ranked recommendations
    • Present top enhancements with rationale and expected outcomes.
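The hotspot search in steps 1-2 can be sketched as a small script. This is a minimal illustration, not part of the skill itself: the file extensions, the 500-line threshold, and the TODO/FIXME markers are assumptions you would tune per repository.

```python
import os
import re

# Illustrative sketch of the "deep crawl" hotspot search: walk the repo,
# flag files that contain TODO/FIXME markers or are unusually large.
# The threshold and extension list below are assumptions, not skill defaults.
MARKER = re.compile(r"\b(TODO|FIXME)\b")
LARGE_FILE_LINES = 500

def scan_hotspots(root="."):
    hotspots = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip hidden directories such as .git
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in filenames:
            if not name.endswith((".py", ".js", ".ts", ".go")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    lines = f.readlines()
            except OSError:
                continue
            markers = sum(1 for line in lines if MARKER.search(line))
            if markers or len(lines) > LARGE_FILE_LINES:
                hotspots.append((path, markers, len(lines)))
    # Sort by marker count, then size, so the noisiest files surface first.
    return sorted(hotspots, key=lambda h: (-h[1], -h[2]))
```

Each hit becomes a candidate for the evidence-gathering step: the file path is the reference, and the marker count or size is the concrete symptom.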

Output Format

## Codebase Enhancement Discovery

### Context Summary
[1-3 sentences]

### JTBD Summary
- Codebase: ...
- Developers: ...
- End users: ...
- AI agents: ...

### Evidence Sources
- Files/modules reviewed: ...
- Patterns searched: ...
- Tests or metrics considered: ...

### Ranked Enhancements
1) [Enhancement]
   - Category: ...
   - Impact: high | Effort: medium | Risk: low | Evidence: moderate
   - Rationale: ...
   - Affected areas: ...

### Quick Wins
- ...

### Open Questions
- ...

Quick Reference

  • Spend more time exploring than feels necessary.
  • Prefer evidence-backed findings over speculation.
  • Center recommendations on user and developer outcomes.

Common Mistakes

  • Skimming without enough code context
  • Listing fixes without evidence or impact scoring
  • Ignoring AI agent or developer workflows
  • Recommending changes that fight existing architecture

Source

git clone https://github.com/kasperjunge/agent-resources

The skill file lives at skills/development/codebase-maintenance/discover-codebase-enhancements/SKILL.md.

Overview

This skill performs a thorough crawl of the repository to surface high-impact improvements. It centers findings on codebase jobs-to-be-done for developers, end users, and AI agents, prioritizing reliability, maintainability, performance, and DX. By aggregating evidence and ranking recommendations, it guides safe, impactful changes.

How This Skill Works

It follows a five-step workflow: (1) deep crawl of architecture docs, READMEs, key modules, and tests; (2) evidence gathering of error-prone areas, missing tests, and performance risks with concrete file/function references; (3) opportunity synthesis grouped by theme (correctness, performance, DX, architecture, tests, tooling); (4) impact scoring evaluating impact, effort, risk, and evidence strength; (5) ranked recommendations with rationale and expected outcomes.
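Step 4, the impact scoring, could be implemented as a simple weighted rubric. The 1-5 scale and the weights below are hypothetical: the skill only specifies that impact, effort, risk, and evidence strength are rated, not how they combine.

```python
# Hypothetical scoring rubric for step 4 (impact scoring). The weights
# (impact doubled, effort and risk subtracted) are assumptions for
# illustration, not part of the skill's definition.
def score_enhancement(impact, effort, risk, evidence):
    """Each argument is a rating from 1 (low) to 5 (high)."""
    for value in (impact, effort, risk, evidence):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    # Reward impact and evidence strength; penalize effort and risk.
    return impact * 2 + evidence - effort - risk

# Example candidates from a synthesis pass (names are illustrative).
candidates = {
    "add integration tests for checkout": score_enhancement(4, 2, 1, 4),
    "rewrite the data layer": score_enhancement(5, 5, 4, 2),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

A quick win with strong evidence outranks a risky rewrite here, which matches the skill's preference for evidence-backed, low-risk improvements.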

When to Use It

  • You need a deep analysis to identify reliability, maintainability, or architectural gaps across a whole repo or specific modules.
  • You want to surface performance bottlenecks or DX gaps aligned to the jobs-to-be-done of developers, end users, and AI agents (discoverability, consistency, explicit patterns).
  • Known pain points or incidents require root-cause analysis and prioritized, evidence-backed fixes.
  • You want proposed enhancements ranked by impact, effort, and risk, supported by concrete evidence.

Quick Start

  1. Define inputs and scope (target area, primary JTBD, constraints, evidence sources).
  2. Run the deep crawl and gather evidence from architecture docs, tests, and key modules.
  3. Review the Ranked Enhancements and turn the top items into an implementation plan with rationale and expected outcomes.

Best Practices

  • Define the scope and inputs up front (target area, JTBD, constraints) to focus the analysis.
  • Ground findings in evidence from tests, metrics, and logs; capture references to files/functions and concrete symptoms.
  • Group findings by theme (correctness, performance, DX, architecture, tests, tooling) to clarify impact areas.
  • Apply formal impact scoring (impact, effort, risk, evidence strength) to guide prioritization.
  • Prioritize changes that improve codebase outcomes for developers and end users while preserving architectural coherence.

Example Use Cases

  • Identified a monolithic module with high churn; recommended modularization, clearer interfaces, and smaller, well-scoped changes.
  • Detected gaps in test coverage for critical paths; added unit and integration tests with deterministic fixtures to reduce flakiness.
  • Found a repeated data-fetch pattern causing latency spikes; introduced caching and query optimization to improve response times.
  • Observed flaky tests due to shared state; proposed test isolation improvements and better teardown strategies.
  • Noted poor observability across services; added metrics, traces, and dashboards to improve incident response and debugging.
