
ln-623-code-principles-auditor

npx machina-cli add skill levnikolaevich/claude-code-skills/ln-623-code-principles-auditor --openclaw

Paths: File paths (shared/, references/, ../ln-*) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.

Code Principles Auditor (L3 Worker)

A specialized worker that audits code principles (DRY, KISS, YAGNI) and design patterns.

Purpose & Scope

  • Worker in ln-620 coordinator pipeline - invoked by ln-620-codebase-auditor
  • Audit code principles (DRY/KISS/YAGNI, error handling, DI)
  • Return structured findings with severity, location, effort, pattern_signature, recommendations
  • Calculate compliance score (X/10) for Code Principles category

Inputs (from Coordinator)

MANDATORY READ: Load shared/references/task_delegation_pattern.md#audit-coordinator--worker-contract for contextStore structure.

Receives contextStore with: tech_stack, best_practices, principles, codebase_root, output_dir.

Domain-aware: Supports domain_mode + current_domain (see audit_output_schema.md#domain-aware-worker-output).

Workflow

  1. Parse context — extract fields, determine scan_path (domain-aware if specified), extract output_dir
  2. Load detection patterns
    • MANDATORY READ: Load references/detection_patterns.md for language-specific Grep/Glob patterns
    • Select patterns matching project's tech_stack
  3. Scan codebase for violations
    • All Grep/Glob patterns use scan_path (not codebase_root)
    • Follow step-by-step detection from detection_patterns.md
    • Apply exclusions from detection_patterns.md#exclusions
  4. Generate recommendations
    • MANDATORY READ: Load references/refactoring_decision_tree.md for pattern selection
    • Match each finding to appropriate refactoring pattern via decision tree
  5. Collect findings with severity, location, effort, pattern_id, pattern_signature, recommendation
    • Tag each finding with domain: domain_name (if domain-aware)
    • Assign pattern_signature for cross-domain matching by ln-620
  6. Calculate score using penalty algorithm
  7. Write Report: Build the full markdown report in memory per shared/templates/audit_worker_report_template.md and write it to {output_dir}/623-principles-{domain}.md (or 623-principles.md in global mode) in a single Write call. Include a <!-- FINDINGS-EXTENDED --> JSON block with pattern_signature fields for cross-domain DRY analysis
  8. Return Summary: Return minimal summary to coordinator (see Output Format)
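
Steps 5–7 above can be sketched as follows. The field names mirror the ones listed in the workflow, but the exact JSON shape is defined by shared/templates/audit_worker_report_template.md (not reproduced here), so treat this block as an illustrative assumption, not the authoritative schema.

```typescript
// Hypothetical sketch of a worker finding and the FINDINGS-EXTENDED block.
// Field names follow the workflow above; the real schema lives in the
// report template and may differ.
interface Finding {
  pattern_id?: string;          // e.g. "dry_1.2" (DRY findings only)
  pattern_signature?: string;   // e.g. "validation_email"
  severity: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";
  location: string;             // file:line
  effort: "S" | "M" | "L";
  recommendation: string;
  domain?: string;              // set only in domain-aware mode
}

// Render the report's machine-readable block (DRY findings only).
function findingsExtendedBlock(findings: Finding[]): string {
  const dry = findings.filter((f) => f.pattern_id?.startsWith("dry_"));
  return ["<!-- FINDINGS-EXTENDED", JSON.stringify(dry, null, 2), "-->"].join("\n");
}

const block = findingsExtendedBlock([
  {
    pattern_id: "dry_1.2",
    pattern_signature: "validation_email",
    severity: "MEDIUM",
    location: "src/users/create.ts:42",
    effort: "M",
    recommendation: "Extract to shared validators module",
    domain: "users",
  },
]);
```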

Audit Rules

1. DRY Violations (Don't Repeat Yourself)

MANDATORY READ: Load references/detection_patterns.md for detection steps per type.

| Type | What | Severity | Default Recommendation | Effort |
| --- | --- | --- | --- | --- |
| 1.1 Identical Code | Same functions/constants/blocks (>10 lines) in multiple files | HIGH: business-critical (auth, payment). MEDIUM: utilities. LOW: simple constants <5x | Extract function → decide location by duplication scope | M |
| 1.2 Duplicated Validation | Same validation patterns (email, password, phone, URL) across files | HIGH: auth/payment. MEDIUM: user input 3+x. LOW: format checks <3x | Extract to shared validators module | M |
| 1.3 Repeated Error Messages | Hardcoded error strings instead of centralized catalog | MEDIUM: critical messages hardcoded or no error catalog. LOW: <3 places | Create constants/error-messages file | M |
| 1.4 Similar Patterns | Functions with same call sequence/control flow but different names/entities | MEDIUM: business logic in critical paths. LOW: utilities <3x | Extract common logic (see decision tree for pattern) | M |
| 1.5 Duplicated SQL/ORM | Same queries in different services | HIGH: payment/auth queries. MEDIUM: common 3+x. LOW: simple <3x | Extract to Repository layer | M |
| 1.6 Copy-Pasted Tests | Identical setup/teardown/fixtures across test files | MEDIUM: setup in 5+ files. LOW: <5 files | Extract to test helpers | M |
| 1.7 Repeated API Responses | Same response object shapes without DTOs | MEDIUM: in 5+ endpoints. LOW: <5 endpoints | Create DTO/Response classes | M |
| 1.8 Duplicated Middleware Chains | Identical middleware/decorator stacks on multiple routes | MEDIUM: same chain on 5+ routes. LOW: <5 routes | Create named middleware group, apply at router level | M |
| 1.9 Duplicated Type Definitions | Interfaces/structs/types with 80%+ same fields | MEDIUM: in 5+ files. LOW: 2-4 files | Create shared base type, extend where needed | M |
| 1.10 Duplicated Mapping Logic | Same entity→DTO / DTO→entity transformations in multiple locations | MEDIUM: in 3+ locations. LOW: 2 locations | Create dedicated Mapper class/function | M |

Recommendation selection: Use references/refactoring_decision_tree.md to choose the right refactoring pattern based on duplication location (Level 1) and logic type (Level 2).
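
As an illustration of type 1.2, the duplicated email check below (hypothetical code, not from any real project) would be flagged and extracted into a single shared validator that both call sites import:

```typescript
// Before (DRY 1.2): the same email regex copy-pasted into two modules.
// const ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);  // in users/create.ts
// const ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);  // in auth/register.ts

// After: one shared validators module; both modules call isValidEmail().
const EMAIL_RE = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;

function isValidEmail(email: string): boolean {
  return EMAIL_RE.test(email);
}
```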

2. KISS Violations (Keep It Simple, Stupid)

| Violation | Detection | Severity | Recommendation | Effort |
| --- | --- | --- | --- | --- |
| Abstract class with 1 implementation | Grep `abstract class` → count subclasses | HIGH: prevents understanding core logic | Remove abstraction, inline | L |
| Factory for <3 types | Grep factory patterns → count branches | MEDIUM: unnecessary pattern | Replace with direct construction | M |
| Deep inheritance >3 levels | Trace extends chain | HIGH: fragile hierarchy | Flatten with composition | L |
| Excessive generic constraints | Grep `<T extends ... & ...>` | LOW: acceptable tradeoff | Simplify constraints | M |
| Wrapper-only classes | Read: all methods delegate to inner | MEDIUM: unnecessary indirection | Remove wrapper, use inner directly | M |
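
To illustrate the wrapper-only row with a hypothetical sketch: every method of `UserServiceWrapper` merely delegates to its inner instance, so the audit would recommend deleting the wrapper and using the inner class directly.

```typescript
class UserService {
  getName(id: number): string {
    return `user-${id}`;
  }
}

// KISS violation: a wrapper whose every method just delegates to `inner`.
// Recommendation: delete this class and call UserService directly.
class UserServiceWrapper {
  constructor(private inner: UserService) {}
  getName(id: number): string {
    return this.inner.getName(id); // pure delegation, no added behavior
  }
}

// After refactoring, callers hold a UserService directly:
const svc = new UserService();
```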

3. YAGNI Violations (You Aren't Gonna Need It)

| Violation | Detection | Severity | Recommendation | Effort |
| --- | --- | --- | --- | --- |
| Dead feature flags (always true/false) | Grep flags → verify never toggled | LOW: cleanup needed | Remove flag, keep active code path | M |
| Abstract methods never overridden | Grep abstract → search implementations | MEDIUM: unused extensibility | Remove abstract, make concrete | M |
| Unused config options | Grep config key → 0 references | LOW: dead config | Remove option | S |
| Interface with 1 implementation | Grep interface → count implementors | MEDIUM: premature abstraction | Remove interface, use class directly | M |
| Premature generics (used with 1 type) | Grep generic usage → count type params | LOW: over-engineering | Replace generic with concrete type | S |
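
For the "interface with 1 implementation" row, a hypothetical before/after (`Clock`/`SystemClock` are made-up names): the interface adds indirection without a second implementor, so the concrete class is used directly.

```typescript
// Before (YAGNI): an interface with exactly one implementor.
// interface Clock { now(): number; }
// class SystemClock implements Clock { now() { return Date.now(); } }

// After: the concrete class is used directly; the interface can be
// reintroduced later if a second implementation (e.g. a test fake) appears.
class SystemClock {
  now(): number {
    return Date.now();
  }
}
const clock = new SystemClock();
```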

4. Missing Error Handling

  • Find async functions without try-catch
  • Check API routes without error middleware
  • Verify database calls have error handling
| Severity | Criteria |
| --- | --- |
| CRITICAL | Payment/auth without error handling |
| HIGH | User-facing operations without error handling |
| MEDIUM | Internal operations without error handling |

Effort: M
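
A hypothetical example of the shape being checked for (`chargeCard` is a stand-in name, not a real API): an async operation wrapped in try/catch so failures surface through the error path instead of as unhandled rejections.

```typescript
// Hypothetical async operation that can fail.
async function chargeCard(amount: number): Promise<string> {
  if (amount <= 0) throw new Error("invalid amount");
  return "charged";
}

// Flagged shape: `await chargeCard(...)` with no try/catch around it.
// Compliant shape: the error is caught and converted into a result value.
async function chargeSafely(amount: number): Promise<{ ok: boolean; error?: string }> {
  try {
    await chargeCard(amount);
    return { ok: true };
  } catch (err) {
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```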

5. Centralized Error Handling

  • Search for centralized error handler: ErrorHandler, errorHandler, error-handler.*
  • Check if middleware delegates to handler
  • Verify async routes use promises/async-await
  • Anti-pattern: process.on("uncaughtException") usage
| Severity | Criteria |
| --- | --- |
| HIGH | No centralized error handler |
| HIGH | Using uncaughtException listener (Express anti-pattern) |
| MEDIUM | Middleware handles errors directly (no delegation) |
| MEDIUM | Async routes without proper error handling |
| LOW | Stack traces exposed in production |

Recommendation: Create single ErrorHandler class. Middleware catches and forwards. Use async/await. DO NOT use uncaughtException listeners.

Effort: M-L
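
A minimal framework-agnostic sketch of the recommendation, with hypothetical names (`ErrorHandler`, `AppError`); in a real Express app, middleware would call `next(err)` and delegate here instead of formatting responses inline.

```typescript
class AppError extends Error {
  constructor(message: string, public status = 500) {
    super(message);
  }
}

// Single place that turns any thrown value into a response shape.
// Middleware should catch errors and forward them here, not format them itself.
class ErrorHandler {
  handle(err: unknown): { status: number; body: { error: string } } {
    if (err instanceof AppError) {
      return { status: err.status, body: { error: err.message } };
    }
    // Unknown errors: generic message, no stack trace leaked to clients.
    return { status: 500, body: { error: "Internal Server Error" } };
  }
}

const handler = new ErrorHandler();
```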

6. Dependency Injection / Centralized Init

  • Check for DI container: inversify, awilix, tsyringe (Node), dependency_injector (Python), Spring @Autowired (Java), ASP.NET IServiceCollection (C#)
  • Grep for new SomeService() in business logic (direct instantiation)
  • Check for bootstrap module: bootstrap.ts, init.py, Startup.cs, app.module.ts
| Severity | Criteria |
| --- | --- |
| MEDIUM | No DI container (tight coupling) |
| MEDIUM | Direct instantiation in business logic |
| LOW | Mixed DI and direct imports |

Recommendation: Use DI container. Centralize init in bootstrap module. Inject via constructor.

Effort: L
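
The recommendation can be sketched without committing to any specific container (all names here are hypothetical); a container such as inversify or tsyringe would replace the manual wiring in `bootstrap()`.

```typescript
// Dependencies are injected via the constructor instead of being
// constructed inside business logic with `new SomeService()`.
class EmailClient {
  send(to: string): string {
    return `sent:${to}`;
  }
}

class SignupService {
  constructor(private email: EmailClient) {}
  register(address: string): string {
    return this.email.send(address);
  }
}

// Centralized init: the only place where concrete classes are wired together.
function bootstrap(): SignupService {
  return new SignupService(new EmailClient());
}
```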

7. Missing Best Practices Guide

  • Check for: docs/architecture.md, docs/best-practices.md, ARCHITECTURE.md, CONTRIBUTING.md
| Severity | Criteria |
| --- | --- |
| LOW | No architecture/best practices guide |

Recommendation: Create docs/architecture.md with layering rules, error handling patterns, DI usage, coding conventions.

Effort: S

Scoring Algorithm

MANDATORY READ: Load shared/references/audit_scoring.md for unified scoring formula.
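
The authoritative formula lives in shared/references/audit_scoring.md and is not reproduced here; the sketch below only illustrates the general shape of a severity-weighted penalty score, with made-up weights.

```typescript
// Illustrative only: the weights and clamping below are assumptions,
// NOT the formula from shared/references/audit_scoring.md.
type Counts = { critical: number; high: number; medium: number; low: number };

function exampleScore(c: Counts): number {
  const penalty = c.critical * 3 + c.high * 1.5 + c.medium * 0.5 + c.low * 0.1;
  // Start from a perfect 10, subtract penalties, clamp at 0, round to 0.1.
  return Math.max(0, Math.round((10 - penalty) * 10) / 10);
}
```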

Output Format

MANDATORY READ: Load shared/templates/audit_worker_report_template.md for file format.

Write report to {output_dir}/623-principles-{domain}.md (or 623-principles.md in global mode) with category: "Architecture & Design".

FINDINGS-EXTENDED block (required for this worker): After the Findings table, include a <!-- FINDINGS-EXTENDED --> JSON block containing all DRY findings with pattern_signature for cross-domain matching by ln-620 coordinator. See template for format.

pattern_id: DRY type identifier (dry_1.1 through dry_1.10). Omit for non-DRY findings.

pattern_signature: Normalized key for the detected pattern (e.g., validation_email, sql_users_findByEmail, middleware_auth_validate_ratelimit). Same signature in multiple domains triggers cross-domain DRY finding. See detection_patterns.md for format per DRY type.
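
One hypothetical way such keys could be derived is a simple normalizer like the one below; the authoritative signature formats per DRY type are defined in detection_patterns.md, so this is only a sketch.

```typescript
// Build a normalized pattern_signature from a category plus its parts.
// Lowercased, non-alphanumerics collapsed to "_", joined in order, so the
// same pattern yields the same key in every domain.
function patternSignature(category: string, parts: string[]): string {
  const norm = (s: string) =>
    s.toLowerCase().replace(/[^a-z0-9]+/g, "_").replace(/^_+|_+$/g, "");
  return [category, ...parts].map(norm).join("_");
}
```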

Return summary to coordinator:

Report written: docs/project/.audit/ln-620/{YYYY-MM-DD}/623-principles-users.md
Score: X.X/10 | Issues: N (C:N H:N M:N L:N)

Critical Rules

  • Do not auto-fix: Report only
  • Domain-aware scanning: If domain_mode="domain-aware", scan ONLY scan_path
  • Tag findings: Include domain field in each finding when domain-aware
  • Pattern signatures: Include pattern_id + pattern_signature for every DRY finding
  • Context-aware: Use project's principles.md to define what's acceptable
  • Effort realism: S = <1h, M = 1-4h, L = >4h
  • Exclusions: Skip generated code, vendor, migrations (see detection_patterns.md#exclusions)

Definition of Done

  • contextStore parsed (including domain_mode, current_domain, output_dir)
  • scan_path determined (domain path or codebase root)
  • Detection patterns loaded from references/detection_patterns.md
  • All 7 checks completed (scoped to scan_path):
    • DRY (10 subcategories: 1.1-1.10), KISS, YAGNI, Error Handling, Centralized Errors, DI/Init, Best Practices Guide
  • Recommendations selected via references/refactoring_decision_tree.md
  • Findings collected with severity, location, effort, pattern_id, pattern_signature, recommendation, domain
  • Score calculated per shared/references/audit_scoring.md
  • Report written to {output_dir}/623-principles-{domain}.md with FINDINGS-EXTENDED block (atomic single Write call)
  • Summary returned to coordinator

Reference Files

  • references/detection_patterns.md (language-specific Grep/Glob detection patterns and exclusions)
  • references/refactoring_decision_tree.md (refactoring pattern selection)
  • shared/references/task_delegation_pattern.md (coordinator/worker contextStore contract)
  • shared/references/audit_scoring.md (unified scoring formula)
  • shared/templates/audit_worker_report_template.md (report file format)

Version: 5.0.0 Last Updated: 2026-02-08

Source

git clone https://github.com/levnikolaevich/claude-code-skills.git
View on GitHub: https://github.com/levnikolaevich/claude-code-skills/blob/master/ln-623-code-principles-auditor/SKILL.md

Overview

Code Principles Auditor (L3) inspects a codebase for DRY, KISS/YAGNI, error handling, and DI patterns. It returns structured findings with severity, location, effort, and pattern_signature, plus recommendations and a compliance score. Operates within the ln-620 coordinator pipeline and supports domain-aware outputs.

How This Skill Works

The auditor parses the contextStore to identify scan_path, loads language-aware detection patterns, and scans the codebase with Grep/Glob. Findings are enriched with severity, location, effort, and pattern_signature, then mapped to refactoring options via a decision tree and written into a single report that includes a FINDINGS-EXTENDED JSON block for cross-domain analysis.

When to Use It

  • Auditing a codebase for DRY violations across services and modules.
  • Validating adherence to KISS/YAGNI and robust error handling.
  • Checking DI patterns and dependency wiring for consistency.
  • Generating a stakeholder-ready audit report with findings and a score.
  • Preparing cross-domain refactoring recommendations in a domain-aware project.

Quick Start

  1. Load contextStore, load detection patterns from references, and determine scan_path.
  2. Run the Grep/Glob based scan to collect violations under scan_path.
  3. Generate recommendations via the decision tree, write the 623-principles-*.md report, and include the FINDINGS-EXTENDED JSON block.

Best Practices

  • Load detection patterns aligned with the project's tech_stack to ensure relevant findings.
  • Exclude boilerplate or known safe patterns using the provided exclusions.
  • Tag each finding with domain and a stable pattern_signature for cross-domain analysis.
  • Prioritize high-severity findings and pair them with concrete refactoring recommendations.
  • Review the generated report and the FINDINGS-EXTENDED JSON to verify cross-domain consistency.

Example Use Cases

  • Identical Code across services triggers 1.1 and is refactored into a shared function.
  • Duplicated Validation across modules flagged and moved to a shared validators module.
  • Repeated error messages replaced with a centralized error catalog.
  • Two functions with similar control flow but different names are refactored into a common pattern.
  • Same SQL/ORM queries detected in multiple services and extracted to a shared data access layer.
