ln-650-persistence-performance-auditor
Install: npx machina-cli add skill levnikolaevich/claude-code-skills/ln-650-persistence-performance-auditor --openclaw
Paths: File paths (shared/, references/, ../ln-*) are relative to the skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for the repo root.
Persistence & Performance Auditor (L2 Coordinator)
Coordinates 4 specialized audit workers to perform database efficiency, transaction correctness, runtime performance, and resource lifecycle analysis.
Purpose & Scope
- Coordinates 4 audit workers (ln-651, ln-652, ln-653, ln-654) running in parallel
- Research current best practices for detected DB, ORM, async framework via MCP tools ONCE
- Pass shared context to all workers (token-efficient)
- Aggregate worker results into single consolidated report
- Write report to docs/project/persistence_audit.md (file-based, no task creation)
- Manual invocation by user; not part of Story pipeline
- Independent from ln-620 (can be run separately or after ln-620)
Workflow
- Discovery: Load tech_stack.md, package manifests, detect DB/ORM/async framework, auto-discover Team ID
- Research: Query MCP tools for DB/ORM/async best practices ONCE
- Build Context: Create contextStore with best practices + DB-specific metadata
- Prepare Output: Create output directory
- Delegate: 4 workers in PARALLEL
- Aggregate: Collect worker results, calculate scores
- Write Report: Save to docs/project/persistence_audit.md
Phase 1: Discovery
Load project metadata:
- docs/project/tech_stack.md: detect DB, ORM, async framework
- Package manifests: requirements.txt, pyproject.toml, package.json, go.mod
- Auto-discover Team ID from docs/tasks/kanban_board.md
Extract DB-specific metadata:
| Metadata | Source | Example |
|---|---|---|
| Database type | tech_stack.md, docker-compose.yml | PostgreSQL 16 |
| ORM | imports, requirements.txt | SQLAlchemy 2.0 |
| Async framework | imports, requirements.txt | asyncio, FastAPI |
| Session config | grep create_async_engine, sessionmaker | expire_on_commit=False |
| Triggers/NOTIFY | migration files | pg_notify('job_events', ...) |
| Connection pooling | engine config | pool_size=10, max_overflow=20 |
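For illustration, detection can be as simple as scanning the manifests above for known package names. A minimal sketch, assuming a Python project; the MARKERS mapping and its entries are illustrative assumptions, not part of the skill's contract:

```python
from pathlib import Path

# Hypothetical marker -> (category, technology) mapping; extend per project.
MARKERS = {
    "sqlalchemy": ("orm", "sqlalchemy"),
    "fastapi": ("async", "fastapi"),
    "asyncpg": ("db", "postgresql"),
    "psycopg": ("db", "postgresql"),
}

def detect_tech_stack(root: str = ".") -> dict:
    """Scan package manifests for known dependency names."""
    stack: dict = {}
    for manifest in ("requirements.txt", "pyproject.toml", "package.json", "go.mod"):
        path = Path(root) / manifest
        if not path.exists():
            continue
        text = path.read_text(encoding="utf-8").lower()
        for marker, (key, value) in MARKERS.items():
            if marker in text:
                stack.setdefault(key, value)
    return stack
```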
Scan for triggers:
Grep("pg_notify|NOTIFY|CREATE TRIGGER", path="alembic/versions/")
OR path="migrations/"
→ Store: db_config.triggers = [{table, event, function}]
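A sketch of how the trigger metadata might be extracted. It assumes PostgreSQL's CREATE TRIGGER ... EXECUTE FUNCTION syntax embedded in Alembic .py migrations; raw .sql migrations or multi-event triggers would need an adjusted pattern:

```python
import re
from pathlib import Path

TRIGGER_RE = re.compile(
    r"CREATE\s+TRIGGER\s+\w+\s+(?:BEFORE|AFTER)\s+(?P<event>\w+)"
    r"\s+ON\s+(?P<table>\w+).*?EXECUTE\s+(?:FUNCTION|PROCEDURE)\s+(?P<function>\w+)",
    re.IGNORECASE | re.DOTALL,
)

def scan_triggers(migrations_dir: str = "alembic/versions") -> list[dict]:
    """Collect {table, event, function} entries for db_config.triggers."""
    triggers = []
    for migration in Path(migrations_dir).rglob("*.py"):  # Alembic migrations are .py files
        for m in TRIGGER_RE.finditer(migration.read_text(encoding="utf-8")):
            triggers.append(m.groupdict())
    return triggers
```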
Phase 2: Research Best Practices (ONCE)
For each detected technology:
| Technology | Research Focus |
|---|---|
| SQLAlchemy | Session lifecycle, expire_on_commit, bulk operations, eager/lazy loading |
| PostgreSQL | NOTIFY/LISTEN semantics, transaction isolation, batch operations |
| asyncio | to_thread, blocking detection, event loop best practices |
| FastAPI | Dependency injection scopes, background tasks, async endpoints |
Build contextStore:
{
"tech_stack": {"db": "postgresql", "orm": "sqlalchemy", "async": "asyncio"},
"best_practices": {"sqlalchemy": {...}, "postgresql": {...}, "asyncio": {...}},
"db_config": {
"expire_on_commit": false,
"triggers": [{"table": "jobs", "event": "UPDATE", "function": "notify_job_events"}],
"pool_size": 10
},
"codebase_root": "/project",
"output_dir": "docs/project/.audit/ln-650/{YYYY-MM-DD}"
}
Phase 3: Prepare Output Directory
mkdir -p {output_dir} # No deletion — date folders preserve history
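The same step in Python, assuming the output_dir format from the contextStore above:

```python
from datetime import date
from pathlib import Path

# Date-stamped folder, e.g. docs/project/.audit/ln-650/2026-02-04
output_dir = Path("docs/project/.audit/ln-650") / date.today().isoformat()
output_dir.mkdir(parents=True, exist_ok=True)  # never delete previous runs
```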
Phase 4: Delegate to Workers
CRITICAL: All delegations use the Task tool with subagent_type: "general-purpose" for context isolation.
Prompt template:
Task(description: "Audit via ln-65X",
prompt: "Execute ln-65X-{worker}-auditor. Read skill from ln-65X-{worker}-auditor/SKILL.md. Context: {contextStore}",
subagent_type: "general-purpose")
Anti-Patterns:
- ❌ Direct Skill tool invocation without Task wrapper
- ❌ Any execution bypassing subagent context isolation
Workers (ALL 4 in PARALLEL):
| # | Worker | Priority | What It Audits |
|---|---|---|---|
| 1 | ln-651-query-efficiency-auditor | HIGH | Redundant queries, N-UPDATE loops, over-fetching, caching scope |
| 2 | ln-652-transaction-correctness-auditor | HIGH | Commit patterns, trigger interaction, transaction scope, rollback |
| 3 | ln-653-runtime-performance-auditor | MEDIUM | Blocking IO in async, allocations, sync sleep, string concat |
| 4 | ln-654-resource-lifecycle-auditor | HIGH | Session scope mismatch, streaming resource holding, pool config, cleanup |
Invocation (all 4 Task calls dispatched together so workers run in PARALLEL):
FOR EACH worker IN [ln-651, ln-652, ln-653, ln-654]:
Task(description: "Audit via " + worker,
prompt: "Execute " + worker + ". Read skill. Context: " + JSON.stringify(contextStore),
subagent_type: "general-purpose")
Worker Output Contract (File-Based):
Workers write full report to {output_dir}/{worker_id}.md per shared/templates/audit_worker_report_template.md.
Workers return minimal summary in-context (~50 tokens):
Report written: docs/project/.audit/ln-650/{YYYY-MM-DD}/651-query-efficiency.md
Score: 6.0/10 | Issues: 8 (C:0 H:3 M:4 L:1)
Phase 5: Aggregate Results (File-Based)
Workers wrote reports to {output_dir}/ and returned minimal summaries. Aggregation uses return values for numbers and file reads for findings tables.
Aggregation steps:
- Parse scores/counts from worker return strings (already in context)
- Read worker report files from {output_dir}/ for findings tables
- Calculate overall score: average of 4 category scores (see the parsing sketch below)
- Sum severity counts across all workers
- Sort findings by severity (CRITICAL → HIGH → MEDIUM → LOW)
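A minimal sketch of steps 1-3, assuming worker summaries follow the Phase 4 contract format ("Score: X/10 | Issues: N (C:a H:b M:c L:d)"):

```python
import re

SUMMARY_RE = re.compile(
    r"Score:\s*(?P<score>[\d.]+)/10.*?"
    r"\(C:(?P<C>\d+)\s+H:(?P<H>\d+)\s+M:(?P<M>\d+)\s+L:(?P<L>\d+)\)"
)

def aggregate(summaries: list[str]) -> dict:
    """Parse the 4 worker return strings into scores and severity totals."""
    scores, totals = [], {"C": 0, "H": 0, "M": 0, "L": 0}
    for text in summaries:
        m = SUMMARY_RE.search(text)
        if not m:
            raise ValueError(f"Unparseable worker summary: {text!r}")
        scores.append(float(m["score"]))
        for sev in totals:
            totals[sev] += int(m[sev])
    return {"overall": sum(scores) / len(scores), "severity_counts": totals}
```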
Context Validation (Post-Filter)
MANDATORY READ: Load shared/references/context_validation.md
Apply Rules 1, 6 to merged findings:
FOR EACH finding WHERE severity IN (HIGH, MEDIUM):
# Rule 1: ADR/Planned Override
IF finding matches ADR → advisory "[Planned: ADR-XXX]"
# Rule 6: Execution Context
IF finding.check IN (blocking_io, redundant_fetch, transaction_wide, cpu_bound):
context = 0
- Function in __init__/setup/bootstrap/migrate → context += 1
- File in tasks/jobs/cron/ → context += 1
- Has timeout/safeguard nearby → context += 1
- Small data (<100KB file, <100 items dataset) → context += 1
IF context >= 3 → advisory
IF context >= 1 → severity -= 1
Downgraded findings → "Advisory Findings" section in report.
Recalculate overall score excluding advisory findings from penalty.
Exempt: Missing rollback CRITICAL, N-UPDATE loops in hot paths.
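A sketch of the Rule 6 logic above; the field names on the finding dict are hypothetical, and exempt findings should bypass it entirely:

```python
CONTEXT_CHECKS = {"blocking_io", "redundant_fetch", "transaction_wide", "cpu_bound"}
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def apply_rule_6(finding: dict) -> dict:
    """Downgrade or mark advisory based on execution-context signals.

    Exempt findings (missing-rollback CRITICAL, N-UPDATE loops in hot paths)
    must skip this filter.
    """
    if finding["check"] not in CONTEXT_CHECKS:
        return finding
    context = sum([
        finding.get("in_init_or_setup", False),    # __init__/setup/bootstrap/migrate
        finding.get("in_background_path", False),  # tasks/jobs/cron/
        finding.get("has_safeguard", False),       # timeout/safeguard nearby
        finding.get("small_data", False),          # <100KB file, <100 items
    ])
    if context >= 3:
        finding["advisory"] = True
    elif context >= 1:
        idx = SEVERITY_ORDER.index(finding["severity"])
        finding["severity"] = SEVERITY_ORDER[max(idx - 1, 0)]
    return finding
```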
Output Format
## Persistence & Performance Audit Report - [DATE]
### Executive Summary
[2-3 sentences on overall persistence/performance health]
### Compliance Score
| Category | Score | Notes |
|----------|-------|-------|
| Query Efficiency | X/10 | ... |
| Transaction Correctness | X/10 | ... |
| Runtime Performance | X/10 | ... |
| Resource Lifecycle | X/10 | ... |
| **Overall** | **X/10** | |
### Severity Summary
| Severity | Count |
|----------|-------|
| Critical | X |
| High | X |
| Medium | X |
| Low | X |
### Findings by Category
#### 1. Query Efficiency
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| HIGH | job_processor.py:434 | Redundant entity fetch | Pass object not ID | S |
#### 2. Transaction Correctness
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| CRITICAL | job_processor.py:412 | Missing intermediate commits | Add commit at milestones | S |
#### 3. Runtime Performance
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| HIGH | job_processor.py:444 | Blocking read_bytes() in async | Use aiofiles/to_thread | S |
#### 4. Resource Lifecycle
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| CRITICAL | sse_stream.py:112 | DbSession held for entire SSE stream | Scope session to auth check only | M |
### Recommended Actions (Priority-Sorted)
| Priority | Category | Location | Issue | Recommendation | Effort |
|----------|----------|----------|-------|----------------|--------|
| CRITICAL | Transaction | ... | Missing commits | Add strategic commits | S |
| HIGH | Query | ... | Redundant fetch | Pass object not ID | S |
### Sources Consulted
- SQLAlchemy best practices: [URL]
- PostgreSQL NOTIFY docs: [URL]
- Python asyncio-dev: [URL]
Phase 6: Write Report
Write consolidated report to docs/project/persistence_audit.md with the Output Format above.
Critical Rules
- Single context gathering: Research best practices ONCE, pass contextStore to all workers
- Parallel execution: All 4 workers run in PARALLEL
- Trigger discovery: Scan migrations for triggers/NOTIFY before delegating (pass to ln-652)
- Metadata-only loading: Coordinator loads metadata; workers load full file contents
- Do not audit: Coordinator orchestrates only; audit logic lives in workers
Definition of Done
- Tech stack discovered (DB type, ORM, async framework)
- DB-specific metadata extracted (triggers, session config, pool settings)
- Best practices researched via MCP tools
- contextStore built with output_dir = docs/project/.audit/ln-650/{YYYY-MM-DD}
- Output directory created (no deletion of previous runs)
- All 4 workers invoked in PARALLEL and completed; each wrote report to {output_dir}/
- Results aggregated from return values (scores) + file reads (findings tables)
- Compliance score calculated per category + overall
- Executive Summary included
- Report written to docs/project/persistence_audit.md
- Sources consulted listed with URLs
Workers
- ln-651-query-efficiency-auditor
- ln-652-transaction-correctness-auditor
- ln-653-runtime-performance-auditor
- ln-654-resource-lifecycle-auditor
Reference Files
- Tech stack: docs/project/tech_stack.md
- Kanban board: docs/tasks/kanban_board.md
Version: 1.0.0 Last Updated: 2026-02-04
Source
https://github.com/levnikolaevich/claude-code-skills/blob/master/ln-650-persistence-performance-auditor/SKILL.md
Overview
Coordinates four specialized audit workers (ln-651 to ln-654) to assess database efficiency, transaction correctness, runtime performance, and resource lifecycle. Performs a one-time MCP-based research pass on DB/ORM/async best practices, shares a single contextStore across all workers, and consolidates results into one report written to docs/project/persistence_audit.md.
How This Skill Works
Follows a seven-step workflow: discovery, research, build contextStore, prepare output, delegate to four workers in parallel, aggregate results, and write the final report. Discovery loads tech_stack.md and manifests to detect DB/ORM/async components; research runs ONCE via MCP tools; the contextStore includes best practices and DB metadata; output is prepared; four auditors run in parallel; results are aggregated and saved to docs/project/persistence_audit.md.
When to Use It
- When auditing a new project’s DB stack to validate efficiency, correctness, and lifecycle controls.
- When you need to validate query performance and transaction correctness across multiple services.
- When assessing runtime performance issues such as blocking IO in async code, sync sleeps, and allocation hotspots.
- When verifying resource lifecycle details such as connection pooling, session config, and engine settings.
- When generating a consolidated persistence audit document for the project wiki at docs/project/persistence_audit.md.
Quick Start
- Step 1: Load tech_stack.md, package manifests, and detect DB/ORM/async tech.
- Step 2: Run Phase 2 MCP-based best-practices research ONCE and build contextStore.
- Step 3: Delegate four auditors in parallel and write the final report to docs/project/persistence_audit.md.
Best Practices
- Identify DB/ORM/async tech from tech_stack.md and manifest files (requirements.txt, pyproject.toml, package.json, go.mod) early in Phase 1.
- Run the one-time MCP-based best-practices research for all detected technologies (SQLAlchemy, PostgreSQL, asyncio, FastAPI) in Phase 2.
- Build a contextStore containing best practices and DB metadata (expire_on_commit, triggers, pool_size) to guide all audits.
- Delegate work to four auditors (ln-651–ln-654) in parallel using the Task tool to ensure context isolation.
- Write a consolidated report to docs/project/persistence_audit.md and preserve history in an output_dir per date.
Example Use Cases
- Audit a FastAPI + SQLAlchemy + PostgreSQL stack to verify NOTIFY/LISTEN semantics and transaction handling.
- Assess asyncio-based services for blocking calls, recommending to_thread offloading and event loop best practices.
- Review SQLAlchemy session lifecycle and expire_on_commit settings to prevent stale state.
- Validate NOTIFY triggers and pg_notify usage within migrations and job tables.
- Consolidate all findings into docs/project/persistence_audit.md for team review.