ln-610-docs-auditor
Install: `npx machina-cli add skill levnikolaevich/claude-code-skills/ln-610-docs-auditor --openclaw`

Paths: File paths (`shared/`, `references/`, `../ln-*`) are relative to the skills repo root. If not found at CWD, locate this SKILL.md's directory and go up one level for the repo root.
Documentation Auditor (L2 Coordinator)
Coordinates 3 specialized audit workers to perform comprehensive documentation quality analysis.
Purpose & Scope
- Coordinates 3 audit workers running in parallel:
- ln-611 (documentation structure) — 1 invocation
- ln-612 (semantic content) — N invocations (per target document)
- ln-613 (code comments) — 1 invocation
- Detect project type + tech stack ONCE
- Pass shared context to all workers (token-efficient)
- Aggregate worker results into single consolidated report
- Write report to `docs/project/docs_audit.md` (file-based, no task creation)
- Manual invocation by user, or called by ln-100-documents-pipeline
Workflow
- Discovery: Detect project type, tech stack, scan .md files
- Context Build: Build contextStore with output_dir, project_root, tech_stack
- Prepare Output: Create output directory
- Delegate: Invoke 3 workers in parallel
- Aggregate: Collect worker results, calculate overall score
- Context Validation: Post-filter findings
- Write Report: Save to `docs/project/docs_audit.md`
Phase 1: Discovery
Load project metadata:
- `CLAUDE.md` — root of documentation hierarchy
- `docs/README.md` — documentation index
- Package manifests: `package.json`, `requirements.txt`, `go.mod`, `Cargo.toml`
- Existing docs in `docs/project/`
Extract:
- Programming language(s)
- Major frameworks/libraries
- List of `.md` files in the project (for ln-611 hierarchy check)
- Target documents for semantic audit (for ln-612)
Target documents for ln-612:
```
FOR doc IN [CLAUDE.md, docs/README.md, docs/documentation_standards.md,
            docs/principles.md, docs/project/*.md]:
    IF doc exists AND doc NOT IN [docs/tasks/*, docs/reference/*, docs/presentation/*]:
        semantic_targets.append(doc)
```
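A minimal Python sketch of this selection rule (the `semantic_targets` function name and the candidate ordering are illustrative, not part of the skill):

```python
from pathlib import Path

# Always-considered documents, per the loop above.
FIXED_TARGETS = ["CLAUDE.md", "docs/README.md",
                 "docs/documentation_standards.md", "docs/principles.md"]
# Trees excluded from the semantic audit.
EXCLUDED = ("docs/tasks/", "docs/reference/", "docs/presentation/")

def semantic_targets(root: str) -> list[str]:
    base = Path(root)
    candidates = FIXED_TARGETS + sorted(
        str(p.relative_to(base)) for p in base.glob("docs/project/*.md"))
    return [d for d in candidates
            if (base / d).is_file()
            and not any(d.startswith(x) for x in EXCLUDED)]
```

Missing fixed targets are silently skipped, matching the `IF doc exists` guard.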
Phase 2: Build contextStore
```json
{
  "tech_stack": {"language": "...", "frameworks": [...]},
  "project_root": "...",
  "output_dir": "docs/project/.audit/ln-610/{YYYY-MM-DD}"
}
```
Where `{YYYY-MM-DD}` is the current date (e.g., 2026-03-01).
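The contextStore can be assembled in a few lines; this Python sketch assumes a hypothetical `build_context_store` helper fed by the Phase 1 discovery results:

```python
from datetime import date

def build_context_store(project_root: str, language: str,
                        frameworks: list[str]) -> dict:
    """Shared context passed once to every worker (sketch)."""
    return {
        "tech_stack": {"language": language, "frameworks": frameworks},
        "project_root": project_root,
        # Dated folder keeps one audit per day and preserves history.
        "output_dir": f"docs/project/.audit/ln-610/{date.today().isoformat()}",
    }
```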
Phase 3: Prepare Output
```bash
mkdir -p {output_dir}
```
No deletion of previous date folders — history preserved for comparison.
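The no-deletion semantics map directly to an idempotent `mkdir -p`; in Python (hypothetical helper name):

```python
from pathlib import Path

def prepare_output(output_dir: str) -> Path:
    # Equivalent to `mkdir -p`: creates parents, leaves any
    # previous dated folders untouched (history preserved).
    path = Path(output_dir)
    path.mkdir(parents=True, exist_ok=True)
    return path
```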
Phase 4: Delegate to Workers
Invoke all workers in parallel via Skill tool:
| Worker | Invocations | Output |
|---|---|---|
| ln-611-docs-structure-auditor | 1 | {output_dir}/611-structure.md |
| ln-612-semantic-content-auditor | N (per target document) | {output_dir}/612-semantic-{doc-slug}.md |
| ln-613-code-comments-auditor | 1 | {output_dir}/613-code-comments.md |
Pass contextStore to each worker. For ln-612, additionally pass doc_path per invocation.
Worker return format: `Report written: ... | Score: X.X/10 | Issues: N (C:N H:N M:N L:N)`
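Parsing that return format is a small regex exercise; this sketch assumes workers emit the line exactly as shown (the `parse_worker_return` name is illustrative):

```python
import re

RETURN_RE = re.compile(
    r"Report written: (?P<path>\S+) \| Score: (?P<score>\d+(?:\.\d+)?)/10"
    r" \| Issues: (?P<total>\d+) \(C:(?P<c>\d+) H:(?P<h>\d+)"
    r" M:(?P<m>\d+) L:(?P<l>\d+)\)")

def parse_worker_return(line: str) -> dict:
    m = RETURN_RE.fullmatch(line.strip())
    if m is None:
        raise ValueError(f"unexpected worker return format: {line!r}")
    g = m.groupdict()
    return {"path": g["path"], "score": float(g["score"]),
            "issues": {k: int(g[k]) for k in ("total", "c", "h", "m", "l")}}

parse_worker_return(
    "Report written: out/611-structure.md | Score: 8.5/10 | Issues: 3 (C:0 H:1 M:2 L:0)")
```

Failing loudly on an unexpected format keeps aggregation errors from silently skewing the overall score.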
Phase 5: Aggregate Results
- Parse scores from worker return values
- Read worker reports from `{output_dir}/` for detailed findings
- Calculate category scores:
| Category | Source | Weight |
|---|---|---|
| Documentation Structure | ln-611 | 35% |
| Semantic Content | ln-612 (avg across docs) | 40% |
| Code Comments | ln-613 | 25% |
- Calculate overall score: weighted average of 3 categories
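A sketch of the weighted aggregation using the table's weights (helper name illustrative; rounding to one decimal is an assumption based on the X.X/10 report format):

```python
WEIGHTS = {"structure": 0.35, "semantic": 0.40, "comments": 0.25}

def overall_score(structure: float, semantic_scores: list[float],
                  comments: float) -> float:
    # ln-612 runs once per document, so its scores are averaged first.
    semantic = sum(semantic_scores) / len(semantic_scores)
    total = (WEIGHTS["structure"] * structure
             + WEIGHTS["semantic"] * semantic
             + WEIGHTS["comments"] * comments)
    return round(total, 1)

overall_score(8.0, [7.0, 9.0], 6.0)  # 0.35*8.0 + 0.40*8.0 + 0.25*6.0 = 7.5
```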
Phase 6: Context Validation (Post-Filter)
MANDATORY READ: Load shared/references/context_validation.md
Apply Rule 1 + documentation-specific inline filters:
```
FOR EACH finding WHERE severity IN (HIGH, MEDIUM):
    # Rule 1: ADR/Planned Override
    IF finding matches ADR → advisory "[Planned: ADR-XXX]"

    # Doc-specific: Compression context (from ln-611)
    IF Structure finding Cat 3 (Compression):
        - Skip if path in references/ or templates/ (reference docs = naturally large)
        - Skip if filename contains architecture/design/api_spec
        - Skip if tables + lists > 50% of content (already structured)

    # Doc-specific: Actuality severity calibration (from ln-611)
    IF Structure finding Cat 5 (Actuality):
        - Path/function COMPLETELY missing → CRITICAL
        - Path exists but deprecated/renamed → HIGH
        - Example code outdated but concept valid → MEDIUM

    # Comment-specific: Per-category density targets (from ln-613)
    IF Comment finding Cat 2 (Density):
        - test/ or tests/ → target density 2-10%
        - infra/, config/, or ci/ → target density 5-15%
        - business/, domain/, or services/ → target density 15-25%
        Recalculate with the per-category target.

    # Comment-specific: Complexity context for WHY-not-WHAT (from ln-613)
    IF Comment finding Cat 1 (WHY not WHAT):
        - If file McCabe complexity > 15 → WHAT comments acceptable
        - If file in domain/ or business/ → explanatory comments OK
```
Downgraded findings → "Advisory Findings" section in report.
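The per-category density targets can be sketched as a simple lookup (hypothetical helper; the substring path matching and the 10-20% default for unmatched paths are assumptions, not stated in the rules above):

```python
def density_target(path: str) -> tuple[float, float]:
    """(min, max) comment-density target for a file path (sketch)."""
    # Substring matching is a simplification of the real path rules.
    if "/test" in path or path.startswith(("test/", "tests/")):
        return (0.02, 0.10)
    if any(seg in path for seg in ("infra/", "config/", "ci/")):
        return (0.05, 0.15)
    if any(seg in path for seg in ("business/", "domain/", "services/")):
        return (0.15, 0.25)
    return (0.10, 0.20)  # assumed default; not specified in the rules above
```

A density finding would then be recalculated against this target before deciding whether it survives the post-filter.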
Phase 7: Write Report
Write consolidated report to docs/project/docs_audit.md:
```markdown
## Documentation Audit Report - {DATE}

### Overall Score: X.X/10

| Category | Score | Worker | Issues |
|----------|-------|--------|--------|
| Documentation Structure | X/10 | ln-611 | N issues |
| Semantic Content | X/10 | ln-612 | N issues (across M docs) |
| Code Comments | X/10 | ln-613 | N issues |

### Critical Findings

- [ ] **[Category]** `path/file:line` - Issue. **Action:** Fix suggestion.

### Advisory Findings

(Context-validated findings downgraded from MEDIUM/HIGH)

### Recommended Actions

| Priority | Action | Location | Category |
|----------|--------|----------|----------|
| High | ... | ... | ... |
```
Scoring Algorithm
MANDATORY READ: Load shared/references/audit_scoring.md for unified scoring formula.
Critical Notes
- Pure coordinator: Does NOT perform any audit checks directly. ALL auditing delegated to workers.
- Fix content, not rules: NEVER modify standards/rules files to make violations pass
- Verify facts against code: Workers actively check every path, function name, API, config
- Compress always: Size limits are upper bounds, not targets
- No code in docs: Documents describe algorithms in tables or ASCII diagrams
- Code is truth: When docs contradict code, always update docs
- Delete, don't archive: Legacy content removed, not archived
Definition of Done
- Project metadata discovered (tech stack, doc list)
- contextStore built with output_dir = `docs/project/.audit/ln-610/{YYYY-MM-DD}`
- Output directory created (no deletion of previous runs)
- All 3 workers invoked and completed
- Worker reports aggregated: 3 category scores + overall
- Context Validation applied to all findings
- Consolidated report written to `docs/project/docs_audit.md`
Reference Files
- Context validation rules: `shared/references/context_validation.md`
- Audit scoring formula: `shared/references/audit_scoring.md`
- Worker report template: `shared/templates/audit_worker_report_template.md`
- Task delegation pattern: `shared/references/task_delegation_pattern.md`
Version: 5.0.0 Last Updated: 2026-03-01
Source
https://github.com/levnikolaevich/claude-code-skills/blob/master/ln-610-docs-auditor/SKILL.md
Overview
ln-610-docs-auditor acts as the central coordinator for three specialized documentation audits: structure (ln-611), semantic content (ln-612), and code comments (ln-613). It automatically detects the project type and tech stack, delegates work in parallel, and consolidates findings into a single report at docs/project/docs_audit.md. This ensures a consistent, auditable view of documentation quality across a project.
How This Skill Works
On invocation, it loads key metadata (CLAUDE.md, docs/README.md, and relevant manifests), builds a shared contextStore, and prepares the output directory. It then delegates to the three workers in parallel, passing the contextStore and, for semantic audits, per-doc doc_path. After all reports are produced, it aggregates scores and findings and writes the final consolidated report to docs/project/docs_audit.md.
When to Use It
- When starting a new documentation overhaul and you need consistent checks across structure, semantics, and code comments.
- When auditing a multi-file project to produce a single, centralized documentation quality report.
- When you want historical comparison by date, storing archives under docs/project/.audit/ln-610/{YYYY-MM-DD}.
- When compliance requires documented quality metrics and a computable overall score.
- When you need an on-demand audit triggered by a user or as part of a docs pipeline.
Quick Start
- Step 1: Detect project type and load key metadata (CLAUDE.md, docs/README.md, manifests) and build contextStore.
- Step 2: Invoke ln-611-docs-structure-auditor, ln-612-semantic-content-auditor, and ln-613-code-comments-auditor in parallel, passing shared context.
- Step 3: Aggregate worker reports, compute scores, and write the consolidated report to docs/project/docs_audit.md.
Best Practices
- Ensure project type and tech stack are reliably detected before delegating to workers.
- Pass a minimal, token-efficient contextStore to all workers to minimize overhead.
- Verify the output path docs/project/docs_audit.md exists or is created before writing.
- Run Phase 4 in parallel and review Phase 5 aggregated scores for consistency.
- Consult shared/references/context_validation.md and apply mandatory post-filter rules to findings.
Example Use Cases
- Audit a CLAUDE-based project with docs/ and manifests to produce docs/project/docs_audit.md on a scheduled run.
- Perform an overhaul of documentation in a mono-repo, aggregating results across 3 auditors.
- Generate a compliance-ready docs audit for a release, with a consolidated score and findings.
- Trigger an on-demand audit via ln-100-documents-pipeline integration before stakeholder review.
- Compare current audit results against the previous date by inspecting docs/project/.audit/ln-610/{YYYY-MM-DD} folders.