
documentation-audit

npx machina-cli add skill aiskillstore/marketplace/documentation-audit --openclaw
<!-- ABOUTME: Documentation audit skill for verifying claims against codebase -->
<!-- ABOUTME: Uses two-pass extraction with pattern expansion for comprehensive detection -->

Documentation Audit

Systematically verify claims in documentation against the actual codebase using a two-pass approach.

Overview

Core principle: Low recall is worse than false positives—missed claims stay invisible.

Two-pass process:

  1. Pass 1: Extract and verify claims directly from docs
  2. Pass 2A: Expand patterns from false claims to find similar issues
  3. Pass 2B: Compare codebase inventory vs documented items (gap detection)
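The two passes above can be sketched as a minimal pipeline. This is illustrative only; `Claim`, `audit`, and the `verify` callback are hypothetical names for this sketch, not part of any shipped tooling:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    doc: str   # source document the claim came from
    line: int  # line number in that document
    kind: str  # claim type, e.g. "file_ref", "env_var"
    text: str  # the literal claim text

def audit(claims, verify):
    """Pass 1: verify each claim; Pass 2A: seed pattern searches from the failures."""
    false_claims = [c for c in claims if not verify(c)]
    # Each failing claim type becomes a pattern to expand across all docs
    patterns = {c.kind for c in false_claims}
    return false_claims, patterns
```

Pass 2B would then diff the codebase inventory (scripts, env vars, services) against the set of documented items.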

Quick Start

  1. Identify target docs (user-facing only, skip plans/, audits/)
  2. Note current git commit for report header
  3. Run Pass 1 extraction using parallel agents (one per doc)
  4. Analyze false claims for patterns
  5. Run Pass 2 expansion searches
  6. Generate docs/audits/AUDIT_REPORT_YYYY-MM-DD.md
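Step 2 can be scripted with a small helper (a sketch; it falls back to "unknown" when run outside a git repository):

```python
import subprocess

def current_commit() -> str:
    """Return the short git commit hash for the audit report header."""
    try:
        out = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
```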

Claim Types

| Type | Example | Verification |
|------|---------|--------------|
| file_ref | `scripts/foo.py` | File exists? |
| config_default | "defaults to 'AI Radio'" | Check schema/code |
| env_var | `STATION_NAME` | In .env.example + code? |
| cli_command | `--normalize` flag | Script supports it? |
| behavior | "runs every 2 minutes" | Check timers/code |

Verification confidence:

  • Tier 1 (auto): file_ref, config_default, env_var, cli_command
  • Tier 2 (semi-auto): symbol_ref, version_req
  • Tier 3 (human review): behavior, constraint
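Tier 1 checks are mechanical enough to script. A minimal sketch of two of them (function names and the default paths are assumptions for illustration):

```python
import os

def verify_file_ref(path: str, repo_root: str = ".") -> bool:
    """file_ref: the referenced file must exist in the repository."""
    return os.path.isfile(os.path.join(repo_root, path))

def verify_env_var(name: str, env_example: str = ".env.example") -> bool:
    """env_var: the variable must be declared in the env example file."""
    try:
        with open(env_example) as f:
            return any(line.split("=")[0].strip() == name for line in f)
    except FileNotFoundError:
        return False
```

A full implementation would also cross-check env vars against code, per the table above.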

Pass 2 Pattern Expansion

After Pass 1, analyze false claims and search for similar patterns:

Dead script found: diagnose_track_selection.py
  → Search: all script references → Found 8 more dead scripts

Wrong interval: "every 10 seconds"
  → Search: "every \d+ (seconds?|minutes?)" → Found 3 more

Wrong service name: ai-radio-break-gen.service
  → Search: service/timer names → Found naming inconsistencies

Common patterns to always check:

  • Dead scripts: scripts/*.py references
  • Timer intervals: every \d+ (seconds?|minutes?)
  • Service names: ai-radio-*.service, *.timer
  • Config vars: RADIO_* environment variables
  • CLI flags: --flag patterns in bash blocks
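The timer-interval pattern above can be applied with a plain regex scan over each document (a sketch; the regex is taken directly from the list):

```python
import re

INTERVAL = re.compile(r"every \d+ (?:seconds?|minutes?)")

def find_intervals(doc_text: str):
    """Return (line_number, match) pairs for every timer-interval claim."""
    hits = []
    for lineno, line in enumerate(doc_text.splitlines(), start=1):
        for m in INTERVAL.finditer(line):
            hits.append((lineno, m.group(0)))
    return hits

find_intervals("The poller runs every 10 seconds.\nCleanup runs every 5 minutes.")
# → [(1, "every 10 seconds"), (2, "every 5 minutes")]
```

Each hit then gets re-verified against the actual timer or cron configuration.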

Output Format

Generate docs/audits/AUDIT_REPORT_YYYY-MM-DD.md:

# Documentation Audit Report
Generated: YYYY-MM-DD | Commit: abc123

## Executive Summary
| Metric | Count |
|--------|-------|
| Documents scanned | 12 |
| Claims verified | ~180 |
| Verified TRUE | ~145 (81%) |
| **Verified FALSE** | **31 (17%)** |

## False Claims Requiring Fixes
### CONFIGURATION.md
| Line | Claim | Reality | Fix |
|------|-------|---------|-----|
| 135 | `claude-sonnet-4-5` | Actual: `claude-3-5-sonnet-latest` | Update |

## Pattern Summary
| Pattern | Count | Root Cause |
|---------|-------|------------|
| Dead scripts | 9 | Scripts deleted, docs not updated |

## Human Review Queue
- [ ] Line 436: behavior claim needs verification
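The dated report path convention above can be produced like so (a sketch; the helper name is hypothetical):

```python
from datetime import date

def report_path(day=None) -> str:
    """Build the dated audit report path, e.g. docs/audits/AUDIT_REPORT_2025-01-15.md."""
    day = day or date.today()
    return f"docs/audits/AUDIT_REPORT_{day.isoformat()}.md"
```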

Detailed References

  • Execution checklist and anti-patterns: checklist.md
  • Claim extraction patterns: extraction-patterns.md

Source

https://github.com/aiskillstore/marketplace/blob/main/skills/2389-research/documentation-audit/SKILL.md

Overview

Systematically verify documentation claims against the actual codebase using a two-pass approach. This method reduces hidden issues by catching drift between docs and reality, especially before releases or after refactors.

How This Skill Works

The process uses two passes: Pass 1 extracts and verifies claims directly from user-facing docs; Pass 2A expands patterns from any false claims to surface related issues, and Pass 2B compares the codebase inventory against documented items to detect gaps. The workflow culminates in a generated audit report that highlights fixes and discrepancies.

When to Use It

  • Before releasing software to ensure docs reflect the codebase
  • After substantial refactors to catch documentation drift
  • When you suspect discrepancies between claims in docs and actual implementation
  • During audits of user-facing documentation for accuracy
  • To surface gaps between inventory and what is documented

Quick Start

  1. Identify target user-facing docs (skip plans/ and audits/).
  2. Note the current git commit to include in the audit header.
  3. Run Pass 1 extraction per doc in parallel, then perform Pass 2 pattern expansion and gap detection.

Best Practices

  • Target only user-facing docs and skip internal paths like plans/ and audits/
  • Run Pass 1 extractions in parallel for each doc to speed up audits
  • Capture and include the current git commit in the audit header
  • Apply Pass 2A pattern expansion to reveal related issues and gaps
  • Document findings with actionable fixes and generate reports to docs/audits/

Example Use Cases

  • Dead script found: diagnose_track_selection.py → uncovered 8 more dead scripts via Pass 2 search
  • Wrong interval: 'every 10 seconds' → identified 3 more instances to fix in docs
  • Wrong service name: ai-radio-break-gen.service → discovered naming inconsistencies
  • Config variable mismatch: a documented default did not match the actual RADIO_* environment variables
  • CLI flag omission: docs claimed a --normalize flag that the script did not support
