consistency-analysis

npx machina-cli add skill datamaker-kr/synapse-claude-marketplace/consistency-analysis --openclaw

Consistency Analysis Skill

Purpose

This skill provides expertise in detecting inconsistencies, gaps, and conflicts across specification documents. It performs read-only cross-document analysis to ensure that requirements, tasks, data models, and plans are aligned and complete. The output is a structured analysis report with categorized findings and severity levels.

IMPORTANT: This skill NEVER modifies files. It is strictly a read-only analysis tool. All findings are reported for human review and resolution.

When It Activates

The skill is triggered when the conversation involves:

  • Reviewing specification quality or completeness
  • Checking cross-document consistency (spec vs. tasks vs. plan)
  • Detecting gaps in requirement coverage or task traceability
  • Identifying conflicting definitions across documents
  • Performing traceability analysis between artifacts

Analysis Rule Categories

The skill evaluates documents against eight rule categories:

1. Requirement Coverage

Verify that every functional requirement (FR-XXX) and non-functional requirement (NFR-XXX) in the specification is addressed by at least one implementation task or plan item. Flag orphaned requirements with no downstream mapping.
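
As an illustration, this check can be sketched as a simple identifier diff. The function name, file contents, and the exact FR/NFR ID format below are assumptions for the sketch, not the skill's actual implementation:

```python
import re

def find_uncovered_requirements(spec_text: str, tasks_text: str) -> list[str]:
    """Return requirement IDs (FR-XXX / NFR-XXX) that appear in the spec
    but are never referenced by any task."""
    req_pattern = re.compile(r"\bN?FR-\d{3}\b")
    spec_ids = set(req_pattern.findall(spec_text))
    task_ids = set(req_pattern.findall(tasks_text))
    return sorted(spec_ids - task_ids)

spec = "FR-011 login flow. FR-012 payment retry logic. NFR-001 latency budget."
tasks = "T001 [Spec FR-011] Implement login. T002 [Spec NFR-001] Load test."
print(find_uncovered_requirements(spec, tasks))  # → ['FR-012']
```

Any ID returned here would surface as a Requirement Coverage finding (e.g., RC-001) in the report.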

2. Task Traceability

Ensure every task in tasks.md traces back to a specific requirement or user story in spec.md. Flag tasks with missing [Spec §X.Y] references or invalid reference targets.
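
A minimal sketch of this check, assuming tasks carry IDs like T008 and inline [Spec FR-XXX] references (the reference syntax and helper name are illustrative):

```python
import re

def find_broken_task_references(spec_text: str, tasks_text: str) -> list[tuple[str, str]]:
    """Return (task_id, reference) pairs whose [Spec ...] reference
    does not resolve to a requirement defined in the spec."""
    defined = set(re.findall(r"\bN?FR-\d{3}\b", spec_text))
    findings = []
    for task_id, ref in re.findall(r"\b(T\d{3})\b.*?\[Spec\s+([^\]]+)\]", tasks_text):
        if ref not in defined:
            findings.append((task_id, ref))
    return findings

spec = "FR-001 export to CSV."
tasks = "T007 [Spec FR-001] Build exporter.\nT008 [Spec FR-099] Stream results."
print(find_broken_task_references(spec, tasks))  # → [('T008', 'FR-099')]
```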

3. Plan Alignment

Check that the implementation plan is consistent with the task list and specification. Detect mismatches in scope, ordering assumptions, or phase assignments between documents.

4. Data Model Consistency

Verify that entity definitions, field names, types, and relationships are consistent across the specification, data model section, and API contracts. Flag naming mismatches, type conflicts, or missing fields.
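
One way to picture this check is a field-by-field comparison of a single entity's schema as defined in two documents. The dict-based model below is a simplification for illustration, not the skill's internal representation:

```python
def find_field_conflicts(spec_model: dict[str, str], contract_model: dict[str, str]) -> list[str]:
    """Compare one entity's field -> type map across two documents and
    report missing fields and type conflicts."""
    findings = []
    for field, spec_type in spec_model.items():
        if field not in contract_model:
            findings.append(f"missing in contract: {field}")
        elif contract_model[field] != spec_type:
            findings.append(
                f"type conflict on {field}: {spec_type} vs {contract_model[field]}"
            )
    return findings

# Hypothetical Payment entity as defined in the spec vs. the API contract.
spec_payment = {"amount": "decimal", "currency": "string", "retry_count": "integer"}
api_payment = {"amount": "float", "currency": "string"}
print(find_field_conflicts(spec_payment, api_payment))
```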

5. Contract Coverage

Ensure every API endpoint defined in the specification has corresponding tasks for implementation and testing. Flag endpoints missing from the task list or with incomplete request/response schema definitions.
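
A rough sketch of the endpoint side of this check, assuming endpoints appear as "METHOD /path" strings in the spec (the matching rules are deliberately simplified; real contracts would be parsed from their schema files):

```python
import re

def find_untasked_endpoints(spec_text: str, tasks_text: str) -> list[str]:
    """Return endpoints defined in the spec that no task mentions."""
    endpoints = set(re.findall(r"\b(?:GET|POST|PUT|PATCH|DELETE)\s+/\S+", spec_text))
    return sorted(ep for ep in endpoints if ep not in tasks_text)

spec = "POST /payments creates a payment. GET /payments/{id} fetches one."
tasks = "T010 Implement POST /payments handler and tests."
print(find_untasked_endpoints(spec, tasks))  # → ['GET /payments/{id}']
```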

6. Constitution Compliance

If a project constitution exists, verify that the specification and tasks comply with its architectural principles, technology constraints, and coding conventions.

7. Duplication Detection

Identify duplicate or near-duplicate requirements, tasks, or definitions across documents. Flag redundancies that may lead to conflicting implementations or wasted effort.
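
Near-duplicate detection can be approximated with plain text similarity. A sketch using Python's standard-library SequenceMatcher, where the threshold and sample requirements are illustrative:

```python
from difflib import SequenceMatcher
from itertools import combinations

def find_near_duplicates(
    requirements: dict[str, str], threshold: float = 0.85
) -> list[tuple[str, str, float]]:
    """Return requirement ID pairs whose texts exceed the similarity threshold."""
    findings = []
    for (id_a, text_a), (id_b, text_b) in combinations(requirements.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            findings.append((id_a, id_b, round(ratio, 2)))
    return findings

reqs = {
    "FR-004": "The system shall export reports as CSV files.",
    "FR-017": "The system shall export reports as CSV files on demand.",
    "FR-020": "Users can reset their password via email.",
}
print(find_near_duplicates(reqs))  # → [('FR-004', 'FR-017', 0.9)]
```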

8. Ambiguity Detection

Scan for vague, unmeasurable, or subjective language in requirements (e.g., "fast," "user-friendly," "scalable"). Flag ambiguous terms that need quantification or clarification.
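
This check reduces to scanning requirement text against a glossary of vague terms. A minimal sketch, where the term list is a small illustrative sample rather than the skill's full glossary:

```python
import re

VAGUE_TERMS = {"fast", "user-friendly", "scalable", "easy", "robust", "efficient"}

def find_ambiguous_requirements(requirements: dict[str, str]) -> dict[str, list[str]]:
    """Map each requirement ID to the vague terms found in its text."""
    findings = {}
    for req_id, text in requirements.items():
        words = set(re.findall(r"[a-z\-]+", text.lower()))
        hits = sorted(VAGUE_TERMS & words)
        if hits:
            findings[req_id] = hits
    return findings

reqs = {
    "FR-002": "Search must be fast and scalable.",
    "NFR-003": "P95 search latency under 200 ms at 1,000 concurrent users.",
}
print(find_ambiguous_requirements(reqs))  # → {'FR-002': ['fast', 'scalable']}
```

Note how NFR-003 passes: it expresses the same intent with measurable targets.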

Severity Classification

Each finding is assigned a severity level:

| Severity | Meaning | Action Required |
| --- | --- | --- |
| CRITICAL | Blocking issue that prevents correct implementation | Must resolve before proceeding |
| HIGH | Significant gap or conflict likely to cause defects | Should resolve before implementation |
| MEDIUM | Inconsistency that may cause confusion or rework | Resolve during implementation |
| LOW | Minor style or convention issue | Resolve at convenience |

Report Output Format

The analysis report is structured as follows:

# Consistency Analysis Report

## Summary
- Total findings: N
- Critical: N | High: N | Medium: N | Low: N

## Findings

### [CRITICAL] RC-001: FR-012 has no implementing task
- **Category**: Requirement Coverage
- **Location**: spec.md FR-012
- **Details**: Payment retry logic requirement has no corresponding task in tasks.md.
- **Recommendation**: Add a task in the Stories phase covering FR-012.

### [HIGH] TT-001: Task T008 references non-existent FR-099
- **Category**: Task Traceability
- **Location**: tasks.md T008
- **Details**: The [Spec FR-099] reference does not match any requirement.
- **Recommendation**: Correct the reference or add the missing requirement.

Each finding includes an ID, category, location, detailed description, and a recommended resolution.
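
The summary block at the top of the report is just a tally over the findings. A sketch of how it could be produced, assuming a simple list-of-dicts findings structure (illustrative, not the skill's actual data model):

```python
from collections import Counter

def summarize(findings: list[dict]) -> str:
    """Render the report's summary lines from a list of findings."""
    counts = Counter(f["severity"] for f in findings)
    return (
        f"Total findings: {len(findings)}\n"
        f"Critical: {counts['CRITICAL']} | High: {counts['HIGH']} | "
        f"Medium: {counts['MEDIUM']} | Low: {counts['LOW']}"
    )

findings = [
    {"id": "RC-001", "severity": "CRITICAL"},
    {"id": "TT-001", "severity": "HIGH"},
    {"id": "DM-001", "severity": "HIGH"},
]
print(summarize(findings))
```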

References

For detailed analysis rules, severity definitions, and configuration options, consult:

  • references/analysis-rules.md -- Full rule definitions with detection logic for each category
  • references/severity-levels.md -- Severity classification criteria and escalation guidelines

Source

git clone https://github.com/datamaker-kr/synapse-claude-marketplace

The skill file is at plugins/speckit-helper/skills/consistency-analysis/SKILL.md in the repository.

How This Skill Works

The tool reads multiple artifacts (e.g., spec.md, tasks.md, the data model, and the plan) and evaluates them against eight rule categories: Requirement Coverage, Task Traceability, Plan Alignment, Data Model Consistency, Contract Coverage, Constitution Compliance, Duplication Detection, and Ambiguity Detection. It assigns a severity to each finding and outputs a read-only Consistency Analysis Report for human review.

Quick Start

  1. Gather spec.md, tasks.md, plan, data model, and API contract documents
  2. Run Consistency Analysis to generate the findings report
  3. Review high-severity findings, assign owners, and update documentation accordingly

Best Practices

  • Run the analysis after drafting specs and tasks to catch gaps early
  • Ensure every FR/NFR is mapped to at least one task or plan item
  • Require valid cross-references (e.g., [Spec §X.Y]) in all tasks
  • Keep data models and API contracts synchronized across documents
  • Prioritize and resolve high-severity findings (CRITICAL/HIGH) before implementation

Example Use Cases

  • [CRITICAL] FR-012 has no implementing task in the task list
  • Data model name mismatches between spec.md and the data model section
  • API endpoint exists in spec but lacks corresponding task or tests
  • Ambiguity like 'fast' or 'scalable' without measurable criteria in FRs
  • Duplicate requirements across sections leading to conflicting implementations
