quality-gate

npx machina-cli add skill datamaker-kr/synapse-claude-marketplace/quality-gate --openclaw

Quality Gate Agent

Agent Type

This is an orchestrator agent that coordinates consistency analysis and checklist generation skills to provide a comprehensive quality assessment of specification documents. Unlike worker skills that perform specific tasks, this agent manages the quality validation workflow and produces a combined quality report.

Coordinated Skills

  • consistency-analysis: Cross-document gap detection, conflict identification, severity classification
  • checklist-generation: Domain-specific quality checklists, requirement testing

Purpose

This agent serves as a quality checkpoint in the specification-driven development workflow. It runs before implementation to ensure specifications are complete, consistent, and well-structured. It produces a combined quality report with a pass/fail recommendation and specific fix suggestions.

Architecture

Orchestrator Pattern

quality-gate (Orchestrator)
├── Phase 1: Consistency Analysis
│   └── Invokes consistency-analysis → Detect issues across documents
├── Phase 2: Checklist Generation
│   └── Invokes checklist-generation → Generate domain checklists
├── Phase 3: Combined Report
│   └── Merge analysis findings + checklist coverage into unified report
└── Phase 4: Fix Suggestions
    └── Provide actionable recommendations for each issue found
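
For orientation, here is a minimal TypeScript sketch of the four-phase flow. The names and result shapes (qualityGate, Finding, ChecklistSummary) are illustrative assumptions rather than the skill's actual interfaces; the two coordinated skills are modeled as async functions injected by the caller.

```typescript
// Minimal sketch of the orchestrator pattern; names and shapes are illustrative.
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

interface Finding {
  id: string;        // e.g. "RC-001"
  severity: Severity;
  message: string;
  fix: string;       // Phase 4: one actionable recommendation per finding
}

interface ChecklistSummary {
  domain: string;    // e.g. "security"
  questions: number;
  file: string;      // e.g. ".speckit/<slug>/checklists/security.md"
}

interface QualityReport {
  findings: Finding[];
  checklists: ChecklistSummary[];
  status: "PASS" | "WARN" | "FAIL";
}

async function qualityGate(
  featureSlug: string,
  runConsistencyAnalysis: (slug: string) => Promise<Finding[]>,      // Phase 1
  generateChecklists: (slug: string) => Promise<ChecklistSummary[]>, // Phase 2
): Promise<QualityReport> {
  const findings = await runConsistencyAnalysis(featureSlug);
  const checklists = await generateChecklists(featureSlug);

  // Phase 3: merge into a combined report with an overall status
  // (PASS = no CRITICAL/HIGH, WARN = HIGH only, FAIL = any CRITICAL).
  const critical = findings.filter((f) => f.severity === "CRITICAL").length;
  const high = findings.filter((f) => f.severity === "HIGH").length;
  const status = critical > 0 ? "FAIL" : high > 0 ? "WARN" : "PASS";

  return { findings, checklists, status };
}
```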

When to Activate

This agent activates when:

  • User asks to validate or check specification quality
  • User wants to verify readiness before implementation
  • User explicitly invokes the quality-gate agent
  • User says "check my spec" or "is my spec ready?"
  • After major spec changes (refine, clarify) when re-validation is needed

Orchestration Workflow

Step 1: Resolve Feature

Determine the target feature:

  1. If a feature-slug is provided, use it.
  2. If not provided, use Glob to list .speckit/*/spec.md directories.
    • If exactly one exists, use it automatically.
    • If multiple exist, list them and ask the user to choose.
    • If none exist, stop: "No specifications found. Run /speckit-helper:specify first."
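
A minimal sketch of this resolution logic, assuming the .speckit/&lt;slug&gt;/spec.md layout described above and Node's built-in fs/path modules; the function name and return shape are illustrative, not the skill's API.

```typescript
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

type Resolution =
  | { kind: "resolved"; slug: string }
  | { kind: "ambiguous"; slugs: string[] } // list candidates and ask the user
  | { kind: "none" };                      // stop: suggest /speckit-helper:specify

function resolveFeature(providedSlug?: string, root = ".speckit"): Resolution {
  // 1. An explicit feature-slug always wins.
  if (providedSlug) return { kind: "resolved", slug: providedSlug };

  if (!existsSync(root)) return { kind: "none" };

  // 2. Otherwise list every .speckit/*/spec.md candidate.
  const slugs = readdirSync(root, { withFileTypes: true })
    .filter((e) => e.isDirectory() && existsSync(join(root, e.name, "spec.md")))
    .map((e) => e.name);

  if (slugs.length === 1) return { kind: "resolved", slug: slugs[0] };
  if (slugs.length > 1) return { kind: "ambiguous", slugs };
  return { kind: "none" };
}
```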

Step 2: Verify Documents Exist

Read the feature directory and catalog available documents:

Required:
  spec.md          — Must exist (abort if missing)

Optional (enhance analysis depth):
  plan.md          — Enables plan alignment checks
  tasks.md         — Enables requirement coverage and task traceability checks
  data-model.md    — Enables entity consistency checks
  contracts/*.md   — Enables API contract coverage checks
  constitution.md  — Enables constitution compliance checks (read from .speckit/)

Print a document inventory:

Documents found for <feature-slug>:
  [OK] spec.md
  [OK] plan.md
  [OK] tasks.md
  [OK] data-model.md
  [--] contracts/ (none)
  [OK] constitution.md (project-level)

Analysis scope: 5 documents (API contract checks will be skipped — no contracts found)
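
A sketch of how the inventory step could be implemented under the same layout assumptions; the Inventory shape is illustrative. It mirrors the required/optional split above: constitution.md is read from the project-level .speckit/ directory, and contracts/*.md files are counted rather than listed.

```typescript
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

interface Inventory {
  specExists: boolean;               // required: abort if false
  optional: Record<string, boolean>; // plan.md, tasks.md, data-model.md, constitution.md
  contractCount: number;             // number of contracts/*.md files
}

function inventoryDocuments(slug: string, root = ".speckit"): Inventory {
  const dir = join(root, slug);
  const optionalDocs = ["plan.md", "tasks.md", "data-model.md"];

  const contractsDir = join(dir, "contracts");
  const contractCount = existsSync(contractsDir)
    ? readdirSync(contractsDir).filter((f) => f.endsWith(".md")).length
    : 0;

  return {
    specExists: existsSync(join(dir, "spec.md")),
    optional: {
      ...Object.fromEntries(
        optionalDocs.map((d) => [d, existsSync(join(dir, d))] as const),
      ),
      // constitution.md lives at the project level, under .speckit/ itself
      "constitution.md": existsSync(join(root, "constitution.md")),
    },
    contractCount,
  };
}
```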

Step 3: Run Consistency Analysis

Invoke the consistency-analysis skill to perform all applicable checks:

  1. Requirement Coverage — Every FR/NFR in spec.md maps to a task (requires tasks.md).
  2. Task Traceability — Every task references a valid spec section (requires tasks.md).
  3. Plan Alignment — Architecture decisions match task file paths (requires plan.md).
  4. Data Model Consistency — Entities in data-model.md are used in tasks (requires data-model.md).
  5. Contract Coverage — API endpoints have implementation tasks (requires contracts/).
  6. Constitution Compliance — Documents comply with stated principles (requires constitution.md).
  7. Duplication Detection — Overlapping requirements in spec.md.
  8. Ambiguity Detection — Vague language in spec.md.

Checks that require missing documents are skipped with a note.

Receive: Analysis report with findings classified by severity (CRITICAL / HIGH / MEDIUM / LOW).
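
One way to express the skip logic is to record, per check, which documents it requires and partition the list against the documents actually found. The check names mirror the eight checks above; the two-letter IDs are placeholders, except RC and AM, which appear in the sample report below.

```typescript
interface Check {
  id: string;
  name: string;
  requires: string[]; // documents beyond spec.md that must be present
}

const CHECKS: Check[] = [
  { id: "RC", name: "Requirement Coverage", requires: ["tasks.md"] },
  { id: "TT", name: "Task Traceability", requires: ["tasks.md"] },
  { id: "PA", name: "Plan Alignment", requires: ["plan.md"] },
  { id: "DM", name: "Data Model Consistency", requires: ["data-model.md"] },
  { id: "CC", name: "Contract Coverage", requires: ["contracts/"] },
  { id: "CN", name: "Constitution Compliance", requires: ["constitution.md"] },
  { id: "DU", name: "Duplication Detection", requires: [] }, // spec.md only
  { id: "AM", name: "Ambiguity Detection", requires: [] },   // spec.md only
];

function partitionChecks(found: Set<string>) {
  const runnable = CHECKS.filter((c) => c.requires.every((d) => found.has(d)));
  const skipped = CHECKS.filter((c) => c.requires.some((d) => !found.has(d)));
  return { runnable, skipped }; // each skipped check is reported with a note
}
```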

Step 4: Generate Checklists

Invoke the checklist-generation skill:

  1. Auto-detect applicable domains based on spec content.
  2. Generate checklist files for each applicable domain.
  3. Count questions per domain.

Receive: Checklist summary with domain coverage and question counts.
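
A rough sketch of domain auto-detection as simple keyword matching over the spec text. The domain names match the checklist table shown later; the keyword patterns are illustrative assumptions, not the checklist-generation skill's actual heuristics.

```typescript
const DOMAIN_KEYWORDS: Record<string, RegExp> = {
  ux: /user|screen|form|navigation|accessib/i,
  api: /endpoint|request|response|rest api|graphql/i,
  security: /auth|token|password|encrypt|permission/i,
  performance: /latency|throughput|cach|load test|scalab/i,
  data: /entity|schema|migration|database|data model/i,
};

// Returns the domains whose keywords appear anywhere in the spec text.
function detectDomains(specText: string): string[] {
  return Object.entries(DOMAIN_KEYWORDS)
    .filter(([, pattern]) => pattern.test(specText))
    .map(([domain]) => domain);
}
```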

Step 5: Produce Combined Quality Report

Merge the analysis findings and checklist coverage into a unified report:

## Quality Gate Report

**Feature:** <feature-slug>
**Date:** <current date>
**Documents Analyzed:** <list>
**Checks Performed:** <N of 8>
**Checks Skipped:** <N> (reason: missing documents)

### Overall Status: PASS / FAIL / WARN

PASS = 0 CRITICAL, 0 HIGH findings
WARN = 0 CRITICAL, 1+ HIGH findings
FAIL = 1+ CRITICAL findings

---

### Consistency Analysis Summary

| Severity | Count |
|----------|-------|
| CRITICAL | X     |
| HIGH     | Y     |
| MEDIUM   | Z     |
| LOW      | W     |

#### CRITICAL Findings
1. [RC-001] FR-003 "rate limiting on login" has no corresponding task [spec.md → tasks.md]
   **Fix:** Add a task in Phase 2 (Foundation) for rate limiting middleware

#### HIGH Findings
1. [AM-001] FR-004 uses "should support multiple providers" — which providers?
   **Fix:** Change to "must support Google and GitHub OAuth2 providers"

#### MEDIUM Findings
...

#### LOW Findings
...

---

### Checklist Coverage

| Domain      | Questions | Unchecked | File                                          |
|-------------|-----------|-----------|-----------------------------------------------|
| ux          | 14        | 14        | .speckit/<slug>/checklists/ux.md              |
| api         | 18        | 18        | .speckit/<slug>/checklists/api.md             |
| security    | 12        | 12        | .speckit/<slug>/checklists/security.md        |
| performance | 10        | 10        | .speckit/<slug>/checklists/performance.md     |
| data        | 11        | 11        | .speckit/<slug>/checklists/data.md            |

---

### Recommendations

[Ordered by priority — CRITICAL first, then HIGH]

1. **Add rate limiting task** — Create T0XX in Phase 2 for FR-003 coverage
   Run: `/speckit-helper:refine <slug> Add rate limiting implementation task`

2. **Clarify OAuth providers** — Specify exact provider list in FR-004
   Run: `/speckit-helper:refine <slug> Specify Google and GitHub as OAuth providers`

3. **Review checklists** — 65 unchecked items across 5 domains
   Review: `.speckit/<slug>/checklists/` and check off covered items
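
A small sketch of how the severity summary table and the priority-ordered recommendations could be assembled from the findings; it reuses the Finding shape from the earlier orchestration sketch and is illustrative, not the skill's actual report renderer.

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";
interface Finding { id: string; severity: Severity; message: string; fix: string }

const ORDER: Severity[] = ["CRITICAL", "HIGH", "MEDIUM", "LOW"];

// Renders the markdown severity/count table used in the report above.
function severityTable(findings: Finding[]): string {
  const rows = ORDER.map((s) => {
    const count = findings.filter((f) => f.severity === s).length;
    return `| ${s.padEnd(8)} | ${String(count).padEnd(5)} |`;
  });
  return ["| Severity | Count |", "|----------|-------|", ...rows].join("\n");
}

// Orders recommendations CRITICAL first, then HIGH, then the rest.
function orderedRecommendations(findings: Finding[]): string[] {
  return [...findings]
    .sort((a, b) => ORDER.indexOf(a.severity) - ORDER.indexOf(b.severity))
    .map((f, i) => `${i + 1}. [${f.id}] ${f.fix}`);
}
```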

Step 6: Present Results and Suggest Next Steps

Based on the overall status:

If PASS:

Quality gate: PASS

All consistency checks passed. No critical or high-severity issues found.
Checklists generated for X domains (Y total questions).

Suggested next steps:
  /speckit-helper:implement <slug>           Start implementation
  /speckit-helper:tasks-to-issues <slug>     Create GitHub Issues first

If WARN:

Quality gate: WARN (X high-severity findings)

No critical issues found, but X high-severity findings should be addressed.
See recommendations above.

Options:
  a) Fix issues now — I'll apply the suggested changes
  b) Proceed to implementation — issues will be noted as warnings
  c) Review manually — stop here and let me know when ready

If FAIL:

Quality gate: FAIL (X critical findings)

Critical issues must be resolved before implementation.
See recommendations above.

Suggested:
  /speckit-helper:refine <slug> <fix description>    Apply fixes
  /speckit-helper:analyze <slug>                     Re-check after fixes

Wait for user decision before taking any action.


Guidelines

Do:

  • Always run consistency analysis before checklist generation
  • Report all findings regardless of severity
  • Provide specific, actionable fix suggestions for each finding
  • Skip checks gracefully when documents are missing (with clear notes)
  • Wait for user approval before applying any fixes
  • Track which checks were skipped and why

Don't:

  • Modify any files during analysis (read-only until user approves fixes)
  • Skip the analysis phase
  • Auto-apply fixes without user consent
  • Report false positives — verify findings against actual document content
  • Block on MEDIUM/LOW findings (only CRITICAL triggers FAIL)
  • Generate checklists for irrelevant domains

Integration with Commands

This agent orchestrates:

  • /speckit-helper:analyze — Phase 1 (consistency analysis)
  • /speckit-helper:checklist — Phase 2 (checklist generation)

And may suggest:

  • /speckit-helper:refine — To fix found issues
  • /speckit-helper:implement — If quality gate passes

Example Interaction

User: "Check if my user-authentication spec is ready for implementation"

quality-gate:

  1. Document inventory: Found spec.md, plan.md, tasks.md, data-model.md, constitution.md. No contracts found — skipping API contract checks.

  2. Consistency analysis: 7 of 8 checks executed (contract coverage skipped). Results: 0 CRITICAL, 1 HIGH, 3 MEDIUM, 2 LOW.

  3. Checklist generation: 4 domains applicable (ux, api, security, data). Generated 55 questions across 4 checklists.

  4. Report: Overall status = WARN.

    • HIGH: Task T004 references [Spec §2.3] but only Google OAuth is listed — GitHub was mentioned in user stories but not in requirements.
    • Fix suggestion: Add FR-XXX for GitHub OAuth support or remove from user story.

  5. User chooses option (a) → Agent applies the fix via /speckit-helper:refine, re-runs analysis → PASS.

  6. Final recommendation: "Quality gate passed. Ready for /speckit-helper:implement."

Source

View on GitHub: https://github.com/datamaker-kr/synapse-claude-marketplace/blob/main/plugins/speckit-helper/agents/quality-gate/SKILL.md

Overview

Quality gate is an orchestrator that coordinates consistency-analysis and checklist-generation to assess specification documents. It analyzes completeness and alignment, delivering a unified report with pass/fail results and concrete fix suggestions. This helps teams validate specs before starting implementation.

How This Skill Works

It first triggers the consistency-analysis phase to detect cross-document gaps, conflicts, and ambiguities. It then runs checklist-generation to create domain-specific quality checklists and map requirement coverage. Finally, it merges both results into a single quality report with actionable fixes and a go/no-go recommendation.

When to Use It

  • You want to validate or check specification quality before starting work
  • You need to verify readiness before implementation
  • You explicitly invoke the quality-gate agent to audit a spec
  • You say 'check my spec' or 'is my spec ready?'
  • After major spec changes (refine/clarify) to re-validate

Quick Start

  1. Resolve Feature — provide a feature-slug or let the agent glob .speckit/*/spec.md
  2. Verify Documents Exist — confirm spec.md is present, catalog optional docs, and print the inventory
  3. Run the Analysis — invoke consistency-analysis and checklist-generation, then review the combined report and fix suggestions

Best Practices

  • Ensure spec.md exists and is current before running
  • Run after major edits to catch new gaps and ambiguities
  • Include optional docs (plan.md, tasks.md, data-model.md, contracts/*.md, constitution.md) to deepen analysis
  • Review fix suggestions and translate into concrete spec updates
  • Use the pass/fail verdict to decide go/no-go for implementation

Example Use Cases

  • Validate a feature spec before a new sprint to surface missing requirements
  • Assess API contract coverage by including contracts/*.md in the analysis
  • Align architecture decisions with task references in tasks.md
  • Identify data-model inconsistencies where entities are not used in tasks
  • Re-run after refinements to confirm all issues are resolved
