cross-artifact-analysis
npx machina-cli add skill a5c-ai/babysitter/cross-artifact-analysis --openclaw
Cross-Artifact Analysis
Overview
Analyze all pipeline artifacts (constitution, specification, plan, tasks) for consistency, coverage, and alignment. This is the pre-implementation quality gate that ensures all artifacts are coherent before code is written.
When to Use
- After task decomposition, before implementation
- When verifying that all specification requirements have corresponding tasks
- When checking for contradictions between constitution and plan
- When assessing readiness for the implementation phase
Key Principle
Every specification requirement must be traceable through the plan to at least one task. No artifact should contradict another. Coverage gaps and conflicts must be resolved before implementation.
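The traceability rule above can be sketched as a small check. This is a minimal illustration, not the skill's actual implementation: the artifact shapes (a list of requirement IDs, a plan mapping components to the requirements they cover, and tasks tagged with a component) and the function name are assumptions.

```python
# Illustrative sketch: every requirement must reach at least one task
# via some plan component. All data shapes here are assumed.

def untraceable_requirements(requirements, plan, tasks):
    """Return requirement IDs with no requirement -> plan component -> task path."""
    orphans = []
    for req in requirements:
        # Plan components that claim to cover this requirement.
        components = [c for c, reqs in plan.items() if req in reqs]
        # Is any task attached to one of those components?
        covered = any(t["component"] in components for t in tasks)
        if not covered:
            orphans.append(req)
    return orphans

requirements = ["REQ-1", "REQ-2", "REQ-3"]
plan = {"auth-module": ["REQ-1"], "api-layer": ["REQ-2", "REQ-3"]}
tasks = [{"id": "T-1", "component": "auth-module"},
         {"id": "T-2", "component": "api-layer"}]

print(untraceable_requirements(requirements, plan, tasks))  # []
```

A nonempty result is exactly the coverage gap that must be resolved before implementation.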
Process
- Build traceability matrix - Map requirements -> plan components -> tasks
- Detect coverage gaps - Requirements without corresponding tasks
- Identify conflicts - Contradictory constraints or requirements across artifacts
- Verify constitution compliance - Plan and tasks comply with governance
- Check acceptance criteria - Task criteria match specification requirements
- Score consistency - Numeric score (0-100) across dimensions
- Determine readiness - Boolean assessment for implementation phase
- Human review - Approve analysis results before proceeding
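The automated steps above (matrix, gaps, conflicts, score, readiness) can be sketched as a single pass over the artifacts. Everything here is a hedged illustration: the constraint model, the penalty weights, and the readiness threshold are assumptions, not the skill's defined scoring scheme, and the human-review step is deliberately left out because it is manual.

```python
# Assumed artifact shapes: spec_reqs is a list of requirement IDs; plan maps
# component -> covered requirements; tasks carry an id and a component;
# constraints maps artifact name -> {key: value} settings.

def analyze(spec_reqs, plan, tasks, constraints):
    # 1. Traceability matrix: requirement -> (components, task ids).
    matrix = {}
    for req in spec_reqs:
        comps = [c for c, reqs in plan.items() if req in reqs]
        tids = [t["id"] for t in tasks if t["component"] in comps]
        matrix[req] = {"components": comps, "tasks": tids}

    # 2. Coverage gaps: requirements that reach no task.
    gaps = [r for r, row in matrix.items() if not row["tasks"]]

    # 3. Conflicts: the same key set to different values in different
    #    artifacts (a deliberately simple notion of "contradiction").
    seen, conflicts = {}, []
    for artifact, kv in constraints.items():
        for key, value in kv.items():
            if key in seen and seen[key][1] != value:
                conflicts.append((key, seen[key][0], artifact))
            else:
                seen[key] = (artifact, value)

    # 4-6. Consistency score (0-100); penalty weights are illustrative.
    score = max(0, 100 - 15 * len(gaps) - 20 * len(conflicts))

    # 7. Readiness: no gaps, no conflicts, score above an assumed threshold.
    ready = not gaps and not conflicts and score >= 80
    return {"matrix": matrix, "gaps": gaps, "conflicts": conflicts,
            "score": score, "ready": ready}

report = analyze(
    spec_reqs=["REQ-1"],
    plan={"api": ["REQ-1"]},
    tasks=[{"id": "T-1", "component": "api"}],
    constraints={"spec": {"timeout": 30}, "plan": {"timeout": 60}},
)
print(report["conflicts"], report["score"], report["ready"])
# [('timeout', 'spec', 'plan')] 80 False
```

Note that a single unresolved conflict blocks readiness even when the score stays high, which mirrors the key principle that no artifact may contradict another.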
Tool Use
Invoke via babysitter process: methodologies/spec-kit/spec-kit-planning (analysis phase)
Full pipeline: methodologies/spec-kit/spec-kit-orchestrator
Source
git clone https://github.com/a5c-ai/babysitter
Skill file: plugins/babysitter/skills/babysit/process/methodologies/spec-kit/skills/cross-artifact-analysis/SKILL.md
Overview
Cross-artifact-analysis is the pre-implementation quality gate that examines constitution, specification, plan, and tasks for consistency, coverage, and alignment. It ensures traceability from requirements to tasks, detects gaps and conflicts, and yields a readiness assessment before coding begins.
How This Skill Works
The skill builds a traceability matrix linking requirements to plan components and tasks, identifies coverage gaps and conflicts across artifacts, verifies governance compliance and acceptance criteria, assigns a 0-100 consistency score and a boolean readiness flag, and concludes with a human review before proceeding. It is invoked via the babysitter process, either as the analysis phase of spec-kit-planning or through the full spec-kit-orchestrator pipeline.
When to Use It
- After task decomposition, before implementation
- When verifying that all specification requirements have corresponding tasks
- When checking for contradictions between constitution and plan
- When assessing readiness for the implementation phase
- As the pre-implementation quality gate that confirms all artifacts are coherent
Quick Start
- Step 1: Build traceability matrix mapping requirements -> plan components -> tasks
- Step 2: Run coverage gap and cross-artifact conflict checks, note issues
- Step 3: Apply governance verification, score consistency (0-100), and perform human review
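Step 3 pairs the numeric score with a documented rationale for the human reviewer. One way to sketch that pairing, assuming hypothetical dimension names and deduction sizes (the skill's real dimensions and weights may differ):

```python
# Illustrative scoring helper: a 0-100 score plus a recorded rationale,
# so the reviewer sees why points were deducted, not just the total.
# Dimension names and per-issue penalties are assumptions.

def score_consistency(findings):
    """findings: dict mapping dimension name -> list of issue descriptions."""
    deductions = {"coverage": 15, "conflicts": 20, "governance": 25, "criteria": 10}
    score, rationale = 100, []
    for dim, issues in findings.items():
        penalty = deductions.get(dim, 10) * len(issues)
        if penalty:
            score -= penalty
            rationale.append(f"-{penalty} {dim}: " + "; ".join(issues))
    return max(score, 0), rationale

score, why = score_consistency({
    "coverage": ["REQ-2 has no corresponding task"],
    "conflicts": ["timeout differs between spec and plan"],
})
print(score)  # 65
```

Recording the rationale alongside the score is what makes the subsequent human review an approval step rather than a re-derivation.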
Best Practices
- Maintain a living traceability matrix that updates with every artifact change
- Start from high-level requirements and map down to concrete tasks
- Run regular coverage and conflict checks across constitution, specification, plan, and tasks
- Apply a numeric consistency score (0-100) and document rationale for scores
- Require a final human review and approval before moving to implementation
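A "living" matrix, per the first practice above, needs a way to notice when any artifact has changed since the matrix was built. One simple approach, assumed here for illustration, is to store a content fingerprint alongside the matrix and flag it stale on mismatch:

```python
# Sketch: fingerprint the serialized artifacts when the matrix is built,
# and compare fingerprints later to detect staleness. The class and
# function names are hypothetical.

import hashlib
import json

def fingerprint(*artifacts):
    """Stable hash over JSON-serializable artifacts."""
    blob = json.dumps(artifacts, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class LivingMatrix:
    def __init__(self, spec, plan, tasks):
        self.matrix = {}  # requirement -> task ids, rebuilt by the analysis step
        self.stamp = fingerprint(spec, plan, tasks)

    def is_stale(self, spec, plan, tasks):
        # True when any artifact changed since the matrix was last built.
        return fingerprint(spec, plan, tasks) != self.stamp
```

A stale matrix would trigger a rebuild and a fresh round of coverage and conflict checks before the final human review.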
Example Use Cases
- A feature rollout where every requirement maps to at least one user story and task, ensuring no requirement is orphaned
- A regulated healthcare project that must demonstrate traceability from governance policies to implementation tasks
- An API integration initiative where conflicting constraints between protocol spec and plan are resolved prior to coding
- A multi-team build where plan components must align with architectural constitution to avoid misalignment
- A legacy system upgrade where acceptance criteria are traced back to explicit specification requirements