release-readiness

npx machina-cli add skill cyberwalk3r/qa-toolkit/release-readiness --openclaw

Release Readiness Assessment

Generate a go/no-go release decision. Read qa-artifacts/.qa-config.json for project context.
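The schema of qa-artifacts/.qa-config.json is not documented here; a minimal sketch of what it might contain, with all field names being illustrative assumptions rather than the toolkit's actual keys:

```json
{
  "project": "example-app",
  "defaultReleaseType": "minor",
  "artifactsDir": "qa-artifacts"
}
```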

Input

Accept input via $ARGUMENTS: a release type, a version, or nothing to start an interactive assessment.

Workflow

  1. Identify release type:
    • Hotfix (emergency fix, minimal testing)
    • Patch (bug fixes, targeted testing)
    • Minor (new features, standard testing)
    • Major (breaking changes, full regression)
  2. Gather quality signals — ask the user about each:
    • Test execution status — how many tests passed/failed/blocked?
    • Known open bugs — any blockers or criticals?
    • New features tested? — against acceptance criteria?
    • Regression testing — completed to what scope?
    • Performance — any degradation noted?
    • Security — any new vulnerabilities?
  3. Score each quality gate (0-100):
    • Test Coverage Score
    • Bug Status Score (based on open severity)
    • Regression Score
    • Deployment Readiness Score
  4. Generate recommendation
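The skill does not specify how the four 0-100 gate scores combine into a verdict; a minimal sketch, assuming illustrative thresholds (any gate below 50 blocks, every gate at 80 or above clears) that are not the toolkit's actual logic:

```python
# Sketch of the gate-scoring step. The thresholds below (50 to block,
# 80 for a clear pass) are illustrative assumptions, not the skill's
# documented behavior.

GATES = ("test_coverage", "bug_status", "regression", "deployment")

def verdict(scores: dict) -> str:
    """Map four 0-100 quality gate scores to a go/no-go recommendation."""
    missing = [g for g in GATES if g not in scores]
    if missing:
        raise ValueError(f"missing gate scores: {missing}")
    if any(s < 50 for s in scores.values()):   # any failing gate blocks release
        return "NO-GO"
    if all(s >= 80 for s in scores.values()):  # comfortable margin on every gate
        return "GO"
    return "CONDITIONAL GO"                    # passable, but only with conditions

print(verdict({"test_coverage": 92, "bug_status": 85,
               "regression": 88, "deployment": 90}))  # GO
```

A weighted average could replace the all/any rules if some gates (for example, Bug Status on a hotfix) should dominate the decision.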

Output Structure

## Release Readiness Assessment
Version: <version>
Release Type: <type>
Date: YYYY-MM-DD
Assessor: QA Toolkit

### Overall Verdict: GO / NO-GO / CONDITIONAL GO
Confidence: <percentage>

### Quality Gate Scores
| Gate | Score | Status | Details |
|------|-------|--------|---------|
| Test Coverage | xx/100 | ✅/⚠️/❌ | ... |
| Bug Status | xx/100 | ✅/⚠️/❌ | ... |
| Regression | xx/100 | ✅/⚠️/❌ | ... |
| Deployment | xx/100 | ✅/⚠️/❌ | ... |

### Risk Matrix
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|

### Conditions (if Conditional Go)
- [ ] <condition that must be met>

### Post-Release Monitoring Plan
- First 1 hour: <what to monitor>
- First 24 hours: <what to monitor>
- First 72 hours: <what to monitor>

### Rollback Trigger Criteria
- <condition that triggers rollback>

For release type templates, read references/release-types.md.

Save

Save to qa-artifacts/release-assessments/release-YYYY-MM-DD-<version>.md

Suggested Next Steps

After generating the assessment, suggest next steps based on the results:

  • If verdict is NO-GO or Regression Score is below 70: "Plan targeted regression testing with /qa-toolkit:regression-planner."

Source

git clone https://github.com/cyberwalk3r/qa-toolkit

The skill file lives at skills/release-readiness/SKILL.md in the repository.

Overview

Release Readiness Assessment generates a go/no-go decision by collecting quality signals, scoring four quality gates, and presenting a risk-aware verdict. It reads project context from qa-artifacts/.qa-config.json and supports input as a release type, version, or an interactive flow. The output guides whether to proceed, pause, or monitor post-release.

How This Skill Works

It identifies the release type, gathers signals (test status, blockers, feature testing, regression scope, performance, security), and scores each gate from 0-100. It renders a structured assessment with an overall verdict, confidence, risk matrix, and a post-release monitoring plan. It supports release-type templates from references/release-types.md and saves the artifact to qa-artifacts/release-assessments.

When to Use It

  • Before hotfix deployments to ensure minimal risk despite urgent fixes
  • Before patch releases with targeted testing and known bug fixes
  • Prior to minor releases introducing new features with standard testing
  • Prior to major releases with breaking changes and full regression
  • During security or compliance updates requiring risk analysis

Quick Start

  1. Start the assessment by providing a release type and version, or choose interactive mode
  2. Answer prompts for test status, blockers, new feature testing, regression scope, performance, and security
  3. Review the generated assessment and save it to qa-artifacts/release-assessments/release-YYYY-MM-DD-<version>.md

Best Practices

  • Start with a clear release type and version to anchor scoring
  • Collect complete signals: test status, blockers, regression scope, performance, and security
  • Review quality gate scores and verify margins before verdict
  • Use interactive flow when context is uncertain or data is incomplete
  • Save the assessment to qa-artifacts and align with rollback criteria

Example Use Cases

  • Hotfix for a critical production bug with minimal regression testing
  • Patch release addressing blockers with targeted regression checks
  • Minor feature release with standard acceptance criteria
  • Major release introducing breaking changes with full regression suite
  • Security patch requiring updated vulnerability checks and monitoring
