test-cases
Test Case Generator
Install: npx machina-cli add skill cyberwalk3r/qa-toolkit/test-cases --openclaw
Generate test cases from any requirements input. Read qa-artifacts/.qa-config.json for project context.
Input
Accept via $ARGUMENTS: user stories, PRD excerpts, feature descriptions, Jira ticket content, PR descriptions, or casual feature explanations.
Workflow
- Parse the requirement — identify the feature, actors, and expected behavior
- Identify testable scenarios across these dimensions:
  - Happy path — normal expected flow
  - Negative cases — invalid inputs, unauthorized access, missing data
  - Boundary values — min, max, empty, off-by-one limits
  - Edge cases — concurrent actions, timeouts, network failures
  - Security — injection, XSS, CSRF, auth bypass
  - Performance — load, stress, response time
- Prioritize each test case:
  - P0 (Smoke) — must pass for any release, core functionality
  - P1 (Critical) — essential for feature completeness
  - P2 (Extended) — important edge cases and integration
  - P3 (Exploratory) — nice-to-have, deep edge cases
- Format output based on user preference, or default to the table format
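The workflow above can be sketched in code. This is a minimal illustration, not the skill's implementation: the class and function names (TestCase, Priority, to_table) are hypothetical, but the fields and the table columns mirror the format defined in this document.

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    """The four priority tiers defined by this skill."""
    P0 = "Smoke"
    P1 = "Critical"
    P2 = "Extended"
    P3 = "Exploratory"

@dataclass
class TestCase:
    title: str
    steps: list[str]
    expected: str
    priority: Priority
    case_type: str  # e.g. "Happy Path", "Negative", "Boundary"

def to_table(cases: list[TestCase]) -> str:
    """Render test cases as the default markdown table."""
    rows = ["| # | Test Case | Steps | Expected Result | Priority | Type |",
            "|---|---|---|---|---|---|"]
    for i, c in enumerate(cases, 1):
        steps = "; ".join(c.steps)
        rows.append(f"| {i} | {c.title} | {steps} | {c.expected} "
                    f"| {c.priority.name} | {c.case_type} |")
    return "\n".join(rows)

cases = [TestCase("Valid login succeeds",
                  ["Open login page", "Enter valid credentials", "Submit"],
                  "User lands on dashboard", Priority.P0, "Happy Path")]
print(to_table(cases))
```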
Output Format — Table (Default)
| # | Test Case | Steps | Expected Result | Priority | Type |
|---|---|---|---|---|---|
| 1 | ... | ... | ... | P0 | Happy Path |
Output Format — Gherkin BDD
For BDD format, read references/bdd-patterns.md.
Feature: <feature name>
  Scenario: <scenario description>
    Given <precondition>
    When <action>
    Then <expected result>
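For instance, the login example from the use cases below might render as (feature name and values here are hypothetical):

```gherkin
Feature: User login
  Scenario: Valid credentials grant access
    Given a registered user on the login page
    When they submit a valid email and password
    Then they are redirected to their dashboard
```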
Output Format — Checklist
Simple checkbox format for manual testing:
## <Feature> Test Cases
### P0 — Smoke Tests
- [ ] <test case description>
### P1 — Critical Tests
- [ ] <test case description>
Traceability
Link each test case back to the original requirement with a reference tag (e.g., a Jira key such as PROJ-123, or a section number from the PRD).
Save
Save to qa-artifacts/test-cases/tc-YYYY-MM-DD-<feature>.md
Suggested Next Steps
After generating test cases, suggest:
- "Generate synthetic test data for these scenarios with /qa-toolkit:test-data."
- "Automate the P0/P1 cases as Playwright E2E tests with /qa-toolkit:e2e-test."
Source
https://github.com/cyberwalk3r/qa-toolkit/blob/main/skills/test-cases/SKILL.md
Overview
Test Case Generator converts user stories, PRD excerpts, and feature descriptions into concrete test cases. It supports table, Gherkin BDD, or checklist outputs, assigns priority levels (P0-P3), adds traceability back to requirements, and saves results to qa-artifacts/test-cases using project context.
How This Skill Works
The tool parses the input to identify feature, actors, and expected behavior, then enumerates testable scenarios across categories like Happy Path, Negative, Boundary, Edge, Security, and Performance. It then prioritizes each case (P0-P3) and formats the output in the chosen style (default table), saving results to qa-artifacts/test-cases.
When to Use It
- When you have user stories, PRD excerpts, or Jira content that need test coverage.
- When you need a structured set of test cases with traceability back to requirements.
- When prioritizing tests into P0-P3 before release to focus on core coverage.
- When generating multiple output formats (table, Gherkin BDD, or checklist) for different stakeholders.
- When saving and reusing test cases in qa-artifacts/test-cases for project context.
Quick Start
- Step 1: Provide requirements via $ARGUMENTS (user stories, PRD excerpts, or feature descriptions).
- Step 2: Run the generator to parse, identify testable scenarios, and assign priorities (P0-P3).
- Step 3: Export and save the test cases in the preferred format (default table) to qa-artifacts/test-cases.
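Assuming this skill follows the same slash-command naming as its siblings (/qa-toolkit:test-data, /qa-toolkit:e2e-test), an invocation might look like:

```text
/qa-toolkit:test-cases "As a returning user, I want to reset my password via email so I can regain access"
```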
Best Practices
- Parse the requirement to extract feature, actors, and expected behavior for precise test coverage.
- Explicitly cover Happy Path, Negative, Boundary, Edge, Security, and Performance scenarios.
- Tag or reference each test case with its original requirement for traceability.
- Select an output format that matches your workflow (default table; optional Gherkin or Checklist).
- Prioritize test cases clearly using P0-P3 and review gaps before release.
Example Use Cases
- Login flow test cases derived from a user story.
- Checkout process test cases derived from a PRD, covering payment method validation.
- Access control: admin-only pages and unauthorized access checks.
- Search feature: results present vs. no results, with filters.
- Performance: catalog load under peak traffic.
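For the login use case, hypothetical generated output in the default table format might look like this (the rows below are illustrative, not produced by the tool):

```markdown
| # | Test Case | Steps | Expected Result | Priority | Type |
|---|---|---|---|---|---|
| 1 | Valid login | Enter valid credentials; submit | Dashboard loads | P0 | Happy Path |
| 2 | Wrong password | Enter valid email, wrong password; submit | Error shown, no session created | P0 | Negative |
| 3 | Injection in email field | Enter ' OR 1=1 -- as email; submit | Input rejected safely | P1 | Security |
```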