pwp-test
npx machina-cli add skill shandar/pwp-plugin/pwp-test --openclaw

PWP Testing Protocol
You are now operating under the Project Workflow Protocol's testing discipline. Follow this structured approach for every testing task.
Context: $ARGUMENTS
Phase 1: Assess Testing Needs
Before writing any test, understand:
- What is being tested? — Business logic, API endpoint, UI component, integration?
- What layer of the test pyramid? — Unit (fast, many), Integration (medium, some), E2E (slow, few)
- What already exists? — Check for existing test files, test utils, and coverage reports
- What's the testing stack? — Jest, Vitest, Playwright, Cypress, React Testing Library, etc.
Phase 2: Apply the Test Pyramid
Distribute testing effort intentionally:
| Layer | What It Tests | Speed | Quantity |
|---|---|---|---|
| Unit | Pure functions, utilities, isolated logic | Fast (ms) | Many |
| Integration | Component interactions, API endpoints, DB queries | Medium (s) | Some |
| E2E | Full user journeys through real UI | Slow (min) | Few |
Always Test
- Business logic and calculations
- Data transformations and parsing
- Edge cases: empty inputs, null values, boundary conditions
- Error paths: what happens when things fail
- API contracts: request/response shapes
- Auth and permission logic
Test With Judgment
- Component rendering (test behavior, not snapshot every div)
- Form validation rules
- State transitions
- Navigation flows
Rarely Test
- Third-party library internals (trust the library, mock the boundary)
- Pure styling (visual regression tools handle this better)
- One-line getters or trivial pass-through functions
- Framework boilerplate
Phase 3: Write Tests
File Naming
Co-locate test files with source:
src/utils/formatCurrency.ts
src/utils/formatCurrency.test.ts ← co-located
src/components/InvoiceTable.tsx
src/components/InvoiceTable.test.tsx ← co-located
__tests__/integration/checkout.test.ts ← integration in dedicated folder
Test Case Naming
Pattern: it('should {expected behavior} when {condition}')
describe('formatCurrency')
it('should format positive numbers with two decimals')
it('should return "$0.00" when given zero')
it('should handle negative numbers with a minus prefix')
it('should throw when given NaN')
Test Data Rules
- Use factories, not fixtures. Generate test data with overridable defaults.
- No production data. Use realistic but synthetic data.
- Isolate state. Each test sets up and tears down its own state.
- Name clearly. validUser, expiredToken, emptyCart, not testData1.
Phase 4: When to Write Tests
| Scenario | Approach |
|---|---|
| New business logic | Write tests alongside implementation |
| Bug fix | Write failing test first, then fix (TDD for bugs) |
| Refactoring | Ensure tests exist BEFORE refactoring (safety net) |
| Exploratory/prototype | Skip tests, but flag as untested |
| Performance-critical path | Add benchmark tests with baseline thresholds |
Phase 5: Coverage Targets
| Context | Target | Rationale |
|---|---|---|
| Business logic / utils | 90%+ | Most testable and most critical |
| API routes / controllers | 80%+ | Integration tests cover important paths |
| UI components | 60%+ | Test behavior and interactions |
| Overall project | 70%+ | Meaningful floor that compounds confidence |
Coverage is a compass, not a destination.
Philosophy (non-negotiable)
- Test behavior, not implementation. Tests should survive refactors.
- Tests are documentation. A well-named test suite tells the next dev what the code does.
- Diminishing returns are real. 80% meaningful coverage beats 100% checkbox coverage.
- Flaky tests are worse than no tests. A flaky test erodes trust in the entire suite. Fix or delete.
Verification Checklist
After writing tests, confirm:
- New business logic has corresponding tests
- Bug fixes include a regression test
- Tests are named descriptively (behavior + condition)
- Test data is synthetic, isolated, and clearly named
- No test depends on another test's state
- Flaky tests are fixed or removed
- Coverage targets met for changed code
- Test suite passes with exit code 0
Source
https://github.com/shandar/pwp-plugin/blob/main/skills/pwp-test/SKILL.md
Overview
pwp-test defines a disciplined approach to testing: understand what to test, apply the test pyramid, and implement clear naming and data factories. It emphasizes coverage targets and when to test what, so teams catch real bugs early. This guide helps you set up testing strategies, write testable code, and maintain meaningful test suites.
How This Skill Works
Follow Phase 1 through Phase 5 to implement tests: assess testing needs, distribute tests by the pyramid (unit, integration, E2E), and decide when to write tests. Enforce co-located test files, descriptive naming patterns, and factories for test data. Use the coverage targets as a compass to balance effort and risk.
When to Use It
- User asks to write tests, add test coverage, or set up a testing strategy
- Need to decide which testing layer to target (unit, integration, E2E)
- Need to ensure test data quality using factories instead of fixtures
- You're fixing a bug and want to apply TDD-style tests first
- Setting up testing standards, naming, and coverage targets across the project
Quick Start
- Step 1: Assess testing needs and stack (Unit, Integration, E2E) for the feature
- Step 2: Write co-located tests with clear naming and build data via factories
- Step 3: Set coverage targets and iterate on tests as requirements evolve
Best Practices
- Co-locate test files with source files (same folder as the implementation)
- Use factories, not fixtures, with overridable defaults
- Name tests clearly using patterns like it('should ...')
- Distribute tests according to the pyramid: unit first, then integration, then E2E
- Isolate state for each test and keep tests deterministic
Example Use Cases
- Unit test a pure function in src/utils/formatCurrency.ts
- Integration test for an API endpoint (e.g., /users) with realistic data
- E2E test for a user journey (login, navigate, perform action)
- Test data built with factories: validUser, expiredToken, emptyCart
- Add regression tests before refactoring to serve as a safety net