
integration-e2e-testing

npx machina-cli add skill shinpr/claude-code-workflows/integration-e2e-testing --openclaw
Files (1): SKILL.md (3.8 KB)

Integration and E2E Testing Principles

Test Type Definition and Limits

| Test Type | Purpose | Scope | Limit per Feature | Implementation Timing |
| --- | --- | --- | --- | --- |
| Integration | Verify component interactions | Partial system integration | MAX 3 | Created alongside implementation |
| E2E | Verify critical user journeys | Full system | MAX 1-2 | Executed in final phase only |

Behavior-First Principle

Include (High ROI)

  • Business logic correctness (calculations, state transitions, data transformations)
  • Data integrity and persistence behavior
  • User-visible functionality completeness
  • Error handling behavior (what user sees/experiences)

Exclude (Low ROI in CI/CD)

  • External service real connections → Use contract/interface verification
  • Performance metrics → Non-deterministic, defer to load testing
  • Implementation details → Focus on observable behavior
  • UI layout specifics → Focus on information availability

Principle: Test = User-observable behavior verifiable in isolated CI environment

ROI Calculation

ROI Score = (Business Value × User Frequency + Legal Requirement × 10 + Defect Detection)
            / (Creation Cost + Execution Cost + Maintenance Cost)

Cost Table

| Test Type | Create | Execute | Maintain | Total |
| --- | --- | --- | --- | --- |
| Unit | 1 | 1 | 1 | 3 |
| Integration | 3 | 5 | 3 | 11 |
| E2E | 10 | 20 | 8 | 38 |
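The formula and cost table above can be sketched in code. The following Python function is illustrative only; the factor weights for business value, user frequency, and defect detection are hypothetical inputs, while the cost figures come from the table.

```python
# Sketch of the ROI formula above. Parameter names mirror the formula;
# the example weights (business_value=5, etc.) are hypothetical.

def roi_score(business_value, user_frequency, legal_requirement,
              defect_detection, create_cost, execute_cost, maintain_cost):
    """ROI = (value * frequency + legal * 10 + defect) / (create + execute + maintain)."""
    benefit = business_value * user_frequency + legal_requirement * 10 + defect_detection
    cost = create_cost + execute_cost + maintain_cost
    return benefit / cost

# Integration test costs from the cost table: 3 + 5 + 3 = 11
score = roi_score(business_value=5, user_frequency=4,
                  legal_requirement=0, defect_detection=3,
                  create_cost=3, execute_cost=5, maintain_cost=3)
print(round(score, 2))  # (5*4 + 0 + 3) / 11 → 2.09
```

A higher score means the behavioral value justifies the test's lifetime cost; by the cost table, an E2E test needs roughly 3.5× the benefit of an integration test to earn the same score.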

Test Skeleton Specification

Required Comment Patterns

Each test MUST include the following annotations:

// AC: [Original acceptance criteria text]
// Behavior: [Trigger] → [Process] → [Observable Result]
// @category: core-functionality | integration | edge-case | e2e
// @dependency: none | [component names] | full-system
// @complexity: low | medium | high
// ROI: [score]

Verification Items (Optional)

When verification points need explicit enumeration:

// Verification items:
// - [Item 1]
// - [Item 2]
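Put together, a skeleton following the annotation patterns above might look like this pytest sketch. The feature under test (`apply_discount`) and the AC text are hypothetical examples, not part of the skill specification.

```python
# Illustrative pytest skeleton using the required comment patterns.
# apply_discount is a hypothetical function under test.

def apply_discount(total, rate):
    """Hypothetical feature: apply a fractional discount, round to cents."""
    return round(total * (1 - rate), 2)

# AC: Order total is reduced by the configured discount rate
# Behavior: checkout with discount → rate applied to total → reduced total returned
# @category: core-functionality
# @dependency: none
# @complexity: low
# ROI: 4.2
def test_discount_reduces_order_total():
    # Verification items:
    # - Discounted total equals original total minus the rate
    # - Result is rounded to two decimal places
    total = apply_discount(100.00, 0.15)  # Arrange + Act
    assert total == 85.00                 # Assert: observable result
```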

EARS Format Mapping

| EARS Keyword | Test Type | Generation Approach |
| --- | --- | --- |
| When | Event-driven | Trigger event → verify outcome |
| While | State condition | Setup state → verify behavior |
| If-then | Branch coverage | Both condition paths verified |
| (none) | Basic functionality | Direct invocation → verify result |
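As a sketch of how the mapping generates tests, consider a hypothetical event-driven `toggle` function; the two tests below cover the "When" row (trigger event → verify outcome) and the "If-then" row (both condition paths verified).

```python
# Hypothetical system under test: toggle() is not from the skill,
# it only illustrates the EARS → test-shape mapping.

def toggle(state, event):
    """Flips 'on'/'off' on a 'press' event; other events leave state unchanged."""
    if event == "press":  # If-then: both branches need a test
        return "off" if state == "on" else "on"
    return state

# When (event-driven): trigger the event, verify the outcome
def test_when_press_event_toggles_state():
    assert toggle("off", "press") == "on"

# If-then (branch coverage): verify the other condition path
def test_non_press_event_leaves_state_unchanged():
    assert toggle("off", "noop") == "off"
```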

Test File Naming Convention

  • Integration tests: *.int.test.* or *.integration.test.*
  • E2E tests: *.e2e.test.*

The test runner or framework in the project determines the appropriate file extension.

Review Criteria

Skeleton and Implementation Consistency

| Check | Failure Condition |
| --- | --- |
| Behavior Verification | No assertion for "observable result" in skeleton |
| Verification Item Coverage | Listed items not all covered by assertions |
| Mock Boundary | Internal components mocked in integration test |

Implementation Quality

| Check | Failure Condition |
| --- | --- |
| AAA Structure | Arrange/Act/Assert separation unclear |
| Independence | State sharing between tests, order dependency |
| Reproducibility | Date/random dependency, varying results |
| Readability | Test name doesn't match verification content |

Quality Standards

Required

  • Each test verifies one behavior
  • Clear AAA (Arrange-Act-Assert) structure
  • No test interdependencies
  • Deterministic execution

Prohibited

  • Testing implementation details
  • Multiple behaviors per test
  • Shared mutable state
  • Time-dependent assertions without mocking
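One common way to satisfy the last prohibition is to inject the clock rather than read real time inside the code under test, so every run asserts against the same instant. `is_expired` below is a hypothetical example, not from the skill.

```python
# Deterministic time handling: the test supplies a frozen 'now'
# instead of letting the code read the wall clock.

from datetime import datetime, timedelta

def is_expired(deadline, now=None):
    """Hypothetical check; defaults to real time, but tests inject a fixed one."""
    now = now or datetime.utcnow()
    return now > deadline

def test_expiry_is_deterministic():
    fixed_now = datetime(2024, 1, 1, 12, 0, 0)           # Arrange: frozen clock
    deadline = fixed_now - timedelta(minutes=1)          # Act: deadline already past
    assert is_expired(deadline, now=fixed_now) is True   # Assert: same result every run
```

Libraries such as `freezegun` (Python) or fake timers in JS test runners achieve the same effect when the clock cannot be injected.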

Source

git clone https://github.com/shinpr/claude-code-workflows.git

The skill file is at skills/integration-e2e-testing/SKILL.md in the repository.

Overview

Provides test type definitions and limits for integration and E2E tests, a behavior-first framework, ROI calculation, and skeleton/review criteria. It also covers EARS mapping, file naming conventions, and practical guidance for improving test quality in CI/CD pipelines.

How This Skill Works

It defines limits per test type (integration vs E2E), presents the ROI formula, and prescribes a standard test skeleton with required comments. It also maps tests to EARS keywords, specifies naming conventions, and establishes review criteria to ensure observable, deterministic behavior in CI.

When to Use It

  • Designing integration tests to verify component interactions with defined limits per feature
  • Creating E2E tests to validate critical user journeys in the final phase
  • Calculating ROI to prioritize test cases by business value, user frequency, and defect detection
  • Drafting test skeletons with required annotations such as AC, Behavior, category, dependency, complexity, and ROI
  • Reviewing test quality and consistency using skeleton/implementation criteria and mock boundary guidance

Quick Start

  1. Step 1: Define test type and limits for the feature (integration vs E2E) and note max counts per feature
  2. Step 2: Compute ROI using the ROI formula and record ROI in the test skeleton
  3. Step 3: Implement the test with the required skeleton comments and verify observable results using AAA

Best Practices

  • Verify one behavior per test to keep tests focused
  • Maintain a clear AAA (Arrange-Act-Assert) structure and observable result assertions
  • Use the required skeleton annotations including AC, Behavior, category, dependency, complexity, and ROI
  • Avoid external service real connections and UI layout specifics in CI by using contract verification and observable behavior
  • Ensure tests are deterministic, independent, and reproducible in CI environments

Example Use Cases

  • Integration test confirming business logic correctness, data integrity, and error handling for a data transformation feature
  • E2E test validating a full checkout flow from cart to payment in a production-like environment
  • ROI-driven prioritization example showing how ROI Score guides test selection between unit, integration, and E2E tests
  • Skeleton file containing AC and Behavior lines plus category, dependency, complexity, and ROI annotations
  • Review of test independence and mock boundary to ensure no state leakage between tests

