
reality-verification

Install:

npx machina-cli add skill tzachbon/smart-ralph/reality-verification --openclaw

Reality Verification

For fix goals: reproduce the failure BEFORE work, verify resolution AFTER.

Goal Detection

Classify user goals to determine if diagnosis is needed. See references/goal-detection-patterns.md for detailed patterns.

Quick reference:

  • Fix indicators: fix, repair, resolve, debug, patch, broken, failing, error, bug
  • Add indicators: add, create, build, implement, new
  • Conflict resolution: If both present, treat as Fix
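The indicator lists above, including the fix-wins conflict rule, can be sketched as a small POSIX-shell helper (`classify_goal` is an illustrative name, not part of the skill):

```shell
# Classify a goal string using the indicator lists above.
# Fix patterns are checked first, so mixed goals resolve to Fix.
classify_goal() {
  g=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$g" in
    *fix*|*repair*|*resolve*|*debug*|*patch*|*broken*|*failing*|*error*|*bug*)
      echo "Fix" ;;
    *add*|*create*|*build*|*implement*|*new*)
      echo "Add" ;;
    *)
      echo "Unclassified" ;;
  esac
}

classify_goal "Fix the broken build and add tests"   # both kinds present → Fix
```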

Command Mapping

| Goal Keywords | Reproduction Command |
|---|---|
| CI, pipeline | `gh run view --log-failed` |
| test, tests | project test command |
| type, typescript | `pnpm check-types` or `tsc --noEmit` |
| lint | `pnpm lint` |
| build | `pnpm build` |
| E2E, UI | MCP playwright |
| API, endpoint | MCP fetch |

For E2E/deployment verification, use MCP tools (playwright for UI, fetch for APIs).
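The rows of the mapping that resolve to shell commands can likewise be sketched as a lookup (`pick_command` is an illustrative name; pattern order matters because keywords overlap, and E2E/UI and API/endpoint goals route to MCP tools rather than a shell command):

```shell
# Map goal keywords to a reproduction command, per the table above.
# Patterns are checked top to bottom; adjust order for your project.
pick_command() {
  g=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$g" in
    *ci*|*pipeline*)   echo "gh run view --log-failed" ;;
    *type*)            echo "tsc --noEmit" ;;
    *lint*)            echo "pnpm lint" ;;
    *build*)           echo "pnpm build" ;;
    *test*)            echo "pnpm test" ;;
    *e2e*|*ui*|*api*|*endpoint*) echo "use MCP tools (playwright/fetch)" ;;
    *)                 echo "unknown" ;;
  esac
}

pick_command "CI is red on main"   # → gh run view --log-failed
```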

BEFORE/AFTER Documentation

BEFORE State (Diagnosis)

Document in .progress.md under ## Reality Check (BEFORE):

## Reality Check (BEFORE)

**Goal type**: Fix
**Reproduction command**: `pnpm test`
**Failure observed**: Yes
**Output**:

FAIL src/auth.test.ts
  Expected: 200
  Received: 401

**Timestamp**: 2026-01-16T10:30:00Z
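A BEFORE entry like the one above can be captured mechanically. A minimal POSIX-shell sketch (`record_before` is an illustrative helper name, and `false` stands in for a real reproduction command such as `pnpm test`):

```shell
# Append a "Reality Check (BEFORE)" entry to .progress.md.
# record_before is a hypothetical helper; fields mirror the skill's template.
record_before() {
  out=$("$@" 2>&1)
  status=$?
  {
    printf '## Reality Check (BEFORE)\n\n'
    printf '**Goal type**: Fix\n'
    printf '**Reproduction command**: `%s`\n' "$*"
    if [ "$status" -ne 0 ]; then
      printf '**Failure observed**: Yes\n'
    else
      printf '**Failure observed**: No\n'
    fi
    printf '**Output**:\n\n%s\n\n' "$out"
    printf '**Timestamp**: %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  } >> .progress.md
}

record_before false   # stand-in for a failing command like `pnpm test`
```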

AFTER State (Verification)

Document in .progress.md under ## Reality Check (AFTER):

## Reality Check (AFTER)

**Command**: `pnpm test`
**Result**: PASS
**Output**:

PASS src/auth.test.ts
All tests passed

**Comparison**: BEFORE failed with 401, AFTER passes
**Verified**: Issue resolved
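The AFTER entry can be recorded symmetrically, writing the resolution marker only when the command now succeeds (again a sketch; `record_after` is an illustrative name and `true` stands in for the real reproduction command):

```shell
# Append a "Reality Check (AFTER)" entry; write the "Verified" marker
# only on success. record_after is a hypothetical helper.
record_after() {
  out=$("$@" 2>&1)
  status=$?
  {
    printf '## Reality Check (AFTER)\n\n'
    printf '**Command**: `%s`\n' "$*"
    if [ "$status" -eq 0 ]; then
      printf '**Result**: PASS\n'
    else
      printf '**Result**: FAIL\n'
    fi
    printf '**Output**:\n\n%s\n\n' "$out"
    [ "$status" -eq 0 ] && printf '**Verified**: Issue resolved\n'
  } >> .progress.md
}

record_after true   # stand-in for the now-passing command
```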

VF Task Format

Add as task 4.3 (after PR creation) for fix-type specs:

- [ ] 4.3 VF: Verify original issue resolved
  - **Do**:
    1. Read BEFORE state from .progress.md
    2. Re-run reproduction command: `<command>`
    3. Compare output with BEFORE state
    4. Document AFTER state in .progress.md
  - **Verify**: `grep -q "Verified: Issue resolved" ./specs/<name>/.progress.md`
  - **Done when**: AFTER shows issue resolved, documented in .progress.md
  - **Commit**: `chore(<name>): verify fix resolves original issue`
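The **Verify** line of task 4.3 is directly runnable. A sketch with a placeholder spec name (`my-fix` is an assumption); note that the marker written to .progress.md must contain the exact substring the grep looks for:

```shell
name=my-fix   # placeholder spec name

# Stand-in for what step 4 of the task would have written:
mkdir -p "./specs/$name"
printf 'Verified: Issue resolved\n' >> "./specs/$name/.progress.md"

# The VF check itself, matching the task's **Verify** field:
if grep -q "Verified: Issue resolved" "./specs/$name/.progress.md"; then
  echo "VF passed"
else
  echo "VF failed"
  exit 1
fi
```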

Test Quality Checks

When verifying test-related fixes, check for mock-only test anti-patterns. See references/mock-quality-checks.md for detailed patterns.

Quick reference red flags:

  • Mock declarations > 3x real assertions
  • Missing import of actual module under test
  • All assertions are mock interaction checks (toHaveBeenCalled)
  • No integration tests
  • Missing mock cleanup (afterEach)
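The first two red flags can be partially automated with grep heuristics. A rough sketch assuming Jest/Vitest-style tests (`check_mock_heavy` and the search patterns are assumptions; the 3x threshold comes from the first red flag above):

```shell
# Heuristic scan for mock-only tests. Counts are approximate:
# grep matches lines, not individual calls.
check_mock_heavy() {
  f="$1"
  mocks=$(grep -cE 'jest\.mock|vi\.mock' "$f")
  asserts=$(grep -c 'expect(' "$f")
  mock_only=$(grep -c 'toHaveBeenCalled' "$f")
  if [ "$asserts" -gt 0 ] && [ "$mocks" -gt $((asserts * 3)) ]; then
    echo "RED FLAG: mock declarations ($mocks) > 3x assertions ($asserts) in $f"
  elif [ "$asserts" -gt 0 ] && [ "$mock_only" -eq "$asserts" ]; then
    echo "RED FLAG: all assertions are mock interaction checks in $f"
  else
    echo "OK: $f"
  fi
}

# Demo fixture: four mocks, one assertion.
cat > /tmp/mocky.test.ts <<'EOF'
jest.mock('./db')
jest.mock('./mailer')
jest.mock('./logger')
jest.mock('./queue')
expect(handler).toHaveBeenCalled()
EOF
check_mock_heavy /tmp/mocky.test.ts
```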

Why This Matters

| Without | With |
|---|---|
| "Fix CI" spec completes but CI still red | CI verified green before merge |
| Tests "fixed" but original failure unknown | Before/after comparison proves fix |
| Silent regressions | Explicit failure reproduction |
| Manual verification required | Automated verification in workflow |
| Tests pass but only test mocks | Tests verify real behavior, not mock behavior |
| False sense of security from green tests | Confidence that tests catch real bugs |

Source

`git clone https://github.com/tzachbon/smart-ralph` — the skill file lives at `plugins/ralph-specum/skills/reality-verification/SKILL.md` in that repository.

Overview

Reality Verification guides you to reproduce a failure before work and verify the fix after. It uses goal detection, command mappings, and BEFORE/AFTER documentation to ensure changes actually resolve the issue rather than masking it.

How This Skill Works

It classifies the user goal as Fix or Add (treating mixed goals as Fix), maps it to a reproduction command (such as pnpm test, pnpm build, or MCP tooling), and instructs documenting the BEFORE state in .progress.md. It then enforces the VF Task Format to codify the verification steps and compares AFTER results with BEFORE to confirm resolution.

When to Use It

  • When asked to verify a fix by reproducing the failure before and after changes
  • When diagnosing an issue to confirm if the failure is resolved
  • When documenting the BEFORE state and AFTER state in .progress.md
  • When performing a VF task to validate a fix after PR creation
  • When verifying test, CI, or API changes that affect behavior

Quick Start

  1. Identify the user goal and pick the matching reproduction command (e.g., pnpm test, pnpm build, or MCP fetch).
  2. Run that command and document the BEFORE state in .progress.md under ## Reality Check (BEFORE) with the command, observed failure, and timestamp.
  3. After the change, re-run the reproduction command, compare outputs, document the AFTER state, and complete the VF Task Format verification.

Best Practices

  • Document BEFORE explicitly under ## Reality Check (BEFORE) with reproduction command and observed failure
  • Use the recommended reproduction commands per goal (e.g., pnpm test, pnpm build, MCP) in your report
  • Keep AFTER state in .progress.md and compare with BEFORE to demonstrate resolution
  • Apply the VF Task Format as a structured checklist for verification
  • Check for mock-quality issues and ensure tests verify real behavior (not mocks)

Example Use Cases

  • BEFORE: test failing with 401, AFTER: PASS after fix
  • CI pipeline showed a failure; reproduced, fixed, and verified via the reproduction command
  • E2E UI regression reproduced with Playwright; after fix, UI tests pass
  • API endpoint mismatch reproduced with MCP fetch; after fix, response matches expected
  • TypeScript type-check corrected; BEFORE showed type errors, AFTER passes check-types
