
shipyard-verification

npx machina-cli add skill lgbarn/shipyard/shipyard-verification --openclaw
<!-- TOKEN BUDGET: 180 lines / ~540 tokens -->

Verification Before Completion

<activation>

Activation Triggers

  • About to claim work is "done", "complete", "fixed", or "passing"
  • About to commit, create a PR, or merge
  • Before any success assertion -- must have fresh evidence first
  • Using words like "should", "probably", "seems to" about work state

Natural Language Triggers

  • "is this done", "verify this", "check my work", "are we done", "does this work"
</activation>

Overview

Claiming work is complete without verification is dishonesty, not efficiency.

Core principle: Evidence before claims, always.

Violating the letter of this rule is violating the spirit of this rule.

<instructions>

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

If you haven't run the verification command in this message, you cannot claim it passes.

The Gate Function

BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
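The five steps can be sketched as a small gate helper. This is a hedged illustration, not part of the skill itself: the function name `gate` and the sample command are assumptions.

```python
# Hedged sketch of the Gate Function; names and sample command are
# illustrative, not a Shipyard API.
import subprocess
import sys

def gate(claim, command):
    # RUN: execute the full command fresh, never reuse an old result
    result = subprocess.run(command, capture_output=True, text=True)
    # READ: show the full output so the evidence is inline
    print(result.stdout, end="")
    # VERIFY: only an exit code of 0 supports the claim
    if result.returncode == 0:
        return f"{claim} (evidence: exit 0, output above)"
    return f"NOT DONE: exit {result.returncode} -- report actual status"

# ONLY THEN: the claim is emitted with its evidence attached
print(gate("tests pass", [sys.executable, "-c", "print('34 passing, 0 failing')"]))
```

Note the asymmetry: a failing exit code never produces a success claim, no matter how the call is phrased.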

Common Failures

| Claim | Requires | Not Sufficient |
|---|---|---|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
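The "red-green cycle verified" row deserves a concrete illustration. A hedged sketch with hypothetical function names: a regression test must fail against the old behavior and pass against the fix, otherwise it proves nothing.

```python
# Hedged red-green sketch: the validators and the test are hypothetical.
def buggy_validate(token):
    return True                      # original bug: empty tokens accepted

def fixed_validate(token):
    return bool(token)               # fix: reject empty tokens

def regression_test(validate):
    return validate("abc123") and not validate("")

# RED: the test must fail on the buggy code...
assert not regression_test(buggy_validate), "test never went red -- proves nothing"
# GREEN: ...and pass on the fixed code
assert regression_test(fixed_validate)
print("red-green cycle verified")
```

A test that has only ever passed could be testing nothing; the red run is what ties it to the bug.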

Teammate Mode

When SHIPYARD_IS_TEAMMATE=true:

  • Run verification locally and report results via task metadata (not STATE.json updates)
  • Do not update STATE.json — only the lead agent writes project state
  • TaskCompleted hook enforces evidence — the hook checks for verification artifacts before allowing task completion

In solo mode, this section has no effect.
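A minimal sketch of the teammate-mode branch, assuming a simple JSON metadata file. The file name and metadata shape are illustrative, not a documented Shipyard interface; only the `SHIPYARD_IS_TEAMMATE` check comes from the skill text.

```python
# Hedged sketch: metadata file name and shape are assumptions.
import json
import os

def report_verification(result, metadata_path="task-metadata.json"):
    if os.environ.get("SHIPYARD_IS_TEAMMATE") == "true":
        # teammate mode: report evidence via task metadata, never STATE.json
        with open(metadata_path, "w") as f:
            json.dump({"verification": result}, f)
        return "reported via task metadata"
    # solo mode: this branch has no effect
    return "solo mode"

os.environ["SHIPYARD_IS_TEAMMATE"] = "true"
print(report_verification({"command": "npm test", "failures": 0}))
```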

</instructions>

<rules>

Red Flags - STOP

  • Using "should", "probably", "seems to"
  • Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
  • About to commit/push/PR without verification
  • Trusting agent success reports
  • Relying on partial verification
  • Thinking "just this once"
  • Tired and wanting work over
  • ANY wording implying success without having run verification

Rationalization Prevention

| Excuse | Reality |
|---|---|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence != evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter != compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion != excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

When To Apply

ALWAYS before:

  • ANY variation of success/completion claims
  • ANY expression of satisfaction
  • ANY positive statement about work state
  • Committing, PR creation, task completion
  • Moving to next task
  • Delegating to agents

Rule applies to:

  • Exact phrases
  • Paraphrases and synonyms
  • Implications of success
  • ANY communication suggesting completion/correctness
</rules>

<examples>

Verification Examples

Good: Claim backed by evidence

Agent: I'll run the tests to verify.

$ npm test
> 34 passing (1.2s)
> 0 failing

All 34 tests pass. The refactored auth module is working correctly.

Why it works: The agent ran the command, read the output, and THEN made the claim. The evidence is inline and verifiable.

Bad: Assertion without evidence

Agent: I've updated the auth module to fix the token validation bug.
The tests should pass now. Let me move on to the next task.

Why it fails: "should pass" is not evidence. No command was run. The agent is guessing based on the code change, not verifying the outcome.

Good: Honest report of partial success

Agent: Running the full test suite:

$ pytest tests/
> 28 passed, 2 failed

28 of 30 tests pass. Two failures remain in test_auth_refresh.py:
- test_expired_token_refresh: expects 401 but gets 200
- test_concurrent_refresh: race condition in token store

The original bug (token validation) is fixed, but these two
related failures need investigation before this is complete.

Bad: Trusting an agent's self-report

Agent: The builder agent reported success. Phase complete!

Why it fails: Agent self-reports are not verification. Check the VCS diff and run the tests yourself.
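A hedged sketch of that independent check, run against a scratch repository so it is self-contained; in practice you would point it at the agent's working tree and then re-run the tests yourself.

```python
# Hedged sketch: inspect the VCS diff instead of trusting "success".
import os
import subprocess
import tempfile

def has_changes(repo):
    out = subprocess.run(["git", "status", "--porcelain"],
                         cwd=repo, capture_output=True, text=True, check=True)
    return bool(out.stdout.strip())

# scratch repo so the demo is self-contained
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)
print(has_changes(repo))   # a "success" report with an empty diff is suspect

with open(os.path.join(repo, "fix.py"), "w") as f:
    f.write("print('fix')\n")
print(has_changes(repo))   # changes exist -- now run the tests yourself
```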

</examples>

Why This Matters

From failure analysis: your human partner said "I don't believe you" -- trust broken. Undefined functions shipped. Missing requirements shipped. Time wasted on a false completion claim, then a redirect, then rework.

No shortcuts for verification. Run the command. Read the output. THEN claim the result. This is non-negotiable.

Source

git clone https://github.com/lgbarn/shipyard.git

The skill file is at skills/shipyard-verification/SKILL.md.

Overview

Shipyard-verification enforces evidence-first completion: you may only claim work is done after running a verification command and obtaining fresh output. It emphasizes documenting verification results before any success statements and avoids implying completion without verifiable proof.

How This Skill Works

Follow the Iron Law: no completion claims without fresh verification evidence. The Gate Function—Identify, Run, Read, Verify, Only Then—guides you to execute the full command, inspect its output and exit status, confirm the claim, and only then state success with evidence.

When to Use It

  • Before claiming a task is done, complete, fixed, or passing
  • Before committing, creating a PR, or merging changes
  • Before any explicit success assertion about work state
  • When you are tempted to say "should", "probably", or "seems to" about the work
  • Before moving to the next task or delegating to agents

Quick Start

  1. Identify the verification command that proves the claim
  2. Run the full verification command and capture fresh output
  3. Read the output, verify it confirms the claim, and only then claim success with evidence
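"Read and verify" means actually parsing the fresh output rather than skimming it. A hedged sketch; the pytest-style summary line is just an example format, not a required interface.

```python
# Hedged sketch of "read the output, count failures".
import re

def count_failures(output):
    m = re.search(r"(\d+) failed", output)
    return int(m.group(1)) if m else 0

fresh_output = "28 passed, 2 failed in 3.1s"
failures = count_failures(fresh_output)
if failures == 0:
    print("complete -- evidence: 0 failures in fresh output")
else:
    print(f"NOT complete: {failures} failures remain")
```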

Best Practices

  • Identify the exact verification command that proves the claim before you run it
  • Run the full verification command and ensure you capture fresh output (not a cached result)
  • Read the entire output, check exit codes, and count failures
  • Verify that the output actually confirms the claim; if not, report the actual status with evidence
  • Only make the claim after verification; never skip steps or assert success without evidence

Example Use Cases

  • Agent runs the test suite and captures a fresh run showing the test command output (e.g., 0 failures or all tests passing) before claiming tests pass with evidence
  • Agent executes linter and build checks, confirms exit code 0 and no errors, then reports a clean build with evidence
  • Agent re-runs an original failing symptom to verify it's fixed, documenting the verification output and result
  • Agent updates code and validates VCS diffs and test results; only after verification does it claim agent completed
  • In teammate mode, verification results are reported via task metadata (not STATE.json); the lead agent writes project state after evidence is gathered
