Assumption Testing Principles

npx machina-cli add skill bofrese/bob/assumption-testing --openclaw

Compact reference for identifying, ranking, and testing assumptions before building. Apply when working on validation planning, MVP definition, or early-stage product validation.


Core Truth

Every product idea is a stack of assumptions. Most products fail because a core assumption was wrong, not because execution was poor. Find the riskiest assumptions and test them before writing code.


Assumption Types

Desirability: Do customers want this? Will they pay? Is problem painful enough?

Feasibility: Can we build it? Scale it? Support it? Distribute it?

Test desirability first. No point building something feasible that no one wants.


The Assumption Stack

Build explicit list: "Our product succeeds IF..."

├─ People have problem X (desirability)
├─ Current solutions fail to solve X (desirability)
├─ Our approach solves X better (desirability)
├─ People will pay $Y (desirability)
├─ We can build in Z months (feasibility)
├─ CAC < $W (feasibility)
└─ Can scale to N users (feasibility)

Risk Ranking

Impact × Uncertainty = Risk

Impact Scale

  • Critical: Product fails completely if wrong
  • High: Major pivot required
  • Medium: Delays or scope reduction
  • Low: Minor adjustment

Uncertainty Scale

  • High: Pure guess, no evidence
  • Medium: Some signals, inconclusive
  • Low: Strong evidence, confident

Priority Matrix

| Uncertainty ↓ / Impact → | Critical | High | Medium | Low |
|---|---|---|---|---|
| High | TEST FIRST | Test early | Test when convenient | Assume |
| Medium | Test early | Test soon | Monitor | Assume |
| Low | Validate once | Monitor | Assume | Assume |

Focus: high uncertainty + high impact.
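The ranking can be sketched in a few lines of Python; the numeric scales and example assumptions below are illustrative, not part of the skill:

```python
# Illustrative sketch: score assumptions by risk = impact x uncertainty
# using simple ordinal scales (the exact numbers are an assumption).
IMPACT = {"critical": 4, "high": 3, "medium": 2, "low": 1}
UNCERTAINTY = {"high": 3, "medium": 2, "low": 1}

def risk_score(impact: str, uncertainty: str) -> int:
    """Risk = Impact x Uncertainty."""
    return IMPACT[impact] * UNCERTAINTY[uncertainty]

# Hypothetical assumption stack: (assumption, impact, uncertainty)
assumptions = [
    ("People have problem X", "critical", "high"),
    ("People will pay $Y", "critical", "medium"),
    ("Can scale to N users", "medium", "low"),
]

# Riskiest first: these are the ones to test before writing code.
ranked = sorted(assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, impact, uncertainty in ranked:
    print(f"{risk_score(impact, uncertainty):>2}  {name}")
```

High-impact, high-uncertainty items sort to the top, matching the "TEST FIRST" cell of the matrix.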


Validation Experiments

Structure per assumption:

  1. Hypothesis: "We believe [assumption]"
  2. Test: "To verify, we will [experiment]"
  3. Success: "We're right if [specific result]"
  4. Timeline: "[X days/weeks]"
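As a sketch, the four-part structure can be captured in a small record type; the class and field names here are hypothetical, not prescribed by the skill:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One validation experiment per assumption (field names illustrative)."""
    hypothesis: str     # "We believe [assumption]"
    test: str           # "To verify, we will [experiment]"
    success: str        # "We're right if [specific result]"
    timeline_days: int  # "[X days/weeks]"

# Hypothetical example
exp = Experiment(
    hypothesis="We believe freelancers lose billable hours to manual invoicing",
    test="run 15 problem-discovery conversations",
    success="at least 10 of 15 describe the problem unprompted",
    timeline_days=14,
)
```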

Validation Hierarchy (Cheapest → Most Expensive)

| Method | Cost | Time | Validates |
|---|---|---|---|
| Customer conversations | Free | 1-2 wk | Problem severity, alternatives |
| Landing page test | $500-2k | 1-2 wk | Value prop, demand |
| Prototype/mockup | 1-2 wk design | 2-3 wk | UX, features |
| Concierge MVP | High time, low dev | Ongoing | Willingness to pay, value |
| Wizard of Oz MVP | Moderate dev + high time | 2-4 wk + ongoing | Full experience, WTP |
| Working MVP | Highest | 4-12 wk | Everything |

Start at #1. Only move down when cheaper tests validate.


Build-Measure-Learn

Build (Minimum): Smallest thing to test the assumption.

Measure (Specific): Not "see if people like it." Instead: "20% trial → paid in 30d."

Learn (Honest):

  • Validated: Evidence supports, proceed
  • Invalidated: Evidence contradicts, pivot/kill
  • Inconclusive: Need more data or different test

Weak signals ≠ validation. "They liked it" ≠ "they paid for it."


MVP Scope Definition

MVP = Minimum to validate riskiest assumptions.

The Formula

  1. List all assumptions
  2. Rank by risk (uncertainty × impact)
  3. Identify top 3 riskiest
  4. Define minimum product to test those 3
  5. That's your MVP

What to Cut

Include: Features to test core assumptions, minimum core value, enough quality for actual use

Cut: Features that don't test assumptions, nice-to-haves, optimizations, polish, scale beyond testing needs

Test: "If we remove this, can we still validate our riskiest assumption?"

  • Yes → Cut it
  • No → Keep it
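The cut test above can be sketched as a simple filter; the feature names and assumption tags are purely illustrative:

```python
# Top 3 riskiest assumptions (hypothetical labels)
top3 = {"people have problem X", "people will pay $Y", "approach solves X better"}

# Each candidate feature is tagged with the assumptions it helps test.
features = [
    ("manual invoice entry", {"people have problem X"}),
    ("dark mode", set()),                        # tests nothing -> cut
    ("Stripe checkout", {"people will pay $Y"}),
]

# Keep a feature only if it helps validate a top-3 assumption.
mvp = [name for name, tests in features if tests & top3]
print(mvp)  # → ['manual invoice entry', 'Stripe checkout']
```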

Success Criteria

Define "validated" BEFORE running experiment.

Good Criteria

✅ Specific: "10 customers pre-pay for beta"
✅ Achievable: "5 trial signups in 2 weeks"
✅ Meaningful: "20% convert to paid" (revenue validates WTP)

❌ Vague: "People seem interested"
❌ Unrealistic: "1000 users in 2 weeks" (without distribution)
❌ Weak signal: "100 email signups" (low commitment)

Commitment Ladder

  1. Said they like it (weakest)
  2. Gave email (low)
  3. Signed up for trial (medium)
  4. Used repeatedly (high)
  5. Paid for it (strong)
  6. Referred others (PMF signal)

Optimize for climbing the ladder, not stopping at the bottom.
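One way to compare signals by ladder position (an illustrative sketch, not part of the skill):

```python
# Commitment ladder, weakest to strongest; list index = signal strength.
LADDER = [
    "said they like it",
    "gave email",
    "signed up for trial",
    "used repeatedly",
    "paid",
    "referred others",
]

def commitment_level(signal: str) -> int:
    """Higher index = stronger evidence. One payment beats many compliments."""
    return LADDER.index(signal)
```

For example, `commitment_level("paid")` outranks `commitment_level("gave email")`, so an experiment producing payments is stronger validation than one producing signups.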


Red Flags

"We'll figure it out as we build" → No. Test assumptions before building.

"If we build it, they will come" → Distribution assumption = deadliest assumption.

"Everyone loved the idea" → Did they pay? If not, not validation.

"We need to launch to really know" → Launch = most expensive test. Run cheaper first.

"MVP needs 50 features" → Not MVP. Full product. Cut scope.


Validation Checklist

  • Listed all critical assumptions?
  • Ranked by uncertainty × impact?
  • Identified top 3 riskiest?
  • Designed experiments to test them?
  • Defined specific success criteria?
  • Run cheapest tests first?
  • Measuring commitment, not interest?

Key Insight

Goal isn't proving you're right. It's finding truth as cheaply as possible.

Being wrong quickly > being wrong slowly. Test fast, kill bad ideas faster.

Source

View on GitHub: https://github.com/bofrese/bob/blob/master/skills/assumption-testing/SKILL.md

Overview

Assumption Testing Principles guide identifying, ranking, and validating the beliefs behind a product idea before building. It emphasizes that every product idea is a stack of assumptions and that the riskiest ones should be tested first. It supports validation planning, MVP definition, and early-stage product validation as used by bob:validation-plan and bob:product-coach.

How This Skill Works

Start by building an explicit Assumption Stack describing what must be true for success. Rank each assumption by its Impact and Uncertainty to prioritize tests with a Risk Priority Matrix. Design cheap, structured Validation Experiments (Hypothesis, Test, Success, Timeline) and follow a Build-Measure-Learn loop to decide whether to pivot, persevere, or cut.

When to Use It

  • During validation planning for a new idea
  • While defining an MVP to test top riskiest assumptions
  • When prioritizing product experiments by risk and impact
  • Before writing code to confirm desirability and feasibility
  • When planning validation experiments on bob:validation-plan or bob:product-coach workflows

Quick Start

  1. Build an explicit assumption stack for your idea
  2. Rank by risk (Impact × Uncertainty) and pick the top 3 riskiest
  3. Design cheap validation experiments and set clear success criteria

Best Practices

  • List all assumptions explicitly and document their relationships in an Assumption Stack
  • Rank assumptions by Impact × Uncertainty to expose the riskiest bets
  • Prioritize high-uncertainty, high-impact tests and start cheap
  • Use validation methods in order of cost and speed (conversations, landing pages, prototypes) before heavier bets
  • Define success criteria before running any experiment and act on the evidence

Example Use Cases

  • Customer conversations to validate problem severity and alternatives
  • Landing page test to measure value proposition and demand
  • Prototype/mockup to validate UX and features
  • Concierge MVP to test willingness to pay and value
  • Wizard of Oz MVP to validate full experience and willingness to pay
