
niopd-pd-experiment

npx machina-cli add skill 8421bit/NioPD-Skills/NioPD-PD-experiment --openclaw

Experiment Design Skill

This skill designs rigorous product experiments for validation.

Theoretical Foundation

Experiment Components

  • Hypothesis: What we believe will happen
  • Variables: What we change (independent) and measure (dependent)
  • Control: Baseline for comparison
  • Sample Size: Statistical significance requirements
  • Duration: How long to run
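These five components can be captured in a small record type. A minimal sketch in Python; the class and field names are illustrative, not defined by the skill:

```python
from dataclasses import dataclass

@dataclass
class ExperimentDesign:
    """One record per experiment, mirroring the five components above."""
    hypothesis: str            # what we believe will happen
    independent_variable: str  # what we change
    dependent_metric: str      # what we measure
    control: str               # baseline for comparison
    sample_size_per_variant: int
    duration_days: int

# Hypothetical example:
exp = ExperimentDesign(
    hypothesis="We believe a shorter signup form will raise completion for new visitors.",
    independent_variable="number of signup form fields",
    dependent_metric="signup completion rate",
    control="current 8-field form",
    sample_size_per_variant=3841,
    duration_days=21,
)
```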

Instructions

Step 1: Define Hypothesis

"We believe [change] will cause [effect] for [users]."

Step 2: Design Test

| Element | Control | Treatment |
| --- | --- | --- |
| [Variable] | [Baseline] | [Change] |

Step 3: Determine Metrics

  • Primary: [What success looks like]
  • Secondary: [Supporting metrics]
  • Guardrails: [What not to break]
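One lightweight way to record the three metric tiers, with a guardrail check. The metric names and the helper below are hypothetical examples, not mandated by the skill:

```python
# Illustrative metric plan; metric names are hypothetical examples.
metrics = {
    "primary": ["signup_conversion_rate"],             # what success looks like
    "secondary": ["activation_rate", "time_on_page"],  # supporting metrics
    "guardrails": ["p95_page_load_ms", "error_rate"],  # must not regress
}

def guardrail_ok(control: float, treatment: float, tolerance: float = 0.05) -> bool:
    """True if a 'higher is worse' guardrail (e.g. error rate) has not
    regressed by more than `tolerance` (relative) versus control."""
    return treatment <= control * (1 + tolerance)
```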

Step 4: Calculate Sample Size

Based on:

  • Minimum detectable effect
  • Baseline conversion rate
  • Statistical significance level (typically α = 0.05, i.e. 95% confidence)
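With those three inputs fixed, the per-variant sample size for a two-sided two-proportion z-test follows from the standard formula. A stdlib-only Python sketch; the skill does not prescribe an implementation, and the 80% power default is my assumption:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-variant sample size for a two-sided two-proportion z-test.

    baseline_rate: control conversion rate (e.g. 0.10)
    mde: minimum detectable effect, absolute (e.g. 0.02 points)
    alpha: significance level (0.05 -> 95% confidence)
    power: chance of detecting a true effect of size `mde`
    """
    p1, p2 = baseline_rate, baseline_rate + mde
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return math.ceil(n)

# e.g. 10% baseline, +2-point MDE, 95% confidence, 80% power:
n = sample_size_per_variant(0.10, 0.02)  # roughly 3,800-3,900 per variant
```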

Step 5: Plan Duration

Consider:

  • Sample accumulation rate
  • Weekly patterns
  • External factors
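These considerations can be combined into a simple estimate that rounds the run time up to whole weeks, so weekday/weekend patterns are sampled evenly. A sketch with an assumed helper name, not part of the skill:

```python
import math

def experiment_duration_days(total_sample_size: int,
                             eligible_users_per_day: int,
                             min_weeks: int = 1) -> int:
    """Days needed to accumulate the sample, rounded up to full weeks
    so each day of the week is represented equally."""
    raw_days = math.ceil(total_sample_size / eligible_users_per_day)
    weeks = max(min_weeks, math.ceil(raw_days / 7))
    return weeks * 7

# e.g. 7,682 users total (two variants) at 500 eligible users/day:
days = experiment_duration_days(7682, 500)  # 16 raw days -> 3 weeks -> 21 days
```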

Step 6: Generate Document

File path: 04-plans/[YYYYMMDD]-experiment-v0.md

Output Specifications

  • File Naming: [YYYYMMDD]-experiment-v0.md
  • Location: 04-plans/
  • Template: references/experiment-template.md

Source

git clone https://github.com/8421bit/NioPD-Skills
The skill file lives at plugins/niopd/skills/NioPD-PD-experiment/SKILL.md on the init branch.

Overview

This skill designs rigorous product experiments for validation. It frames hypotheses, variables, and controls to produce reliable results. It also emphasizes sample size and duration planning to support data-driven decisions.

How This Skill Works

Follow a structured workflow: define a test hypothesis, design a test with control and treatment elements, and select primary and secondary metrics. Then calculate the required sample size and plan the study duration before generating the documentation.

When to Use It

  • Validating a new feature before a full rollout
  • Optimizing a funnel by A/B testing changes
  • Making data-driven product bets with pilots
  • Testing performance or UX changes across a user segment
  • Documenting experiments with standard file paths for team templates

Quick Start

  1. Define Hypothesis: 'We believe [change] will cause [effect] for [users].'
  2. Design Test and Metrics: specify Control vs Treatment and choose primary and secondary metrics.
  3. Calculate Sample Size, Plan Duration, and Generate Document: determine the MDE, baseline rate, and significance level, then save to 04-plans/[YYYYMMDD]-experiment-v0.md using references/experiment-template.md.

Best Practices

  • Start with a clear hypothesis in the exact format: 'We believe [change] will cause [effect] for [users].'
  • Explicitly define the independent (variable) and dependent (metric) measures, plus a proper control.
  • Define primary and secondary metrics and guardrails to prevent unintended breaks.
  • Compute the minimum detectable effect and use 95% statistical significance for sample sizing.
  • Document your experiment in the prescribed location: 04-plans/[YYYYMMDD]-experiment-v0.md using the provided template.

Example Use Cases

  • A/B test: change homepage CTA color to impact conversion rate
  • Pilot: test a new onboarding flow with a small user segment
  • A/B test: adjust product pricing tier to measure signup rate
  • Optimization: reorder search results to boost click-through
  • Performance: test a faster page load to improve user satisfaction
