
ln-521-test-researcher

npx machina-cli add skill levnikolaevich/claude-code-skills/ln-521-test-researcher --openclaw

Test Researcher

Researches real-world problems and edge cases before test planning to ensure tests cover actual user pain points, not just AC.

Purpose & Scope

  • Research common problems for the feature domain using Web Search, MCP Ref, Context7.
  • Analyze how competitors solve the same problem.
  • Find customer complaints and pain points from forums, StackOverflow, Reddit.
  • Post structured findings as a Linear comment for downstream skills (ln-522, ln-523).
  • No test creation or status changes.

When to Use

This skill should be used when:

  • Invoked by ln-520-test-planner at the start of the test planning pipeline
  • The Story has non-trivial functionality (external APIs, file formats, authentication)
  • Edge cases beyond the AC need to be discovered

Skip research when:

  • Story is trivial (simple CRUD, no external dependencies)
  • Research comment already exists on Story
  • User explicitly requests to skip

Workflow

Phase 1: Discovery

Input: Story ID from the orchestrator (ln-520)

Auto-discover the Team ID from docs/tasks/kanban_board.md.

Phase 2: Extract Feature Domain

  1. Fetch Story from Linear
  2. Parse Story goal and AC to identify:
    • What technology/API/format is involved?
    • What is the user's goal? (e.g., "translate XLIFF files", "authenticate via OAuth")
  3. Extract keywords for research queries
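Keyword extraction in step 3 could be sketched like this. It is a naive illustration, not the skill's actual mechanism: in practice the agent reads the Story itself, but the sketch shows the kind of terms worth pulling out (tech acronyms and quoted user goals):

```python
import re

def extract_keywords(story_text: str) -> list[str]:
    """Pull candidate research keywords from a Story description.

    A deliberately naive sketch: collects acronym-like tech terms
    (OAuth, XLIFF, CSV, ...) and quoted user goals, deduplicated
    in first-seen order.
    """
    acronyms = re.findall(r"\b[A-Z][A-Za-z0-9]*[A-Z]\w*\b", story_text)  # e.g. OAuth, XLIFF
    quoted = re.findall(r'"([^"]+)"', story_text)                        # e.g. "translate XLIFF files"
    seen: dict[str, None] = {}
    for term in acronyms + quoted:
        seen.setdefault(term, None)  # dedupe, preserve order
    return list(seen)
```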

Phase 3: Research Common Problems

Use available tools to find real-world problems:

  1. Web Search:

    • "[feature] common problems"
    • "[format] edge cases"
    • "[API] gotchas"
    • "[technology] known issues"
  2. MCP Ref:

    • ref_search_documentation("[feature] error handling best practices")
    • ref_search_documentation("[format] validation rules")
  3. Context7:

    • Query relevant library docs for known issues
    • Check API documentation for limitations

Phase 4: Research Competitor Solutions

  1. Web Search:

    • "[competitor] [feature] how it works"
    • "[feature] comparison"
    • "[product type] best practices"
  2. Analysis:

    • How do market leaders handle this functionality?
    • What UX patterns do they use?
    • What error handling approaches are common?

Phase 5: Research Customer Complaints

  1. Web Search:

    • "[feature] complaints"
    • "[product type] user problems"
    • "[format] issues reddit"
    • "[format] issues stackoverflow"
  2. Analysis:

    • What do users actually struggle with?
    • What are common frustrations?
    • What gaps exist between user expectations and typical implementations?
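The query templates from Phases 3-5 can be expanded mechanically once the Phase 2 keywords are known. A small sketch (function and parameter names are illustrative):

```python
def build_queries(feature: str, fmt: str, competitors: list[str]) -> dict[str, list[str]]:
    """Expand the Phase 3-5 search-query templates for a concrete feature.

    `feature`, `fmt`, and `competitors` come from the keywords extracted
    in Phase 2; the strings mirror the template lists above.
    """
    return {
        "common_problems": [
            f"{feature} common problems",
            f"{fmt} edge cases",
            f"{feature} gotchas",
        ],
        "competitors": [f"{c} {feature} how it works" for c in competitors]
        + [f"{feature} comparison"],
        "complaints": [
            f"{feature} complaints",
            f"{fmt} issues reddit",
            f"{fmt} issues stackoverflow",
        ],
    }
```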

Phase 6: Compile and Post Findings

  1. Compile findings into categories:

    • Input validation issues (malformed data, encoding, size limits)
    • Edge cases (empty input, special characters, Unicode)
    • Error handling (timeouts, rate limits, partial failures)
    • Security concerns (injection, authentication bypass)
    • Competitor advantages (features we should match or exceed)
    • Customer pain points (problems users actually complain about)
  2. Post a Linear comment on the Story with the research summary:

## Test Research: {Feature}

### Sources Consulted
- [Source 1](url)
- [Source 2](url)

### Common Problems Found
1. **Problem 1:** Description + test case suggestion
2. **Problem 2:** Description + test case suggestion

### Competitor Analysis
- **Competitor A:** How they handle this + what we can learn
- **Competitor B:** Their approach + gaps we can exploit

### Customer Pain Points
- **Complaint 1:** What users struggle with + test to prevent
- **Complaint 2:** Common frustration + how to verify we solve it

### Recommended Test Coverage
- [ ] Test case for problem 1
- [ ] Test case for competitor parity
- [ ] Test case for customer pain point

---
_This research informs both manual tests (ln-522) and automated tests (ln-523)._
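Posting the compiled comment could look like the sketch below, which builds a request against Linear's GraphQL API using the `commentCreate` mutation. Verify the mutation and input field names against the current Linear API documentation before relying on them:

```python
import json
import urllib.request

LINEAR_API = "https://api.linear.app/graphql"

def build_comment_request(api_key: str, issue_id: str, body: str) -> urllib.request.Request:
    """Build the GraphQL request that posts the research comment.

    Uses Linear's commentCreate mutation; the Authorization header takes
    a personal API key directly (no 'Bearer' prefix for personal keys).
    """
    query = """
    mutation CommentCreate($input: CommentCreateInput!) {
      commentCreate(input: $input) { success comment { id } }
    }"""
    payload = {"query": query, "variables": {"input": {"issueId": issue_id, "body": body}}}
    return urllib.request.Request(
        LINEAR_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": api_key},
        method="POST",
    )
```

The request is only built here, not sent; sending it with `urllib.request.urlopen` (or any HTTP client) is left to the caller.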

Critical Rules

  • No test creation: Only research and documentation.
  • No status changes: Only Linear comment.
  • Source attribution: Always include URLs for sources consulted.
  • Actionable findings: Each problem should suggest a test case.
  • Skip trivial Stories: Don't research "Add button to page".

Definition of Done

  • Feature domain extracted from Story (technology/API/format identified)
  • Common problems researched (Web Search + MCP Ref + Context7)
  • Competitor solutions analyzed (at least 1-2 competitors)
  • Customer complaints found (forums, StackOverflow, Reddit)
  • Findings compiled into categories
  • Linear comment posted with "## Test Research: {Feature}" header
  • At least 3 recommended test cases suggested

Output: Linear comment with research findings for ln-522 and ln-523 to use.

Reference Files

  • Research methodology: Web Search, MCP Ref, Context7 tools
  • Comment format: Structured markdown with sources
  • Downstream consumers: ln-522-manual-tester, ln-523-auto-test-planner

Version: 1.0.0 Last Updated: 2026-01-15

Source

git clone https://github.com/levnikolaevich/claude-code-skills.git

The skill lives at ln-521-test-researcher/SKILL.md in the repository.

Overview

The ln-521 Test Researcher skill investigates real-world problems, edge cases, and customer pain points before test planning. It gathers findings from web search, MCP Ref, and Context7, analyzes competitor solutions, and posts a structured summary as a Linear comment for downstream skills ln-522 and ln-523.

How This Skill Works

A phase-driven workflow auto-discovers the team context and extracts the feature goal from the Story. It conducts research across Web Search, MCP Ref, and Context7 to identify common problems, competitive approaches, and customer complaints, then compiles findings into structured categories and posts them as a Linear comment for ln-522 and ln-523. No tests are created and no statuses are changed.

When to Use It

  • Invoked at the start of test planning by ln-520-test-planner for non-trivial features.
  • When the feature uses external APIs, file formats, or authentication.
  • When edge cases beyond acceptance criteria need discovery.
  • Skip if the story is trivial, a research comment already exists, or the user requests to skip.
  • Use to align test planning with competitor insights and customer pain points.

Quick Start

  1. Trigger when test planning starts (ln-520).
  2. Gather the Story goal, AC, and technologies involved; run discovery and research.
  3. Post a Linear comment summarizing findings for ln-522 and ln-523.

Best Practices

  • Auto-discover team context and extract feature goals early to identify involved tech.
  • Use Web Search, MCP Ref, and Context7 to surface common problems and edge cases.
  • Research competitor solutions and UX patterns to guide coverage.
  • Compile findings into categories: input validation, edge cases, error handling, security, and customer pain points.
  • Post a structured Linear comment with a clear research template; avoid creating tests or changing status.

Example Use Cases

  • Test Research: Payment API integration—edge cases, error handling, and competitor parity.
  • Test Research: CSV/Excel import edge cases and encoding issues.
  • Test Research: OAuth flow and token renewal robustness.
  • Test Research: Image upload with metadata and size limits.
  • Test Research: Webhook failure modes and retry strategies.
