# Bug Reproduction Test Generator

Install:

```bash
npx machina-cli add skill ArabelaTso/Skills-4-SE/bug-reproduction-test-generator --openclaw
```

Generate executable tests that reproduce reported bugs based on issue reports and code repositories.
## Workflow
Follow these steps to generate a bug reproduction test:
### 1. Analyze the Issue Report
Extract key information from the issue report:
- Symptoms: What goes wrong? (incorrect output, exception, crash, assertion failure, unexpected behavior)
- Affected components: Which modules, classes, or functions are involved?
- Triggering conditions: What inputs, states, or sequences trigger the bug?
- Stack traces: If provided, identify the call chain and failure point
- Expected vs. actual behavior: What should happen vs. what actually happens?
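The extracted information can be held in a small structure before generating the test. A minimal sketch; the `IssueAnalysis` type and its field names are illustrative, not part of any interface this skill defines:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IssueAnalysis:
    """Structured summary of a bug report; field names are illustrative."""
    symptoms: list          # e.g. "ZeroDivisionError: division by zero"
    components: list        # affected modules, classes, or functions
    trigger: str            # inputs, state, or sequence that provokes the bug
    expected: str           # behavior the reporter expected
    actual: str             # behavior the reporter observed
    stack_trace: Optional[str] = None

# Filled in from the division-by-zero report used later in this document
analysis = IssueAnalysis(
    symptoms=["ZeroDivisionError: division by zero"],
    components=["stats.calculate_average"],
    trigger="calling calculate_average([]) with an empty list",
    expected="returns 0",
    actual="raises ZeroDivisionError",
    stack_trace='File "stats.py", line 15, in calculate_average',
)
```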
### 2. Inspect the Repository
Identify relevant code and context:
- Locate the affected components mentioned in the issue
- Find entry points (public APIs, main functions, test fixtures)
- Understand dependencies and required setup
- Identify the test framework used (pytest, unittest, JUnit, Jest, etc.)
- Check existing test patterns for consistency
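Framework detection can be sketched by probing for conventional config files; the marker file names below are common conventions, not an exhaustive or authoritative list:

```python
from pathlib import Path

# Conventional marker files per framework (illustrative, not exhaustive)
FRAMEWORK_MARKERS = {
    "pytest": ("pytest.ini", "conftest.py", "tox.ini"),
    "JUnit": ("pom.xml", "build.gradle"),
    "Jest": ("jest.config.js", "jest.config.ts"),
}

def detect_frameworks(repo_root):
    """Return the frameworks whose marker files exist under repo_root."""
    root = Path(repo_root)
    return [name for name, markers in FRAMEWORK_MARKERS.items()
            if any((root / marker).exists() for marker in markers)]
```

In practice the result should be cross-checked against existing test files, since some markers are shared across tools.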
### 3. Generate the Reproduction Test

Create a minimal, focused test with the following properties.

**Test structure:**
- Uses the repository's existing test framework and conventions
- Sets up minimal preconditions needed to trigger the bug
- Executes the code path that triggers the bug
- Asserts the symptom described in the issue report
**Assertions:**
- For exceptions: Assert the exception type and message match the report
- For incorrect output: Assert actual output matches the reported incorrect behavior
- For crashes: Assert the crash occurs at the expected point
- For assertion failures: Reproduce the failing assertion
**Documentation:**
- Add inline comments explaining the reproduction logic
- Reference the issue number/URL in the test name or docstring
- Document any assumptions made due to underspecified details
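For the exception case above, pytest can pin both the exception type and the reported message in one assertion. A sketch; `divide` is a hypothetical stand-in for the real buggy code path:

```python
import pytest

def divide(a, b):
    # Hypothetical stand-in for the buggy code path under test
    return a / b

def test_reproduction_asserts_type_and_message():
    # match= is a regex search against str(exception), so the test
    # fails if either the exception type or the message drifts
    with pytest.raises(ZeroDivisionError, match="division by zero"):
        divide(1, 0)
```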
### 4. Output Format
Provide:
- Executable test code in the appropriate language and framework
- Inline comments explaining how the test reproduces the bug
- Markdown summary including:
- How the test reproduces the issue
- Which symptoms it validates
- Any assumptions made
- Instructions for running the test
## Example Workflow
**Issue Report:**

```text
Title: Division by zero in calculate_average()
Description: When calling calculate_average([]) with an empty list,
the function crashes with ZeroDivisionError instead of returning 0.

Stack trace:
  File "stats.py", line 15, in calculate_average
    return sum(values) / len(values)
ZeroDivisionError: division by zero
```
**Generated Test (Python/pytest):**

```python
import pytest
from stats import calculate_average

def test_calculate_average_empty_list_reproduction():
    """
    Reproduces bug: calculate_average([]) raises ZeroDivisionError
    Issue: #123
    Expected: Should return 0 for empty list
    Actual: Raises ZeroDivisionError
    """
    # Trigger the bug with empty list input
    with pytest.raises(ZeroDivisionError):
        calculate_average([])
    # This test currently passes (reproduces the bug)
    # After fix, change to: assert calculate_average([]) == 0
```
**Summary:**

### Bug Reproduction Test

**Issue**: Division by zero in calculate_average()

**How it reproduces the bug:**
- Calls `calculate_average([])` with an empty list
- Asserts that ZeroDivisionError is raised (the buggy behavior)

**Symptoms validated:**
- Exception type: ZeroDivisionError
- Location: stats.py line 15

**Assumptions:**
- The function should return 0 for empty lists (common convention)

**Running the test:**

```bash
pytest test_stats.py::test_calculate_average_empty_list_reproduction
```

**After the bug is fixed:** replace the `pytest.raises` assertion with `assert calculate_average([]) == 0`.
## Language-Specific Patterns
### Python (pytest/unittest)
```python
import pytest

def test_bug_reproduction_issue_123():
    """Reproduces bug #123: [brief description]"""
    # Setup: Create conditions that trigger the bug
    # Execute: Run the code that exhibits the bug
    # Assert: Verify the buggy behavior occurs
    with pytest.raises(ExpectedException):
        buggy_function()
```
### Java (JUnit)

```java
@Test
public void testBugReproduction_Issue123() {
    // Reproduces bug #123: [brief description]
    // Setup: Create conditions that trigger the bug
    // Execute and Assert: Verify the buggy behavior
    assertThrows(ExpectedException.class, () -> {
        buggyMethod();
    });
}
```
### JavaScript (Jest)

```javascript
test('reproduces bug #123: [brief description]', () => {
  // Setup: Create conditions that trigger the bug
  // Execute and Assert: Verify the buggy behavior
  expect(() => {
    buggyFunction();
  }).toThrow(ExpectedException);
});
```
## Constraints
- **Do not modify production code**: only create test code
- **Do not assume fixes**: test the buggy behavior, not the expected correct behavior (unless explicitly stated in the issue)
- **Document assumptions**: if the issue is underspecified, state assumptions clearly
- **Prefer minimal tests**: focus on isolating the bug; avoid unnecessary setup
- **Match existing patterns**: follow the repository's test conventions and style
## Handling Underspecified Issues
When the issue report lacks details:
- State assumptions explicitly in test comments
- Document what's unclear in the summary
- Provide multiple test variants if multiple interpretations are possible
- Ask clarifying questions if critical information is missing
**Example:**

```python
import pytest

def test_bug_reproduction_issue_456():
    """
    Reproduces bug #456: Null pointer exception in processData()
    ASSUMPTION: The bug occurs when input is None (not specified in issue)
    ASSUMPTION: Using default configuration (not specified in issue)
    """
    # Test with None input (assumed trigger); Python surfaces
    # "null pointer" style bugs as TypeError or AttributeError
    with pytest.raises(TypeError):
        processData(None)
```
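When multiple interpretations are possible, each plausible trigger can become a parametrized variant. A sketch; `processData` here is a hypothetical stub standing in for the real function named in the issue:

```python
import pytest

def processData(payload):
    # Hypothetical stub: the real function comes from the repository
    return payload.strip()

# One variant per plausible interpretation of the underspecified report
@pytest.mark.parametrize("candidate", [None, 0, []],
                         ids=["none-input", "int-input", "empty-list"])
def test_bug_reproduction_issue_456_variants(candidate):
    """Each variant exercises one assumed trigger for issue #456."""
    with pytest.raises(AttributeError):
        processData(candidate)
```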
## Tips for Effective Reproduction Tests
- **Start simple**: begin with the most direct path to trigger the bug
- **Isolate the bug**: remove unrelated setup and assertions
- **Make it deterministic**: avoid flaky conditions (timing, randomness)
- **Reference the issue**: include the issue number in the test name and comments
- **Verify it reproduces**: run the test and confirm it actually triggers the bug
- **Plan for the fix**: comment on how the test should change after the bug is fixed
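One way to plan for the fix is pytest's strict xfail: assert the *correct* behavior and mark the known failure, so the test reports XFAIL while the bug exists and fails loudly (strict XPASS) the moment the fix lands. A sketch using the earlier `calculate_average` example, with the buggy function inlined as a stand-in:

```python
import pytest

def calculate_average(values):
    # Inlined stand-in reproducing the buggy behavior from issue #123
    return sum(values) / len(values)

@pytest.mark.xfail(strict=True, raises=ZeroDivisionError,
                   reason="issue #123: empty input raises ZeroDivisionError")
def test_calculate_average_empty_list():
    # Asserts the expected correct behavior; xfail documents the bug
    assert calculate_average([]) == 0
```

With `strict=True`, fixing the bug turns the XFAIL into a failing XPASS, prompting removal of the marker and promotion to a plain regression test.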
## Source

[SKILL.md on GitHub](https://github.com/ArabelaTso/Skills-4-SE/blob/main/skills/bug-reproduction-test-generator/SKILL.md)

## Overview
Generates executable tests that reproduce bugs described in issue reports by analyzing the issue and the target repository. It creates minimal, framework-aligned reproduction tests that trigger the reported bug and can be added to the project’s test suite. Inline comments and issue references help reviewers understand the reproduction intent.
## How This Skill Works
The workflow analyzes the issue report to extract symptoms, affected components, and triggering conditions, then inspects the repository for entry points and test conventions. It generates a focused reproduction test using the project's test framework, with assertions for the symptom and references to the issue. The output includes executable test code, inline comments, and a Markdown summary.
## When to Use It
- When you have a clear issue report describing a bug symptom and a failure mode
- When you need to validate bug reports by generating reproducible test cases
- When you want to convert an issue report into executable regression tests
- When aligning new tests with the repository’s existing framework and conventions
- When you want tests that reliably trigger the reported bug across runs
## Quick Start

1. Provide the repository path or URL and the issue report
2. Run the generator to produce executable test code with inline comments
3. Run the test suite to verify the bug reproduces and iterate as needed
## Best Practices
- Follow the repository’s test framework and naming conventions
- Keep the reproduction test minimal and focused on the bug
- Include inline comments and reference the issue URL or number
- Assert the exact symptom (exception type/messages, outputs)
- Document assumptions when details in the report are underspecified
## Example Use Cases
- Python/pytest: reproduce ZeroDivisionError in calculate_average([])
- Java/JUnit: reproduce IllegalArgumentException for invalid inputs
- JavaScript/Jest: reproduce mismatched API response payload
- Ruby/RSpec: reproduce assertion failure in data processing
- Go/go test: reproduce panic on nil pointer