Testing Quality Standards

npx machina-cli add skill athola/claude-night-market/testing-quality-standards --openclaw
Shared quality standards and metrics for testing across all plugins in the Claude Night Market ecosystem.
When To Use
- Establishing test quality gates and coverage targets
- Validating test suite against quality standards
When NOT To Use
- Exploratory testing or spike work
- Projects with established quality gates that meet requirements
Coverage Thresholds
| Level | Coverage | Use Case |
|---|---|---|
| Minimum | 60% | Legacy code |
| Standard | 80% | Normal development |
| High | 90% | Critical systems |
| Detailed | 95%+ | Safety-critical |
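One way to enforce a threshold from this table is pytest-cov's `--cov-fail-under` flag, which fails the run when coverage drops below the target. A minimal sketch, assuming pytest-cov is installed and the package lives in `src/`:

```ini
# pytest.ini (sketch) - enforce the Standard 80% gate on every run
[pytest]
addopts = --cov=src --cov-fail-under=80
```

With this in place, `pytest` exits non-zero whenever total coverage falls below 80%, so CI pipelines catch regressions automatically.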
Quality Metrics
Structure
- Clear test organization
- Meaningful test names
- Proper setup/teardown
- Isolated test cases
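The structure guidelines above can be sketched in pytest. The `Stack` class here is a hypothetical stand-in for code under test; the fixture gives each test a fresh instance (proper setup, isolated cases), and the test names state the behavior being verified:

```python
import pytest

class Stack:
    """Minimal hypothetical class under test."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

@pytest.fixture
def stack():
    # A fresh instance per test keeps cases isolated; pytest handles teardown.
    return Stack()

def test_pop_returns_last_pushed_item(stack):
    # Meaningful name: describes behavior, not implementation.
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2

def test_pop_on_empty_stack_raises_index_error(stack):
    with pytest.raises(IndexError):
        stack.pop()
```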
Coverage
- Critical paths covered
- Edge cases tested
- Error conditions handled
- Integration points verified
Maintainability
- DRY test code
- Reusable fixtures
- Clear assertions
- Minimal mocking
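The maintainability points above often come down to parametrization: one test body, many cases, each with a single clear assertion. A sketch using a hypothetical `normalize_email` function:

```python
import pytest

def normalize_email(raw):
    # Hypothetical function under test.
    return raw.strip().lower()

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  User@Example.com ", "user@example.com"),  # whitespace + case folding
        ("plain@example.com", "plain@example.com"),   # already normalized
    ],
)
def test_normalize_email(raw, expected):
    # Parametrize keeps the test DRY; each case is still reported individually.
    assert normalize_email(raw) == expected
```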
Reliability
- No flaky tests
- Deterministic execution
- No order dependencies
- Fast feedback loop
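Flaky tests usually trace back to uncontrolled randomness, time, or shared state. Making the source of nondeterminism injectable restores deterministic execution. A sketch with a hypothetical `pick_retry_delay` helper:

```python
import random

def pick_retry_delay(rng=None):
    # Accepting an injectable RNG makes the function reproducible in tests
    # while still being random in production.
    rng = rng or random.Random()
    return rng.uniform(0.5, 1.5)

def test_retry_delay_is_deterministic_with_seeded_rng():
    # Same seed -> same value -> no flakiness, no order dependence.
    a = pick_retry_delay(random.Random(42))
    b = pick_retry_delay(random.Random(42))
    assert a == b
    assert 0.5 <= a <= 1.5
```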
Detailed Topics
For implementation patterns and examples:
- Anti-Patterns - Common testing mistakes with before/after examples
- Best Practices - Core testing principles and exit criteria
- Content Assertion Levels - L1/L2/L3 taxonomy for testing LLM-interpreted markdown files
Integration with Plugin Testing
This skill provides foundational standards referenced by:
- pensive:test-review - Uses coverage thresholds and quality metrics
- parseltongue:python-testing - Uses anti-patterns and best practices
- sanctum:test-* - Uses quality checklist and content assertion levels for test validation
- imbue:proof-of-work - Uses content assertion levels to enforce the Iron Law on execution markdown
Reference in your skill's frontmatter:
dependencies: [leyline:testing-quality-standards]
Verification: Run pytest -v to verify tests pass.
Troubleshooting
Common Issues
Tests not discovered
Ensure test files match pattern test_*.py or *_test.py. Run pytest --collect-only to verify.
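If your files follow a different naming scheme, pytest's discovery patterns can be widened in configuration. A sketch covering both conventions:

```ini
# pytest.ini (sketch) - collect both naming patterns
[pytest]
python_files = test_*.py *_test.py
```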
Import errors
Check that the module being tested is in PYTHONPATH or install with pip install -e .
Async tests failing
Install pytest-asyncio and decorate test functions with @pytest.mark.asyncio
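A minimal async test following that advice might look as below. This assumes pytest-asyncio is installed; `fetch_answer` is a hypothetical stand-in for real async code:

```python
import asyncio
import pytest

async def fetch_answer():
    # Stand-in async function for illustration.
    await asyncio.sleep(0)
    return 42

@pytest.mark.asyncio
async def test_fetch_answer():
    # Without the marker (or pytest-asyncio), pytest skips or errors on coroutines.
    assert await fetch_answer() == 42
```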
Source
git clone https://github.com/athola/claude-night-market (skill file at plugins/leyline/skills/testing-quality-standards/SKILL.md)
Overview
Defines shared quality gates and metrics for testing across all Claude Night Market plugins. It codifies coverage thresholds, quality metrics, anti-patterns, and content-assertion levels to drive consistent test quality.
How This Skill Works
Formalizes test quality into actionable sections such as Coverage Thresholds, Quality Metrics, and Detailed Topics, plus anti-patterns and content assertion levels. Plugins reference these standards through dependencies and checklists, enabling cross-plugin test validation and a unified quality language.
When to Use It
- Establishing test quality gates and coverage targets
- Validating test suites against defined quality standards
- Enforcing anti-pattern avoidance and best practices in tests
- Ensuring content assertions align with L1/L2/L3 levels
- Coordinating testing expectations across Claude Night Market plugins
Quick Start
- Step 1: Review the testing-quality-standards frontmatter and dependencies reference
- Step 2: Align your plugin's tests with Coverage Thresholds, Quality Metrics, and Content Assertion Levels
- Step 3: Run pytest -v to verify tests pass and meet defined thresholds
Best Practices
- Ensure clear test organization
- Use meaningful test names
- Maintain DRY test code with reusable fixtures
- Write deterministic, flaky-free tests with fast feedback
- Validate coverage against the defined thresholds (60/80/90/95%+)
Example Use Cases
- A plugin test suite achieving 90%+ coverage for critical paths with deterministic outcomes
- Refactoring to remove flaky tests in line with anti-pattern guidance
- Tests employing content assertion levels to validate markdown outputs
- Integration tests aligned with 90% coverage for safety-critical features
- Cross-plugin quality checks demonstrated in pensive:test-review and sanctum:test-* pipelines