math-review
npx machina-cli add skill athola/claude-night-market/math-review --openclaw
Table of Contents
- Quick Start
- When to Use
- Required TodoWrite Items
- Core Workflow
- 1. Context Sync
- 2. Requirements Mapping
- 3. Derivation Verification
- 4. Stability Assessment
- 5. Evidence Logging
- Progressive Loading
- Essential Checklist
- Output Format
- Summary
- Context
- Requirements Analysis
- Derivation Review
- Stability Analysis
- Issues
- Recommendation
- Exit Criteria
Mathematical Algorithm Review
Intensive analysis ensuring numerical stability and alignment with standards.
Quick Start
/math-review
Verification: Run the command with the --help flag to verify availability.
When To Use
- Changes to mathematical models or algorithms
- Statistical routines or probabilistic logic
- Numerical integration or optimization
- Scientific computing code
- ML/AI model implementations
- Safety-critical calculations
When NOT To Use
- General algorithm review - use architecture-review
- Performance optimization - use parseltongue:python-performance
Required TodoWrite Items
- math-review:context-synced
- math-review:requirements-mapped
- math-review:derivations-verified
- math-review:stability-assessed
- math-review:evidence-logged
Core Workflow
1. Context Sync
pwd && git status -sb && git diff --stat origin/main..HEAD
Verification: Run git status to confirm working tree state.
Enumerate math-heavy files (source, tests, docs, notebooks). Classify risk: safety-critical, financial, ML fairness.
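The enumeration step can be sketched in Python. This is an illustrative helper, not part of the skill itself; the extension and keyword lists are assumptions a reviewer would tune per repository.

```python
# Hypothetical helper: flag likely math-heavy files among changed paths.
# MATH_EXTS and MATH_HINTS are illustrative assumptions, not a fixed spec.
from pathlib import PurePosixPath

MATH_EXTS = {".py", ".ipynb", ".jl", ".m"}
MATH_HINTS = {"solver", "integrate", "stats", "optim", "model"}

def classify(paths):
    """Return the subset of changed paths likely to need math review."""
    flagged = []
    for p in paths:
        path = PurePosixPath(p)
        if path.suffix in MATH_EXTS and any(h in path.stem.lower() for h in MATH_HINTS):
            flagged.append(p)
    return flagged

changed = ["src/ode_solver.py", "docs/readme.md", "tests/test_stats_kernel.py"]
print(classify(changed))  # flags the solver and stats files
```

In practice the `changed` list would come from `git diff --name-only origin/main..HEAD`.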
2. Requirements Mapping
Translate requirements → mathematical invariants. Document pre/post conditions, conservation laws, bounds. Load: modules/requirements-mapping.md
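Translating a requirement into a checkable invariant can look like the following sketch, where a normalization routine asserts its own pre- and postconditions (the function and tolerance are illustrative, not from the skill's modules):

```python
# Sketch: encode a requirement ("output is a probability vector") as
# explicit pre/post conditions. Tolerances are an illustrative choice.
import math

def normalize(weights):
    """Precondition: weights non-negative, not all zero.
    Postcondition (invariant): result sums to 1 within float tolerance."""
    assert all(w >= 0 for w in weights), "precondition: non-negative weights"
    total = sum(weights)
    assert total > 0, "precondition: weights not all zero"
    probs = [w / total for w in weights]
    assert math.isclose(sum(probs), 1.0, rel_tol=1e-12), "postcondition: sums to 1"
    return probs

print(normalize([2.0, 3.0, 5.0]))
```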
3. Derivation Verification
Re-derive formulas using CAS. Challenge approximations. Cite authoritative standards (NASA-STD-7009, ASME VVUQ). Load: modules/derivation-verification.md
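In lieu of a full CAS session, a derivation can at least be spot-checked exactly with stdlib rational arithmetic. The closed form below is a standard identity used purely as an example; a real review would re-derive it symbolically per the module's guidance.

```python
# Exact spot-check of a claimed closed form using rational arithmetic
# (no floating-point rounding can mask a wrong derivation).
from fractions import Fraction

def closed_form(n):
    # Claimed derivation: sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6
    return Fraction(n * (n + 1) * (2 * n + 1), 6)

for n in range(1, 200):
    assert closed_form(n) == sum(Fraction(k) ** 2 for k in range(1, n + 1))
print("closed form matches the direct sum exactly for n = 1..199")
```

An exact match over a range is evidence, not proof; the symbolic re-derivation still belongs in the review log.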
4. Stability Assessment
Evaluate conditioning, precision, scaling, randomness. Compare complexity. Quantify uncertainty. Load: modules/numerical-stability.md
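A classic conditioning hazard the assessment should catch is catastrophic cancellation. The sketch below contrasts the textbook quadratic formula with a rewritten form; the coefficients are an illustrative worst case, not from the skill's modules.

```python
# Illustrative cancellation check: when b*b >> 4ac, the textbook
# quadratic formula subtracts nearly equal numbers and loses digits;
# the rewritten form recovers the small root from the product c/a.
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))  # avoids the cancelling subtraction
    return q / a, c / q                   # second entry is the small root

a, b, c = 1.0, 1e8, 1.0  # true small root is approximately -1e-8
print("naive :", roots_naive(a, b, c)[0])
print("stable:", roots_stable(a, b, c)[1])
```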
5. Evidence Logging
pytest tests/math/ --benchmark
jupyter nbconvert --execute derivation.ipynb
Verification: Run pytest -v tests/math/ to verify.
Log deviations, recommend: Approve / Approve with actions / Block. Load: modules/testing-strategies.md
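A logged evidence entry might be structured like the sketch below. The field names and verdict mapping are illustrative assumptions, not a schema defined by the skill.

```python
# Hypothetical evidence-log entry; field names are illustrative.
import datetime
import json

def log_evidence(check, passed, detail):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "check": check,
        "passed": passed,
        "detail": detail,
        "verdict": "Approve" if passed else "Block",
    }
    print(json.dumps(entry))  # one JSON object per line, easy to grep later
    return entry

entry = log_evidence("sum-of-squares closed form", True, "exact match for n <= 199")
```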
Progressive Loading
- Default (200 tokens): Core workflow, checklists
- +Requirements (+300 tokens): Invariants, pre/post conditions, coverage analysis
- +Derivation (+350 tokens): CAS verification, standards, citations
- +Stability (+400 tokens): Numerical properties, precision, complexity
- +Testing (+350 tokens): Edge cases, benchmarks, reproducibility
Total with all modules: ~1600 tokens
Essential Checklist
- Correctness: Formulas match spec | Edge cases handled | Units consistent | Domain enforced
- Stability: Condition number OK | Precision sufficient | No cancellation | Overflow prevented
- Verification: Derivations documented | References cited | Tests cover invariants | Benchmarks reproducible
- Documentation: Assumptions stated | Limitations documented | Error bounds specified | References linked
Output Format
## Summary
[Brief findings]
## Context
Files | Risk classification | Standards
## Requirements Analysis
| Invariant | Verified | Evidence |
## Derivation Review
[Status and conflicts]
## Stability Analysis
Condition number | Precision | Risks
## Issues
[M1] [Title]: Location | Issue | Fix
## Recommendation
Approve / Approve with actions / Block
Exit Criteria
- Context synced, requirements mapped, derivations verified, stability assessed, evidence logged with citations
Troubleshooting
Common Issues
- Command not found: ensure all dependencies are installed and in PATH
- Permission errors: check file permissions and run with appropriate privileges
- Unexpected behavior: enable verbose logging with the --verbose flag
Source
https://github.com/athola/claude-night-market/blob/master/plugins/pensive/skills/math-review/SKILL.md
Overview
Math-review performs intensive analysis to verify math-heavy code, ensuring numerical stability and alignment with mathematical standards. It focuses on correctness of derivations, invariants, and sensitivity, logging evidence to support trustworthy numerical software.
How This Skill Works
It follows a core workflow: Context Sync, Requirements Mapping, Derivation Verification, Stability Assessment, and Evidence Logging, aided by tools like derivation-checker, stability-analyzer, and reference-finder to ensure rigorous verification of mathematical correctness.
When to Use It
- Changes to mathematical models or algorithms
- Statistical routines or probabilistic logic
- Numerical integration or optimization
- Scientific computing code
- Safety-critical calculations
Quick Start
- Step 1: Run /math-review
- Step 2: Verify availability by running the command with the --help flag
- Step 3: Review the core workflow outputs and corresponding logs
Best Practices
- Use CAS-backed derivation checks to re-derive formulas and challenge approximations.
- Explicitly document preconditions, postconditions, and invariants for each module.
- Load and align with standards and references (e.g., NASA-STD-7009, ASME VVUQ) during verification.
- Assess conditioning, precision, scaling, and randomness to quantify uncertainty.
- Log deviations and verdicts clearly (Approve / Approve with actions / Block) with actionable notes.
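The "assess conditioning" practice above can be sketched numerically: estimate the relative condition number |x·f'(x)/f(x)| with a finite difference. The helper and step size are illustrative choices, not part of the skill.

```python
# Sketch: numerically estimate the relative condition number of f at x.
# The central-difference step h is an illustrative choice.
import math

def rel_condition(f, x, h=1e-6):
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

print(rel_condition(math.sqrt, 100.0))               # ~0.5: benign
print(rel_condition(lambda x: x - 1.0, 1.0 + 1e-4))  # large: cancellation risk near 1
```

A condition number near 1 or below suggests the problem is benign; a large value flags inputs where even a perfectly implemented formula amplifies rounding error.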
Example Use Cases
- Verifying numerical solvers for differential equations against analytical benchmarks.
- Rechecking probabilistic model formulas for consistency with statistical assumptions.
- Validating stability and conditioning of eigenvalue computations in control systems.
- Reviewing numerical integration schemes to ensure accuracy and convergence guarantees.
- Auditing optimization routines to confirm convergence criteria and boundary handling.
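For the numerical-integration use case above, a convergence spot-check is often the quickest evidence: for a second-order method like the composite trapezoid rule, halving the step should shrink the error roughly fourfold. A minimal sketch, using an integral with a known analytic value:

```python
# Convergence spot-check for the composite trapezoid rule (O(h^2)):
# halving h should divide the error by about 4.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

exact = 2.0  # integral of sin(x) on [0, pi], from the antiderivative -cos(x)
err_n = abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact)
err_2n = abs(trapezoid(math.sin, 0.0, math.pi, 128) - exact)
print(err_n / err_2n)  # a ratio near 4 confirms second-order convergence
```

A ratio far from 4 would indicate a bug, a non-smooth integrand, or a step size already at the rounding floor.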