
Sui Auto Test

@EasonC13

npx machina-cli add skill @EasonC13/sui-auto-test --openclaw

Sui Coverage Skill

Analyze and automatically improve Sui Move test coverage with security analysis.

Quick Reference

# Location of tools
SKILL_DIR=~/clawd/skills/sui-coverage

# Full workflow
cd /path/to/move/package
sui move test --coverage --trace
python3 $SKILL_DIR/analyze_source.py -m <module> -o coverage.md

Workflow: Auto-Improve Test Coverage

Step 1: Run Coverage Analysis

cd <package_path>
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module_name> -o coverage.md

Step 2: Read the Coverage Report

Read the generated coverage.md to identify:

  • 🔴 Uncalled functions - Functions never executed
  • 🔴 Uncovered assertions - assert!() failure paths not tested
  • 🔴 Uncovered branches - if/else paths not taken

Step 3: Write Missing Tests

For each uncovered item, write a test:

A. Uncalled Function

#[test]
fun test_<function_name>() {
    // Setup
    let mut ctx = tx_context::dummy();
    // Call the uncovered function
    <function_name>(...);
    // Assert expected behavior
}

B. Assertion Failure Path (expect_failure)

#[test]
#[expected_failure(abort_code = <ERROR_CODE>)]
fun test_<function>_fails_when_<condition>() {
    let mut ctx = tx_context::dummy();
    // Setup state that triggers the assertion failure
    <function_call_that_should_fail>();
}

C. Branch Coverage (if/else)

#[test]
fun test_<function>_when_<condition_true>() { ... }

#[test]  
fun test_<function>_when_<condition_false>() { ... }

Step 4: Verify Coverage Improved

sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module_name>

Tools

1. analyze_source.py (Primary Tool)

python3 ~/clawd/skills/sui-coverage/analyze_source.py --module <name> [options]

Options:
  -m, --module    Module name (required)
  -p, --path      Package path (default: .)
  -o, --output    Output file (e.g., coverage.md)
  --json          JSON output
  --markdown      Markdown to stdout

2. analyze.py (LCOV Statistics)

sui move coverage lcov
python3 ~/clawd/skills/sui-coverage/analyze.py lcov.info -f "<package>" -s sources/

Options:
  -f, --filter       Filter by path pattern
  -s, --source-dir   Source directory for context
  -i, --issues-only  Only show files with issues
  -j, --json         JSON output
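For reference, the `lcov.info` file that analyze.py consumes is plain text built from `SF:` (source file), `DA:<line>,<hit_count>`, and `end_of_record` records. A minimal sketch of the per-file hit/total computation — not the skill's actual script, just an illustration of the format:

```python
# Minimal sketch: per-file line coverage from an LCOV report.
# Illustrates the SF:/DA: record format only; the real analyze.py
# does more (filtering, source context, issue detection).
def lcov_line_coverage(text):
    results = {}
    current, hit, total = None, 0, 0
    for line in text.splitlines():
        if line.startswith("SF:"):
            current, hit, total = line[3:], 0, 0
        elif line.startswith("DA:"):
            _, count = line[3:].split(",")   # DA:<line>,<hits>
            total += 1
            if int(count) > 0:
                hit += 1
        elif line == "end_of_record" and current:
            results[current] = (hit, total)
            current = None
    return results

report = "SF:sources/vault.move\nDA:10,3\nDA:11,0\nDA:12,1\nend_of_record"
print(lcov_line_coverage(report))  # → {'sources/vault.move': (2, 3)}
```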

3. parse_bytecode.py (Low-level)

sui move coverage bytecode --module <name> | python3 ~/clawd/skills/sui-coverage/parse_bytecode.py

Common Patterns

Testing Assertion Failures

// Source code:
public fun withdraw(balance: &mut u64, amount: u64) {
    assert!(*balance >= amount, EInsufficientBalance);  // ← This failure path
    *balance = *balance - amount;
}

// Test for the failure path:
#[test]
#[expected_failure(abort_code = EInsufficientBalance)]
fun test_withdraw_insufficient_balance() {
    let mut balance = 50;
    withdraw(&mut balance, 100);  // Should fail: 50 < 100
}

Testing All Branches

// Source code:
public fun classify(value: u64): u8 {
    if (value == 0) {
        0
    } else if (value < 100) {
        1
    } else {
        2
    }
}

// Tests for all branches:
#[test]
fun test_classify_zero() {
    assert!(classify(0) == 0, 0);
}

#[test]
fun test_classify_small() {
    assert!(classify(50) == 1, 0);
}

#[test]
fun test_classify_large() {
    assert!(classify(100) == 2, 0);
}

Testing Object Lifecycle

#[test]
fun test_full_lifecycle() {
    let mut ctx = tx_context::dummy();
    
    // Create
    let mut obj = create(&mut ctx);
    assert!(get_value(&obj) == 0, 0);
    
    // Modify
    increment(&mut obj);
    assert!(get_value(&obj) == 1, 0);
    
    // Destroy
    destroy(obj);
}

Error Code Reference

When writing #[expected_failure] tests, use the error constant name:

// If the module defines:
const EInvalidInput: u64 = 1;
const ENotAuthorized: u64 = 2;

// Use in test:
#[expected_failure(abort_code = EInvalidInput)]
fun test_invalid_input() { ... }

// Or use the module-qualified name:
#[expected_failure(abort_code = my_module::EInvalidInput)]
fun test_invalid_input() { ... }

Example: Full Auto-Coverage Session

# 1. Analyze current coverage
cd ~/project/my_package
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m my_module -o coverage.md

# 2. Review what's missing
cat coverage.md
# Shows:
# - decrement() not called
# - assert!(value > 0, EValueZero) failure not tested

# 3. Add tests to sources/my_module.move or tests/my_module_tests.move
# (write the missing tests)

# 4. Verify improvement
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m my_module

# 5. Repeat until 100% coverage

Integration with Agent Workflow

When asked to improve test coverage:

  1. Run analysis - Get current coverage state
  2. Read source - Understand the module's logic
  3. Identify gaps - List uncovered functions/branches/assertions
  4. Security review - Analyze for vulnerabilities while writing tests
  5. Write tests - Create tests for each gap + security edge cases
  6. Report findings - Document any security concerns discovered
  7. Verify - Re-run coverage to confirm improvement

Always commit test improvements:

git add sources/ tests/
git commit -m "Improve test coverage for <module>"

Security Analysis During Testing

Writing tests = Understanding the contract = Finding vulnerabilities

When writing tests, actively look for these issues:

1. Access Control

Questions to ask:
- Who can call this function?
- Should there be owner/admin checks?
- Can unauthorized users manipulate state?

Red flags:
- Public functions that modify critical state without checks
- Missing capability/witness patterns

2. Integer Overflow/Underflow

Questions to ask:
- What happens at u64::MAX?
- What happens when subtracting from 0?
- Are arithmetic operations checked?

Test pattern:
#[test]
#[expected_failure]  // Move aborts on u64 arithmetic overflow
fun test_overflow_boundary() {
    let max = 18446744073709551615u64;  // u64::MAX
    let _sum = max + 1;  // overflow aborts, satisfying expected_failure
}

3. State Manipulation

Questions to ask:
- Can the contract be left in an inconsistent state?
- Are all state changes atomic?
- Can partial failures corrupt data?

Red flags:
- Multiple state changes without rollback
- Shared objects without proper locking

4. Economic Exploits

Questions to ask:
- Can someone extract more value than deposited?
- Are there rounding errors that can be exploited?
- Flash loan attack vectors?

Red flags:
- Price calculations without slippage protection
- Unbounded loops over user-controlled data

5. Denial of Service

Questions to ask:
- Can someone block legitimate users?
- Are there unbounded operations?
- Can storage be filled maliciously?

Red flags:
- Vectors that grow unbounded
- Loops over external data

Security Report Template

When analyzing a module, generate a security report:

## Security Analysis: <module_name>

### Summary
- Risk Level: [Low/Medium/High/Critical]
- Issues Found: X

### Findings

#### [SEVERITY] Issue Title
- **Location:** Line XX
- **Description:** What the issue is
- **Impact:** What could happen
- **Recommendation:** How to fix

### Tested Edge Cases
- [ ] Overflow at max values
- [ ] Underflow at zero
- [ ] Unauthorized access attempts
- [ ] Empty/null inputs
- [ ] Reentrancy scenarios

Example: Security-Aware Test

// SECURITY: Testing that non-owner cannot withdraw
#[test]
#[expected_failure(abort_code = ENotOwner)]
fun test_unauthorized_withdraw() {
    // Setup: Create vault owned by ALICE
    // Action: BOB tries to withdraw
    // Expected: Should fail with ENotOwner
}

// SECURITY: Testing overflow protection
#[test]
fun test_deposit_overflow_protection() {
    // Deposit near u64::MAX
    // Verify no overflow occurs
}

// SECURITY: Testing economic invariant
#[test]
fun test_total_supply_invariant() {
    // After any operations:
    // sum(all_balances) == total_supply
}
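
The invariant stub above could be realized as follows — a hedged sketch assuming hypothetical create_pool/deposit/withdraw/balance_of/total_supply functions and test addresses:

#[test]
fun test_total_supply_invariant_concrete() {
    let alice = @0xA11CE;  // hypothetical test addresses
    let bob = @0xB0B;
    let mut ctx = tx_context::dummy();
    let mut pool = create_pool(&mut ctx);
    deposit(&mut pool, alice, 100);
    deposit(&mut pool, bob, 50);
    withdraw(&mut pool, alice, 30);
    // Invariant: sum of individual balances equals the recorded total supply
    assert!(balance_of(&pool, alice) + balance_of(&pool, bob) == total_supply(&pool), 0);
    destroy_pool(pool);
}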

Full Workflow with Security

# 1. Coverage analysis
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module> -o coverage.md

# 2. While writing tests, document security findings
# Create SECURITY.md alongside coverage.md

# 3. After tests pass, summarize:
# - Coverage: X% → 100%
# - Security issues found: N
# - Recommendations: ...

Source

git clone https://clawhub.ai/EasonC13/sui-auto-test

Overview

Analyzes Sui Move test coverage to pinpoint untested code, missing assertion paths, and uncovered branches. The skill's Python tools parse coverage output and generate reports, guiding you to write targeted tests and flag security issues along the way.

How This Skill Works

Run coverage with sui move test --coverage --trace, then use analyze_source.py to produce coverage.md. The tools identify gaps (uncalled functions, missing assertion paths, and untested branches) and provide templates for new tests, followed by re-running coverage to verify improvements.

When to Use It

  • After running coverage analysis, to locate untested code in a Move module: uncalled functions, missing assertion paths, and untested branches.
  • When adding tests to a new Move module to achieve full coverage.
  • During security audits to ensure coverage paths include failure conditions.
  • When you want a reproducible workflow with coverage reports and optional JSON outputs.

Quick Start

  1. Run coverage analysis: cd <package_path> && sui move test --coverage --trace
  2. Generate and read the report: python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module_name> -o coverage.md
  3. Write tests for uncovered items using the templates, then re-run sui move test --coverage --trace and analyze_source.py

Best Practices

  • Run with trace to get granular coverage data.
  • Use analyze_source.py to generate coverage.md and JSON outputs.
  • Prioritize uncalled functions, then missing assertion paths, then branches.
  • Follow the provided Move test templates for each uncovered item.
  • Re-run sui move test and re-analyze until coverage improves; compare before/after.

Example Use Cases

  • Improve coverage for a DeFi module by covering withdraw and repay paths.
  • Identify an untested assertion path in a vault contract and validate it with tests.
  • Generate and review coverage.md to spot per-function gaps in a module.
  • Use parse_bytecode.py to map coverage to bytecode for low-level auditing.
  • Re-run coverage after adding tests to confirm increased coverage and fewer issues.
