
severity

npx machina-cli add skill lklimek/claudius/severity --openclaw
Files (1)
SKILL.md
2.4 KB

Severity Classification

Use these levels when rating findings in reviews, audits, and assessments.

Inspired by CVSS v4.0 qualitative ratings and OWASP Risk Rating, adapted for general code review findings beyond pure security.

Levels

CRITICAL — Must fix before merge. Exploitable vulnerability, data loss, correctness bug causing wrong results, or system breakage. Production incident if deployed. CVSS equivalent: 9.0-10.0. Examples: RCE, SQL injection, data breach, silent data corruption.

HIGH — Should fix before merge. Significant risk or correctness issue that will likely cause problems. Workaround may exist but is not acceptable long-term. CVSS equivalent: 7.0-8.9. Examples: privilege escalation, race condition causing data loss, broken authentication, missing input validation on untrusted data.

MEDIUM — Fix before production. Real issue that requires additional factors to manifest, or a design flaw that increases future risk. Acceptable to merge with a tracked follow-up. CVSS equivalent: 4.0-6.9. Examples: information disclosure, missing rate limiting, code duplication creating maintenance risk, error handling that swallows context.

LOW — Improvement recommended. Minor issue, defense in depth, code hygiene, or deviation from best practices. No immediate risk but worth addressing. CVSS equivalent: 0.1-3.9. Examples: non-idiomatic code, missing documentation, inconsistent naming, suboptimal algorithm for current scale.

INFO — Positive observation. Something done well, a good pattern worth noting, or context that helps readers understand the codebase. No action required. CVSS equivalent: None (0.0). Examples: well-structured error handling, good test coverage, clean separation of concerns, effective use of type system.

Scale

CRITICAL > HIGH > MEDIUM > LOW > INFO
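The CVSS equivalents listed above can be sketched as a small lookup. This is an illustrative helper, not part of the skill; the function name is an assumption.

```python
# Hypothetical helper: maps a CVSS v4.0 base score to this rubric's
# severity level, using the ranges stated in the Levels section.
def level_from_cvss(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS score out of range: {score}")
    if score >= 9.0:
        return "CRITICAL"   # 9.0-10.0
    if score >= 7.0:
        return "HIGH"       # 7.0-8.9
    if score >= 4.0:
        return "MEDIUM"     # 4.0-6.9
    if score > 0.0:
        return "LOW"        # 0.1-3.9
    return "INFO"           # 0.0: positive observation, no action required
```

Note that the mapping is one-directional: a finding's level is chosen from impact and likelihood first, with the CVSS range serving as a sanity check.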

Rules

  • Everything that may require action must be LOW or higher
  • INFO is exclusively for praise and context — never for suggestions or improvements
  • When in doubt between two levels, choose the higher one
  • Severity reflects impact and likelihood, not effort to fix
  • A trivial one-line fix can still be CRITICAL if the impact is severe
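The ordering and the "when in doubt, choose the higher" rule can be sketched in a few lines. This is an illustrative Python snippet, not something the skill itself ships; the names are assumptions.

```python
# Severity scale in ascending order, per the Scale section.
LEVELS = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def finalize(a: str, b: str) -> str:
    """When in doubt between two levels, choose the higher one."""
    return max(a, b, key=LEVELS.index)
```

For example, a reviewer torn between MEDIUM and HIGH for missing input validation on untrusted data would settle on HIGH.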

Source

git clone https://github.com/lklimek/claudius
View on GitHub: https://github.com/lklimek/claudius/blob/main/skills/severity/SKILL.md

Overview

Severity Classification provides a five-level scale for rating findings in reviews, audits, and assessments. It adapts CVSS v4.0 and OWASP Risk Rating concepts to general code-review findings, guiding when and how to fix issues before merge or production.

How This Skill Works

Findings are labeled from CRITICAL to INFO, reflecting impact and likelihood. The scale is applied with practical rules: anything requiring action must be LOW or higher, and INFO is reserved for praise and context. When unsure between levels, choose the higher one to avoid underestimating risk.

When to Use It

  • During code reviews and pull requests to determine urgency before merge.
  • In security, reliability, or data-handling audits of the project.
  • When assessing design flaws that could increase future risk or maintenance burden.
  • For prioritizing bug fixes and remediation based on impact and likelihood.
  • When documenting findings for stakeholders to understand severity consistently.

Quick Start

  1. Inspect findings and assign an initial level based on impact and likelihood.
  2. Validate the context and apply the CRITICAL > HIGH > MEDIUM > LOW > INFO scale to finalize the rating.
  3. Document the rationale and, if applicable, ensure severity is preloaded on the finding-producing agent.
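A finding documented per step 3 might look like the record below. The field names and the `render` helper are illustrative assumptions, not mandated by the skill.

```python
# Hypothetical finding record: severity plus the rationale behind it.
finding = {
    "title": "Missing rate limiting on login endpoint",
    "severity": "MEDIUM",  # real issue that needs extra factors to manifest
    "rationale": "Fix before production; acceptable to merge with a tracked follow-up.",
}

def render(f: dict) -> str:
    """Format a finding for a review comment."""
    return f"[{f['severity']}] {f['title']}: {f['rationale']}"
```

Keeping the rationale next to the label helps reviewers audit whether the level matches the rubric later.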

Best Practices

  • Use the five levels CRITICAL > HIGH > MEDIUM > LOW > INFO consistently across all findings.
  • Preload severity levels on finding-producing agents to ensure consistent labeling.
  • If unsure between two levels, choose the higher one to avoid underestimation.
  • Reserve INFO for praise and context; do not use it to prompt improvements.
  • Tie each level to concrete impact and likelihood examples to guide future reviews.

Example Use Cases

  • CRITICAL: Remote code execution leading to a data breach.
  • HIGH: Privilege escalation or race condition causing data loss.
  • MEDIUM: Information disclosure or missing rate limiting that increases risk.
  • LOW: Non-idiomatic code or inconsistent naming that complicates maintenance.
  • INFO: Well-structured error handling and good test coverage.

