
Installation

npx machina-cli add skill PJuniszewski/cook/guard --openclaw

Context Guard Skill

Epistemic safety analysis for JSON data in prompts. Prevents LLMs from reasoning with unjustified certainty when input data is incomplete.

Features

  • Lossless reduction - Minify, columnar transform, remove nulls
  • Token counting - API or heuristic fallback
  • Decision engine - ALLOW / SAMPLE / BLOCK
  • Intelligent trimming - First + last + evenly-spaced sampling
  • Forensic detection - Warns when specific record queries detected
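The lossless-reduction feature above (minify, columnar transform, remove nulls) can be sketched in Python. This is a minimal illustration under assumed semantics, not the actual guard_cmd.py implementation:

```python
import json

def remove_nulls(obj):
    """Recursively drop null-valued keys from objects; list items are kept."""
    if isinstance(obj, dict):
        return {k: remove_nulls(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [remove_nulls(v) for v in obj]
    return obj

def to_columnar(records):
    """Rewrite a list of row objects as one object of columns, so each key is
    serialized once instead of once per row. Nulls stay as placeholders to keep
    the columns aligned, which keeps the transform lossless."""
    keys = list(dict.fromkeys(k for r in records for k in r))
    return {k: [r.get(k) for r in records] for k in keys}

def reduce_json(text):
    data = json.loads(text)
    if isinstance(data, list) and data and all(isinstance(r, dict) for r in data):
        data = to_columnar(data)
    else:
        data = remove_nulls(data)
    # Minify: no whitespace between JSON tokens.
    return json.dumps(data, separators=(",", ":"))

print(reduce_json('[{"id": 1, "note": null}, {"id": 2, "note": "ok"}]'))
# → {"id":[1,2],"note":[null,"ok"]}
```

Columnar layout pays off on homogeneous record lists, where repeated keys dominate the token count.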

Usage

When /guard is invoked, execute the guard script:

For file paths:

python3 "${CLAUDE_PLUGIN_ROOT}/scripts/guard_cmd.py" "<file_path>" [options]

For inline JSON data:

python3 "${CLAUDE_PLUGIN_ROOT}/scripts/guard_cmd.py" - [options] <<'GUARD_INPUT'
<json_data>
GUARD_INPUT

Options

Option              Description
--mode              analysis|summary|forensics (default: auto-detect)
--force             Bypass blocks, emit warnings only
--allow-sampling    Permit sampling for forensic queries
--no-reduce         Skip the lossless reduction phase
--budget-tokens N   Token budget (default: 3500)
--print-only        Output the report only, never auto-send
--json              Output the result as JSON

Semantic Modes

Mode        Sampling     Use Case
analysis    Allowed      "What categories exist?", "Price range?"
summary     Aggressive   "Describe the data structure"
forensics   BLOCKED      "Why did request id=X fail?"
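One plausible way to detect forensic-style queries is to match record-lookup patterns in the prompt. The patterns below are illustrative assumptions; the actual detector in guard_cmd.py is not documented here:

```python
import re

# Patterns suggesting the user is asking about one specific record --
# hypothetical examples, not the skill's real rule set.
FORENSIC_PATTERNS = [
    r"\bid\s*[=:]\s*\S+",        # "id=X", "id: 42"
    r"\bwhy\s+did\b.*\bfail\b",  # "why did request ... fail"
    r"\brecord\s+#?\d+\b",       # "record #17"
]

def looks_forensic(prompt: str) -> bool:
    """Return True when the prompt appears to target a specific record."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in FORENSIC_PATTERNS)

print(looks_forensic("Why did request id=X fail?"))  # True: targeted record query
print(looks_forensic("What categories exist?"))      # False: aggregate question
```

The point of blocking sampling for such prompts: an evenly-spaced sample may silently omit the one record the question is about, inviting a confidently wrong answer.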

Output

============================================================
CONTEXT GUARD ANALYSIS
============================================================

Decision: [OK] ALLOW | [~] SAMPLE | [X] BLOCK
Mode: analysis | summary | forensics

TOKEN ANALYSIS:
  Original:     5,234 tokens
  After reduce: 4,891 tokens (-343)
  Budget:       3,500 tokens
============================================================
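The decision shown in the report can be approximated as below. The ~4-characters-per-token ratio and the threshold rules are assumptions for illustration, not the documented behavior of guard_cmd.py:

```python
def estimate_tokens(text: str) -> int:
    """Heuristic fallback when no API-based count is available
    (assumes roughly 4 characters per token)."""
    return max(1, len(text) // 4)

def decide(tokens: int, budget: int, forensic: bool) -> str:
    """Sketch of the ALLOW / SAMPLE / BLOCK decision."""
    if tokens <= budget:
        return "ALLOW"   # fits the budget: send as-is
    if forensic:
        return "BLOCK"   # sampling could drop the very record being queried
    return "SAMPLE"      # over budget, but aggregate questions tolerate sampling

print(decide(estimate_tokens("x" * 20000), budget=3500, forensic=False))  # SAMPLE
```

In the report above, 4,891 tokens after reduction against a 3,500-token budget would land in the over-budget branch.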

Requirements

  • Python 3.8+
  • ANTHROPIC_API_KEY environment variable (for token counting)

Source

git clone https://github.com/PJuniszewski/cook

The skill file lives at .claude/skills/guard/SKILL.md within the repository.

Overview

Context Guard analyzes JSON data in prompts to prevent LLMs from reasoning with unjustified certainty when input data is incomplete. It performs lossless reduction and token counting, runs a decision engine that returns ALLOW, SAMPLE, or BLOCK, and applies forensic detection to flag prompts that target specific records.

How This Skill Works

When invoked, the tool applies lossless reduction to the JSON (minify, columnar transform, remove nulls) and counts tokens via API or a heuristic fallback. It then runs the decision engine to produce ALLOW, SAMPLE, or BLOCK, supports intelligent trimming of the data, and flags forensic signals to warn about targeted record queries.
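The "first + last + evenly-spaced" trimming mentioned above can be sketched as follows; the real trimmer may choose indices differently:

```python
def trim_records(records, keep):
    """Keep `keep` evenly spaced records, always including the first and last."""
    keep = max(2, keep)  # need at least the first and last record
    n = len(records)
    if n <= keep:
        return list(records)
    # Evenly spaced indices across [0, n-1]; endpoints are always included.
    indices = sorted({round(i * (n - 1) / (keep - 1)) for i in range(keep)})
    return [records[i] for i in indices]

print(trim_records(list(range(10)), 5))  # first and last survive, middle is thinned
```

Keeping the endpoints preserves range information (e.g. earliest and latest timestamps) that a purely random sample could lose.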

When to Use It

  • Before submitting a JSON payload to an LLM to minimize data leakage and ensure safety
  • When you need a quick safety audit of input data structure and completeness
  • When prompts might query specific records or sensitive fields and you want forensic alerts
  • When you want to shrink large payloads while preserving essential fields for model reasoning
  • When documenting or integrating prompts in regulated workflows requiring strict data governance

Quick Start

  1. Prepare your JSON input as a file or as inline data
  2. Run guard_cmd.py with options such as --mode analysis --json
  3. Review the output and adjust your payload before sending it to the model

Best Practices

  • Run with --json to get structured results and parse the decision and token data
  • Start in analysis mode to understand the data's categories before moving to summary or forensics checks
  • Use --allow-sampling with caution when forensic-style prompts are involved
  • Disable reduction with --no-reduce only when the exact original fields are required
  • Tune the token budget with --budget-tokens to fit your model and avoid overage
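The --json practice above can be wired into a pipeline like this. The result field names ("decision" here) are assumptions, since the JSON schema is not documented; check your installed version's output before relying on them:

```python
import json
import os
import subprocess

def run_guard(path, budget=3500):
    """Invoke guard_cmd.py with --json and parse its stdout.
    Assumes CLAUDE_PLUGIN_ROOT points at the installed plugin, as in the
    Usage section above."""
    script = os.path.join(os.environ.get("CLAUDE_PLUGIN_ROOT", "."),
                          "scripts", "guard_cmd.py")
    proc = subprocess.run(
        ["python3", script, path, "--json", "--budget-tokens", str(budget)],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def should_send(result) -> bool:
    # Forward the payload only when the guard did not block it.
    return result.get("decision") in ("ALLOW", "SAMPLE")

# Demo against a hand-written result, since the schema is assumed:
print(should_send({"decision": "ALLOW"}))  # True
```

Gating on the parsed decision keeps a BLOCK verdict from being silently ignored by downstream code.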

Example Use Cases

  • Prechecking a customer JSON payload before routing to a pricing model to avoid leakage
  • Redacting or trimming personal data from a large dataset before LLM ingestion
  • Forensic check to detect attempts to query a specific user ID or sensitive field
  • Summarizing a dataset structure for API documentation without exposing raw data
  • Auditing a multi-record prompt to ensure privacy compliance in a chat assistant
