writing-hooks
Overview
Writing hooks IS creating automated quality gates.
Hooks run on Claude Code events (PreToolUse, PostToolUse, etc.) and can block actions with exit code 2.
Core principle: Hooks enforce what humans forget. Fast checks only—slow hooks kill productivity.
Violating the letter of the rules is violating the spirit of the rules.
Task Initialization (MANDATORY)
Before ANY action, create task list using TaskCreate:
TaskCreate for EACH task below:
- Subject: "[writing-hooks] Task N: <action>"
- ActiveForm: "<doing action>"
Tasks:
- Analyze requirements
- RED - Test without hook
- GREEN - Write hook script
- Configure settings.json
- Validate behavior
- Test blocking
Announce: "Created 6 tasks. Starting execution..."
Execution rules:
- TaskUpdate status="in_progress" BEFORE starting each task
- TaskUpdate status="completed" ONLY after verification passes
- If task fails → stay in_progress, diagnose, retry
- NEVER skip to next task until current is completed
- At end, run TaskList to confirm all completed
TDD Mapping for Hooks
| TDD Phase | Hook Creation | What You Do |
|---|---|---|
| RED | Test without hook | Write code, observe quality issues slip through |
| Verify RED | Document violations | Note specific issues that should be caught |
| GREEN | Write hook | Create script that catches those issues |
| Verify GREEN | Test blocking | Verify exit code 2 blocks violating code |
| REFACTOR | Optimize speed | Reduce hook runtime, filter file types |
Task 1: Analyze Requirements
Goal: Understand what quality gate to create.
Questions to answer:
- What violation should be blocked?
- Which files should be checked?
- What tool/command performs the check?
- What event triggers the hook?
Event Selection:
| Event | When | Use For |
|---|---|---|
| PreToolUse | Before tool runs | Block bad writes before they happen |
| PostToolUse | After tool runs | Validate written code |
| UserPromptSubmit | Before prompt processed | Add context to prompts |
Verification: Can describe the violation and the command to detect it.
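For the PreToolUse row above, a minimal sketch of a content-inspecting hook might look like this. The banned pattern (`console.log`) is a placeholder, and the assumption that Write passes `content` while Edit passes `new_string` should be verified against the hook events reference:

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: reject writes containing a forbidden pattern.

"console.log" is a placeholder rule -- substitute whatever your team bans.
"""
import sys

def check_write(tool_input: dict) -> int:
    """Return 0 to allow the write, 2 to block it."""
    # Assumption: Write passes `content`, Edit passes `new_string`
    content = tool_input.get('content', '') or tool_input.get('new_string', '')
    if 'console.log' in content:
        # stderr is fed back to Claude so it can fix and retry
        print("Blocked: remove console.log before writing.", file=sys.stderr)
        return 2
    return 0

# Wire into a hook script:
#   import json
#   data = json.load(sys.stdin)
#   sys.exit(check_write(data.get('tool_input', {})))
```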
Task 2: RED - Test Without Hook
Goal: Write code WITHOUT the hook. Observe violations that slip through.
Process:
- Ask agent to write code in the target file type
- Intentionally introduce the violation (bad format, type error, etc.)
- Observe that Claude Code doesn't catch it
- Document the specific violation
Verification: Documented at least 1 violation that should have been caught.
Task 3: GREEN - Write Hook Script
Goal: Create Python script that catches the violations you documented.
Hook Structure
```
.claude/hooks/
├── eslint_check.py
├── prettier_check.py
└── typecheck.py
```
Exit Code Contract
| Code | Meaning | Effect |
|---|---|---|
| 0 | Pass | Continue, stdout shown in verbose |
| 2 | Block | Action blocked, stderr fed to Claude |
| Other | Warning | Continue, stderr shown in verbose |
Hook Template
```python
#!/usr/bin/env python3
"""Hook: [Description]"""
import json
import subprocess
import sys

# Read hook input
data = json.load(sys.stdin)
file_path = data.get('tool_input', {}).get('file_path', '')

# Filter by extension
if not file_path.endswith(('.ts', '.tsx', '.js', '.jsx')):
    sys.exit(0)

# Run check
result = subprocess.run(
    ['npx', 'eslint', '--format', 'compact', file_path],
    capture_output=True,
    text=True
)

# Block on failure
if result.returncode != 0:
    print(f"Lint errors:\n{result.stdout}", file=sys.stderr)
    sys.exit(2)  # BLOCK

sys.exit(0)  # PASS
```
Critical Requirements
- Fast - Under 5 seconds, hooks run synchronously
- Filter files - Only check relevant extensions
- Limit output - First 5-10 errors, not all
- Use Python - Cross-platform; wrap shell commands with subprocess
Verification:
- Script is executable (chmod +x)
- Returns exit code 0 for valid files
- Returns exit code 2 for violations
- Runs under 5 seconds
Task 4: Configure settings.json
Goal: Register hook in .claude/settings.json.
CRITICAL: Use .claude/settings.json, NOT settings.local.json. Settings are team-shared.
Configuration Format
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [{
          "type": "command",
          "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/eslint_check.py",
          "timeout": 30
        }]
      }
    ]
  }
}
```
Matcher Patterns
| Matcher | Matches |
|---|---|
| Write\|Edit | Write or Edit tools |
| Bash | Bash tool only |
| .* | All tools |
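Combining the matcher table with the configuration format above, a PreToolUse registration for the Bash tool might look like the following sketch (the hook filename `bash_guard.py` is hypothetical):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{
          "type": "command",
          "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/bash_guard.py",
          "timeout": 10
        }]
      }
    ]
  }
}
```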
Verification:
- Hook registered in correct event
- Matcher targets correct tools
- Command path uses $CLAUDE_PROJECT_DIR
- Timeout is reasonable (5-30 seconds)
Task 5: Validate Behavior
Goal: Verify hook script works correctly.
Test command:
```shell
echo '{"tool_input":{"file_path":"test.ts"}}' | .claude/hooks/your_hook.py
echo $?  # Should be 0 (pass) or 2 (block)
```
Checklist:
- Valid file returns exit code 0
- Invalid file returns exit code 2
- Errors go to stderr, info to stdout
- Completes under 5 seconds
Verification: Manual test passes for both valid and invalid inputs.
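The manual echo test above can also be scripted. Here is a sketch of an automated exit-code check; the hook under test is a throwaway placeholder written to a temp file (it blocks any path ending in `.bad`), so point `run_hook` at your real script in .claude/hooks/ instead:

```python
#!/usr/bin/env python3
"""Sketch: automated exit-code test for a hook script (placeholder hook)."""
import json
import subprocess
import sys
import tempfile
import textwrap

# Placeholder hook: blocks any file path ending in ".bad"
HOOK_SOURCE = textwrap.dedent("""\
    import json, sys
    data = json.load(sys.stdin)
    path = data.get('tool_input', {}).get('file_path', '')
    if path.endswith('.bad'):
        print('blocked', file=sys.stderr)
        sys.exit(2)
    sys.exit(0)
""")

def run_hook(hook_path: str, file_path: str) -> int:
    """Feed a hook-shaped JSON payload to the script and return its exit code."""
    payload = json.dumps({'tool_input': {'file_path': file_path}})
    result = subprocess.run(
        [sys.executable, hook_path],
        input=payload, capture_output=True, text=True,
    )
    return result.returncode

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(HOOK_SOURCE)
    hook = f.name

assert run_hook(hook, 'valid.ts') == 0    # valid file passes
assert run_hook(hook, 'broken.bad') == 2  # violation is blocked
print('hook exit codes OK')
```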
Task 6: Test Blocking
Goal: Verify hook actually blocks bad code in Claude Code.
Process:
- Ask Claude to write code with the violation
- Observe hook blocks the action
- Verify error message is fed back to Claude
- Claude should fix and retry
Verification:
- Hook blocks violating code
- Claude receives feedback and fixes the issue
Common Hook Patterns
Linting (ESLint/Ruff)
```python
result = subprocess.run(
    ['npx', 'eslint', '--format', 'compact', file_path],
    capture_output=True, text=True
)
if result.returncode != 0:
    print(result.stdout[:1000], file=sys.stderr)
    sys.exit(2)
```
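For the Ruff half of this pattern, a self-contained sketch for Python files might look like this. It assumes `ruff` is on PATH and deliberately fails open (exit 0) when the linter is absent, so a missing tool never blocks writes:

```python
#!/usr/bin/env python3
"""Hook sketch: Ruff check for Python files (assumes ruff; fails open if missing)."""
import shutil
import subprocess
import sys

def ruff_check(file_path: str) -> int:
    """Return 0 (pass) or 2 (block)."""
    if not file_path.endswith('.py'):
        return 0
    if shutil.which('ruff') is None:
        return 0  # fail open: don't block when the linter isn't installed
    result = subprocess.run(
        ['ruff', 'check', '--quiet', file_path],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        print(result.stdout[:1000], file=sys.stderr)  # limit feedback size
        return 2
    return 0

# Wire into the hook template:
#   data = json.load(sys.stdin)
#   sys.exit(ruff_check(data.get('tool_input', {}).get('file_path', '')))
```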
Type Checking
```python
result = subprocess.run(
    ['npx', 'tsc', '--noEmit'],
    capture_output=True, text=True
)
errors = [l for l in result.stdout.split('\n') if file_path in l]
if errors:
    print('\n'.join(errors[:5]), file=sys.stderr)
    sys.exit(2)
```
Auto-Fix (Format)
```python
# Auto-fix instead of blocking
subprocess.run(['npx', 'prettier', '--write', file_path])
print(f"Auto-formatted: {file_path}")
sys.exit(0)  # Don't block, just fix
```
Red Flags - STOP
These thoughts mean you're rationalizing. STOP and reconsider:
- "Hooks are overkill for this project"
- "I'll just remember to run linting"
- "5 seconds is too strict"
- "Check all files, not just changed ones"
- "Skip testing, the logic is simple"
- "Use settings.local.json for team hooks"
All of these mean: You're about to create a weak hook. Follow the process.
Common Rationalizations
| Excuse | Reality |
|---|---|
| "I'll remember to lint" | You won't. Hooks enforce what humans forget. |
| "Slow hooks are thorough" | Slow hooks = disabled hooks. Keep it fast. |
| "Check everything" | Full project check = 30+ seconds. Check changed file only. |
| "settings.local.json" | Local = not shared. Team hooks go in settings.json. |
| "Simple logic doesn't need tests" | Hook failures are silent. Always test. |
Flowchart: Hook Creation
```dot
digraph hook_creation {
    rankdir=TB;

    start [label="Need hook", shape=doublecircle];
    analyze [label="Task 1: Analyze\nrequirements", shape=box];
    baseline [label="Task 2: RED\nTest without hook", shape=box, style=filled, fillcolor="#ffcccc"];
    write [label="Task 3: GREEN\nWrite hook script", shape=box, style=filled, fillcolor="#ccffcc"];
    config [label="Task 4: Configure\nsettings.json", shape=box];
    validate [label="Task 5: Validate\nbehavior", shape=box];
    validate_pass [label="Tests\npassed?", shape=diamond];
    test [label="Task 6: Test\nblocking", shape=box];
    done [label="Hook complete", shape=doublecircle];

    start -> analyze;
    analyze -> baseline;
    baseline -> write;
    write -> config;
    config -> validate;
    validate -> validate_pass;
    validate_pass -> test [label="yes"];
    validate_pass -> write [label="no"];
    test -> done;
}
```
References
- references/static-checks.md - Complete hook examples
- references/events.md - Hook events reference
Source
https://github.com/wayne930242/Reflexive-Claude-Code/blob/main/plugins/rcc/skills/writing-hooks/SKILL.md