receive-feedback

npx machina-cli add skill existential-birds/beagle/receive-feedback --openclaw
Files (1): SKILL.md (1.7 KB)

Receive Feedback

Overview

Process code review feedback with verification-first discipline. No performative agreement. Technical correctness over social comfort.

Quick Reference

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   VERIFY    │ ──▶ │  EVALUATE   │ ──▶ │   EXECUTE   │
│ (tool-based)│     │ (decision   │     │ (implement/ │
│             │     │  matrix)    │     │  reject/    │
│             │     │             │     │  defer)     │
└─────────────┘     └─────────────┘     └─────────────┘

Core Principle

Verify before implementing. Ask before assuming.

When To Use

  • Receiving code review from another LLM session
  • Processing PR review comments
  • Evaluating CI/linter feedback
  • Handling suggestions from pair programming

Workflow

For each feedback item:

  1. Verify - Use tools to check if feedback is technically valid
  2. Evaluate - Apply decision matrix to determine action
  3. Execute - Implement, reject with evidence, or defer
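The three steps above can be sketched as a small processing loop. This is a minimal illustration, not the skill's actual implementation: `FeedbackItem`, `verify`, `evaluate`, and `process` are hypothetical names, and the stand-in verification rule (treating linter feedback as pre-verified) is an assumption for the example only.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    IMPLEMENT = "implement"
    REJECT = "reject"
    DEFER = "defer"

@dataclass
class FeedbackItem:
    source: str          # e.g. "linter", "PR comment", "LLM review"
    claim: str           # what the reviewer says should change
    verified: bool = False

def verify(item: FeedbackItem) -> bool:
    """Step 1: check the claim with tools (run tests, linters, grep).
    Hypothetical stand-in: only linter output counts as verified here."""
    return item.source == "linter"

def evaluate(item: FeedbackItem) -> Action:
    """Step 2: decision matrix -- implement what is verified,
    defer what is unverified rather than guessing."""
    return Action.IMPLEMENT if item.verified else Action.DEFER

def process(items: list[FeedbackItem]) -> dict[str, Action]:
    """Step 3: execute and record a disposition for every item."""
    log: dict[str, Action] = {}
    for item in items:
        item.verified = verify(item)
        log[item.claim] = evaluate(item)
    return log
```

The point of the loop is auditability: every feedback item ends up with an explicit disposition, and nothing is implemented without passing verification first.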

Files

  • VERIFICATION.md - Tool-based verification workflow
  • EVALUATION.md - Decision matrix and rules
  • RESPONSE.md - Structured output format
  • references/skill-integration.md - Using with code-review skills

Source

git clone https://github.com/existential-birds/beagle.git

The skill lives at plugins/beagle-core/skills/receive-feedback/SKILL.md in the repository.

Overview

Receive Feedback processes external code review input with a verification-first discipline. It emphasizes technical correctness over social signals and tracks the disposition of each suggestion. It is designed for input from LLMs, humans, or CI tools.

How This Skill Works

It follows VERIFY → EVALUATE → EXECUTE: verify feedback with tool-based checks, apply a decision matrix to determine the action, then implement, reject with evidence, or defer. This keeps feedback actionable and auditable.

When to Use It

  • Receiving code review from another LLM session
  • Processing PR review comments
  • Evaluating CI/linter feedback
  • Handling suggestions from pair programming
  • Working with automated code-analysis tools or external reviewers

Quick Start

  1. Verify the feedback using tool-based checks to confirm technical validity
  2. Evaluate the feedback with the decision matrix to decide the action (implement, reject with evidence, or defer)
  3. Execute by applying the change, rejecting with evidence, or deferring, and log the disposition

Best Practices

  • Verify feedback with appropriate verification tools before acting
  • Ask clarifying questions when feedback is ambiguous
  • Document the rationale and disposition for each item
  • Reproduce and cite evidence when rejecting or deferring
  • Keep a centralized response log and traceable outputs
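A centralized, traceable response log can be as simple as one structured record per feedback item. This sketch is illustrative only: the field names are assumptions, not the format defined in the skill's RESPONSE.md.

```python
import json
from datetime import datetime, timezone

def log_disposition(item: str, action: str, evidence: str) -> str:
    """Serialize one feedback item's disposition as a JSON line,
    so every implement/reject/defer decision stays traceable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "item": item,
        "action": action,      # implement | reject | defer
        "evidence": evidence,  # test run, linter output, citation
    }
    return json.dumps(entry)
```

Appending one such line per item yields a reviewable audit trail of why each suggestion was accepted, rejected, or deferred.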

Example Use Cases

  • An engineer verifies a vague LLM suggestion for a refactor, confirms correctness, and proceeds with a documented change
  • A reviewer’s PR comment is validated with a test case before any modification
  • CI linter feedback is checked for true violations and then fixed with evidence of the rule being satisfied
  • Pair-programming feedback is incorporated only after clarifying questions and recording the decision
  • Non-critical suggestions are deferred with evidence until after tests pass
