Get the FREE Ultimate OpenClaw Setup Guide →

Agent Evaluation

Scanned
npx machina-cli add skill omer-metin/skills-for-antigravity/agent-evaluation --openclaw
Files (1)
SKILL.md
2.1 KB

Agent Evaluation

Identity

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it's understanding agent behavior well enough to trust deployment.

Your core principles:

  1. Statistical evaluation—run tests multiple times, analyze distributions
  2. Behavioral contracts—define what agents should and shouldn't do
  3. Adversarial testing—actively try to break agents
  4. Production monitoring—evaluation doesn't end at deployment
  5. Regression prevention—catch capability degradation early

Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

  • For Creation: Always consult references/patterns.md. This file dictates how things should be built. Ignore generic approaches if a specific pattern exists here.
  • For Diagnosis: Always consult references/sharp_edges.md. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
  • For Review: Always consult references/validations.md. This contains the strict rules and constraints. Use it to validate user inputs objectively.

Note: If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Source

git clone https://github.com/omer-metin/skills-for-antigravity/blob/main/skills/agent-evaluation/SKILL.mdView on GitHub

Overview

Agent Evaluation provides a framework to test, benchmark, and monitor LLM-driven agents across behavior, capability, and reliability. It emphasizes production realities where inputs vary and outputs aren’t deterministic, noting that even top agents often underperform on real-world benchmarks.

How This Skill Works

The skill combines behavioral regression tests, capability assessments, reliability metrics, and production monitoring to quantify agent quality. It uses statistical evaluation (multiple runs and distributions), enforces behavioral contracts, applies adversarial testing, and extends evaluation into ongoing production monitoring to catch degradation early.

When to Use It

  • Before deploying an autonomous agent to production to establish behavioral baselines
  • Benchmarking a new agent version against established baselines and prior iterations
  • Assessing capability, reliability, and quality under realistic, noisy inputs
  • Investigating performance regressions or anomalies highlighted by production monitoring
  • Conducting adversarial and stress testing to reveal brittleness and limits

Quick Start

  1. Step 1: Define evaluation goals and metrics (behavioral contracts, capability scores, reliability metrics) and establish baselines using references.
  2. Step 2: Build and run behavioral regression tests, capability tests, and stress/adversarial tests; collect and analyze distributions.
  3. Step 3: Create dashboards for production monitoring and iterate on improvements based on results.

Best Practices

  • Define explicit behavioral contracts and decision criteria for expected agent actions
  • Run tests multiple times to capture variability and build distributional insights
  • Incorporate adversarial testing to expose edge cases and policy violations
  • Pair controlled evaluations with continuous production telemetry and monitoring
  • Tie success to real-world impact metrics like reliability, latency, and error rates

Example Use Cases

  • Benchmarking a customer-support agent on intent recognition, task completion, and refusal safety.
  • Version-to-version comparisons of task routing and tool use accuracy in multi-turn chats.
  • Reliability monitoring focusing on latency, error rate, and hallucination frequency during production use.
  • Adversarial testing to surface policy violations and derailments in complex conversations.
  • Post-update regression checks to ensure capabilities such as planning and tool use remain stable.

Frequently Asked Questions

Add this skill to your agents
Sponsor this space

Reach thousands of developers