
perf-analyzer

npx machina-cli add skill ComposioHQ/awesome-claude-plugins/analyzer --openclaw
Files (1): SKILL.md (779 B)

perf-analyzer

Synthesize performance investigation results into clear recommendations.

Follow docs/perf-requirements.md as the canonical contract.

Inputs

  • Baseline data
  • Experiment results
  • Profiling evidence
  • Hypotheses tested
  • Breaking point results

Output Format

summary: <2-3 sentences>
recommendations:
  - <actionable recommendation 1>
  - <actionable recommendation 2>
abandoned:
  - <hypothesis or experiment that failed>
next_steps:
  - <whether to continue iterating or stop, and why>
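The schema above maps directly onto a small renderer. A minimal sketch, assuming the report is assembled as a plain dict; the field names come from this skill's output format, while the sample data is purely illustrative:

```python
# Sketch: build and render a perf-analyzer style report.
# Field names (summary, recommendations, abandoned, next_steps) follow
# the skill's output format; the contents below are made up.

def render_report(report: dict) -> str:
    required = ["summary", "recommendations", "abandoned", "next_steps"]
    missing = [k for k in required if k not in report]
    if missing:
        raise ValueError(f"report missing fields: {missing}")
    lines = [f"summary: {report['summary']}"]
    for key in ("recommendations", "abandoned", "next_steps"):
        lines.append(f"{key}:")
        lines.extend(f"  - {item}" for item in report[key])
    return "\n".join(lines)

report = {
    "summary": "p99 latency rose 25% after the release; profiling points to the serialization hot path.",
    "recommendations": ["Cache serialized responses on the hot path."],
    "abandoned": ["GC tuning: pause times unchanged in the follow-up experiment."],
    "next_steps": ["Re-run the baseline with caching enabled."],
}
print(render_report(report))
```

Raising on missing fields mirrors the skill's constraint of flagging insufficient data rather than emitting a partial report.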

Constraints

  • Only cite evidence that exists in logs or code.
  • If data is insufficient, say so and request a re-run.

Source

git clone https://github.com/ComposioHQ/awesome-claude-plugins

View on GitHub: https://github.com/ComposioHQ/awesome-claude-plugins/blob/master/perf/skills/analyzer/SKILL.md

Overview

perf-analyzer synthesizes baseline data, experiment results, profiling evidence, tested hypotheses, and breaking point results into concise, evidence-backed recommendations. It follows the canonical contract in docs/perf-requirements.md to ensure consistency and traceability. If data is insufficient, it flags the gaps and requests a re-run.

How This Skill Works

Provide inputs (baseline data, experiment results, profiling evidence, hypotheses tested, and breaking point results). The tool outputs a structured report with summary, actionable recommendations, any abandoned hypotheses, and next steps, strictly citing evidence from logs or code.

When to Use It

  • Diagnosing a performance regression after a release
  • Evaluating proposed optimizations to verify impact
  • Validating hypotheses against logs and profiling data
  • Benchmarking a new feature against baseline
  • Presenting findings to stakeholders for evidence-based decisions

Quick Start

  1. Gather baseline data, experiment results, profiling evidence, hypotheses, and breaking point results.
  2. Run perf-analyzer to produce a summary, recommendations, abandoned hypotheses, and next steps.
  3. Review the output and iterate with a re-run if data gaps are detected.
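Step 1 benefits from a pre-flight check before invoking the skill. A minimal sketch, assuming the five inputs are collected into a dict; the key names are assumptions, not part of the skill's contract:

```python
# Illustrative pre-flight check: verify all five inputs are present and
# non-empty before running the analysis. Key names are hypothetical.

REQUIRED_INPUTS = [
    "baseline_data",
    "experiment_results",
    "profiling_evidence",
    "hypotheses_tested",
    "breaking_point_results",
]

def find_gaps(inputs: dict) -> list[str]:
    """Return the names of missing or empty inputs, in canonical order."""
    return [name for name in REQUIRED_INPUTS if not inputs.get(name)]

inputs = {"baseline_data": ["run-001.log"], "experiment_results": []}
gaps = find_gaps(inputs)
if gaps:
    print(f"insufficient data, request a re-run for: {', '.join(gaps)}")
```

This keeps the "say so and request a re-run" constraint mechanical: an empty or absent input is reported by name instead of being silently analyzed.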

Best Practices

  • Cite only evidence that exists in logs or code
  • Align findings with docs/perf-requirements.md canonical contract
  • Predefine success criteria and hypotheses before analysis
  • Document assumptions and data gaps clearly
  • Tag and isolate which experiments support or refute each recommendation
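The last practice, tying each recommendation to the experiments that support or refute it, can be sketched with a small evidence index. All identifiers and experiment labels below are hypothetical:

```python
# Sketch: tag which experiments support or refute each recommendation,
# so only evidence-backed recommendations are cited. Data is made up.
from collections import defaultdict

evidence = defaultdict(lambda: {"supports": [], "refutes": []})
evidence["cache serialized responses"]["supports"].append("exp-03: p99 -21%")
evidence["increase heap size"]["refutes"].append("exp-05: pauses unchanged")

def citable(recommendation: str) -> bool:
    """A recommendation is citable only if some experiment supports it."""
    return bool(evidence[recommendation]["supports"])
```

Recommendations that fail the check belong under `abandoned` rather than `recommendations`, which enforces the "only cite evidence that exists" constraint structurally.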

Example Use Cases

  • Latency regression after release: baseline vs experiment showed 25% slower response times; recommendations target critical hot paths.
  • GC pauses identified: profiling showed long pause times; advised heap tuning and allocator changes.
  • Memory growth observed under load: suspected fragmentation; recommended refactor and pooling strategy.
  • Throughput improvement with feature flag: experiments confirmed higher throughput in optimized path.
  • I/O wait spike due to DB contention: recommended indexing and query plan improvements.
