build-review-interface
npx machina-cli add skill Goodeye-Labs/truesight-mcp-skills/build-review-interface --openclaw

Build Review Interface
Use this skill to design and implement a stack-agnostic custom trace review web UI.
Interactive Q&A protocol (mandatory)
Start with custom web UI scoping questions.
Rules:
- Ask one question at a time.
- Prefer lettered options.
- Ask one follow-up when requirements are ambiguous.
Example question:
What should the primary annotation action set be in v1?
A) Pass / Fail only
B) Pass / Fail + free-text notes
C) Pass / Fail + notes + Defer
D) Custom label set
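The one-question-at-a-time, lettered-option format above can be sketched as a small helper; the function name and signature are illustrative assumptions, not part of the skill:

```python
def format_question(prompt, options):
    """Render one lettered-option scoping question.

    Per the protocol, present a single question at a time and prefer
    lettered options. Names here are hypothetical.
    """
    lines = [prompt]
    for i, opt in enumerate(options):
        # Label options A), B), C), ... in order.
        lines.append(f"{chr(ord('A') + i)}) {opt}")
    return "\n".join(lines)
```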
Custom UI workflow
- Define annotation contract:
- pass/fail controls
- free-text notes
- defer action
- autosave behavior
- Define data source and persistence:
- JSON/CSV input or API-backed source
- CSV/JSON/SQLite output for labels
- Build trace review views:
- render markdown as markdown
- syntax highlight code
- pretty-print JSON
- collapse low-value verbose sections
- Add navigation and productivity controls:
- next/previous
- position and progress counters
- keyboard shortcuts
- Validate end to end:
- functional checks
- data persistence checks
- Playwright verification for core annotation loop
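The annotation-contract and persistence steps above can be sketched with the standard library. Everything here is an illustrative assumption rather than a prescribed schema: the `Annotation` fields map to the pass/fail + notes + defer action set, and the SQLite table models the label output option:

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical annotation contract for a v1 action set
# (pass/fail, free-text notes, defer).
@dataclass
class Annotation:
    trace_id: str
    verdict: str  # "pass", "fail", or "defer"
    notes: str = ""

class LabelStore:
    """SQLite-backed label persistence with autosave-style upserts."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS labels ("
            "trace_id TEXT PRIMARY KEY, verdict TEXT, notes TEXT)"
        )

    def save(self, ann: Annotation) -> None:
        # Upsert so repeated saves (autosave on every primary action)
        # overwrite the previous label for the same trace.
        self.conn.execute(
            "INSERT INTO labels (trace_id, verdict, notes) VALUES (?, ?, ?) "
            "ON CONFLICT(trace_id) DO UPDATE SET "
            "verdict=excluded.verdict, notes=excluded.notes",
            (ann.trace_id, ann.verdict, ann.notes),
        )
        self.conn.commit()

    def load(self, trace_id: str):
        row = self.conn.execute(
            "SELECT trace_id, verdict, notes FROM labels WHERE trace_id=?",
            (trace_id,),
        ).fetchone()
        return Annotation(*row) if row else None
```

A round-trip check against a store like this (save, overwrite, reload) doubles as the data-persistence validation step before wiring up Playwright.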
Design checklist
- Consistent layout and terminology across traces
- Pass/fail actions visually distinct
- Full trace visible or expandable
- Autosave on primary actions
- Keyboard shortcuts for frequent actions
- Trace-level annotation as default
Guardrails
- Keep technology guidance stack-agnostic unless the user asks for a specific stack.
- Review the existing codebase to determine if any existing stack is available and offer that as an option.
- Keep sampling logic separate from UI implementation.
Source
https://github.com/Goodeye-Labs/truesight-mcp-skills/blob/main/skills/build-review-interface/SKILL.md

Overview
Build Review Interface helps you design a stack-agnostic web UI for annotating and reviewing traces. It guides you to create a bespoke review surface tailored to your workflow, including an interactive Q&A scoping protocol and configurable annotation contracts.
How This Skill Works
Begin with the mandatory Interactive Q&A protocol to scope the UI. Then define an annotation contract (pass/fail, notes, defer, autosave behavior) and specify data sources (JSON/CSV input or API-backed) with outputs for labels (CSV/JSON/SQLite). Build trace review views that render markdown, highlight code, pretty-print JSON, and allow collapsing verbose sections, followed by navigation controls and keyboard shortcuts. Finally, perform end-to-end validation with functional checks, data persistence checks, and Playwright verification of the core annotation loop.
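The view-layer steps (pretty-print JSON, collapse low-value verbose sections) can be sketched with stdlib helpers; the function names and the collapse threshold are assumptions for illustration:

```python
import json

# Hypothetical cutoff: sections longer than this are collapsed by default.
COLLAPSE_THRESHOLD = 400

def pretty_json(raw: str) -> str:
    """Pretty-print a JSON payload, falling back to the raw text."""
    try:
        return json.dumps(json.loads(raw), indent=2, sort_keys=True)
    except ValueError:
        # Not valid JSON; render as-is rather than erroring.
        return raw

def collapse(section: str, threshold: int = COLLAPSE_THRESHOLD) -> str:
    """Replace a low-value verbose section with a short summary line."""
    if len(section) <= threshold:
        return section
    return f"[collapsed: {len(section)} chars; expand to view]"
```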
When to Use It
- When you need a bespoke, workflow-specific review surface for trace annotation.
- When your project requires a stack-agnostic UI that can connect to JSON/CSV inputs or API-backed sources with label persistence.
- When you must render traces with markdown, syntax-highlighted code, and pretty-printed JSON.
- When you want productivity features like next/previous navigation, progress counters, and keyboard shortcuts.
- When you need end-to-end validation (functional, persistence) including Playwright verification for the annotation loop.
Quick Start
- Step 1: Scope the UI using the Interactive Q&A protocol to define the annotation actions and requirements.
- Step 2: Choose data sources and implement persistence (JSON/CSV input or API-backed; outputs in JSON/CSV/SQLite).
- Step 3: Build the review views (markdown rendering, code highlighting, JSON pretty-print; add navigation, autosave, keyboard shortcuts) and validate end-to-end.
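The navigation and progress pieces of Step 3 can be sketched as a small queue object; the class and method names are hypothetical, and a real UI would bind `next`/`previous` to buttons and keyboard shortcuts (e.g. j/k):

```python
class ReviewQueue:
    """Minimal next/previous navigation with a progress counter."""

    def __init__(self, trace_ids):
        self.trace_ids = list(trace_ids)
        self.pos = 0
        self.labeled = set()

    def current(self):
        return self.trace_ids[self.pos]

    def next(self):
        # Clamp at the last trace instead of wrapping.
        self.pos = min(self.pos + 1, len(self.trace_ids) - 1)
        return self.current()

    def previous(self):
        self.pos = max(self.pos - 1, 0)
        return self.current()

    def mark_labeled(self, trace_id):
        self.labeled.add(trace_id)

    def progress(self) -> str:
        # Position counter plus how many traces are already labeled.
        return (f"trace {self.pos + 1}/{len(self.trace_ids)} | "
                f"{len(self.labeled)} labeled")
```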
Best Practices
- Maintain a consistent layout and terminology across all traces.
- Make pass/fail actions visually distinct and support free-text notes and defer where appropriate.
- Ensure the full trace is visible, or provide collapsible sections for low-value details.
- Enable autosave on primary actions to prevent data loss.
- Provide keyboard shortcuts for frequent actions and default to trace-level annotation.
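The CSV label-export option mentioned throughout can be sketched with the stdlib `csv` module; the column set is an assumption matching the pass/fail + notes contract and should be adapted to your own annotation fields:

```python
import csv
import io

def export_labels_csv(labels):
    """Serialize label rows (dicts) to CSV text for download or export."""
    buf = io.StringIO()
    # Hypothetical column set mirroring the pass/fail + notes contract.
    writer = csv.DictWriter(buf, fieldnames=["trace_id", "verdict", "notes"])
    writer.writeheader()
    for row in labels:
        writer.writerow(row)
    return buf.getvalue()
```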
Example Use Cases
- A bespoke trace review UI integrated into an ML model monitoring workflow with pass/fail and notes.
- A markdown-rendering trace viewer that highlights code and pretty-prints JSON for rapid inspection.
- An API-backed review surface with CSV/JSON exports for labels and an autosave-enabled workflow.
- A productivity-focused scan interface with next/previous navigation and progress counters.
- An end-to-end Playwright-backed validation setup to verify the core annotation loop.