Vision Sandbox
@johanesalxd
npx machina-cli add skill @johanesalxd/vision-sandbox --openclaw
Vision Sandbox 🔭
Leverage Gemini's native code execution to analyze images with high precision. The model writes and runs Python code in a Google-hosted sandbox to verify visual data, perfect for UI auditing, spatial grounding, and visual reasoning.
Installation
clawhub install vision-sandbox
Usage
uv run vision-sandbox --image "path/to/image.png" --prompt "Identify all buttons and provide [x, y] coordinates."
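If you prompt for strict JSON, the command's output can be parsed downstream. The schema below is a hypothetical example of what a coordinate-extraction prompt might return, not a fixed format guaranteed by the skill:

```python
import json

# Hypothetical output shape -- the actual schema depends on your prompt;
# asking explicitly for JSON (see Best Practices) makes parsing reliable.
raw = """
{
  "elements": [
    {"label": "Submit", "center": [512, 830]},
    {"label": "Cancel", "center": [318, 830]}
  ]
}
"""

data = json.loads(raw)
for el in data["elements"]:
    x, y = el["center"]
    print(f'{el["label"]}: x={x}, y={y}')
```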
Pattern Library
📍 Spatial Grounding
Ask the model to find specific items and return coordinates.
- Prompt: "Locate the 'Submit' button in this screenshot. Use code execution to verify its center point and return the [x, y] coordinates in a [0, 1000] scale."
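Coordinates returned on a [0, 1000] scale must be mapped back to the screenshot's pixel dimensions before use. A minimal conversion helper, assuming round-to-nearest is acceptable:

```python
def to_pixels(coord, width, height):
    """Convert a [0, 1000]-scale (x, y) coordinate to image pixels."""
    x, y = coord
    return round(x / 1000 * width), round(y / 1000 * height)

# e.g. a 1920x1080 screenshot where the model reported [500, 250]
print(to_pixels([500, 250], 1920, 1080))  # (960, 270)
```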
🧮 Visual Math
Ask the model to count or calculate based on the image.
- Prompt: "Count the number of items in the list. Use Python to sum their values if prices are visible."
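The verification code the model writes for a prompt like this is typically simple parsing and arithmetic. A sketch of the idea, using made-up list items for illustration:

```python
import re

# Illustrative items a model might transcribe from the image
lines = [
    "1. Coffee  $3.50",
    "2. Bagel   $2.25",
    "3. Juice   $4.00",
]

# Extract each visible price and sum them in code rather than by eye
prices = [float(re.search(r"\$(\d+\.\d{2})", line).group(1)) for line in lines]
print(len(lines), sum(prices))  # 3 items, 9.75 total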
🖥️ UI Audit
Check layout and readability.
- Prompt: "Check if the header text overlaps with any icons. Use the sandbox to calculate the bounding box intersections."
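The overlap check the sandbox performs boils down to axis-aligned rectangle intersection. A minimal sketch, assuming (x, y, w, h) bounding boxes:

```python
def intersect(a, b):
    """Return the overlap area of two (x, y, w, h) boxes, 0 if disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

header = (0, 0, 400, 60)   # header text box
icon = (380, 40, 48, 48)   # icon box
print(intersect(header, icon))  # 400 -> the elements overlap
```

Any positive area flags an overlap worth reporting in the audit.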
🖐️ Counting & Logic
Solve visual counting tasks with code verification.
- Prompt: "Count the number of fingers on this hand. Use code execution to identify the bounding box for each finger and return the total count."
Integration with OpenCode
This skill is designed to provide Visual Grounding for automated coding agents like OpenCode.
- Step 1: Use vision-sandbox to extract UI metadata (coordinates, sizes, colors).
- Step 2: Pass the JSON output to OpenCode to generate or fix CSS/HTML.
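The two steps above can be sketched in Python. The metadata schema here is hypothetical (the actual JSON shape depends on your prompt), and the command is assembled but not executed:

```python
def build_command(image, prompt):
    """Assemble the vision-sandbox invocation (not executed here)."""
    return ["uv", "run", "vision-sandbox", "--image", image, "--prompt", prompt]

def metadata_to_css(element):
    """Turn one element's metadata (hypothetical schema) into a CSS rule."""
    x, y = element["position"]
    w, h = element["size"]
    return (f'#{element["id"]} {{ position: absolute; '
            f"left: {x}px; top: {y}px; width: {w}px; height: {h}px; }}")

meta = {"id": "submit-btn", "position": [120, 400], "size": [96, 32]}
print(metadata_to_css(meta))
```

In a real pipeline, the rule (or the raw metadata) would be handed to OpenCode as context for the CSS/HTML fix.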
Configuration
- GEMINI_API_KEY: Required environment variable.
- Model: Defaults to gemini-3-flash-preview.
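Since a missing key is the most common setup failure, it can help to fail fast before invoking the skill. A small sketch (the error message is illustrative):

```python
import os

def check_config(env=None):
    """Fail fast with a clear message if GEMINI_API_KEY is missing."""
    env = os.environ if env is None else env
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; export it before running.")
    return key
```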
Overview
Vision Sandbox enables the model to analyze images by writing and running Python code in a Google-hosted sandbox to verify visual data. It excels at UI auditing, spatial grounding, and visual reasoning, providing precise coordinates, bounding boxes, and computed metrics from screenshots.
How This Skill Works
The skill leverages Gemini's native code execution to run Python in a sandbox against the given image. The sandbox returns structured results (e.g., [x, y] coordinates, bounding boxes, and derived measurements) that you can feed into agents like OpenCode for UI fixes.
When to Use It
- Identify specific UI elements in a screenshot and return their [x, y] coordinates and size.
- Count items or compute sums depicted in the image using Python verification.
- Check layout readability and detect overlapping elements via bounding box intersections.
- Extract UI metadata (coordinates, sizes, colors) for CSS/HTML adjustments via OpenCode.
- Validate visual relationships and spatial positioning to support automated UI auditing.
Quick Start
- Step 1: Install: clawhub install vision-sandbox
- Step 2: Run: uv run vision-sandbox --image 'path/to/image.png' --prompt 'Identify all buttons and provide [x, y] coordinates.'
- Step 3: Review: use the sandbox output to drive OpenCode or your pipeline.
Best Practices
- Write explicit prompts naming UI elements and the desired coordinate scale.
- Declare the exact output format (e.g., [x, y], width, height) to simplify parsing.
- Prefer code-backed validation over single-pass visual inference; verify results in the sandbox.
- Test across different resolutions and UI states to ensure robustness.
- Secure GEMINI_API_KEY and keep image paths valid and accessible.
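The second practice above, declaring the output format, pairs well with a lightweight schema check on the response. A sketch, assuming you asked for JSON objects with label, center, and size keys (a convention chosen here for illustration, not mandated by the skill):

```python
# A prompt that pins down the output format up front (illustrative wording)
PROMPT = (
    "Locate every button in this screenshot. Return ONLY a JSON array of "
    'objects with keys "label", "center" ([x, y] on a [0, 1000] scale), '
    'and "size" ([w, h]). Use code execution to verify each center point.'
)

def validate(element):
    """Reject responses that drift from the declared schema or scale."""
    return (
        {"label", "center", "size"} <= set(element.keys())
        and all(0 <= v <= 1000 for v in element["center"])
    )

print(validate({"label": "Submit", "center": [512, 830], "size": [96, 32]}))
```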
Example Use Cases
- Identify the 'Submit' button in a screenshot and return its center as [x, y].
- Count items in a visual list and sum their visible prices using Python.
- Check header overlap with icons and report any bounding box intersections.
- Count the number of fingers by deriving bounding boxes for each finger.
- Extract UI metadata (coordinates, sizes, colors) to feed OpenCode for CSS updates.