
gemini-peer-review

npx machina-cli add skill jezweb/claude-skills/gemini-peer-review --openclaw

Gemini Peer Review

Consult Gemini as a coding peer for a second opinion on code quality, architecture decisions, debugging, or security reviews.

Setup

API Key: Set GEMINI_API_KEY as an environment variable. Get a key from https://aistudio.google.com/apikey if you don't have one.

export GEMINI_API_KEY="your-key-here"

Workflow

  1. Determine mode from user request (review, architect, debug, security, quick)

  2. Read target files into context

  3. Build prompt using the AI-to-AI template from references/prompt-templates.md

  4. Write prompt to file at .claude/artifacts/gemini-prompt.txt (avoids shell escaping issues)

  5. Call the API — generate a Python script that:

    • Reads GEMINI_API_KEY from environment
    • Reads the prompt from .claude/artifacts/gemini-prompt.txt
    • POSTs to https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent
    • Payload: {"contents": [{"parts": [{"text": prompt}]}], "generationConfig": {"temperature": 0.3, "maxOutputTokens": 8192}}
    • Extracts text from candidates[0].content.parts[0].text
    • Prints result to stdout

    Write the script to .claude/scripts/gemini-review.py and run it.

  6. Synthesize — present Gemini's findings, add your own perspective (agree/disagree), let the user decide what to implement
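The script described in step 5 could be sketched as follows. This is a minimal stdlib-only version; the helper names are illustrative, and the payload shape mirrors the one listed above. It only calls the API when GEMINI_API_KEY is actually set.

```python
#!/usr/bin/env python3
"""Sketch of the generated .claude/scripts/gemini-review.py (stdlib only).
Helper names are illustrative; the payload shape matches step 5 above."""
import json
import os
import sys
import urllib.request

API_URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/{model}:generateContent?key={key}")

def build_payload(prompt: str) -> dict:
    # Payload shape from the workflow: one user turn, conservative sampling.
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": 0.3, "maxOutputTokens": 8192},
    }

def extract_text(response: dict) -> str:
    # Gemini returns a list of candidates; take the first part's text.
    return response["candidates"][0]["content"]["parts"][0]["text"]

def main(model: str = "gemini-2.5-flash") -> None:
    api_key = os.environ.get("GEMINI_API_KEY")
    if not api_key:
        sys.exit("GEMINI_API_KEY is not set")
    with open(".claude/artifacts/gemini-prompt.txt", encoding="utf-8") as f:
        prompt = f.read()
    req = urllib.request.Request(
        API_URL.format(model=model, key=api_key),
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(extract_text(json.load(resp)))

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    main()
```

Reading the prompt from a file rather than argv is what sidesteps the shell-escaping problem noted in step 4.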

Modes

Code Review

Review specific files for bugs, logic errors, security vulnerabilities, performance issues, and best practice violations.

Read the target files, build a prompt using the Code Review template, call with gemini-2.5-flash.

Architecture Advice

Get feedback on design decisions with trade-off analysis. Include project context (CLAUDE.md, relevant source files).

Read project context, build a prompt using the Architecture template, call with gemini-2.5-pro.

Debugging Help

Analyse errors when stuck after 2+ failed fix attempts. Gemini sees the code fresh without your debugging context bias.

Read the problematic files, build a prompt using the Debug template (include error message and previous attempts), call with gemini-2.5-flash.

Security Scan

Scan code for security vulnerabilities (injection, auth bypass, data exposure).

Read the target directory's source files, build a prompt using the Security template, call with gemini-2.5-pro.

Quick Question

Fast question without file context. Build prompt inline, write to file, call with gemini-2.5-flash.

Model Selection

| Mode | Model | Why |
| --- | --- | --- |
| review, debug, quick | gemini-2.5-flash | Fast, good for straightforward analysis |
| architect, security-scan | gemini-2.5-pro | Better reasoning for complex trade-offs |
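The mode-to-model mapping above could be expressed as a small lookup in the generated script (the function name is illustrative, not part of the skill):

```python
# Mode → model lookup mirroring the Model Selection table above.
MODEL_FOR_MODE = {
    "review": "gemini-2.5-flash",
    "debug": "gemini-2.5-flash",
    "quick": "gemini-2.5-flash",
    "architect": "gemini-2.5-pro",
    "security-scan": "gemini-2.5-pro",
}

def pick_model(mode: str) -> str:
    # Fall back to the fast model for any unrecognised mode.
    return MODEL_FOR_MODE.get(mode, "gemini-2.5-flash")
```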

Check current model IDs if errors occur — they change frequently:

curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY" | python3 -c "import sys,json; [print(m['name']) for m in json.load(sys.stdin)['models'] if 'gemini' in m['name']]"

When to Use

Good use cases:

  • Before committing major changes (final review)
  • When stuck debugging after multiple attempts
  • Architecture decisions with multiple valid options
  • Security-sensitive code review

Avoid using for:

  • Simple syntax checks (Claude handles these faster)
  • Every single edit (too slow, unnecessary)
  • Questions with obvious answers

Prompt Construction

Critical: Always use the AI-to-AI prompting format. Write the full prompt to a file — never pass code inline via bash arguments (shell escaping will break it).

When building the prompt:

  1. Start with the AI-to-AI header from references/prompt-templates.md
  2. Append the mode-specific template
  3. Append the file contents with clear --- filename --- separators
  4. Write to .claude/artifacts/gemini-prompt.txt
  5. Generate and run the API call script
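Steps 1-4 of the assembly could be sketched like this. The header and template strings are hypothetical stand-ins for the real ones in references/prompt-templates.md, and the helper names are not defined by the skill:

```python
from pathlib import Path

def build_prompt(header: str, template: str, filenames: list[str]) -> str:
    """Join the AI-to-AI header, the mode template, and each target file,
    separated by --- filename --- markers (steps 1-3 above)."""
    sections = [header.strip(), template.strip()]
    for name in filenames:
        body = Path(name).read_text(encoding="utf-8")
        sections.append(f"--- {name} ---\n{body}")
    return "\n\n".join(sections) + "\n"

def write_prompt(prompt: str,
                 dest: str = ".claude/artifacts/gemini-prompt.txt") -> None:
    """Write the assembled prompt to the artifacts file (step 4 above)."""
    out = Path(dest)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(prompt, encoding="utf-8")
```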

Reference Files

| When | Read |
| --- | --- |
| Building prompts for any mode | references/prompt-templates.md |

Source

git clone https://github.com/jezweb/claude-skills.git

The skill lives at plugins/dev-tools/skills/gemini-peer-review/SKILL.md in that repository.

Overview

Gemini Peer Review lets you solicit a second opinion from Gemini on code quality, architecture decisions, debugging, or security reviews. It uses direct Gemini API calls with no CLI dependencies and is triggered via commands like 'ask gemini', 'gemini review', 'second opinion', 'peer review', or 'consult gemini'. Setup involves exporting GEMINI_API_KEY; the workflow writes prompts to a file and runs a Python script to call the API, then presents Gemini's findings with your own perspective.

How This Skill Works

Read the target files into context, build a prompt with the AI-to-AI template, and write it to .claude/artifacts/gemini-prompt.txt to avoid shell escaping issues. A Python script reads GEMINI_API_KEY from the environment, POSTs the prompt to the Generative Language API at the appropriate model endpoint, and prints the response text to stdout. Claude then synthesizes Gemini's findings with its own stance (agree/disagree) into final recommendations.

When to Use It

  • Before committing major changes (final review)
  • When stuck debugging after multiple attempts
  • Architecture decisions with multiple valid options
  • Security-sensitive code review
  • Quick question without file context

Quick Start

  1. Export GEMINI_API_KEY and decide the mode (Code Review, Architecture, Debug, Security, or Quick)
  2. Read the target files into context and build a prompt using the appropriate AI-to-AI template
  3. Write the prompt to .claude/artifacts/gemini-prompt.txt and run the script at .claude/scripts/gemini-review.py to call the Gemini API

Best Practices

  • Export GEMINI_API_KEY securely and keep it out of logs
  • Read and define the exact scope by loading target files into context before prompting
  • Choose the appropriate mode (Code Review, Architecture, Debug, Security, Quick) to guide the prompt
  • Write the prompt to .claude/artifacts/gemini-prompt.txt to avoid shell escaping issues
  • Synthesize Gemini's findings with your own stance and clearly state what you will implement

Example Use Cases

  • Review a new microservice API for correctness, performance, and security implications
  • Assess architectural trade-offs when migrating from a monolith to microservices
  • Debug a flaky race condition in an async data pipeline with fresh analysis
  • Security scan of user authentication flow for injection, auth bypass, or data exposure risks
  • Ask a quick, context-free question about best practices for database connection handling
