Gemini CLI

npx machina-cli add skill hackermanishackerman/claude-skills-vault/gemini-cli --openclaw
Files (1)
SKILL.md
1.4 KB

Gemini CLI

Interact with Google's Gemini CLI locally. Run queries, get responses, compare outputs.

Prerequisites

Gemini CLI must be installed and configured:

  1. Install: https://github.com/google-gemini/gemini-cli
  2. Auth: Run gemini and sign in with your Google account
  3. Verify: gemini --version
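
The verify step above can be wrapped in a small preflight check so a missing install is caught before any query is attempted; a minimal sketch:

```shell
# Preflight: record whether gemini is on PATH before using the skill.
if command -v gemini >/dev/null 2>&1; then
  GEMINI_STATUS="installed: $(gemini --version)"
else
  GEMINI_STATUS="missing"  # point the user at the install link above
fi
echo "$GEMINI_STATUS"
```

The check exits cleanly either way, so it is safe to run at the start of any workflow that may delegate to Gemini.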

When to Use

  • User asks to "run/ask/use gemini"
  • Compare Claude vs Gemini responses
  • Get second AI opinion
  • Delegate task to Gemini

Usage

# One-shot query
gemini "Your prompt"

# Specific model
gemini -m gemini-3-pro-preview "prompt"

# JSON output
gemini -o json "prompt"

# YOLO mode (auto-approve)
gemini -y "prompt"

# File analysis
cat file.txt | gemini "Analyze this"

Comparison Workflow

  1. Provide Claude's response first
  2. Run same query via Gemini CLI
  3. Present both for comparison
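
The three steps above can be sketched as a short script. Here `claude.txt` is a hypothetical file holding Claude's answer, and the live `gemini` call is shown commented so the sketch runs even where the CLI is not installed:

```shell
# Side-by-side comparison sketch.
PROMPT="Explain tail recursion in one sentence"

# Step 1: Claude's response, saved beforehand (placeholder content here).
echo "Claude: recursion where the call is the last operation." > claude.txt

# Step 2: same query via Gemini CLI. In real use:
#   gemini "$PROMPT" > gemini.txt
echo "Gemini: (placeholder; run the command above)" > gemini.txt

# Step 3: present both for comparison.
echo "=== Prompt: $PROMPT ==="
echo "--- Claude ---"; cat claude.txt
echo "--- Gemini ---"; cat gemini.txt
```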

CLI Options

Flag  Description
-m    Model (e.g., gemini-3-pro)
-o    Output format: text/json/stream-json
-y    Auto-approve (YOLO)
-d    Debug mode
-s    Sandbox mode
-r    Resume session
-i    Interactive mode after prompt

Best Practices

  • Quote prompts with double quotes
  • Use -o json for parsing
  • Pipe files for context
  • Specify a model with -m for specific capabilities
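
For the -o json tip, a sketch of extracting a field in a pipeline. The `response` field name is an assumption, so inspect your own `gemini -o json` output before depending on it:

```shell
# Stand-in payload for: RESPONSE=$(gemini -o json "What is 2+2?")
# The {"response": ...} shape is assumed, not confirmed by the CLI docs here.
RESPONSE='{"response": "4"}'

# Extract the answer field with python3 (jq works just as well if installed).
ANSWER=$(echo "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["response"])')
echo "$ANSWER"
```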

Source

git clone https://github.com/hackermanishackerman/claude-skills-vault
# SKILL.md path in repo: .claude/skills/gemini-cli/SKILL.md

Overview

Gemini CLI lets you run AI queries against Google's Gemini from your terminal. It supports one-shot prompts, model specification, JSON output, YOLO mode, and file analysis, enabling quick comparisons with Claude and easy task delegation.

How This Skill Works

Install Gemini CLI, authenticate with your Google account, and verify the tool with gemini --version. Then you can run prompts directly (gemini "prompt"), specify a model (-m), request JSON output (-o json), enable auto-approve (-y), or pipe data for analysis (e.g., cat file.txt | gemini "Analyze this"). A common workflow is to present Claude's response first, then run the same query via Gemini CLI for side-by-side comparison.

When to Use It

  • User asks to run/ask/use Gemini from the terminal.
  • You need to compare Claude vs Gemini responses.
  • You want a second AI opinion or alternative perspective.
  • You want to delegate a task to Gemini.
  • You need to analyze a file or long text with context from a prompt (file analysis).

Quick Start

  1. Install Gemini CLI and authenticate (install; run gemini; sign in; verify with gemini --version).
  2. Run a basic query (gemini "Your prompt" or gemini -m gemini-3-pro-preview "prompt").
  3. Get structured output or analyze files (gemini -o json "prompt" or cat file.txt | gemini "Analyze this").

Best Practices

  • Quote prompts with double quotes.
  • Use -o json for predictable parsing.
  • Pipe files for context when analyzing content.
  • Specify model with -m for targeted capabilities.
  • Compare outputs with Claude when possible to validate results.

Example Use Cases

  • One-shot query: gemini "Your prompt"
  • Specific model: gemini -m gemini-3-pro-preview "prompt"
  • JSON output: gemini -o json "prompt"
  • YOLO mode: gemini -y "prompt"
  • File analysis: cat file.txt | gemini "Analyze this"
