Gemini CLI
npx machina-cli add skill hackermanishackerman/claude-skills-vault/gemini-cli --openclaw
Interact w/ Google's Gemini CLI locally. Run queries, get responses, compare outputs.
Prerequisites
Gemini CLI must be installed & configured:
- Install: https://github.com/google-gemini/gemini-cli
- Auth: Run gemini & sign in w/ Google account
- Verify: gemini --version
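The install check above can also be scripted. A minimal Python sketch, assuming only that the binary is named gemini as in the install docs (the helper name is hypothetical):

```python
import shutil
import subprocess

def gemini_available() -> bool:
    """Return True if the `gemini` binary is on PATH."""
    return shutil.which("gemini") is not None

if gemini_available():
    # Prints the installed version, confirming the CLI works.
    subprocess.run(["gemini", "--version"])
else:
    print("gemini not found; install from https://github.com/google-gemini/gemini-cli")
```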
When to Use
- User asks to "run/ask/use gemini"
- Compare Claude vs Gemini responses
- Get second AI opinion
- Delegate task to Gemini
Usage
# One-shot query
gemini "Your prompt"
# Specific model
gemini -m gemini-3-pro-preview "prompt"
# JSON output
gemini -o json "prompt"
# YOLO mode (auto-approve)
gemini -y "prompt"
# File analysis
cat file.txt | gemini "Analyze this"
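For scripting, the -o json output can be parsed instead of scraped from text. A sketch under one assumption: that the JSON payload carries the reply under a "response" key (check the schema your installed CLI version actually emits; the sample string below is illustrative, not captured from a real run):

```python
import json

def extract_response(raw: str) -> str:
    """Pull the model's text out of `gemini -o json` output.

    ASSUMPTION: the reply lives under a top-level "response" key;
    adjust if your CLI version uses a different schema.
    """
    payload = json.loads(raw)
    return payload.get("response", "")

# Illustrative payload in the assumed shape:
sample = '{"response": "4", "stats": {}}'
print(extract_response(sample))  # → 4
```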
Comparison Workflow
- Provide Claude's response first
- Run same query via Gemini CLI
- Present both for comparison
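The comparison workflow can be wrapped in a small script. A sketch, assuming only the documented one-shot invocation (gemini -m MODEL "prompt"); both helper names are hypothetical:

```python
import subprocess

def ask_gemini(prompt, model=None):
    """One-shot Gemini CLI query; returns the reply text from stdout."""
    cmd = ["gemini"]
    if model:
        cmd += ["-m", model]  # e.g. "gemini-3-pro-preview"
    cmd.append(prompt)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

def side_by_side(claude_answer, gemini_answer):
    """Label both answers so they can be presented together for comparison."""
    return "Claude:\n{}\n\nGemini:\n{}".format(claude_answer, gemini_answer)
```

With Claude's answer already in hand, `side_by_side(claude_answer, ask_gemini(same_prompt))` yields the comparison text to present.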
CLI Options
| Flag | Desc |
|---|---|
| -m | Model (gemini-3-pro) |
| -o | Output: text/json/stream-json |
| -y | Auto-approve (YOLO) |
| -d | Debug mode |
| -s | Sandbox mode |
| -r | Resume session |
| -i | Interactive after prompt |
Best Practices
- Quote prompts w/ double quotes
- Use -o json for parsing
- Pipe files for context
- Specify model for specific capabilities
Source
https://github.com/hackermanishackerman/claude-skills-vault/blob/main/.claude/skills/gemini-cli/SKILL.md
Overview
Gemini CLI lets you run AI queries against Google's Gemini from your terminal. It supports one-shot prompts, model specification, JSON output, YOLO mode, and file analysis, enabling quick comparisons with Claude and easy task delegation.
How This Skill Works
Install Gemini CLI, authenticate with your Google account, and verify the tool with gemini --version. Then you can run prompts directly (gemini "prompt"), specify a model (-m), request JSON output (-o json), enable auto-approve (-y), or pipe data for analysis (e.g., cat file.txt | gemini "Analyze this"). A common workflow is to present Claude's response first, then run the same query via Gemini CLI for side-by-side comparison.
When to Use It
- User asks to run/ask/use Gemini from the terminal.
- You need to compare Claude vs Gemini responses.
- You want a second AI opinion or alternative perspective.
- You want to delegate a task to Gemini.
- You need to analyze a file or long text with context from a prompt (file analysis).
Quick Start
- Step 1: Install Gemini CLI and authenticate (install; run gemini; sign in; verify with gemini --version).
- Step 2: Run a basic query (gemini "Your prompt" or gemini -m gemini-3-pro-preview "prompt").
- Step 3: Get structured output or analyze files (gemini -o json "prompt" or cat file.txt | gemini "Analyze this").
Best Practices
- Quote prompts with double quotes.
- Use -o json for predictable parsing.
- Pipe files for context when analyzing content.
- Specify model with -m for targeted capabilities.
- Compare outputs with Claude when possible to validate results.
Example Use Cases
- One-shot query: gemini "Your prompt"
- Specific model: gemini -m gemini-3-pro-preview "prompt"
- JSON output: gemini -o json "prompt"
- YOLO mode: gemini -y "prompt"
- File analysis: cat file.txt | gemini "Analyze this"