gemini
MCP server that enables Claude Code to interact with Gemini
claude mcp add --transport stdio rlabs-inc-gemini-mcp --env GEMINI_API_KEY="YOUR_GEMINI_API_KEY" -- npx -y @rlabs-inc/gemini-mcp
How to use
Gemini MCP bridges Google's Gemini 3 models and Claude Code, enabling collaborative tooling and automated workflows. The server exposes a wide range of tools you can invoke via the MCP CLI, including direct Gemini queries with controlled thinking levels, image and video generation, code and document analysis, and web-based data extraction and browsing with citations. This makes it possible to perform deep research, analyze code or text, generate and edit images or videos, and fetch live information from the web, all while producing structured, JSON-friendly outputs for downstream automation. To use it, ensure your environment has a valid Gemini API key and run the MCP server as configured (for example via npx). Once running, you can access the Gemini tools through the MCP interface and compose prompts that combine Gemini's multi-modal capabilities, with optional thinking-depth controls, and Claude Code's collaborative workflow features.
How to install
Prerequisites:
- Node.js and npm installed on your system
- A Gemini API key (access to the Gemini API)
- The server package can be installed globally or locally, or run on demand with npx (see the options below).
Option A - Run with npx (no local install):
- Ensure you have an API key from Gemini and set it in your environment when launching.
Example:
Run the MCP server via npx (preferred for quick start)
npx -y @rlabs-inc/gemini-mcp
Option B - Install locally (optional):
- Install the package locally in your project and run the server script.
npm install @rlabs-inc/gemini-mcp
node node_modules/@rlabs-inc/gemini-mcp/server.js
- Configure your API key and any optional settings via environment variables as needed (see Additional notes for details).
- Verify the server starts and is reachable through your MCP tooling or CLI, for example with claude mcp list after registering it with Claude Code; the Claude integration steps are described in the repository README.
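If you prefer a project-scoped configuration, Claude Code also reads a .mcp.json file at the project root; an equivalent entry might look like the sketch below (the server name "gemini" is illustrative):

```json
{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "@rlabs-inc/gemini-mcp"],
      "env": {
        "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY"
      }
    }
  }
}
```

Checking this file into version control lets teammates pick up the same server definition, though each user still supplies their own API key.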
Additional notes
Environment variables and configuration:
- GEMINI_API_KEY: Your Gemini API key required for authenticating requests to Gemini services.
- VERBOSE: Optional; enable verbose logging for debugging (if supported by the CLI invocation).
- GEMINI_OUTPUT_DIR: Optional path to direct output for generated images/videos.
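As a sketch, the variables above can be exported in the shell before launching the server; the values here are placeholders, and only GEMINI_API_KEY is required:

```shell
# Required: authenticates requests to Gemini services
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
# Optional: where generated images/videos are written (path is illustrative)
export GEMINI_OUTPUT_DIR="$HOME/gemini-output"
# Optional: verbose logging for debugging
export VERBOSE=1
```

With these set, a subsequent npx -y @rlabs-inc/gemini-mcp in the same shell inherits the configuration.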
Common issues:
- Missing or invalid GEMINI_API_KEY will prevent tool invocations from succeeding.
- Ensure network access to Gemini services if using live web-oriented tools (e.g., Gemini search, YouTube analysis).
- If using npx, make sure your Node/npm setup can fetch and execute remote packages; a stale npx cache can cause failures (clearing the npm cache may help).
Tips:
- Use the CLI’s per-tool options to tailor thinking levels, outputs, and formats for structured JSON responses.
- For long-running tasks (video generation, large document analysis), monitor progress and poll for results as the MCP tools indicate.
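Under the hood, MCP servers on the stdio transport exchange newline-delimited JSON-RPC 2.0 messages. As a minimal sketch of the framing a client sends to discover the server's tools, under the assumption that the server process has already been launched separately (e.g. via npx), and with an illustrative protocol version and client name:

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build one newline-delimited JSON-RPC 2.0 message as used by the MCP stdio transport."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# Handshake first, then ask the server which tools it exposes
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",   # illustrative MCP protocol version
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})
list_tools = jsonrpc_request(2, "tools/list")
print(init, list_tools, sep="", end="")
```

In practice your MCP tooling (Claude Code, an MCP CLI) handles this handshake for you; the sketch only shows why the tools' outputs arrive as structured JSON.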
Related MCP Servers
systemprompt-code-orchestrator
MCP server for orchestrating AI coding agents (Claude Code CLI & Gemini CLI). Features task management, process execution, Git integration, and dynamic resource discovery. Full TypeScript implementation with Docker support and Cloudflare Tunnel integration.
iron-manus
Iron Manus MCP
google-scholar
An MCP server for Google Scholar written in TypeScript with Streamable HTTP
mcp-install-instructions-generator
Generate MCP Server Installation Instructions for Cursor, Visual Studio Code, Claude Code, Claude Desktop, Windsurf, ChatGPT, Gemini CLI and more
mcp-jira-stdio
MCP server for Jira integration with stdio transport. Issue management, project tracking, and workflow automation via Model Context Protocol.
RLM-Memory
A Model Context Protocol (MCP) server that provides AI agents with persistent memory and semantic file discovery.