
Claude-Gemini Integration

Claude-Gemini MCP Bridge - Seamlessly integrates Google's Gemini AI with Claude Code via Model Context Protocol. Get dual AI perspectives on code review, brainstorming, and development. One-command setup with automatic configuration. Bridge two powerful AI models for enhanced workflows.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio i3t4an-claude-gemini-mcp-integration-server python ~/.claude-mcp-servers/claude-gemini-integration/server.py \
  --env GEMINI_API_KEY="your-gemini-api-key"

How to use

The Claude-Gemini MCP Integration Server enables direct collaboration between Claude Code and Google's Gemini models over the Model Context Protocol (MCP). The server acts as a bridge: Claude Code sends MCP requests to Gemini for code reviews, brainstorming, or direct questions, and Gemini's responses are returned within Claude Code's context for seamless follow-up and decision making.

The server exposes three primary tools:

  • ask_gemini - direct Gemini answers to questions.
  • gemini_code_review - structured code analysis covering general quality, security, performance, and maintainability.
  • gemini_brainstorm - creative problem solving and architectural brainstorming.

To use these tools, start Claude Code, ensure the MCP server is running, and run the /mcp command in Claude Code to list the available options. When asking Gemini for code reviews or brainstorming, you can request context-aware feedback and tailor the model's behavior with model and temperature settings, if the server exposes them.

Typical workflow:

  • Start the MCP server and Claude Code CLI.
  • In Claude Code, run /mcp to view available MCP tools.
  • Choose ask_gemini to pose a direct question, gemini_code_review to request targeted code review feedback, or gemini_brainstorm to brainstorm solutions with Gemini's input.
  • Claude presents Gemini's responses within the same chat context for easy follow-up and refinement.
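Under the hood, each tool invocation in the workflow above is a JSON-RPC 2.0 `tools/call` request sent over stdio. A minimal sketch of such a message (the method and params shape follow the MCP specification; the `question` argument name is an assumption, so check server.py for the exact parameter schema):

```python
import json

# Sketch of the JSON-RPC 2.0 message a client sends for a tool call.
# The "question" argument name is an assumption -- the real schema
# is defined in server.py.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_gemini",
        "arguments": {"question": "Explain Python's GIL in two sentences."},
    },
}

# With the stdio transport, one JSON object per line goes to the
# server's stdin; the matching response arrives on its stdout.
line = json.dumps(request)
print(line)
```

The same envelope is used for gemini_code_review and gemini_brainstorm, with only `name` and `arguments` changing.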

How to install

Prerequisites

  • Python 3.8+
  • Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
  • Gemini API Key from Google AI Studio

Option A: Automated Install (Recommended)

  1. Run the install script to set up the MCP server and config:
curl -sSL https://raw.githubusercontent.com/i3T4AN/Claude-Gemini-MCP-Integration-Server/main/install.sh | bash
  2. During setup, provide or export your Gemini API key when prompted. The script will install the server at ~/.claude-mcp-servers/claude-gemini-integration/server.py and set the necessary environment variables.

Option B: Manual Setup

  1. Clone the repository and navigate to it:
git clone https://github.com/i3T4AN/Claude-Gemini-MCP-Integration-Server.git
cd Claude-Gemini-MCP-Integration-Server
  2. Prepare a virtual environment and install dependencies (if any in requirements.txt):
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
  3. Place your Gemini API key in the environment or the server configuration:
export GEMINI_API_KEY="your-gemini-api-key"
  4. Start the server directly (example path; adjust if your layout differs):
python ~/.claude-mcp-servers/claude-gemini-integration/server.py
  5. Ensure Claude Code CLI can reach the MCP server and that the server process remains running in the background as needed.
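For the last step, one simple way to keep a standalone server process alive is nohup (a sketch; note that with the stdio transport registered via `claude mcp add`, Claude Code normally spawns the server itself, so this is only needed when running it standalone):

```shell
# Launch the server detached from the terminal, logging to a file.
nohup python ~/.claude-mcp-servers/claude-gemini-integration/server.py \
  > ~/claude-gemini-mcp.log 2>&1 &

# Remember the PID so the server can be stopped later with:
#   kill "$(cat ~/claude-gemini-mcp.pid)"
echo $! > ~/claude-gemini-mcp.pid
```

Any process supervisor (systemd, launchd, pm2) works equally well here.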

Additional notes

Environment variables and configuration:

  • GEMINI_API_KEY is required for authenticating with Gemini.
  • You can override the API key at runtime by exporting GEMINI_API_KEY before starting the server.
  • The MCP API uses JSON-RPC 2.0 over stdin/stdout; ensure the server process remains running and accessible by Claude Code.
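The override behavior described above can be sketched as follows (how server.py actually resolves the key is an assumption; the resolution order shown — environment first, then config — is the conventional one):

```python
import os
from typing import Optional

def resolve_api_key(config_key: Optional[str] = None) -> str:
    # Environment takes precedence over any key stored in configuration,
    # so exporting GEMINI_API_KEY before launch always wins.
    key = os.environ.get("GEMINI_API_KEY") or config_key
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; get one from Google AI Studio.")
    return key

# Demo values only -- not a real key.
os.environ["GEMINI_API_KEY"] = "demo-key"
print(resolve_api_key(config_key="config-key"))  # environment wins: demo-key
```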

Common issues:

  • Invalid or missing GEMINI_API_KEY will cause authentication failures. Verify key validity and permissions.
  • If the server cannot be reached from Claude Code, check network access, firewall rules, and correct server path in the mcp_config.
  • If you update models or endpoints in Gemini, you may need to adjust the model selection in server.py (e.g., gemini-2.0-flash vs gemini-1.5-pro).

Tips:

  • For creative outputs, adjust the temperature in the ask_gemini tool if supported by the server to balance determinism vs. creativity.
  • Regularly monitor logs for MCP protocol messages to diagnose communication issues early.
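Both tips map onto the Gemini API's generation settings. A payload sketch in the shape of Google's generateContent REST schema (the field names come from that public API; the specific values, and whether this server forwards them unchanged, are assumptions):

```python
import json

model = "gemini-2.0-flash"  # or "gemini-1.5-pro" for deeper analysis
payload = {
    "contents": [
        {"parts": [{"text": "Brainstorm three cache-invalidation strategies."}]}
    ],
    "generationConfig": {
        # Higher temperature -> more varied, creative output;
        # lower -> more deterministic.
        "temperature": 0.9,
        "maxOutputTokens": 1024,
    },
}

# A client would POST this to:
# https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent
print(json.dumps(payload, indent=2))
```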
