gemini-code-assist
Model Context Protocol server integrating Google Gemini CLI with Claude Code for AI-powered development assistance
claude mcp add --transport stdio vinnyvangogh-gemini-code-assist-mcp \
  --env GEMINI_DEBUG=false \
  -- uv run python src/main.py
How to use
Gemini Code Assist MCP integrates the Google Gemini CLI with Claude Code to provide AI-powered development assistance directly within Claude Code. It offers a suite of tools for code analysis, security reviews, feature planning, bug analysis, debugging support, and code understanding. After starting the MCP server, you can invoke tools like gemini_review_code, gemini_analyze_security, gemini_proofread_feature_plan, gemini_suggest_implementation, gemini_analyze_bug, gemini_debug_assistance, gemini_explain_code, and gemini_generate_tests to get structured feedback, recommendations, and explanations tailored to your Python projects. The server relies on your existing Google Cloud authentication (no API key required) and works with both Claude Code and Claude Desktop through the MCP interface.
How to install
Prerequisites:
- Python 3.11+ installed on your environment
- UV package manager installed (uv)
- Gemini CLI installed and configured
- Google Cloud authentication configured (gcloud auth login)
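Before installing, a quick pre-flight check (a sketch; the binary names are assumed from the prerequisite list above) can confirm each tool is reachable on your PATH:

```shell
# Check that each prerequisite binary is on PATH.
# (python3/uv/gemini/gcloud names assumed from the prerequisites above.)
for tool in python3 uv gemini gcloud; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any "missing" line points to a prerequisite to install before continuing.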
Installation steps:
# 1) Clone the MCP server repository
git clone https://github.com/VinnyVanGogh/gemini-code-assist-mcp.git
cd gemini-code-assist-mcp
# 2) Install dependencies
# Dependencies are managed with UV and installed by the `uv sync`
# step below (no separate pip install is needed)
Configure and run the MCP server locally:
# 3) Start the MCP server using UV
uv sync
uv run python src/main.py
Claude Code setup (recommended):
# 4) Install via the MCP CLI (note: `mcp install` registers the server with Claude Desktop)
uv sync
uv run mcp install src/main.py
# 5) Or add a named server via the Claude Code CLI
# (claude mcp add-json takes a server name plus a JSON server definition;
#  uv's --directory flag points the run at your repo checkout)
claude mcp add-json gemini-code-assist '{
  "type": "stdio",
  "command": "uv",
  "args": ["run", "--directory", "/path/to/gemini-code-assist-mcp", "python", "src/main.py"],
  "env": {
    "GEMINI_DEBUG": "false"
  }
}'
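Claude Code can also read project-scoped server definitions from a .mcp.json file in the repository root. A sketch of an equivalent entry (the server name and path are examples) might look like:

```json
{
  "mcpServers": {
    "gemini-code-assist": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/gemini-code-assist-mcp", "python", "src/main.py"],
      "env": { "GEMINI_DEBUG": "false" }
    }
  }
}
```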
Validation steps:
# 6) Test Gemini CLI access
gemini --help
# 7) Test the MCP server
uv run python src/main.py
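As an additional stdio-level smoke test (a sketch; the message shape follows the MCP initialize handshake, and the protocolVersion value is an example), you can validate a JSON-RPC initialize request and then pipe it into the server:

```shell
# Minimal MCP "initialize" request (JSON-RPC 2.0, shape per the MCP spec).
REQ='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}'

# Sanity-check that the payload is valid JSON before sending it
echo "$REQ" | python3 -m json.tool >/dev/null && echo "request is valid JSON"

# Then, from the repo root, pipe it into the server and look for a response:
#   echo "$REQ" | uv run python src/main.py
```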
Additional notes
Tips and common issues:
- Ensure Gemini CLI is authenticated and available in your PATH; the MCP server relies on Google Cloud authentication, not an API key.
- If you encounter environment or path issues in Claude Code, specify the working directory (cwd) in your MCP configuration.
- Set GEMINI_DEBUG to true during troubleshooting to get verbose logs.
- If the server doesn’t appear in Claude Code, restart Claude Code/Desktop and confirm the server is registered with claude mcp list.
- For remote deployments, you can expose your MCP server URL and connect Claude Code to it using standard mcpServers configuration.
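For example, a remote entry in an mcpServers block (the URL and transport type here are placeholders, not part of this project) could look like:

```json
{
  "mcpServers": {
    "gemini-code-assist-remote": {
      "type": "sse",
      "url": "https://your-host.example.com/sse"
    }
  }
}
```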
Related MCP Servers
systemprompt-code-orchestrator
MCP server for orchestrating AI coding agents (Claude Code CLI & Gemini CLI). Features task management, process execution, Git integration, and dynamic resource discovery. Full TypeScript implementation with Docker support and Cloudflare Tunnel integration.
stt-linux
Local speech-to-text MCP server for Tmux on Linux (for use not only with Claude Code)
gemini-webapi
MCP server for Google Gemini — free image generation, editing & chat via browser cookies. No API keys needed.
HydraMCP
Connect agents to agents. MCP server for querying any LLM through your existing subscriptions: compare, vote, and synthesize across GPT, Gemini, Claude, and local models from one terminal.
video-research
Give Claude Code 41 research & video tools with one command. Video analysis, deep research, content extraction, explainer video creation, and Weaviate vector search — powered by Gemini 3.1 Pro.
RLM-Memory
A Model Context Protocol (MCP) server that provides AI agents with persistent memory and semantic file discovery.