Gemini-Vuln-Scanner
A vulnerability scanning and reconnaissance app with a Gemini-integrated workflow
claude mcp add --transport stdio seyrup1987-gemini-vuln-scanner python -m rizzler_server \
  --env GOOGLE_API_KEY="Your Google Gemini API key" \
  --env MCP_SERVER_BASE_URL="http://localhost:8000 or your_server_url"
How to use
Gemini Vuln Scanner is a Python-based MCP server that orchestrates Gemini-powered guidance to perform automated reconnaissance and vulnerability assessment workflows. The server exposes a FastAPI backend that coordinates a suite of tools (port scanning, subdomain enumeration, DNS analysis, web fetching, advanced crawling, and active vulnerability scanning) and streams progress to a client UI. By leveraging the Gemini LLM, you can formulate multi-step plans that specify which tools to run, in what order, and how to interpret results, enabling repeatable, AI-assisted security workflows. To get started, configure your Google Gemini API key, launch the server, and connect the client GUI to the server base URL to begin crafting and executing plans.
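To illustrate what a multi-step plan might look like, here is a sketch as a Python dictionary. The field names (`target`, `steps`, `tool`, `args`, `interpret`) are hypothetical, chosen only to convey the shape of a tool-ordered workflow; consult the project's API for the actual schema.

```python
# Hypothetical plan structure; the keys below are illustrative,
# not the server's real schema.
plan = {
    "target": "example.com",
    "steps": [
        {"tool": "subdomain_enum", "args": {"domain": "example.com"}},
        {"tool": "port_scan", "args": {"host": "example.com", "ports": "1-1024"}},
        {"tool": "web_fetch", "args": {"url": "https://example.com"}},
    ],
    "interpret": "Summarize open services and flag likely vulnerabilities.",
}

# Tools run in list order, so later steps can build on earlier results.
tool_order = [step["tool"] for step in plan["steps"]]
```

The point is the ordering: the LLM-formulated plan fixes which tools run and in what sequence, which is what makes the workflow repeatable.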
How to install
Prerequisites:
- Git
- Python 3.12 (or the version specified in the project requirements)
- Chrome/Chromium (for Selenium-based crawling)
- Google Gemini API access (via Google Cloud with Vertex AI)
- (Optional) Docker and Docker Compose for containerized setup
Step 1: Clone the repository
git clone <your-repository-url>
cd <repository-name>
Step 2: Set up a Python virtual environment (recommended)
python -m venv venv
source venv/bin/activate   # on Linux/macOS
venv\Scripts\activate      # on Windows
Step 3: Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
Step 4: Configure environment variables
Create or edit the environment file and add your keys:
.env (example)
GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
MCP_SERVER_BASE_URL=http://localhost:8000
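If you want to confirm the file is well-formed without extra dependencies, a minimal .env loader can be sketched in Python. The real server may use python-dotenv or another mechanism instead; this is only an illustration of the KEY=VALUE format above.

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ.

    Ignores blank lines and comments; does not handle quoting or
    `export` prefixes (a sketch, not a full .env parser)."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            # Don't override variables already set in the shell.
            os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

Using `setdefault` means values exported in your shell take precedence over the file, which is the usual convention for .env handling.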
Step 5: Run the server
python -m rizzler_server
Step 6: (Optional) Run with Docker Compose
If a docker-compose.yml is provided, you can start with:
docker-compose up -d
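If the repository does not ship a compose file, a minimal one might look like the sketch below. The service name, build context, and port mapping are assumptions based on the default port 8000 mentioned in this guide, not the project's actual configuration.

```yaml
# Hypothetical docker-compose.yml; service name, build context,
# and paths are assumptions.
services:
  gemini-vuln-scanner:
    build: .
    ports:
      - "8000:8000"        # expose the FastAPI backend on the default port
    env_file:
      - .env               # propagates GOOGLE_API_KEY etc. into the container
    environment:
      - MCP_SERVER_BASE_URL=http://localhost:8000
```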
Step 7: Verify and access
Open your browser or client GUI and point it to the MCP server base URL (e.g., http://localhost:8000) to begin creating and running plans.
Additional notes
Notes and tips:
- Ensure your Google API key has Vertex AI Gemini access enabled.
- The server relies heavily on Selenium and ChromeDriver for web crawling; make sure ChromeDriver is installed and matches your installed Chrome version.
- For GUI/display, ensure an X11 server is available if running the client inside a container or headless environment.
- If using Docker, ensure proper port mappings (default 8000) and environment variables are propagated to the container.
- The MCP supports streaming tool output via Server-Sent Events (SSE); expect live logs in the client during long-running scans.
- If you encounter SSL or CORS issues in development, verify the MCP_SERVER_BASE_URL matches where the server is actually reachable from the client.
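The SSE note above can be sketched from the client side. Server-Sent Events are newline-delimited blocks of `data:` lines separated by blank lines; the parser below follows that wire format, while the `/events` path in the comment is a placeholder, not necessarily the server's real route.

```python
def parse_sse_events(raw: str):
    """Split a raw SSE stream into event payloads.

    Per the SSE format, each event is a block of lines separated by a
    blank line; lines beginning with "data:" carry the payload, and
    multiple data lines in one block are joined with newlines."""
    events = []
    for block in raw.split("\n\n"):
        data_lines = [
            line[len("data:"):].lstrip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            events.append("\n".join(data_lines))
    return events

# In a real client you would read the stream incrementally, e.g.
# (placeholder URL, not a confirmed endpoint):
#   import urllib.request
#   with urllib.request.urlopen("http://localhost:8000/events") as resp:
#       ...feed chunks into an incremental parser...
```

This is how the client GUI can show live logs during long-running scans: each completed block becomes one log event as soon as the blank-line delimiter arrives.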