Real-time-web-search-RAG-with-MCP
A conversational agent that combines static knowledge (RAG over your docs) with real-time web search using an MCP server.
claude mcp add --transport stdio pranithchowdary-real-time-web-search-rag-with-mcp python -m src.mcp_server.server \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env SERPAPI_API_KEY="your-serpapi-key" \
  --env CUSTOM_VECTORMAP="path-or-identifier-for-local-vector-store" \
  --env AZURE_OPENAI_API_KEY="your-azure-openai-key"

SERPAPI_API_KEY is only needed if you use SerpAPI for web search; CUSTOM_VECTORMAP points at your local vector store, if applicable; AZURE_OPENAI_API_KEY is optional and only required when using Azure OpenAI.
How to use
This MCP server powers a hybrid retrieval system that combines local document search (RAG) with real-time web search when needed. Start the server to enable external tool or web-API calls as fallback sources for questions that cannot be fully answered from your local knowledge base. The server coordinates between the local vector store (e.g., FAISS/Chroma) and external search tools, returning a unified response with citations from both sources. To use it, run the MCP server and query it via your preferred interface (REST API or your Python client); when local data is insufficient, the MCP layer automatically invokes the web-search tool or scraper to fetch fresh information and enrich the answer.
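The RAG-first, web-fallback flow described above can be sketched in a few lines. This is a minimal illustration, not code from the repository: `search_local`, `search_web`, the `Hit` type, and the 0.75 threshold are all hypothetical stand-ins for the real vector-store search and MCP web-search tool.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Hit:
    text: str
    source: str
    score: float  # similarity in [0, 1]

def search_local(query: str) -> List[Hit]:
    # Placeholder for a FAISS/Chroma similarity search over your docs.
    return [Hit("Cached answer about MCP.", "docs/mcp.md", 0.42)]

def search_web(query: str) -> List[Hit]:
    # Placeholder for the MCP web-search tool call.
    return [Hit("Fresh result about MCP.", "https://example.com", 0.90)]

def answer(query: str, threshold: float = 0.75) -> dict:
    hits = search_local(query)
    used_web = False
    # Fall back to web search only when local retrieval is weak or empty.
    if not hits or max(h.score for h in hits) < threshold:
        hits = hits + search_web(query)
        used_web = True
    # Return a unified response with citations from both sources,
    # strongest evidence first.
    citations = [h.source for h in sorted(hits, key=lambda h: h.score, reverse=True)]
    return {"citations": citations, "used_web": used_web}

result = answer("What changed in MCP this week?")
print(result["used_web"])  # True: the stub's best local score (0.42) is below 0.75
```

The same shape applies whatever stores and tools you plug in: the only orchestration decision is whether the best local score clears the threshold.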
How to install
Prerequisites:
- Python 3.8+
- Git
- Optional: Docker (for containerized deployment)
1. Clone the repository:
   git clone https://github.com/PranithChowdary/Real-time-web-search-RAG-with-MCP.git
   cd Real-time-web-search-RAG-with-MCP

2. Create a virtual environment and activate it:
   python -m venv venv
   source venv/bin/activate   # On Windows use: venv\Scripts\activate

3. Install dependencies:
   pip install -r requirements.txt

4. Configure environment variables:
   - Copy the example env file: cp .env.example .env
   - Add your API keys and any required settings (OPENAI_API_KEY, SERPAPI_API_KEY, etc.)

5. Run the MCP server (as described in the mcp_config section or using the CLI):
   python -m src.mcp_server.server

6. (Optional) Run with Docker if preferred:
   docker-compose up -d
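After copying .env.example, the file you fill in will look roughly like this. The key names come from the commands in this README; the CUSTOM_VECTORMAP value shown is an assumed example, and the authoritative key list is .env.example itself:

```
OPENAI_API_KEY=your-openai-api-key
# Only needed if using SerpAPI for web search:
SERPAPI_API_KEY=your-serpapi-key
# Path or identifier for the local vector store, if applicable (example value):
CUSTOM_VECTORMAP=./data/vector_store
# Optional, only when using Azure OpenAI:
AZURE_OPENAI_API_KEY=
```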
Notes:
- See docs/DEPLOYMENT.md for production deployment and security considerations.
- The sample environment file (.env.example) shows typical keys to configure.
Additional notes
- Ensure your API keys are kept secure and not committed to VCS.
- If you rely on vector stores, periodically refresh or update your local indexes to improve accuracy.
- The MCP integration will surface sources/citations for each answer; verify external sources when precision is critical.
- If web search results are noisy, you can adjust the orchestration logic in your local code to favor RAG results and only fallback to MCP for certain query patterns.
- For debugging, check logs for MCP calls and ensure that the web/search tools are reachable from your execution environment.
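The tip above about favoring RAG and falling back to web search only for certain query patterns can be implemented as a small routing heuristic. The patterns and function name below are illustrative assumptions, not part of the repo:

```python
import re

# Route only "freshness" queries to web search; keep everything else on the
# local RAG path. Tune the patterns to your own query traffic.
FRESHNESS_PATTERNS = [
    r"\b(today|yesterday|this week|latest|breaking|current)\b",
    r"\b20\d{2}\b",  # explicit years often signal time-sensitive intent
]

def needs_web_search(query: str) -> bool:
    q = query.lower()
    return any(re.search(p, q) for p in FRESHNESS_PATTERNS)

print(needs_web_search("What is retrieval-augmented generation?"))  # False
print(needs_web_search("Latest MCP spec changes this week"))        # True
```

Gating the MCP fallback this way keeps noisy web results out of answers that your local index already handles well.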
Related MCP Servers
- chunkhound: local-first codebase intelligence.
- VectorCode: a code repository indexing tool to supercharge your LLM experience.
- mcp-pinecone: Model Context Protocol server for reading and writing from Pinecone; rudimentary RAG support.
- langgraph-ai: LangGraph AI repository.
- Archive-Agent: find your files with natural language and ask questions.
- mcp-raganything: API/MCP wrapper for RagAnything.