
SearChat

Search + Chat = SearChat (AI Chat with Search). An AI conversational search engine supporting the OpenAI/Anthropic/VertexAI/Gemini APIs, Deep Research, the SearXNG meta-search engine, and one-command Docker deployment.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio sear-chat-searchat node apps/server.js \
  --env PORT="3000" \
  --env BING_API_KEY="your-bing-api-key" \
  --env GOOGLE_CSE_ID="your-google-cse-id" \
  --env GOOGLE_API_KEY="your-google-api-key" \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env SEARXNG_HOSTNAME="http://searxng:8080" \
  --env ANTHROPIC_API_KEY="your-anthropic-api-key" \
  --env DEEP_MAX_RESEARCH_LOOPS="3" \
  --env DEEP_NUMBER_OF_INITIAL_QUERIES="3"

How to use

SearChat is an AI-powered conversational search engine that combines multi-model AI capabilities with multi-engine search integration to deliver real-time, context-aware search results. The server exposes endpoints to perform natural language queries, manage conversational context, and orchestrate Deep Research workflows across supported models (OpenAI, Anthropic, Google Gemini, etc.) and search engines (SearXNG, Bing, Google, Tavily, Exa, Bocha, and more).

To get started, install and run the MCP server, then use the provided API surface to send user messages and receive structured, citation-enabled results. The platform supports multi-turn conversations, history persistence in the browser, and a Deep Research mode that iteratively expands queries and compiles structured reports with citations.

You can configure AI providers and search engines via a model configuration file (model.json) and environment variables, giving precise control over how results are sourced and formatted. When interacting through MCP, you typically issue a user message and receive a response that may include follow-up questions, suggested searches, and, when Deep Research is engaged, a compiled report.
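The exact HTTP API surface is not documented on this page, so the endpoint path and payload field names below are illustrative assumptions, not the project's confirmed schema. A minimal sketch of what a conversational query might look like:

```javascript
// Hypothetical request payload for a SearChat conversational query.
// The field names ("messages", "deepResearch", "engine") and the
// /api/chat path are assumptions -- check your deployment's API docs.
const payload = {
  messages: [{ role: "user", content: "What is SearXNG and how does it work?" }],
  deepResearch: false, // set true to engage the iterative Deep Research mode
  engine: "searxng",   // which configured search backend to route through
};

// Against a running server (Node 18+ provides a global fetch):
// fetch("http://localhost:3000/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// }).then((r) => r.json()).then(console.log);

console.log(JSON.stringify(payload));
```

A Deep Research response would additionally carry the compiled, citation-annotated report described above.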

How to install

Prerequisites:

  • Node.js and npm installed on your system
  • Access to required API keys (OpenAI, Anthropic, Google, Bing, etc.)
  • Optional: Docker if you prefer containerized deployment
  1. Clone the repository containing the SearChat MCP server:

     git clone https://github.com/sear-chat/SearChat.git
     cd SearChat

  2. Install dependencies for the MCP server (Node.js backend):

     npm install

  3. Configure environment and models

    • Create or edit a configuration file (e.g., .env or model.json, as referenced in the deployment docs) to provide API keys and backend settings
    • Example environment variables to set (adjust to your setup):

      PORT=3000
      OPENAI_API_KEY=your-openai-api-key
      ANTHROPIC_API_KEY=your-anthropic-api-key
      GOOGLE_API_KEY=your-google-api-key
      GOOGLE_CSE_ID=your-google-cse-id
      BING_API_KEY=your-bing-api-key
      SEARXNG_HOSTNAME=http://searxng:8080
  4. Run the server locally:

     npm start

     or, to use the entry script directly:

     node apps/server.js

  5. (Optional) Docker deployment

    • Ensure you have Docker and Docker Compose installed
    • Use the provided docker-compose.yaml as a starting point and customize environment variables accordingly
    • Start: docker-compose up -d
  6. Verify the server is running

    • Navigate to http://localhost:3000 (or the port you configured)
    • Use the MCP interface or API clients to send queries and receive results
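Step 3 above mentions wiring providers through model.json. The schema varies by release, so the fragment below is only a plausible sketch (the field names are assumptions, not the project's documented format), included to show the kind of provider-to-model mapping involved:

```json
{
  "models": [
    {
      "provider": "openai",
      "name": "gpt-4o-mini",
      "apiKeyEnv": "OPENAI_API_KEY"
    },
    {
      "provider": "anthropic",
      "name": "claude-3-5-sonnet",
      "apiKeyEnv": "ANTHROPIC_API_KEY"
    }
  ]
}
```

Consult the repository's own deployment docs for the authoritative schema before copying this.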

Additional notes

Tips and common considerations:

  • API keys: Keep keys secure; don’t commit them to version control. Use environment variable injection or secret management.
  • Model selection: The Deep Research feature relies on selecting appropriate models for intent analysis and citation capability. Configure model.json with suitable providers and models.
  • Performance: Multi-engine searches can incur latency. Use caching where appropriate, and consider lowering DEEP_NUMBER_OF_INITIAL_QUERIES for faster responses in low-fidelity drafts.
  • Citations: You can choose citation formats in the Deep Research results (URL-based or simple citation tag). Ensure the chosen format aligns with your front-end rendering expectations.
  • Updates: When updating dependencies or models, re-run dependency installation and validate all API integrations (OpenAI, Google, Bing, etc.).
  • Environment parity: For Docker deployments, mirror the environment variables and model configurations used in development to avoid runtime surprises.
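To keep keys out of version control, as the notes above advise, supply them through the environment and read them once at startup. A minimal sketch, with the caveat that the config shape here is illustrative and not the server's actual code:

```javascript
// Read the same variables the install command sets, with safe fallbacks,
// so secrets never need to live in the repository.
const config = {
  port: Number(process.env.PORT ?? 3000),
  openaiKey: process.env.OPENAI_API_KEY ?? "",
  anthropicKey: process.env.ANTHROPIC_API_KEY ?? "",
  searxngHostname: process.env.SEARXNG_HOSTNAME ?? "http://searxng:8080",
};

if (!config.openaiKey) {
  console.warn("OPENAI_API_KEY not set; OpenAI-backed models will be unavailable.");
}
console.log(config.port);
```

The same pattern keeps Docker and local development in parity: both read identical variable names, differing only in how the values are injected.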
