
kindly-web-search

Kindly Web Search MCP Server: Web search + robust content retrieval for AI coding tools (Claude Code, Codex, Cursor, GitHub Copilot, Gemini, etc.) and AI agents (Claude Desktop, OpenClaw, etc.). Supports Serper, Tavily, and SearXNG.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio shelpuk-ai-technology-consulting-kindly-web-search-mcp-server \
  --env GITHUB_TOKEN="token with public repo access (optional)" \
  --env SERPER_API_KEY="your Serper API key (optional)" \
  --env TAVILY_API_KEY="your Tavily API key (optional)" \
  --env SEARXNG_BASE_URL="base URL for self-hosted SearXNG (optional)" \
  -- uvx kindly-web-search-mcp-server

How to use

Once running, you can query the Kindly Web Search MCP server through your MCP client or orchestrator using the provided tools:

  • web_search(query, num_results) — fetches top results with structured fields such as title, link, snippet, and page_content (Markdown, best-effort).
  • get_content(url) — retrieves the full page content (Markdown, best-effort) for deeper context.

The server uses Serper as the primary search provider when an API key is configured, falls back to Tavily, and can fall back further to a self-hosted SearXNG instance configured via SEARXNG_BASE_URL. For universal page_content extraction, a Chromium-based browser must be installed on the same machine, though specialized sources (StackExchange, GitHub Issues, Wikipedia, arXiv) can still work without the browser. An optional GitHub token (GITHUB_TOKEN) improves the formatting and depth of GitHub Issues results by rendering questions, answers, comments, reactions, and metadata in a more LLM-friendly way.
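Under the hood, an MCP client invokes these tools with standard JSON-RPC 2.0 `tools/call` requests over the stdio transport. A minimal sketch of the request a client would send for web_search (the query and result count here are illustrative, not part of this server's docs):

```python
import json

def make_tools_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the MCP protocol."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: ask the server for the top 5 results for a query.
req = make_tools_call(1, "web_search", {"query": "uv python packaging", "num_results": 5})
print(json.dumps(req, indent=2))
```

Your MCP client builds and sends these messages for you; the sketch only shows the tool name and argument shape the server expects.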

How to install

Prerequisites

  • A machine with Python 3.13+ (3.14 supported; optional extras may be disabled on 3.14)
  • A Chromium-based browser installed on the same machine running the MCP client (Chrome/Chromium/Edge/Brave)
  • An API key for at least one search provider: Serper (SERPER_API_KEY), Tavily (TAVILY_API_KEY), or a base URL for a self-hosted SearXNG instance (SEARXNG_BASE_URL)
  • Optional GITHUB_TOKEN for improved GitHub content rendering

Step 1: Install uv (provides the uvx runner for Python-based MCP servers)

  • macOS / Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
  • Windows (PowerShell):
irm https://astral.sh/uv/install.ps1 | iex
  • Verify installation:
uvx --version

Step 2: Install a Chromium-based browser (required for full page_content extraction)

  • macOS:
brew install --cask chromium
  • Windows: Install Chrome or Edge (and note the path if needed by your setup)
  • Linux (Ubuntu/Debian): ensure Chrome/Chromium is installed via your package manager or from the official sources
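As a quick sanity check, you can verify that a Chromium-based browser is discoverable on PATH. This is a sketch: the candidate executable names below are common defaults and may differ for your platform or install method.

```python
import shutil

# Common Chromium-family executable names; adjust for your platform/install.
CANDIDATES = ["chromium", "chromium-browser", "google-chrome", "chrome", "msedge", "brave"]

def find_chromium():
    """Return the path of the first Chromium-based browser found on PATH, or None."""
    for name in CANDIDATES:
        path = shutil.which(name)
        if path:
            return path
    return None

browser = find_chromium()
print(browser or "No Chromium-based browser found on PATH")
```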

Step 3: Install and run the Kindly Web Search MCP server

uvx kindly-web-search-mcp-server
  • If you prefer to pre-install the package as a persistent tool and then run it explicitly, you can do:
uv tool install kindly-web-search-mcp-server
kindly-web-search-mcp-server
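Most MCP clients (Claude Desktop, Cursor, and others) launch stdio servers from a JSON config entry along these lines. This is a sketch: the exact file location and top-level key vary by client, the server name "kindly-web-search" is an arbitrary label, and the env values are placeholders.

```json
{
  "mcpServers": {
    "kindly-web-search": {
      "command": "uvx",
      "args": ["kindly-web-search-mcp-server"],
      "env": {
        "SERPER_API_KEY": "your Serper API key (optional)",
        "TAVILY_API_KEY": "your Tavily API key (optional)",
        "SEARXNG_BASE_URL": "base URL for self-hosted SearXNG (optional)",
        "GITHUB_TOKEN": "token with public repo access (optional)"
      }
    }
  }
}
```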

Step 4: Configure environment variables (optional but recommended)

  • Set API keys and base URL as needed (see mcp_config for details) and restart the server if you change them.
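If you run the server manually rather than letting the MCP client launch it, the variables can be exported in the launching shell. A sketch for macOS/Linux; every value below is a placeholder to replace with your own:

```shell
# Placeholders: substitute your real keys / URL before use.
export SERPER_API_KEY="your-serper-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
export SEARXNG_BASE_URL="https://searxng.example.com"
export GITHUB_TOKEN="your-github-token"
uvx kindly-web-search-mcp-server
```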

Step 5: Verify the server is running

  • The server communicates over stdio, so there is no listening address/port. Check your MCP client's logs to confirm the server started, then try a simple web_search query from the client.

Additional notes

Tips and considerations:

  • Provide SERPER_API_KEY for best results; Tavily is a solid fallback. If neither is configured, SEARXNG_BASE_URL can be used for a self-hosted search setup.
  • Enabling GITHUB_TOKEN often yields richer GitHub Issue results (questions, answers, comments, reactions, metadata) and can improve usefulness in coding contexts.
  • For universal page_content extraction, ensure a compatible Chromium-based browser is accessible on the same host as the MCP client.
  • If you encounter rate limits, rotate API keys or add a GitHub token to increase quota. Monitor logs for browser automation hints if page_content extraction fails on certain sites.
  • The server exposes high-quality content in a single call (no need for a separate scraping step), which helps reduce token usage when driving AI coding assistants.
