WebSearch

[Self-hosted] A Model Context Protocol (MCP) server implementation that provides a web search capability over stdio transport. This server integrates with a WebSearch Crawler API to retrieve search results.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio mnhlt-websearch-mcp npx websearch-mcp

How to use

WebSearch-MCP exposes a web search capability over the MCP stdio transport. It acts as a bridge between AI assistants that support MCP and a WebSearch crawler API. The server provides a single MCP tool named web_search that accepts a query and optional parameters to fetch real-time web results. By configuring API_URL, you direct the MCP server to the crawler service, and MAX_SEARCH_RESULT controls how many results are returned when the request does not specify a limit. Clients such as Claude Desktop, Cursor IDE, or other MCP-enabled tools can invoke the web_search tool to retrieve up-to-date information from the web within your conversations.
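
As a sketch of the configuration described above, an MCP client entry for this server might look like the following (the server name "websearch" and the MAX_SEARCH_RESULT value are illustrative assumptions; API_URL is the documented default):

```json
{
  "mcpServers": {
    "websearch": {
      "command": "npx",
      "args": ["websearch-mcp"],
      "env": {
        "API_URL": "http://localhost:3001",
        "MAX_SEARCH_RESULT": "5"
      }
    }
  }
}
```

The env block is where the two variables mentioned above are set; the client launches the server as a child process and speaks to it over stdin/stdout.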

How to install

Prerequisites:

  • Node.js and npm installed on your machine
  • Optional: Smithery CLI for automated installation

Install via Smithery (recommended for Claude Desktop users):

npx -y @smithery/cli install @mnhlt/WebSearch-MCP --client claude

Manual installation (global npm install):

npm install -g websearch-mcp

Run directly with npx (no permanent install):

npx websearch-mcp

Whichever installation method you choose, ensure the crawler service is reachable at the API_URL you configure (default http://localhost:3001).
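
To illustrate, you can launch the server with both environment variables set explicitly (the API_URL value shown is the documented default; the result limit is an illustrative choice):

```shell
# Point the MCP server at a running crawler instance and
# cap the default number of results per search.
API_URL=http://localhost:3001 MAX_SEARCH_RESULT=5 npx websearch-mcp
```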

Additional notes

Tips:

  • The MCP server uses the environment variable API_URL to reach the WebSearch crawler API. Set it to the crawler service URL (default http://localhost:3001).
  • MAX_SEARCH_RESULT controls the default number of results when a request does not specify numResults. Adjust to balance result depth and token usage.
  • To run the full setup, you must also start the Docker-based crawler service as documented in the README, or point API_URL to an already running crawler instance.
  • If you encounter command-line issues on Windows, use the Windows-friendly MCP configuration shown in the README (cmd /c npx websearch-mcp).
  • Health and testing: verify the crawler API is up by requesting http://localhost:3001/health, and test searches with the curl example provided in the setup guide.
  • This MCP server uses stdio transport; ensure your MCP client is configured to communicate over stdin/stdout.
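
Since the server speaks JSON-RPC 2.0 over stdio, the tool invocation described above can be sketched as the message an MCP client writes to the server's stdin. This is a minimal framing sketch, not a full client; the argument names (query, numResults) follow the parameters mentioned above.

```python
import json


def make_web_search_call(query, num_results=None, request_id=1):
    """Build the JSON-RPC 2.0 `tools/call` message an MCP client
    sends over stdin to invoke the web_search tool."""
    arguments = {"query": query}
    if num_results is not None:
        # Optional: overrides the server-side MAX_SEARCH_RESULT default.
        arguments["numResults"] = num_results
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "web_search", "arguments": arguments},
    }
    # The MCP stdio transport sends one JSON message per line.
    return json.dumps(message)


print(make_web_search_call("latest TypeScript release", num_results=3))
```

A real client would also perform the MCP initialize handshake first and read the tool result back from the server's stdout; this snippet only shows the shape of the call.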
