
browser-operator-core

Browser Operator - the AI browser with a built-in multi-agent platform. An open-source alternative to ChatGPT Atlas, Perplexity Comet, Dia, and the Microsoft Copilot Edge browser.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio browseroperator-browser-operator-core node path/to/server.js \
  --env BO_LOG_LEVEL="info" \
  --env BO_CLI_DISABLE_PROMPTS="false"
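
If you prefer file-based configuration, Claude Code can also register MCP servers through a project-level .mcp.json. The sketch below mirrors the command above (server name, entry path, and environment variables are taken from that command; adjust the path for your checkout):

```json
{
  "mcpServers": {
    "browseroperator-browser-operator-core": {
      "command": "node",
      "args": ["path/to/server.js"],
      "env": {
        "BO_LOG_LEVEL": "info",
        "BO_CLI_DISABLE_PROMPTS": "false"
      }
    }
  }
}
```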

How to use

This MCP server is part of the Browser Operator Core project, which provides an AI-assisted, automation-capable browser experience that runs locally on your machine. The server layer coordinates multi-agent workflows, enabling tasks such as research, data gathering, and automation in a privacy-first local environment. Once the MCP server is running, you can use its API or CLI tools to start agents, orchestrate tasks, and monitor progress. The system emphasizes local processing and supports a range of AI providers, including local models via Ollama and cloud models via OpenAI, Claude, Gemini, and LiteLLM.

How to install

Prerequisites:

  • Node.js (an LTS release is recommended) or a compatible runtime for the server environment
  • Access to the repository sources or a built server.js entry point
  • For local models, install Ollama or an equivalent local model runtime per your chosen AI provider setup

Installation steps:

  1. Install Node.js from https://nodejs.org if you don’t already have it.
  2. Clone the repository and enter it:
     git clone https://github.com/BrowserOperator/browser-operator-core.git
     cd browser-operator-core
  3. Install dependencies (if package.json exists): npm install
  4. Build or prepare the server entry if needed (depends on project setup): npm run build
  5. Run the MCP server (example): node path/to/server.js
  6. Verify the server is listening on the expected port (e.g., http://localhost:PORT) and consult the docs for API endpoints or CLI commands.
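
Before step 2, it can help to confirm the toolchain is present. Exact version floors are an assumption here; check the "engines" field in the repo's package.json for the authoritative requirement:

```shell
# Confirm the tools the steps above rely on are installed.
git --version || echo "install git for step 2"
node --version   # an LTS release is recommended
npm --version    # ships with Node.js
```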

Notes:

  • If the repo provides a Docker image or a Python/uvx entry point, prefer those execution paths as documented in the repo docs.
  • Update environment variables to configure providers, logging, and security as needed.
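
The two variables from the install command can be exported in your shell before launching the server. Any BO_* variables beyond these two, and the exact set of accepted log levels, are assumptions to verify against the repo docs:

```shell
# Variables shown in the install command above; export them before
# starting the server with `node path/to/server.js`.
export BO_LOG_LEVEL="info"            # log verbosity (level names assumed)
export BO_CLI_DISABLE_PROMPTS="false" # keep interactive CLI prompts enabled
```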

Additional notes

Tips and troubleshooting:

  • Ensure your environment variables for AI providers (OpenAI, Claude, Gemini, LiteLLM, Groq, etc.) are correctly set according to the provider’s requirements.
  • If the server won’t start, check for missing dependencies or incompatible Node.js versions. Look for a .env.sample or docs/ configuration guide in the repository.
  • For privacy and offline operation, consider using local Ollama or other local model runtimes and configure the MCP server to route model calls locally.
  • Start with a minimal configuration and gradually enable more providers and agents to isolate issues.
  • If you’re using Docker, ensure proper port mappings and volume mounts so your local data and logs persist across restarts.
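
For the Docker tip above, a minimal docker-compose.yml sketch showing a port mapping and persistent volumes. The image name, port, and container paths are placeholders, not taken from the repo; substitute the values your setup actually uses:

```yaml
services:
  browser-operator:
    image: browser-operator-core:local   # placeholder; build or pull per repo docs
    ports:
      - "3000:3000"                      # host:container; match your server's port
    volumes:
      - ./data:/app/data                 # persist local data across restarts
      - ./logs:/app/logs                 # persist logs across restarts
    environment:
      BO_LOG_LEVEL: "info"
```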
