browser-operator-core
Browser Operator - The AI browser with a built-in multi-agent platform! An open-source alternative to ChatGPT Atlas, Perplexity Comet, Dia, and the Microsoft Copilot Edge browser.
claude mcp add --transport stdio browseroperator-browser-operator-core node path/to/server.js \
  --env BO_LOG_LEVEL="info" \
  --env BO_CLI_DISABLE_PROMPTS="false"
How to use
This MCP server appears to be part of the Browser Operator Core project, which provides an automation and AI-assisted browser experience that runs locally on your machine. The server layer coordinates multi-agent workflows, enabling tasks such as research, data gathering, and automation within a privacy-first local environment. Once the MCP server is running, you can typically access its API or CLI tools to start agents, orchestrate tasks, and monitor progress. The system emphasizes local processing and supports a range of AI providers: local models via Ollama, and cloud providers such as OpenAI, Claude, Gemini, and LiteLLM.
How to install
Prerequisites:
- Node.js (recommended LTS) or a compatible runtime for the server environment
- Access to the repository sources or a built server.js entry point
- If using local models, ensure Ollama or equivalent local model runtimes are installed per your chosen AI provider setup
Installation steps:
- Install Node.js from https://nodejs.org if you don’t already have it.
- Clone the repository and enter it:
  git clone https://github.com/BrowserOperator/browser-operator-core.git
  cd browser-operator-core
- Install dependencies (if package.json exists): npm install
- Build or prepare the server entry if needed (depends on project setup): npm run build
- Run the MCP server (example): node path/to/server.js
- Verify the server is listening on the expected port (e.g., http://localhost:PORT) and consult the docs for API endpoints or CLI commands.
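Taken together, the steps above can be sketched as the following shell sequence. This is a sketch under assumptions: the build script name and the server entry-point path are placeholders, so check the repository's package.json and docs for the actual values.

```shell
# Clone the repository and enter it
git clone https://github.com/BrowserOperator/browser-operator-core.git
cd browser-operator-core

# Install dependencies if a package.json is present
npm install

# Build step, if the project defines one (assumed script name)
npm run build

# Start the MCP server (replace with the actual entry point)
node path/to/server.js
```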
Notes:
- If a Docker image or Python/uvx entry is provided in your environment, prefer those execution paths as documented in the repo docs.
- Update environment variables to configure providers, logging, and security as needed.
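As a concrete sketch of that configuration, the variables below can be exported in the shell (or placed in a .env file). BO_LOG_LEVEL and BO_CLI_DISABLE_PROMPTS appear in the install command above; OPENAI_API_KEY is shown only as an example placeholder following the OpenAI SDK convention, and the key name for your provider may differ.

```shell
# Logging and prompt behavior for the Browser Operator server
export BO_LOG_LEVEL="info"
export BO_CLI_DISABLE_PROMPTS="false"

# Provider credentials (example placeholder; use the variable name
# your chosen provider's SDK expects)
export OPENAI_API_KEY="your-api-key-here"
```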
Additional notes
Tips and troubleshooting:
- Ensure your environment variables for AI providers (OpenAI, Claude, Gemini, LiteLLM, Groq, etc.) are correctly set according to the provider’s requirements.
- If the server won’t start, check for missing dependencies or incompatible Node.js versions. Look for a .env.sample or docs/ configuration guide in the repository.
- For privacy and offline operation, consider using local Ollama or other local model runtimes and configure the MCP server to route model calls locally.
- Start with a minimal configuration and gradually enable more providers and agents to isolate issues.
- If you’re using Docker, ensure proper port mappings and volume mounts so your local data and logs persist across restarts.
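If the repository ships a Docker image, a run command along these lines keeps data and logs on the host across restarts. The image name, port, and container paths here are hypothetical; substitute whatever the repo's Docker documentation specifies.

```shell
# Hypothetical image name, port, and mount paths; adjust per the repo docs
docker run -d \
  --name browser-operator \
  -p 8080:8080 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/logs:/app/logs" \
  browseroperator/browser-operator-core:latest
```

The volume mounts map host directories into the container so agent data and logs survive container recreation; the -p flag must match the port the server actually listens on.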
Related MCP Servers
dify
Production-ready platform for agentic workflow development.
ruflo
🌊 The leading agent orchestration platform for Claude. Deploy intelligent multi-agent swarms, coordinate autonomous workflows, and build conversational AI systems. Features enterprise-grade architecture, distributed swarm intelligence, RAG integration, and native Claude Code / Codex Integration
bytebot
Bytebot is a self-hosted AI desktop agent that automates computer tasks through natural language commands, operating within a containerized Linux desktop environment.
solace-agent-mesh
An event-driven framework designed to build and orchestrate multi-agent AI systems. It enables seamless integration of AI agents with real-world data sources and systems, facilitating complex, multi-step workflows.
agentic-radar
A security scanner for your LLM agentic workflows
concierge
🚀 Universal SDK for building next-gen MCP servers