mcts
A Bayesian Monte Carlo Tree Search (MCTS) Model Context Protocol server that lets Claude drive local Ollama models (and other LLM providers) for advanced MCTS-based analysis.
claude mcp add --transport stdio angrysky56-mcts-mcp-server \
  --env PYTHONPATH="/path/to/mcts-mcp-server" \
  -- uv run --directory /path/to/mcts-mcp-server/src/mcts_mcp_server server.py
How to use
This MCP server exposes an advanced Bayesian Monte Carlo Tree Search (MCTS) engine for AI-assisted analysis. Designed to work with Claude and similar LLMs, it provides:
- Multi-iteration analysis, with multiple simulations per iteration
- Bayesian evaluation of nodes, using Thompson sampling or UCT for node selection
- Multi-LLM support (Ollama, OpenAI, Anthropic, Google Gemini)
- State persistence across turns, classification of thoughts into philosophical approaches, and surfacing of surprising directions to help you explore topics deeply
To use it, deploy the MCP server and connect via the MCP protocol. When you request deep analysis, Claude drives the MCTS process: the engine initializes automatically, runs multiple iterations and simulations per iteration, and returns the best analysis found during the search, after which Claude synthesizes the final insights using the provided tooling.
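As a rough illustration of the two selection policies named above, here is a minimal sketch, not the server's actual implementation: the `Node` fields, the Beta(1, 1) priors, and the default exploration weight are illustrative assumptions.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Node:
    visits: int = 0
    reward_sum: float = 0.0   # cumulative reward in [0, 1]
    alpha: float = 1.0        # Beta posterior parameters for Thompson sampling
    beta: float = 1.0

def uct_select(children: list[Node], parent_visits: int,
               exploration_weight: float = 1.41) -> int:
    """Pick the child maximizing mean reward plus an exploration bonus.
    Unvisited children score infinity, so they are always tried first."""
    def score(c: Node) -> float:
        if c.visits == 0:
            return float("inf")
        return c.reward_sum / c.visits + exploration_weight * math.sqrt(
            math.log(parent_visits) / c.visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

def thompson_select(children: list[Node], rng: random.Random) -> int:
    """Draw a plausible reward for each child from its Beta posterior
    and pick the best draw; exploration comes from posterior uncertainty."""
    draws = [rng.betavariate(c.alpha, c.beta) for c in children]
    return max(range(len(draws)), key=lambda i: draws[i])
```

A larger `exploration_weight` inflates the UCT bonus term, biasing the search toward less-visited branches; Thompson sampling needs no such knob because uncertain children naturally produce more varied draws.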
How to install
Prerequisites:
- Python 3.10+ installed
- Access to a shell/terminal
- UV (Astral UV) recommended for dependency resolution and speed
Installation steps:
- Clone the repository (replace with the actual repo URL):
git clone https://github.com/angrysky56/mcts-mcp-server.git
cd mcts-mcp-server
- Install UV if not already installed (preferred method):
# Install UV globally
curl -LsSf https://astral.sh/uv/install.sh | sh
- Set up and install dependencies using UV:
# Create and activate a virtual environment using UV
uv venv .venv
source .venv/bin/activate
# Install required Python packages
uv pip install -r requirements.txt
- Run the setup (if a setup.sh is provided):
./setup.sh
- Alternatively, run the server via UV directly (example):
uv run --directory /path/to/mcts-mcp-server/src/mcts_mcp_server server.py
Notes:
- Ensure your environment variables (e.g., API keys) are set in a .env file at the repository root as described in the README.
- If you prefer a manual setup without UV, you can install dependencies with your system Python and pip, but UV is recommended for dependency resolution.
Additional notes
Environment and configuration tips:
- API keys for LLM providers should reside in a .env file at the repository root. Example keys include OPENAI_API_KEY, ANTHROPIC_API_KEY, and GEMINI_API_KEY.
- You can customize the MCTS run via the provided tooling (initialize_mcts, run_mcts, set_active_llm, etc.). Parameters like max_iterations, simulations_per_iteration, and exploration_weight govern performance and depth of search; adjust them to balance speed and thoroughness.
- If Claude Desktop integration is used, update the paths in claude_desktop_config.json to reflect your local server location.
- For persistence across turns, ensure the state directory specified by the server is writable and backed up if needed.
- If you encounter issues with model providers, verify the .env defaults (DEFAULT_LLM_PROVIDER, DEFAULT_MODEL_NAME) and the provider-specific configuration in your session.
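Putting the tips above together, a repository-root `.env` might look like the following. The key names are taken from this README; every value shown is a placeholder to replace with your own.

```env
# LLM provider credentials (set only the ones you use)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=your-gemini-key
# Defaults used when no provider/model is selected in the session
DEFAULT_LLM_PROVIDER=ollama
DEFAULT_MODEL_NAME=your-model-name
```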
Related MCP Servers
mcp-google-ads
An MCP tool that connects Google Ads with Claude AI/Cursor and others, allowing you to analyze your advertising data through natural language conversations. This integration gives you access to campaign information, performance metrics, keyword analytics, and ad management—all through simple chat with Claude, Cursor or Windsurf.
mcp-rquest
An MCP server providing realistic browser-like HTTP request capabilities with accurate TLS/JA3/JA4 fingerprints for bypassing anti-bot measures. It also supports converting PDF and HTML documents to Markdown for easier processing by LLMs.
google-search-console
Connects directly to your Google Search Console account via the official API, letting you access key data right from AI tools like Claude Desktop, the OpenAI Agents SDK, and others.
rest-to-adapter
A Python library for converting REST API specifications into MCP (Model Context Protocol) tools for AI agents.
coder_db
An intelligent code memory system that leverages vector embeddings, structured databases, and knowledge graphs to store, retrieve, and analyze code patterns with semantic search capabilities, quality metrics, and relationship modeling. Designed to enhance programming workflows through contextual recall of best practices, algorithms, and solutions.
system_information_mcp
DevEnvInfoServer - Cursor MCP Server for Development Environment Information