tome
a magical LLM desktop client that makes it easy for *anyone* to use LLMs and MCP
```shell
claude mcp add --transport stdio runebookai-tome \
  --env PORT="8000" \
  --env LOG_LEVEL="info" \
  -- node path/to/server.js
```
How to use
Tome acts as a desktop hub that lets you connect local or remote language models (LLMs) and manage MCP servers through an integrated interface. The Tome MCP server entry exposes a local or remote LLM within the MCP ecosystem, so you can call tools, fetch data, and chain reasoning across multiple MCP servers.

In practice, you install Tome on your computer, connect your preferred LLM provider (OpenAI, Ollama, Gemini, or local/alternative endpoints), and then use Tome's MCP tab to add and manage MCP servers. Built-in support covers npm, uvx, node, and Python-based MCP servers, so you can mix and match server implementations to suit your environment. Once a Tome MCP server is configured, you can issue tool calls, manage context windows, and schedule tasks that combine your LLMs with MCP-enabled tools.
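As a rough sketch of what such a server entry looks like, the following `mcp_config`-style fragment mirrors the sample command near the top of this page; the server name, script path, and environment variables are illustrative, not prescriptive:

```json
{
  "mcpServers": {
    "runebookai-tome": {
      "command": "node",
      "args": ["path/to/server.js"],
      "env": {
        "PORT": "8000",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

The `command`/`args`/`env` trio is the same information Tome asks for when you add a server in the MCP tab.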
How to install
Prerequisites:
- A supported OS (Windows or macOS; Linux support is planned).
- Tome desktop app installed from the official releases.
- Optional: local LLMs (e.g., Ollama) or access to remote LLM providers (OpenAI, Gemini, etc.).
Install steps:
- Download and install Tome from the official releases page.
- Launch Tome and go to the MCP tab.
- Install your first MCP server: in the MCP tab, choose to add a new MCP server and follow the on-screen steps. You can use a sample fetch command to populate a server reference, for example: `uvx mcp-server-fetch`
- If you prefer manual installation, ensure you have the required runtime for your chosen MCP server type (Node.js for node/npm-based servers, Python for uvx/python-based servers, etc.).
- Configure the server in Tome by providing the command, arguments, and any necessary environment variables (as shown in the mcp_config example). Start the server and verify it appears online in Tome.
- Open the MCP registry/console in Tome to install or connect additional MCP servers as needed.
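For the manual-install path above, a quick way to confirm the common runtimes are on your PATH before configuring a server (the tool names below are the usual ones for node/npm- and uvx/Python-based servers; adjust the list for your setup):

```shell
#!/bin/sh
# Report which common MCP server runtimes are available on this machine.
missing=0
for tool in node npm python3 uvx; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "runtimes missing: $missing"
```

Anything reported missing only matters if a server you plan to run needs it.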
Additional notes
Tips and common issues:
- Ensure your chosen LLM provider is reachable from Tome (network access or proper API keys/credentials).
- When using local LLMs (like Ollama) with MCP, verify the local endpoints (URLs and ports) are correctly configured in Tome.
- If you encounter port conflicts, adjust the PORT value in the server's environment variables and update the corresponding command/args in mcp_config.
- The registry integration (Smithery.ai) provides thousands of MCP servers; use the MCP tab to search and install additional servers without manual setup.
- For advanced workflows, you can schedule tasks to run MCP-enabled prompts hourly or at fixed times, enabling automated data gathering or maintenance tasks.
- If you switch LLM providers or change endpoints, reconfigure the MCP server to point to the new endpoint and retest connectivity.
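For the port-conflict tip above, one way to check whether a port is already taken before editing the PORT variable (assumes a Unix-like system with lsof installed; 8000 matches the sample configuration):

```shell
#!/bin/sh
# Check whether anything is already listening on the candidate port.
PORT=8000
if lsof -i ":$PORT" >/dev/null 2>&1; then
  echo "port $PORT is in use; choose another and update PORT in mcp_config"
else
  echo "port $PORT appears free"
fi
```

If the port is taken, update both the server's env vars and any command/args that reference it, then restart the server in Tome.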
Related MCP Servers
anything-llm
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, No-code agent builder, MCP compatibility, and more.
AstrBot
Agentic IM chatbot infrastructure that integrates many IM platforms, LLMs, plugins, and AI features, and can serve as your openclaw alternative. ✨
mcp-agent
Build effective agents using Model Context Protocol and simple workflow patterns
SearChat
Search + Chat = SearChat (AI chat with search). Supports OpenAI/Anthropic/VertexAI/Gemini, DeepResearch, the SearXNG metasearch engine, and one-click Docker deployment.
paperbanana
Open source implementation and extension of Google Research’s PaperBanana for automated academic figures, diagrams, and research visuals, expanded to new domains like slide generation.
mcp-client-for-ollama
A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server, model switching, streaming responses, tool management, human-in-the-loop, thinking mode, model params config, MCP prompts, custom system prompt and saved preferences. Built for developers working with local LLMs.