
tome

a magical LLM desktop client that makes it easy for *anyone* to use LLMs and MCP

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio runebookai-tome node path/to/server.js \
  --env PORT="8000" \
  --env LOG_LEVEL="info"

How to use

Tome acts as a desktop hub that lets you connect local or remote language models (LLMs) and manage MCP servers through an integrated interface. The Tome MCP server entry exposes a local or remote LLM within the MCP ecosystem, so you can call tools, fetch data, and chain reasoning across multiple MCP servers.

In practice, you install Tome on your computer, connect your preferred LLM provider (OpenAI, Ollama, Gemini, or local/alternative endpoints), and then use Tome's MCP tab to add and manage MCP servers. Built-in support covers npm, uvx, node, and Python-based MCP servers, so you can mix and match server implementations based on your environment and preferences. Once the Tome MCP server is configured, you can issue tool calls, manage context windows, and schedule tasks that combine your LLMs with MCP-enabled tools.

How to install

Prerequisites:

  • A supported OS (Windows or macOS; Linux support is planned).
  • Tome desktop app installed from the official releases.
  • Optional: local LLMs (e.g., Ollama) or access to remote LLM providers (OpenAI, Gemini, etc.).

Install steps:

  1. Download and install Tome from the official releases page.
  2. Launch Tome and go to the MCP tab.
  3. Install your first MCP server: in the MCP tab, choose to add a new MCP server and follow the on-screen steps. You can use a sample fetch command to populate a server reference, for example: uvx mcp-server-fetch
  4. If you prefer manual installation, ensure you have the required runtime for your chosen MCP server type (Node.js for node/npm-based servers, Python for uvx/python-based servers, etc.).
  5. Configure the server in Tome by providing the command, arguments, and any necessary environment variables (as shown in the mcp_config example). Start the server and verify it appears online in Tome.
  6. Open the MCP registry/console in Tome to install or connect additional MCP servers as needed.
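Step 5 refers to an mcp_config entry. The exact schema Tome uses may differ, but MCP server configurations in this ecosystem commonly take the following shape; the server name, command, and environment values here are illustrative placeholders matching the fetch example above, not values taken from Tome's documentation:

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "env": {
        "PORT": "8000",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```

The command and args fields tell the client how to launch the server process over stdio, and env supplies per-server environment variables.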

Additional notes

Tips and common issues:

  • Ensure your chosen LLM provider is reachable from Tome (network access or proper API keys/credentials).
  • When using local LLMs (like Ollama) with MCP, verify the local endpoints (URLs and ports) are correctly configured in Tome.
  • If you encounter port conflicts, adjust the PORT value in the server's environment variables and update the corresponding command/args in mcp_config.
  • The registry integration (Smithery.ai) provides thousands of MCP servers; use the MCP tab to search and install additional servers without manual setup.
  • For advanced workflows, you can schedule tasks to run MCP-enabled prompts hourly or at fixed times, enabling automated data gathering or maintenance tasks.
  • If you switch LLM providers or change endpoints, reconfigure the MCP server to point to the new endpoint and retest connectivity.
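For the port-conflict tip above, a quick way to check whether a candidate PORT is already taken before editing the server's environment variables is a small standard-library script. This is a generic sketch, not part of Tome itself:

```python
import socket


def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success, i.e. something is listening.
        return s.connect_ex((host, port)) != 0


if __name__ == "__main__":
    # Try the default PORT first, then fall back to nearby alternatives.
    for candidate in (8000, 8001, 8002):
        if port_is_free(candidate):
            print(f"Port {candidate} is available")
            break
```

If the default 8000 is busy, pick the first free candidate and update both the PORT environment variable and any matching args in mcp_config.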
