
aleph

Skill + MCP server to turn your agent into an RLM. Load context, iterate with search/code/think tools, converge on answers.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio hmbown-aleph aleph --enable-actions --workspace-mode any --tool-docs concise

How to use

Aleph is an MCP server that runs as a Python process and exposes tools to the hosting model so it can load large files or codebases, search and inspect loaded contexts, execute Python code over that content, and orchestrate sub-queries and recipe pipelines. The server keeps working data in memory and provides actions such as load_file, load_context, search_context, peek_context, exec_python, and save_session. In an MCP-enabled workflow, you configure the Aleph server in your MCP client and then call its tools through the standard MCP interface, so the model can analyze data progressively without raw content being pasted into prompts. A typical session loads a target file or text into a context, searches or inspects it, executes Python snippets to derive insights, and saves or restores sessions across long investigations. The included command-line entry point (aleph) is what you wire into your MCP client; the model then drives the analysis loop by calling load_file, search_context, exec_python, and the other tools as needed.

Usage patterns often begin with loading a target file (or raw text) into a dedicated context, then running searches to locate relevant sections, peeking at specific ranges for quick summaries, and finally executing Python code to compute metrics or extract structured results. Recipes and sub-query workflows enable chaining multiple steps (search, map_sub_query, aggregate, finalize) to produce a concise answer or report while keeping prompts compact. If you are integrating with Claude Code or Codex CLI, you can use the /aleph or $aleph skill prompts to trigger this workflow and then rely on the MCP-governed tool calls to do the heavy lifting in the Aleph Python process.
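The load → search → peek → exec loop described above can be sketched in plain Python. This is an illustration of the pattern only, not the Aleph API: the helper functions below are hypothetical stand-ins for the load_context, search_context, peek_context, and exec_python tool calls.

```python
import re

# In-memory store standing in for Aleph's context storage.
CONTEXTS = {}

def load_context(name, text):
    # Stand-in for load_file/load_context: bring content into a named context.
    CONTEXTS[name] = text

def search_context(name, pattern):
    # Stand-in for search_context: locate relevant sections by regex.
    return [m.start() for m in re.finditer(pattern, CONTEXTS[name])]

def peek_context(name, start, length=40):
    # Stand-in for peek_context: inspect a specific range for a quick look.
    return CONTEXTS[name][start:start + length]

def exec_python(name, fn):
    # Stand-in for exec_python: compute a metric over the loaded content.
    return fn(CONTEXTS[name])

load_context("log", "INFO start\nERROR disk full\nINFO retry\nERROR disk full\n")
hits = search_context("log", r"ERROR")                 # find candidate sections
snippets = [peek_context("log", h, 15) for h in hits]  # inspect each match
error_count = exec_python("log", lambda t: t.count("ERROR"))
print(error_count)  # 2
```

The point of the pattern is that only the small derived results (match positions, snippets, the final count) ever travel back to the model, while the bulk of the content stays inside the server process.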

How to install

Prerequisites:

  • Python 3.10+ installed on the host
  • Access to install Python packages (pip)
  • Optional: a preferred MCP client setup to wire the server into your assistant

Installation steps:

  1. Install the Aleph package with MCP support from PyPI:
     pip install "aleph-rlm[mcp]"
  2. Verify the installation and the available commands:
     python -m pip show aleph-rlm
     aleph --help
  3. Configure MCP on the client side: create or update your MCP config file and make sure the client can start the Aleph server with the provided command and arguments.
  4. Run or wire up the server through your MCP client workflow so that the client references the Aleph server in its mcp_config block.
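For clients configured through a JSON file, a minimal sketch of such a config might look like the following. The mcpServers layout shown here follows the common Claude Desktop convention, and the exact keys depend on your MCP client, so treat this as an assumption to adapt rather than a verified configuration; the command and arguments mirror the install command at the top of this page.

```json
{
  "mcpServers": {
    "hmbown-aleph": {
      "command": "aleph",
      "args": ["--enable-actions", "--workspace-mode", "any", "--tool-docs", "concise"]
    }
  }
}
```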

Note: If you are deploying in a container or orchestration environment, you can adapt the command and arguments to your runtime, but keep the same server identifier (aleph) for MCP wiring.

Additional notes

  • The Aleph server loads data into memory to optimize iterative analysis; rely on explicit load_file/load_context calls to bring data into a context rather than pasting raw content into prompts.
  • Use save_session to persist work; ensure paths stay under the workspace root to avoid permission or security issues.
  • The default MCP wiring prefers the codex backend for sub-queries when available; you can override with configure(sub_query_backend=...) if needed.
  • If you encounter performance concerns with very large files, consider chunking data into contexts and processing in batches with sub-queries.
  • Environment and file paths should be absolute when possible to avoid ambiguity in load_file operations.
  • The concise tool-docs option in the server config reduces verbose tool documentation surfaced to the model; adjust to verbose if needed for debugging.
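The chunk-and-batch tip above can be sketched as follows. This is plain Python for illustration; chunk_text and summarize_chunk are hypothetical helpers, with summarize_chunk standing in for a map_sub_query call over one chunk and the final sum standing in for the aggregate step.

```python
def chunk_text(text, chunk_size=1000):
    # Split a large document into fixed-size chunks so each one
    # fits comfortably in its own context or sub-query.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def summarize_chunk(chunk):
    # Stand-in for a map_sub_query call: derive a small structured
    # result from one chunk instead of returning the chunk itself.
    return {"chars": len(chunk), "lines": chunk.count("\n")}

doc = "line\n" * 500                      # a "large" document (2500 chars)
chunks = chunk_text(doc, chunk_size=1000)
partials = [summarize_chunk(c) for c in chunks]
total_lines = sum(p["lines"] for p in partials)  # aggregate step
print(len(chunks), total_lines)  # 3 500
```

Keeping the chunks non-overlapping makes the aggregate exact; with overlapping chunks the per-chunk results would double-count and the aggregation step would need to deduplicate.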
