AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm
ChatGPT-like AI that runs 100% locally on your hardware. No subscriptions, no cloud, complete privacy. Multi-agent swarm + 10 MCP tools + hybrid RAG vector DB. Runs on one GPU (RTX 5090 recommended)
claude mcp add --transport stdio intradyne-anyloom-anythingllm-local-ai-agentic-dytopo-swarm \
  --env DOCKER_BUILDKIT="1" \
  --env COMPOSE_PROJECT_NAME="anyloom" \
  -- docker compose up -d
How to use
AnyLoom provides a fully local, privacy-preserving multi-agent AI stack that runs as a Docker Compose deployment. It orchestrates eight MCP servers for memory graph, web search, file operations, RAG, swarm coordination (DyTopo), diagnostics, and more, all working together with a locally hosted Qdrant vector store, llama.cpp LLM, and an AnythingLLM web UI. The stack enables hybrid RAG (dense plus sparse retrieval), persistent memory across sessions via the MCP knowledge graph, and a multi-agent swarm that routes tasks to specialized agents for collaborative problem solving. Start the entire stack with a single command, then access the UI at http://localhost:3001 and the LLM API at http://localhost:8008/v1/models. The system is designed to operate entirely on your hardware with no data leaving your machine.
Once running, you can configure AnythingLLM through its web setup wizard and the subsequent configuration script. The swarm coordinates agents to break down tasks, fetch relevant context, and compose cohesive results. Use the AnythingLLM UI for chat, document Q&A, and workspace management, while the embedded LLM and embedding services provide model inference and vector embeddings locally. The RAG module enhances search quality by combining dense embeddings with a sparse index, ensuring you retrieve information from your own data corpus efficiently.
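To make the hybrid retrieval concrete, the sketch below shows what a dense-plus-sparse query against the bundled Qdrant store can look like over HTTP. The collection name (anythingllm_docs), the named vectors (dense, sparse), the published port (6333), and the toy query vectors are all assumptions for illustration; a real request needs an embedding of your query at the collection's full dimensionality.
# Minimal sketch of a Qdrant hybrid query (Qdrant v1.10+ Query API), assuming
# a collection "anythingllm_docs" with named vectors "dense" and "sparse"
# and the default HTTP port 6333 published on the host.
curl -s http://localhost:6333/collections/anythingllm_docs/points/query \
  -H 'Content-Type: application/json' \
  -d '{
    "prefetch": [
      { "query": [0.12, 0.34, 0.56], "using": "dense", "limit": 20 },
      { "query": { "indices": [17, 42], "values": [0.8, 0.4] }, "using": "sparse", "limit": 20 }
    ],
    "query": { "fusion": "rrf" },
    "limit": 5
  }'
Reciprocal rank fusion (rrf) merges the two candidate lists so that documents ranked highly by either retriever surface in the final top 5.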
How to install
Prerequisites:
- Docker Desktop with WSL2 support and GPU access if you use the GPU-accelerated models
- Docker Compose
- A machine with sufficient VRAM (recommended 32GB) and storage (~100GB) for models and data
- Basic shell access (bash/zsh)
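A quick sanity check of these prerequisites before installing (nvidia-smi applies only to GPU setups):
docker --version          # Docker engine installed
docker compose version    # Compose v2 plugin available
nvidia-smi                # GPU and driver visible on the host (GPU setups only)
df -h .                   # roughly 100GB free for models and data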
Install steps:
- Install Docker and Docker Compose per your OS:
- Windows/macOS: install Docker Desktop from https://www.docker.com/products/docker-desktop
- Linux: follow your distro-specific instructions to install docker and docker-compose
- Clone the repository containing the AnyLoom setup (replace with your actual repo URL):
git clone https://github.com/your-org/AnyLoom
cd AnyLoom
- Ensure the prerequisites above are met; optionally, set up a Python environment for the helper scripts:
# If you need Python tooling for scripts
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
- Start the Docker stack (one command starts everything):
# If you have the provided script
bash scripts/docker_start.sh
Or manually with Docker Compose:
docker volume create anyloom_qdrant_storage
docker volume create anyloom_anythingllm_storage
docker volume create anyloom_anythingllm_hotdir
docker compose up -d
- Initial AnythingLLM configuration (an API-based variant is sketched after these steps):
# Access the UI at http://localhost:3001 to complete the setup wizard
# Then configure AnythingLLM defaults
python scripts/configure_anythingllm.py
- Verify services (a fuller smoke test is sketched after these steps):
# Llama.cpp LLM API
curl -s http://localhost:8008/v1/models
# AnythingLLM UI is available at http://localhost:3001
Prerequisites recap: Docker Desktop (v24+ with GPU support), an NVIDIA GPU if using GPU-accelerated models, Python 3.10+, and ~100GB disk space for models and data.
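As noted in the configuration step above, you can also script AnythingLLM against its developer API instead of (or after) running scripts/configure_anythingllm.py. The sketch below assumes an API key generated in the UI and an arbitrary example workspace name; check your instance's own API docs (served by AnythingLLM itself) for the exact schema.
# Assumes an API key created in the AnythingLLM UI (placeholder value below)
export ANYTHINGLLM_API_KEY="paste-your-key-here"
# List existing workspaces
curl -s http://localhost:3001/api/v1/workspaces \
  -H "Authorization: Bearer $ANYTHINGLLM_API_KEY"
# Create a workspace ("anyloom-demo" is an arbitrary example name)
curl -s -X POST http://localhost:3001/api/v1/workspace/new \
  -H "Authorization: Bearer $ANYTHINGLLM_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"name": "anyloom-demo"}'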
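And for the verification step, a slightly fuller smoke test. The model id in the chat request is a placeholder; llama.cpp's OpenAI-compatible server reports the real id via GET /v1/models.
# All containers up?
docker compose ps
# Watch the first model load (can take minutes; Ctrl-C to stop following)
docker compose logs -f --tail=50
# One round-trip chat completion through the llama.cpp API
# ("local" is a placeholder model id; take the real one from /v1/models)
curl -s http://localhost:8008/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "local", "messages": [{"role": "user", "content": "Say hello."}], "max_tokens": 32}'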
Additional notes
Tips and common issues:
- Ensure Docker has GPU access if you plan to run the llama.cpp model with CUDA support (a quick check is sketched after this list).
- The startup time can be a few minutes as the large models initialize and are loaded into GPU memory; the first user query may take longer.
- If the AnythingLLM setup wizard does not complete, first check that the UI is reachable at http://localhost:3001; note that the API stays locked until the wizard finishes.
- For environment customization, the system uses a combination of docker volumes and config scripts; you can adjust default system prompts, vector store setup, and embedding settings via the provided scripts in scripts/.
- If you need to reset, tear down with docker compose down and restart; re-running the configuration script is safe and will skip already-uploaded documents (reset commands are sketched after this list).
- Ensure you have sufficient privileges to run Docker and create volumes on your machine.
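For the GPU tip above, a quick check that containers can actually see the GPU (the CUDA image tag is just an example; any recent tag works):
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi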
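And for the reset tip, the corresponding commands. Note that docker compose down only removes containers; the named volumes created during installation hold your models and documents, so remove them only if you truly want a clean slate.
# Stop and remove containers; data volumes are kept
docker compose down
# Full reset (destroys stored vectors, documents, and settings!)
docker volume rm anyloom_qdrant_storage anyloom_anythingllm_storage anyloom_anythingllm_hotdir
# Start again and re-run configuration (safe to repeat)
docker compose up -d
python scripts/configure_anythingllm.py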
Related MCP Servers
py-gpt
Desktop AI Assistant powered by GPT-5, GPT-4, o1, o3, Gemini, Claude, Ollama, DeepSeek, Perplexity, Grok, Bielik, chat, vision, voice, RAG, image and video generation, agents, tools, MCP, plugins, speech synthesis and recognition, web search, memory, presets, assistants, and more. Linux, Windows, Mac
awesome-openclaw
A curated list of OpenClaw resources, tools, skills, tutorials & articles. OpenClaw (formerly Moltbot / Clawdbot) — open-source self-hosted AI agent for WhatsApp, Telegram, Discord & 50+ integrations.
goclaw
Multi-agent AI gateway with teams, delegation & orchestration. Single Go binary, 11+ LLM providers, 5 channels.
ollama-bridge
Extend the Ollama API with dynamic AI tool integration from multiple MCP (Model Context Protocol) servers. Fully compatible, transparent, and developer-friendly; ideal for building powerful local LLM applications, AI agents, and custom chatbots.
RivalSearchMCP
Deep Research & Competitor Analysis MCP for Claude & Cursor. No API Keys. Features: Web Search, Social Media (Reddit/HN), Trends & OCR.
izan.io
Turn Any Browser Action & Data Extraction into an AI Tool in 60 Seconds