AG3NT
Local-first multi-agent platform built on DeepAgents. Gateway, Agent Worker, and Web UI for orchestrating autonomous AI agents.
claude mcp add --transport stdio ap3x-dev-agent python -m ag3nt_agent.worker \
  --env AG3NT_MODEL_PROVIDER="LLM provider (anthropic, openai, openrouter, kimi, google)" \
  --env AG3NT_MODEL_NAME="Model name" \
  --env ANTHROPIC_API_KEY="Anthropic API key" \
  --env OPENAI_API_KEY="OpenAI API key" \
  --env OPENROUTER_API_KEY="OpenRouter API key" \
  --env KIMI_API_KEY="Kimi/Moonshot API key" \
  --env GOOGLE_API_KEY="Google Gemini API key"
How to use
AG3NT is a local-first personal AI agent platform that integrates multiple language models and supports several adapters for different interaction channels. It is designed to run as a cohesive set of components (gateway, agent worker, UI, and optionally TUI) that communicate over HTTP/WebSocket.

The agent worker executes modular skills and can control a browser, perform web automation, and manage memory, planning, and human-in-the-loop (HITL) security workflows. You can interact with the agent in real time through the web UI, or use the CLI/TUI adapters for terminal-based control and automation.

With multi-model support (Anthropic, OpenAI, OpenRouter, Kimi, Google Gemini) and multi-channel access (CLI, TUI, Telegram, Discord), AG3NT provides a flexible environment for building personal AI assistants that can browse the web, run tasks, and respond with streaming output. The MCP server setup shown above launches the agent worker, which runs the DeepAgents runtime and connects to a gateway and web UI for monitoring and control. Start the components, open the dashboard, and add skills in SKILL.md format to extend the agent's capabilities.
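For example, the worker can be pointed at a provider and model purely through environment variables. The variable names below come from the MCP setup above; the specific model name and key value are illustrative placeholders, not documented defaults:

```shell
# Select the LLM provider and model for the agent worker.
# AG3NT_MODEL_PROVIDER / AG3NT_MODEL_NAME and ANTHROPIC_API_KEY are the
# variable names from the MCP setup; the values here are illustrative.
export AG3NT_MODEL_PROVIDER="anthropic"
export AG3NT_MODEL_NAME="claude-sonnet-4-5"   # placeholder model name
export ANTHROPIC_API_KEY="sk-ant-..."          # placeholder key

# Launch the worker directly (requires the repo's Python deps installed);
# stop it with Ctrl+C.
python -m ag3nt_agent.worker
```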
How to install
Prerequisites:
- Python 3.8+ and a virtual environment tool (venv).
- Node.js and npm/yarn, if you plan to run the UI and gateway (dashboard) components.
- Access/keys for the chosen model providers (Anthropic/OpenAI/OpenRouter/Kimi/Google Gemini).
Steps:
- Clone the repository and navigate to the project root.
- Create and activate a Python virtual environment for the agent worker:
  - Windows: python -m venv .venv then .venv\Scripts\activate
  - macOS/Linux: python3 -m venv .venv then source .venv/bin/activate
- Install the agent's Python dependencies: pip install -r apps/agent/requirements.txt
- (Optional) Install and run the Gateway/UI components if you want the full web dashboard. They typically run via npm/pnpm scripts in their respective folders (see the READMEs in apps/gateway and apps/ui).
- Prepare configuration: copy the default config to your home config location, or place config.yaml as described in the repo docs.
- Run the agent worker via the MCP server configuration shown above, or directly: python -m ag3nt_agent.worker
Note: Ensure the environment variables for your chosen model providers are set (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.).
If you plan to use the UI or gateway, follow the repository’s documented steps for starting those components (Gateway, UI, and TUI) to have a full dashboard experience.
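Putting the steps above together, a minimal install-and-run session on macOS/Linux might look like the sketch below. The repository URL is left as a placeholder, and the npm commands for the gateway/UI are assumptions; check the READMEs in apps/gateway and apps/ui for the actual scripts:

```shell
# Minimal install sketch (macOS/Linux). <repo-url> is a placeholder;
# the npm commands are assumptions -- see the component READMEs.
git clone <repo-url> ag3nt && cd ag3nt

# Agent worker (Python)
python3 -m venv .venv
source .venv/bin/activate
pip install -r apps/agent/requirements.txt

# Optional: gateway and web UI (Node.js), each in its own terminal
# (cd apps/gateway && npm install && npm run dev)
# (cd apps/ui && npm install && npm run dev)

# Run the worker (export your provider API keys first)
python -m ag3nt_agent.worker
```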
Additional notes
Tips and common issues:
- Ensure your API keys are correctly set in the environment. Missing keys will prevent the corresponding model provider from loading.
- If using Google Gemini or other providers with separate SDKs, ensure network access and any required region endpoints are configured.
- For local development, run the UI, gateway, and agent worker in separate terminals to monitor logs effectively.
- The Skills system uses the SKILL.md format; add or customize skills under skills/ and ensure they register with the agent runtime for discovery.
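As an illustration of the tip above, a new skill can be dropped under skills/ as a SKILL.md file. The directory layout and front-matter fields shown here are hypothetical stand-ins; follow the repo's own SKILL.md examples for the real schema:

```shell
# Create a hypothetical skill under skills/. The front-matter fields
# (name, description) are illustrative, not the documented schema.
mkdir -p skills/example-skill
cat > skills/example-skill/SKILL.md <<'EOF'
---
name: example-skill
description: Illustrative stub; replace with a real SKILL.md.
---

# example-skill

Describe here what the agent should do when this skill is invoked.
EOF
```

After adding a skill, restart the agent worker so the runtime can discover and register it.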
- If you encounter memory or performance issues, consider adjusting the DeepAgents runtime configuration and enabling HITL flow for sensitive actions.
- The web UI and gateway expose HTTP/WS APIs; ensure firewall rules and localhost bindings allow access during development.
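To confirm during development that the gateway/UI ports are actually bound on localhost, a quick check like the following can help; the port number probed here is a placeholder, not AG3NT's documented port — use whatever your gateway/UI print on startup:

```shell
# List TCP listeners to see which local ports are bound.
# (On macOS, `lsof -iTCP -sTCP:LISTEN -n -P` works if ss is unavailable.)
listeners=$(ss -ltn 2>/dev/null || netstat -an)
echo "$listeners"

# Probe a port taken from the startup logs; 3000 is only a placeholder.
curl -sf http://127.0.0.1:3000/ || echo "nothing answering on :3000 yet"
```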