carmenta
A heart-centered AI for creating at the speed of thought
claude mcp add --transport stdio carmentacollective-carmenta node server.js
How to use
Carmenta is a Node.js-based MCP server that combines memory-enabled conversations with a rich, interface-driven AI experience. It brings persistent memory across sessions, voice-first interactions, model selection, and an extensible agent framework together under a single subscription. Rather than returning simple chat replies, the server coordinates memory management, AI agents, and service integrations to provide proactive, contextual assistance, exposing capabilities such as memory persistence, voice interactions, agent scheduling, and browser automation through a unified API and UI layer.
You will typically interact with the server through the web UI, whose AG-UI protocol-driven components return structured results (maps, reports with citations, task actions) instead of plain chat bubbles. You can pick models directly or adjust a speed/quality slider, while the preprocessing layer handles model routing, prompt enhancement, and response strategy automatically.
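As an alternative to the `claude mcp add` command shown at the top, many MCP clients accept a JSON configuration entry for stdio servers. A minimal sketch (the server name and `server.js` entry point are taken from the command above; the exact config file location depends on your client):

```json
{
  "mcpServers": {
    "carmenta": {
      "command": "node",
      "args": ["server.js"]
    }
  }
}
```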
How to install
Prerequisites:
- Node.js 24+ installed on your system
- pnpm 10.x+ or npm (depending on your preferred workflow)
- Git to clone the repository
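The prerequisites above can be sanity-checked from a shell before installing. A minimal sketch; the version parsing assumes the usual `vMAJOR.MINOR.PATCH` output of `node -v`:

```shell
# Check the prerequisites listed above before installing.
node_major=$(node -v 2>/dev/null | sed 's/^v//' | cut -d. -f1)
if [ "${node_major:-0}" -ge 24 ] 2>/dev/null; then
  echo "node: ok (major version ${node_major})"
else
  echo "node: version 24+ required, found ${node_major:-none}"
fi
# pnpm 10.x+ is preferred, but plain npm works too.
if command -v pnpm >/dev/null 2>&1; then
  echo "pnpm: ok"
else
  echo "pnpm: missing (npm is a fine fallback)"
fi
if command -v git >/dev/null 2>&1; then
  echo "git: ok"
else
  echo "git: missing"
fi
```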
Step-by-step:
- Clone the repository:
  git clone https://github.com/nicholaswilliams/carmenta.git
  cd carmenta
- Install dependencies:
  pnpm install
  or, if you prefer npm:
  npm install
- Configure the environment (optional but recommended):
  - Create a .env file at the project root with your preferred settings (e.g., memory backend, API keys, service integrations).
  - Example placeholders:
    MEMORY_STORE=redis://localhost:6379
    OPENAI_API_KEY=your-key-here
    CLARA_API_KEY=your-key-here
- Start the development server:
  pnpm dev
  or with npm:
  npm run dev
- Run tests, type checks, and lint as needed:
  pnpm test
  pnpm type-check
  pnpm lint
- Access the server: open http://localhost:3000 in your browser (the default port may vary based on your config).
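Once the dev server is running, you can sanity-check it from a terminal. A small sketch assuming the default port from the step above; adjust the port if your config overrides it:

```shell
# Probe the Carmenta dev server on its default port.
if curl -fsS -o /dev/null http://localhost:3000; then
  status=up
else
  status=down
fi
echo "carmenta dev server: $status"
```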
Notes:
- The project uses Next.js 16 with TypeScript and a modular knowledge-driven architecture. The knowledge/ directory contains the product specifications that drive code generation.
Additional notes
Tips and considerations:
- The Memory feature is central to Carmenta; ensure your memory store backend is configured and reliable for persistent conversations across sessions.
- If you plan to deploy, consider using a production-grade reverse proxy and a persistent database for logs, memory state, and agent data.
- The system includes an AI Team with proactive agents; you can customize roles (Researcher, Analyst, Creator, Reviewer) to fit your workflow.
- For model selection, you can leverage the built-in 1x/10x/100x framework to balance speed and quality according to the task.
- When integrating external services (storage, calendars, authentication), ensure proper scoping of API keys and least-privilege access.
- If you encounter issues with memory loss or misrouted prompts, check the knowledge/architecture specs and your memory configuration to confirm context is being retained properly.
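As one concrete way to satisfy the memory-store tip above, matching the Redis-style MEMORY_STORE placeholder from the install section (Docker and the image tag are assumptions for illustration, not project requirements):

```shell
# Run a local Redis instance to back Carmenta's memory store (illustrative).
docker run -d --name carmenta-memory -p 6379:6379 redis:7
# Then point the server at it in .env:
#   MEMORY_STORE=redis://localhost:6379
```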
Related MCP Servers
daymon
Daymon puts your favorite AI to work 24/7. It schedules, remembers, and orchestrates your own virtual team. Free.
AgentCrew
Chat application with a multi-agent system supporting multiple models and MCP
opnsense
Modular MCP server for OPNsense firewall management - 88 tools providing access to 2000+ methods through AI assistants
architect
A powerful, self-extending MCP server for dynamic AI tool orchestration. Features sandboxed JS execution, capability-based security, automated rate limiting, marketplace integration, and a built-in monitoring dashboard. Built for the Model Context Protocol (MCP).
mcp-ssh-orchestrator
Secure SSH access for AI agents via MCP. Execute commands across your server fleet with policy enforcement, network controls, and comprehensive audit logging.
ai-agents
Multi-agent system for software development