CogniLayer
Persistent memory for Claude Code & Codex CLI — save ~100K tokens/session. 13 MCP tools, hybrid search, TUI dashboard, crash recovery. Your AI finally remembers.
claude mcp add --transport stdio lakyfx-cognilayer python -m cognilayer \
  --env API_KEY="your-api-key-if-needed" \
  --env LOG_LEVEL="INFO" \
  --env DATABASE_URL="postgresql://user:password@host:5432/dbname" \
  --env MEMORY_DB_URL="redis://localhost:6379/0"
How to use
CogniLayer is a Python-based MCP server that provides persistent memory, code intelligence, and subagent context tooling for your AI workflows. Once running, it enables memory-backed knowledge that persists across sessions, semantic search over stored facts, and code-context insights that help you understand dependencies, call graphs, and the potential impact of a refactor. The server exposes its capabilities through the MCP toolset, so agents can perform memory searches, analyze code structure via tree-sitter-based parsing, and reuse subagent results stored in a central database for faster, targeted queries. Use CogniLayer to reduce token burn during long sessions and to accelerate debugging with precise, context-aware answers.
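To make the tool interaction concrete, here is a minimal sketch of the JSON-RPC 2.0 envelope an MCP client sends for a tool invocation. The `tools/call` method and `name`/`arguments` parameter shape follow the MCP specification; the `memory_search` tool name and its arguments are placeholders — the README does not list CogniLayer's 13 tool names, so query the server's `tools/list` response for the real ones.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message string.

    The envelope follows the MCP spec; the tool name and arguments are
    placeholders -- check the server's tools/list response for the
    actual tool names CogniLayer exposes.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical memory-search call; CogniLayer's real tool names may differ.
msg = build_tool_call("memory_search", {"query": "auth refactor notes", "limit": 5})
print(msg)
```

An agent framework normally builds and transports this message for you; the sketch only shows the wire shape so you can recognize it in server logs.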
How to install
Prerequisites:
- Python 3.11 or newer
- Git
- Access to a PostgreSQL-compatible database (or adjust to your preferred metadata store) and a Redis instance for memory caching
Installation steps:
- Clone the repository:
  git clone https://github.com/lakyfx-cognilayer/cognilayer.git
  cd cognilayer
- Create and activate a virtual environment (optional but recommended):
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install dependencies:
  pip install -r requirements.txt
- Configure environment variables (example; adapt to your setup):
  export DATABASE_URL="postgresql://user:password@host:5432/dbname"
  export MEMORY_DB_URL="redis://localhost:6379/0"
  export LOG_LEVEL="INFO"
- Run the server:
  python -m cognilayer
- Verify the server starts and is reachable at the configured host/port (default port as per project configuration). If needed, customize the module name and entrypoint to suit your deployment.
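Before launching, it can help to fail fast on a misconfigured connection string rather than debug a startup crash. Below is a minimal stdlib sketch, not part of CogniLayer itself: the variable names match the env vars above, and the accepted URL schemes are assumptions about typical PostgreSQL and Redis connection strings.

```python
import os
from urllib.parse import urlparse

# Env vars from the install steps, mapped to URL schemes we expect (assumed).
REQUIRED = {
    "DATABASE_URL": ("postgresql", "postgres"),
    "MEMORY_DB_URL": ("redis", "rediss"),
}

def check_env(env: dict) -> list[str]:
    """Return a list of configuration problems; an empty list means ready."""
    problems = []
    for var, schemes in REQUIRED.items():
        value = env.get(var)
        if not value:
            problems.append(f"{var} is not set")
        elif urlparse(value).scheme not in schemes:
            problems.append(f"{var} has unexpected scheme {urlparse(value).scheme!r}")
    return problems

if __name__ == "__main__":
    for problem in check_env(dict(os.environ)):
        print("config:", problem)
```

Run it in the same shell where you exported the variables; no output means the two connection strings at least parse as expected.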
Optional: For production, consider containerization (Docker) or a process manager (systemd, PM2-equivalent for Python) and ensure proper security hardening for database and memory store connections.
Additional notes
Tips and common considerations:
- Ensure your database and Redis instances are accessible to the CogniLayer process; network ACLs and firewall rules can block startup.
- Start with LOG_LEVEL=DEBUG while troubleshooting, then scale down to INFO or WARNING in production.
- If you already host memory and code graphs elsewhere, you can adjust MEMORY_DB_URL and related configs to point to your existing stores.
- The MCP config example uses a single CogniLayer server; you can extend the mcpServers section to run multiple named instances if needed.
- When upgrading Python or dependencies, re-run the install step and check for compatibility issues with tree-sitter-based parsing components.
- Store sensitive keys (API keys, database credentials) securely using your infrastructure's secret management rather than hard-coding them in env files.
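One of the tips above mentions extending the mcpServers section to run multiple named instances. A hedged sketch of what such a config could look like follows, generated with Python for clarity: the mcpServers command/args/env shape is the common convention used by MCP clients, but the exact file location and schema depend on your client (Claude Code vs. Codex CLI), and the instance names and connection URLs are placeholders.

```python
import json

def make_instance(db_url: str, memory_url: str, log_level: str = "INFO") -> dict:
    """One stdio-launched CogniLayer entry, using the common mcpServers
    convention (command, args, env). Adapt to your client's config schema."""
    return {
        "command": "python",
        "args": ["-m", "cognilayer"],
        "env": {
            "DATABASE_URL": db_url,
            "MEMORY_DB_URL": memory_url,
            "LOG_LEVEL": log_level,
        },
    }

# Two named instances pointing at separate backing stores (placeholder URLs).
config = {
    "mcpServers": {
        "cognilayer-work": make_instance(
            "postgresql://user:password@host:5432/work",
            "redis://localhost:6379/0",
        ),
        "cognilayer-personal": make_instance(
            "postgresql://user:password@host:5432/personal",
            "redis://localhost:6379/1",
        ),
    }
}
print(json.dumps(config, indent=2))
```

Separate database names and Redis DB indexes keep the two instances' memories isolated from each other.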
Related MCP Servers
mcp-claude-code
MCP implementation of Claude Code capabilities and more
roampal-core
Outcome-based memory for Claude Code and OpenCode
mem0-selfhosted
Self-hosted mem0 MCP server for Claude Code. Run a complete memory server against self-hosted Qdrant + Neo4j + Ollama while using Claude as the main LLM.
spec-kit
MCP server enabling AI assistants to use GitHub's spec-kit methodology
mini_claude
Give Claude Code persistent memory across sessions. Track habits, log mistakes, prevent death spirals. Runs locally with Ollama.
Mira
Local MCP server that gives Claude Code persistent context, code intelligence, and background analysis. Runs on your machine, stored in SQLite.