cybermem
Remote Self-hosted Memory MCP Server
claude mcp add --transport stdio \
  --env CYBERMEM_CONFIG="path/to/config.json (optional)" \
  mikhailkogan17-cybermem -- npx -y @cybermem/mcp
How to use
CyberMem is a production-grade MCP server that provides a shared, persistent memory layer for AI tools like Claude, Cursor, and other memory-less agents. The MCP server is designed to run in multi-platform environments and integrates with infrastructure-as-code workflows (Ansible, Helm, Docker Compose) to deploy, monitor, and scale memory-powered AI workloads. Once running, you can manage the CyberMem MCP instance using the bundled CLI, observe activity via the built-in dashboards, and rely on the memory engine to centralize session context across tools. The server exposes core APIs for memory storage and retrieval, along with observability hooks that feed metrics into your monitoring stack. Typical workflows involve provisioning with the CLI, deploying via your IaC templates, and using the dashboard for operational visibility.
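Because the server speaks MCP over stdio, any client begins by sending a JSON-RPC `initialize` request on the server's standard input. The sketch below only builds and prints that request; the message shape follows the MCP specification, while the client name and the exact server behavior are assumptions, not taken from the CyberMem docs.

```shell
# Build a minimal MCP "initialize" request (shape per the MCP spec;
# clientInfo values here are placeholders for illustration).
REQ='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"demo-client","version":"0.0.0"}}}'
echo "$REQ"

# To perform the real handshake, pipe it into the server's stdio transport:
# echo "$REQ" | npx -y @cybermem/mcp
```

In practice a client like Claude or Cursor performs this handshake for you once the server is registered; the raw form is mainly useful for smoke-testing a deployment.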
How to install
Prerequisites:
- Node.js and npm/yarn installed on your workstation
- Access to a host where you want to run the MCP server (local machine, VM, or cloud)
Installation steps:
- Install the CyberMem CLI and initialize the MCP server stack:
  npx @cybermem/cli install
  Follow the prompts in the terminal to configure your target environment (Mac/RPi/VPS, or container-based deployment).
- Start the MCP server via the CLI or the direct npm script:
  npx @cybermem/mcp
  (or follow the CLI output to start the service)
- If you prefer containerized deployment, generate the IaC artifacts (Docker Compose / Ansible / Helm) using the CLI and deploy to your target environment as described in the generated templates.
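For the containerized path, a generated Compose file would typically wrap the same npm entry point in a long-running service. The sketch below is a hypothetical stand-in for what the CLI might emit (the service name, base image, and volume layout are assumptions, not taken from the project's templates):

```shell
# Write a hypothetical Compose file; every name below is illustrative.
cat > /tmp/cybermem-compose.yml <<'EOF'
services:
  cybermem:
    image: node:20-alpine            # assumed base image
    command: npx -y @cybermem/mcp    # same entry point as the manual start
    restart: unless-stopped
    volumes:
      - cybermem-data:/data          # persist memory across restarts
volumes:
  cybermem-data:
EOF

# Deploy and watch logs where Docker is available:
# docker compose -f /tmp/cybermem-compose.yml up -d
# docker compose -f /tmp/cybermem-compose.yml logs -f cybermem
echo "wrote /tmp/cybermem-compose.yml"
```

Prefer the CLI-generated templates over a hand-written file like this one; they will reflect the project's actual image, ports, and volume conventions.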
Prereqs recap:
- A supported runtime (Node.js) and npm/yarn
- Access to your deployment target (local, Raspberry Pi, or cloud)
- Optional: Traefik as a reverse proxy and Tailscale Funnel for secure remote access, as suggested by the project
Additional notes
- The CyberMem MCP server is designed to be deployed using infrastructure-as-code templates generated by the CLI (Docker Compose, Ansible, Helm).
- For production, consider enabling Traefik as a reverse proxy and using Tailscale Funnel for zero-config access.
- The CLI supports lifecycle commands like install, uninstall, upgrade, and backup/restore of data; use them to manage deployments across environments.
- Environment variables and configuration can be supplied to customize memory storage backends, observability, and access controls. See the docs for specific keys and recommended defaults.
- If you encounter port or authentication issues, verify your network policy and ensure the MCP runtime can reach the Traefik/edge proxy layer and any configured audit/logging endpoints.
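Tying the configuration note above back to the install command, the server reads its settings from the file pointed to by CYBERMEM_CONFIG. The keys below are illustrative assumptions about what such a file might contain; consult the project docs for the real schema and defaults.

```shell
# Hypothetical CYBERMEM_CONFIG file -- every key is an assumption
# for illustration, not the documented schema.
cat > /tmp/cybermem-config.json <<'EOF'
{
  "storage": { "backend": "sqlite", "path": "/data/cybermem.db" },
  "observability": { "metrics": true },
  "access": { "readOnly": false }
}
EOF

# Point the server at it (the variable name comes from the install command):
# CYBERMEM_CONFIG=/tmp/cybermem-config.json npx -y @cybermem/mcp
echo "wrote /tmp/cybermem-config.json"
```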