ThinkMem
AI Memory Management MCP System for LLMs - helping LLMs think well and remember well
claude mcp add --transport stdio rickonono3-thinkmem npx -y thinkmem --mode stdio
How to use
ThinkMem is an MCP server designed to help LLMs manage memory efficiently. It offers two operation modes: a local stdio mode, which embeds ThinkMem into a single AI assistant's workflow, and a StreamableHTTP mode, which hosts a shared memory service that multiple AI assistants can connect to over HTTP. ThinkMem supports multiple memory types, including RawMemory for unstructured text and ListMemory for structured, ordered data such as tasks or workflows.

The HTTP mode enables centralized memory management with scalable access via StreamableHTTP, while the stdio mode is simpler and better suited to development and testing. You can interact with the memory system through its command-line interface, or integrate memory instances with external AI assistants by registering the server with an MCP host using the provided JSON configuration. The documentation also covers memory operations such as writing, updating, and querying memory, as well as memory summarization and search features.
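Registering ThinkMem with an MCP host might look like the following sketch. The server key and the endpoint path `/mcp` are assumptions based on common StreamableHTTP conventions and the default port shown below, not the project's documented configuration; check the README's own JSON config for the exact shape.

```json
{
  "mcpServers": {
    "thinkmem": {
      "type": "streamable-http",
      "url": "http://localhost:13809/mcp"
    }
  }
}
```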
How to install
Prerequisites:
- Node.js 18+ and npm installed on your system
- Basic familiarity with MCP concepts (servers, hosts, and transports)
Installation steps:
- Install ThinkMem globally (as recommended by the project):
npm install -g thinkmem
- Verify installation:
thinkmem --version
- Start in HTTP mode (default port 13809) for MCP host integration:
thinkmem --mode http --port 13809
- If you prefer the stdio mode for local development, run:
thinkmem --mode stdio
- If you want to run ThinkMem through an MCP host, register the server in the host's configuration file and start it with the same command approach.
Note: Depending on your environment, you may need to adjust firewall rules to expose the HTTP port and ensure SSL/TLS termination is handled by a reverse proxy for production deployments.
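As a sketch of the reverse-proxy setup mentioned above, an nginx server block terminating TLS in front of ThinkMem might look like this. The hostname, certificate paths, and upstream port are placeholders; this assumes the default port 13809 and nginx as the proxy, neither of which the project mandates.

```nginx
server {
    listen 443 ssl;
    server_name thinkmem.example.com;          # placeholder hostname

    ssl_certificate     /etc/ssl/certs/thinkmem.pem;   # placeholder cert
    ssl_certificate_key /etc/ssl/private/thinkmem.key; # placeholder key

    location / {
        # Forward MCP StreamableHTTP traffic to the local ThinkMem instance.
        proxy_pass http://127.0.0.1:13809;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        # StreamableHTTP responses can be long-lived; avoid buffering them.
        proxy_buffering off;
    }
}
```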
Additional notes
Tips and common considerations:
- In HTTP mode, ensure the port you choose is accessible from your MCP Host and any clients. Consider placing a reverse proxy in front of ThinkMem for TLS termination in production.
- When using the stdio mode, explicitly specify the memory DB path to avoid collisions between multiple processes.
- The configuration supports multiple AI assistants: each one registers the ThinkMem server in its MCP host as a streamable-http endpoint. Example registrations are shown in the README.
- For persistent memory, ThinkMem stores data in JSON files by default; ensure the filesystem has adequate permissions and space, and enable backups if necessary.
- If you run into issues, check the project's tests and documentation links in the repository for troubleshooting and compatibility notes.
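The persistence note above can be automated with a small backup function, sketched below. The default data path `~/.thinkmem` is a placeholder assumption; point the first argument at wherever your ThinkMem instance actually stores its JSON files.

```shell
# Back up ThinkMem's JSON memory files into a timestamped directory.
# Usage: backup_thinkmem [data_dir] [backup_root]
backup_thinkmem() {
    data_dir="${1:-$HOME/.thinkmem}"          # placeholder default path
    backup_root="${2:-$HOME/thinkmem-backups}"
    dest="$backup_root/$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$dest" || return 1
    # Copy every JSON memory file, preserving timestamps.
    cp -p "$data_dir"/*.json "$dest"/ || return 1
    echo "$dest"
}
```

A cron entry calling `backup_thinkmem /var/lib/thinkmem /backups/thinkmem` would then keep rolling snapshots of the memory store.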
Related MCP Servers
obsidian-tools
Add Obsidian integrations like semantic search and custom Templater prompts to Claude or any MCP client.
Matryoshka
MCP server for token-efficient analysis of large documents via REPL state
pluggedin-app
The Crossroads for AI Data Exchanges. A unified, self-hostable web interface for discovering, configuring, and managing Model Context Protocol (MCP) servers—bringing together AI tools, workspaces, prompts, and logs from multiple MCP sources (Claude, Cursor, etc.) under one roof.
Agentic-Skill
Agentic-MCP, a progressive MCP client with three-layer lazy loading. Validates the AgentSkills.io pattern for efficient token usage: use MCP servers without pre-installing them or paying the cost of loading everything up front.
mongo
MCP server that provides tools for LLMs (such as Claude in Cursor) to interact with MongoDB
mcp-turso
MCP server for interacting with Turso-hosted LibSQL databases