aivectormemory
aivectormemory is a lightweight memory-management tool built on the Model Context Protocol (MCP). Designed for Claude, OpenCode, Cursor, and other mainstream IDE coding tools, it uses vector-database technology to solve the problem of AI "forgetting" context across conversation sessions. In short: a lightweight MCP server enabling persistent, cross-session memory for AI-powered IDEs via vector search.
To add the server to Claude Code:
claude mcp add --transport stdio edlineas-aivectormemory python -m aivectormemory
How to use
AIVectorMemory is an MCP server that provides local, persistent memory for AI-assisted coding sessions. It stores project knowledge, decisions, and tasks in a local vector store and offers semantic search to recall relevant information across conversations. The server exposes the following tools:
- remember: store a memory
- recall: search memories semantically
- forget: delete memories
- status: view or update session state
- track: manage issue-like records
- task: handle project tasks
- readme: generate documentation
- auto_save: remember user preferences
It also includes a web dashboard for visualizing memories and tasks, with multi-project isolation and delta-style deduplication to keep the memory store clean. The server runs locally without requiring external API keys, relying on an ONNX embedding engine and SQLite indices for fast lookup.
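Because these tools are exposed over MCP, a client invokes them with standard JSON-RPC tools/call requests. Below is a minimal sketch of storing a memory; the argument names (content, tags) are illustrative assumptions and may differ from the server's actual tool schema:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "remember",
        "arguments": {
          "content": "Chose SQLite for local persistence to avoid external services.",
          "tags": ["architecture", "decision"]
        }
      }
    }

In practice your MCP client (Claude Code, Cursor, etc.) constructs these requests for you; the sketch only shows what travels over the stdio transport.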
How to install
Prerequisites:
- Python 3.8+ installed locally
- Access to a terminal/command prompt
- Optional: Python virtual environment (recommended)
Installation steps:
1. Create and activate a virtual environment (recommended):
   - macOS/Linux: python3 -m venv venv && source venv/bin/activate
   - Windows: python -m venv venv && .\venv\Scripts\activate
2. Install the AIVectorMemory package from PyPI: pip install aivectormemory
3. Run the MCP server (module-based entrypoint): python -m aivectormemory
4. (Optional) Upgrade to the latest version: pip install --upgrade aivectormemory
5. To configure within a project, create the MCP config as shown in the mcp_config section of this document (a sketch follows this list) and run the server from your project directory. You can also follow the interactive setup via the package's built-in run/install flows, if available in your environment.
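For clients that read a JSON config file rather than a CLI registration, a minimal sketch using the common mcpServers shape is shown below; the file name and exact schema depend on your client, so treat this as an assumption to adapt:

    {
      "mcpServers": {
        "aivectormemory": {
          "command": "python",
          "args": ["-m", "aivectormemory"]
        }
      }
    }

This mirrors the claude mcp add command above: a stdio transport that launches the module-based entrypoint python -m aivectormemory.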
Additional notes
- The server stores data locally (SQLite) and keeps memory isolated per project; enable multi-project separation for clean context management.
- Ensure your Python environment has SQLite support and that ONNX runtime dependencies for the embedding engine are available if you customize embeddings.
- The recommended startup is to run the module directly as shown; you can integrate it with IDE-specific MCP wiring via Steering rules generated by the package’s install flow.
- If you encounter environment-related issues on macOS (externally-managed-environment, sqlite extension loading), consider using a Homebrew Python installation as suggested in the README’s troubleshooting notes.
- The system supports semantic search via recall with top_k and tags for filtering; adjust top_k based on how much history you want to surface in a session (see the sketch after this list).
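As a sketch of the last point, a semantic recall over JSON-RPC might look like the following; top_k and tags come from the notes above, while the query argument name is an assumption:

    {
      "jsonrpc": "2.0",
      "id": 2,
      "method": "tools/call",
      "params": {
        "name": "recall",
        "arguments": {
          "query": "why did we choose SQLite for persistence?",
          "top_k": 5,
          "tags": ["architecture"]
        }
      }
    }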
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) server for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock data
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers. Written using FastMCP.