OpenContext
A personal context store for AI agents and assistants. OpenContext reuses your existing coding agent CLI (Codex, Claude Code, OpenCode) and adds built-in Skills/tools and a desktop GUI to capture, search, and reuse project knowledge across agents and repositories.
```sh
# OPENCONTEXT_PORT defaults to 3000 (adjust as needed);
# OPENCONTEXT_CONFIG points at your OpenContext MCP config.
claude mcp add --transport stdio 0xranx-opencontext node path/to/mcp/server.js \
  --env OPENCONTEXT_PORT=3000 \
  --env OPENCONTEXT_CONFIG=<path-to-opencontext-config>
```
How to use
OpenContext provides an MCP server that lets coding agents (Cursor, Claude Code, Codex, or other MCP clients) interact with a persistent knowledge base of contexts and documents. The server exposes tools for loading background context, searching for relevant docs, creating new entries, and iterating on the knowledge base while keeping state across sessions. Clients first establish a connection to the MCP server and then invoke the available tools (such as load, search, create, and iterate) through MCP-compatible requests. The goal is to give agents a shared, persistent memory so they can act with background knowledge already loaded instead of having context re-explained every session.
Key capabilities you can expect from the MCP server:
- Loading background context into the agent’s working memory at the start of a session.
- Searching the knowledge base to surface relevant documents or notes.
- Creating new documents or entries that capture decisions, research, or outcomes.
- Iterating on knowledge by persisting updates back into the context store after actions.
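Under MCP, the session flow described above maps onto a standard JSON-RPC 2.0 exchange: the client sends `initialize`, discovers tools with `tools/list`, then invokes them with `tools/call`. A minimal sketch of those request payloads follows; the tool name `search` and its arguments are illustrative placeholders, since the exact tool schema is not documented here:

```python
import itertools
import json

_ids = itertools.count(1)

def jsonrpc_request(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request as used by MCP's stdio transport
    (one JSON object per line on the server's stdin)."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

# 1. Handshake: the client announces its protocol version and capabilities.
init = jsonrpc_request("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})

# 2. Discover which tools the server exposes.
list_tools = jsonrpc_request("tools/list", {})

# 3. Invoke a tool -- "search" and its arguments are hypothetical here.
call = jsonrpc_request("tools/call", {
    "name": "search",
    "arguments": {"query": "auth refactor decisions"},
})

for msg in (init, list_tools, call):
    print(json.dumps(msg))
```

In practice your MCP client library handles this framing for you; the sketch only shows what travels over the wire.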
To use the server with your MCP clients, connect via the client’s MCP interface and call the OpenContext tools as you would with other contextual tools. Since OpenContext reuses your existing coding agent CLI and adds a GUI and built-in skills, you can leverage the same commands you already use (initialization, search, create, etc.) while enabling the agent to read and write to the shared context library through the MCP endpoint.
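For clients that read an `mcpServers` JSON configuration (e.g. Claude Desktop or Cursor), the registration equivalent to the `claude mcp add` command might look like the following; the server path and env values are placeholders, not verified paths:

```json
{
  "mcpServers": {
    "0xranx-opencontext": {
      "command": "node",
      "args": ["path/to/mcp/server.js"],
      "env": {
        "OPENCONTEXT_PORT": "3000",
        "OPENCONTEXT_CONFIG": "<path-to-opencontext-config>"
      }
    }
  }
}
```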
How to install
Prerequisites:
- Node.js (LTS version) and npm installed on your machine
- Access to the repository containing the OpenContext MCP server code
Installation steps:

1. Clone the OpenContext repository (or the MCP server source within the repo):

   ```sh
   git clone https://github.com/0xranx/OpenContext.git
   cd OpenContext
   ```

2. Install dependencies for the MCP server (if a dedicated server package exists in the repo):

   ```sh
   npm install
   ```

3. Prepare configuration:
   - Ensure a valid path to the MCP server entry point is available. If the server exposes a JS file at path/to/mcp/server.js, point the mcp_config at it.
   - If there is a published package or a build step, follow the repository's guidance to build or generate the server file.

4. Run the MCP server:

   ```sh
   # Basic local start (adjust the path as needed)
   node path/to/mcp/server.js
   ```

5. (Optional) If package.json provides a script for starting the MCP server, you can use:

   ```sh
   npm run mcp
   ```

6. Verify the server is listening on the configured port (default 3000) and accessible by MCP clients.
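If the server runs as a long-lived process on a TCP port (as the default-3000 note suggests), the final verification step can be scripted. This is a generic reachability check, not an OpenContext API:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "reachable" if is_port_open("127.0.0.1", 3000) else "not reachable"
    print(f"Port 3000 is {status}")
```

For a stdio-transport setup (as in the `claude mcp add --transport stdio` command above) there is no port to probe; instead, confirm the client lists the OpenContext tools after registration.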
Notes:
- If you are using a Docker-based workflow, containerize the MCP server and expose the port to your client.
- Ensure environment variables (such as OPENCONTEXT_PORT) are set if the server expects them.
Additional notes
Tips and common issues:
- Make sure the MCP server is reachable by the MCP clients (check network/firewall rules and port configuration).
- If you migrate to a new agent CLI or update OpenContext, re-run initialization steps so MCP tool paths and skills stay in sync with the server.
- Environment variables can control behavior like ports, storage paths, and feature flags. Common variables to consider: OPENCONTEXT_CONFIG, OPENCONTEXT_PORT, OPENCONTEXT_STORAGE.
- If the server stores context locally, ensure the storage directory has appropriate read/write permissions and sufficient disk space.
- When debugging, start the server in a verbose/logging mode if available and monitor logs for requests from MCP clients to diagnose connection or tool invocation issues.
Related MCP Servers
awesome-claude-skills
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
cursor-talk-to-figma
TalkToFigma: MCP integration between AI Agent (Cursor, Claude Code) and Figma, allowing Agentic AI to communicate with Figma for reading designs and modifying them programmatically.
superset
IDE for the AI Agents Era - Run an army of Claude Code, Codex, etc. on your machine
octocode
MCP server for semantic code research and context generation on real-time using LLM patterns | Search naturally across public & private repos based on your permissions | Transform any accessible codebase/s into AI-optimized knowledge on simple and complex flows | Find real implementations and live docs from anywhere
codexia
Agent Workstation for Codex CLI + Claude Code — with task scheduler, git worktree & remote control
Context-Engine
Context-Engine MCP - Agentic Context Compression Suite