infio-copilot
A Cursor-inspired AI assistant for Obsidian that offers smart autocomplete and interactive chat with your selected notes
claude mcp add --transport stdio infiolab-infio-copilot npx -y infio-copilot
How to use
Infio-Copilot exposes its capabilities as an MCP (Model Context Protocol) server, so you can connect editors, agents, and other tooling to its AI-assisted features within your workflows. The server offers a combined surface of chat, context-aware autocomplete, vault-wide semantic search, and configurable AI modes that can be wired into Obsidian or other note collections. Through the MCP interface you can query and manage context, request inline edits, and route requests to whichever embedding and LLM backends your environment supports, enabling document generation, note synthesis, and structured information retrieval directly from your tools via standardized MCP requests. It works with local embedding models as well as multi-provider API backends, so you can tune the AI behavior to your privacy and latency requirements.
What you can do with the MCP server:
- Initiate chat sessions and perform edits guided by the assistant, all while maintaining a coherent context window.
- Invoke context-aware autocomplete for faster writing inside your notes or documents.
- Run semantic vault searches and retrieve semantically relevant results from your note collection.
- Leverage multi-dimensional queries across time, tasks, and metadata to uncover insights and connections.
- Configure and switch AI modes (custom modes) that define specific assistant behaviors for different workflows.
- Manage and test Model Context Protocol integrations to standardize how models and data are accessed across your tooling stack.
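The capabilities above are reached through a standard MCP client registration. As an example, a Claude Desktop-style `claude_desktop_config.json` entry mirroring the stdio command shown earlier (the `"infio-copilot"` server key name is arbitrary; pick any label):

```json
{
  "mcpServers": {
    "infio-copilot": {
      "command": "npx",
      "args": ["-y", "infio-copilot"]
    }
  }
}
```

The client then launches the server as a child process and speaks MCP over its stdin/stdout, so no host or port needs to be configured for this transport.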
How to install
Prerequisites:
- Node.js (LTS) and npm installed on your machine
- Internet access to install npm packages or access to your MCP server hosting environment
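Before picking an install option, it can help to confirm the prerequisites are actually on your PATH. A small POSIX-shell check (the command names are the standard Node.js ones, nothing package-specific):

```shell
# have_cmd: succeed if the named command is available on PATH
have_cmd() { command -v "$1" >/dev/null 2>&1; }

# Report Node.js and npm versions, or a hint if either is missing
have_cmd node && node --version || echo "node not found: install the Node.js LTS release" >&2
have_cmd npm  && npm --version  || echo "npm not found: it ships with Node.js" >&2
```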
Option A: Run via npx (no local install required)
- Ensure npm is installed; an npm login is only needed if you install from a private registry.
- Run the MCP server via npx using the package name:
npx -y infio-copilot
Note: The first run may download dependencies and initialize the server; ensure network access to the npm registry is available.
Option B: Install locally (if you prefer a persistent installation)
- Create a project directory and initialize npm:
mkdir infio-copilot-server
cd infio-copilot-server
npm init -y
- Install the Infio Copilot package:
npm install infio-copilot
- Start the server (example; adjust based on the package’s start script):
npx infio-copilot start
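With a local install, a `start` script in `package.json` saves retyping the command. A sketch, assuming the package exposes a `start` subcommand as in the example above (the `^0.7.2` range matches the release mentioned in the notes below; pin whatever version `npm install` actually resolved):

```json
{
  "name": "infio-copilot-server",
  "private": true,
  "scripts": {
    "start": "infio-copilot start"
  },
  "dependencies": {
    "infio-copilot": "^0.7.2"
  }
}
```

After that, `npm start` launches the server from the project directory.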
Option C: Docker (if you prefer containerized deployment)
- Pull and run the image (example; replace with the actual image tag if different):
docker run -it infio-copilot:latest
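If no prebuilt image is available in your registry, a minimal Dockerfile can produce one. This is a sketch under two assumptions: that the npm package name is `infio-copilot`, and that a global install exposes a CLI entry point of the same name. Because the server speaks MCP over stdio, no ports need exposing:

```dockerfile
# Minimal sketch: install the published npm package into a Node base image
FROM node:20-slim
RUN npm install -g infio-copilot
# stdio transport: the MCP client attaches to the container's stdin/stdout
ENTRYPOINT ["infio-copilot"]
```

Build with `docker build -t infio-copilot:latest .`, then run it with the `docker run -it` command above.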
Prerequisites for deployment:
- Access keys for any external services you intend to use (e.g., embedding backends, OpenAI, SiliconFlow, etc.). See additional notes for environment variables.
- Network access to required APIs and model endpoints if not using local embedding models.
Additional notes
Environment and configuration tips:
- Environment variables: set keys for embedding providers and API endpoints as needed. For example, you may need API keys for SiliconFlow, OpenRouter, OpenAI, or other providers. Common placeholders include API_KEY, OPENAI_API_KEY, SILICONFLOW_API_KEY, etc. If you use a local embedding model (as the 0.7.2 release suggests, e.g., bge-micro-v2), ensure the model files are available locally and the server has access to them.
- Embedding models: the server supports a default local embedding model (bge-micro-v2). You can switch to other providers if desired, depending on performance and privacy considerations.
- Workspaces and insights: use the Workspaces feature to segment projects, research, and personal notes. Insights can help synthesize information and surface connections across your vault.
- Troubleshooting: if the MCP server fails to start, check network access, verify required dependencies are installed, and review logs for missing environment variables or misconfigured backends. Ensure your MCP client is configured with the correct launch command (for the stdio transport shown above) or the correct host/port (if you run the server behind a network transport).
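One quick way to catch the missing-environment-variable case from the troubleshooting note is a pre-flight check before launching the server. The variable names below are illustrative placeholders, not keys the package is guaranteed to read; substitute the ones your providers actually require:

```shell
# require_env: warn for each named variable that is unset or empty;
# returns non-zero if any were missing.
require_env() {
  missing=0
  for var in "$@"; do
    val=$(eval "printf '%s' \"\${$var:-}\"")
    if [ -z "$val" ]; then
      echo "warning: $var is not set" >&2
      missing=1
    fi
  done
  return $missing
}

# Example: check the provider keys you configured before starting the server
require_env OPENAI_API_KEY SILICONFLOW_API_KEY || echo "set the keys above, then rerun" >&2
```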
Related MCP Servers
mcp-linear
MCP server that enables AI assistants to interact with Linear project management system through natural language, allowing users to retrieve, create, and update issues, projects, and teams.
ironcurtain
A secure* runtime for autonomous AI agents. Policy from plain-English constitutions. (*https://ironcurtain.dev)
kanban
MCP Kanban is a specialized middleware designed to facilitate interaction between Large Language Models (LLMs) and Planka, a Kanban board application. It serves as an intermediary layer that provides LLMs with a simplified and enhanced API to interact with Planka's task management system.
obsidian
MCP server for Obsidian vault management - enables Claude and other AI assistants to read, write, search, and organize your notes
GameMaker
GameMaker MCP server for Cursor - Build GM projects with AI
xgmem
Global Memory MCP server that manages data across all your projects.