KunAvatar
A locally deployed agent application built on the Ollama inference framework, featuring MCP tool invocation along with both short-term and long-term memory.
Register KunAvatar with Claude Code as a stdio MCP server:
claude mcp add --transport stdio kunlabai-kunavatar node start.js
How to use
KunAvatar is a locally deployed, MCP-enabled AI assistant that uses Ollama as its inference engine and exposes MCP tooling for multi-model and multi-server orchestration. With KunAvatar, you can connect to one or more MCP-backed services, manage tools across servers, and invoke tools in real time through a unified interface. The server supports multiple transport mechanisms and integrates streaming responses (SSE/HTTP), task- and memory-aware agents, and RBAC-based access control for tools. After starting the server, you can chat with the AI, choose among available Ollama models, manage agents, and call external tools via the MCP protocol to perform actions such as data retrieval, file operations, or external API calls. The MCP tooling layer also provides discovery, monitoring, and permission controls to fine-tune who can call which tools.
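Before chatting, make sure Ollama has at least one model available locally. A minimal sketch, assuming the ollama CLI is on PATH (the model name is only an example, echoing the tip later in this document):

```shell
# List locally available models, or suggest a pull if the CLI is absent.
# "gemma3" is an example model, not a KunAvatar requirement.
MODEL="gemma3"
if command -v ollama >/dev/null 2>&1; then
  ollama list || true   # shows models already pulled
else
  echo "ollama CLI not found - install it, then run: ollama pull $MODEL"
fi
```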
How to install
Prerequisites:
- Node.js 22.15+ and npm 11.3+ (as recommended by KunAvatar)
- Ollama installed and configured locally (available models pulled as needed)
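The prerequisites above can be sanity-checked with a small sketch (it only verifies the tools are on PATH, not the exact version floors):

```shell
# Report which prerequisite commands are installed.
status=""
for cmd in node npm ollama; do
  if command -v "$cmd" >/dev/null 2>&1; then
    status="$status $cmd=ok"
  else
    status="$status $cmd=missing"
  fi
done
echo "prereqs:$status"
```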
Installation steps:
- Clone the repository
git clone https://github.com/KunLabAI/kun-avatar.git
cd kun-avatar
- Install dependencies
npm install
- Build the project (if applicable)
npm run build
- Start the MCP server (KunAvatar)
npm start
- Optional: run in development mode
npx next dev
Notes:
- Ensure Ollama is running and accessible before starting KunAvatar.
- If you customize MCP tooling or add additional servers, update the mcp configuration accordingly.
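The first note above can be automated: probe Ollama's REST API before launching KunAvatar. A sketch, assuming Ollama's default port 11434 (the OLLAMA_HOST variable is honored if you have remapped it):

```shell
# Probe the Ollama API; /api/tags lists local models and is cheap to call.
OLLAMA_URL="${OLLAMA_HOST:-http://localhost:11434}"
if command -v curl >/dev/null 2>&1 && curl -sf --max-time 3 "$OLLAMA_URL/api/tags" >/dev/null; then
  echo "Ollama is reachable at $OLLAMA_URL"
else
  echo "Ollama is NOT reachable at $OLLAMA_URL - start it with 'ollama serve'"
fi
```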
Additional notes
Tips and common issues:
- Ensure Ollama is up-to-date (version 0.9.6 or later is recommended) and that the models you plan to use are pulled (e.g., ollama pull gemma3).
- The MCP tool layer supports multiple transport methods (stdio, SSE, Streamable HTTP). If you encounter connectivity issues, verify network/firewall settings and that the appropriate endpoints are accessible.
- For RBAC and multi-tenant setups, define roles and permissions carefully to prevent unauthorized tool usage.
- If you encounter port conflicts, adjust the Next.js API server port in the development or production configuration and re-run the server.
- When using the MCP API, you can leverage the auto-discovery feature to find available tools across servers and subscribe to tool status updates via SSE.
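The SSE subscription from the last tip can be exercised with plain curl. The endpoint path below is a placeholder, not a documented KunAvatar route; substitute the real tool-status route from the project:

```shell
# Subscribe to a server-sent-events stream; -N disables output buffering.
# NOTE: the path is hypothetical - look up KunAvatar's actual SSE endpoint.
SSE_URL="http://localhost:3000/api/mcp/tool-status"
if command -v curl >/dev/null 2>&1; then
  curl -sN --max-time 2 -H "Accept: text/event-stream" "$SSE_URL" \
    || echo "no stream at $SSE_URL (is KunAvatar running?)"
fi
```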
Related MCP Servers
Dive
Dive is an open-source MCP Host Desktop Application that seamlessly integrates with any LLM that supports function calling. ✨
evm
MCP server that provides LLMs with tools for interacting with EVM networks
mcp-use-ts
mcp-use is the framework for MCP with the best DX - Build AI agents, create MCP servers with UI widgets, and debug with built-in inspector. Includes client SDK, server SDK, React hooks, and powerful dev tools.
furi
CLI & API for MCP management
create-app
A CLI tool for quickly scaffolding Model Context Protocol (MCP) server applications with TypeScript support and modern development tooling
mcp-ollama-agent
A TypeScript example showcasing the integration of Ollama with Model Context Protocol (MCP) servers. The project provides an interactive command-line interface for an AI agent that can use tools from multiple MCP servers.