
mcp-snippets

MCP server from micro-agent/mcp-snippets-server

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio micro-agent-mcp-snippets-server \
  --env LIMIT=0.6 \
  --env MAX_RESULTS=2 \
  --env MCP_HTTP_PORT=9090 \
  --env EMBEDDING_MODEL=ai/mxbai-embed-large:latest \
  --env JSON_STORE_FILE_PATH=rag-memory-store.json \
  --env MODEL_RUNNER_BASE_URL=http://localhost:12434/engines/llama.cpp/v1/ \
  -- go run main.go

The --env values above are the server's documented defaults; adjust them for your setup. The variables are:

  • LIMIT — similarity threshold
  • MAX_RESULTS — maximum number of search results
  • MCP_HTTP_PORT — HTTP server port
  • EMBEDDING_MODEL — embedding model name
  • JSON_STORE_FILE_PATH — vector store file path
  • MODEL_RUNNER_BASE_URL — OpenAI-compatible API endpoint

How to use

This MCP server processes Markdown files containing code snippets, builds a vector store from them, and exposes a single MCP tool named search_snippet. Retrieval is based on semantic embeddings, so queries match snippets by meaning rather than exact keywords. To use it, start the server and call search_snippet through the /mcp endpoint with a topic string; the tool returns the code snippets most relevant to that topic.
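Assuming the server follows the standard MCP JSON-RPC shape over HTTP (port 9090 is the MCP_HTTP_PORT default, and the exact wire format may differ between transport versions), a tools/call request to search_snippet might look like this:

```shell
# Hypothetical request against the /mcp endpoint; 9090 is the MCP_HTTP_PORT default.
payload='{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_snippet",
    "arguments": { "topic": "read a file in Go" }
  }
}'
curl -s -X POST http://localhost:9090/mcp \
  -H 'Content-Type: application/json' \
  -d "$payload" || true  # no-op if the server is not yet running
```

The response, if successful, contains the snippets whose embeddings are most similar to the topic string, filtered by the LIMIT threshold and capped at MAX_RESULTS.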

How to install

Prerequisites:

  • Go installed (go1.20+ recommended)
  • A working directory containing a snippets/ folder of Markdown files
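The snippets/ layout is simple: each file is ordinary Markdown with code blocks. As an illustration (the file name and contents here are hypothetical), you could seed it like this:

```shell
# Create the snippets/ folder the server scans, with one example Markdown file.
mkdir -p snippets
cat > snippets/go-files.md <<'EOF'
# Reading a file in Go

    data, err := os.ReadFile("config.json")
    if err != nil {
        log.Fatal(err)
    }
EOF
```

On startup the server embeds the contents of each Markdown file, so one file per topic tends to give cleaner search results than one large catch-all file.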

Installation steps:

  1. Clone the repository or download the source code.
  2. Ensure your environment variables are set (see mcp_config below) or create a .env file.
  3. Build/run the server:
# From the project root
go run main.go
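For step 2, a .env file using the documented defaults could look like the following (plain KEY=value lines; whether the server loads .env automatically depends on its configuration loading):

```shell
# .env — the documented defaults for mcp-snippets-server
LIMIT=0.6
MAX_RESULTS=2
MCP_HTTP_PORT=9090
EMBEDDING_MODEL=ai/mxbai-embed-large:latest
JSON_STORE_FILE_PATH=rag-memory-store.json
MODEL_RUNNER_BASE_URL=http://localhost:12434/engines/llama.cpp/v1/
```

Any of these can also be exported in the shell before `go run main.go` to override the file.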

If you prefer Docker, see the Docker instructions in the README and use the provided docker-compose setup with the mcp-snippets-server image. The server will process Markdown files on first run and persist embeddings to the configured JSON store.

Additional notes

Tips and common considerations:

  • The vector store is persisted to JSON (default rag-memory-store.json) to speed up subsequent startups.
  • Update the snippets/ directory by adding or removing Markdown files and restart the server to reprocess and refresh embeddings.
  • Configure OpenAI-compatible embeddings via EMBEDDING_MODEL and MODEL_RUNNER_BASE_URL, for either local or cloud embedding backends.
  • If using Docker, ensure the MCP_HTTP_PORT mapping matches your environment and consider using the provided compose.yml for quick deployment.
  • The MCP tool endpoint is exposed at /mcp, and you can call search_snippet with a topic string to retrieve relevant code snippets.
