mcp-local-spec
MCP Servers discovery spec
claude mcp add --transport stdio jonnyzzz-mcp-local-spec python -m mcp_local_spec \
  --env MCP_LOCAL_SPEC_HOME="Path to your ~/.mcp directory or override as needed"
How to use
The mcp-local-spec server implements a local MCP server specification workflow. It reads Markdown description files placed under your MCP folder (typically ~/.mcp) and exposes each server's capabilities to MCP clients. Each file describes an MCP server's basic information, authentication method, capabilities, regions, health checks, and metadata in a structured Markdown format.

Clients scan the local directory to discover available MCP servers, refresh the list periodically, and query their capabilities when composing requests for tools such as code assistants, IDE integrations, or chat-based agents. To use it, make sure the server process is running and that your MCP clients are configured to query the local MCP server registry by path or URL, as supported by your client.
Once running, you can prepare a Markdown file for your server with the standard structure described in the local spec documentation. The client will parse that file to learn the server's identity, authentication requirements, and available capabilities (e.g., compute, storage, networking). Depending on the client, you may be prompted to supply credentials or tokens if the server requires authentication. The local MCP server supports refreshing its registry from disk so updates to the Markdown files are picked up without restarting the server.
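As a rough illustration, a server description file might pair a front-matter block with a capability list. The field names below are assumptions based on the categories mentioned above (identity, authentication, capabilities, regions, health checks); consult the local spec documentation for the canonical structure.

```markdown
---
id: my-mcp-server-tool-id
url: http://localhost:8080
api-version: "1.0"
auth: token
regions: local
health-check: /healthz
---

# My MCP Server

Capabilities:
- compute
- storage
```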
How to install
Prerequisites:
- Python 3.8+ (or the Python runtime compatible with your environment)
- Access to install Python packages in your environment
Installation steps:
- Create and activate a Python virtual environment (optional but recommended):

  python -m venv venv
  source venv/bin/activate    # on Unix or macOS
  .\venv\Scripts\activate     # on Windows

- Install the MCP local spec package (adjust if a different package name is used):

  pip install mcp-local-spec

- Run the MCP local spec server:

  python -m mcp_local_spec

- Verify the server starts and listens on the expected port/interface (follow any printed logs). If your setup requires a specific directory for MCP files, set the MCP_LOCAL_SPEC_HOME or equivalent environment variable to point to ~/.mcp or your preferred location.

- Place your MCP server Markdown file under the MCP directory (e.g., ~/.mcp/my-mcp-server-tool-id.md) following the documented structure.
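To make the discovery workflow concrete, the sketch below shows how a client might scan an MCP directory and read the front-matter of each description file. This is a minimal, illustrative implementation, not the package's actual API: the front-matter parser handles only flat `key: value` pairs, and the field names are whatever your Markdown files define.

```python
import re
from pathlib import Path


def parse_front_matter(text):
    """Parse a simple front-matter block delimited by '---' lines.

    Returns a dict of key/value pairs (values kept as strings).
    Only flat `key: value` lines are handled in this sketch.
    """
    match = re.match(r"^---\s*\n(.*?)\n---\s*\n?", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def discover_servers(mcp_home):
    """Scan an MCP directory for *.md files and return metadata keyed by id.

    Falls back to the file stem when no explicit `id` field is present.
    """
    servers = {}
    for path in sorted(Path(mcp_home).glob("*.md")):
        meta = parse_front_matter(path.read_text(encoding="utf-8"))
        server_id = meta.get("id", path.stem)
        servers[server_id] = meta
    return servers
```

A client would call `discover_servers` on startup and then re-run it periodically to pick up edited or newly added files, which matches the refresh-from-disk behavior described above.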
Additional notes
Tips and caveats:
- The MCP registry is driven by Markdown files stored locally. Ensure the files follow the documented front-matter and content structure so the client can parse them reliably.
- If authentication is required, configure environment variables or client-provided credentials as per your server's requirements.
- The discovery process is driven by the client, which should regularly refresh the local MCP directory to pick up changes.
- Keep the server's Markdown metadata up to date (ID, URL, API version, capabilities, health check) to ensure accurate routing and tool availability.
- If you encounter issues with file permissions, verify that the process user has read access to the ~/.mcp directory and the Markdown files within.
- For debugging, enable verbose logs in the Python module (consult the library's docs for log level configuration) and check standard output for discovery and parsing messages.
Related MCP Servers
osaurus
AI edge infrastructure for macOS. Run local or cloud models, share tools across apps via MCP, and power AI workflows with a native, always-on runtime.
mcp-llm
An MCP server that provides LLMs access to other LLMs
mem0-selfhosted
Self-hosted mem0 MCP server for Claude Code. Run a complete memory server against self-hosted Qdrant + Neo4j + Ollama while using Claude as the main LLM.
ToolRAG
Unlimited LLM tools, zero context penalties — ToolRAG serves exactly the LLM tools your user-query demands.
mcp-cron
MCP server for scheduling and running shell commands and AI prompts
mcp-chat-widget
Configure, host and embed MCP-enabled chat widgets for your website or product. Lightweight and extensible Chatbase clone to remotely configure and embed your agents anywhere.