mcp-server-llmling
MCP (Model Context Protocol) server with LLMling backend
claude mcp add --transport stdio phil65-mcp-server-llmling uvx mcp-server-llmling start path/to/your/config.yml
How to use
mcp-server-llmling is a Python-based MCP server that runs LLMling, a YAML-driven configuration system for building embedded LLM environments. It exposes resources, prompts, and tools that the language model can interact with, all defined in YAML rather than code.

The server speaks the MCP protocol over multiple transports, including stdio and HTTP-based streaming transports, letting you wire the LLM to content, templates, and Python-callable tools. You can configure resources (files, text, CLI outputs, code, images), prompts, and tools in YAML, and LLMling exposes them to the LLM through a consistent, well-structured interface.

The typical flow is to define a YAML configuration that specifies resources, tools, and prompts, then run the server (for example via uvx) so your client (Claude, Zed, or another MCP consumer) can interact with the environment over the standard MCP protocol.
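As a sketch of what such a configuration can look like (the key names below are illustrative, chosen to match the resource/prompt/tool categories described above; consult the LLMling documentation for the exact schema):

```yaml
# Hypothetical config.yml sketch -- field names may differ
# from LLMling's actual schema.
resources:
  project_readme:
    type: path            # a file resource
    path: "./README.md"
    description: "Project overview for the LLM"
  style_guide:
    type: text            # an inline text resource
    content: "Prefer small, well-documented functions."

prompts:
  summarize:
    description: "Summarize a resource"
    messages:
      - role: user
        content: "Summarize the resource in three bullet points."

tools:
  word_count:
    import_path: "mytools.word_count"  # any importable Python callable
    description: "Count words in a text"
```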
How to install
Prerequisites:
- Python 3.8 or newer (latest stable recommended)
- Access to a shell/terminal
- Optional: the uvx command-line tool (bundled with the uv package manager) for starting MCP servers
Install the server package:
Using uvx (recommended for LLMLing-based setups):
- Install or upgrade uv, which provides the uvx command, if you don't have it (see the uv installation docs for your platform)
- Start the server with the latest release: uvx mcp-server-llmling@latest start path/to/your/config.yml
Or via Python/pip (if the package publishes a Python entry point):
- Ensure pip is up to date: python -m pip install --upgrade pip
- Install the MCP server package: pip install mcp-server-llmling
- Run the server using the Python entry point if provided by the package (see package docs):
- python -m mcp_server_llmling start path/to/your/config.yml
Configuration:
- Prepare a YAML config file (config.yml) that defines global_settings, resources, tools, and toolsets as described in the repository documentation.
- Ensure the path to config.yml is accessible by the server process.
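The global_settings and toolsets sections mentioned above might look like this in practice (again, the key names are illustrative and may differ from LLMling's real schema):

```yaml
# Hypothetical global_settings / toolsets sketch
global_settings:
  log_level: "INFO"        # raise to "DEBUG" when troubleshooting
  # transport: "sse"       # for web clients; stdio is the default
  # host: "127.0.0.1"
  # port: 8000

toolsets:
  petstore_api:
    type: openapi
    spec: "https://example.com/openapi.json"  # must be reachable from the server
```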
Example run (using uvx):
- uvx mcp-server-llmling@latest start ./config.yml
Example run (manual Python invocation):
- python -m mcp_server_llmling start ./config.yml
Additional notes
- If you plan to run behind a web client, consider enabling a streaming transport such as SSE by configuring LLMLing’s transport to 'sse' and setting host/port/cors origins as needed.
- Environment variables you may encounter or want to configure include LOG_LEVEL (e.g., INFO, DEBUG), CONFIG_PATH (path to config.yml), and any provider-specific settings required by your resources or tools.
- When updating YAML definitions (resources, prompts, or tools), enable file watching or hot reload if supported to avoid restarting the server frequently.
- If you experience issues with tool invocation, verify that Python callables are importable and that the tools are registered correctly in the YAML under the tools and toolsets sections.
- OpenAPI tool integrations can be configured under toolsets; ensure external schemas are reachable and, if needed, bundle multi-file schemas into a single document with a tool such as Redocly CLI.
- For debugging, start the server with a higher log level (e.g., log_level: "DEBUG") in global_settings to surface detailed information.
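For the importability check mentioned in the tips above, a small hypothetical helper (not part of LLMling's API) can confirm that a dotted path from your YAML tools section actually resolves to a Python callable before you start the server:

```python
from importlib import import_module

def resolve_callable(dotted_path: str):
    """Resolve a dotted path like 'json.dumps' to the object it names."""
    module_path, _, attr = dotted_path.rpartition(".")
    if not module_path:
        raise ValueError(f"not a dotted path: {dotted_path!r}")
    obj = getattr(import_module(module_path), attr)
    if not callable(obj):
        raise TypeError(f"{dotted_path} does not resolve to a callable")
    return obj

# Sanity-check a path before wiring it into the YAML tools section:
fn = resolve_callable("json.dumps")
print(fn({"ok": True}))  # → {"ok": true}
```

If this raises ImportError, ValueError, or TypeError, the same path will likely fail when the server tries to register the tool.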
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP