mcp-llm
An MCP server that provides LLMs access to other LLMs
claude mcp add --transport stdio sammcj-mcp-llm node dist/server.js
How to use
This MCP server exposes an LLM-powered toolkit built on the LlamaIndexTS library. It provides four main tools:
- generate_code — generate new code snippets from natural language descriptions
- generate_code_to_file — write generated code directly to a file at a specified location
- generate_documentation — create documentation for given code
- ask_question — ask the integrated language model for explanations or guidance
The server is designed to be called programmatically by an MCP client (or via curl against the server's API endpoints, where an HTTP transport is available), enabling seamless integration into your development workflow.
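As a sketch of the wire format, an MCP client invokes one of these tools with a JSON-RPC 2.0 `tools/call` message over the configured transport. The argument field names below (`description`, `language`) are illustrative assumptions, not the server's actual schema — check each tool's input schema in the README:

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke
// one of the server's tools over stdio.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_code",
    arguments: {
      description: "a function that reverses a string", // assumed field name
      language: "typescript",                           // assumed field name
    },
  },
};

// Serialized, this is one line of the stdio stream sent to the server.
console.log(JSON.stringify(request));
```

The response comes back as a matching JSON-RPC result on the same stream; your MCP client library normally handles this framing for you.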
How to install
Prerequisites:
- Node.js (recommended: LTS, e.g., Node 18+)
- npm (comes with Node.js)
- Access to a compatible MCP runtime environment (via Smithery or your own MCP deployment)
Install from source:
- Clone the repository:
  git clone https://github.com/sammcj/mcp-llm.git
  cd mcp-llm
- Install dependencies:
  npm install
- Build the project:
  npm run build
- Update your MCP configuration to include this server. Example:
  {
    "mcp_config": {
      "mcpServers": {
        "mcp-llm": {
          "command": "node",
          "args": ["dist/server.js"]
        }
      }
    }
  }
- Start the MCP server (this depends on your MCP runtime). If using a local dev setup:
  npm start
Using Smithery (optional):
- Install via Smithery with your desired client, e.g., Claude: npx -y @smithery/cli install @sammcj/mcp-llm --client claude
Additional notes
Tips and common considerations:
- After building, the server is typically started via your MCP runtime using the configured entry point (e.g., dist/server.js).
- The four tools expect certain input formats (see examples in the README). Ensure you pass valid JSON payloads when calling generate_code, generate_code_to_file, generate_documentation, or ask_question.
- For generate_code_to_file, relative paths are resolved relative to the MCP server’s working directory; you can also provide absolute paths.
- If you encounter memory or latency issues, adjust your LlamaIndexTS model settings or increase the available RAM where the LLM is loaded.
- Check logs for API endpoints, request shapes, and any model-specific configuration required by your deployment.
- Ensure your environment provides access to the LLM models you intend to use (local models or remote API endpoints) as configured by the server.
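The path-resolution behaviour described above for generate_code_to_file matches Node's `path.resolve`: relative paths are joined onto the working directory, while absolute paths are used as-is. A minimal illustration (the directories are hypothetical):

```typescript
import * as path from "node:path";

// Relative output paths resolve against the server's working directory;
// absolute paths are taken verbatim.
const cwd = "/srv/mcp-llm"; // hypothetical working directory

console.log(path.resolve(cwd, "generated/util.ts")); // /srv/mcp-llm/generated/util.ts
console.log(path.resolve(cwd, "/tmp/util.ts"));      // /tmp/util.ts
```

If you need the output to land in a predictable place regardless of where the MCP runtime launched the server, prefer absolute paths.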