
mcp-llm

An MCP server that provides LLMs access to other LLMs

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio sammcj-mcp-llm node dist/server.js

How to use

This MCP server exposes an LLM-powered toolkit built on the LlamaIndexTS library. It provides four tools: generate_code, generate_code_to_file, generate_documentation, and ask_question. Use them to generate code snippets from natural-language descriptions, write generated code directly to a file at a specified path, produce documentation for existing code, or put questions to the configured language model for explanations or guidance. The server speaks the Model Context Protocol over stdio, so any MCP client (such as Claude Code) can call these tools as part of your development workflow.
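As a rough sketch of what a tool call looks like on the wire, the snippet below builds the JSON-RPC 2.0 "tools/call" message an MCP client would send over stdio to invoke generate_code. The argument names (description, language) are illustrative assumptions, not the server's documented schema; see the README for the actual input formats.

```javascript
// Hypothetical "tools/call" request for the generate_code tool.
// MCP stdio servers read newline-delimited JSON-RPC messages on stdin.
// NOTE: the keys inside "arguments" are assumptions, not the documented schema.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_code",
    arguments: {
      description: "A function that reverses a string", // assumed parameter name
      language: "typescript",                           // assumed parameter name
    },
  },
};

// An MCP client writes this, newline-terminated, to the server's stdin:
const wire = JSON.stringify(request) + "\n";
console.log(wire);
```

In practice your MCP client (e.g., Claude Code) constructs and sends these messages for you; the sketch is only to show the request shape.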

How to install

Prerequisites:

  • Node.js (recommended: LTS, e.g., Node 18+)
  • npm (comes with Node.js)
  • Access to a compatible MCP runtime environment (via Smithery or your own MCP deployment)

Install from source:

  1. Clone the repository:

     git clone https://github.com/sammcj/mcp-llm.git
     cd mcp-llm

  2. Install dependencies:

     npm install

  3. Build the project:

     npm run build

  4. Update your MCP configuration to include this server. Example:

     {
       "mcp_config": {
         "mcpServers": {
           "mcp-llm": {
             "command": "node",
             "args": ["dist/server.js"]
           }
         }
       }
     }

  5. Start the MCP server (depends on your MCP runtime). For a local dev setup:

     npm start

Using Smithery (optional):

  • Install via Smithery for your desired client, e.g., Claude:

    npx -y @smithery/cli install @sammcj/mcp-llm --client claude

Additional notes

Tips and common considerations:

  • After building, the server is typically started via your MCP runtime using the configured entry point (e.g., dist/server.js).
  • The four tools expect certain input formats (see examples in the README). Ensure you pass valid JSON payloads when calling generate_code, generate_code_to_file, generate_documentation, or ask_question.
  • For generate_code_to_file, relative paths are resolved relative to the MCP server’s working directory; you can also provide absolute paths.
  • If you encounter memory or latency issues, adjust your LlamaIndexTS model settings or increase the available RAM where the LLM is loaded.
  • Check the server logs for request shapes and any model-specific configuration required by your deployment.
  • Ensure your environment provides access to the LLM models you intend to use (local models or remote API endpoints) as configured by the server.
