
mcp

A generic, modular server for implementing the Model Context Protocol (MCP).

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio profullstack-mcp-server node server.js \
  --env HOST="localhost" \
  --env PORT="3000" \
  --env OPENAI_API_KEY="your_openai_api_key_here" \
  --env ANTHROPIC_API_KEY="your_anthropic_api_key_here" \
  --env STABILITY_API_KEY="your_stability_api_key_here"

How to use

This MCP server provides a modular framework for controlling and interacting with multiple AI models through a standardized API. It exposes endpoints for server information, status, health, and metrics, as well as management of models and model inference. You can activate a specific model, perform inferences (for text, image, and other supported modalities), and stream results where supported. The server is designed to load modules dynamically, manage core and module-level functionality, and offer metadata about modules and tools in API responses. You can also explore and manage installed modules or search for new ones to extend capabilities.
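As a quick sketch of the informational endpoints (the exact paths /status, /health, and /metrics are assumptions here; the MCP standard method docs are authoritative), you might probe a locally running server like this:

```shell
# Base URL matches the HOST/PORT values from the install command above.
BASE="http://localhost:3000"

# Probe the informational endpoints; the paths are illustrative guesses.
# "|| true" keeps the loop going if the server is not running yet.
for path in /status /health /metrics; do
  echo "GET $BASE$path"
  curl -s "$BASE$path" || true
done
```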

Core capabilities include: listing and inspecting models, activating/deactivating models, performing inference with a specified or active model, and retrieving streaming results if the underlying model supports it. The server is designed to work with providers like OpenAI, Stability AI, Anthropic, and Hugging Face, enabling text generation, speech-to-text, image generation, and other model types. Modules can extend functionality, add tools, and integrate additional resources, with tests in Mocha/Chai ensuring reliability. To use these features, ensure your environment variables (API keys for providers) are set, then start the server and interact with the endpoints as documented in the MCP standard methods.
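Before starting the server, a small environment check (plain shell, nothing repo-specific) can confirm the provider keys mentioned above are actually set:

```shell
# Collect any provider API keys that are not set in the current environment.
MISSING=""
for key in OPENAI_API_KEY ANTHROPIC_API_KEY STABILITY_API_KEY; do
  [ -n "$(printenv "$key")" ] || MISSING="$MISSING $key"
done

# Report the result; a warning lists every key that still needs a value.
if [ -n "$MISSING" ]; then
  echo "warning: unset keys:$MISSING"
else
  echo "all provider keys are set"
fi
```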

Typical usage involves activating a model and then calling the inference endpoints. For example, activate a GPT-4 model and then post a prompt to /model/infer, or post to /model/:modelId/infer to run inference against a specific model. If you enable streaming, you can receive chunked responses as the model generates output. You can also manage modules via the /modules endpoints and discover available tools and resources via /tools and /resources.
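For example (the request body fields below are assumptions rather than a documented schema, and "gpt-4" stands in for whatever model id your server reports), a specific-model inference call could look like:

```shell
# Hypothetical request body; check the MCP method docs for the real schema.
INFER='{"prompt":"Write a haiku about servers.","stream":false}'

# POST to the specific-model inference endpoint; requires the server to be
# running on localhost:3000. "|| true" tolerates a missing server.
curl -s -X POST "http://localhost:3000/model/gpt-4/infer" \
  -H "Content-Type: application/json" \
  -d "$INFER" || true
```

With streaming enabled in the body, the same endpoint would return chunked responses as the model generates output.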

How to install

Prerequisites:

  • Node.js 18.x or higher
  • pnpm 10.x or higher
  1. Clone the repository:

     git clone https://github.com/profullstack/mcp-server.git
     cd mcp-server

  2. Install dependencies:

     pnpm install

  3. Configure environment: copy sample.env to .env and edit it with your API keys and desired settings.

     cp sample.env .env

  4. Run the server in development mode (auto-reload on changes):

     pnpm dev

  5. Start in production mode (if needed). Build steps may vary by deployment; typically:

     pnpm start

  6. Optional: run tests.

     • Core tests: pnpm test
     • Module tests only: pnpm test:modules
     • All tests (core + modules): pnpm test:all

Additional notes

  • This server uses ES Modules (ESM) exclusively; imports use the import syntax.
  • Environment variables are loaded from .env; ensure keys for providers you plan to use are set (OPENAI_API_KEY, STABILITY_API_KEY, ANTHROPIC_API_KEY, etc.).
  • Docker users can build and run the container with the provided Dockerfile; it exposes port 3000 and mounts modules for easy management.
  • The MCP endpoints follow a standard method set; refer to the docs for detailed request/response formats and example payloads.
  • If you extend with additional modules, ensure their package.json metadata is discoverable by the /modules/search endpoint.
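For the Docker route mentioned above, a typical build-and-run sequence might look like the following (the image tag and module mount path are guesses; adjust them to the repo's actual layout):

```shell
# Hypothetical image tag; the Dockerfile itself ships with the repo.
IMAGE="profullstack/mcp-server"

# Build the image, then run it publishing port 3000 and mounting the
# modules directory so modules can be managed from the host.
# "|| true" tolerates environments where Docker is unavailable.
docker build -t "$IMAGE" . || true
docker run -d -p 3000:3000 \
  --env-file .env \
  -v "$(pwd)/modules:/app/modules" \
  "$IMAGE" || true
```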
