openapi
OpenAPI definitions, converters and LLM function calling schema composer.
claude mcp add --transport stdio samchon-openapi npx -y @samchon/openapi
How to use
The @samchon/openapi MCP server transforms OpenAPI/Swagger documents into LLM function calling schemas that can be executed against your HTTP backend. It normalizes Swagger v2.0, OpenAPI v3.0, and OpenAPI v3.1 documents into an emended OpenAPI v3.1 representation, then exposes a consistent interface for generating type-safe function-calling schemas usable by OpenAI, Claude, Qwen, Llama, and other providers. Use this server to expose your existing APIs to AI agents: it generates the function definitions, parameters, and validation scaffolding an LLM needs to construct correct requests to your backend, with type safety, automatic validation, and MCP integration built in, so your backend becomes AI-callable without writing bespoke adapters.
To use it, supply your OpenAPI document to the server, which will produce an application consisting of functions that mirror your API endpoints. You can then integrate these functions into your chosen LLM workflow by attaching them as function-call tools to the model's chat or generation interface. The server is designed to work across OpenAI, Claude, Qwen, and other providers, enabling consistent behavior for function calling regardless of provider.
Example workflows include loading a Swagger/OpenAPI document, converting it to the emended OpenAPI format, generating an LLM function calling schema, and then selecting a suitable function to invoke based on the LLM’s outputs. This enables AI-driven orchestration of HTTP backends with validation and type safety baked in.
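The workflow above can be sketched generically. The snippet below is a hypothetical illustration, not the library's actual API: it shows how a single OpenAPI operation (method, path, parameters) might be flattened into the name/description/parameters shape that LLM function calling expects.

```typescript
// Hypothetical sketch (not @samchon/openapi's actual API): flattening one
// OpenAPI operation into an LLM function-calling schema.
interface Operation {
  method: string;
  path: string;
  description?: string;
  parameters: { name: string; schema: object; required?: boolean }[];
}

interface LlmFunction {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, object>;
    required: string[];
  };
}

function composeFunction(op: Operation): LlmFunction {
  // Derive a provider-safe function name from method + path.
  const name = `${op.method}_${op.path.replace(/[/{}]/g, "_")}`
    .replace(/_+/g, "_")
    .replace(/_$/, "");
  return {
    name,
    description: op.description ?? `${op.method.toUpperCase()} ${op.path}`,
    parameters: {
      type: "object",
      properties: Object.fromEntries(
        op.parameters.map((p) => [p.name, p.schema]),
      ),
      required: op.parameters.filter((p) => p.required).map((p) => p.name),
    },
  };
}

const fn = composeFunction({
  method: "get",
  path: "/users/{id}",
  description: "Fetch a user by its identifier",
  parameters: [{ name: "id", schema: { type: "string" }, required: true }],
});
console.log(fn.name); // get_users_id
```

The real server performs this composition for every endpoint in the document, so the LLM receives one callable function per API operation.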
How to install
Prerequisites:
- Node.js (LTS version) and npm installed on your machine
- Internet access to download the package
Install the MCP server (as a local package via NPX):
- Install Node.js and npm if not already installed.
  - macOS/Linux: install from https://nodejs.org/
  - Windows: install from https://nodejs.org/
- Run the MCP server using NPX (this will fetch and run @samchon/openapi):
# Start the MCP server for openapi using npx
npx -y @samchon/openapi
- If you want to pin a specific version, you can specify it in the command:
npx -y @samchon/openapi@1.x.y
- Alternatively, install locally and run a script that initializes the server as needed for your workflow:
npm install @samchon/openapi
# Then use your own node script to load and transform OpenAPI documents via the library
- Ensure your OpenAPI document is accessible (local file or URL) and ready for conversion.
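For MCP clients that are configured through a JSON file rather than the `claude mcp add` command shown above, an equivalent entry might look like the following (key names vary by client; this follows the common `mcpServers` convention):

```json
{
  "mcpServers": {
    "samchon-openapi": {
      "command": "npx",
      "args": ["-y", "@samchon/openapi"]
    }
  }
}
```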
Additional notes
Tips and common considerations:
- The tool supports Swagger v2.0, OpenAPI v3.0, and OpenAPI v3.1, and emits an emended v3.1 format for consistent downstream usage.
- The MCP integration enables you to treat the API as an AI-callable service within your multi-provider LLM setup.
- If you encounter validation errors, verify that your OpenAPI document is well-formed and valid against the OpenAPI specifications; the emended format aims to reduce ambiguity but still relies on correct source semantics.
- When integrating with an MCP workflow, you can reuse the generated function calling schemas across different LLM providers, simplifying cross-provider AI integration.
- Environment variables and configuration options can be extended as needed in your deployment context; the default MCP config shown uses the package directly via NPX.
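Since the tool accepts Swagger v2.0 alongside OpenAPI v3.0 and v3.1, a converter must first identify which specification a document uses before normalizing it. The sketch below is a hypothetical illustration of that detection step (the `swagger` and `openapi` version fields are defined by the respective specifications; the function itself is not part of the library's API):

```typescript
// Hypothetical sketch: detecting the source specification version
// before normalizing a document to the emended v3.1 format.
type SpecVersion = "swagger-2.0" | "openapi-3.0" | "openapi-3.1";

function detectVersion(doc: Record<string, unknown>): SpecVersion {
  // Swagger v2.0 documents carry a `swagger: "2.0"` field.
  if (doc.swagger === "2.0") return "swagger-2.0";
  // OpenAPI v3.x documents carry an `openapi` version string instead.
  const v = typeof doc.openapi === "string" ? doc.openapi : "";
  if (v.startsWith("3.1")) return "openapi-3.1";
  if (v.startsWith("3.0")) return "openapi-3.0";
  throw new Error("Unsupported or missing specification version field");
}

console.log(detectVersion({ swagger: "2.0" }));   // swagger-2.0
console.log(detectVersion({ openapi: "3.1.0" })); // openapi-3.1
```

A v3.1 document can pass through mostly unchanged, while v2.0 and v3.0 documents need structural conversion (e.g. `definitions` vs. `components.schemas`) before the function-calling schemas are composed.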
Related MCP Servers
AstrBot
Agentic IM chatbot infrastructure that integrates many IM platforms, LLMs, plugins, and AI features, and can be your openclaw alternative. ✨
Everywhere
Context-aware AI assistant for your desktop. Ready to respond intelligently, seamlessly integrating multiple LLMs and MCP tools.
Risuai
Make your own story. User-friendly software for LLM roleplaying
coplay-unity-plugin
Unity plugin for Coplay
brainstorm
MCP server for multi-round AI brainstorming debates between multiple models (GPT, DeepSeek, Groq, Ollama, etc.)
gtm
An MCP server for Google Tag Manager. Connect it to your LLM, authenticate once, and start managing GTM through natural language.