mcp-openapi
Spring OpenAPI MCP server capable of dynamically exposing OpenAPI specifications as tools to be consumed by LLMs through an MCP client. This project does not interact with an MCP server directly.
How to use
The mcp-openapi server is a Spring-based implementation that dynamically exposes OpenAPI specifications as tools for large language models (LLMs) via a Model Context Protocol (MCP) client. Developers use this server to integrate OpenAPI specifications seamlessly, allowing their applications to leverage rich API capabilities without interacting with an MCP server directly. This gives applications easy access to a variety of tools through a standardized protocol.
Once connected to the mcp-openapi server, you can invoke the available tools by sending requests that adhere to the OpenAPI specifications the server exposes. No specific tools are documented at this time; the tools are generated dynamically from your OpenAPI configuration and cover operations such as GET, POST, and PUT on the endpoints it defines. Familiarize yourself with the OpenAPI documentation relevant to your project to ensure effective queries and command usage.
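As an illustration, an MCP client such as Claude Desktop registers servers in a JSON configuration file. The sketch below is an assumption based on the Quick Start command further down, not a documented configuration for this project; adjust the command and arguments to match how you actually run the server:

```json
{
  "mcpServers": {
    "mcp-openapi": {
      "command": "npx",
      "args": ["-y", "mcp-openapi"]
    }
  }
}
```

With an entry like this in place, the client launches the server on startup and lists the dynamically generated OpenAPI tools alongside any others it knows about.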
How to install
Prerequisites
- Java Development Kit (JDK) 11 or higher
- Maven for building the project
- Node.js for running any client-side applications that may interact with the server
Option A: Quick Start
If you prefer a quick start, run the server with npx:
npx -y mcp-openapi
Option B: Global Install Alternative
For a global installation, clone the repository and build the project:
git clone https://github.com/sirimamilla/mcp-openapi-server.git
cd mcp-openapi-server
mvn clean install
After building, run the server with:
java -jar target/mcp-openapi-server.jar
Additional notes
For optimal performance, ensure your server configuration is set correctly in the application properties file, which lets you define environment variables for the OpenAPI specifications. Common gotchas:
- Validate your OpenAPI definitions before loading them; invalid specs will not be exposed correctly.
- Confirm the server is configured to listen on the correct port.
- Check network settings to prevent connectivity issues when interfacing with LLM clients.
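A minimal `application.properties` might look like the following sketch. `server.port` is a standard Spring Boot property; the spec-location key is a hypothetical placeholder for illustration only, so consult the project's documentation for the actual key it reads:

```properties
# Standard Spring Boot property: the port the server listens on
server.port=8080

# Hypothetical key for illustration -- check the project's docs for the
# real property that points the server at your OpenAPI specifications
openapi.spec.location=classpath:specs/my-api.yaml
```

Spring Boot also lets you override any of these at launch, e.g. `java -jar target/mcp-openapi-server.jar --server.port=9090`, which is handy when the default port is already in use.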