openapi-to-mcp
Turns any OpenAPI/Swagger API into an MCP server: one MCP tool per endpoint, served over Streamable HTTP, so AI clients can call your REST API.
claude mcp add --transport stdio evilfreelancer-openapi-to-mcp -- \
  docker run -i \
    --env MCP_HOST="0.0.0.0" \
    --env MCP_PORT="3100" \
    --env MCP_LOG_LEVEL="INFO" \
    --env MCP_API_BASE_URL="Base URL for backend API requests (required)" \
    --env MCP_OPENAPI_SPEC="URL or file path to OpenAPI spec (required)" \
    --env MCP_INSTRUCTIONS_FILE="path/to/instructions (optional)" \
    --env MCP_INSTRUCTIONS_MODE="default|replace|append|prepend (optional)" \
    evilfreelancer/openapi-to-mcp:latest
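For example, to ship custom instructions alongside the spec you can mount a file into the container and point MCP_INSTRUCTIONS_FILE at it. This is a sketch, not a documented invocation: the mount path is arbitrary, "append" is just one of the modes listed above, and the backend URLs are placeholders:
  docker run -i \
    -v "$PWD/instructions.md:/config/instructions.md:ro" \
    --env MCP_API_BASE_URL="https://api.example.com" \
    --env MCP_OPENAPI_SPEC="https://api.example.com/openapi.json" \
    --env MCP_INSTRUCTIONS_FILE="/config/instructions.md" \
    --env MCP_INSTRUCTIONS_MODE="append" \
    evilfreelancer/openapi-to-mcp:latest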
How to use
This MCP server, openapi-to-mcp, exposes each operation described in your OpenAPI/Swagger spec as an individual MCP tool. It loads the OpenAPI document at startup, filters operations according to the include/exclude settings, and registers one tool per API operation. Each tool's input schema is derived from the operation's parameters and requestBody, and tool calls are executed as HTTP requests against your backend API. The transport is Streamable HTTP, so clients connect via POST/GET to /mcp. The server attaches correlation IDs for request tracing and supports configurable log levels via MCP_LOG_LEVEL.
To use it, provide your OpenAPI spec with MCP_OPENAPI_SPEC and point MCP_API_BASE_URL at the actual backend API. You can override tool naming with MCP_TOOL_PREFIX, and tailor which endpoints become tools using MCP_INCLUDE_ENDPOINTS and MCP_EXCLUDE_ENDPOINTS. You can also customize the instructions presented to connecting MCP clients by supplying your own MCP_INSTRUCTIONS_FILE and choosing a mode with MCP_INSTRUCTIONS_MODE.
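Normally your MCP client handles the wire protocol, but it helps to see what the transport looks like. Below is a minimal sketch of a client's first request to the /mcp endpoint using curl, assuming the server follows the standard MCP Streamable HTTP handshake (host, port, and client name are example values):
  # Send a JSON-RPC initialize request to the Streamable HTTP endpoint
  curl -s -X POST http://localhost:3100/mcp \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-probe","version":"0.0.0"}}}'
Subsequent calls such as tools/list and tools/call are POSTed to the same URL; if the server returns an Mcp-Session-Id header on initialize, include it on later requests.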
How to install
Prerequisites:
- Docker installed and available on the host
- Access to the OpenAPI spec you want to expose
- Optional: environment variable file (.env) or direct environment configuration
Install and run using Docker:
- Prepare environment variables (example):
  MCP_OPENAPI_SPEC=http://api.example.com/openapi.json
  MCP_API_BASE_URL=http://api.example.com
  MCP_PORT=3100
  MCP_HOST=0.0.0.0
- Run the container (as shown in the MCP config; a variant using an env file follows these steps):
  docker run --rm -p 3100:3100 \
    -e MCP_OPENAPI_SPEC=$MCP_OPENAPI_SPEC \
    -e MCP_API_BASE_URL=$MCP_API_BASE_URL \
    evilfreelancer/openapi-to-mcp:latest
- Verify the server is listening and accessible at http://<host>:3100/mcp
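If you keep the variables from the first step in a .env file, Docker's standard --env-file flag passes them all at once; a minimal sketch using the same image and port:
  # .env holds MCP_OPENAPI_SPEC, MCP_API_BASE_URL, MCP_PORT, MCP_HOST from the first step
  docker run --rm -p 3100:3100 --env-file .env evilfreelancer/openapi-to-mcp:latest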
If you prefer local npm-based usage (not required for the Docker path):
- Install dependencies and build locally:
  npm ci
  npm run build
- Start the server (see the example after these steps for supplying configuration):
  npm run start
- Access the MCP endpoint at http://localhost:3100/mcp
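The npm path does not document configuration separately; assuming the local build reads the same environment variables as the Docker image, a local run would look roughly like this (example URLs):
  export MCP_OPENAPI_SPEC=http://api.example.com/openapi.json
  export MCP_API_BASE_URL=http://api.example.com
  export MCP_PORT=3100
  npm run start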
Additional notes
Notes:
- MCP_OPENAPI_SPEC must be provided (URL or file path). The server supports both URL and filesystem sources.
- If MCP_INCLUDE_ENDPOINTS is set, only the listed endpoints become tools; MCP_EXCLUDE_ENDPOINTS removes endpoints from the tool set, but an endpoint explicitly listed in MCP_INCLUDE_ENDPOINTS still takes precedence.
- Tool names are formed from MCP_TOOL_PREFIX + path segments, with parameters included; name uniqueness is enforced when multiple methods share the same path segment.
- Ensure MCP_API_BASE_URL is reachable from the MCP server so that backend requests succeed.
- If you use Docker on Linux, you may need --add-host for host.docker.internal, or host networking, depending on your setup (see the combined example after these notes).
- For observability, set MCP_LOG_LEVEL to DEBUG while troubleshooting.
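Putting several of these notes together, a sketch of a Linux Docker invocation that reaches a backend on the host, narrows the exposed endpoints, prefixes tool names, and enables debug logging might look like the following. The --add-host mapping is standard Docker; the value formats for MCP_INCLUDE_ENDPOINTS and MCP_TOOL_PREFIX (a comma-separated path list and a plain string prefix) are assumptions to check against the project's documentation:
  docker run --rm -p 3100:3100 \
    --add-host host.docker.internal:host-gateway \
    -e MCP_OPENAPI_SPEC=http://host.docker.internal:8080/openapi.json \
    -e MCP_API_BASE_URL=http://host.docker.internal:8080 \
    -e 'MCP_INCLUDE_ENDPOINTS=/pets,/pets/{petId}' \
    -e MCP_TOOL_PREFIX=petstore \
    -e MCP_LOG_LEVEL=DEBUG \
    evilfreelancer/openapi-to-mcp:latest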
Related MCP Servers
openmcp
Turn any openapi file into an mcp server, with just the tools you need.
mcp-agent
Lightweight, focused utilities to manage connections and execute MCP tools with minimal integration effort. Use it to directly call tools or build simple agents within your current architecture.
puppeteer
Self-hosted Puppeteer MCP server with remote SSE access, API key authentication, and Docker deployment. Complete tool suite for browser automation via Model Context Protocol.
civitai
A Model Context Protocol server for browsing and discovering AI models on Civitai
codecks
A simple Model Context Protocol server for basic Codecks task management
warp-sql
🗄️ Model Context Protocol (MCP) server for SQL Server integration with Warp terminal. Execute queries, explore schemas, export data, and analyze performance with natural language commands.