
mcp-server-openai

An MCP server for OpenAI models, with o3-mini support

Installation
Run this command in your terminal to add the MCP server to Claude Code. Note that the --env flag belongs to claude mcp add itself, so it goes before the server name, and -- separates the server command from the CLI's own options:

claude mcp add --transport stdio --env PYTHONPATH="/path/to/your/mcp-server-openai" thadius83-mcp-server-openai -- \
  python -m src.mcp_server_openai.server --openai-api-key your-key-here

How to use

This MCP server exposes a single tool, ask-openai, which routes questions to OpenAI models over MCP. It supports multiple models: o3-mini (the default) for concise responses and gpt-4o-mini for more detailed answers. To use it, point Claude (or your MCP client) at the server under the name github.com/thadius83/mcp-server-openai and invoke the ask-openai tool with a JSON payload containing the query and an optional model. The server authenticates with OpenAI using the API key you pass on the command line at startup; the model can be selected per request or left to the default. Responses are returned in a standardized JSON format that the client renders to the user.

How to install

Prerequisites:

  • Python 3.10 or newer
  • pip
  • OpenAI API key

Manual installation steps:

  1. Clone the repository:
     git clone https://github.com/thadius83/mcp-server-openai.git
     cd mcp-server-openai

  2. Install the package in editable mode:
     pip install -e .

  3. Run the MCP server (example using the provided configuration):
     python -m src.mcp_server_openai.server --openai-api-key your-openai-api-key

  4. Add the server to your Claude (or other MCP client) configuration. Ensure PYTHONPATH points to the project directory, and configure the client to pass the OpenAI API key on the command line, as shown in the example configuration.
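For clients configured through a JSON file (for example Claude Desktop's claude_desktop_config.json), step 4 would look roughly like the entry below; the path and key are placeholders, and the exact file location depends on your client:

```json
{
  "mcpServers": {
    "github.com/thadius83/mcp-server-openai": {
      "command": "python",
      "args": ["-m", "src.mcp_server_openai.server", "--openai-api-key", "your-key-here"],
      "env": {
        "PYTHONPATH": "/path/to/your/mcp-server-openai"
      }
    }
  }
}
```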

Smithery installation (optional):

  • Install via Smithery for Claude integration as described in the README using the Smithery CLI, which will register the server for Claude usage.

Additional notes

  • The server requires an OpenAI API key; keep it secure and do not share it publicly.
  • Ensure PYTHONPATH points to the local repository path when running the server so imports resolve correctly.
  • Available models: o3-mini (default) and gpt-4o-mini. You can pass the desired model in the request payload; if omitted, o3-mini is used.
  • If you encounter authentication or model errors, verify your OpenAI API key and that the model names are supported by the server configuration.
  • Example payload for ask-openai (the model field is optional): { "query": "Your question here", "model": "o3-mini" }
  • Troubleshooting steps include testing the server directly with the OpenAI key and checking that the PYTHONPATH and module path (src.mcp_server_openai.server) are correct.
