openai-agents
OpenAI Agents And Tools as MCP Server
claude mcp add --transport stdio --env OPENAI_API_KEY="your-api-key-here" \
  lroolle-openai-agents-mcp-server -- uvx openai-agents-mcp-server
How to use
This MCP server exposes OpenAI agents as MCP endpoints so clients can orchestrate and delegate tasks to specialized tools. It provides three main capabilities: Web Search Agent for real-time web information, File Search Agent for searching and analyzing content in OpenAI’s vector store, and Computer Action Agent for performing safe, simulated computer actions. In addition, an Orchestrator Agent can coordinate between these specialized agents to decide which tool to use for a given task. Clients such as the Claude Desktop app can discover and invoke these agents through the MCP protocol, enabling composable workflows across agents.
To use the server, run it with an appropriate transport (for example, stdio or SSE) and provide your OpenAI API key. Once it is running, you can send MCP requests to invoke specific agents or the orchestrator, supply any required inputs (such as vector store IDs for file search or context for web search), and receive structured responses. The server is designed to be extensible, so you can add more specialized agents or adjust the orchestrator's behavior to fit your workflows.
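As a rough sketch, an MCP client invokes a server tool with a JSON-RPC 2.0 `tools/call` request. The tool name and argument keys below are hypothetical, not this server's confirmed schema; a real client would first issue `tools/list` to discover the actual tool names:

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 "tools/call" request as an MCP client would send it."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical example: asking a file-search tool to search specific vector stores.
request = build_tool_call(
    1,
    "file_search_agent",
    {"query": "quarterly revenue", "vector_store_ids": ["vs_example123"]},
)
```

The response comes back as a JSON-RPC result whose content the client unpacks into structured output.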
How to install
Prerequisites:
- Python 3.11 or higher
- uv package manager (recommended; install via pipx or system package manager)
- OpenAI API key
Option A: Install via Smithery (for Claude Desktop users)
- Ensure Node.js and npm are installed.
- Run:
npx -y @smithery/cli install @lroolle/openai-agents-mcp-server --client claude
- Follow Smithery prompts to set up the client linkage.
Option B: Local development setup (manual)
- Create and activate a Python virtual environment:
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
- Install dependencies and the MCP tooling (via uv):
uv sync --dev
- Run the server locally with a chosen transport (e.g., stdio or sse):
export OPENAI_API_KEY=your-api-key
export MCP_TRANSPORT=sse
uv run mcp dev src/agents_mcp_server/server.py
- Point your MCP client to the server URI (e.g., SSE endpoint) and begin sending MCP requests.
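The two environment variables exported above drive the server's startup configuration. A minimal sketch of how that resolution might look (the function name and return shape here are illustrative, not the project's actual code):

```python
import os

def resolve_config(env: dict) -> dict:
    """Validate the API key and pick a transport, defaulting to stdio."""
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required")
    transport = env.get("MCP_TRANSPORT", "stdio")
    if transport not in {"stdio", "sse"}:
        raise ValueError(f"unsupported transport: {transport}")
    return {"api_key": api_key, "transport": transport}

# Typically called with the real process environment:
# config = resolve_config(dict(os.environ))
```

With `MCP_TRANSPORT=sse`, clients connect to the server's SSE endpoint over HTTP; with the stdio default, the client launches the server as a subprocess and speaks MCP over stdin/stdout.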
Additional notes
- Environment variables: OPENAI_API_KEY is required. MCP_TRANSPORT can be stdio (the default) or sse for server-sent events.
- The File Search Agent requires vector_store_ids corresponding to your OpenAI vector stores.
- The Computer Action Agent uses a simulated AsyncComputer implementation by default; to enable real interactions, implement and plug in a concrete AsyncComputer.
- If you plan to deploy, consider configuring additional environment variables for logging, rate limiting, and security.
- The server can be extended by adding more specialized agents or by enhancing the orchestrator for complex workflows.
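A simulated computer backend can be as simple as recording requested actions instead of performing them. This standalone sketch only illustrates the idea; a real implementation would subclass the AsyncComputer interface from the openai-agents SDK, whose exact method signatures are not reproduced here:

```python
import asyncio

class SimulatedComputer:
    """Illustrative stand-in that logs actions rather than executing them."""

    def __init__(self) -> None:
        self.log: list[str] = []

    async def click(self, x: int, y: int) -> None:
        self.log.append(f"click({x}, {y})")

    async def type(self, text: str) -> None:
        self.log.append(f"type({text!r})")

    async def screenshot(self) -> str:
        self.log.append("screenshot")
        return ""  # a real backend would return base64-encoded image data

async def demo() -> list[str]:
    computer = SimulatedComputer()
    await computer.click(100, 200)
    await computer.type("hello")
    await computer.screenshot()
    return computer.log
```

Swapping in a concrete backend (one that drives a browser or a VM) is then a matter of replacing the logging bodies with real side effects while keeping the same async surface.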
Related MCP Servers
sample-agentic-ai-demos
Collection of examples of how to use Model Context Protocol with AWS.
mcp-crew-ai
MCP Crew AI Server is a lightweight Python-based server designed to run, manage and create CrewAI workflows.
Python-Runtime-Interpreter
PRIMS is a lightweight, open-source Model Context Protocol (MCP) server that lets LLM agents safely execute arbitrary Python code in a secure, throw-away sandbox.
prefect
Prefect MCP server
mcpx-py
Python client library for https://mcp.run - call portable & secure tools for your AI Agents and Apps
muxi
An extensible AI agents framework