
agentic-developer

An MCP server that scales development into controllable, recursive agentic flows and builds features from the bottom up

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio teabranch-agentic-developer-mcp python -m mcp_server

How to use

Agentic Developer MCP wraps OpenAI's Codex CLI behind an MCP server so you can interact with Codex through the TeaBranch open-responses-server middleware. The Python-based MCP server exposes an MCP-compatible API and runs via the Python module interface (python -m mcp_server). The setup lets you issue Codex prompts, clone repositories, and run Codex-based analysis workflows via MCP tooling, and you can start the server over either stdio or SSE transport, enabling integration with different client implementations.

The repository includes tooling such as run_codex and clone_and_write_prompt to streamline common Codex operations, plus an mcps.json configuration to register MCP tools that Codex will load automatically. When running behind the open-responses-server, you gain Responses API compatibility alongside MCP support, allowing broader interoperability with your existing middleware.

To use the server, start the MCP server via the Python module, then send MCP requests to the configured endpoint. The included toolset lets you clone repositories, run Codex prompts, and configure MCP tools for discovery by Codex through the mcps.json file under .agent/.
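Since MCP clients talk to the server with JSON-RPC 2.0 messages, the request flow above can be sketched in Python. This is a minimal illustration, not this project's client code: the tool name run_codex, its prompt argument, and the protocol version string are assumptions based on the tooling named above and on common MCP conventions.

```python
import json


def mcp_request(req_id, method, params=None):
    """Build one JSON-RPC 2.0 message of the kind MCP clients send."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)


# First initialize the session, then invoke a tool. "run_codex" and its
# arguments are assumed from the tooling described above, not confirmed.
init = mcp_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})
call = mcp_request(2, "tools/call", {
    "name": "run_codex",
    "arguments": {"prompt": "Summarize the repository structure"},
})
```

Over stdio transport these messages would be written to the server process's stdin, one per line; over SSE they arrive as the event-stream payloads.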

How to install

Prerequisites:

  • Python 3.8+ installed on your system
  • Node.js 22+ if you plan to use the Codex CLI via Node tooling (Codex CLI is sometimes installed as a global npm package)
  • Git installed for repository cloning
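The prerequisites above can be verified with a short script before installing. The version floors and the codex executable name are taken from this list; adjust them if your deployment differs.

```python
import shutil
import sys


def check_prereqs():
    """Return a list of problems with the prerequisites listed above."""
    problems = []
    if sys.version_info < (3, 8):
        problems.append("Python 3.8+ required")
    if shutil.which("git") is None:
        problems.append("git not found on PATH")
    if shutil.which("codex") is None:
        # Only needed if you plan to use the Codex CLI tooling.
        problems.append("codex CLI not found (optional)")
    return problems
```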

Installation steps:

  1. Clone the repository:
     git clone https://github.com/yourusername/codex-mcp-wrapper.git
     cd codex-mcp-wrapper

  2. Set up a Python environment and install the MCP server package in development mode:
     python -m venv venv
     source venv/bin/activate   # Linux/macOS
     venv\Scripts\activate      # Windows
     pip install -e .

  3. Ensure Codex CLI is available (if you plan to use the Codex CLI approach in tooling): npm install -g @openai/codex

  4. Start the MCP server: python -m mcp_server

Optional (Docker-based workflow):

  • Build and run using Docker as described in your Docker setup, ensuring ports 8080 (Codex MCP wrapper) and 3000 (open-responses-server) are exposed.

Note: The mcps.json under .agent/ registers the available MCP tools so Codex can load them automatically. If you're using a Docker setup, ensure the container has access to your Codex CLI or code execution environment as required by your deployment.
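The exact schema of mcps.json is not shown in this document; the sketch below writes a plausible entry for this server, assuming the common MCP-client layout of a top-level mcpServers map with command and args fields. Every key name here is an assumption to adapt to the real file.

```python
import json
import os
import tempfile

# Hypothetical mcps.json entry; the schema is modeled on common MCP
# client configs and is NOT confirmed by this project's documentation.
config = {
    "mcpServers": {
        "agentic-developer": {
            "command": "python",
            "args": ["-m", "mcp_server"],
        }
    }
}

# Written to a temp dir here for illustration; in practice this file
# lives at .agent/mcps.json in your repository.
agent_dir = os.path.join(tempfile.mkdtemp(), ".agent")
os.makedirs(agent_dir, exist_ok=True)
path = os.path.join(agent_dir, "mcps.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)
```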

Additional notes

Tips and common issues:

  • If you encounter Codex CLI authentication errors, ensure your OpenAI API credentials are configured and that the Codex CLI has network access.
  • The mcps.json configuration under .agent/ should be kept up to date with the MCP tools you want Codex to expose; after editing, reload the MCP server or restart it to pick up changes.
  • For port configuration, the default setup uses 8080 for the Codex MCP wrapper and 3000 for the open-responses-server; adjust docker-compose or server startup accordingly if you’re integrating into a larger deployment.
  • When using SSE transport for the MCP server, you can specify the port to listen on (e.g., --port 8000) to align with your infrastructure.
  • If you’re running locally, consider using a Python virtual environment to isolate dependencies from system Python.
  • The project mentions the engine can be replaced (e.g., with OpenCode or Amazon Strands); adapt the mcp_config as needed if you switch execution backends.
