
Context7-ChatGPT-Bridge

A bridge that allows ChatGPT to access up-to-date programming documentation through the Context7 MCP server. Implements ChatGPT's required search and fetch tools while using Context7's documentation database internally.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio salah9003-context7-chatgpt-bridge python context7_bridge.py \
  --env LOG_LEVEL="INFO"

How to use

This MCP server acts as a bridge between ChatGPT and Context7's current documentation database. It exposes search and fetch capabilities so ChatGPT can discover relevant programming libraries, then retrieve up-to-date documentation through Context7. The bridge translates ChatGPT's tool calls into Context7 API calls (resolve-library-id and get-library-docs) and formats the results for ChatGPT to present.
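The translation described above can be sketched roughly as a dispatch from ChatGPT's two tool names onto Context7's tools. This is a minimal illustration, not the bridge's actual implementation; the parameter names `libraryName` and `context7CompatibleLibraryID` are assumed from Context7's published MCP tool schema and may differ:

```python
def translate_tool_call(tool: str, arguments: dict) -> dict:
    """Map a ChatGPT tool call onto the Context7 call the bridge would make.

    Hedged sketch: everything except the Context7 tool names
    (resolve-library-id / get-library-docs) is illustrative.
    """
    if tool == "search":
        # ChatGPT's search -> Context7's resolve-library-id
        return {
            "context7_tool": "resolve-library-id",
            "params": {"libraryName": arguments["query"]},
        }
    if tool == "fetch":
        # ChatGPT's fetch -> Context7's get-library-docs
        return {
            "context7_tool": "get-library-docs",
            "params": {"context7CompatibleLibraryID": arguments["id"]},
        }
    raise ValueError(f"unknown tool: {tool}")
```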

To use it, run the bridge locally or in your environment (it will expose an HTTP/SSE endpoint that ChatGPT can connect to via ngrok). Once running, you can use the two primary tools:

  • search: Look up libraries by name or by direct Context7 ID. For example, searching for "React" or "/reactjs/react.dev" returns library IDs and metadata.
  • fetch: Retrieve comprehensive documentation for a chosen library. You can request basic documentation with just a library_id, or tailor the depth and scope with topic and token parameters, using a combined specifier like library_id|topic:authentication|tokens:12000.
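The combined fetch specifier above can be parsed with a few lines of string splitting. A minimal sketch (the helper name `parse_fetch_spec` is illustrative, not the bridge's actual function):

```python
def parse_fetch_spec(spec: str) -> dict:
    """Split a fetch specifier (library_id|topic:...|tokens:...) into parts."""
    parts = spec.split("|")
    result = {"library_id": parts[0], "topic": None, "tokens": None}
    for part in parts[1:]:
        key, _, value = part.partition(":")
        if key == "topic":
            result["topic"] = value
        elif key == "tokens":
            result["tokens"] = int(value)  # token budget is numeric
    return result
```

A specifier such as `/reactjs/react.dev|topic:authentication|tokens:12000` would yield the library ID plus an authentication topic filter and a 12000-token budget.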

ChatGPT communicates with the bridge, which then calls Context7’s documentation database and returns formatted results suitable for follow-up questions, code examples, or API references. The system also handles common workflow requirements, such as performing a search before fetch to ensure valid IDs are used.

How to install

Prerequisites:

  • Python 3.8+ and pip (the bridge is Python-based; Node.js is not required).
  • Optional: ngrok if you want a public URL for ChatGPT access (the bridge can run with or without ngrok).

Installation steps:

  1. Clone the repository or download the README/example project files to your environment.
  2. Create and activate a Python virtual environment (recommended):
    • python3 -m venv venv
    • source venv/bin/activate # macOS/Linux
    • venv\Scripts\activate # Windows
  3. Install dependencies:
    • pip install -r requirements.txt
  4. Run the bridge:
    • python context7_bridge.py
    The bridge will start on the default port (8000) and, if ngrok is available, will automatically expose a public URL for ChatGPT integration.
  5. Optional: If you prefer manual ngrok management:
    • Run the bridge with the --no-ngrok flag: python context7_bridge.py --no-ngrok
    • In another terminal: ngrok http 8000
  6. Add the ngrok URL to ChatGPT as an MCP connector (for example: https://abc123.ngrok-free.app/sse).
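Once the connector is added, ChatGPT talks to the bridge with MCP-style JSON-RPC messages. A hedged sketch of the general shape of such a request (the `tools/call` method is standard MCP; the argument values here are illustrative):

```python
import json

# Shape of an MCP-style JSON-RPC tool call, as a connector would send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "React"},
    },
}

payload = json.dumps(request)  # serialized message sent over the wire
```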

Additional notes

Tips:

  • Always perform a search first to obtain valid library IDs before using fetch. If you encounter "Unknown document ID", retry with a search or use a direct Context7 library ID starting with '/'.
  • You can customize the depth of fetched documentation using the tokens parameter and topic filters (e.g., topic: hooks, authentication, installation, api, examples).
  • The bridge expects JSON-RPC style requests from ChatGPT via the SSE endpoint; ensure your ChatGPT connector is configured to use the /sse route.
  • If you run into connectivity issues, verify that Python, pip, and network access to Context7 are available. You can enable debug logging by starting the bridge with: LOG_LEVEL=DEBUG python context7_bridge.py
  • If ngrok is not desired, you can expose a local 8000 port behind your own reverse proxy or VPN; the bridge will operate normally without ngrok.
