gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
claude mcp add --transport stdio portkey-ai-gateway npx -y @portkey-ai/gateway
How to use
The Portkey AI Gateway is an open-source enterprise gateway that routes requests to 1600+ language, vision, audio, and image models through a single, fast API. You can run it locally with Node.js and npm, then send requests to the gateway's API to reach a wide range of models and providers without integrating each provider separately. The gateway handles routing, retries, fallbacks, load balancing, and guardrails to help you manage multi-model deployments reliably.

After starting the gateway, you interact with a unified request surface at the gateway URL (for example, http://localhost:8787/v1) and use its console for logging and observability where available. The repository also notes enterprise-ready features such as auth and observability through the MCP Gateway, making it suitable for private deployments and managed environments.
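As a minimal sketch of what a request to that unified surface might look like: the URL below is the local default from this page, the payload follows the OpenAI-compatible chat-completions shape, and the `x-portkey-provider` header name is an assumption to verify against the gateway docs.

```python
import json

# Minimal sketch of a request to the gateway's unified, OpenAI-compatible
# surface. The URL is the local default; the provider header name
# (x-portkey-provider) is an assumption -- verify it against the docs.
GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "x-portkey-provider": "openai",  # which upstream provider to route to
}

payload = {
    "model": "gpt-4o-mini",  # example model name, for illustration only
    "messages": [{"role": "user", "content": "Hello through the gateway"}],
}

# This is the body as it would go over the wire; send it with any HTTP client.
body = json.dumps(payload)
print(body)
```

Because the gateway normalizes providers behind one API, switching providers is a matter of changing the routing header and model name, not the client code.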
How to install
Prerequisites:
- Node.js and npm installed on your machine
- Sufficient network access to install and run the gateway
Option A: Run via npx (no local install required)
- Open a terminal
- Run:
npx @portkey-ai/gateway
- The gateway will start and listen on http://localhost:8787/v1 (and the console at http://localhost:8787/public/).
Option B: Install locally (if you want to pin versions or customize)
- Ensure Node.js and npm are installed
- Install the package:
npm install -g @portkey-ai/gateway
- Start the gateway:
gateway # or npm run start; the exact command depends on the package's bin entry and scripts -- check its package.json
- Access the API at http://localhost:8787/v1 and the console at http://localhost:8787/public/
Notes:
- The gateway is designed to be drop-in compatible with existing MCP tooling. You can manage MCP servers and observability via the gateway’s interfaces.
- If you use a private deployment or enterprise features, consult the enterprise deployment guides referenced in the docs for authentication and topology considerations.
Additional notes
Tips and common issues:
- Default gateway URL is http://localhost:8787/v1; the console lives at http://localhost:8787/public/
- For production deployments, consider enabling authentication and securing the gateway endpoints as described in the MCP Gateway docs.
- The gateway supports retries, fallbacks, load balancing, and guardrails to protect and stabilize multi-model routing.
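The retry and fallback behavior above is typically driven by a routing config attached to requests. The sketch below assumes the strategy/targets shape Portkey's configs use; every field name here is an assumption to verify against the gateway docs.

```python
import json

# Hypothetical routing config: try OpenAI first, retry a few times, and
# fall back to Anthropic if the primary target keeps failing. Field names
# follow the assumed strategy/targets convention -- check the docs.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "retry": {"attempts": 3},
    "targets": [
        {"provider": "openai"},
        {"provider": "anthropic"},
    ],
}

# Configs like this are usually passed as a JSON-encoded header on each
# request or stored in the gateway console and referenced by id.
config_header = json.dumps(fallback_config)
print(config_header)
```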
- If you run into port conflicts, adjust the gateway’s port in the deployment configuration or via environment variables as supported by your deployment method.
- Check the repository’s Quickstart and deployment sections for Docker or cloud deployment options if you don’t want to run locally.
Related MCP Servers
mcp-agent
Build effective agents using Model Context Protocol and simple workflow patterns
bifrost
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
openinference
OpenTelemetry Instrumentation for AI Observability
mcp-gateway
A plugin-based gateway that orchestrates other MCPs and allows developers to build upon it enterprise-grade agents.
mcp-toolbox-sdk-python
Python SDK for interacting with the MCP Toolbox for Databases.
mcp-langfuse
Model Context Protocol (MCP) Server for Langfuse Prompt Management. This server allows you to access and manage your Langfuse prompts through the Model Context Protocol.