
langgraph-ai: LangGraph AI Repository

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio piyushagni5-langgraph-ai \
  --env ANTHROPIC_API_KEY="your-anthropic-api-key" \
  --env LANGCHAIN_API_KEY="optional-langchain-api-key" \
  --env LANGCHAIN_PROJECT="optional-project-name" \
  --env LANGCHAIN_TRACING_V2="optional-enabled-true-or-false" \
  -- python -m langgraph_ai.mcp_server
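The environment variables above configure the server process. As a rough sketch of how they map onto server-side settings (the exact keys the server reads are an assumption inferred from the command, not taken from the repository):

```python
# Hypothetical sketch of how the server's configuration could be assembled
# from the --env variables passed above; the key names are assumed.
def load_config(env: dict) -> dict:
    return {
        "anthropic_api_key": env["ANTHROPIC_API_KEY"],      # required
        "langchain_api_key": env.get("LANGCHAIN_API_KEY"),  # optional (LangSmith)
        "langchain_project": env.get("LANGCHAIN_PROJECT"),  # optional
        # Tracing is on only when the flag is the literal string "true".
        "tracing": env.get("LANGCHAIN_TRACING_V2", "false").lower() == "true",
    }

config = load_config({
    "ANTHROPIC_API_KEY": "your-anthropic-api-key",
    "LANGCHAIN_TRACING_V2": "true",
})
print(config["tracing"])  # → True
```

Only ANTHROPIC_API_KEY is required; the LANGCHAIN_* variables matter only when tracing is enabled.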

How to use

LangGraph AI ships an MCP (Model Context Protocol) server as part of its LangGraph tooling. The server exposes a structured, context-aware interface that coordinates model execution, retrieval-augmented generation (RAG) workflows, and agent orchestration. Running it locally lets you test MCP clients against a machine-hosted endpoint and build multi-component AI pipelines that use LangGraph's RAG strategies, agent routing, and human-in-the-loop features. The repository focuses on agentic RAG patterns, workflow orchestration, and MCP client/server interactions, so you can experiment with adaptive routing, evaluation, and optimization loops across its modules.

To use the server, start the Python-based MCP server entry point defined in the project. Once it is running, MCP clients in your applications can connect to it to make context-aware tool calls, request execution across multiple servers or services, and exchange structured messages carrying metadata, context fragments, and evaluation results. This supports end-to-end scenarios such as multi-hop reasoning, retrieval-augmented prompts, and orchestration patterns that coordinate several AI components in one workflow.
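For a concrete sense of what those structured messages look like: MCP is built on JSON-RPC 2.0, and over the stdio transport client and server exchange newline-delimited JSON objects. A minimal sketch of the `initialize` request a client sends first (the `protocolVersion` value is an assumption; use the one your SDK pins):

```python
import json

# MCP messages are JSON-RPC 2.0 objects; over the stdio transport they are
# written one per line on the server's stdin/stdout. The first message a
# client sends is an "initialize" request like this one.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed revision; check your SDK
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

wire_line = json.dumps(initialize_request) + "\n"  # one message per line
print(wire_line.strip())
```

In practice an MCP client SDK builds and frames these messages for you; the sketch only shows the wire shape.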

How to install

Prerequisites:

  • Python 3.10 or higher
  • Git
  • Optional: UV package manager (recommended) or standard Python tooling

Step 1: Clone the repository

git clone https://github.com/piyushagni5/langgraph-ai.git
cd langgraph-ai

Step 2: Install UV (recommended), or skip ahead to the plain-Python install

macOS/Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

Windows:

powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Step 3: Install Python dependencies (UV-based install shown first)

uv venv --python 3.10
uv pip install -r requirements.txt

If not using UV, use standard Python environments:

python -m venv .venv
source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate     # Windows
pip install -r requirements.txt

Step 4: Run the MCP server

# From the repository root, using the configured entrypoint
uv run python -m langgraph_ai.mcp_server

Or, if not using UV:

python -m langgraph_ai.mcp_server

Step 5: Verify server is running

  • Check console logs for MCP server startup messages
  • Connect an MCP client to the exposed endpoint (default localhost/port as defined by the server).
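If you want to sanity-check the newline-delimited stdio framing before wiring up a real client, you can spawn a stand-in echo process. This is a hypothetical smoke test that only exercises the request/response shape, not langgraph_ai itself:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC line from stdin and echoes a result
# back, mimicking the message shape of an MCP server over stdio.
echo_server = (
    "import sys, json; "
    "req = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': {}}))"
)

proc = subprocess.Popen(
    [sys.executable, "-c", echo_server],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
stdout, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(stdout)
print(response)  # → {'jsonrpc': '2.0', 'id': 1, 'result': {}}
```

To test the real server, replace the stand-in command with `python -m langgraph_ai.mcp_server` and send a proper `initialize` request instead of the placeholder `ping`.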

Additional notes

Tips and common issues:

  • Ensure API keys are provided via environment variables if the server relies on external services (ANTHROPIC_API_KEY, LANGCHAIN_API_KEY).
  • If you enable tracing (LANGCHAIN_TRACING_V2=true), ensure the tracing backend is reachable and that LANGCHAIN_API_KEY and LANGCHAIN_PROJECT are set.
  • The MCP server may be modular with multiple submodules (e.g., server-client, SSE-based client). Refer to the specific project README under mcp/ for module-specific notes.
  • When using UV, you can manage dependencies per project to avoid cross-project conflicts; consider creating per-project venvs.
  • If you run into port conflicts, check your environment for running services and adjust the server configuration accordingly (port binding, host).
  • For testing MCP interactions, you can use the provided tests (uv run pytest ...) to validate client-server contract behavior.
