
langgraph-fastapi-demo

A sample project that turns a shopping-list FastAPI app into an MCP server and connects a LangGraph-based chatbot to it, so the bot can manage the user's shopping list via chat.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio davidkiss-langgraph-fastapi-mcp-server-demo \
  --env OPENAI_API_KEY="your-openai-api-key-here" \
  -- uv run uvicorn server.main:app --host 0.0.0.0 --port 8000 --reload

How to use

This MCP server turns the LangGraph-powered shopping assistant into a reusable Model Context Protocol (MCP) service backed by a FastAPI app. It exposes endpoints via FastAPI-MCP that allow a chatbot to query and manipulate a shopping list, enabling actions like adding items, listing current items, marking items as purchased, and removing items. The integration with LangGraph and Gradio provides a chat-based interface to manage the list through natural language queries, while LangChain/OpenAI components handle the model interactions and tool definitions. You can run the server and then connect your LangGraph-based chatbot to the MCP tools so the bot can read and update the shopping list in real time.
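Conceptually, the shopping-list tools reduce to four operations over an item store. A plain-Python sketch of that logic (an in-memory stand-in for illustration; the function and field names here are hypothetical, not the project's actual endpoints):

```python
# In-memory sketch of the four shopping-list actions the text describes:
# add, list, mark purchased, remove. Names are illustrative only.

shopping_list = {}   # item_id -> {"name": ..., "purchased": ...}
_next_id = 0

def add_item(name):
    """Add an item to the list and return its new id."""
    global _next_id
    _next_id += 1
    shopping_list[_next_id] = {"name": name, "purchased": False}
    return _next_id

def list_items():
    """Return all items, each tagged with its id."""
    return [{"id": i, **item} for i, item in shopping_list.items()]

def mark_purchased(item_id):
    """Flag an existing item as purchased."""
    shopping_list[item_id]["purchased"] = True

def remove_item(item_id):
    """Delete an item from the list."""
    del shopping_list[item_id]
```

In the real project each of these would be a FastAPI route that FastAPI-MCP exposes as an MCP tool for the chatbot to call.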

To use the tools, ensure the OpenAI API key is available in the environment (OPENAI_API_KEY) and the MCP server is running. The MCP endpoints are served by the FastAPI app under the configured host and port, allowing your agent to call the list, add, update, and remove actions as tools within its toolset.
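In broad strokes, the fastapi-mcp package can expose an existing FastAPI app's routes as MCP tools. A minimal wiring sketch, assuming the fastapi-mcp package is installed; the route and operation_id names are illustrative, not necessarily this project's layout:

```python
# Sketch: exposing FastAPI routes as MCP tools via fastapi-mcp.
# Assumes `pip install fastapi fastapi-mcp`; names are illustrative.
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI()

@app.get("/items", operation_id="list_items")
def list_items():
    """Routes become MCP tools, named by their operation_id."""
    return []

mcp = FastApiMCP(app)
mcp.mount()  # serves the MCP endpoint alongside the regular REST API
```

With this wiring, the same uvicorn process serves both the REST docs and the MCP endpoint that the LangGraph agent connects to.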

How to install

Prerequisites:

  • Python 3.8+ installed
  • uv (Python package manager) installed
  • OpenAI API key (for the language model)

Step-by-step:

  1. Install uv if you haven’t already. Follow the instructions at the uv documentation to install the tool.

  2. Install dependencies for the FastAPI app (in the project root):

    uv sync

  3. Get an OpenAI API key and save it to a .env file at the project root:

    .env

    OPENAI_API_KEY=your-openai-api-key-here

  4. Run the FastAPI app (this starts the MCP-enabled API):

    uv run uvicorn server.main:app --host 0.0.0.0 --port 8000 --reload

  5. Run the chatbot interface (if applicable in your setup):

    uv run chatbot.py

  6. Open your browser to http://localhost:8000/docs to interact with the MCP endpoints and test tool calls.

Additional notes

Environment variables and configuration:

  • OpenAI API key must be present (OPENAI_API_KEY).
  • The MCP server runs on port 8000 by default; adjust as needed in your deployment configuration.
  • If using a container or cloud environment, ensure port mappings and network access are configured to allow API access.
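For containerized setups, the port mapping and key injection mentioned above might look like this compose fragment (illustrative only; the project may not ship a Dockerfile, and the service name is hypothetical):

```yaml
# Illustrative docker-compose service; names are hypothetical.
services:
  shopping-mcp:
    build: .
    ports:
      - "8000:8000"        # expose the MCP/FastAPI server
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    command: uv run uvicorn server.main:app --host 0.0.0.0 --port 8000
```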

Common issues:

  • Missing or invalid OPENAI_API_KEY will cause the LLM tools to fail; verify the key has proper permissions.
  • If the MCP endpoints do not appear in the API docs, check that the server has started without errors and that uvicorn is pointed at the correct module (server.main:app).
  • When testing in a notebook or restricted environment, ensure outbound network access is allowed for OpenAI API calls.
