langchain-mcp-client
This Streamlit application provides a user interface for connecting to MCP (Model Context Protocol) servers and interacting with them using different LLM providers (OpenAI, Anthropic, Google, Ollama).
claude mcp add --transport stdio guinacio-langchain-mcp-client python -m streamlit run app.py \
  --env MCP_LOG_LEVEL="info" \
  --env STREAMLIT_PORT="8501" \
  --env STREAMLIT_ADDRESS="localhost"
All three environment variables are optional: set MCP_LOG_LEVEL to "debug" for verbose logs, override STREAMLIT_PORT if 8501 is taken, and set STREAMLIT_ADDRESS to "0.0.0.0" to listen on all interfaces.
How to use
This is a LangChain-based MCP client application designed to connect to various MCP servers and interact with multiple large language model providers (OpenAI, Anthropic, Google, and Ollama). It supports streaming responses, tool testing, file attachments, memory management, multi-server connections, and advanced model configuration. Use it to connect to MCP tools from different providers, run tool calls in real time, and manage conversations across sessions. The interface exposes provider selection, tool access, memory options, and configuration settings, letting you test prompts, validate parameters, and observe streaming output as models generate text token by token.
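The multi-server connections mentioned above are typically declared in a configuration file. The exact schema depends on the app's docs; the server names, paths, and field layout below are illustrative assumptions, not the app's actual config. A hypothetical sketch connecting one local stdio server and one SSE server:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"],
      "transport": "stdio"
    },
    "search": {
      "url": "http://localhost:8000/sse",
      "transport": "sse"
    }
  }
}
```

With a config like this, the UI can list both servers' tools side by side and route each tool call to the server that provides it.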
How to install
Prerequisites:
- Python 3.8+ (recommended 3.10+)
- Git
- Optional: Python virtual environment tooling (venv, pyenv)
1. Clone the repository:
   git clone https://github.com/guinacio/langchain-mcp-client.git
   cd langchain-mcp-client
2. Create and activate a virtual environment (optional but recommended):
   python -m venv venv
   source venv/bin/activate    # on Unix/macOS
   .\venv\Scripts\activate     # on Windows
3. Install dependencies:
   pip install -r requirements.txt
4. Configure MCP connection details (see the mcp_config section in the docs) and ensure any API keys or environment variables required by your providers are set in your environment.
5. Run the Streamlit app (as defined in mcp_config):
   streamlit run app.py
6. Open the provided URL (default http://localhost:8501) in your browser to access the LangChain MCP Client.
Note: If the repository uses a different entry point for Streamlit (e.g., main.py or a specific script), adjust the command accordingly. Ensure that any provider API keys (OpenAI, Anthropic, Google, etc.) are configured in your environment per provider requirements.
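Before launching, it can help to verify that the keys for the providers you plan to use are actually set. The environment variable names below are the providers' conventional ones (Ollama runs locally and needs no key); the helper function itself is an illustrative sketch, not part of the app:

```python
import os

# Conventional environment variable names for each hosted provider's API key.
# Ollama is omitted: it runs locally and requires no key.
REQUIRED_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Google": "GOOGLE_API_KEY",
}

def missing_provider_keys(env=os.environ):
    """Return the providers whose API key is not set in the given environment."""
    return [name for name, var in REQUIRED_KEYS.items() if not env.get(var)]

if __name__ == "__main__":
    missing = missing_provider_keys()
    if missing:
        print("No API key found for:", ", ".join(missing))
    else:
        print("All provider API keys are set.")
```

Run this once in the same shell (and virtual environment) you will launch Streamlit from, so the check reflects the environment the app will actually see.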
Additional notes
Tips and common issues:
- Ensure Python dependencies are installed in the active environment where you run the app.
- If you encounter port conflicts, override the Streamlit port via environment variable STREAMLIT_PORT or by editing the run command.
- For streaming to work reliably, verify network access to provider APIs and check that the chosen models support streaming in your account.
- If using multiple MCP servers, you can connect to them concurrently and switch contexts within the UI.
- Attachments (images, PDFs, text/markdown) rely on client-side processing and model capabilities; ensure the selected provider/model supports vision or multimodal inputs.
- Review configuration changes with the app’s configuration management features (export/import, apply changes, reset).
- For debugging, enable verbose logs by setting MCP_LOG_LEVEL to debug temporarily.
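For the port-conflict tip above, a small stdlib sketch can check whether Streamlit's default port is free and pick a nearby open one before launching (the function names here are illustrative):

```python
import socket

def port_is_free(port, host="localhost"):
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 only if something accepted the connection.
        return s.connect_ex((host, port)) != 0

def pick_streamlit_port(preferred=8501, attempts=10):
    """Return the preferred port if free, otherwise the next free port after it."""
    for port in range(preferred, preferred + attempts):
        if port_is_free(port):
            return port
    raise RuntimeError(f"no free port found in {preferred}..{preferred + attempts - 1}")
```

Pass the chosen port to Streamlit with `streamlit run app.py --server.port <port>`, or via the STREAMLIT_PORT variable mentioned above.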
Related MCP Servers
FireRed-OpenStoryline
FireRed-OpenStoryline is an AI video editing agent that transforms manual editing into intention-driven directing through natural language interaction, LLM-powered planning, and precise tool orchestration. It facilitates transparent, human-in-the-loop creation with reusable Style Skills for consistent, professional storytelling.
mcp-toolbox-sdk-python
Python SDK for interacting with the MCP Toolbox for Databases.
lc2mcp
Convert LangChain tools to FastMCP tools
Archive-Agent
Find your files with natural language and ask questions.
LLaMa-Streamlit
AI assistant built with Streamlit, NVIDIA NIM (LLaMa 3.3:70B) / Ollama, and Model Control Protocol (MCP).
mcp-playground
A Streamlit-based chat app for LLMs with plug-and-play tool support via Model Context Protocol (MCP), powered by LangChain, LangGraph, and Docker.