litemcp
A minimal, lightweight client designed to simplify SDK adoption into MCP
claude mcp add --transport stdio yanmxa-litemcp python -m litemcp
How to use
litemcp is a lightweight MCP client designed to help you rapidly integrate AI SDKs (such as LangChain, the OpenAI Agents SDK, and direct OpenAI API usage) into your MCP projects. It exposes tool-access APIs that let you bind available tools to your LLM runtime, run agent workflows, and execute tool calls through a simple, minimal interface. You can use MCPServerManager to start a server locally, then obtain specialized tool collections (e.g., Agents SDK tools, LangChain-compatible tools) and feed them into your agents or chat wrappers. This lets you integrate external SDKs and custom tool implementations without heavy boilerplate, while keeping dependencies small and your integration logic straightforward. The library emphasizes safe tool usage, including optional validators that guard against undesired tool calls and human-in-the-loop approvals when needed.
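The validator idea described above can be sketched in plain Python. Note that `ToolCall`, `validate`, and the allowlist below are illustrative names, not litemcp's actual API; a real validator would be registered with the library's own hook.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """Minimal stand-in for a tool-call payload (hypothetical shape)."""
    name: str
    arguments: dict = field(default_factory=dict)

# Hypothetical allowlist: only these tools may run without human review.
ALLOWED_TOOLS = {"search_docs", "read_file"}

def validate(call: ToolCall) -> bool:
    """Approve a tool call only if its name is on the allowlist."""
    return call.name in ALLOWED_TOOLS

# Filter a batch of pending calls before execution.
calls = [ToolCall("search_docs", {"query": "mcp"}), ToolCall("delete_repo")]
approved = [c for c in calls if validate(c)]
```

Calls that fail validation can be dropped, logged, or escalated to a human for approval instead of being executed.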
How to install
Prerequisites:
- Python 3.8+ (recommended)
- pip (comes with Python)
- Access to install Python packages from PyPI
Install the MCP server package:
pip install litemcp
Verify installation (example):
python -m litemcp --help
Run the MCP server (assuming the package exposes a python -m entry point):
python -m litemcp
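When run this way, an MCP server speaks the stdio transport: newline-delimited JSON-RPC 2.0 messages over stdin/stdout. As a sketch of what a client sends first, here is a minimal initialize request built by hand; the protocolVersion value is taken from one MCP spec revision and may differ from what a given server expects.

```python
import json

# Minimal JSON-RPC 2.0 initialize request for the MCP stdio transport.
# clientInfo values are placeholders for this example.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # spec revision; may vary
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# One message per line on the server's stdin.
line = json.dumps(initialize)
```

A client like litemcp or Claude Code performs this handshake for you; the sketch is only to show what travels over the pipe.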
To install and run in an isolated environment, use a virtual environment:
python -m venv venv
source venv/bin/activate # on Windows use: venv\Scripts\activate
pip install litemcp
python -m litemcp
Additional notes
- The MCP configuration supports enabling/disabling servers and excluding tools; adjust mcpServers accordingly in your config file.
- If you integrate with tools like LangChain, you can bind the MCP-provided tools to your chain object and execute tool calls as part of the chain flow.
- For safety, consider enabling a validator function to approve or block specific tool calls before they execute.
- Ensure your Python environment has network access to fetch any required SDKs or APIs when binding to external tools.
- Check for compatibility between your chosen AI runtime (e.g., OpenAI SDK, LangChain) and the MCP tools you expose; keep tool interfaces consistent to avoid runtime errors.
- If you encounter issues with tool invocation, inspect tool_call payloads and the corresponding tool implementations to verify name resolution and argument handling.
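The mcpServers configuration mentioned in the notes above is typically a JSON block in your MCP client's config file. A sketch under common client conventions follows; the exact field names for disabling servers or excluding tools vary by client, so treat this shape as an assumption to check against your client's documentation.

```json
{
  "mcpServers": {
    "litemcp": {
      "command": "python",
      "args": ["-m", "litemcp"],
      "disabled": false
    }
  }
}
```

Setting "disabled" to true (where supported) keeps the entry in place while preventing the client from starting the server.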