concierge
🚀 Universal SDK for building next-gen MCP servers
```
claude mcp add --transport stdio concierge-hq-concierge python -m concierge \
  --env PYTHONUNBUFFERED="1"
```
How to use
Concierge is an MCP server framework that provides progressive disclosure of tools for AI agents. It wraps an underlying MCP server and dynamically exposes only the tools relevant to the current workflow stage, helping to reduce complexity and cost while improving reliability. With Concierge, you can group tools into stages (e.g., browse, cart, checkout) and define allowed transitions between those stages to enforce your business process. Tools remain regular MCP endpoints—decorated with @app.tool() in your existing MCP server—so your prompts and tool logic stay unchanged. Concierge augments this with additional capabilities like state sharing between steps and optional semantic search to collapse large tool collections behind two meta-tools (search_tools and call_tool), making it scalable for APIs with dozens or hundreds of tools.
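The stage-and-transition model described above can be sketched in plain Python. This is a conceptual illustration only, not the concierge-sdk API: the `StagedToolbox` class, its method names, and the example stage names are all hypothetical, and exist just to show how grouping tools by stage and whitelisting transitions limits what the agent can see at any point.

```python
# Conceptual sketch of progressive disclosure (NOT the concierge-sdk API):
# tools are grouped into stages, transitions are whitelisted, and only the
# current stage's tools are visible to the agent.

class StagedToolbox:
    def __init__(self, stages, transitions, start):
        self.stages = stages            # stage name -> list of tool names
        self.transitions = transitions  # stage name -> allowed next stages
        self.current = start

    def visible_tools(self):
        # The agent only ever sees tools for the current workflow stage.
        return list(self.stages[self.current])

    def advance(self, next_stage):
        # Enforce the business process: reject transitions that were not declared.
        if next_stage not in self.transitions.get(self.current, []):
            raise ValueError(f"transition {self.current} -> {next_stage} not allowed")
        self.current = next_stage

shop = StagedToolbox(
    stages={
        "browse": ["search_products", "get_product"],
        "cart": ["add_to_cart", "view_cart"],
        "checkout": ["pay", "confirm_order"],
    },
    transitions={"browse": ["cart"], "cart": ["browse", "checkout"], "checkout": []},
    start="browse",
)

print(shop.visible_tools())  # only browse-stage tools are exposed
shop.advance("cart")
print(shop.visible_tools())  # cart-stage tools, after a legal transition
```

In the real SDK your tools stay decorated with `@app.tool()` in the wrapped server; the stage machinery only filters which of them are advertised to the agent at each step.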
To use Concierge, install the concierge-sdk, then either wrap an existing MCP server or scaffold a new project. You can then start the server and expose tools over multiple transports (stdio for CLI clients, streamable HTTP for web deployments, or SSE). The documented example shows how to declare stages and transitions, store and retrieve session state across steps, and run the server. Progressive disclosure ensures the agent never sees tools outside the current workflow context, while still letting you enable advanced features such as semantic search when needed.
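The "session state across steps" idea can be illustrated with a minimal, dict-backed store. This is an assumption-laden sketch, not the SDK's actual state API: the `SessionState` class and its `set`/`get` methods are hypothetical names chosen for illustration.

```python
# Conceptual sketch of per-session state shared between workflow steps
# (illustrative only; the real concierge-sdk state API may differ).

class SessionState:
    """Keyed scratch space that tools can use to pass data between steps."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

state = SessionState()

# A tool in an early stage stores a result...
state.set("cart_id", "cart-123")

# ...and a tool in a later stage retrieves it without re-asking the agent.
print(state.get("cart_id"))
```

The point is that state lives server-side per session, so later tools can pick up earlier results without the agent having to thread them through prompts.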
How to install
Prerequisites:
- Python 3.9+ (recommended)
- pip (comes with Python)

Installation steps:

1) Create and activate a Python virtual environment (recommended):

```
python -m venv venv

# On macOS/Linux:
source venv/bin/activate

# On Windows:
venv\Scripts\activate
```

2) Install the Concierge SDK from PyPI:

```
pip install concierge-sdk
```

3) Scaffold a new project or integrate with your existing MCP server:

```
# In your project directory, initialize if supported by the package tooling.
# (Examples in the README show Concierge scaffolding; adapt to your setup.)
```

4) Run your MCP server (example):

```
python -m concierge
```

Notes:
- If you already have an MCP server, wrap it with Concierge as shown in the README examples to gain progressive tool disclosure without changing your tool implementations.
- Ensure your environment has network access and any dependencies your tools need.
Additional notes
Tips and considerations:
- The environment can be extended with app.stages and app.transitions to enforce workflow logic.
- You can enable semantic search to handle large tool sets by configuring provider_type and max_results in Config.
- When running over HTTP, Concierge supports streamable HTTP for web deployments, alongside the standard stdio transport for CLI-based clients.
- If you encounter issues with tool visibility, verify that your stage names match between app.stages and app.transitions and that tools are registered with the correct decorators.
- Use the wrap-and-go approach to start with a minimal tool set and progressively disclose more tools as needed.
- For debugging, enable verbose logging in your Python environment and review MCP request/response traces to confirm that stage/transition rules are being applied.
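The two meta-tools mentioned earlier (search_tools and call_tool) can be sketched as follows. This is a conceptual illustration under assumptions: the real Concierge may use embedding-based semantic search, whereas this sketch uses plain substring matching, and the registry and tool names here are invented for the example.

```python
# Conceptual sketch of collapsing a large tool catalog behind two meta-tools
# (illustrative only; Concierge's semantic search is more sophisticated).

TOOL_REGISTRY = {
    "create_invoice": lambda amount: f"invoice for {amount}",
    "refund_invoice": lambda invoice_id: f"refunded {invoice_id}",
    "list_customers": lambda: ["alice", "bob"],
}

def search_tools(query, max_results=5):
    """Return names of registered tools matching the query (substring match)."""
    matches = [name for name in TOOL_REGISTRY if query.lower() in name.lower()]
    return matches[:max_results]

def call_tool(name, *args, **kwargs):
    """Dispatch a call to a registered tool by name."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](*args, **kwargs)

print(search_tools("invoice"))          # matching tool names
print(call_tool("create_invoice", 42))  # dispatched result
```

Because the agent only ever sees these two fixed endpoints, the approach scales to APIs with dozens or hundreds of tools without bloating the tool list in every request.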
Related MCP Servers
dify
Production-ready platform for agentic workflow development.
bytebot
Bytebot is a self-hosted AI desktop agent that automates computer tasks through natural language commands, operating within a containerized Linux desktop environment.
lamda
The most powerful Android RPA agent framework, next generation of mobile automation robots.
solace-agent-mesh
An event-driven framework designed to build and orchestrate multi-agent AI systems. It enables seamless integration of AI agents with real-world data sources and systems, facilitating complex, multi-step workflows.
argo
ARGO is an open-source AI Agent platform that brings Local Manus to your desktop. With one-click model downloads, seamless closed-LLM integration, and offline-first RAG knowledge bases, ARGO becomes a DeepResearch powerhouse for autonomous thinking and task planning, while keeping 100% of your data local. Supports Windows, macOS, and Docker.
volcano-agent-sdk
🌋 Build AI agents that seamlessly combine LLM reasoning with real-world actions via MCP tools — in just a few lines of TypeScript.