chatbi
An MCP Server that uses LLM-generated code to analyze Excel files.
claude mcp add --transport stdio steven-luo-chatbi-mcp-server python pandas_mcp_server.py \
  --env ENV_PATH=".env" \
  --env LLM_PROVIDER="optional: name of the LLM provider (e.g., OpenAI, Qwen, etc.)" \
  --env OPENAI_API_KEY="your OpenAI API key (or key for the LLM provider you use)"
How to use
ChatBI MCP Server provides an automation layer for data analysis and visualization tasks powered by large language models. After configuring the server, you can access a set of tools that assist with data exploration, analysis-plan generation, step execution, and dashboard generation.

The server emphasizes automated data analysis workflows: you generate an analysis plan, run it, and obtain a narrative report with corresponding visualizations. When integrated with an MCP client such as Cherry Studio, the server exposes its tools under the Tools tab, letting you verify tool availability and run them directly from the UI.

For visualization, the server can be used alongside mcp-server-chart to render charts and dashboards that reflect the analysis outputs. In practice, you typically prepare your data source, configure environment variables for the desired LLMs, and use the provided commands to generate analysis plans, run analyses, and publish dashboards.
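Since the server's core job is to have an LLM generate pandas code over an Excel file, an individual analysis step might look roughly like the following. This is a hand-written illustration, not code from the project; the file name and workflow are hypothetical.

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Return summary statistics for the numeric columns of a loaded sheet."""
    numeric = df.select_dtypes("number")  # keep numeric columns only
    return numeric.describe()             # count/mean/std/min/quartiles/max

# In the server's workflow the frame would come from an uploaded workbook, e.g.:
# df = pd.read_excel("sales.xlsx")  # "sales.xlsx" is a hypothetical placeholder
# print(summarize(df))
```

The narrative report and charts the server produces would then be built on top of aggregates like these.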
How to install
Prerequisites:
- Python 3.11 or compatible runtime
- Git
- Access to an LLM API (e.g., OpenAI) and a valid API key
Installation steps:
- Clone the repository or download the ZIP containing the ChatBI MCP Server.
- Create and activate a Python virtual environment via uv (as used in the project):
  uv venv .venv --python=3.11
  source .venv/bin/activate
- Install Python dependencies: uv pip install -r requirements.txt
- Prepare configuration:
- Copy example env and edit with your LLM details: cp .env.example .env
- Fill in the required LLM settings (model selection, keys, endpoints).
- Run the server (from the project root; the server script resides in src):
  cd src
  python pandas_mcp_server.py
Notes:
- Ensure the environment variables in .env are aligned with the values you place in the mcp_config env block if you override them there.
- If you encounter timeouts during long-running LLM generation tasks, adjust timeout settings in your client configuration or the server’s tooling as recommended by the project docs.
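If your MCP client is configured through a JSON file rather than the claude mcp add command above, the equivalent entry might look like the sketch below. The server name and env keys mirror the command shown earlier; the exact config schema depends on your client, and paths should point at your checkout.

```json
{
  "mcpServers": {
    "steven-luo-chatbi-mcp-server": {
      "command": "python",
      "args": ["pandas_mcp_server.py"],
      "env": {
        "ENV_PATH": ".env",
        "LLM_PROVIDER": "OpenAI",
        "OPENAI_API_KEY": "your OpenAI API key"
      }
    }
  }
}
```

As noted above, values set in this env block override or must agree with what you place in .env.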
Additional notes
Tips and considerations:
- The ChatBI MCP Server relies on LLMs to generate data analysis plans and to perform code generation tasks. Configure your environment with a stable LLM provider and sufficient tokens/throughput.
- In client configuration (e.g., Cherry Studio), you can customize the config.yaml validation data. The default validation payload shown in the README is a placeholder for testing; replace it with your own secured value if needed.
- For long-running tasks such as code generation or complex analyses, increase the request timeout to prevent premature failures.
- If you use visualization features with mcp-server-chart, ensure the chart server is accessible and properly wired to your MCP server.
- Keep the .env file secure; do not commit API keys to version control.
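Keeping keys in .env works because the server loads them into its environment at startup (projects like this typically use python-dotenv for that). As a stdlib-only sketch of the idea, a minimal .env reader looks like this; the parsing rules shown are an assumption, not the project's actual loader.

```python
def load_env_file(path: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    return values
```

Because the file holds live API keys, add .env to .gitignore and share only a sanitized .env.example.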