alibabacloud-lindorm
MCP server from aliyun/alibabacloud-lindorm-mcp-server
claude mcp add --transport stdio aliyun-alibabacloud-lindorm-mcp-server uv run python -m src.lindorm_mcp_server.server \
  --env PASSWORD="Your Lindorm account password" \
  --env USERNAME="Your Lindorm account username" \
  --env TABLE_DATABASE="Database name for SQL operations" \
  --env USING_VPC_NETWORK="true or false depending on your setup" \
  --env LINDORM_INSTANCE_ID="Your Lindorm instance ID" \
  --env TEXT_EMBEDDING_MODEL="Name of your deployed text-embedding model"
How to use
This MCP server provides a Lindorm integration, exposing tools for full-text and vector search against Lindorm indexes (knowledge bases) and for SQL operations on Lindorm databases. The server exposes the following tools:
- lindorm_retrieve_from_index: query a knowledge base, aggregating results from its text and vector indices
- lindorm_get_index_fields: inspect index metadata, notably the content and vector fields
- lindorm_list_all_index: enumerate all knowledge bases
- lindorm_execute_sql: run SQL queries against Lindorm databases
- lindorm_show_tables: list tables
- lindorm_describe_table: retrieve a table's schema
Once the server is started with uv, using your Lindorm credentials and the configured embedding model, you can invoke these tools through the MCP runtime. This lets you build retrieval-augmented workflows in which natural-language queries are translated into SQL or vector/full-text search operations against your Lindorm data.
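To make the tool interface concrete, the sketch below builds the JSON-RPC message an MCP client would send over the stdio transport to invoke lindorm_retrieve_from_index. The argument names ("index_name", "query", "size") are illustrative assumptions, not the server's documented schema; check the tool's declared input schema (via tools/list) for the real parameter names.

```python
import json

# Hypothetical tools/call request for lindorm_retrieve_from_index.
# The "arguments" keys below are assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lindorm_retrieve_from_index",
        "arguments": {
            "index_name": "my_knowledge_base",  # hypothetical index name
            "query": "How do I configure VPC access?",
            "size": 5,
        },
    },
}

# The MCP stdio transport frames each message as one JSON object per line.
line = json.dumps(request)
print(line)
```

In practice your MCP client (not hand-written code) constructs and sends this message; the sketch only shows what crosses the wire when a tool is called.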
How to install
Prerequisites:
- Python 3.8+ installed on your system
- uv (the Python package and project manager) installed, e.g. via pip
- Access to a Lindorm instance with the necessary engines (wide-table, search, vector, and AI engines) as described in the Lindorm setup.

Installation steps:
1) Clone the repository: git clone https://github.com/aliyun/alibabacloud-lindorm-mcp-server.git
2) Navigate to the project directory: cd alibabacloud-lindorm-mcp-server
3) Create and configure environment variables:
   - cp .env.example .env
   - Edit .env to include LINDORM_INSTANCE_ID, USING_VPC_NETWORK, USERNAME, PASSWORD, TEXT_EMBEDDING_MODEL, TABLE_DATABASE.
4) Install the project and its dependencies with uv:
   uv pip install .
5) Run the MCP server:
   uv run python -m src.lindorm_mcp_server.server

Notes:
- Keep your Lindorm credentials secure.
- If you run behind a VPC, set USING_VPC_NETWORK=true and ensure network access to Lindorm.
- The embedding model name must match the model you deployed in Lindorm.
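Before step 5, it can help to confirm that every variable from step 3 is actually set. The following is a minimal preflight sketch (not part of the project) that reports which required variables are absent or empty; the variable names come from the installation steps above.

```python
import os

# Required variables, as listed in step 3 of the installation steps.
REQUIRED = [
    "LINDORM_INSTANCE_ID",
    "USING_VPC_NETWORK",
    "USERNAME",
    "PASSWORD",
    "TEXT_EMBEDDING_MODEL",
    "TABLE_DATABASE",
]

def missing_vars(env=os.environ):
    """Return the required variables that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example with placeholder values standing in for a populated .env:
example = {name: "placeholder" for name in REQUIRED}
print(missing_vars(example))  # [] when every variable is present
```

Run it in the same shell (with the .env values exported) and an empty list means the server has everything it needs to connect.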
Additional notes
Environment variables: LINDORM_INSTANCE_ID, USING_VPC_NETWORK, USERNAME, PASSWORD, TEXT_EMBEDDING_MODEL, TABLE_DATABASE are required for connecting to Lindorm and performing SQL and vector operations. If you encounter connection issues, verify network access to Lindorm, and ensure the embedding model name matches the deployed model. The available tools support both textual and vector search workflows; for example, lindorm_retrieve_from_index combines full-text and vector results, while lindorm_execute_sql runs standard SQL against Lindorm databases. When debugging, check the .env file and confirm that uv is correctly invoking the server module path (src.lindorm_mcp_server.server).
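If your MCP client is configured through a JSON file rather than the claude mcp add command, the entry might look like the sketch below. This assumes the common mcpServers config shape used by clients such as Claude Desktop; the server name "lindorm" and the placeholder values are illustrative.

```json
{
  "mcpServers": {
    "lindorm": {
      "command": "uv",
      "args": ["run", "python", "-m", "src.lindorm_mcp_server.server"],
      "env": {
        "LINDORM_INSTANCE_ID": "your-instance-id",
        "USING_VPC_NETWORK": "true",
        "USERNAME": "your-username",
        "PASSWORD": "your-password",
        "TEXT_EMBEDDING_MODEL": "your-deployed-model",
        "TABLE_DATABASE": "your-database"
      }
    }
  }
}
```

Note that the command and args mirror the uv invocation above, so the module path (src.lindorm_mcp_server.server) must resolve from the directory the client launches the server in.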
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP