apache-hadoop-hive-mcp-server-by-cdata
This read-only MCP Server allows you to connect to Apache Hadoop Hive data from Claude Desktop through CData JDBC Drivers. For full CRUD support, check out the first managed MCP platform: CData Connect AI (https://www.cdata.com/ai/).
claude mcp add --transport stdio cdatasoftware-apache-hadoop-hive-mcp-server-by-cdata \
  --env JAVA_HOME=/usr/lib/jvm/java-11-openjdk \
  -- java -jar CDataMCP-jar-with-dependencies.jar /PATH/TO/apache-hadoop-hive.prp
Set --env JAVA_HOME to the path of your Java installation; omit the flag if JAVA_HOME is already configured in your environment.
How to use
This MCP server provides a read-only interface to Apache Hadoop Hive by wrapping the CData JDBC Driver for Hive. It exposes a set of tools that let clients query Hive data without connecting to the data source directly. Once the server is running, MCP clients (including Claude Desktop) can discover and invoke the built-in tools to list tables, view columns, and run SELECT queries. The available tools follow a consistent naming scheme based on the server identifier (for example, apache_hadoop_hive_get_tables, apache_hadoop_hive_get_columns, and apache_hadoop_hive_run_query). Use these tools to inspect the data model and retrieve results in a structured format suitable for prompts and analysis by LLMs.
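As a sketch of what a tool invocation looks like on the wire, the following JSON-RPC request calls the query tool using the standard MCP tools/call method. The argument name ("sql") and the table name are illustrative assumptions; check the server's tool schema (returned by tools/list) for the exact parameter names:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "apache_hadoop_hive_run_query",
    "arguments": {
      "sql": "SELECT * FROM sample_table LIMIT 10"
    }
  }
}
```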
To configure client access, create a claude_desktop_config.json (or equivalent) that points to the Java process that hosts the MCP server. The typical setup launches the MCP server with the JAR and a .prp file (the JDBC connection profile you prepared). Once configured, you can issue JSON-RPC requests to the server to call the tools and receive results in CSV or other supported formats. Remember that this is a local, stdio-based MCP server; it runs on the same machine as the client and is designed for read-only access to Hive data via the CData driver.
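A minimal claude_desktop_config.json entry along these lines launches the server over stdio; the server key ("apache-hadoop-hive") is an arbitrary label, and the paths are placeholders to replace with your actual JAR and .prp locations:

```json
{
  "mcpServers": {
    "apache-hadoop-hive": {
      "command": "java",
      "args": [
        "-jar",
        "/PATH/TO/CDataMCP-jar-with-dependencies.jar",
        "/PATH/TO/apache-hadoop-hive.prp"
      ]
    }
  }
}
```

Because the transport is stdio, Claude Desktop starts and manages the Java process itself; no port or URL is configured.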
How to install
Prerequisites:
- Java Development Kit (JDK) 8+ installed and JAVA_HOME configured
- Maven installed for building the MCP server
- Access to the CData JDBC Driver for Apache Hadoop Hive and a valid license
Installation steps:
1. Clone the repository:
   git clone https://github.com/cdatasoftware/apache-hadoop-hive-mcp-server-by-cdata.git
   cd apache-hadoop-hive-mcp-server-by-cdata
2. Build the server:
   mvn clean install
   This produces the JAR file CDataMCP-jar-with-dependencies.jar.
3. Download and install the CData JDBC Driver for Hive from https://www.cdata.com/drivers/hive/download/jdbc
4. License the CData JDBC Driver following the vendor instructions (typically via a jar command) and configure the JDBC connection to your Hive data source. Create a .prp file (e.g., apache-hadoop-hive.prp) with properties such as Prefix, ServerName, ServerVersion, DriverPath, DriverClass, and JdbcUrl as described in the README.
5. Run the MCP server:
   java -jar /path/to/CDataMCP-jar-with-dependencies.jar /path/to/apache-hadoop-hive.prp
6. Configure Claude Desktop or your MCP client to point to the running server using the appropriate mcpServers entry as shown in the usage section.
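For reference, a .prp file using the properties named in step 4 might look like the sketch below. The driver file name, driver class, and JDBC URL format are assumptions based on a typical CData Hive driver installation; verify the exact values against the driver documentation and your install paths:

```properties
Prefix=apache_hadoop_hive
ServerName=CData MCP Server for Apache Hadoop Hive
ServerVersion=1.0
DriverPath=/PATH/TO/cdata.jdbc.apachehive.jar
DriverClass=cdata.jdbc.apachehive.ApacheHiveDriver
JdbcUrl=jdbc:apachehive:Server=127.0.0.1;Port=10000;
```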
Additional notes
Tips and caveats:
- The server is designed for read-only access; write/update/delete operations are not exposed.
- The MCP server uses stdio, so the client and server must run on the same machine.
- Ensure the JDBC Driver is licensed and properly configured in the .prp file (DriverPath and DriverClass must match your installation).
- If you modify the .prp or JDBC setup, restart the MCP server to apply changes.
- When integrating with Claude Desktop, you may need to fully quit and re-open the client for the server to appear.
- If you encounter connection issues, verify the JdbcUrl in your .prp and that the Hive data source is reachable from the host running the MCP server.