wren-engine
🤖 The Semantic Engine for Model Context Protocol (MCP) Clients and AI Agents 🔥
```shell
claude mcp add --transport stdio canner-wren-engine python -m mcp_server \
  --env MCP_HOST="localhost" \
  --env MCP_PORT="5000" \
  --env MCP_API_KEY="<your-api-key>" \
  --env WREN_ENGINE_CONFIG="<path-to-config-file-if-applicable>"
```
How to use
Wren Engine is a semantic engine that integrates with MCP clients and AI agents to provide context-aware data access across a variety of data sources. It exposes an MCP server that interprets natural-language requests, maps them to the appropriate data sources, and applies governance and semantic reasoning to return precise results. You can use it to connect to databases, file storage, and data warehouses through the MCP protocol, enabling AI agents to query data with business terms such as "active customers," "churn rate," or "net revenue" in a semantically consistent manner. The included wren-core underpins the semantic reasoning layer and ensures interoperability with standard data stacks.
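As a rough illustration of what such a request looks like at the wire level, the sketch below builds a JSON-RPC 2.0 `tools/call` message of the kind an MCP client sends over the transport. Note that the tool name `query` and its `question` argument are hypothetical placeholders, not Wren Engine's actual tool schema — consult your deployment for the real tool names.

```python
import json

def build_tool_call(question: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as defined by the MCP protocol.

    The tool name "query" and the "question" argument are illustrative
    placeholders; check your Wren Engine deployment for its real tool schema.
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "query",
            "arguments": {"question": question},
        },
    }
    return json.dumps(request)

# An agent phrases its request in business terms, and the semantic layer
# resolves those terms against the governed data sources:
payload = build_tool_call("What is the churn rate for active customers this quarter?")
print(payload)
```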
How to install
Prerequisites:
- Python 3.9+ with pip
- Git
- Optional: virtual environment tools (venv, virtualenv)
Install steps:
- Clone the repository:

  ```shell
  git clone https://github.com/Canner/wren-engine.git
  cd wren-engine
  ```
- (Optional) Create and activate a virtual environment:

  ```shell
  python -m venv venv
  # On macOS/Linux
  source venv/bin/activate
  # On Windows
  venv\Scripts\activate.bat
  ```
- Install dependencies for the MCP server (adjust if a requirements file exists):

  ```shell
  pip install -r mcp-server/requirements.txt
  # Or install the package in editable mode if provided
  pip install -e ./mcp-server
  ```
- Run the MCP server (adjust the command if your setup uses a different module or package name):

  ```shell
  python -m mcp_server
  ```
- Verify the server is listening on the configured host/port and is reachable by MCP clients.
- If you have a deployment workflow, package the server as appropriate for your environment (see mcp-config for runtime details).
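The verification step can be scripted. Below is a minimal sketch that checks TCP reachability, assuming the server is exposed on the `MCP_HOST`/`MCP_PORT` environment variables with the defaults used in the `claude mcp add` example (`localhost:5000`):

```python
import os
import socket

def check_mcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Fall back to the defaults from the registration example above.
    host = os.environ.get("MCP_HOST", "localhost")
    port = int(os.environ.get("MCP_PORT", "5000"))
    status = "reachable" if check_mcp_reachable(host, port) else "NOT reachable"
    print(f"MCP server at {host}:{port} is {status}")
```

This only confirms the port accepts connections; a full check would also exercise an MCP handshake through your client.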
Additional notes
- Ensure the host and port set via MCP_HOST and MCP_PORT are reachable by your MCP clients, and that any reverse proxies or firewalls allow traffic on that port.
- If you have multiple data sources, configure their connections in the Wren Engine config or via environment variables as supported by the server. Common variables may include database connection strings, credentials, and file storage endpoints.
- The Wren Engine focuses on semantic understanding and governance; define roles and access controls in your environment to leverage its governance-ready capabilities.
- For debugging, check the MCP server logs for requests, responses, and any data source connection errors. Validate connectivity to your supported data sources (e.g., PostgreSQL, MySQL, BigQuery) independently to isolate issues.
- If you switch from development to production, consider containerization (Docker) or orchestration (Kubernetes) and ensure secret management for credentials.
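One way to wire up the multi-source configuration mentioned above is to collect connection settings from environment variables at startup. In this sketch every variable name (`WREN_POSTGRES_DSN`, `WREN_MYSQL_DSN`, and so on) is a hypothetical example, not a documented Wren Engine configuration key:

```python
import os
from typing import Optional

def load_data_source_config(env: Optional[dict] = None) -> dict:
    """Gather data-source settings from environment variables.

    All variable names below are illustrative placeholders; check the
    Wren Engine configuration reference for the names it actually supports.
    """
    env = os.environ if env is None else env
    return {
        "postgres_dsn": env.get("WREN_POSTGRES_DSN", ""),
        "mysql_dsn": env.get("WREN_MYSQL_DSN", ""),
        "bigquery_project": env.get("WREN_BIGQUERY_PROJECT", ""),
        "storage_endpoint": env.get("WREN_STORAGE_ENDPOINT", ""),
    }

config = load_data_source_config()
missing = [name for name, value in config.items() if not value]
if missing:
    print(f"Unconfigured data sources: {', '.join(missing)}")
```

Keeping credentials in environment variables (rather than the config file) also pairs naturally with the secret-management advice above for production deployments.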
Related MCP Servers
mindsdb
Query Engine for AI Analytics: Build self-reasoning agents across all your live data
bytebot
Bytebot is a self-hosted AI desktop agent that automates computer tasks through natural language commands, operating within a containerized Linux desktop environment.
cursor-talk-to-figma
TalkToFigma: MCP integration between AI Agent (Cursor, Claude Code) and Figma, allowing Agentic AI to communicate with Figma for reading designs and modifying them programmatically.
gaianet-node
Install, run and deploy your own decentralized AI agent service
pgmcp
An MCP server to query any Postgres database in natural language.
argo
ARGO is an open-source AI Agent platform that brings Local Manus to your desktop. With one-click model downloads, seamless integration of closed-source LLMs, and offline-first RAG knowledge bases, ARGO becomes a DeepResearch powerhouse for autonomous thinking and task planning, while 100% of your data stays local. Supports Windows, macOS, and Docker.