n8n-RAG
This tool generates importable n8n workflows from plain English descriptions
claude mcp add --transport stdio christinec-dev-n8n-mcp-rag \
  --env REDIS_URL=<Redis connection URL for distributed caching; optional, e.g. redis://localhost:6379/0> \
  --env LANGWATCH_API_KEY=<your LangWatch API key for RAG monitoring; optional> \
  --env RUN_INDEX_ON_START=<true to index on start, false to skip; optional> \
  -- docker compose up -d --build
How to use
This MCP server wraps an n8n-driven workflow generator that uses Retrieval-Augmented Generation (RAG) to produce importable n8n workflows from plain-English or structured JSON prompts. It supports multiple providers/endpoints for model inference and integrates LangWatch for observability of RAG requests and responses.
Use the UI to describe the workflow you want in natural language, or provide a JSON prompt with fields such as goal, triggers, and integrations. The backend retrieves relevant context chunks, builds a tailored prompt, calls your chosen provider, and returns a ready-to-import n8n workflow JSON.
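For the structured form, a JSON prompt built around the fields named above might look like the following. Only the field names goal, triggers, and integrations come from the docs; the exact schema accepted by the server, and the example values, are assumptions:

```json
{
  "goal": "When a new GitHub issue is opened, post a summary to Slack",
  "triggers": ["github_issue_created"],
  "integrations": ["github", "slack"]
}
```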
How to install
Prerequisites: a machine with Docker and Docker Compose installed, and access to a provider API key (OpenAI, Anthropic, Gemini, etc.). Optional: a LangWatch API key for monitoring and a Redis instance for caching.
- Clone the repository and navigate to the project directory.
- Copy the example environment file and populate it with your settings:
- LANGWATCH_API_KEY (optional)
- REDIS_URL (optional; e.g., redis://localhost:6379/0)
- RUN_INDEX_ON_START (optional; set to true to index on startup, false to skip; the default depends on your setup)
- Build and start the containers: docker compose up -d --build
- Check logs to ensure the app started correctly: docker compose logs -f app
- Open the UI at http://localhost:8000/ui and start generating workflows.
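Assuming the example environment file uses the usual KEY=value dotenv format, a populated .env might look like this (all values are placeholders; add your provider API key under whatever variable name the example file uses):

```
LANGWATCH_API_KEY=lw-xxxxxxxxxxxx      # optional; enables RAG monitoring
REDIS_URL=redis://localhost:6379/0     # optional; distributed caching
RUN_INDEX_ON_START=true                # optional; index on startup
```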
Indexing data (optional but recommended):
- One-time local indexing, run in order:
  - docker compose run --rm app python chunk_all.py
  - docker compose run --rm app python build_chroma.py
Reindex on demand:
- curl -X POST "http://localhost:8000/reindex" -H "X-Admin-Token: your-admin-token"
Additional notes
Tips:
- Keep API keys and tokens out of code; store them in the .env file or your environment.
- If you run into indexing issues, verify that the Redis and Chroma dependencies are available and that REDIS_URL, the Chroma settings, and your provider keys are correctly configured.
- For LangWatch monitoring, provide LANGWATCH_API_KEY to enable automatic logging of RAG requests and responses.
- If you encounter Docker-related problems, confirm your Docker and Docker Compose versions and network access.
- The server exposes endpoints like GET /health, POST /generate, and POST /refine_prompt; use /ui for a friendlier interface.
- If you plan to deploy publicly, consider securing endpoints behind a reverse proxy and enabling authentication tokens.
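Calling the HTTP API directly can be sketched as below. This is a minimal illustration using only the Python standard library; the payload field names (goal, triggers, integrations) follow the fields mentioned above, but the exact request and response schemas of POST /generate are assumptions, not confirmed by the docs:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default address from the install steps


def build_generate_payload(goal, triggers=None, integrations=None):
    """Build a structured prompt body. Field names follow the docs
    (goal, triggers, integrations); the exact schema is an assumption."""
    return {
        "goal": goal,
        "triggers": triggers or [],
        "integrations": integrations or [],
    }


def post_generate(payload, base_url=BASE_URL):
    """POST the prompt to /generate and return the parsed response JSON."""
    req = urllib.request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


payload = build_generate_payload(
    "Post new GitHub issues to a Slack channel",
    triggers=["github_issue_created"],
    integrations=["github", "slack"],
)
print(json.dumps(payload, indent=2))
# workflow = post_generate(payload)  # run only with the server up
```

The returned workflow JSON can then be imported into n8n through its import dialog.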
Related MCP Servers
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
mcp-yfinance
Real-time stock API with Python, MCP server example, yfinance stock analysis dashboard
pfsense
pfSense MCP Server enables security administrators to manage their pfSense firewalls using natural language through AI assistants like Claude Desktop. Simply ask "Show me blocked IPs" or "Run a PCI compliance check" instead of navigating complex interfaces. Supports REST/XML-RPC/SSH connections and includes built-in compliance checks.
cloudwatch-logs
MCP server from serkanh/cloudwatch-logs-mcp
servicenow-api
ServiceNow MCP Server and API Wrapper
the-company
TheMCPCompany: Creating General-purpose Agents with Task-specific Tools