wavefront
🔥🔥🔥 Enterprise AI middleware, alternative to unifyapps, n8n, lyzr
```shell
claude mcp add rootflo-wavefront --transport stdio \
  --env WAVEFRONT_ENV=production \
  --env WAVEFRONT_CONFIG=path/to/config.json \
  -- node wavefront-server/index.js
```
How to use
Wavefront is an open-source enterprise middleware for building AI-powered workflows and agents. It acts as a unified API layer and orchestration platform that connects various data sources, knowledge bases, and AI models into cohesive pipelines. You can configure multiple agents and workflows to run across enterprise data sources (databases, OLAP/OLTP, cloud storage, APIs) with built-in authentication, RBAC, observability, and guardrails. The server exposes a consistent interface for triggering tasks, routing results, and managing agent orchestration, enabling production-grade AI applications with enterprise controls.
To get started, install and run the Wavefront server, then use the provided tooling to define agents, connect data sources, and compose workflows. The platform supports integration with multiple LLM/SLM providers, retrieval-augmented generation through internal MCP connectors, and modular AI application composition. Use the CLI (when available) or the programmatic API to create agents, configure workspaces, and monitor execution through Grafana/Prometheus telemetry dashboards.
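As an illustration of what an agent definition for such a platform could look like, here is a minimal sketch. Note that the schema is hypothetical: field names like `dataSources`, connector URIs like `mcp://postgres-main`, and the guardrail/RBAC keys are invented for this example and are not Wavefront's documented format — consult the platform docs for the real schema.

```javascript
// Hypothetical illustration only: Wavefront's actual agent/workflow schema
// may differ. All field names and connector URIs below are invented.
const agentConfig = {
  name: "sales-insights",                       // hypothetical agent name
  model: "gpt-4o",                              // any supported LLM/SLM provider
  dataSources: [
    { type: "postgres", connector: "mcp://postgres-main" },   // OLTP database
    { type: "s3", connector: "mcp://reports-bucket" },        // cloud storage
  ],
  guardrails: { piiRedaction: true },           // example guardrail toggle
  rbac: { allowedRoles: ["analyst", "admin"] }, // example RBAC restriction
};

console.log(JSON.stringify(agentConfig, null, 2));
```

The point of a declarative definition like this is that the middleware, not the application code, owns authentication, routing, and observability for every data source the agent touches.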
How to install
Prerequisites:
- Node.js (v16+ recommended) and npm
- Git
- Clone the repository:

```shell
git clone https://github.com/rootflo/wavefront.git
cd wavefront
```

- Install dependencies for the wavefront-server:

```shell
npm install --prefix wavefront-server
```

- Configure the environment (example):

```shell
# Create a config file or set environment variables as needed
export WAVEFRONT_ENV=production
export WAVEFRONT_CONFIG=./config/default.json
```

- Run the Wavefront server:

```shell
node wavefront-server/index.js
```
- Verify the server is running by hitting the API endpoint or opening the dashboard as documented in the Wavefront docs.
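A quick check from the command line might look like the following. The port (8080) and the `/health` route are assumptions for illustration; use the port and endpoint from your own configuration and the Wavefront docs.

```shell
# Port 8080 and the /health route are assumptions; adjust to your config.
if curl -sf http://localhost:8080/health > /dev/null; then
  echo "wavefront: up"
else
  echo "wavefront: not reachable"
fi
```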
Additional notes
Notes and tips:
- This server acts as middleware; you’ll typically connect data sources, models, and agents via MCP connectors.
- Ensure proper RBAC configuration for production use. Plan to integrate with your existing Google/Entra SSO if needed.
- Telemetry is available via Grafana/Prometheus; configure dashboards to monitor agent performance and guardrails.
- If you upgrade, APIs may change; refer to ROADMAP.md and the platform docs for the latest integration details.
- If you encounter hostname or firewall issues, verify network access between the Wavefront server and your data sources.
- Environment variables and config paths are configurable; use a centralized config.json for reproducible deployments.
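For example, a centralized config might consolidate environment, server, telemetry, and auth settings in one file. The keys below are illustrative only, not Wavefront's documented schema:

```json
{
  "env": "production",
  "server": { "port": 8080 },
  "telemetry": { "prometheus": true, "grafanaDashboard": "wavefront-agents" },
  "auth": { "sso": "entra", "rbacPolicy": "./policies/rbac.json" }
}
```

Checking a file like this into version control (with secrets injected via environment variables, not stored in it) keeps deployments reproducible across environments.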
Related MCP Servers
generative-ai
Comprehensive resources on Generative AI, including a detailed roadmap, projects, use cases, interview preparation, and coding preparation.
mcp-memory-service
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
flock
Flock is a workflow-based low-code platform for rapidly building chatbots, RAG applications, and multi-agent teams, built on LangGraph, LangChain, FastAPI, and NextJS.
evo-ai
Evo AI is an open-source platform for creating and managing AI agents, enabling integration with different AI models and services.
CoexistAI
CoexistAI is a modular, developer-friendly research assistant framework. It lets you build, search, summarize, and automate research workflows using LLMs, web search, Reddit, YouTube, and mapping tools, all via simple MCP tool calls, API calls, or Python functions.
langgraph-ai
LangGraph AI Repository