weam
Web app for teams of 20+ members. Built-in connections to major LLMs via API. Share chats, prompts, and agents in team or private folders. Modern, fully responsive stack (Next.js, Node.js). Deploy your own vibe-coded AI apps, agents, or workflows, or use ready-made solutions from the library.
claude mcp add --transport stdio weam-ai-weam node server.js \
  --env WEAM_APP_ENV="production" \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env HUGGINGFACE_API_TOKEN="your-huggingface-token" \
  --env NEXT_PUBLIC_API_BASE_URL="https://your-weam-deployment-url"
How to use
Weam is a production-grade open source platform that unifies LLMs, agents, AI apps, and team collaboration in a single workspace. It ships with a Next.js frontend and a Node.js backend, providing out-of-the-box support for multiple LLM providers, local model integration via Ollama, and a growing MCP (Model Context Protocol) integration layer.

Once deployed, you can create Brains to organize chats, prompts, and agents by department or project, connect to external services via MCP integrations (such as Gmail, Slack, and Google Drive as they’re added), and deploy apps from the AI Apps catalog. The MCP area lets you hook in external services and workflows so agents can reason over data from your tools, documents, and team processes.

To start using Weam, ensure your environment meets the CPU and RAM requirements, start the Node.js server, and open the frontend UI to begin creating Brains, prompts, and agents.
Key capabilities include:
- Chat with multiple LLM providers (OpenAI, Anthropic, Gemini, Llama, Perplexity, and more) with persistent conversation history and context-aware behavior.
- Productivity tools like web scraping, web search, team collaboration, and threaded comments to support team workflows.
- Prompt Library for creating, sharing, and reusing prompts across teams, plus prompt enhancement and quick access features.
- AI Agents with custom instructions, knowledge bases, and model selection, with MCP integration to connect agents to external services.
- RAG (Retrieval-Augmented Generation) pipeline for document processing and intelligent retrieval.
- MCP Integration to extend Weam with external apps and services via MCP connections.
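The RAG bullet above describes a retrieve-then-generate flow: find the most relevant documents, then assemble them into the prompt. The sketch below illustrates only that shape; Weam's actual pipeline uses embeddings and a proper document store, whereas this stand-in scores documents by word overlap so it runs with no external services. The sample documents and function names are illustrative, not Weam APIs.

```javascript
// Toy retrieve-then-generate sketch (NOT Weam's real pipeline).
// Real RAG replaces word-overlap scoring with embedding similarity.
const docs = [
  "Weam organizes chats, prompts, and agents into Brains.",
  "MCP connections let agents reach external services like Slack.",
  "The prompt library supports sharing prompts across teams.",
];

// Score each document by how many query words it contains.
function retrieve(query, k = 1) {
  const qWords = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return docs
    .map((text) => ({
      text,
      score: text.toLowerCase().split(/\W+/).filter((w) => qWords.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Stuff the top-scoring context into the prompt sent to the LLM.
function buildPrompt(query) {
  const context = retrieve(query).map((d) => d.text).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}
```

Swapping the scorer for a vector-store query is the only structural change needed to make this a real pipeline.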
To use the MCP features and tools, navigate to the MCP section in Weam’s interface, add a new connection to a supported service, and configure agents to leverage documents, prompts, and contextual data from that service. You can create or import Brains, assign prompts to agents, and deploy AI apps that are available in the platform’s Apps catalog.
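The stdio command shown near the top of this page can also be written as a project-level `.mcp.json` entry for Claude Code. A sketch only: the server name, `server.js` path, and variable values mirror that example command and are placeholders for your own deployment.

```json
{
  "mcpServers": {
    "weam-ai-weam": {
      "command": "node",
      "args": ["server.js"],
      "env": {
        "WEAM_APP_ENV": "production",
        "OPENAI_API_KEY": "your-openai-api-key",
        "HUGGINGFACE_API_TOKEN": "your-huggingface-token",
        "NEXT_PUBLIC_API_BASE_URL": "https://your-weam-deployment-url"
      }
    }
  }
}
```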
How to install
Prerequisites:
- Node.js (recommended LTS version) installed on the host
- Access to a suitable runtime environment (localhost or server with network access)
- An API key for your preferred LLM providers (e.g., OpenAI) and any required tokens for MCP connections
Installation steps:
1. Clone the repository:
   git clone https://github.com/weam-ai/weamai.git
   cd weamai
2. Install backend and frontend dependencies:
   npm install
3. Configure environment variables:
   - Create a .env file or set environment variables as needed. Example:
     NEXT_PUBLIC_API_BASE_URL=https://your-weam-deployment-url
     OPENAI_API_KEY=your-openai-api-key
     HUGGINGFACE_API_TOKEN=your-huggingface-token
   - Ensure any other required MCP credentials or tokens are provided.
4. Start the application:
   npm run start
   If using a custom script, ensure the server.js path matches your setup.
5. Access the UI:
   - Open http://localhost:3000 (or the configured port) in your browser.
6. Optional: Run in production mode behind a reverse proxy or container orchestrator (Docker/Kubernetes) as appropriate for your infrastructure.
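Missing environment variables are the most common cause of a failed first start, so it can help to fail fast with a clear message before booting the server. A minimal sketch, assuming the variable names from the example .env above (adjust the `required` list for your deployment); this is not part of Weam itself.

```javascript
// Fail-fast startup check for required configuration.
// The names below mirror the example .env; edit for your deployment.
const required = ["NEXT_PUBLIC_API_BASE_URL", "OPENAI_API_KEY"];

// Return the names of required variables that are unset or blank.
function missingEnv(env) {
  return required.filter((name) => !env[name] || String(env[name]).trim() === "");
}

// Example: a partially configured environment (API base URL absent).
const missing = missingEnv({ OPENAI_API_KEY: "sk-placeholder" });
if (missing.length > 0) {
  console.error(`Missing required env vars: ${missing.join(", ")}`);
}
```

Run it with `node check-env.js` (passing `process.env` instead of the example object) before `npm run start`, or call it from the top of your entry point.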
Additional notes
Tips and common issues:
- Ensure 4+ CPU cores and at least 8GB RAM for smooth operation; production deployments may require more depending on load.
- When configuring MCP connections, verify the scopes and permissions required by each external service.
- If the frontend cannot reach the backend, verify environment variables (API base URL, ports) and CORS settings.
- Keep API keys and tokens secure; do not commit .env files to version control.
- For local model hosting (Ollama), ensure the local model service is reachable from the Node backend (same host or correct network routing).
- Monitor logs for startup errors related to missing env vars or misconfigured MCP connections and address them by updating .env or the mcp_config accordingly.
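The Ollama reachability tip above can be checked with a quick probe from the backend host. A sketch assuming Ollama's default listen address (http://localhost:11434) and its GET /api/tags endpoint, which lists installed models; `OLLAMA_URL` is a hypothetical variable for your setup, and global `fetch`/`AbortSignal.timeout` require Node 18+.

```javascript
// Probe a local Ollama instance from the Node backend (Node 18+).
// GET /api/tags is Ollama's "list installed models" endpoint.
const OLLAMA_URL = process.env.OLLAMA_URL || "http://localhost:11434";

async function ollamaReachable(baseUrl = OLLAMA_URL) {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(2000), // don't hang on dead hosts
    });
    return res.ok;
  } catch {
    return false; // refused, timed out, or DNS failure
  }
}

ollamaReachable().then((ok) =>
  console.log(ok ? "Ollama is reachable" : "Ollama is NOT reachable; check host/port")
);
```

If this prints "NOT reachable" on the same host that runs the backend, the problem is routing or the Ollama service itself, not Weam's configuration.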
Related MCP Servers
mindsdb
Query Engine for AI Analytics: Build self-reasoning agents across all your live data
bytebot
Bytebot is a self-hosted AI desktop agent that automates computer tasks through natural language commands, operating within a containerized Linux desktop environment.
sre
The SmythOS Runtime Environment (SRE) is an open-source, cloud-native runtime for agentic AI. Secure, modular, and production-ready, it lets developers build, run, and manage intelligent agents across local, cloud, and edge environments.
minima
On-premises conversational RAG with configurable containers
vllora
Debug your AI agents
bytechef
Open-source, AI-native, low-code platform for API orchestration, workflow automation, and AI agent integration across internal systems and SaaS products.