WisAI
Agentic AI orchestrator and coding agent for business-as-code platforms, services, and data. Focused on streamlining development for startups and small businesses that want to own their own cloud.
```shell
claude mcp add --transport stdio uhstray-io-wisai uvx run main.py \
  --env UV_PORT="Backend port (default 2024)" \
  --env UV_CONFIG="path/to/config.json (optional)"
```
How to use
WisAI (WisLLM) is an agent-driven framework that orchestrates multiple specialized LLMs and automation agents to support business-as-code platforms, services, and data workflows. The system emphasizes modular, collaborative, locally runnable agents that communicate, delegate tasks, and provide feedback. Through the WisLLM design, a Scrum Master, Architect, Data Scientist, Requirements Researcher, and several development and operations agents collaborate to plan, implement, and deploy solutions.

The included LangGraph integration lets you visualize agent graphs, manage inter-agent handoffs, and coordinate task execution across the design, dev, and ops pipelines. The platform is designed to run locally, with an emphasis on energy efficiency and minimal complexity through modular components and lightweight inter-agent communication.
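The exact message schema WisAI uses for inter-agent handoffs is not documented here; as a rough mental model, a handoff between two of the roles above can be sketched as a small dataclass (the `Handoff` name and fields are hypothetical, not part of the WisAI API):

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """Hypothetical message shape for an inter-agent handoff."""
    sender: str      # e.g. "scrum_master"
    receiver: str    # e.g. "architect"
    task: str        # short description of the delegated task
    feedback: list[str] = field(default_factory=list)  # reviewer notes, if any
```

A Scrum Master delegating plan drafting to the Architect would then produce something like `Handoff("scrum_master", "architect", "draft system plan")`, with feedback accumulating as agents iterate.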
To use WisAI, start the server with the provided entrypoint (uvx run main.py) and access the local API at the configured port (default http://localhost:2024). The server exposes an HTTP API for interacting with WisLLM, monitoring agent activity, and triggering workflows. Within the running environment, you can deploy and test the various agents (e.g., Scrum Master, Architect, Data Scientist, Data Engineer, UX/UI Engineer, DevSecOps) to execute iterative tasks such as requirements elicitation, plan generation, and code/data development within an agile cycle. Be prepared to provide feedback during agent interactions to guide task progression and keep results aligned with business goals.
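Client code talking to the local API needs to resolve the same port the server was started with. A minimal sketch, assuming only the UV_PORT convention described above (the helper name is illustrative):

```python
import os

def wisai_base_url() -> str:
    # Resolve the backend URL; UV_PORT defaults to 2024 per the README.
    port = int(os.environ.get("UV_PORT", "2024"))
    return f"http://localhost:{port}"
```

With no environment override this yields http://localhost:2024, matching the default above; setting UV_PORT changes the URL accordingly.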
How to install
Prerequisites
- Python 3.10+ installed on your machine
- Git installed
- Access to a terminal/command prompt
Step-by-step installation
1. Clone the repository

```shell
git clone https://github.com/yourorg/uhstray-io-wisai.git
cd uhstray-io-wisai
```

2. (Optional but recommended) Create and activate a virtual environment

```shell
python -m venv venv

# Windows
venv\Scripts\activate

# macOS/Linux
source venv/bin/activate
```

3. Install dependencies

```shell
pip install --upgrade pip
pip install -r requirements.txt
```

If no requirements.txt is present, install the main package and runtimes as documented in the README.

4. Prepare configuration (optional)

- Create a config.json if you plan to customize ports, paths, or agent behavior. Example:

```json
{
  "port": 2024,
  "agents": ["scrum_master", "architect", "data_scientist"]
}
```

- Ensure the path is accessible and referenced via UV_CONFIG or default behavior

5. Run the server

```shell
uvx run main.py
```

6. Verify the server is running

- API: http://localhost:2024
- Docs: http://localhost:2024/docs
Notes
- If you don’t have uvx installed, install uv (which provides the uvx runner) for your environment and ensure it’s on your PATH.
- The README mentions an API and LangGraph Studio; ensure any required ports are open and dependencies for LangGraph Studio are installed if you plan to use Studio features.
Additional notes
Tips and common issues:
- Environment variables: Configure UV_PORT and UV_CONFIG to customize runtime port and configuration file. If UV_CONFIG is omitted, defaults are used.
- Local development: Run with a virtual environment to avoid system-wide conflicts. Keep dependencies isolated per project.
- LangGraph Studio: For visualization and orchestration, you can connect LangGraph Studio to the running WisAI instance using the provided base URL. This helps in managing agent graphs and inter-agent handoffs.
- If you encounter API errors, verify that the server process started correctly and that required dependencies (like vllm or related runtimes) are installed in the active environment.
- Logging: Enable or inspect logs to identify agent interaction sequences and task handoffs for debugging complex agent workflows.
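For the logging tip above, one simple approach is a named logger per agent role, so interaction sequences and handoffs can be filtered by logger name. This is a hypothetical helper using Python's standard logging module, not a WisAI-provided function:

```python
import logging

def setup_agent_logging(level: int = logging.INFO) -> dict[str, logging.Logger]:
    # One named logger per agent role makes handoff sequences easy to grep.
    logging.basicConfig(
        format="%(asctime)s %(name)s %(levelname)s %(message)s", level=level
    )
    roles = ("scrum_master", "architect", "data_scientist", "devsecops")
    return {role: logging.getLogger(f"wisai.{role}") for role in roles}
```

Logging a handoff as e.g. `loggers["architect"].info("received plan from scrum_master")` then produces timestamped, role-tagged lines that make complex agent workflows easier to debug.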