
mangaba_ai

A minimalist repository for building intelligent, versatile AI agents with the A2A (Agent-to-Agent) and MCP (Model Context Protocol) protocols.

Installation
Run this command in your terminal to add the MCP server to Claude Code. Set LLM_PROVIDER to one of google, openai, anthropic, or huggingface, and provide the matching API key:
claude mcp add --transport stdio mangaba-ai-mangaba_ai python -m mangaba_ai \
  --env LOG_LEVEL="INFO" \
  --env MODEL_NAME="gemini-2.5-flash" \
  --env LLM_PROVIDER="google" \
  --env GOOGLE_API_KEY="" \
  --env OPENAI_API_KEY="" \
  --env ANTHROPIC_API_KEY="" \
  --env HUGGINGFACE_API_KEY=""

How to use

Mangaba AI provides a Python-based MCP (Model Context Protocol) service that orchestrates multi-agent workflows and context management. It lets you build agent crews with defined roles, goals, and tasks, all communicating via A2A and using a structured task/process system. The included tooling supports configuring providers (Google Gemini, OpenAI, Anthropic, or Hugging Face), running examples, and managing environment variables for authentication and model selection.

To use it, install mangaba via pip, then run the MCP entry point to start the server. Once the server is running, define agents, tasks, and processes in your application or scripts and invoke the crew orchestration flow to execute complex, multi-step workflows.
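The crew/task model described above can be illustrated with a small, self-contained sketch. Note that this is a conceptual illustration in plain Python, not the actual mangaba_ai API: the class names, fields, and the sequential `Crew.run` loop here are assumptions for explanation only (a real agent would call the configured LLM provider instead of returning a string).

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A conceptual agent with a role and a goal (names are illustrative)."""
    name: str
    role: str
    goal: str

    def perform(self, task: "Task") -> str:
        # A real agent would call the configured LLM provider here.
        return f"{self.name} ({self.role}) completed: {task.description}"

@dataclass
class Task:
    description: str
    assignee: Agent

@dataclass
class Crew:
    """Runs tasks sequentially; mangaba_ai's task/process system is richer."""
    tasks: list = field(default_factory=list)

    def run(self) -> list:
        return [t.assignee.perform(t) for t in self.tasks]

researcher = Agent("Ana", "Researcher", "Gather background material")
writer = Agent("Bruno", "Writer", "Draft the report")
crew = Crew(tasks=[
    Task("Collect sources on topic X", researcher),
    Task("Write a summary from the sources", writer),
])
for result in crew.run():
    print(result)
```

The point of the sketch is the shape of the data: agents carry a role and goal, tasks are assigned to agents, and a crew-level process decides execution order.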

How to install

Prerequisites:

  • Python 3.8+ (recommended 3.9+)
  • pip (comes with Python)
  • Optional: virtual environment tools (venv) for isolation

Step-by-step installation:

  1. Create and activate a virtual environment (optional but recommended):
python -m venv .venv
# Windows
.\.venv\Scripts\Activate.ps1
# Linux/macOS
source .venv/bin/activate
  2. Install Mangaba from PyPI:
pip install mangaba
  3. Quick test to verify the installation:
python -c "from mangaba_ai import MangabaAgent; print(MangabaAgent)"
  4. Run the MCP server entry point (module):
python -m mangaba_ai
  5. Optional: clone the repository to run the examples, or set up environment variables as documented in the README.

Configuring environment variables (example):

# Example .env values
LLM_PROVIDER=google                 # google | openai | anthropic | huggingface
GOOGLE_API_KEY=your_google_key
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
HUGGINGFACE_API_KEY=your_hf_key
MODEL_NAME=gemini-2.5-flash
LOG_LEVEL=INFO
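Since each provider requires its own key, a small stdlib-only check can verify that the key matching LLM_PROVIDER is actually set before starting the server. The variable names follow the .env example above; the helper function itself is not part of mangaba_ai.

```python
import os

# Maps each supported provider to the API key variable it requires,
# following the .env example above.
PROVIDER_KEYS = {
    "google": "GOOGLE_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "huggingface": "HUGGINGFACE_API_KEY",
}

def check_provider_key(env=os.environ) -> str:
    """Return the key variable for the configured provider, or raise."""
    provider = env.get("LLM_PROVIDER", "google").lower()
    key_var = PROVIDER_KEYS.get(provider)
    if key_var is None:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    if not env.get(key_var):
        raise RuntimeError(f"{key_var} must be set when LLM_PROVIDER={provider}")
    return key_var

# Example:
# check_provider_key({"LLM_PROVIDER": "openai", "OPENAI_API_KEY": "sk-..."})
```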

Additional notes

Notes and tips:

  • The provider keys must match the chosen LLM provider in your .env file. Do not expose keys in source control.
  • MODEL_NAME is provider-specific; ensure the value exists for the selected provider.
  • The MCP server (Mangaba) emphasizes A2A and MCP protocols for agent communication and context management; adjust crew and task definitions to align with your orchestration needs.
  • If you switch providers, update LLM_PROVIDER and the corresponding API key variable, and adjust MODEL_NAME to a model the new provider actually offers.
  • For debugging, set LOG_LEVEL to DEBUG and leverage the example snippets in the docs to model task flows.
