
CrewAI - Multi-Agent Orchestration Framework

Build teams of autonomous AI agents that collaborate to solve complex tasks.

When to use CrewAI

Use CrewAI when you:

  • Are building multi-agent systems with specialized roles
  • Need autonomous collaboration between agents
  • Want role-based task delegation (researcher, writer, analyst)
  • Require sequential or hierarchical process execution
  • Are building production workflows with memory and observability
  • Need a simpler setup than LangChain/LangGraph

Key features:

  • Standalone: No LangChain dependencies, lean footprint
  • Role-based: Agents have roles, goals, and backstories
  • Dual paradigm: Crews (autonomous) + Flows (event-driven)
  • 50+ tools: Web scraping, search, databases, AI services
  • Memory: Short-term, long-term, and entity memory
  • Production-ready: Tracing, enterprise features

Use alternatives instead:

  • LangChain: General-purpose LLM apps, RAG pipelines
  • LangGraph: Complex stateful workflows with cycles
  • AutoGen: Microsoft ecosystem, multi-agent conversations
  • LlamaIndex: Document Q&A, knowledge retrieval

Quick start

Installation

# Core framework
pip install crewai

# With 50+ built-in tools
pip install 'crewai[tools]'

Create project with CLI

# Create new crew project
crewai create crew my_project
cd my_project

# Install dependencies
crewai install

# Run the crew
crewai run

Simple crew (code-only)

from crewai import Agent, Task, Crew, Process

# 1. Define agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Discover cutting-edge developments in AI",
    backstory="You are an expert analyst with a keen eye for emerging trends.",
    verbose=True
)

writer = Agent(
    role="Technical Writer",
    goal="Create clear, engaging content about technical topics",
    backstory="You excel at explaining complex concepts to general audiences.",
    verbose=True
)

# 2. Define tasks
research_task = Task(
    description="Research the latest developments in {topic}. Find 5 key trends.",
    expected_output="A detailed report with 5 bullet points on key trends.",
    agent=researcher
)

write_task = Task(
    description="Write a blog post based on the research findings.",
    expected_output="A 500-word blog post in markdown format.",
    agent=writer,
    context=[research_task]  # Uses research output
)

# 3. Create and run crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # Tasks run in order
    verbose=True
)

# 4. Execute
result = crew.kickoff(inputs={"topic": "AI Agents"})
print(result.raw)

Core concepts

Agents - Autonomous workers

from crewai import Agent

agent = Agent(
    role="Data Scientist",                    # Job title/role
    goal="Analyze data to find insights",     # What they aim to achieve
    backstory="PhD in statistics...",         # Background context
    llm="gpt-4o",                             # LLM to use
    tools=[],                                 # Tools available
    memory=True,                              # Enable memory
    verbose=True,                             # Show reasoning
    allow_delegation=True,                    # Can delegate to others
    max_iter=15,                              # Max reasoning iterations
    max_rpm=10                                # Rate limit
)

Tasks - Units of work

from crewai import Task

task = Task(
    description="Analyze the sales data for Q4 2024. {context}",
    expected_output="A summary report with key metrics and trends.",
    agent=analyst,                            # Assigned agent
    context=[previous_task],                  # Input from other tasks
    output_file="report.md",                  # Save to file
    async_execution=False,                    # Run synchronously
    human_input=False                         # No human approval needed
)

Crews - Teams of agents

from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer, editor],      # Team members
    tasks=[research, write, edit],            # Tasks to complete
    process=Process.sequential,               # Or Process.hierarchical
    verbose=True,
    memory=True,                              # Enable crew memory
    cache=True,                               # Cache tool results
    max_rpm=10,                               # Rate limit
    share_crew=False                          # Opt-in telemetry
)

# Execute with inputs
result = crew.kickoff(inputs={"topic": "AI trends"})

# Access results
print(result.raw)                             # Final output
print(result.tasks_output)                    # All task outputs
print(result.token_usage)                     # Token consumption

Process types

Sequential (default)

Tasks execute in order; each agent completes its task before the next begins:

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential  # Task 1 → Task 2 → Task 3
)

Hierarchical

Auto-creates a manager agent that delegates and coordinates:

crew = Crew(
    agents=[researcher, writer, analyst],
    tasks=[research_task, write_task, analyze_task],
    process=Process.hierarchical,  # Manager delegates tasks
    manager_llm="gpt-4o"           # LLM for manager
)

Using tools

Built-in tools (50+)

# Requires: pip install 'crewai[tools]'
from crewai_tools import (
    SerperDevTool,           # Web search
    ScrapeWebsiteTool,       # Web scraping
    FileReadTool,            # Read files
    PDFSearchTool,           # Search PDFs
    WebsiteSearchTool,       # Search websites
    CodeDocsSearchTool,      # Search code docs
    YoutubeVideoSearchTool,  # Search YouTube
)

# Assign tools to agent
researcher = Agent(
    role="Researcher",
    goal="Find accurate information",
    backstory="Expert at finding data online.",
    tools=[SerperDevTool(), ScrapeWebsiteTool()]
)

Custom tools

from crewai.tools import BaseTool

class CalculatorTool(BaseTool):
    name: str = "Calculator"
    description: str = "Performs mathematical calculations. Input: expression"

    def _run(self, expression: str) -> str:
        try:
            # Warning: eval runs arbitrary code; restrict or sandbox it for untrusted input
            result = eval(expression)
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {e}"

# Use custom tool
agent = Agent(
    role="Analyst",
    goal="Perform calculations",
    tools=[CalculatorTool()]
)
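The `eval` call in the tool above executes arbitrary Python, so for untrusted input you may want a restricted evaluator. A minimal stdlib sketch a `_run` method could delegate to (the `safe_eval` helper is illustrative, not part of CrewAI):

```python
# Hypothetical safe arithmetic evaluator using the stdlib ast module;
# walks the parsed tree and allows only numeric literals and basic operators.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Disallowed expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_eval("2 * (3 + 4)"))  # → 14
```

Anything outside plain arithmetic (names, calls, attribute access) raises `ValueError` instead of executing.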

YAML configuration (recommended)

Project structure

my_project/
├── src/my_project/
│   ├── config/
│   │   ├── agents.yaml    # Agent definitions
│   │   └── tasks.yaml     # Task definitions
│   ├── crew.py            # Crew assembly
│   └── main.py            # Entry point
└── pyproject.toml

agents.yaml

researcher:
  role: "{topic} Senior Data Researcher"
  goal: "Uncover cutting-edge developments in {topic}"
  backstory: >
    You're a seasoned researcher with a knack for uncovering
    the latest developments in {topic}. Known for your ability
    to find relevant information and present it clearly.

reporting_analyst:
  role: "Reporting Analyst"
  goal: "Create detailed reports based on research data"
  backstory: >
    You're a meticulous analyst who transforms raw data into
    actionable insights through well-structured reports.

tasks.yaml

research_task:
  description: >
    Conduct thorough research about {topic}.
    Find the most relevant information for {year}.
  expected_output: >
    A list with 10 bullet points of the most relevant
    information about {topic}.
  agent: researcher

reporting_task:
  description: >
    Review the research and create a comprehensive report.
    Focus on key findings and recommendations.
  expected_output: >
    A detailed report in markdown format with executive
    summary, findings, and recommendations.
  agent: reporting_analyst
  output_file: report.md

crew.py

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class MyProjectCrew:
    """My Project crew"""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            tools=[SerperDevTool()],
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config['research_task'])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )

main.py

from my_project.crew import MyProjectCrew

def run():
    inputs = {
        'topic': 'AI Agents',
        'year': '2025'  # interpolated into the {year} placeholders in the YAML
    }
    MyProjectCrew().crew().kickoff(inputs=inputs)

if __name__ == "__main__":
    run()
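The `inputs` dict fills the `{topic}` and `{year}` placeholders in agents.yaml and tasks.yaml. Conceptually this works like Python's `str.format` (a simplified illustration, not CrewAI's internal code):

```python
# Simplified illustration of placeholder interpolation: CrewAI substitutes
# kickoff inputs into {name} placeholders in agent and task config strings.
description = (
    "Conduct thorough research about {topic}. "
    "Find the most relevant information for {year}."
)
inputs = {"topic": "AI Agents", "year": "2025"}

print(description.format(**inputs))
# → Conduct thorough research about AI Agents. Find the most relevant information for 2025.
```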

Flows - Event-driven orchestration

For complex workflows with conditional logic, use Flows:

from crewai.flow.flow import Flow, listen, start, router
from pydantic import BaseModel

class MyState(BaseModel):
    confidence: float = 0.0

class MyFlow(Flow[MyState]):
    @start()
    def gather_data(self):
        return {"data": "collected"}

    @listen(gather_data)
    def analyze(self, data):
        self.state.confidence = 0.85
        # analysis_crew is a Crew assembled elsewhere
        return analysis_crew.kickoff(inputs=data)

    @router(analyze)
    def decide(self):
        return "high" if self.state.confidence > 0.8 else "low"

    @listen("high")
    def generate_report(self):
        # report_crew is a Crew assembled elsewhere
        return report_crew.kickoff()

# Run flow
flow = MyFlow()
result = flow.kickoff()

See Flows Guide for complete documentation.
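To make the control flow concrete, here is the same pipeline as plain Python with the crew kickoffs stubbed out (a conceptual sketch of the routing logic, not CrewAI's API):

```python
# Plain-Python sketch of the flow above: gather → analyze → route on confidence.
# The crew kickoff calls are replaced by stub return values.
def gather_data():
    return {"data": "collected"}

def analyze(payload, state):
    state["confidence"] = 0.85
    return f"analysis of {payload['data']}"  # stands in for analysis_crew.kickoff()

def decide(state):
    return "high" if state["confidence"] > 0.8 else "low"

state = {"confidence": 0.0}
analysis = analyze(gather_data(), state)
branch = decide(state)  # listeners registered on this label run next
print(branch)  # → high
```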

Memory system

# Enable all memory types
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    memory=True,           # Enable memory
    embedder={             # Custom embeddings
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"}
    }
)

Memory types: Short-term (ChromaDB), Long-term (SQLite), Entity (ChromaDB)

LLM providers

from crewai import LLM

llm = LLM(model="gpt-4o")                                              # OpenAI (default)
llm = LLM(model="claude-sonnet-4-5-20250929")                          # Anthropic
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")  # Local (Ollama)
llm = LLM(model="azure/gpt-4o", base_url="https://...")                # Azure

agent = Agent(role="Analyst", goal="Analyze data", llm=llm)

CrewAI vs alternatives

| Feature        | CrewAI            | LangChain        | LangGraph          |
| -------------- | ----------------- | ---------------- | ------------------ |
| Best for       | Multi-agent teams | General LLM apps | Stateful workflows |
| Learning curve | Low               | Medium           | Higher             |
| Agent paradigm | Role-based        | Tool-based       | Graph-based        |
| Memory         | Built-in          | Plugin-based     | Custom             |

Best practices

  1. Clear roles - Each agent should have a distinct specialty
  2. YAML config - Better organization for larger projects
  3. Enable memory - Improves context across tasks
  4. Set max_iter - Prevent infinite loops (default 15)
  5. Limit tools - 3-5 tools per agent max
  6. Rate limiting - Set max_rpm to avoid API limits

Common issues

Agent stuck in loop:

agent = Agent(
    role="...",
    max_iter=10,           # Limit iterations
    max_rpm=5              # Rate limit
)

Task not using context:

task2 = Task(
    description="...",
    context=[task1],       # Explicitly pass context
    agent=writer
)

Memory errors:

# Use environment variable for storage
import os
os.environ["CREWAI_STORAGE_DIR"] = "./my_storage"

References

Source

https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/14-agents/crewai/SKILL.md

Overview

CrewAI is a multi-agent orchestration framework for building teams of autonomous agents that collaborate to solve complex tasks. It supports role-based collaboration, memory across tasks (short-term, long-term, and entity memory), and sequential or hierarchical workflows. It is built without LangChain dependencies, keeping a lean footprint for production-grade pipelines with tracing and observability.

How This Skill Works

You define agents with roles, goals, and backstories, and they operate in two paradigms: Crews (autonomous collaboration) and Flows (event-driven orchestration). Agents can access memory, use 50+ built-in tools, and run tasks in sequence or in a hierarchy. The framework provides production-ready tracing and enterprise features for observability and reliability.

When to Use It

  • Building teams of specialized agents for complex tasks
  • Need autonomous collaboration with role-based task delegation (researcher, writer, analyst)
  • Require sequential or hierarchical execution of steps
  • Production workflows with memory and observability
  • Simpler setup than LangChain/LangGraph for multi-agent projects

Quick Start

  1. Install the core framework: pip install crewai
  2. Create a project with the CLI (crewai create crew my_project), then cd into it and install dependencies (crewai install)
  3. Define agents and tasks, then run the crew (crewai run) and monitor outputs

Best Practices

  • Define clear agent roles, goals, and backstories before kickoff
  • Leverage memory settings (short/long/entity) to retain context across tasks
  • Design tasks with explicit dependencies to support sequential flows
  • Use the 50+ built-in tools to fit the task (web, databases, AI services)
  • Enable tracing and observability for production readiness

Example Use Cases

  • Market research: researchers gather trends, writers produce a blog post
  • Documentation pipeline: data extraction, summarization, QA, and publish
  • Competitive analysis: agents compile sources, synthesize insights, and assemble a brief
  • Policy review workflow: legal reviewer validates outputs before publication
  • Product spec generation: research informs specification writing with memory across steps


Related Skills

dspy

Orchestra-Research/AI-Research-SKILLs

Build complex AI systems with declarative programming, optimize prompts automatically, create modular RAG systems and agents with DSPy - Stanford NLP's framework for systematic LM programming

langchain

Orchestra-Research/AI-Research-SKILLs

Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.

qdrant-vector-search

Orchestra-Research/AI-Research-SKILLs

High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.

sentence-transformers

Orchestra-Research/AI-Research-SKILLs

Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.

ray-train

Orchestra-Research/AI-Research-SKILLs

Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.

skypilot-multi-cloud-orchestration

Orchestra-Research/AI-Research-SKILLs

Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.
