aws-strands
npx machina-cli add skill Makiya1202/ai-agents-skills/aws-strands --openclaw
Strands Agents SDK
Build model-agnostic AI agents with the Strands framework.
Installation
pip install strands-agents strands-agents-tools
# Or with npm
npm install @strands-agents/sdk
Quick Start
from strands import Agent
from strands.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # Implementation
    return f"Weather in {city}: 72°F, Sunny"

agent = Agent(
    model="anthropic.claude-3-sonnet",
    tools=[get_weather]
)

response = agent("What's the weather in Seattle?")
print(response)
TypeScript/JavaScript
import { Agent, tool } from '@strands-agents/sdk';

const getWeather = tool({
  name: 'get_weather',
  description: 'Get current weather for a city',
  parameters: {
    city: { type: 'string', description: 'City name' }
  },
  handler: async ({ city }) => {
    return `Weather in ${city}: 72°F, Sunny`;
  }
});

const agent = new Agent({
  model: 'anthropic.claude-3-sonnet',
  tools: [getWeather]
});

const response = await agent.run('What\'s the weather in Seattle?');
Model Agnostic
Strands works with any LLM:
from strands import Agent

# Anthropic (default)
agent = Agent(model="anthropic.claude-3-sonnet")

# OpenAI
agent = Agent(model="openai.gpt-4o")

# Amazon Bedrock
agent = Agent(model="amazon.titan-text-premier")

# Custom endpoint
agent = Agent(
    model="custom",
    endpoint="https://your-model-endpoint.com",
    api_key="..."
)
Tool Definition Patterns
Decorator Style
from strands.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> list[dict]:
    """Search the product database.

    Args:
        query: Search query string
        limit: Maximum results to return
    """
    # Implementation
    return results
Class Style
from strands.tools import Tool

class DatabaseSearchTool(Tool):
    name = "search_database"
    description = "Search the product database"

    def parameters(self):
        return {
            "query": {"type": "string", "description": "Search query"},
            "limit": {"type": "integer", "default": 10}
        }

    def run(self, query: str, limit: int = 10):
        return self.db.search(query, limit)
ReAct Pattern
Built-in ReAct (Reasoning + Acting) support:
from strands import Agent, ReActStrategy

agent = Agent(
    model="anthropic.claude-3-sonnet",
    tools=[search_tool, calculate_tool],
    strategy=ReActStrategy(
        max_iterations=10,
        verbose=True
    )
)

# Agent will reason through complex multi-step tasks
response = agent("""
Find the top 3 products in our database,
calculate their average price,
and recommend if we should adjust pricing.
""")
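The think-act-observe cycle that ReAct strategies run can be pictured as a plain-Python sketch. Everything below (`fake_model`, `react_loop`, the `calculate` tool) is illustrative, not the SDK's internals: the model proposes a tool call, the loop executes it, and the observation is fed back until the model emits a final answer or `max_iterations` is reached.

```python
# Minimal ReAct-style loop: thought -> tool call -> observation, repeated.
# `fake_model` stands in for the LLM; all names here are illustrative.

def fake_model(history):
    # Pretend the model requests one tool call, then answers.
    if not any(step[0] == "observation" for step in history):
        return {"action": "calculate", "input": "2 + 3"}
    return {"answer": f"The result is {history[-1][1]}"}

def calculate(expr: str) -> str:
    a, op, b = expr.split()
    return str(int(a) + int(b)) if op == "+" else "unsupported"

def react_loop(task: str, tools: dict, max_iterations: int = 10) -> str:
    history = [("task", task)]
    for _ in range(max_iterations):
        step = fake_model(history)
        if "answer" in step:                                 # model is done
            return step["answer"]
        observation = tools[step["action"]](step["input"])   # act
        history.append(("observation", observation))         # observe
    return "max iterations reached"

result = react_loop("What is 2 + 3?", {"calculate": calculate})
```

The `max_iterations` guard is the same safety valve `ReActStrategy(max_iterations=10)` expresses: it bounds how many tool rounds the agent may take before giving up.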
Multi-Agent Systems
from strands import Agent, MultiAgentOrchestrator

# Specialist agents
researcher = Agent(
    name="researcher",
    model="anthropic.claude-3-sonnet",
    tools=[web_search, document_reader],
    system_prompt="You are a research specialist."
)

analyst = Agent(
    name="analyst",
    model="anthropic.claude-3-sonnet",
    tools=[data_analyzer, chart_generator],
    system_prompt="You are a data analyst."
)

writer = Agent(
    name="writer",
    model="anthropic.claude-3-sonnet",
    tools=[document_writer],
    system_prompt="You are a technical writer."
)

# Orchestrator
orchestrator = MultiAgentOrchestrator(
    agents=[researcher, analyst, writer],
    routing="supervisor"  # or "round_robin", "intent"
)

response = orchestrator.run(
    "Research AI trends, analyze the data, and write a report"
)
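The "supervisor" routing mode means one coordinator decides which specialist handles each step. In the SDK that decision is made by an LLM; the dispatch shape can be sketched in plain Python with keyword matching standing in for the supervisor's judgment (all names below are illustrative).

```python
# Illustrative supervisor dispatch: route a task to the specialist whose
# capabilities match. A real orchestrator asks an LLM to make this choice;
# keyword sets merely stand in for that decision here.

SPECIALISTS = {
    "researcher": {"research", "find", "search"},
    "analyst": {"analyze", "calculate", "chart"},
    "writer": {"write", "draft", "report"},
}

def route(task: str) -> str:
    """Return the name of the specialist that should handle `task`."""
    words = set(task.lower().split())
    for name, keywords in SPECIALISTS.items():
        if words & keywords:
            return name
    return "researcher"  # fall back to a default specialist
```

A composite request like the one above would be split by the supervisor into steps, each routed in turn: research first, then analysis, then writing.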
Streaming Responses
from strands import Agent
agent = Agent(model="anthropic.claude-3-sonnet")
# Stream response
for chunk in agent.stream("Explain quantum computing"):
    print(chunk, end="", flush=True)
Memory Management
from strands import Agent
from strands.memory import ConversationMemory, SemanticMemory
agent = Agent(
    model="anthropic.claude-3-sonnet",
    memory=[
        ConversationMemory(max_turns=10),
        SemanticMemory(embedding_model="text-embedding-3-small")
    ]
)
# Memory persists across calls
agent("My name is Alice")
agent("What's my name?") # Remembers: "Your name is Alice"
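The idea behind `ConversationMemory(max_turns=10)` is a bounded buffer of recent exchanges: old turns fall off as new ones arrive, keeping the prompt context from growing without limit. A minimal sketch of that behavior (`ConversationBuffer` is a hypothetical stand-in, not the SDK class):

```python
# A bounded conversation buffer: keeps only the last `max_turns` exchanges,
# the same idea ConversationMemory(max_turns=10) expresses in the SDK.
from collections import deque

class ConversationBuffer:
    def __init__(self, max_turns: int = 10):
        # deque with maxlen silently evicts the oldest turn when full
        self.turns = deque(maxlen=max_turns)

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def context(self) -> list[tuple[str, str]]:
        # What gets prepended to the next prompt
        return list(self.turns)
```

Semantic memory differs in that retrieval is by embedding similarity rather than recency, so relevant facts can survive long past the turn window.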
AgentCore Integration
Use Strands with AWS Bedrock AgentCore:
from strands import Agent
from strands.tools import tool
import boto3
agentcore_client = boto3.client('bedrock-agentcore')
@tool
def query_cloudwatch(metric_name: str, namespace: str) -> dict:
    """Query CloudWatch metrics via AgentCore Gateway."""
    return agentcore_client.invoke_tool(
        tool_name="cloudwatch_query",
        parameters={"metric": metric_name, "namespace": namespace}
    )

agent = Agent(
    model="anthropic.claude-3-sonnet",
    tools=[query_cloudwatch]
)
Official Use Cases
Strands is featured in AWS AgentCore samples:
A2A Multi-Agent Incident Response: uses Strands for the monitoring agent
cd amazon-bedrock-agentcore-samples/02-use-cases/A2A-multi-agent-incident-response
# Monitoring agent uses Strands SDK for CloudWatch, logs, metrics
Resources
- Official Samples: https://github.com/awslabs/amazon-bedrock-agentcore-samples
- A2A Use Case: https://github.com/awslabs/amazon-bedrock-agentcore-samples/tree/main/02-use-cases/A2A-multi-agent-incident-response
- Integrations: https://github.com/awslabs/amazon-bedrock-agentcore-samples/tree/main/03-integrations
Source
git clone https://github.com/Makiya1202/ai-agents-skills
# Skill file: skills/aws-strands/SKILL.md
Overview
Strands Agents SDK lets you build model-agnostic AI agents, enabling ReAct reasoning and multi-agent collaboration. It supports deploying production agents on AWS and integrating a variety of LLMs and tools.
How This Skill Works
Define capabilities with Python decorators or TypeScript Tool definitions, then assemble an Agent with a chosen model and tools. Optional strategies like ReActStrategy or a MultiAgentOrchestrator control reasoning steps and agent coordination.
When to Use It
- You need model-agnostic agents that work with any LLM (Anthropic, OpenAI, Bedrock, etc.).
- You’re implementing ReAct-style reasoning to handle complex, multi-step tasks.
- You’re building a system with specialist agents and a central orchestrator (multi-agent).
- You’re deploying production agents on AWS for scalable inference.
- You want to define clear tool interfaces (decorator or class-based) and reusable patterns.
Quick Start
- Step 1: Install the Strands SDK (pip install strands-agents strands-agents-tools or npm install @strands-agents/sdk).
- Step 2: Define a tool (e.g., get_weather) and instantiate an Agent with a model and the tool(s).
- Step 3: Run the agent with a natural language prompt and inspect the response; try ReAct or multi-agent configurations.
Best Practices
- Keep tools small, well-typed, and with clear parameters to avoid ambiguity.
- Use consistent tool definitions (decorator or class style) and document capabilities.
- Use ReActStrategy with a sensible max_iterations cap, and enable verbose for debugging.
- Test end-to-end prompts with real user tasks and monitor tool usage.
- Design with model-agnosticism in mind to swap LLMs without changing tooling.
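"Small, well-typed, clear parameters" can be made concrete with one example. The function below shows the shape: explicit types, a default value, and a docstring the agent can surface as the tool description. It is a plain function for illustration; with the SDK you would wrap it with `@tool`.

```python
# A small, well-typed tool body: explicit parameter types, a safe default,
# and a docstring that doubles as the tool's description for the model.

def convert_temperature(value: float, to_unit: str = "celsius") -> float:
    """Convert a temperature reading between Fahrenheit and Celsius.

    Args:
        value: Temperature expressed in the *other* unit.
        to_unit: Target unit, either "celsius" or "fahrenheit".
    """
    if to_unit == "celsius":
        return round((value - 32) * 5 / 9, 2)
    if to_unit == "fahrenheit":
        return round(value * 9 / 5 + 32, 2)
    raise ValueError(f"unknown unit: {to_unit}")
```

The narrow signature leaves the model little room to pass malformed arguments, and the explicit `ValueError` turns a silent misuse into an observation the agent can react to.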
Example Use Cases
- Weather lookup: a get_weather tool returning current conditions used by a simple Python example.
- TypeScript example defining a weather tool and running an agent with a chosen model.
- Model-agnostic usage showing multiple models like Claude, GPT-4o, Bedrock, or custom endpoints.
- ReAct pattern: an agent reasons through finding top products, calculating their average price, and recommending pricing.
- Multi-Agent System: orchestrating specialist agents (researcher, analyst, writer) to complete a task.