Strands Agents
Open source SDK from AWS for building AI agents with a model-driven approach. Functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent, and built-in observability.
Strands Agents is an open source SDK (Apache 2.0) created by AWS that adopts a model-driven approach to building AI agents. The premise: the language model is the orchestrator — you define tools and a prompt, and the model decides when and how to use them.
Why it matters
Most agent frameworks require defining explicit flows, decision graphs, or step chains. Strands inverts the paradigm:
- No manual orchestration — the LLM decides the flow based on context
- Functional agent in 4 lines — no boilerplate
- Model-agnostic — Bedrock, OpenAI, Anthropic, Ollama, Mistral, Gemini, and more
- Tools as Python functions — add the @tool decorator and done
- Native MCP — connect MCP servers as tool sources
- Built-in observability — traces, metrics, and logs from the start
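The first bullet is the core idea: there is no flow graph, just a loop in which the model decides whether to answer or call a tool. A toy sketch of such a loop in plain Python — `fake_llm` and `run_agent` are illustrative stand-ins, not Strands internals:

```python
def fake_llm(prompt: str, tools: dict) -> dict:
    """Toy stand-in for the model: a real LLM picks tools from context."""
    if "weather" in prompt.lower() and "weather" in tools:
        return {"tool": "weather", "args": {"city": "Madrid"}}
    return {"answer": prompt}

def run_agent(prompt: str, tools: dict) -> str:
    """Minimal model-driven loop: the 'model' decides when to call tools."""
    result = fake_llm(prompt, tools)
    while "tool" in result:
        # Execute the tool the model asked for, feed the result back.
        output = tools[result["tool"]](**result["args"])
        result = fake_llm(f"Tool said: {output}", tools)
    return result["answer"]

tools = {"weather": lambda city: f"Sunny in {city}"}
print(run_agent("What's the weather?", tools))  # Tool said: Sunny in Madrid
```

The framework's job is everything around this loop — tool schemas, retries, streaming, tracing — while the control flow itself stays with the model.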
Minimal example
```python
from strands import Agent

agent = Agent()
agent("What is the capital of France?")
```

That's a functional agent. It uses Bedrock with Claude by default.
Custom tools
```python
from strands import Agent, tool

@tool
def weather(city: str) -> str:
    """Get current weather for a city.

    Args:
        city: Name of the city to check weather for.

    Returns:
        Current weather description.
    """
    # In production, call a real API
    return f"Sunny, 22°C in {city}"

@tool
def convert_temperature(celsius: float) -> float:
    """Convert Celsius to Fahrenheit.

    Args:
        celsius: Temperature in Celsius.

    Returns:
        Temperature in Fahrenheit.
    """
    return celsius * 9/5 + 32

agent = Agent(tools=[weather, convert_temperature])
agent("What's the weather in Madrid? Give me the temperature in Fahrenheit too.")
```

The agent autonomously decides: first it calls weather("Madrid"), then convert_temperature(22.0), and composes the response.
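Because tools are ordinary Python functions, they can be unit-tested without a model in the loop — e.g. the 22°C that weather reports converts to 71.6°F (the function below repeats the tool's formula for a standalone check):

```python
def convert_temperature(celsius: float) -> float:
    """Celsius to Fahrenheit, same formula as the tool above."""
    return celsius * 9 / 5 + 32

# Sanity checks, no agent or model required.
assert abs(convert_temperature(22.0) - 71.6) < 1e-9
assert convert_temperature(0.0) == 32.0
print(round(convert_temperature(22.0), 1))  # 71.6
```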
Model providers
```python
from strands import Agent
from strands.models import BedrockModel

# Amazon Bedrock (default)
agent = Agent(model="anthropic.claude-sonnet-4-20250514-v1:0")

# With explicit configuration
bedrock = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-west-2",
    temperature=0.3,
)
agent = Agent(model=bedrock)

# OpenAI
from strands.models import OpenAIModel
agent = Agent(model=OpenAIModel(model_id="gpt-4o"))

# Anthropic direct
from strands.models import AnthropicModel
agent = Agent(model=AnthropicModel(model_id="claude-sonnet-4-20250514"))

# Ollama (local)
from strands.models import OllamaModel
agent = Agent(model=OllamaModel(model_id="llama3.1"))
```

Supported providers: Amazon Bedrock, Amazon Nova, Anthropic, OpenAI, Gemini, Ollama, Mistral, LiteLLM, LlamaAPI, SageMaker, Writer, llama.cpp, and custom providers.
MCP integration
Connect MCP servers as tool sources:
```python
from strands import Agent
from strands.tools.mcp import MCPClient

# Connect to an MCP server
mcp = MCPClient(command="uvx", args=["my-mcp-server"])

with mcp:
    agent = Agent(tools=mcp.list_tools())
    agent("Use the MCP tools to complete this task")
```

Multi-agent patterns
Agents as Tools
An agent can use other agents as tools:
```python
from strands import Agent

researcher = Agent(
    system_prompt="You are a research specialist.",
    tools=[web_search]
)

writer = Agent(
    system_prompt="You are a technical writer.",
    tools=[researcher.as_tool(
        name="research",
        description="Research a topic thoroughly"
    )]
)

writer("Write an article about quantum computing")
```

Swarm
Multiple agents collaborating with handoffs:
```python
from strands import Agent
from strands.multiagent import Swarm

triage = Agent(system_prompt="Route to the right specialist.")
billing = Agent(system_prompt="Handle billing questions.", tools=[billing_api])
technical = Agent(system_prompt="Handle technical issues.", tools=[diagnostics])

swarm = Swarm(
    agents={"triage": triage, "billing": billing, "technical": technical},
    entry_point="triage"
)

swarm("I can't log in and I was charged twice")
```

Graph
Directed flows with conditions:
```python
from strands.multiagent import Graph

graph = Graph()
graph.add_node("classify", classify_agent)
graph.add_node("respond", respond_agent)
graph.add_node("escalate", escalate_agent)

graph.add_edge("classify", "respond", condition=lambda r: r.priority == "low")
graph.add_edge("classify", "escalate", condition=lambda r: r.priority == "high")

graph.run("Customer complaint about delivery")
```

Observability
Every invocation returns an AgentResult with traces and metrics:
```python
result = agent("What is the square root of 144?")

# Usage metrics
print(result.metrics.get_summary())
# {
#   "total_cycles": 2,
#   "total_duration": 1.88,
#   "accumulated_usage": {"inputTokens": 3921, "outputTokens": 83},
#   "tool_usage": {"calculator": {"call_count": 1, "success_rate": 1.0}}
# }
```

Metrics and traces can be exported via OpenTelemetry to Datadog, Grafana, and other backends.
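The summary is a plain dict, so it can be post-processed directly — for example, to estimate token cost per invocation. The prices below are placeholders, not real Bedrock pricing:

```python
# The summary dict returned by result.metrics.get_summary(), reproduced here.
summary = {
    "total_cycles": 2,
    "total_duration": 1.88,
    "accumulated_usage": {"inputTokens": 3921, "outputTokens": 83},
    "tool_usage": {"calculator": {"call_count": 1, "success_rate": 1.0}},
}

# Placeholder prices per 1K tokens -- substitute your model's real rates.
PRICE_IN, PRICE_OUT = 0.003, 0.015

usage = summary["accumulated_usage"]
cost = (usage["inputTokens"] / 1000 * PRICE_IN
        + usage["outputTokens"] / 1000 * PRICE_OUT)
print(f"~${cost:.4f} across {summary['total_cycles']} cycles")
```

The same dict feeds naturally into logging or dashboards when OpenTelemetry export is overkill.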
Streaming
```python
import asyncio
from strands import Agent

agent = Agent(callback_handler=None)

async def stream():
    async for event in agent.stream_async("Explain quantum computing"):
        if "data" in event:
            print(event["data"], end="", flush=True)

asyncio.run(stream())
```

Structured output
```python
from pydantic import BaseModel
from strands import Agent

class MovieReview(BaseModel):
    title: str
    rating: float
    summary: str

agent = Agent()
result = agent.structured_output(
    "Review the movie Inception",
    output_model=MovieReview
)

print(result.title)   # "Inception"
print(result.rating)  # 9.2
```

Production deployment
Strands agents are standard Python — deploy anywhere:
- AWS Lambda — serverless, pay-per-invocation
- AWS Fargate / ECS — containers
- Amazon EKS — Kubernetes
- Amazon EC2 — instances
- Amazon Bedrock AgentCore — managed agent runtime
- Docker — any container platform
When to choose it
Good fit when:
- You want functional agents with minimal code
- You need model flexibility (switch providers without rewriting)
- You already use AWS and Bedrock
- You need native MCP
- You want observability without extra configuration
- You're taking a project from prototype to production
Consider alternatives when:
- You need strict deterministic flows (LangGraph may be better)
- Your stack is exclusively OpenAI (OpenAI's SDK is more direct)
- You prefer TypeScript as primary language (Strands has a TS SDK but Python is more mature)
Comparison with alternatives
| Aspect | Strands Agents | LangChain/LangGraph | CrewAI | OpenAI SDK |
|---|---|---|---|---|
| Approach | Model-driven | Chain/Graph-driven | Role-based | API-driven |
| Complexity | Low | High | Medium | Low |
| Multi-model | Yes (15+) | Yes | Limited | OpenAI only |
| Native MCP | Yes | Plugin | No | No |
| Multi-agent | Swarm, Graph, A2A | LangGraph | Crews | Swarm (beta) |
| Observability | Built-in | LangSmith (paid) | Limited | Limited |
| License | Apache 2.0 | MIT | MIT | MIT |
| Backing | AWS | LangChain Inc. | CrewAI Inc. | OpenAI |
Ecosystem
- strands-agents — main SDK (Python and TypeScript)
- strands-agents-tools — community tools package
- strands-agents-builder — agent that helps build other agents
- strands-agents-mcp-server — MCP server for IDE assistants
- strands-evals — agent evaluation SDK
References
- Strands Agents Documentation — AWS, 2025. Complete official documentation.
- strands-agents/sdk-python — GitHub, 2025. Python SDK repository.
- Introducing Strands Agents — AWS, 2025. Official launch post.
- Strands Agents 1.0: Production-Ready Multi-Agent Orchestration — AWS Open Source Blog, 2025. Version 1.0 announcement.
- strands-agents on PyPI — PyPI, 2025. Official package with installation instructions.