Open source SDK from AWS for building AI agents with a model-driven approach. Functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent, and built-in observability.
Strands Agents is an open source SDK (Apache 2.0) created by AWS that adopts a model-driven approach to building AI agents. The premise: the language model is the orchestrator — you define tools and a prompt, and the model decides when and how to use them.
Most agent frameworks require defining explicit flows, decision graphs, or step chains. Strands inverts the paradigm:
```python
from strands import Agent

agent = Agent()
agent("What is the capital of France?")
```

That's a functional agent. It uses Amazon Bedrock with Claude by default.

Define tools as plain Python functions with the `@tool` decorator, and you're done:
```python
from strands import Agent, tool

@tool
def weather(city: str) -> str:
    """Get current weather for a city.

    Args:
        city: Name of the city to check weather for.

    Returns:
        Current weather description.
    """
    # In production, call a real API
    return f"Sunny, 22°C in {city}"

@tool
def convert_temperature(celsius: float) -> float:
    """Convert Celsius to Fahrenheit.

    Args:
        celsius: Temperature in Celsius.

    Returns:
        Temperature in Fahrenheit.
    """
    return celsius * 9 / 5 + 32

agent = Agent(tools=[weather, convert_temperature])
agent("What's the weather in Madrid? Give me the temperature in Fahrenheit too.")
```

The agent decides autonomously: it first calls `weather("Madrid")`, then `convert_temperature(22.0)`, and composes the response.
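A minimal sketch of what "the model is the orchestrator" means mechanically, with a scripted fake model standing in for the LLM (all names here are illustrative, not Strands internals): the loop asks the model what to do next, runs the chosen tool, feeds the result back, and stops when the model emits a final answer.

```python
# Conceptual sketch of a model-driven tool loop (not Strands internals).
# A real agent calls an LLM; here fake_model scripts the decisions.

def weather(city: str) -> str:
    return f"Sunny, 22°C in {city}"

def convert_temperature(celsius: float) -> float:
    return celsius * 9 / 5 + 32

TOOLS = {"weather": weather, "convert_temperature": convert_temperature}

def fake_model(history):
    # Decide the next step from what has happened so far.
    if not history:
        return {"tool": "weather", "args": {"city": "Madrid"}}
    if len(history) == 1:
        return {"tool": "convert_temperature", "args": {"celsius": 22.0}}
    return {"answer": f"{history[0][1]} ({history[1][1]}°F)"}

def run_agent(model):
    history = []
    while True:
        step = model(history)
        if "answer" in step:                          # model chose to finish
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # model chose a tool
        history.append((step["tool"], result))

print(run_agent(fake_model))  # Sunny, 22°C in Madrid (71.6°F)
```

The framework owns only the loop; every branching decision lives in the model — that is the inversion relative to chain- or graph-driven frameworks.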
```python
from strands import Agent
from strands.models import BedrockModel

# Amazon Bedrock (default)
agent = Agent(model="anthropic.claude-sonnet-4-20250514-v1:0")

# With explicit configuration
bedrock = BedrockModel(
    model_id="anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-west-2",
    temperature=0.3,
)
agent = Agent(model=bedrock)

# OpenAI
from strands.models import OpenAIModel
agent = Agent(model=OpenAIModel(model_id="gpt-4o"))

# Anthropic direct
from strands.models import AnthropicModel
agent = Agent(model=AnthropicModel(model_id="claude-sonnet-4-20250514"))

# Ollama (local)
from strands.models import OllamaModel
agent = Agent(model=OllamaModel(model_id="llama3.1"))
```

Supported providers: Amazon Bedrock, Amazon Nova, Anthropic, OpenAI, Gemini, Ollama, Mistral, LiteLLM, LlamaAPI, SageMaker, Writer, llama.cpp, and custom providers.
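One way to picture why swapping providers is a one-line change: every model class exposes the same call surface, so the agent never needs to know which backend sits behind it. A stdlib sketch of that duck-typed contract (these classes are illustrative, not the SDK's actual implementations):

```python
# Illustrative provider abstraction (not Strands' actual classes):
# every provider exposes the same generate() surface.

class FakeBedrockModel:
    def __init__(self, model_id: str):
        self.model_id = model_id

    def generate(self, prompt: str) -> str:
        return f"[bedrock:{self.model_id}] {prompt}"

class FakeOllamaModel:
    def __init__(self, model_id: str):
        self.model_id = model_id

    def generate(self, prompt: str) -> str:
        return f"[ollama:{self.model_id}] {prompt}"

class TinyAgent:
    def __init__(self, model):
        self.model = model              # any object with generate()

    def __call__(self, prompt: str) -> str:
        return self.model.generate(prompt)

cloud = TinyAgent(FakeBedrockModel("claude-sonnet"))
local = TinyAgent(FakeOllamaModel("llama3.1"))
print(cloud("hi"))  # [bedrock:claude-sonnet] hi
print(local("hi"))  # [ollama:llama3.1] hi
```

Only the constructor argument changes; the agent's code is identical for cloud and local models.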
Connect MCP servers as tool sources:
```python
from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Connect to an MCP server over stdio
mcp = MCPClient(lambda: stdio_client(
    StdioServerParameters(command="uvx", args=["my-mcp-server"])
))

with mcp:
    agent = Agent(tools=mcp.list_tools_sync())
    agent("Use the MCP tools to complete this task")
```

An agent can use other agents as tools:
```python
from strands import Agent

researcher = Agent(
    system_prompt="You are a research specialist.",
    tools=[web_search]
)

writer = Agent(
    system_prompt="You are a technical writer.",
    tools=[researcher.as_tool(
        name="research",
        description="Research a topic thoroughly"
    )]
)

writer("Write an article about quantum computing")
```

Multiple agents collaborating with handoffs:
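The agents-as-tools idea reduces to: an agent is a callable, so it can sit in another agent's tool list like any other function. A stdlib sketch of that wrapping (all names hypothetical, not SDK code):

```python
# Sketch: exposing one agent as a tool of another (illustrative, not SDK code).

def make_tool(agent, name: str, description: str):
    def tool(task: str) -> str:
        return agent(task)              # delegate to the wrapped agent
    tool.__name__ = name
    tool.__doc__ = description
    return tool

def researcher(task: str) -> str:       # stands in for a full agent
    return f"notes on {task}"

research_tool = make_tool(researcher, "research", "Research a topic thoroughly")

def writer(topic: str) -> str:          # orchestrating agent
    notes = research_tool(topic)        # calls the sub-agent like any tool
    return f"Article using {notes}"

print(writer("quantum computing"))  # Article using notes on quantum computing
```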
```python
from strands import Agent
from strands.multiagent import Swarm

triage = Agent(system_prompt="Route to the right specialist.")
billing = Agent(system_prompt="Handle billing questions.", tools=[billing_api])
technical = Agent(system_prompt="Handle technical issues.", tools=[diagnostics])

swarm = Swarm(
    agents={"triage": triage, "billing": billing, "technical": technical},
    entry_point="triage"
)
swarm("I can't log in and I was charged twice")
```

Directed flows with conditions:
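Handoffs can be pictured as each agent returning either an answer or the name of the next agent; the swarm loops from the entry point until someone answers. A stdlib sketch of that control flow (illustrative, not the Swarm implementation):

```python
# Sketch of swarm-style handoffs (illustrative, not strands.multiagent.Swarm).

def triage(msg):
    if "charged" in msg:
        return {"handoff": "billing"}
    return {"handoff": "technical"}

def billing(msg):
    return {"answer": "Refund issued for the duplicate charge."}

def technical(msg):
    return {"answer": "Password reset link sent."}

AGENTS = {"triage": triage, "billing": billing, "technical": technical}

def run_swarm(message, entry_point="triage"):
    current = entry_point
    while True:
        result = AGENTS[current](message)
        if "answer" in result:
            return current, result["answer"]
        current = result["handoff"]       # hand control to the next agent

print(run_swarm("I was charged twice"))
```

Routing is decided at runtime by the agents themselves, not by a predefined path — the difference from the graph pattern below is exactly that.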
```python
from strands.multiagent import Graph

graph = Graph()
graph.add_node("classify", classify_agent)
graph.add_node("respond", respond_agent)
graph.add_node("escalate", escalate_agent)
graph.add_edge("classify", "respond", condition=lambda r: r.priority == "low")
graph.add_edge("classify", "escalate", condition=lambda r: r.priority == "high")
graph.run("Customer complaint about delivery")
```

Every invocation returns an `AgentResult` with traces and metrics:
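The graph above boils down to a node map plus conditional edges: execution starts at one node and follows the first edge whose condition matches the node's result. A toy executor showing that mechanic (illustrative, not the SDK's `Graph`):

```python
# Toy graph executor with conditional edges (not strands.multiagent.Graph).

class Result:
    def __init__(self, priority, text):
        self.priority = priority
        self.text = text

def classify(msg):
    return Result("high" if "complaint" in msg else "low", msg)

def respond(msg):
    return "standard reply"

def escalate(msg):
    return "escalated to a human"

nodes = {"classify": classify, "respond": respond, "escalate": escalate}
edges = [
    ("classify", "respond", lambda r: r.priority == "low"),
    ("classify", "escalate", lambda r: r.priority == "high"),
]

def run_graph(start, msg):
    result = nodes[start](msg)
    for src, dst, cond in edges:
        if src == start and cond(result):   # follow the first matching edge
            return run_graph(dst, msg)
    return result                           # no outgoing edge: terminal node

print(run_graph("classify", "Customer complaint about delivery"))
```

Unlike the swarm, the possible paths are fixed up front; only the conditions are evaluated at runtime.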
```python
result = agent("What is the square root of 144?")

# Usage metrics
print(result.metrics.get_summary())
# {
#   "total_cycles": 2,
#   "total_duration": 1.88,
#   "accumulated_usage": {"inputTokens": 3921, "outputTokens": 83},
#   "tool_usage": {"calculator": {"call_count": 1, "success_rate": 1.0}}
# }
```

Export to OpenTelemetry for Datadog, Grafana, and other backends.
```python
import asyncio
from strands import Agent

agent = Agent(callback_handler=None)

async def stream():
    async for event in agent.stream_async("Explain quantum computing"):
        if "data" in event:
            print(event["data"], end="", flush=True)

asyncio.run(stream())
```

For typed responses, pass a Pydantic model to `structured_output`:

```python
from pydantic import BaseModel
from strands import Agent

class MovieReview(BaseModel):
    title: str
    rating: float
    summary: str

agent = Agent()
result = agent.structured_output(
    "Review the movie Inception",
    output_model=MovieReview
)
print(result.title)   # "Inception"
print(result.rating)  # 9.2
```

Strands agents are standard Python — deploy anywhere:
Good fit when:
Consider alternatives when:
| Aspect | Strands Agents | LangChain/LangGraph | CrewAI | OpenAI SDK |
|---|---|---|---|---|
| Approach | Model-driven | Chain/Graph-driven | Role-based | API-driven |
| Complexity | Low | High | Medium | Low |
| Multi-model | Yes (15+) | Yes | Limited | OpenAI only |
| Native MCP | Yes | Plugin | No | No |
| Multi-agent | Swarm, Graph, A2A | LangGraph | Crews | Swarm (beta) |
| Observability | Built-in | LangSmith (paid) | Limited | Limited |
| License | Apache 2.0 | MIT | MIT | MIT |
| Backing | AWS | LangChain Inc. | CrewAI Inc. | OpenAI |