Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.
An agentic workflow is a pattern where one or more AI agents execute complex tasks autonomously, making decisions at each step about what action to take, which tool to use, and when to request human intervention. Unlike a fixed pipeline, the agent adapts its behavior based on intermediate results.
The agent evaluates its own output and improves it iteratively:
Generate → Evaluate → Refine → Evaluate → Deliver
Useful for: writing, code generation, analysis.
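As a minimal sketch of this loop (assuming a hypothetical `call_llm` helper standing in for a real model call; the canned replies just keep the example runnable):

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (hypothetical); returns canned
    # text so the sketch runs end to end.
    if prompt.startswith("Score"):
        return "9"
    return f"draft for: {prompt}"

def reflect(task: str, max_rounds: int = 3, threshold: int = 8) -> str:
    """Generate -> Evaluate -> Refine until the draft scores well enough."""
    draft = call_llm(f"Write: {task}")
    for _ in range(max_rounds):
        score = int(call_llm(f"Score this draft from 1 to 10: {draft}"))
        if score >= threshold:
            break  # Good enough: deliver
        draft = call_llm(f"Refine this draft: {draft}")
    return draft

print(reflect("a summary of RAG best practices"))
```

The key design choice is the stopping condition: a score threshold plus a round cap, so the agent neither ships a weak draft nor refines forever.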
The agent decomposes the task into subtasks before executing:
Analyze task → Create plan → Execute step 1 → ... → Step N → Synthesize
Useful for: research, complex tasks with multiple dependencies.
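A plan-then-execute sketch might look like the following (again assuming a hypothetical `call_llm` helper; a real implementation would ask the model for a structured plan, e.g. JSON):

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (hypothetical); canned output keeps
    # the sketch runnable.
    if prompt.startswith("Plan"):
        return "gather sources; extract key points; draft summary"
    return f"done: {prompt}"

def plan_and_execute(task: str) -> str:
    """Analyze task -> Create plan -> Execute each step -> Synthesize."""
    plan = call_llm(f"Plan the steps for: {task}")
    steps = [s.strip() for s in plan.split(";")]
    results = [call_llm(f"Execute step '{s}' of task '{task}'") for s in steps]
    return call_llm("Synthesize these results: " + " | ".join(results))

print(plan_and_execute("write a research brief"))
```

Separating planning from execution lets each step carry its own context, which matters when subtasks depend on one another.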
The agent alternates between reasoning and action:
Think → Act → Observe → Think → Act → Observe → Respond
This is the most common pattern in frameworks like Strands Agents.
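Stripped down to its skeleton (with a hypothetical `call_llm` and a toy tool registry; real frameworks like Strands handle tool dispatch for you), a ReAct loop is:

```python
# Hypothetical tool registry; real frameworks discover tools dynamically.
TOOLS = {"search": lambda q: f"results for {q}"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (hypothetical). Emits an action on
    # the first turn and a final answer once an observation is present.
    if "Observation:" in prompt:
        return "FINAL: answer based on results"
    return "ACTION: search: RAG best practices"

def react(question: str, max_turns: int = 5) -> str:
    """Think -> Act -> Observe, looping until the model answers."""
    prompt = f"Question: {question}"
    for _ in range(max_turns):  # Cap turns to avoid infinite loops
        thought = call_llm(prompt)  # Think
        if thought.startswith("FINAL:"):
            return thought.removeprefix("FINAL:").strip()
        _, tool_name, arg = [p.strip() for p in thought.split(":", 2)]
        observation = TOOLS[tool_name](arg)  # Act
        prompt += f"\nObservation: {observation}"  # Observe
    return "gave up"

print(react("What are RAG best practices?"))
```

Note the `max_turns` cap: without it, a model that keeps emitting actions would loop forever.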
Multiple specialized agents collaborate:
Orchestrator → Research agent → Writer agent → Reviewer agent → Result
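In its simplest form the orchestrator is just a pipeline over specialist agents (the three stubs below are hypothetical placeholders; in practice each would wrap its own model, tools, and system prompt):

```python
# Hypothetical specialist agents; each would wrap its own model call
# and system prompt in a real system.
def research_agent(topic: str) -> str:
    return f"notes on {topic}"

def writer_agent(notes: str) -> str:
    return f"article from ({notes})"

def reviewer_agent(draft: str) -> str:
    return f"approved: {draft}"

def orchestrator(topic: str) -> str:
    """Route the task through research -> writing -> review."""
    notes = research_agent(topic)
    draft = writer_agent(notes)
    return reviewer_agent(draft)

print(orchestrator("agentic workflows"))
```

Real orchestrators add routing logic, e.g. sending a rejected draft back to the writer, but the division of labor is the same.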
The choice of autonomy level depends on task risk and system maturity.
```python
from strands import Agent, tool

@tool
def search_docs(query: str) -> str:
    """Search relevant documents in the knowledge base."""
    # Search implementation (placeholder for illustration)
    results = f"Documents matching '{query}'"
    return results

agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",
    tools=[search_docs],
    system_prompt="You are a research assistant. Use search_docs to find information before answering."
)

response = agent("What are the best practices for RAG?")
```

The agent autonomously decides when to invoke search_docs and how many times to iterate before responding.
An agent without a cap such as max_turns can enter infinite loops.

The Model Context Protocol provides the tool layer that agentic workflows need. Without a standard protocol for discovering and using tools, each workflow requires ad-hoc integrations.
Agentic workflows allow LLMs to move from answering questions to executing complex multi-step tasks. Understanding their patterns — ReAct, planning, reflection — is the difference between building chatbots and building assistants that actually complete work.