Design strategies and patterns for AI agents to select, invoke, and combine external tools effectively to complete complex tasks.
Tool use patterns define how AI agents select, invoke, and combine external tools to solve tasks. While function calling is the technical mechanism and MCP is the discovery protocol, tool use patterns are the higher-level strategies that determine when, how, and in what order to invoke tools.
Definition quality determines whether the model selects the right tool. A definition includes a name, description, and typed input schema.
```json
{
  "type": "function",
  "function": {
    "name": "search_knowledge",
    "description": "Search the knowledge base by query. Returns matching concepts with titles, summaries, and relevance scores. Use when the user asks about a topic.",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "Search query — use specific terms, not full sentences"
        },
        "limit": {
          "type": "integer",
          "description": "Max results to return (default: 5)",
          "default": 5
        }
      },
      "required": ["query"]
    }
  }
}
```

The same tool defined in an MCP server:

```typescript
server.tool("search_knowledge", {
  description: "Search the knowledge base by query. Returns matching concepts with titles, summaries, and relevance scores.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search query" },
      limit: { type: "integer", description: "Max results", default: 5 }
    },
    required: ["query"]
  }
}, async ({ query, limit }) => {
  const results = await searchIndex(query, limit);
  return { content: [{ type: "text", text: JSON.stringify(results) }] };
});
```

The key difference: in function calling, the host executes the function and returns the result to the model. In MCP, the server executes the tool and the protocol handles transport. The definition schema is nearly identical — what changes is who executes and how discovery works.
The simplest case — the agent identifies one tool and invokes it once:
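A minimal sketch of this pattern, with a simulated model decision and a hypothetical host-side tool registry (the tool implementation here is a stand-in, not a real API):

```typescript
// Shape of a tool call as the model would emit it.
type ToolCall = { name: string; arguments: Record<string, unknown> };

// Simulated model output: the model picked one tool and one set of arguments.
const toolCall: ToolCall = {
  name: "search_knowledge",
  arguments: { query: "vector index", limit: 3 },
};

// Host-side registry mapping tool names to implementations (hypothetical).
const tools: Record<string, (args: any) => unknown> = {
  search_knowledge: ({ query, limit }) => [`result for "${query}"`].slice(0, limit),
};

// Single invocation: look up the tool, execute it, and the result
// would then be returned to the model for the final answer.
const result = tools[toolCall.name](toolCall.arguments);
```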
When tools are independent, the agent can invoke them simultaneously, which significantly reduces latency:
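A sketch of parallel invocation with two simulated, independent tools (`getWeather` and `getNews` are hypothetical stand-ins):

```typescript
// Simulated tools: neither depends on the other's output.
const getWeather = async (city: string) => ({ city, tempC: 21 });
const getNews = async (topic: string) => [`headline about ${topic}`];

// Because the calls are independent, the agent issues both at once
// and awaits them together instead of sequentially.
const [weather, news] = await Promise.all([
  getWeather("Madrid"),
  getNews("AI"),
]);
```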
One tool's result feeds the next tool's input. The most common pattern in agentic workflows:
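A chaining sketch where the second call consumes the first call's output (`findUser` and `getOrders` are hypothetical stand-ins):

```typescript
// Simulated tools: the second requires data produced by the first.
const findUser = async (email: string) => ({ id: 42, email });
const getOrders = async (userId: number) => [{ userId, total: 99 }];

// Step 1 resolves the user; step 2 feeds on the resolved id.
const user = await findUser("ana@example.com");
const orders = await getOrders(user.id);
```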
The agent decides which tool to use based on the question's context:
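In practice the model itself makes this choice from the tool definitions in its context; the keyword router below only simulates that decision to make the pattern concrete (tool names are hypothetical):

```typescript
// Stand-in for the model's selection step: map the question's
// context to one of several available tools.
const selectTool = (question: string): string => {
  if (/weather|temperature/i.test(question)) return "get_weather";
  if (/order|invoice/i.test(question)) return "get_orders";
  return "search_knowledge"; // fallback: general knowledge search
};

const chosen = selectTool("What's the temperature in Madrid?");
```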
Tool description quality is what enables the model to make this selection correctly. Ambiguous descriptions produce incorrect selections.
When an agent has access to many tools (10+), selection becomes a prompt engineering problem:
| Strategy | How it works | When to use |
|---|---|---|
| All available | All tools in the model's context | Fewer than 10 tools |
| Category filtering | Group tools by domain, load only relevant ones | 10-50 tools |
| Two-step | First LLM call selects tools, second uses them | 50+ tools |
| Hierarchical description | "Meta" tools that expose sub-tools | Complex systems with multiple domains |
Category filtering is the most practical for most systems. Example: a technical support agent loads "database" tools only when it detects a data question, and "deployment" tools only for infrastructure questions.
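A sketch of category filtering under that support-agent scenario; the categories, tool names, and detection heuristic are all hypothetical:

```typescript
// Tools grouped by domain; only the relevant group enters the prompt.
const toolCategories: Record<string, string[]> = {
  database: ["run_query", "describe_table"],
  deployment: ["deploy_service", "rollback"],
  support: ["search_tickets", "create_ticket"],
};

// Stand-in for category detection (could itself be a cheap LLM call).
const detectCategory = (question: string): string =>
  /deploy|rollout|infra/i.test(question) ? "deployment"
  : /table|query|data/i.test(question) ? "database"
  : "support";

// Only the matching category's tools get loaded into context.
const loaded = toolCategories[detectCategory("Why did the deploy fail?")];
```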
Tool errors are inevitable. The agent needs strategies to handle them without failing silently:
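One common strategy is retry with a structured error result, sketched below with a simulated flaky tool; `invokeWithRetry` is an illustrative helper, not a library API:

```typescript
// Retry the tool a bounded number of times; on final failure, return a
// structured error the model can reason about instead of failing silently.
async function invokeWithRetry<T>(
  tool: () => Promise<T>,
  retries = 2,
): Promise<{ ok: true; value: T } | { ok: false; error: string }> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return { ok: true, value: await tool() };
    } catch (err) {
      if (attempt === retries) return { ok: false, error: String(err) };
    }
  }
  return { ok: false, error: "unreachable" };
}

// Simulated tool that fails once, then succeeds.
let calls = 0;
const flaky = async () => {
  if (++calls < 2) throw new Error("timeout");
  return "data";
};
const outcome = await invokeWithRetry(flaky);
```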
The most advanced patterns combine multiple tools in complex flows:
Linear sequence where each step transforms the previous result:
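A pipeline sketch with three simulated stages standing in for tool invocations:

```typescript
// Each stage transforms the previous stage's output.
const extract = (doc: string) => doc.split(" ");
const keepLong = (words: string[]) => words.filter((w) => w.length > 3);
const summarize = (words: string[]) => words.join(", ");

// Thread the result through the stages in order.
const summary = summarize(keepLong(extract("tool use patterns for agents")));
```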
Distribute work in parallel and aggregate results:
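A fan-out sketch: the same simulated tool runs over many inputs in parallel, then an aggregation step combines the partial results (`scoreDocument` is a hypothetical stand-in):

```typescript
// Simulated per-item tool call.
const scoreDocument = async (doc: string) => doc.length;

const docs = ["short", "a longer document", "mid-size"];
// Fan out: one call per document, all in flight at once.
const scores = await Promise.all(docs.map(scoreDocument));
// Aggregate: reduce the partial results into a single answer.
const total = scores.reduce((a, b) => a + b, 0);
```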
The agent iterates until meeting a criterion:
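An iterative-refinement sketch with a simulated tool and a quality threshold as the stopping criterion; a step budget guards against non-termination:

```typescript
// Simulated tool: each invocation improves a quality score.
let quality = 0;
const improveDraft = async () => { quality += 0.4; return quality; };

const maxSteps = 10;
let steps = 0;
// Act, observe the result, and decide whether more work is needed.
while (quality < 1 && steps < maxSteps) {
  await improveDraft();
  steps++;
}
```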
This pattern is the foundation of the "agent loop" in frameworks like Strands Agents — the model reasons, invokes tools, evaluates results, and decides if it needs more information.
| Aspect | Function calling | MCP |
|---|---|---|
| Definition | In the model's prompt or API | In the MCP server |
| Discovery | Static — defined by the developer | Dynamic — `tools/list` on connect |
| Execution | The host executes the function | The MCP server executes |
| Portability | Provider-specific (OpenAI, Bedrock, etc.) | Universal — any MCP client |
| State | Stateless between calls | Can maintain state in the server |
| Use case | Simple tools, integrated in the app | Shared tools across multiple AI clients |
In practice, function calling is sufficient for an application's internal tools. MCP is necessary when tools must be accessible from multiple AI clients or when dynamic discovery matters.
Tool use patterns are what turns an LLM into an agent capable of acting in the real world. Without tools, a model can only generate text. With well-designed tools and correct invocation patterns, it can query databases, execute code, interact with APIs, and orchestrate complex flows.
The difference between a reliable agent and a fragile one is in the details: precise descriptions that guide selection, typed schemas that prevent input errors, error handling that degrades gracefully, and composition that enables solving tasks no single tool can complete. These patterns are transferable across providers — whether it's Bedrock function calling, the Anthropic API, or an MCP server.
Related concepts:

- **AI agents:** Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.
- **Model Context Protocol (MCP):** Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.
- **Function calling:** LLM capability to generate structured calls to external functions based on natural language, enabling integration with APIs, databases, and real-world tools.
- **Agentic patterns:** Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.
- **Prompt engineering:** The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.