The capability of an LLM to generate structured calls to external functions from natural language, enabling integration with APIs, databases, and real-world tools.
Function calling is the capability of an LLM to decide when to invoke an external function and generate the necessary arguments in structured format (typically JSON). The model doesn't execute the function — it generates the call specification that the host system executes.
This capability transforms LLMs from text generators into action orchestrators.
User: "What's the weather in Madrid?"

Model generates:

```json
{
  "function": "get_weather",
  "arguments": { "city": "Madrid", "units": "celsius" }
}
```

System executes `get_weather("Madrid", "celsius")` → `"18°C, partly cloudy"`

Model responds: "In Madrid it's 18°C with partly cloudy skies."
```python
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "get_weather",
    "description": "Gets the current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
        },
        "required": ["city"]
    }
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}]
)

# response.content includes a tool_use block with name and input
```

| Feature | OpenAI | Anthropic | Gemini | Bedrock |
|---|---|---|---|---|
| Parallel calls | Yes | Yes | Yes | Model-dependent |
| Tool streaming | Yes | Yes | Yes | Yes |
| Forced mode | tool_choice: required | tool_choice: any | Mode configuration | Via toolChoice |
| Schema format | JSON Schema | JSON Schema | Protobuf or JSON | JSON Schema |
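The request above only yields the model's `tool_use` block; the host still has to run the function and send the result back as a `tool_result` message. A minimal dispatch sketch, where the weather lookup is a hypothetical stub and the block shapes follow the Anthropic Messages API:

```python
# Host-side dispatch: map tool names to local functions, execute the
# model's tool_use request, and build the tool_result block to send
# back to the model on the next API call.

def get_weather(city: str, units: str = "celsius") -> str:
    # Hypothetical stub; a real implementation would call a weather API.
    return f"18°{'C' if units == 'celsius' else 'F'}, partly cloudy"

TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_use(block: dict) -> dict:
    """Run one tool_use block and wrap the output as a tool_result block."""
    fn = TOOL_REGISTRY[block["name"]]
    output = fn(**block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": block["id"],
        "content": output,
    }

# Shape of a tool_use block as it appears in response.content:
tool_use = {
    "type": "tool_use",
    "id": "toolu_123",
    "name": "get_weather",
    "input": {"city": "Madrid", "units": "celsius"},
}

result = execute_tool_use(tool_use)
print(result["content"])  # → 18°C, partly cloudy
```

The `tool_result` block goes back to the model inside a user-role message, after which the model produces the final natural-language answer.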
Function description quality determines model accuracy:

- Descriptive names: `search_knowledge_base` is better than `search`.
- Constrained values: `"enum": ["celsius", "fahrenheit"]` instead of an open-ended `"type": "string"`.

The Model Context Protocol (MCP) standardizes how models discover and call tools. Function calling is the underlying mechanism; MCP is the protocol that makes it interoperable across different systems.
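Constrained schemas also let the host validate arguments before executing anything. A hand-rolled sketch of such a check against a schema like the one above (production code would use a full JSON Schema validator):

```python
def validate_args(schema: dict, args: dict) -> list[str]:
    """Return violations of a simplified JSON Schema (required, type, enum)."""
    errors = []
    props = schema.get("properties", {})
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for key, value in args.items():
        spec = props.get(key)
        if spec is None:
            errors.append(f"unexpected field: {key}")
            continue
        if spec.get("type") == "string" and not isinstance(value, str):
            errors.append(f"{key} must be a string")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key} must be one of {spec['enum']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city"],
}

# An enum catches a value the model should never send:
print(validate_args(schema, {"city": "Madrid", "units": "kelvin"}))
# → ["units must be one of ['celsius', 'fahrenheit']"]
```

Rejecting a bad call before execution is cheaper and safer than letting an invalid argument reach the downstream API.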
Function calling is the mechanism that turns LLMs from text generators into agents that interact with the real world. Without it, models can only respond with text. With it, they can query databases, call APIs, and execute concrete actions.