Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.
The Model Context Protocol (MCP) is an open standard launched by Anthropic in November 2024 that defines how AI applications connect with external tools and data sources. It is to the AI ecosystem what the Language Server Protocol (LSP) was for code editors: a universal interface that eliminates the need for point-to-point integrations.
Before MCP, every AI tool needed its own custom integration with each external service: M tools and N services meant M × N integrations, so 10 tools and 10 services required 100 of them. With MCP, each service implements an MCP server once, and any MCP client can connect to it, reducing the problem to M + N implementations.
MCP uses JSON-RPC 2.0 messages and defines three core primitives that servers expose:
**Tools** — functions the model can invoke. Each tool has a name, description, and typed input schema. The model decides when and how to use them. For example:
```json
{
  "name": "create_todo",
  "description": "Create a new todo item",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" }
    },
    "required": ["title"]
  }
}
```

**Resources** — data the server exposes to the model: files, database records, documentation. Unlike tools, resources are data the model reads but doesn't modify.
**Prompts** — instruction templates the server can offer to the host to guide user interaction with the model.
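On the wire, every exchange between client and server is a JSON-RPC 2.0 message. A client invoking the `create_todo` tool above would send a `tools/call` request shaped roughly like this (the `id` value is arbitrary, and the example title is made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_todo",
    "arguments": { "title": "Draft the release notes" }
  }
}
```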
MCP supports two transport mechanisms:
| Transport | Use | Limitation |
|---|---|---|
| stdio | Local development. The host runs the server as a child process. | Single client per server. |
| Streamable HTTP (supersedes the earlier HTTP + SSE transport) | Production. Remote connection with Server-Sent Events for streaming. | Requires authentication (OAuth 2.1 since the 2025 spec). |
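As a concrete illustration of the stdio transport, hosts such as Claude Desktop launch local servers as child processes from a configuration file. A sketch of such an entry, using the git-metrics server mentioned later in this page (the `"git-metrics"` key is an arbitrary label):

```json
{
  "mcpServers": {
    "git-metrics": {
      "command": "npx",
      "args": ["-y", "@jonmatum/git-metrics-mcp-server"]
    }
  }
}
```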
An MCP server registers its tools when the connection starts. The host discovers available capabilities via tools/list and presents the tools to the model as invocable functions:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Placeholder for an app-specific search backend.
declare function searchIndex(query: string): Promise<unknown[]>;

const server = new McpServer({ name: "docs-server", version: "1.0.0" });

// Register a tool: name, description, input schema (Zod), and handler.
server.tool(
  "search_docs",
  "Search documentation by query",
  { query: z.string().describe("Search query") },
  async ({ query }) => {
    const results = await searchIndex(query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
);
```

Tools can also be updated at runtime: the server notifies the client via notifications/tools/list_changed when the available tools change.
Since the 2025 specification, remote servers require OAuth 2.1 with PKCE. The authorization flow follows the standard:
1. The client discovers the server's authorization metadata (published at /.well-known/oauth-authorization-server).
2. The client obtains credentials (via pre-registration or dynamic client registration) and starts an authorization code flow with PKCE.
3. The user grants access, the client exchanges the authorization code for an access token, and attaches that token to subsequent requests.

MCP solves a fundamental problem: integration fragmentation in the AI ecosystem. Without a standard, every combination of AI tool and external service requires custom code. With MCP, each service implements one server, each AI application implements one client, and every pairing works.
This is especially relevant for AI agents that need to access multiple tools dynamically.
Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.
Field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.
Development methodology where the specification is written before the code, serving as a contract between teams and as the source of truth for implementation.
Patterns and frameworks for coordinating multiple AI models, tools, and data sources in production pipelines, managing flow between components, memory, and error recovery.
Open source SDK from AWS for building AI agents with a model-driven approach. Functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent, and built-in observability.
Architecture design for scaling a personal second brain to a production system with AWS serverless — from the current prototype to specialized use cases in legal, research, and community building.
Chronicle of building a second brain with a knowledge graph, bilingual pipeline, and agent endpoints — in days, not weeks, and what that teaches about the gap between theory and working systems.
Production-ready serverless backend for a personal knowledge graph — DynamoDB, Lambda, Bedrock, MCP, Step Functions. The implementation of the architecture described in the 'From Prototype to Production' essay.
Demonstration of dual-interface architecture where the same business logic serves both a traditional web application and an MCP server for AI tools.
MCP server for analyzing git repository metrics and understanding team health. Published on npm as @jonmatum/git-metrics-mcp-server.
Design strategies and patterns for AI agents to select, invoke, and combine external tools effectively to complete complex tasks.
The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.
Proposed standard for publishing a Markdown file at a website's root that enables language models to efficiently understand and use the site's content at inference time.
LLM capability to generate structured calls to external functions based on natural language, enabling integration with APIs, databases, and real-world tools.
Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.