Jonatan Mata · jonmatum.com
© 2026 Jonatan Mata. All rights reserved.
Concepts

Model Context Protocol (MCP)

Open protocol created by Anthropic that standardizes how AI applications connect with external tools, data, and services through a universal interface.

growing · #mcp #protocol #ai-tools #anthropic #json-rpc #open-standard #interoperability

What it is

The Model Context Protocol (MCP) is an open standard launched by Anthropic in November 2024 that defines how AI applications connect with external tools and data sources. It is to the AI ecosystem what the Language Server Protocol (LSP) was for code editors: a universal interface that eliminates the need for point-to-point integrations.

Before MCP, every AI tool needed its own custom integration with each external service. If you had 10 AI tools and 10 services, you needed 100 integrations. With MCP, each service implements an MCP server once, and any MCP client can connect to it.

Architecture

MCP uses JSON-RPC 2.0 messages and defines three roles:

  • Host: the AI application that initiates the connection (e.g., Claude Desktop, Kiro CLI, an IDE)
  • Client: the connector within the host that manages communication with a specific server
  • Server: the service that exposes tools, resources, and context to the model
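On the wire, every exchange between client and server is a JSON-RPC 2.0 message. A minimal sketch of the initialize request a client sends when the connection opens (the client name and version are illustrative; the protocolVersion string depends on the spec revision in use):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": {} },
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

The server answers with its own capabilities, which is how the client learns whether tools, resources, or prompts are available before requesting any of them.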

Core capabilities

Tools

Functions the model can invoke. Each tool has a name, description, and typed input schema. The model decides when and how to use them.

{
  "name": "create_todo",
  "description": "Create a new todo item",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" }
    },
    "required": ["title"]
  }
}
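When the model decides to use the tool, the host sends a tools/call request with arguments matching the input schema, and the server replies with content blocks. A sketch with illustrative values:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "create_todo",
    "arguments": { "title": "Write MCP notes" }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{ "type": "text", "text": "Created todo: Write MCP notes" }]
  }
}
```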

Resources

Data the server exposes to the model: files, database records, documentation. Unlike tools, resources are data the model reads but doesn't modify.
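Resources are addressed by URI. A server might describe one like this in a resources/list response (the URI and fields here are illustrative):

```json
{
  "uri": "file:///project/README.md",
  "name": "README.md",
  "description": "Project overview",
  "mimeType": "text/markdown"
}
```

The client then fetches the contents with a resources/read request for that URI.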

Prompts

Instruction templates the server can offer to the host to guide user interaction with the model.
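In the protocol, a prompt is listed with a name, description, and optional arguments, which the host can surface as a slash command or menu entry. A sketch with hypothetical names:

```json
{
  "name": "summarize_notes",
  "description": "Summarize the selected notes",
  "arguments": [
    { "name": "style", "description": "Summary style (brief or detailed)", "required": false }
  ]
}
```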

Transports

MCP supports two transport mechanisms:

Transport | Use | Limitation
stdio | Local development. The host runs the server as a child process. | Single client per server.
Streamable HTTP | Production. Remote connection, with optional Server-Sent Events for streaming. | Requires authentication (OAuth 2.1 since the 2025 spec).

The original HTTP + SSE transport from the first spec was superseded by Streamable HTTP in the 2025-03 revision.
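For the stdio transport, the host typically launches the server from a config file. For example, Claude Desktop's claude_desktop_config.json can register the Git Metrics server mentioned in the related content below (the "git-metrics" key is an arbitrary label chosen by the user):

```json
{
  "mcpServers": {
    "git-metrics": {
      "command": "npx",
      "args": ["-y", "@jonmatum/git-metrics-mcp-server"]
    }
  }
}
```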

Specification evolution

  • 2024-11: initial launch. Tools, resources, prompts, stdio transport.
  • 2025-03: Streamable HTTP transport replaces HTTP + SSE, OAuth 2.1-based authorization framework, tool annotations.
  • 2025-06: servers classified as OAuth resource servers, resource indicators (RFC 8707), security improvements.
  • 2025-11: mandatory OAuth 2.1 with PKCE, async execution, client metadata, enterprise readiness.

Server implementation patterns

Dynamic tool registration

An MCP server registers its tools when the connection starts. The host discovers available capabilities via tools/list and presents the tools to the model as invocable functions:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-server", version: "1.0.0" });

server.tool(
  "search_docs",
  "Search documentation by query",
  { query: z.string().describe("Search query") },
  async ({ query }) => {
    // searchIndex is the server's own search backend
    const results = await searchIndex(query);
    return { content: [{ type: "text", text: JSON.stringify(results) }] };
  }
);

Tools can be updated at runtime — the server notifies the client via notifications/tools/list_changed when available tools change.
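That notification is itself a JSON-RPC message without an id (notifications expect no response); the client typically reacts by calling tools/list again:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```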

Common server patterns

  • API wrapper: exposes an existing REST API as MCP tools with typed schemas
  • Database: offers queries and CRUD operations as tools, with resources for schemas and data
  • File system: controlled access to local files with granular permissions
  • Data pipeline: combines transformation tools with resources that expose datasets

Security

Since the 2025 specification, remote servers require OAuth 2.1 with PKCE. The authorization flow follows the standard:

  1. The client discovers server metadata (/.well-known/oauth-authorization-server)
  2. Initiates the authorization flow with PKCE
  3. Obtains an access token
  4. Includes the token in every request to the server
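The PKCE part of step 2 can be sketched with Node's crypto module. The authorization endpoint and client id below are hypothetical; a real client discovers them from the server metadata in step 1:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Encode bytes as base64url (no padding), as PKCE requires
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Random secret the client keeps and later sends in the token exchange
const codeVerifier = base64url(randomBytes(32));

// SHA-256 hash of the verifier (the S256 method), sent in the authorization request
const codeChallenge = base64url(
  createHash("sha256").update(codeVerifier).digest()
);

const authUrl = new URL("https://auth.example.com/authorize"); // hypothetical endpoint
authUrl.searchParams.set("response_type", "code");
authUrl.searchParams.set("client_id", "my-mcp-client");        // hypothetical client id
authUrl.searchParams.set("code_challenge", codeChallenge);
authUrl.searchParams.set("code_challenge_method", "S256");
```

Because only the hash travels in the authorization request, an attacker who intercepts the authorization code still cannot redeem it without the original verifier.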

Why it matters

MCP solves a fundamental problem: integration fragmentation in the AI ecosystem. Without a standard, every combination of AI tool + external service requires custom code. With MCP:

  • Tool developers implement a server once
  • AI application developers implement a client once
  • Any client can connect to any server
  • Tools are discoverable and self-documented

This is especially relevant for AI agents that need to access multiple tools dynamically.

References

  • Model Context Protocol — Specification — Official and authoritative protocol specification.
  • Introducing the Model Context Protocol — Original Anthropic announcement, November 2024.
  • MCP GitHub Organization — Official SDKs for TypeScript and Python.
  • Python MCP SDK — Python reference implementation.
  • TypeScript MCP SDK — TypeScript reference implementation.

Related content

  • AI Agents

    Autonomous systems that combine language models with reasoning, memory, and tool use to execute complex multi-step tasks with minimal human intervention.

  • Artificial Intelligence

    Field of computer science dedicated to creating systems capable of performing tasks that normally require human intelligence, from reasoning and perception to language generation.

  • Spec-Driven Development

    Development methodology where the specification is written before the code, serving as a contract between teams and as the source of truth for implementation.

  • AI Orchestration

    Patterns and frameworks for coordinating multiple AI models, tools, and data sources in production pipelines, managing flow between components, memory, and error recovery.

  • Strands Agents

    Open source SDK from AWS for building AI agents with a model-driven approach. Functional agents in a few lines of code, with multi-model support, custom tools, MCP, multi-agent, and built-in observability.

  • From Prototype to Production: A Serverless Second Brain on AWS

    Architecture design for scaling a personal second brain to a production system with AWS serverless — from the current prototype to specialized use cases in legal, research, and community building.

  • Building a Second Brain in Public

    Chronicle of building a second brain with a knowledge graph, bilingual pipeline, and agent endpoints — in days, not weeks, and what that teaches about the gap between theory and working systems.

  • Serverless Second Brain

    Production-ready serverless backend for a personal knowledge graph — DynamoDB, Lambda, Bedrock, MCP, Step Functions. The implementation of the architecture described in the 'From Prototype to Production' essay.

  • MCP Dual Interface Demo

    Demonstration of dual-interface architecture where the same business logic serves both a traditional web application and an MCP server for AI tools.

  • Git Metrics MCP Server

    MCP server for analyzing git repository metrics and understanding team health. Published on npm as @jonmatum/git-metrics-mcp-server.

  • Tool Use Patterns

    Design strategies and patterns for AI agents to select, invoke, and combine external tools effectively to complete complex tasks.

  • Prompt Engineering

    The discipline of designing effective instructions for language models, combining clarity, structure, and examples to obtain consistent, high-quality responses.

  • llms.txt

    Proposed standard for publishing a Markdown file at a website's root that enables language models to efficiently understand and use the site's content at inference time.

  • Function Calling

    LLM capability to generate structured calls to external functions based on natural language, enabling integration with APIs, databases, and real-world tools.

  • Agentic Workflows

    Design patterns where AI agents execute complex multi-step tasks autonomously, combining reasoning, tool use, and iterative decision-making.
