The Model Context Protocol (MCP) is Anthropic's open standard for connecting AI agents to tools, data, and prompts. It speaks JSON-RPC over stdio, SSE, or streamable HTTP. Released in late 2024, it has been broadly adopted across Claude Desktop, Cursor, Windsurf, ADK, the OpenAI Agents SDK, and more. By 2026 it is the lingua franca of agent tooling: write a tool once as an MCP server, and every framework and every model can call it.
MCP is a standardised way for an AI agent to discover and call tools (function calls), read resources (files, database rows, API responses), and reuse prompts (named, parameterised templates) hosted by an external server. The agent doesn't care what's behind the server — it could be a Python script reading the local filesystem, a Cloudflare Worker hitting a SaaS API, a TypeScript service over Redis, or a database access layer over Postgres. The agent talks to all of them through one protocol.
The thesis is simple: every team writes tool integrations. Without a protocol, every framework reinvents its own way of describing tools (LangChain's Tool, OpenAI's function, ADK's tool spec, Claude's tool_use). With MCP, you write one server in one language, and any MCP-compatible client can use it. The Cursor IDE, Claude Desktop, ADK, and OpenAI Agents SDK all speak MCP — the same server works for all of them.
Released by Anthropic in November 2024, MCP went from "Anthropic spec" to "ecosystem-wide standard" in roughly a year. Cursor, Windsurf, Sourcegraph, and Continue.dev adopted it for IDE-side tool calling. Google's ADK consumes MCP servers via MCPToolset. OpenAI's Agents SDK includes an MCP client. The open-source registry at mcp.so indexes thousands of community-built servers as of 2026.
- Tools: functions the agent can call, with structured input/output schemas. Like an OpenAPI endpoint, but described JSON-Schema-first.
- Resources: read-only data the agent can fetch (a file, a Postgres query result, an HTTP response). The agent decides when to read; the server owns retrieval.
- Prompts: named, parameterised prompt templates the server provides. Less commonly used, but useful for "the right way to ask" patterns the server author knows about.
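On the wire, each primitive has its own discovery call. A sketch of the three list requests (method names follow the MCP spec; the `id` values are illustrative):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
{"jsonrpc": "2.0", "id": 2, "method": "resources/list"}
{"jsonrpc": "2.0", "id": 3, "method": "prompts/list"}
```

A client calls these once after the initial handshake, then knows everything the server offers.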
The protocol is JSON-RPC 2.0 messages over one of three transports. Pick the one that matches your deployment. The application-layer messages are identical regardless of transport — client and server agree on a transport at startup, then speak MCP over it.
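To make "JSON-RPC 2.0 regardless of transport" concrete, here is a stdlib-only sketch of the envelope an MCP message travels in. The `tools/call` method and its `name`/`arguments` params follow the spec; the `jsonrpc_request` helper itself is hypothetical, not part of any SDK:

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Wrap an MCP method call in a JSON-RPC 2.0 envelope (hypothetical helper)."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "method": method, "params": params})

# The exact same message is valid over stdio, SSE, or streamable HTTP;
# only the byte-shuttling underneath differs.
msg = jsonrpc_request(1, "tools/call", {
    "name": "get_weather",
    "arguments": {"city": "Cape Town", "country": "ZA"},
})
```

Over stdio this string is written to the server's stdin as one line; over the HTTP transports it is the body of a POST.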
- stdio: the server runs as a child process; the client communicates via stdin/stdout. The default for desktop clients (Claude Desktop, Cursor). Simple, fast, no networking.
- SSE: Server-Sent Events over HTTP. The server pushes JSON-RPC messages; the client posts back via separate HTTP requests. Used for hosted servers with long-running connections.
- Streamable HTTP: the newer transport, plain HTTP with optional streaming. Replaces SSE in the MCP 2025-03-26 spec. Same semantics, a simpler firewall story, better for production hosting.
A minimal MCP server in Python — the canonical "weather" example using the official SDK:
```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str, country: str = "ZA") -> dict:
    """Get current weather for a city. Returns temp, conditions, wind."""
    # real implementation calls a weather API
    return {"temp_c": 22, "conditions": "clear", "wind_kph": 14}

@mcp.resource("weather://history/{city}")
def history(city: str) -> str:
    """Read weather history for a city."""
    return read_history_csv(city)  # server's own helper, elided here

if __name__ == "__main__":
    mcp.run()  # defaults to stdio
```
A minimal MCP server in TypeScript — same shape, same protocol, different language:
```typescript
// npm i @modelcontextprotocol/sdk zod
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather", version: "1.0.0" });

server.tool(
  "get_weather",
  { city: z.string(), country: z.string().optional() },
  async ({ city, country = "ZA" }) => ({
    content: [{ type: "text", text: JSON.stringify({ temp_c: 22 }) }],
  })
);

await server.connect(new StdioServerTransport());
```
Consuming an MCP server from ADK:
```python
from google.adk.agents import Agent
from google.adk.tools.mcp_tool import MCPToolset, StdioServerParameters

weather_tools = MCPToolset(
    connection_params=StdioServerParameters(
        command="python",
        args=["weather_server.py"],
    )
)

agent = Agent(name="forecaster", tools=[weather_tools])
```
The same weather_server.py can be:

- registered in Claude Desktop's mcp.json,
- configured in Cursor's MCP settings,
- consumed by ADK via MCPToolset,
- called from the OpenAI Agents SDK via its MCP client,
- used by Continue.dev for IDE chat.

One implementation, every consumer. That's the structural shift MCP made: tools used to be framework-specific; now they're protocol-specific.
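For the desktop clients, registration is just a config entry. A sketch of the common `mcpServers` shape (file name and location vary by client; Claude Desktop reads claude_desktop_config.json, Cursor reads .cursor/mcp.json):

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    }
  }
}
```

The client spawns the command as a child process and speaks MCP to it over stdio.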
MCP adoption is broad and still accelerating. The clients that matter span IDEs (Cursor, Windsurf, Continue), desktop AI apps (Claude Desktop, ChatGPT), and agent frameworks (ADK, OpenAI Agents SDK, Anthropic Agent SDK). The server side has thousands of community implementations covering everything from Postgres access to Linear ticket management to Cloudflare Worker deploys.
| Client | MCP support | Notes |
|---|---|---|
| Claude Desktop | Native | Origin client; reference implementation. Stdio + SSE. |
| Cursor | Native | Per-workspace MCP config; instant tool access in chat. |
| Windsurf | Native | Codeium's editor; MCP support since 2025. |
| Continue.dev | Native | Open-source IDE assistant; MCP for tool integration. |
| Google ADK | MCPToolset | Wraps any MCP server as ADK tools. |
| OpenAI Agents SDK | Native MCP client | OpenAI made MCP a first-class part of the agent SDK in 2025. |
| Anthropic Agent SDK | Native | MCP is the canonical tool layer for Claude agents. |
| LangGraph | Community adapter | Wraps MCP servers as LangChain tools. |
Once a protocol has thousands of servers indexed and discoverable (mcp.so, GitHub topic mcp-server), adoption compounds. New AI clients don't have to recreate the integration ecosystem; they just speak MCP and inherit it. That's why every credible IDE-AI and agent-framework launched since mid-2025 supports MCP out of the box.
The two open agent protocols cover different boundaries. Most production agent stacks use both:
| Concern | MCP | A2A |
|---|---|---|
| What it connects | Agent ↔ tools / data | Agent ↔ agent |
| Origin | Anthropic (Nov 2024) | Google (2025), now Linux Foundation |
| Transport | stdio, SSE, Streamable HTTP | JSON-RPC over HTTP |
| Discovery | Client registers server explicitly | Agent Card at well-known URL |
| Statefulness | Per-session, server-side | Per-task, with explicit lifecycle |
| Best fit | Letting agents use your tools / read your data | Letting agents call other agents (cross-framework) |
An orchestrator agent (ADK or LangGraph) calls specialist agents via A2A when they live in other processes. The same orchestrator and specialists use MCP to access internal tools, customer data, or external APIs. A2A is for delegation; MCP is for capability. Most production agents end up with both wired in.
Data residency. MCP servers run wherever you put them. A POPIA-sensitive workload can have its database tool run as an MCP server on the SA-hosted machine that owns the database; the agent (which might be calling Claude or Gemini in the US) only sees the tool's response, not the underlying data. The protocol boundary is the privacy boundary — you choose what flows out of the server response.
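A sketch of that boundary in tool code, assuming the server runs next to the SA-hosted database. The `summarise_orders` helper and the in-memory rows are hypothetical; in a real server, this function would be registered with `@mcp.tool()` and the rows would come from a local Postgres query:

```python
def summarise_orders(rows: list[dict]) -> dict:
    """Return aggregates only: customer identifiers never leave the server."""
    total_zar = sum(r["amount_zar"] for r in rows)
    return {"order_count": len(rows), "total_zar": total_zar}

# Stand-in for rows fetched from the locally hosted database.
rows = [
    {"customer_id": "c-001", "amount_zar": 150.0},
    {"customer_id": "c-002", "amount_zar": 320.0},
]

# Only this summary dict crosses the MCP boundary to the remote model;
# the raw rows stay on the machine that owns the database.
summary = summarise_orders(rows)
```

The design choice is that the server, not the model, decides what appears in the tool response: redaction and aggregation happen before anything hits the wire.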
Local-first MCP servers. Many practical MCP servers are tiny scripts that run on the developer's laptop or a single Hetzner VPS. mcp-server-sqlite for read-only DB queries, mcp-server-filesystem for file access, mcp-server-postgres for managed databases. None of them requires GCP or AWS; all of them work over stdio for zero-network setups.
Cost: MCP itself is free — it's a spec. Hosting an MCP server costs whatever your CPU + RAM cost. For SA teams managing FX exposure on agent platforms, MCP is the seam where you can keep tools local while the model runs cloud — or vice versa, with Ollama-hosted local models calling cloud-hosted MCP servers.
MCPToolset. The reference for "ADK agent calls MCP server".