A2A Protocol
agents · A2A Protocol · Skill Leaf

Agents talk. Whatever framework they're written in.

The Agent2Agent (A2A) Protocol is the open standard that lets agents from different frameworks call each other. JSON-RPC over HTTP, Agent Card discovery at /.well-known/agent-card.json, native streaming, push notifications. Originally released by Google, now governed by the Linux Foundation. The interop layer that turns "ADK + LangGraph + CrewAI in production" from an aspiration into a deployment.

Live spec · Linux Foundation · JSON-RPC 2.0 over HTTP · Agent Card discovery · Streaming + push

An open protocol, not another framework.

A2A is a wire protocol, not a framework. It defines how an agent in one process announces what it can do (Agent Card), how a remote caller discovers and invokes those capabilities (JSON-RPC over HTTP), and how long-running interactions are handled (streaming responses, push notifications, task lifecycle states). It does not define what the agent is, how it reasons, or what tools it uses internally — those are framework concerns.

The thesis is simple: agentic workloads in production will span multiple frameworks. One team builds with Google ADK because they're on GCP. Another team uses LangGraph because they need explicit graph control. A third runs CrewAI for fast prototypes. Without a shared protocol, integrating them means custom HTTP wrappers, brittle JSON contracts, and bespoke retry logic at every boundary. With A2A, every agent advertises a standard Agent Card; every caller speaks the same JSON-RPC dialect; the integration is the protocol.

Originally released by Google as part of its broader agent stack, A2A was donated to the Linux Foundation in 2025 to neutralise vendor concerns and accelerate adoption. The spec lives at a2a-protocol.org; the reference implementations and tooling are at github.com/a2aproject/A2A.

The four primitives

  • Agent Card — JSON descriptor at /.well-known/agent-card.json declaring capabilities, skills, authentication, and endpoints.
  • Tasks — the unit of work, with a lifecycle (submitted → working → input-required / completed / failed / canceled); see the sketch after this list.
  • Messages — the conversation payload, JSON-RPC formatted, with support for text, files, and structured data.
  • Streams — for long-running tasks, the server pushes intermediate updates; clients can also subscribe to push notifications via webhook for fully async patterns.
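
How the Task and Message primitives fit together is easiest to see as data shapes. A minimal Python sketch; the class and field names are simplified for illustration and are not the normative spec structures:

# Illustrative shapes for the Task and Message primitives listed above.
# Simplified for clarity; consult the live spec for the normative JSON structures.
from dataclasses import dataclass, field
from enum import Enum

class TaskState(str, Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

@dataclass
class Message:
    role: str              # "user" or "agent"
    parts: list[dict]      # text, file, or structured-data parts

@dataclass
class Task:
    id: str
    state: TaskState = TaskState.SUBMITTED
    messages: list[Message] = field(default_factory=list)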

Discovery, invocation, lifecycle.

Three flows cover almost every A2A interaction. Discovery: a client fetches the Agent Card to know what an agent can do. Invocation: a client sends a JSON-RPC tasks/send with a message. Lifecycle: the client receives intermediate updates (sync or streamed) until the task completes, fails, or asks for human input.

The Agent Card is the entry point. Every A2A-compliant server hosts one at /.well-known/agent-card.json:

// /.well-known/agent-card.json
{
  "name": "pricing-agent",
  "description": "Generates competitive pricing analyses from market data",
  "version": "1.2.0",
  "protocol": "a2a/1.0",
  "endpoint": "https://pricing.example.com/a2a",
  "skills": [
    {
      "id": "price-product",
      "name": "Price a product",
      "description": "Returns recommended pricing given SKU + region",
      "tags": ["pricing", "competitive-analysis"]
    }
  ],
  "authentication": { "schemes": ["bearer"] },
  "capabilities": { "streaming": true, "pushNotifications": true }
}
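
In client code, discovery is a plain HTTPS GET plus JSON parsing. A minimal sketch using the requests library, written against the example card above rather than a live service:

# Fetch and inspect an Agent Card. A sketch against the example card above;
# assumes the requests library and a reachable endpoint.
import requests

card = requests.get(
    "https://pricing.example.com/.well-known/agent-card.json", timeout=10
).json()

print(card["name"], card["version"])
for skill in card["skills"]:
    print(f'- {skill["id"]}: {skill["description"]}')

endpoint = card["endpoint"]                       # where JSON-RPC calls go
can_stream = card["capabilities"]["streaming"]    # True in the example above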

Invoking the agent uses JSON-RPC 2.0 over HTTP POST. The minimum viable call is a tasks/send with a message:

# Client sends a task
POST /a2a HTTP/1.1
Authorization: Bearer <token>
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "tasks/send",
  "params": {
    "id": "task-abc-123",
    "message": {
      "role": "user",
      "parts": [{ "type": "text", "text": "Price SKU-9981 for ZA region" }]
    }
  }
}

Long-running tasks stream intermediate updates via Server-Sent Events on a tasks/sendSubscribe call, or fire push notifications to a registered webhook. The task lifecycle has explicit states: submitted, working, input-required (the agent is asking the human or another agent for clarification), completed, failed, canceled. Clients can poll with tasks/get or watch the stream; the two are equivalent.
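
For clients that don't need streaming, the send-then-poll loop is a few lines. A sketch against the example endpoint above, assuming bearer auth and the tasks/send and tasks/get methods shown; the exact field layout of the returned Task varies by spec revision, so the state extraction below is illustrative:

# Send a task, then poll tasks/get until it settles or asks for input.
# A sketch: no retries or error handling, and the returned Task's field
# layout should be checked against the live spec.
import time
import uuid
import requests

ENDPOINT = "https://pricing.example.com/a2a"
HEADERS = {"Authorization": "Bearer <token>"}
STOP = {"completed", "failed", "canceled", "input-required"}

def rpc(method: str, params: dict) -> dict:
    body = {"jsonrpc": "2.0", "id": str(uuid.uuid4()),
            "method": method, "params": params}
    resp = requests.post(ENDPOINT, json=body, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json()["result"]

task_id = f"task-{uuid.uuid4()}"
rpc("tasks/send", {
    "id": task_id,
    "message": {"role": "user",
                "parts": [{"type": "text", "text": "Price SKU-9981 for ZA region"}]},
})

while True:
    task = rpc("tasks/get", {"id": task_id})
    state = task.get("status", {}).get("state") or task.get("state")
    if state in STOP:   # input-required needs a follow-up message, not more polling
        break
    time.sleep(2)
print(state)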

Cross-framework call from ADK — the canonical example. ADK ships helpers that turn an ADK agent into an A2A server (to_a2a()) or wrap a remote A2A agent so it looks local (RemoteA2aAgent):

# Expose your ADK agent as A2A — one line
from google.adk.a2a.utils.agent_to_a2a import to_a2a
app = to_a2a(root_agent)  # serves Agent Card + JSON-RPC endpoint

# Consume a remote A2A agent (could be LangGraph, CrewAI, anything) as if local
from google.adk.agents import Agent, RemoteA2aAgent
pricing = RemoteA2aAgent(name="pricing",
                         agent_card="https://pricing.example.com")

# Use it like any other ADK sub-agent
root = Agent(name="coordinator", sub_agents=[pricing])
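
In current ADK releases to_a2a() returns an ASGI application, so the usual way to serve it is something like uvicorn my_module:app --port 8001; depending on the ADK version, the agent_card argument may also need to point at the full /.well-known/agent-card.json URL rather than the base domain, so check the ADK docs for the exact entry points.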

Why this matters more than another HTTP API

Every framework has its own way of representing an agent. ADK has Agent, LangGraph has graph nodes, CrewAI has roles, OpenAI Agents SDK has assistants. A2A flattens all of them to "a thing that takes a Message and returns a Task". The framework-specific shape stays inside its own process; the protocol layer is uniform. That's what makes "ADK orchestrator + LangGraph specialist + CrewAI sub-team" composable rather than custom integration work.
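
Stated as an interface, that flattening looks roughly like the sketch below; it is purely illustrative and not any framework's real API:

# The uniform shape A2A imposes, independent of framework. Illustrative only;
# no framework exposes exactly this interface.
from typing import Protocol

class A2AAgent(Protocol):
    def handle(self, message: dict) -> dict:
        """Accept an A2A Message; return an A2A Task (possibly still working)."""
        ...

# An ADK Agent, a LangGraph node, or a CrewAI crew each sits behind an adapter
# satisfying this shape; callers never see the framework-specific types.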

Who's shipping A2A in 2026.

A2A went from "Google project" to "Linux Foundation project" inside a year. Adoption is uneven by framework but expanding fast. Native first-class support is in ADK; LangGraph and CrewAI have community A2A bridges; OpenAI and Anthropic SDKs are MCP-first but A2A wrappers exist.

Framework / Tool | A2A support | Maturity
Google ADK | Native — to_a2a() + RemoteA2aAgent | Stable, in core
LangGraph | Community wrapper (langgraph-a2a) | Functional, third-party
CrewAI | Community adapter | Functional, third-party
OpenAI Agents SDK | Manual JSON-RPC; MCP is the OpenAI-preferred protocol | Possible, no first-party SDK helpers
Anthropic Agent SDK | Manual JSON-RPC; MCP-native | Possible, no first-party SDK helpers
AutoGen / AG2 | Community wrapper in progress | Early

A2A vs MCP — not competing

A2A is for agent-to-agent communication. Two agents, possibly in different frameworks or different processes, calling each other for capability delegation. MCP is for tool / data access. An agent calling out to read a file, query a database, or use a tool. Both protocols often appear in the same architecture: agents talk to each other via A2A, agents talk to tools and data via MCP. They're complementary, not alternatives.

When to wire A2A. When not to.

A2A is overhead. If your whole agent stack is one framework in one process, the protocol adds latency, JSON-RPC parsing, network calls, and authentication concerns for no benefit. The honest answer: only adopt A2A when you actually have multiple frameworks or multiple deployment processes that need to talk.

Use A2A when

  • Multiple agent frameworks in production (ADK + LangGraph, etc.)
  • Agents in different processes / containers / languages
  • One team owns orchestration; other teams own specialist agents
  • Long-running tasks that benefit from streaming or push semantics
  • You want a stable contract that survives framework changes
  • External agents from partners or third parties need to integrate

Why open protocols matter more here.

A2A's open-protocol shape has practical leverage in SA delivery contexts: fewer vendor-locked integrations, lower switching cost, and the ability to compose agents across cheap-or-free deployment choices (ADK on Cloud Run for the orchestrator, Ollama-backed local agents on a developer's laptop for prototyping, CrewAI on a Hetzner VPS for a specialist).

Latency tolerance: A2A's optional push notification mode is genuinely useful for SA bandwidth conditions. Long-running tasks don't require the client to keep an HTTP connection open; the server fires a webhook when the task finishes. A 30-second LLM call survives a 5-second cellular hiccup that would kill a streaming connection.
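
The receiving end of that pattern is a small webhook handler. A sketch using Flask; the callback payload shape and any verification headers depend on the serving agent and the spec revision, so the field access here is illustrative:

# Minimal receiver for A2A push notifications. A sketch: payload fields and
# signature verification depend on the serving agent and spec revision.
from flask import Flask, request

app = Flask(__name__)

@app.post("/a2a/notifications")
def on_task_update():
    task = request.get_json(force=True)
    # e.g. {"id": "task-abc-123", "status": {"state": "completed"}, ...}
    state = task.get("status", {}).get("state") or task.get("state")
    if state == "completed":
        ...  # fetch the full result via tasks/get and enqueue downstream work
    return ("", 204)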

POPIA + cross-border: Because the protocol is wire-level, you can mix and match where each agent runs. The orchestrator runs in africa-south1 (Johannesburg GCP); the specialist that processes PII runs on-prem; the analytics agent runs on a US edge platform. A2A doesn't care — the calls are HTTPS with bearer auth. Data residency choices follow the agent's deployment, not the protocol.

Cost: A2A itself is free — it's a spec. The cost is the network egress between agent processes, which is usually negligible compared to model inference cost. For SA teams worried about FX-billed proprietary agent platforms, A2A is the seam where you can swap a US-billed agent for a local one without touching any other code.
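
One illustrative way to make that seam concrete: keep the remote agent's card URL in configuration so that swapping providers touches nothing but an environment variable. The PRICING_AGENT_CARD_URL name below is hypothetical, not an ADK convention:

# Config-driven endpoint: point the same RemoteA2aAgent at a US-hosted or a
# locally hosted pricing agent without changing any other code. Sketch only.
import os
from google.adk.agents import RemoteA2aAgent

pricing = RemoteA2aAgent(
    name="pricing",
    agent_card=os.environ.get(
        "PRICING_AGENT_CARD_URL",
        "https://pricing.example.com/.well-known/agent-card.json",
    ),
)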

Where A2A links in the tree.

A2A is referenced in every agent framework leaf because cross-framework interop is the load-bearing reason to adopt it. It also touches Cloudflare Workers (Workers as A2A endpoints), Tech/Google (ADK's first-party A2A support), and the MCP leaf (the complementary tool-protocol).

Specifications and reference implementations.

The spec at a2a-protocol.org is the canonical reference. The Linux Foundation's a2aproject/A2A repo has reference implementations in Python and TypeScript; ADK's first-party A2A integration is the most widely shipped consumer.