Google ADK

Agent development should feel like software development.

Google's open-source, code-first framework for building, evaluating, and deploying production AI agents across Python, TypeScript, Go, and Java. Apache 2.0, optimised for Gemini, model-agnostic via LiteLLM, native A2A and MCP support, and one-command deploy to Cloud Run, GKE, or Vertex AI Agent Engine.

Live since April 2025 · Apache 2.0 · Python · TS · Go · Java · 17,500+ stars (adk-python) · v2.0 Beta / v1.23 stable

An agent runtime, a tool ecosystem, and a deploy story.

Agent Development Kit (ADK) is an open-source framework, released by Google in April 2025, for building and deploying AI agents and multi-agent systems. It's licensed under Apache 2.0 and lives across four sibling repos: google/adk-python, google/adk-js, google/adk-go, and google/adk-java, with shared docs at google/adk-docs.

The core idea: agent development should feel like software development. Code-first definitions, version control, unit tests, evaluation harnesses, and CI/CD — not prompt soup glued together with retries.

ADK ships three things in one framework:

  1. An agent runtime — LlmAgent for reasoning, plus deterministic workflow agents (SequentialAgent, ParallelAgent, LoopAgent) for predictable orchestration.
  2. A tool ecosystem — pre-built tools, custom Python/TS functions, OpenAPI specs, MCP servers, and adapter classes (LangchainTool, CrewaiTool) that consume tools from other frameworks.
  3. A deploy story — local dev UI, adk api_server, then one of three production paths on Google Cloud (Cloud Run, GKE, or Vertex AI Agent Engine), or any container runtime you prefer.

ADK is optimised for Gemini but explicitly model-agnostic via a BaseLlm interface and LiteLLM integration, so OpenAI, Anthropic, Mistral, and self-hosted models all work.
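The seam that makes this work can be sketched as a plain adapter interface. This is an illustrative stand-in for the kind of boundary BaseLlm provides, with hypothetical names (BaseModel, EchoModel, run_agent) that are not ADK APIs:

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Illustrative stand-in for a model seam like BaseLlm: agents talk
    to this interface; providers plug in behind it."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class EchoModel(BaseModel):
    """Trivial provider for tests. A LiteLLM-backed provider would call
    OpenAI, Anthropic, Mistral, etc. behind the same method."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run_agent(model: BaseModel, user_msg: str) -> str:
    # The agent never knows which provider it is talking to.
    return model.generate(user_msg)
```

Swapping providers then means constructing a different BaseModel subclass; the agent code is untouched, which is the point of the abstraction.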

Stat · reach

17,500+ stars

adk-python alone. Four language SDKs in active development.

Stat · cadence

~bi-weekly

Release cadence on Python. 25+ minor releases between launch and v2.0 Beta.

Stat · protocol

A2A · MCP · OpenAPI

Native protocol surface. A2A is now under Linux Foundation governance.

Stat · license

Apache 2.0

True open source. Fork, embed, ship. No vendor lock-in at the framework layer.

The bet ADK is making

Be the orchestration hub, not the only framework you use. ADK arrived in a crowded field — LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Anthropic Agent SDK — but brought three things most others didn't have on day one: native A2A protocol support, Gemini Live API streaming for voice and video, and a one-command path to managed deployment on Vertex AI. The framework isn't trying to replace LangChain or CrewAI; it's trying to be the glue that lets all three coexist in production.

Four primitives carry the framework.

Agents, Tools, Sessions, and Runners. Master those four and 90% of ADK clicks. The rest is which deployment surface you ship to and which protocol (A2A, MCP, OpenAPI) you pick for cross-framework interop.

The minimal agent. A working agent is roughly ten lines of Python. From the official adk-python README:

from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant. Use search when needed.",
    description="An assistant that can search the web.",
    tools=[google_search],
)

Run it with adk web (dev UI), adk run (CLI), or adk api_server (REST API).

Multi-agent hierarchies. ADK's native pattern is a tree: a root agent delegates to sub-agents, which can have their own sub-agents. The framework handles routing based on agent descriptions and the LLM's reasoning.

from google.adk.agents import Agent

# Minimal stub so the example is self-contained; replace with a real lookup.
def get_weather(city: str) -> dict:
    """Return the current weather for a city."""
    return {"city": city, "forecast": "sunny", "temp_c": 24}

greeter = Agent(name="greeter", model="gemini-2.5-flash",
                instruction="Handle greetings only.")

weather = Agent(name="weather", model="gemini-2.5-flash",
                instruction="Answer weather questions.",
                tools=[get_weather])

root = Agent(
    name="coordinator",
    model="gemini-2.5-flash",
    instruction="Route to the right specialist.",
    sub_agents=[greeter, weather],
)

Workflow agents (deterministic). When you don't want the LLM deciding the order:

  • SequentialAgent — runs sub-agents in order
  • ParallelAgent — runs sub-agents concurrently
  • LoopAgent — repeats until a condition is met
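The control flow these three agents encode can be sketched framework-free. A hedged plain-Python illustration of the patterns, where each "agent" is just a function; none of the names below are ADK APIs:

```python
# Plain-Python analogues of ADK's deterministic workflow agents.
from concurrent.futures import ThreadPoolExecutor

def sequential(steps, state):
    """SequentialAgent analogue: run steps in order, threading state through."""
    for step in steps:
        state = step(state)
    return state

def parallel(steps, state):
    """ParallelAgent analogue: run steps concurrently on the same input."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: s(state), steps))

def loop(step, state, done, max_iters=5):
    """LoopAgent analogue: repeat a step until a condition (or cap) is met."""
    for _ in range(max_iters):
        state = step(state)
        if done(state):
            break
    return state

# Tiny demo: extract -> enrich in sequence, then loop until a counter hits 2.
extract = lambda s: {**s, "text": s["raw"].strip()}
enrich = lambda s: {**s, "words": len(s["text"].split())}
result = sequential([extract, enrich], {"raw": "  hello adk  "})
result = loop(lambda s: {**s, "tries": s.get("tries", 0) + 1},
              result, done=lambda s: s.get("tries", 0) >= 2)
```

The real agents carry sessions, events, and streaming through the same shapes, but the orchestration semantics are exactly these three loops.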

ADK 2.0 graph workflows (Beta). ADK 2.0, currently Beta in Python only, adds explicit graph-based workflows where every node is an Agent, Tool, function, or human-input step, connected by edges that can route conditionally. The Workflow class replaces deeply nested prompt instructions with explicit DAGs:

from google.adk import Workflow

# classify_message, router, and the handle_* nodes are agents or functions
# defined elsewhere; the router emits one of the labels below.
root_agent = Workflow(
    name="routing_workflow",
    edges=[
        # Linear edges: START -> classify_message -> router
        ("START", classify_message, router),
        # Conditional edge: the router's output label selects the next node
        (router, {
            "BUG": handle_bug,
            "SUPPORT": handle_support,
            "LOGISTICS": handle_logistics,
        }),
    ],
)
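The conditional-edge mechanism can be shown in plain Python. A sketch of label-based routing over a graph in general, with hypothetical node functions, not ADK 2.0's actual engine:

```python
# Label-based conditional routing: nodes are plain functions, and the
# router returns a label that selects the next node. Illustrative only.
def classify(msg):
    text = msg.lower()
    if "crash" in text or "error" in text:
        return "BUG"
    if "where is my order" in text:
        return "LOGISTICS"
    return "SUPPORT"

HANDLERS = {
    "BUG": lambda m: f"bug ticket filed for: {m}",
    "SUPPORT": lambda m: f"support reply for: {m}",
    "LOGISTICS": lambda m: f"tracking lookup for: {m}",
}

def run_workflow(msg):
    label = classify(msg)        # router node emits an edge label
    return HANDLERS[label](msg)  # conditional edge selects the handler
```

In ADK 2.0 the router node is typically an LlmAgent, so classification is model-driven while the edge table stays an explicit, inspectable structure.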

Sessions vs Memory — deliberately separated

Sessions hold the state of a single conversation. Backends include InMemorySessionService, VertexAiSessionService, and the community-contributed FirestoreSessionService. Memory holds long-term facts across sessions. InMemoryMemoryService for tests, VertexAiMemoryBankService for production.
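The split is easy to see with two dict-backed stand-ins. An illustrative sketch of the separation, not the InMemorySessionService or InMemoryMemoryService implementations:

```python
# Session state is keyed by one conversation; memory persists facts
# across all of a user's sessions. Both classes here are illustrative.
class SessionStore:
    def __init__(self):
        self._sessions = {}

    def get(self, session_id):
        return self._sessions.setdefault(session_id, {"events": []})

    def append(self, session_id, event):
        self.get(session_id)["events"].append(event)

class MemoryStore:
    """Long-term facts, shared across all of a user's sessions."""
    def __init__(self):
        self._facts = {}

    def remember(self, user_id, fact):
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id):
        return self._facts.get(user_id, [])

sessions, memory = SessionStore(), MemoryStore()
sessions.append("s1", "user: I prefer metric units")
memory.remember("alice", "prefers metric units")
# A new session s2 starts empty, but memory still recalls the fact:
fresh = sessions.get("s2")["events"]
facts = memory.recall("alice")
```

The separation is why you can swap session backends (in-memory, Firestore, Vertex AI) without touching how long-term facts are stored, and vice versa.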

Tool interop. ADK consumes tools from outside its own ecosystem: MCP toolsets via MCPToolset, LangChain tools wrapped with LangchainTool, CrewAI tools via CrewaiTool, OpenAPI specs auto-generated into tools, and plain Python functions via signature introspection.
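The "plain Python functions via signature introspection" mechanism can be sketched with the stdlib. A hedged illustration of how a function's signature becomes a tool declaration, not ADK's actual converter:

```python
import inspect

def function_to_tool_schema(fn):
    """Derive a minimal tool declaration from a function's signature and
    docstring. Illustrative; the real converter emits a richer schema."""
    sig = inspect.signature(fn)
    params = {
        name: {"type": getattr(p.annotation, "__name__", "any"),
               "required": p.default is inspect.Parameter.empty}
        for name, p in sig.parameters.items()
    }
    return {"name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": params}

def get_weather(city: str, units: str = "metric") -> dict:
    """Look up the current weather for a city."""
    return {"city": city, "units": units}

schema = function_to_tool_schema(get_weather)
```

This is also why docstrings and type hints matter in ADK tool functions: they are the raw material the framework hands to the model as the tool's contract.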

A2A: agent-to-agent. For talking to agents in other processes, languages, or frameworks, ADK uses the Agent2Agent (A2A) Protocol — originally released by Google and now an open-source project under the Linux Foundation:

# Expose your ADK agent as an A2A server
from google.adk.a2a.utils.agent_to_a2a import to_a2a
app = to_a2a(root_agent)  # serves an Agent Card at /.well-known/agent-card.json

# Consume a remote A2A agent (any framework) as if it were local
from google.adk.agents import RemoteA2aAgent
remote = RemoteA2aAgent(name="pricing", agent_card="https://pricing.example.com")

A2A advertises agent skills via an Agent Card (JSON at /.well-known/agent-card.json), uses JSON-RPC over HTTP for messages, and supports streaming and push notifications. ADK agents can call LangGraph or CrewAI agents — and vice versa — through the same protocol.
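The Agent Card itself is plain JSON. A minimal card with an assumed, abridged field set (the authoritative schema lives in the A2A specification) can be built and round-tripped with the stdlib:

```python
import json

# Abridged Agent Card using a subset of fields from the A2A spec's
# examples; consult the specification for the full schema.
card = {
    "name": "pricing",
    "description": "Quotes prices for the catalogue.",
    "url": "https://pricing.example.com",
    "version": "1.0.0",
    "skills": [
        {"id": "quote", "name": "Quote a price",
         "description": "Return a price for a SKU."}
    ],
}

# Serve this document at /.well-known/agent-card.json; clients fetch and
# parse it to discover capabilities before sending JSON-RPC messages.
serialised = json.dumps(card)
parsed = json.loads(serialised)
skill_ids = [s["id"] for s in parsed["skills"]]
```

Because discovery is just an HTTP GET of a JSON document, any framework or language can participate without an ADK dependency.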

Where ADK fits in Google's stack — and against the field.

ADK sits inside Google's broader agent stack and against a field of competing frameworks. Both contexts matter: the Google integration is what makes ADK uniquely capable for managed deployment; the cross-framework comparison is what makes the framework choice non-obvious.

Where ADK fits in Google's stack.

Layer | Component | What it does
Framework | ADK | Open-source code-first SDK (Apache 2.0)
Visual builder | Agent Studio | Low-code canvas, part of Vertex AI Agent Builder
Templates | Agent Garden | Prebuilt agent samples and one-click deploys
Models | Model Garden / Gemini | 200+ foundation models including Gemini, Claude, Gemma, Llama
Runtime | Vertex AI Agent Engine | Managed runtime — sessions, memory, scaling
Memory | Memory Bank | Long-term, cross-session memory store
Protocol | A2A | Cross-framework agent communication (Linux Foundation)
Tool protocol | MCP | Model Context Protocol for tools and data

When to use ADK vs peer frameworks. The honest field picture as of 2026:

Framework | Best at | Watch out for
Google ADK | GCP-native deployments, multi-agent hierarchies, voice/multimodal via Gemini Live, A2A interop | Native experience leans Google Cloud — value drops off-platform
LangGraph | Explicit graph control, LangSmith observability, the largest tool ecosystem via LangChain | Higher boilerplate; steeper learning curve
CrewAI | Role-based "team" metaphors, fastest to a first prototype | Limited checkpointing; less suited to dynamic flows
OpenAI Agents SDK | Tight integration with OpenAI models and built-in tracing | OpenAI-only model support
Anthropic Agent SDK | Claude-first agents, MCP-native, extended thinking | Claude-only
AutoGen / AG2 | Conversational multi-agent group chats, research workflows | Production tooling still maturing

The frameworks aren't mutually exclusive

A2A means you can run an ADK orchestrator that calls a LangGraph specialist that calls a CrewAI sub-team — each in its own container, each in its own language. That's the bet ADK is making: be the orchestration hub, not the only framework you use. For most teams shipping production agents in 2026, the question isn't "which framework?" but "which framework where?", and A2A is what makes the answer composable.

Where ADK earns its keep, and how it got here.

Six patterns where ADK has shipped concrete value, drawn from Google's official samples and community deployments. Plus the timeline of how the framework went from zero to ADK 2.0 Beta inside thirteen months.

  • Customer support concierge — a root agent that routes to billing, technical, or returns sub-agents, with A2A escalation to human-handoff agents
  • BigQuery analyst — agent + BigQuery MCP toolset for natural-language data exploration, deployed on Cloud Run with Agent Engine sessions
  • Voice agents — bidirectional audio/video via Gemini Live API, the one place ADK has a meaningful edge over LangGraph and CrewAI
  • Document processing pipelines — SequentialAgent chains for extract → classify → enrich → store, with LoopAgent retries on validation failures
  • Cross-framework orchestration — ADK as the A2A hub calling LangGraph and CrewAI specialists for use cases where a single team owns the orchestration but other teams own the sub-agents
  • Internal developer agents — code review, PR triage, doc generation; Google itself uses ADK-powered agents to manage the ADK GitHub repo (per their public Q3 2025 roadmap)

Apr 9, 2025
ADK Python launches as open source under Apache 2.0. Google Developers Blog announcement.
Mid-2025
A2A Protocol announced; ADK ships native A2A integration.
Aug 2025
ADK Python v1.12 introduces YAML-based Agent Config (no-code agent authoring) and Bigtable toolset.
Oct 2025
First ADK community meeting; Cloud Run + GKE + Agent Engine deployment paths formalised.
Jan 2026
ADK Python v1.23 lands; OpenTelemetry GenAI semantic conventions adopted for tracing.
Mar 2026
ADK Java 1.0 ships; ADK 2.0 Alpha announced with graph-based workflows.
Apr 2026
Vertex AI Agent Builder rebranded to Gemini Enterprise Agent Platform at Cloud Next 2026.
May 2026
ADK 2.0 in Beta for Python; ADK TypeScript 1.0 GA; A2A Protocol now under Linux Foundation governance.

Pick ADK when. Skip ADK when.

Honest two-sided guidance. ADK isn't universally the right answer; it's the right answer for a specific shape of project and team. The shape it fits best is GCP-native, voice/multimodal-relevant, hierarchical multi-agent — with comfort for weekly framework churn during the v2.0 stabilisation window.

Use ADK when

  • You're already on Google Cloud and standardising on Gemini
  • You need bidirectional voice or video — Gemini Live integration is genuinely unique
  • You're building hierarchical multi-agent systems with clear role delegation
  • A2A interop matters — you want to mix ADK with LangGraph or CrewAI agents
  • You want a managed runtime (Agent Engine) to handle sessions, memory, and scaling
  • You're comfortable with weekly-to-bi-weekly framework churn during v2.0 stabilisation

Pre-GA pinning

ADK 2.0 is pre-GA at the time of writing. Install with pip install --pre google-adk. Don't ship it to production until it goes GA. The official 2.0 docs flag breaking changes from 1.x. For production, pin to v1.23+ stable until 2.0 GA lands.

Where ADK fits in SA delivery work.

ADK plays differently across the three tiers most SA delivery teams operate in: enterprise (banks, insurers, telcos), mid-market studio builds, and learning paths. Each tier has a different cost-and-value calculus, and ADK isn't always the answer — but it's a credible answer in the cases where it is.

Enterprise · SA banks, insurers, telcos already on GCP

For South African banks, insurers, and telcos already on Google Cloud (or running Vertex AI for ML workloads), ADK is a natural fit. Agent Engine with VertexAiSessionService and Memory Bank gives the audit trail, IAM controls, and data-residency story that POPIA and SA regulators expect. Pair ADK orchestrators with the Johannesburg or Cape Town GCP regions to keep agent-to-tool latency under 50ms for customer-facing flows. The honest constraint: if the client's data lives mostly in AWS or on-prem, the cross-cloud egress and token-billing overhead usually makes a different framework cheaper.

Studio · mid-market builds

For mid-market builds, ADK's deploy-to-Cloud-Run path is hard to beat. One container, scales to zero, no managed-runtime fees, A2A still works. Start in adk web on a developer laptop, ship to Cloud Run for staging, and only move to Agent Engine if the client wants the managed sessions and Memory Bank UI. ADK 2.0 graph workflows are interesting for studio work — but keep them out of production until GA.

Learning · agentic patterns from explicit primitives

ADK is one of the better frameworks to learn agentic patterns on, because the abstractions are explicit. You can see the agent tree, the tool calls, the session state. Pair it with the official quickstart, the free Gemini API key tier, and the open-source adk-samples repo, and you have a genuine learning path that doesn't require a credit card. The A2A Protocol is also worth learning here, because it's the open standard that lets your ADK agent talk to whatever framework comes next.

POPIA notes for SA enterprise. ADK on Vertex AI Agent Engine in JHB (africa-south1) keeps customer data in-region by default. Memory Bank stores cross-session facts — configure retention policies aligned to your data subject's POPIA rights (right to erasure, right to access). Agent Engine's audit logs flow into Cloud Logging; pipe them into your standard SIEM for the audit chain regulators expect.

Where ADK links in the tree.

ADK touches several other sub-trees: Google (the cloud platform and Gemini models), Cloudflare Workers (alternative deployment surface for agents), the agentic-protocol stack (A2A, MCP), and downstream of every other framework via A2A interop.

Primary sources only.

This page was assembled from primary sources: Google's adk-python README, CHANGELOG, and release tags; the canonical docs site at google.github.io/adk-docs; the Google Developers Blog launch post; the A2A Protocol repository and specification; and Google Cloud's official Vertex AI documentation. Anything that couldn't be verified against a primary source was removed. Last reviewed 2026-05-10.