Google's open-source, code-first framework for building, evaluating, and deploying production AI agents across Python, TypeScript, Go, and Java. Apache 2.0 licensed, optimised for Gemini, model-agnostic via LiteLLM, with native A2A and MCP support and one-command deploys to Cloud Run, GKE, or Vertex AI Agent Engine.
Agent Development Kit (ADK) is an open-source framework, released by Google in April 2025, for building and deploying AI agents and multi-agent systems. It's licensed under Apache 2.0 and lives across four sibling repos: google/adk-python, google/adk-js, google/adk-go, and google/adk-java, with shared docs at google/adk-docs.
The core idea: agent development should feel like software development. Code-first definitions, version control, unit tests, evaluation harnesses, and CI/CD — not prompt soup glued together with retries.
ADK ships three things in one framework:
- Orchestration: LlmAgent for reasoning, plus deterministic workflow agents (SequentialAgent, ParallelAgent, LoopAgent) for predictable orchestration.
- Tool interop: wrappers (LangchainTool, CrewaiTool) that consume tools from other frameworks.
- Deployment: local serving via adk api_server, then one of three production paths on Google Cloud (Cloud Run, GKE, or Vertex AI Agent Engine), or any container runtime you prefer.

ADK is optimised for Gemini but explicitly model-agnostic via a BaseLlm interface and LiteLLM integration, so OpenAI, Anthropic, Mistral, and self-hosted models all work.
- Not adk-python alone: four language SDKs in active development.
- Rapid release cadence on Python: 25+ minor releases between launch and v2.0 Beta.
- Native protocol surface: A2A is now under Linux Foundation governance.
- True open source: fork, embed, ship. No vendor lock-in at the framework layer.
Be the orchestration hub, not the only framework you use. ADK arrived in a crowded field — LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Anthropic Agent SDK — but brought three things most others didn't have on day one: native A2A protocol support, Gemini Live API streaming for voice and video, and a one-command path to managed deployment on Vertex AI. The framework isn't trying to replace LangChain or CrewAI; it's trying to be the glue that lets all three coexist in production.
Agents, Tools, Sessions, and Runners. Master those four and 90% of ADK clicks. The rest is which deployment surface you ship to and which protocol (A2A, MCP, OpenAPI) you pick for cross-framework interop.
The minimal agent. A working agent is roughly ten lines of Python. From the official adk-python README:
```python
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant. Use search when needed.",
    description="An assistant that can search the web.",
    tools=[google_search],
)
```
Run it with adk web (dev UI), adk run (CLI), or adk api_server (REST API).
Multi-agent hierarchies. ADK's native pattern is a tree: a root agent delegates to sub-agents, which can have their own sub-agents. The framework handles routing based on agent descriptions and the LLM's reasoning.
```python
from google.adk.agents import Agent

# Stub tool so the example is self-contained; replace with a real lookup.
def get_weather(city: str) -> dict:
    """Return the current weather for a city."""
    return {"city": city, "forecast": "sunny"}

greeter = Agent(
    name="greeter",
    model="gemini-2.5-flash",
    instruction="Handle greetings only.",
)
weather = Agent(
    name="weather",
    model="gemini-2.5-flash",
    instruction="Answer weather questions.",
    tools=[get_weather],
)
root = Agent(
    name="coordinator",
    model="gemini-2.5-flash",
    instruction="Route to the right specialist.",
    sub_agents=[greeter, weather],
)
```
Workflow agents (deterministic). When you don't want the LLM deciding the order:
- SequentialAgent — runs sub-agents in order
- ParallelAgent — runs sub-agents concurrently
- LoopAgent — repeats until a condition is met

ADK 2.0 graph workflows (Beta). ADK 2.0, currently Beta in Python only, adds explicit graph-based workflows where every node is an Agent, Tool, function, or human-input step, connected by edges that can route conditionally. The Workflow class replaces deeply nested prompt instructions with explicit DAGs:
```python
from google.adk import Agent, Workflow, Event

# classify_message, router, and the handle_* steps are agents or
# functions defined elsewhere in the application.
root_agent = Workflow(
    name="routing_workflow",
    edges=[
        ("START", classify_message, router),
        (router, {
            "BUG": handle_bug,
            "SUPPORT": handle_support,
            "LOGISTICS": handle_logistics,
        }),
    ],
)
```
Sessions hold the state of a single conversation. Backends include InMemorySessionService, VertexAiSessionService, and the community-contributed FirestoreSessionService. Memory holds long-term facts across sessions. InMemoryMemoryService for tests, VertexAiMemoryBankService for production.
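To make the split concrete, here is a minimal, framework-agnostic sketch of what a session backend does. This is illustrative only; the class and method names are invented and do not match ADK's actual SessionService API:

```python
from dataclasses import dataclass, field

# Illustrative sketch: a session is keyed conversation state that the
# runner reads before each turn and appends to after it.

@dataclass
class Session:
    session_id: str
    user_id: str
    events: list = field(default_factory=list)   # conversation turns
    state: dict = field(default_factory=dict)    # scratchpad shared by agents

class InMemorySessions:
    """Toy analogue of an in-memory session service."""

    def __init__(self):
        self._store = {}

    def create(self, session_id: str, user_id: str) -> Session:
        self._store[session_id] = Session(session_id, user_id)
        return self._store[session_id]

    def get(self, session_id: str) -> Session:
        return self._store[session_id]

    def append_event(self, session_id: str, event: dict) -> None:
        self._store[session_id].events.append(event)

svc = InMemorySessions()
svc.create("s1", "alice")
svc.append_event("s1", {"role": "user", "text": "hi"})
print(len(svc.get("s1").events))  # → 1
```

Swapping the in-memory dict for Firestore or Vertex AI storage is exactly the substitution the pluggable backends above perform, without touching agent code.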
Tool interop. ADK consumes tools from outside its own ecosystem: MCP toolsets via MCPToolset, LangChain tools wrapped with LangchainTool, CrewAI tools via CrewaiTool, OpenAPI specs auto-generated into tools, and plain Python functions via signature introspection.
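The last of those mechanisms, signature introspection, is worth seeing in miniature. The sketch below is not ADK's internals; function_to_tool_schema is a hypothetical helper showing how a code-first framework can derive a tool declaration from a plain Python function's signature and docstring:

```python
import inspect

def get_weather(city: str, units: str = "metric") -> dict:
    """Return the current weather for a city."""
    return {"city": city, "units": units}

def function_to_tool_schema(fn):
    """Derive a tool declaration from a function signature (illustrative)."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        params[name] = {
            # Annotated parameters carry a type; unannotated ones fall back to "any".
            "type": p.annotation.__name__ if p.annotation is not inspect.Parameter.empty else "any",
            # Parameters without a default are required.
            "required": p.default is inspect.Parameter.empty,
        }
    return {"name": fn.__name__, "description": inspect.getdoc(fn), "parameters": params}

schema = function_to_tool_schema(get_weather)
print(schema["name"])                # → get_weather
print(schema["parameters"]["city"])  # → {'type': 'str', 'required': True}
```

This is why the docs push you toward typed, docstringed functions: the richer the signature, the better the declaration the model sees.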
A2A: agent-to-agent. For talking to agents in other processes, languages, or frameworks, ADK uses the Agent2Agent (A2A) Protocol — originally released by Google and now an open-source project under the Linux Foundation:
```python
# Expose your ADK agent as an A2A server
from google.adk.a2a.utils.agent_to_a2a import to_a2a

app = to_a2a(root_agent)  # serves an Agent Card at /.well-known/agent-card.json

# Consume a remote A2A agent (any framework) as if it were local
from google.adk.agents import RemoteA2aAgent

remote = RemoteA2aAgent(name="pricing", agent_card="https://pricing.example.com")
```
A2A advertises agent skills via an Agent Card (JSON at /.well-known/agent-card.json), uses JSON-RPC over HTTP for messages, and supports streaming and push notifications. ADK agents can call LangGraph or CrewAI agents — and vice versa — through the same protocol.
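For orientation, an Agent Card is a small JSON document along these lines. Field names here follow the public A2A spec at the time of writing and may change between revisions, so treat this as an illustrative shape rather than a normative example:

```json
{
  "name": "pricing",
  "description": "Quotes prices for catalogue items.",
  "url": "https://pricing.example.com",
  "version": "1.0.0",
  "capabilities": { "streaming": true, "pushNotifications": false },
  "skills": [
    { "id": "quote", "name": "Get quote", "description": "Return a price for a SKU." }
  ]
}
```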
ADK sits inside Google's broader agent stack and against a field of competing frameworks. Both contexts matter: the Google integration is what makes ADK uniquely capable for managed deployment; the cross-framework comparison is what makes the framework choice non-obvious.
Where ADK fits in Google's stack.
| Layer | Component | What it does |
|---|---|---|
| Framework | ADK | Open-source code-first SDK (Apache 2.0) |
| Visual builder | Agent Studio | Low-code canvas, part of Vertex AI Agent Builder |
| Templates | Agent Garden | Prebuilt agent samples and one-click deploys |
| Models | Model Garden / Gemini | 200+ foundation models including Gemini, Claude, Gemma, Llama |
| Runtime | Vertex AI Agent Engine | Managed runtime — sessions, memory, scaling |
| Memory | Memory Bank | Long-term, cross-session memory store |
| Protocol | A2A | Cross-framework agent communication (Linux Foundation) |
| Tool protocol | MCP | Model Context Protocol for tools and data |
When to use ADK vs peer frameworks. The honest field picture as of 2026:
| Framework | Best at | Watch out for |
|---|---|---|
| Google ADK | GCP-native deployments, multi-agent hierarchies, voice/multimodal via Gemini Live, A2A interop | Native experience leans Google Cloud — value drops off-platform |
| LangGraph | Explicit graph control, LangSmith observability, the largest tool ecosystem via LangChain | Higher boilerplate; steeper learning curve |
| CrewAI | Role-based "team" metaphors, fastest to a first prototype | Limited checkpointing; less suited to dynamic flows |
| OpenAI Agents SDK | Tight integration with OpenAI models and built-in tracing | OpenAI-only model support |
| Anthropic Agent SDK | Claude-first agents, MCP-native, extended thinking | Claude-only |
| AutoGen / AG2 | Conversational multi-agent group chats, research workflows | Production tooling still maturing |
A2A means you can run an ADK orchestrator that calls a LangGraph specialist that calls a CrewAI sub-team — each in its own container, each in its own language. That's the bet ADK is making: be the orchestration hub, not the only framework you use. For most teams shipping production agents in 2026, the question isn't "which framework?" but "which framework where?", and A2A is what makes the answer composable.
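Under the hood, each hop in that chain is a JSON-RPC 2.0 request over HTTP. Method and field names have shifted across A2A spec revisions, so the following is an illustrative shape rather than the current normative wire format:

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "Quote SKU-1234" }]
    }
  }
}
```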
Six patterns where ADK has shipped concrete value, drawn from Google's official samples and community deployments. Plus the timeline of how the framework went from zero to ADK 2.0 Beta inside thirteen months.
- SequentialAgent chains for extract → classify → enrich → store, with LoopAgent retries on validation failures

Honest two-sided guidance. ADK isn't universally the right answer; it's the right answer for a specific shape of project and team. The shape it fits best is GCP-native, voice/multimodal-relevant, hierarchical multi-agent — with comfort for weekly framework churn during the v2.0 stabilisation window.
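The extract → classify → enrich pattern with bounded retries is easy to sketch outside any framework. Everything below is illustrative stand-in code, not ADK API; the stage functions are placeholders for LLM-backed agents:

```python
# Framework-agnostic sketch of the sequential-chain-with-retries pattern:
# run stages in order, retrying a stage a bounded number of times when
# its output fails validation.

def run_pipeline(doc, stages, validate, max_retries=3):
    result = doc
    for stage in stages:
        for _attempt in range(max_retries):
            candidate = stage(result)
            if validate(candidate):
                result = candidate
                break
        else:
            raise RuntimeError(f"stage {stage.__name__} failed validation")
    return result

# Placeholder stages standing in for LLM-backed agents.
def extract(d):  return {**d, "fields": ["total", "date"]}
def classify(d): return {**d, "kind": "invoice"}
def enrich(d):   return {**d, "vendor": "ACME"}

out = run_pipeline({"raw": "..."}, [extract, classify, enrich], validate=bool)
print(out["kind"])  # → invoice
```

In ADK terms, the outer loop is what a SequentialAgent gives you and the inner retry is what a LoopAgent gives you; the sketch just makes the control flow explicit.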
ADK 2.0 is pre-GA at the time of writing. Install with pip install --pre google-adk. Don't ship it to production until it goes GA. The official 2.0 docs flag breaking changes from 1.x. For production, pin to v1.23+ stable until 2.0 GA lands.
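Concretely, that means two different install paths (the 1.23 floor is the version this text recommends; check the latest stable tag before pinning):

```shell
# Experiment with the 2.0 Beta (pre-release channel)
pip install --pre google-adk

# Production: stay on the 1.x stable line until 2.0 reaches GA
pip install "google-adk>=1.23,<2.0"
```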
ADK plays differently across the three tiers most SA delivery teams operate in: enterprise (banks, insurers, telcos), mid-market studio builds, and learning paths. Each tier has a different cost-and-value calculus, and ADK isn't always the answer — but it's a credible answer in the cases where it is.
For South African banks, insurers, and telcos already on Google Cloud (or running Vertex AI for ML workloads), ADK is a natural fit. Agent Engine with VertexAiSessionService and Memory Bank gives the audit trail, IAM controls, and data-residency story that POPIA and SA regulators expect. Pair ADK orchestrators with the Johannesburg or Cape Town GCP regions to keep agent-to-tool latency under 50ms for customer-facing flows. The honest constraint: if the client's data lives mostly in AWS or on-prem, the cross-cloud egress and token-billing overhead usually makes a different framework cheaper.
For mid-market builds, ADK's deploy-to-Cloud-Run path is hard to beat. One container, scales to zero, no managed-runtime fees, A2A still works. Start in adk web on a developer laptop, ship to Cloud Run for staging, and only move to Agent Engine if the client wants the managed sessions and Memory Bank UI. ADK 2.0 graph workflows are interesting for studio work — but keep them out of production until GA.
ADK is one of the better frameworks to learn agentic patterns on, because the abstractions are explicit. You can see the agent tree, the tool calls, the session state. Pair it with the official quickstart, the free Gemini API key tier, and the open-source adk-samples repo, and you have a genuine learning path that doesn't require a credit card. The A2A Protocol is also worth learning here, because it's the open standard that lets your ADK agent talk to whatever framework comes next.
POPIA notes for SA enterprise. ADK on Vertex AI Agent Engine in JHB (africa-south1) keeps customer data in-region by default. Memory Bank stores cross-session facts — configure retention policies aligned to your data subject's POPIA rights (right to erasure, right to access). Agent Engine's audit logs flow into Cloud Logging; pipe them into your standard SIEM for the audit chain regulators expect.
ADK touches several other sub-trees: Google (the cloud platform and Gemini models), Cloudflare Workers (alternative deployment surface for agents), the agentic-protocol stack (A2A, MCP), and downstream of every other framework via A2A interop.
MCPToolset. MCP is becoming the lingua franca of agent tooling.

This page was assembled from primary sources: Google's adk-python README, CHANGELOG, and release tags; the canonical docs site at google.github.io/adk-docs; the Google Developers Blog launch post; the A2A Protocol repository and specification; and Google Cloud's official Vertex AI documentation. Anything that couldn't be verified against a primary source was removed. Last reviewed 2026-05-10.