The Anthropic Agent SDK is the official Claude-first framework for building production agents. Python and TypeScript SDKs, MCP-native tool layer, sub-agents via the Task tool, native extended-thinking integration. It inherits the same architectural model that powers Claude Code itself — the Anthropic CLI tool that wrote much of this site. The default for production agents on Claude, and the framework with the deepest MCP integration of any agent SDK in 2026.
The Anthropic Agent SDK (claude-agent-sdk) is Anthropic's first-party agent framework. It packages the same architectural model that powers Claude Code — Anthropic's official CLI — into a library you can build your own agents with. The SDK ships in Python and TypeScript, with both available on the standard package indices.
What makes it distinctive isn't the surface API — it's the design heritage. Claude Code is one of the most-used agent products in 2026, with a specific architectural style: flat tool-use loops rather than complex state machines, MCP-native tool integration, sub-agents via a "Task" tool rather than explicit graph orchestration, and extended thinking woven into the reasoning step. The Agent SDK exposes those exact primitives, so the production agents teams build with it inherit Claude Code's design language — for better and worse.
The "for better": the SDK is opinionated in ways that match how Claude actually performs best. Less ceremony than LangGraph, less hierarchy than ADK, less framework-thinking generally. The "for worse": the SDK is Claude-only and fast-moving. Anthropic ships changes weekly; if your stack needs vendor flexibility or stable interfaces, it's a real cost.
Three Anthropic things sound similar and aren't the same. The anthropic SDK (Python / TS) is the basic API client — chat completions, tool use, that's it. The Anthropic Agent SDK (claude-agent-sdk) is the higher-level agent framework — this leaf. Claude Code is the CLI itself, built using the Agent SDK as its foundation. If you read about "agent SDK" in Anthropic docs, that's almost always this leaf's subject.
The SDK's mental model is simple: an agent runs a tool-use loop with Claude, calling functions you provide and (optionally) sub-agents you delegate to. MCP is the canonical way to expose tools. Sub-agents are spawned via a built-in "Task" tool that mirrors how Claude Code's sub-agents work. There's no graph DSL to learn — you describe the agent's job in natural language and provide tools.
The minimal Anthropic Agent in Python:
```python
# pip install claude-agent-sdk
import asyncio

from claude_agent_sdk import Agent

agent = Agent(
    model="claude-sonnet-4-6",
    system_prompt="You help users analyse SA reserve bank policy.",
)

async def main():
    # Stream events (text deltas, tool calls, results) as the agent works
    async for event in agent.run("Summarise the last 18 months of policy."):
        print(event)

asyncio.run(main())
```
Tools are plain Python functions. Decorate them and the SDK introspects the signature to generate the JSON Schema:
```python
from claude_agent_sdk import Agent, tool

@tool
def get_market_data(symbol: str, days: int = 30) -> dict:
    """Fetch recent market data for a symbol."""
    return fetch_from_jse_api(symbol, days)

agent = Agent(
    model="claude-opus-4-7",
    system_prompt="You are a JSE analyst.",
    tools=[get_market_data],
)
```
MCP integration is first-class. Point the agent at an MCP server (stdio, SSE, or HTTP) and every tool, resource, and prompt the server exposes becomes available:
```python
from claude_agent_sdk import Agent, McpServerConfig

agent = Agent(
    model="claude-sonnet-4-6",
    mcp_servers=[
        McpServerConfig(command="python", args=["weather_server.py"]),  # stdio
        McpServerConfig(url="https://internal.example/mcp"),            # HTTP
    ],
)
```
Sub-agents via Task. The SDK exposes a built-in Task tool the parent agent can invoke to spawn a sub-agent with its own context window. This is the same pattern Claude Code uses internally for sub-agents. Sub-agents return structured results to the parent; their conversation history is isolated, which preserves the parent's context budget:
```python
from claude_agent_sdk import Agent, SubAgent

researcher = SubAgent(
    name="researcher",
    description="Researches a topic deeply and returns a summary.",
    model="claude-sonnet-4-6",
    tools=[web_search, fetch_url],
)

main = Agent(
    model="claude-opus-4-7",
    system_prompt="You synthesise research from sub-agents.",
    sub_agents=[researcher],  # exposes the Task tool to the parent
)
```
Claude's extended-thinking mode — the model "thinks before answering" with a separately-billed thinking token budget — is exposed as a single parameter: thinking={"budget_tokens": 16000}. The agent automatically routes complex reasoning steps through the thinking pipeline. Most other frameworks require manual handling; the Anthropic SDK is the one place this comes for free, because it's Anthropic's own model.
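If you work below the Agent SDK, the same capability is available on the raw Messages API. The sketch below shows the documented thinking payload shape ({"type": "enabled", "budget_tokens": N}) plus a small validator for the API's constraints (a minimum budget of 1024 tokens, and the budget must be smaller than max_tokens); the model name follows this document's examples and should be treated as an assumption:

```python
def validate_thinking_budget(budget_tokens: int, max_tokens: int) -> int:
    """The Messages API requires budget_tokens >= 1024 and < max_tokens."""
    if budget_tokens < 1024:
        raise ValueError("thinking budget must be at least 1024 tokens")
    if budget_tokens >= max_tokens:
        raise ValueError("thinking budget must be smaller than max_tokens")
    return budget_tokens

def ask_with_thinking(prompt: str, budget_tokens: int = 16000) -> str:
    import anthropic  # imported here so the validator stays dependency-free

    client = anthropic.Anthropic()
    max_tokens = budget_tokens + 4096  # leave room for the visible answer
    response = client.messages.create(
        model="claude-sonnet-4-6",  # model name as used in this document
        max_tokens=max_tokens,
        thinking={
            "type": "enabled",
            "budget_tokens": validate_thinking_budget(budget_tokens, max_tokens),
        },
        messages=[{"role": "user", "content": prompt}],
    )
    # Thinking blocks arrive alongside text blocks; return only the text.
    return "".join(b.text for b in response.content if b.type == "text")
```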
The Anthropic Agent SDK is the bottom of a stack. Claude Code uses it. The Claude Desktop app uses it. The "Computer Use" demos on Claude.ai use it. That shared lineage means every Anthropic-built agent product follows similar patterns — sub-agents, hooks, MCP, Skills — and learning the SDK gives you working knowledge of all of them.
Python and TypeScript packages. Apache 2.0 (Python) / MIT (TS). Self-host anywhere — Lambda, Cloud Run, your laptop.
Anthropic's official coding CLI. Built on the SDK. The reference implementation for "what good agent UX looks like" — and a daily driver for many SDK adopters.
MCP: Anthropic-originated, broadly adopted, and the SDK's first-class tool integration. mcp_servers=[...] and tools just work.
Claude Code patterns the SDK supports: skills (persistent capabilities Claude can invoke by name), hooks (event-driven shell commands), slash commands (named operations users trigger). Reusable across SDK-built agents.
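As a rough illustration of the hooks pattern, Claude Code reads hook definitions from its settings file; a configuration along these lines (event name, tool matcher, shell command) is the shape to expect. Treat the exact keys as approximate and check the current Claude Code documentation:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "black ." }
        ]
      }
    ]
  }
}
```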
The Agent SDK ships changes weekly. New features, MCP improvements, model updates, and occasional breaking changes are normal. For production: pin minor versions, watch the changelog, expect to update. The model API itself is more stable than the SDK; if interface stability is critical, calling the basic anthropic SDK directly with your own loop is sometimes the lower-risk choice.
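A sketch of the pinning advice; the version numbers here are placeholders, not real releases:

```shell
# Pin to a minor version: accepts 0.3.x patch releases, blocks 0.4.
pip install "claude-agent-sdk~=0.3.0"
npm install @anthropic-ai/claude-agent-sdk@"~0.3.0"
```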
The four major production-agent SDKs make different bets. Anthropic's bet is Claude-first, MCP-native, low-ceremony. LangGraph's bet is explicit-graph control. ADK's bet is GCP managed-runtime + multi-language. OpenAI's bet is OpenAI-stack tightness. Each is right for a specific shape of project; few teams need to choose only one.
| Dimension | Anthropic SDK | LangGraph | Google ADK |
|---|---|---|---|
| Model support | Claude-only | Any model | Gemini-first, any via LiteLLM |
| Control style | Tool-use loop + sub-agents | Explicit graph | Hierarchical agents, LLM routes |
| Tool layer | MCP-native + Python/TS funcs | LangChain ecosystem | MCP + LangchainTool / CrewaiTool adapters |
| Boilerplate | Lowest of the three | Highest | Middle |
| Deploy story | Self-host anywhere | Self-host or LangGraph Platform | Cloud Run / GKE / Vertex AI Agent Engine |
| Observability | OTEL + Anthropic console | LangSmith (paid) | OTEL + Cloud Trace |
| Voice / multimodal | Vision; voice via separate API | Possible, not native | Native via Gemini Live |
| Extended thinking | Native, single param | Manual handling | Manual handling |
| Best fit | Production agents on Claude, low-ceremony | Explicit control, compliance-heavy | GCP-native, Gemini-first, voice |
Use the Anthropic Agent SDK when Claude is the model and the work fits a tool-use loop. Use LangGraph when you need explicit graphs and audit. Use ADK when you're on GCP and want managed runtime. Use all three if you have multiple agent surfaces — that's increasingly the production pattern. A2A is the seam that makes them composable; MCP is the seam that lets them share tools.
Six patterns that play to the SDK's strengths: low-ceremony tool loops, sub-agent delegation, MCP integration, and Claude's reasoning quality.
The honest guidance: this SDK is the right answer if you've already chosen Claude as the model. If you haven't, the model decision is the bigger one — the SDK question follows from it. Picking Anthropic SDK without Claude makes no sense; picking Claude usually leads here unless you also need explicit graphs (LangGraph) or GCP-managed deployment (ADK).
If the Agent SDK feels like overkill for your task, write a tool-use loop directly against the anthropic Python or TS SDK. Forty lines of code, full control, no framework cadence concerns. For genuinely simple agents (one-shot research, single-tool wrappers) this is often the right move — promote to the Agent SDK only when you actually need sub-agents, MCP, or extended thinking integration.
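A sketch of that DIY loop against the base anthropic Python SDK, following the documented Messages API tool-use pattern (stop until stop_reason is no longer "tool_use", answering each tool_use block with a tool_result block). The get_market_data tool is a stand-in:

```python
import json

def get_market_data(symbol: str, days: int = 30) -> dict:
    # Stand-in implementation; swap in a real data source.
    return {"symbol": symbol, "days": days, "close": []}

TOOL_FNS = {"get_market_data": get_market_data}

TOOL_SPECS = [{
    "name": "get_market_data",
    "description": "Fetch recent market data for a symbol.",
    "input_schema": {
        "type": "object",
        "properties": {
            "symbol": {"type": "string"},
            "days": {"type": "integer", "default": 30},
        },
        "required": ["symbol"],
    },
}]

def dispatch(name: str, args: dict) -> str:
    """Run a named tool and serialise its result for a tool_result block."""
    return json.dumps(TOOL_FNS[name](**args))

def run(prompt: str, model: str = "claude-sonnet-4-6") -> str:
    import anthropic  # imported here so dispatch() stays testable offline

    client = anthropic.Anthropic()
    messages = [{"role": "user", "content": prompt}]
    while True:
        resp = client.messages.create(
            model=model, max_tokens=1024, tools=TOOL_SPECS, messages=messages
        )
        if resp.stop_reason != "tool_use":
            return "".join(b.text for b in resp.content if b.type == "text")
        # Echo the assistant turn, then answer each tool call with a result.
        messages.append({"role": "assistant", "content": resp.content})
        results = [
            {"type": "tool_result", "tool_use_id": b.id,
             "content": dispatch(b.name, b.input)}
            for b in resp.content if b.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Promote this to the Agent SDK when the loop grows sub-agents or MCP servers; until then it is the whole framework.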
For SA enterprises standardised on Claude (and many are — Anthropic's quality reputation has won meaningful share from OpenAI in SA's mid-market and enterprise segments), the Agent SDK is the natural framework. POPIA cross-border concerns apply: Claude's API is US-hosted, so any production agent calling it makes a Section 72 transfer. Mitigations: AWS Bedrock in af-south-1 (Cape Town) hosts Claude with regional residency, or use Claude only for non-PII reasoning steps and keep PII handling in locally hosted Gemma 3 via Ollama.
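The split-by-sensitivity mitigation can be sketched as a routing helper. Everything here (model names, endpoint labels) is illustrative, not real configuration:

```python
LOCAL_MODEL = "ollama/gemma3"        # stays in-country, handles PII
REMOTE_MODEL = "claude-sonnet-4-6"   # US-hosted API: a POPIA s72 transfer
BEDROCK_REGION = "af-south-1"        # Cape Town residency option for Claude

def route(contains_pii: bool, bedrock_available: bool = False) -> str:
    """Choose where a step runs under the POPIA posture described above."""
    if not contains_pii:
        return REMOTE_MODEL
    # PII: keep it resident. Bedrock af-south-1 if provisioned, else local.
    return f"bedrock:{BEDROCK_REGION}" if bedrock_available else LOCAL_MODEL
```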
For studios, Claude Code is already the daily driver for many SA developers (this site is partly written with it). The Agent SDK is a natural extension — build production agents with the same architectural model your tooling already uses. Claude Code skills and slash commands transfer conceptually to SDK-built agents; the muscle memory is one-to-one.
Claude is USD-billed and not cheap at frontier tier. Opus 4.7 input tokens are billed at frontier-tier rates; output tokens cost ~5x input; extended thinking burns thinking-tokens at the same rate as output. For SA studios watching FX, the practical path is: Sonnet 4.6 as the default (much cheaper than Opus, very close in quality), promote to Opus only for hard reasoning, use Haiku 4.5 for high-volume routing tasks. The SDK supports this routing trivially — sub-agents can run on different model tiers.
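The tiering arithmetic, with placeholder per-million-token prices (check Anthropic's current price sheet; only the ratios and the calculation shape come from the text above):

```python
def call_cost_usd(in_tok: int, out_tok: int, think_tok: int,
                  in_price_per_m: float) -> float:
    """Output at ~5x the input rate; thinking tokens bill at the output rate."""
    out_price_per_m = in_price_per_m * 5
    return (in_tok * in_price_per_m
            + (out_tok + think_tok) * out_price_per_m) / 1e6

# PLACEHOLDER prices: $3/M input for a Sonnet-tier call, $15/M for Opus-tier.
sonnet = call_cost_usd(10_000, 2_000, 0, in_price_per_m=3.0)      # 0.06 USD
opus = call_cost_usd(10_000, 2_000, 8_000, in_price_per_m=15.0)   # 0.90 USD
```

The same prompt with an extended-thinking budget on the frontier tier costs an order of magnitude more, which is why routing high-volume steps to cheaper tiers via sub-agents pays off.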
mcp_servers=[...] parameter is the easiest path to a working MCP-driven agent.