The Agents domain is the operational heart of know.2nth.ai going forward. Now that the data layer and systems-of-record are in place, agents are how that infrastructure turns into work. The full stack lives here: frameworks (how agents orchestrate), protocols (how they talk to each other and to tools), models (what they actually call), and inference platforms (where models run). All four layers matter. Most production agent stacks in 2026 have a strong opinion at every layer.
Each leaf follows the same content shape: hero + what-it-is + how-it-works + ecosystem + decision-guide + SA-context + connections + resources. Four bands organise the sub-tree: orchestration frameworks, the interop protocols that connect them, the models they call, and the inference platforms that run those models.
Google's open-source, code-first agent framework. Apache 2.0, Python + TS + Go + Java, native A2A and MCP, one-command deploy to Cloud Run / GKE / Vertex AI Agent Engine. Optimised for Gemini, model-agnostic via LiteLLM. The hub of the Google agent stack.
The graph-based orchestration framework from the LangChain team. Explicit state machines, the largest tool ecosystem via LangChain, LangSmith observability, the de-facto "I want full control over the graph" choice. Higher boilerplate; steeper learning curve.
CrewAI's role-based "team" metaphor for multi-agent systems. Fastest path from zero to a working multi-agent prototype. Limited checkpointing and less suited to dynamic flows, but for rapid prototyping or pitching the multi-agent concept it's hard to beat.
OpenAI's first-party agent framework, the OpenAI Agents SDK. Tight integration with OpenAI models, built-in tracing, native function-calling. The default if you're already on OpenAI and want minimal setup; the constraint is OpenAI-only model support.
Anthropic's Claude-first agent framework, the Claude Agent SDK. MCP-native, extended thinking integration, sub-agent orchestration via Claude Code's task model. The default if Claude is the model; the constraint is Claude-only.
Microsoft's conversational multi-agent framework AutoGen, now community-maintained as AG2. Strongest at "research workflow" patterns where agents debate and converge on answers. Production tooling still maturing; useful for prototypes and research more than production.
The Agent2Agent (A2A) protocol. JSON-RPC over HTTP, Agent Card discovery at /.well-known/agent-card.json, streaming and push notifications. Released by Google, now under Linux Foundation governance. The cross-framework interop layer that makes "ADK orchestrates LangGraph and CrewAI" practical.
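The wire format is small enough to sketch. A minimal builder for the discovery URL and a message/send request, assuming the JSON-RPC field names from the A2A spec at time of writing; the host and message text are placeholders, so check the current schema before relying on the exact shape:

```python
import json

def agent_card_url(base_url: str) -> str:
    # A2A discovery: the Agent Card lives at a well-known path on the agent's host.
    return base_url.rstrip("/") + "/.well-known/agent-card.json"

def message_send_request(text: str, request_id: int = 1) -> dict:
    # JSON-RPC 2.0 envelope; "message/send" is the core A2A method for
    # handing a user message to a remote agent. Field names follow the
    # spec as of this writing -- verify against the current version.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

req = message_send_request("Summarise today's site diary")
print(agent_card_url("https://agents.example.com"))
print(json.dumps(req)[:60])
```

Transport is plain HTTP POST of that payload; streaming and push notifications layer on top of the same envelope.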
The Model Context Protocol (MCP), the tool / data protocol. Anthropic-originated, broadly adopted. Lets any framework consume tools and data from any MCP server. ADK's MCPToolset, Claude Desktop's MCP integrations, Cursor's MCP support — all the same protocol. The lingua franca of agent tooling in 2026.
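MCP is also JSON-RPC 2.0 under the hood; tools/list and tools/call are the core tool methods. A sketch of a tools/call payload, with a hypothetical tool name and arguments invented for illustration:

```python
import json

def tools_call_request(tool: str, arguments: dict, request_id: int = 1) -> dict:
    # Builds the JSON-RPC payload for invoking an MCP tool. This is the
    # wire shape only -- transport (stdio or HTTP) is up to the client.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# "query_warehouse" is a hypothetical tool an MCP server might expose.
req = tools_call_request("query_warehouse", {"sql": "SELECT 1"})
print(json.dumps(req, indent=2))
```

Because every framework speaks this same shape, the server side is written once and consumed from ADK, Claude Desktop, or Cursor unchanged.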
Google's open-weights family from the Gemini research line. Gemma 3 (March 2025) ships at 270M / 1B / 4B / 12B / 27B with multimodal vision-text and 128k context. The strongest "frontier-lab safety tuning" choice in open weights; runs locally via Ollama or in production via vLLM and Vertex AI Model Garden.
Anthropic's flagship family. Claude Opus / Sonnet / Haiku tiers. Closed model, frontier reasoning, native MCP and tool-use, extended thinking. The default for production agents that need long-context reasoning and reliable function-calling.
OpenAI's flagship family. GPT-5 / GPT-4.x / o-series reasoning models. Closed model, native function-calling, broadest API ecosystem, embedding and DALL-E integrations. The default for teams already on OpenAI infrastructure.
Meta's open-weights family. Largest community fine-tune ecosystem; the Llama 3.x series ships dense models at 8B / 70B / 405B, with the Llama 4 series moving to mixture-of-experts. The most-used open-weights workhorse for non-multimodal tasks; the Meta licence carries commercial restrictions above 700M MAU.
The two strong "second-tier" open-weights families. Mistral Large 2 leads on function-calling reliability and European multilingual; Qwen 2.5 / Coder leads on code-specific work. Often the right pick when Gemma or Llama don't fit the use-case profile.
Local LLM runtime. MIT-licensed, llama.cpp under the hood, OpenAI-compatible API at localhost:11434. One command (ollama run gemma3:4b) and the model is serving. The default for laptop / single-machine / dev work. POPIA-friendly, FX-free, load-shedding-resilient.
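Because the API is OpenAI-compatible, any OpenAI-style client can target the local daemon. A sketch of the endpoint and request body (the model name and prompt are examples; nothing is sent here):

```python
OLLAMA_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def chat_request(model: str, prompt: str) -> tuple[str, dict]:
    # Same URL path and body shape the OpenAI chat API expects, so an
    # OpenAI client library pointed at OLLAMA_BASE works unchanged.
    url = f"{OLLAMA_BASE}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = chat_request("gemma3:4b", "Name three POPIA data-subject rights.")
print(url)
```

In practice you would hand OLLAMA_BASE to your client as its base URL and keep the rest of the code identical to the hosted case.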
Production-grade inference server for open-weights models. PagedAttention, continuous batching, OpenAI-compatible API. The default when you've moved past Ollama and need real throughput on a dedicated GPU host.
Proxy / SDK that gives every model an OpenAI-compatible interface. ADK uses it under the hood for non-Gemini models. The seam where you can swap a Claude call for a local Ollama call by changing one config line.
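The seam is visible in the model string itself. A sketch of LiteLLM-style provider routing, assuming the provider-prefix convention; the model names are illustrative examples:

```python
def provider_of(model: str) -> str:
    # LiteLLM encodes the provider as a prefix on the model string, so
    # swapping providers is a one-line config change; the call itself
    # (litellm.completion(model=..., messages=...)) stays identical.
    prefix, _, _ = model.partition("/")
    return prefix if "/" in model else "openai"  # bare names default to OpenAI-style

print(provider_of("ollama/gemma3:4b"))        # ollama
print(provider_of("gemini/gemini-2.0-flash")) # gemini
print(provider_of("gpt-4o"))                  # openai
```

Swapping a Claude call for a local Ollama call really is just that prefix changing in config, which is what makes LiteLLM useful as ADK's escape hatch for non-Gemini models.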
Hosted multiplexer over hundreds of models from dozens of providers. One API key, one billing relationship, every frontier model. The pragmatic choice for teams that want vendor flexibility without managing N billing relationships.
Three cross-cutting themes show up in every framework page in the sub-tree. They're the opinions that determine how each framework is evaluated, what gets called a strength vs a weakness, and which decisions in the field actually matter.
Code-first definitions, version control, unit tests, evaluation harnesses, CI/CD. Frameworks that treat agent development as software engineering compound; ones that treat it as prompt-tweaking don't. This is the bar every leaf in the sub-tree is held to.
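What that bar looks like in practice: a hypothetical agent tool written as a pure function and unit-tested before any model is in the loop. All names here are invented for illustration:

```python
import re

def parse_rfi_reference(text: str) -> dict:
    # A hypothetical agent tool: typed, deterministic, side-effect-free.
    # Extracts an RFI number like "RFI-042" from free text.
    m = re.search(r"RFI-(\d+)", text)
    if not m:
        return {"found": False}
    return {"found": True, "number": int(m.group(1))}

# The unit test exercises the tool contract directly, no LLM call needed;
# the same function is what the framework registers as a tool.
assert parse_rfi_reference("Re: RFI-042 slab penetrations")["number"] == 42
assert parse_rfi_reference("no reference here") == {"found": False}
print("tool contract tests pass")
```

Tools that are testable like this slot into CI/CD and evaluation harnesses; tools buried in prompt strings don't.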
A2A is now Linux Foundation. MCP is broadly adopted across vendors. OpenTelemetry GenAI semantic conventions are the tracing baseline. When a framework chooses these standards over vendor-specific protocols, that's signal — it expects to coexist with frameworks it doesn't own.
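A sketch of what the tracing baseline means concretely: the span attributes a GenAI instrumentation would emit, using attribute keys from the OpenTelemetry GenAI semantic conventions. Those conventions are still marked experimental, so verify the keys against the current semconv; the values here are made up:

```python
def genai_span_attributes(system: str, model: str, in_tok: int, out_tok: int) -> dict:
    # Attribute keys from the OTel GenAI semantic conventions; any backend
    # that understands the conventions can aggregate these across frameworks.
    return {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": system,
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": in_tok,
        "gen_ai.usage.output_tokens": out_tok,
    }

attrs = genai_span_attributes("anthropic", "claude-sonnet", 812, 164)
print(sorted(attrs))
```

The point of the shared keys is exactly the coexistence signal above: traces from ADK, LangGraph, and a Claude agent land in one backend with one schema.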
By mid-2026, most teams in production will run two or more agent frameworks. The interesting question isn't "which framework?" but "which framework where, and how do they talk?". A2A and MCP are the answer, which is why they sit in the sub-tree as first-class leaves alongside the frameworks themselves.
Agents are useful in proportion to the data they can read and the systems they can act on. The Agents sub-tree assumes the Data domain (BI, warehousing, engineering) and Business / Construction domains (ERP, CRM, BIM) are already in place. Without that substrate, agent demos look impressive and produce nothing.
The Agents domain is the orchestration layer over the rest of the tree. Data feeds the agents (BI tools and warehouses they query). Business and Construction systems are what the agents act on (ERPs, CRMs, BIM models). Tech provides the runtime substrate (Node, Bun, Cloudflare Workers, Google Cloud, Frappe-as-API). Design covers the surfaces agents render. People is the human-loop side — coaching, leadership, review.