know.2nth.ai Agents
agents · Top-level domain hub

Frameworks, protocols, models, inference. The full stack.

The Agents domain is the operational heart of know.2nth.ai going forward. Now that the data layer and systems-of-record are in place, agents are how that infrastructure turns into work. The full stack lives here: frameworks (how agents orchestrate), protocols (how they talk to each other and to tools), models (what they actually call), and inference platforms (where models run). All four layers matter. Most production agent stacks in 2026 have a strong opinion at every layer.

6 Frameworks · 2 Protocols · 5 Models · 4 Inference · 8 Live leaves
01 · Skills

The agentic stack, mapped.

Each leaf follows the same content shape: hero + what-it-is + how-it-works + ecosystem + decision-guide + SA-context + connections + resources. Four bands organise the sub-tree: orchestration frameworks, the interop protocols that connect them, the models they call, and the inference platforms that run those models.

Frameworks · 6
Google ADK
Live
agents/google-adk

Google's open-source, code-first agent framework. Apache 2.0, Python + TS + Go + Java, native A2A and MCP, one-command deploy to Cloud Run / GKE / Vertex AI Agent Engine. Optimised for Gemini, model-agnostic via LiteLLM. The hub of the Google agent stack.

LangGraph
Live
agents/langgraph

The graph-based orchestration framework from the LangChain team. Explicit state machines, the largest tool ecosystem via LangChain, LangSmith observability, the de facto "I want full control over the graph" choice. Higher boilerplate; steeper learning curve.
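The explicit-state-machine idea can be shown in plain Python. This is a pattern sketch only, not LangGraph's API — LangGraph expresses the same structure through StateGraph with add_node and add_edge; the node names and state keys here are invented for illustration:

```python
from typing import Callable

# State flows through the graph as a plain dict; each node returns a new one.
State = dict
END = "END"  # sentinel terminal node

def draft(state: State) -> State:
    # Illustrative node: produce a draft answer from the question.
    return {**state, "draft": f"answer to: {state['question']}"}

def review(state: State) -> State:
    # Illustrative node: approve any non-empty draft.
    return {**state, "approved": len(state["draft"]) > 0}

# The graph is explicit data: nodes plus edges, inspectable and testable.
nodes: dict[str, Callable[[State], State]] = {"draft": draft, "review": review}
edges: dict[str, str] = {"draft": "review", "review": END}

def run(state: State, entry: str = "draft") -> State:
    node = entry
    while node != END:
        state = nodes[node](state)
        node = edges[node]
    return state

result = run({"question": "what is 2+2?"})
```

Making the graph explicit data, rather than control flow buried in prompts, is what buys checkpointing, replay, and inspection — the trade LangGraph makes in exchange for the boilerplate.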

CrewAI
Soon
agents/crewai

Role-based "team" metaphor for multi-agent systems. Fastest path from zero to a working multi-agent prototype. Limited checkpointing, less suited to dynamic flows — but for rapid prototyping or pitching the multi-agent concept, hard to beat.

OpenAI Agents SDK
Soon
agents/openai-agents-sdk

OpenAI's first-party agent framework. Tight integration with OpenAI models, built-in tracing, native function-calling. The default if you're already on OpenAI and want minimal setup; the constraint is OpenAI-only model support.

Anthropic Agent SDK
Live
agents/anthropic-agent-sdk

Claude-first agent framework. MCP-native, extended thinking integration, sub-agent orchestration via Claude Code's task model. The default if Claude is the model; the constraint is Claude-only.

AutoGen / AG2
Soon
agents/autogen

Microsoft's conversational multi-agent framework, now community-maintained as AG2. Strongest at "research workflow" patterns where agents debate and converge on answers. Production tooling is still maturing; better suited to prototypes and research than to production.

Protocols · 2
Models · 5
Gemma
Live
agents/gemma

Google's open-weights family from the Gemini research line. Gemma 3 (March 2025) ships at 270M / 1B / 4B / 12B / 27B with multimodal vision-text and 128k context. The strongest "frontier-lab safety tuning" choice in open weights; runs locally via Ollama or in production via vLLM and Vertex AI Model Garden.

Claude
Live
agents/claude

Anthropic's flagship family. Claude Opus / Sonnet / Haiku tiers. Closed model, frontier reasoning, native MCP and tool-use, extended thinking. The default for production agents that need long-context reasoning and reliable function-calling.

GPT · OpenAI
Soon
agents/gpt

OpenAI's flagship family. GPT-5 / GPT-4.x / o-series reasoning models. Closed model, native function-calling, broadest API ecosystem, embedding and DALL-E integrations. The default for teams already on OpenAI infrastructure.

Llama
Soon
agents/llama

Meta's open-weights family. Largest community fine-tune ecosystem; Llama 3.3 / 4 series ship at 8B / 70B / 405B. The most-used open-weights workhorse for non-multimodal tasks; the Meta licence carries commercial restrictions above 700M MAU.

Mistral / Qwen
Soon
agents/mistral-qwen

The two strong "second-tier" open-weights families. Mistral Large 2 leads on function-calling reliability and European multilingual coverage; Qwen 2.5 / Coder leads on code-specific work. Often the right pick when Gemma or Llama don't fit the use-case profile.

Inference · 4
Ollama
Live
agents/ollama

Local LLM runtime. MIT-licensed, llama.cpp under the hood, OpenAI-compatible API at localhost:11434. One command — ollama run gemma3:4b — and the model is running. The default for laptop / single-machine / dev work. POPIA-friendly, FX-free, load-shedding-resilient.
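Because the API is OpenAI-compatible, any OpenAI-style client can point at it. A minimal stdlib sketch of the request shape — the endpoint path follows Ollama's documented OpenAI-compatibility layer, and the model tag is the one from this card:

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint (assumes Ollama is serving locally).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "gemma3:4b",
    "messages": [{"role": "user", "content": "Summarise POPIA in one sentence."}],
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")

# Sending is one more line (not executed here, since it needs a live server):
# req = urllib.request.Request(
#     OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
# resp = json.load(urllib.request.urlopen(req))
```

The same payload shape works against vLLM, LiteLLM, or OpenRouter by changing only the base URL — which is the portability argument running through this whole band.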

vLLM
Soon
agents/vllm

Production-grade inference server for open-weights models. PagedAttention, continuous batching, OpenAI-compatible API. The default when you've moved past Ollama and need real throughput on a dedicated GPU host.

LiteLLM
Soon
agents/litellm

Proxy / SDK that gives every model an OpenAI-compatible interface. ADK uses it under the hood for non-Gemini models. The seam where you can swap a Claude call for a local Ollama call by changing one config line.
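A sketch of what that one-config-line swap can look like in a LiteLLM proxy config. The model_list / litellm_params shape follows LiteLLM's proxy config format; the alias, model names, and api_base values are illustrative, not prescriptive:

```yaml
model_list:
  # Agent code always calls the alias "agent-model"; only this file changes.
  - model_name: agent-model
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514   # hosted Claude

  # The local alternative: same alias shape, pointed at Ollama instead.
  - model_name: agent-model-local
    litellm_params:
      model: ollama/gemma3:4b
      api_base: http://localhost:11434
```

The agent framework sees one stable OpenAI-compatible interface either way; the hosted-versus-local decision collapses into which entry the alias resolves to.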

OpenRouter
Soon
agents/openrouter

Hosted multiplexer over hundreds of models from dozens of providers. One API key, one billing relationship, access to most frontier models. The pragmatic choice for teams that want vendor flexibility without managing N billing relationships.

02 · Principles

The four positions every Agents leaf inherits.

These show up in every framework page in the sub-tree. They're the opinions that determine how each framework is evaluated, what gets called a strength vs a weakness, and which decisions in the field actually matter.

Agents are software, not prompt soup

Code-first definitions, version control, unit tests, evaluation harnesses, CI/CD. Frameworks that treat agent development as software engineering compound; ones that treat it as prompt-tweaking don't. This is the bar every leaf in the sub-tree is held to.
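A toy sketch of what that bar means in practice: agent behaviour pinned by assertions that run in CI like any other unit test. Here fake_agent and the eval cases are stand-ins — in a real harness the function body would be a framework call (ADK, LangGraph, etc.) and the cases would come from a versioned eval set:

```python
def fake_agent(prompt: str) -> str:
    # Stand-in for a real agent invocation; replace with a framework call.
    return "PAID" if "invoice" in prompt.lower() else "UNKNOWN"

# A versioned eval set: (prompt, expected answer) pairs checked on every commit.
EVAL_CASES = [
    ("Has invoice 1042 been paid?", "PAID"),
    ("What colour is the sky?", "UNKNOWN"),
]

def run_evals() -> tuple[int, int]:
    passed = sum(1 for prompt, want in EVAL_CASES if fake_agent(prompt) == want)
    return passed, len(EVAL_CASES)

passed, total = run_evals()
```

The point is structural, not the toy logic: once agent behaviour is a function under test, regressions surface in CI instead of in production, which is exactly the software-engineering compounding the principle describes.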

Open standards over vendor lock-in

A2A is now Linux Foundation. MCP is broadly adopted across vendors. OpenTelemetry GenAI semantic conventions are the tracing baseline. When a framework chooses these standards over vendor-specific protocols, that's signal — it expects to coexist with frameworks it doesn't own.

The interop layer is the question

Most teams in production by mid-2026 will run two or more agent frameworks. The interesting question isn't "which framework?" but "which framework where, and how do they talk?". A2A and MCP are the answer, which is why they sit in the sub-tree as first-class leaves alongside the frameworks themselves.
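To make "how do they talk" concrete, an abridged, illustrative agent card of the kind A2A peers publish for discovery — the field names follow the shape of the A2A agent-card spec, but the values here are invented:

```json
{
  "name": "invoice-agent",
  "description": "Answers invoice-status questions against the ERP.",
  "url": "https://agents.example.com/invoice",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "invoice-status",
      "name": "Invoice status",
      "description": "Look up payment state for an invoice number."
    }
  ]
}
```

A LangGraph agent and an ADK agent that each publish a card like this can delegate tasks to one another without sharing a framework — which is why the protocols sit in this sub-tree as first-class leaves.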

Decisions follow data and systems-of-record

Agents are useful in proportion to the data they can read and the systems they can act on. The Agents sub-tree assumes the Data domain (BI, warehousing, engineering) and Business / Construction domains (ERP, CRM, BIM) are already in place. Without that substrate, agent demos look impressive and produce nothing.

03 · Related branches

Where Agents connects.

Agents is the orchestration layer over the rest of the tree. Data feeds the agents (BI tools and warehouses they query). Business and Construction systems are what the agents act on (ERPs, CRMs, BIM models). Tech provides the runtime substrate (Node, Bun, Cloudflare Workers, Google Cloud, Frappe-as-API). Design covers the surfaces agents render. People is the human-loop side — coaching, leadership, review.