An edge runtime built on V8 isolates, paired with a git-driven static host. Together they replaced a surprising amount of the origin-server stack — but only for workloads that fit inside their constraints. Here's what the pieces actually are, and when to pick something else.
Workers is a JavaScript and WebAssembly runtime that executes on Cloudflare's network — the same network that already answers DNS, terminates TLS, and caches assets for roughly a fifth of the web. Each request lands in whichever of Cloudflare's edge locations is closest to the user, and your code runs there instead of in a region on the other side of the ocean.
Pages is the static-site sibling: connect a Git repo, Cloudflare builds it, and the output sits on that same network. When you need a dynamic endpoint, a file under functions/ compiles into a Worker route on the same deploy. Pages + Functions is, in practice, how most teams start — the "serverless" bit arrives as a side effect of deploying a site.
The problem this solves isn't exotic. It's the boring one: every request to a single-region origin is a round trip, every container cold start is a visible delay, and every byte leaving your cloud has a price on it. Workers moves the code to where the user already is; Pages moves the assets there too.
None of these are marketing adjectives. They're the reasons a particular class of workload — latency-sensitive, globally distributed, low-state — moved off traditional clouds and onto this one.
Most of Workers' behaviour — good and bad — falls out of three design decisions: how it runs code, how it serves static assets alongside it, and how it gives code access to state.
Each Worker runs in a V8 isolate — the same primitive a Chrome tab uses. An isolate spins up in roughly a millisecond, shares a single V8 process with thousands of peers, and has no filesystem, no Node.js, no native modules. That's how you get sub-millisecond cold starts and per-request pricing. It's also why you can't `require('sharp')` or shell out to ImageMagick.
```js
// Runs in a long-lived V8 isolate, reused across requests.
// No container. No VM. No measurable cold start.
export default {
  async fetch(req, env, ctx) {
    const { colo } = req.cf;
    return new Response(`hello from ${colo}`);
  }
};
```
Pages compiles your site from a Git repo and serves the output from every edge location. Drop a file under functions/ and that route becomes a Worker on the same deploy, with automatic preview URLs per pull request. For the common case — static site with a handful of dynamic endpoints — you skip most of the serverless config ceremony entirely.
```ts
// functions/api/hello.ts — becomes /api/hello
export const onRequest: PagesFunction = async (ctx) => {
  const body = { ok: true, city: ctx.request.cf?.city };
  return Response.json(body);
};
```
Workers doesn't give your code an SDK for KV or D1 — it gives it a binding. You declare the resources you need in wrangler.toml and they arrive as properties on the env argument. No credentials to rotate, no connection strings, no client libraries to keep in sync. The primitives you get are KV (eventual key-value), D1 (SQLite), R2 (S3-compatible object storage with zero egress), Durable Objects (stateful, single-writer), and Queues.
```toml
# wrangler.toml
name = "edge-api"
main = "src/index.ts"

[[kv_namespaces]]
binding = "CACHE"
id = "a1b2..."

[[d1_databases]]
binding = "DB"
database_name = "prod"

[[r2_buckets]]
binding = "ASSETS"
bucket_name = "uploads"
```
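A hedged sketch of what those bindings look like from the other side, in the env argument. The CACHE and DB names follow the wrangler.toml above; the greetings table and the caching policy are illustrative, and minimal structural types stand in for @cloudflare/workers-types so the snippet is self-contained:

```typescript
// Structural stand-ins for the real @cloudflare/workers-types interfaces.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}
interface D1Database {
  prepare(sql: string): {
    bind(...args: unknown[]): { first<T>(): Promise<T | null> };
  };
}
interface Env {
  CACHE: KVNamespace;
  DB: D1Database;
}

const worker = {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // KV first: eventually consistent, but a local read at every location.
    const cached = await env.CACHE.get("greeting");
    if (cached !== null) return new Response(cached);

    // Cache miss: fall through to D1 (SQLite at the edge).
    const row = await env.DB
      .prepare("SELECT message FROM greetings WHERE id = ?")
      .bind(1)
      .first<{ message: string }>();
    const message = row?.message ?? "hello";

    // Repopulate KV so the next request skips the query (60 s TTL).
    await env.CACHE.put("greeting", message, { expirationTtl: 60 });
    return new Response(message);
  },
};

export default worker;
```

The read-through shape here is the common one: KV absorbs repeat reads at every location, and D1 only sees the misses.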
Edge compute isn't a category of one. Here's how the main options compare on the dimensions that usually decide the choice.
| Platform | Runtime | Cold start | Edge state | Best for |
|---|---|---|---|---|
| Cloudflare Workers | V8 isolate | ~0ms | KV · D1 · R2 · DO · Queues | Global APIs, static + dynamic sites, auth middleware |
| Vercel Edge Functions | V8 isolate (on Cloudflare) | ~0ms | Vercel KV, Postgres (regional) | Next.js-first teams already on Vercel |
| Deno Deploy | Deno (V8 + Rust) | ~0ms | Deno KV | TypeScript-first, standards-aligned APIs |
| AWS Lambda@Edge | Node on CloudFront | 100–300ms | None at edge | Teams deep in AWS willing to pay the latency cost |
| Fastly Compute | WebAssembly (Wasmtime) | ~1ms | KV Store | Language flexibility, polyglot shops |
| Netlify Edge Functions | Deno (on Deno Deploy) | ~0ms | Blob Store | Netlify-hosted sites needing light edge logic |
Skipping the theoretical. These are the workloads where Workers or Pages is usually the first answer rather than a compromise.
Auth check, rate limit, cache read, request coalescing — all at the edge — before a single packet hits your origin. The classic Worker shape.
Validate tokens, enrich with claims, reject unauthenticated traffic without letting it near your expensive app servers. Pairs well with Cloudflare Access.
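A hedged sketch of that gate shape. Nothing here is a real validation scheme; the Bearer-prefix check is a placeholder for whatever token verification (JWT signature, Access JWT) the deployment actually uses:

```typescript
const gate = {
  async fetch(req: Request): Promise<Response> {
    const auth = req.headers.get("Authorization") ?? "";
    if (!auth.startsWith("Bearer ")) {
      // Rejected at the edge: the origin never sees this request.
      return new Response("unauthorized", { status: 401 });
    }
    // Real token validation and claims enrichment would happen here
    // before forwarding the request to the origin unchanged.
    return fetch(req);
  },
};

export default gate;
```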
Bucket users, set cookies, rewrite paths. Because the decision happens at the edge, cached variants stay cached and the experiment adds no origin round trip.
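A hedged sketch of that bucketing, assuming an illustrative ab_bucket cookie and /variant-a, /variant-b path prefixes; the bucket choice is split into a pure function so it is easy to reason about:

```typescript
type Bucket = "a" | "b";

// Pure part: read the sticky bucket from the Cookie header, or assign one.
function bucketFor(cookieHeader: string | null): { bucket: Bucket; fresh: boolean } {
  const match = cookieHeader?.match(/ab_bucket=(a|b)/);
  if (match) return { bucket: match[1] as Bucket, fresh: false };
  return { bucket: Math.random() < 0.5 ? "a" : "b", fresh: true };
}

const ab = {
  async fetch(req: Request): Promise<Response> {
    const { bucket, fresh } = bucketFor(req.headers.get("Cookie"));

    // Rewrite the path so each variant is cached independently.
    const url = new URL(req.url);
    url.pathname = `/variant-${bucket}${url.pathname}`;
    const upstream = await fetch(new Request(url.toString(), req));

    // Clone so headers are mutable, then pin the bucket for next time.
    const res = new Response(upstream.body, upstream);
    if (fresh) {
      res.headers.append("Set-Cookie", `ab_bucket=${bucket}; Path=/; Max-Age=2592000`);
    }
    return res;
  },
};

export default ab;
```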
Pages handles the site, Functions handles the contact form and the webhook, and a single git push deploys globally in under a minute. No Kubernetes required.
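A hedged sketch of the contact-form half, assuming a hypothetical functions/api/contact.ts with email as the only field; the context type is spelled out inline so the snippet stands alone:

```typescript
// functions/api/contact.ts: handles POST /api/contact
export const onRequestPost = async (
  ctx: { request: Request }
): Promise<Response> => {
  const form = await ctx.request.formData();
  const email = form.get("email");
  if (typeof email !== "string" || !email.includes("@")) {
    return new Response("bad request", { status: 400 });
  }
  // Hand off to a Queues binding or an email API here.
  return Response.json({ ok: true });
};
```

Because the file lives under functions/, it deploys with the same git push as the site itself.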
Store originals in R2, transform at the edge via the Images binding, serve WebP/AVIF with no origin round trip. Zero egress cost on R2 is the kicker.
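A hedged sketch of just the R2 read path (the ASSETS binding name matches the wrangler.toml earlier; the Images transform step is omitted), with structural types standing in for the real workers-types interfaces:

```typescript
// Structural stand-ins for the R2 binding types.
interface R2ObjectBody {
  body: ReadableStream;
  httpEtag: string;
}
interface R2Bucket {
  get(key: string): Promise<R2ObjectBody | null>;
}

const images = {
  async fetch(req: Request, env: { ASSETS: R2Bucket }): Promise<Response> {
    // "/img/cat.jpg" maps to R2 key "img/cat.jpg".
    const key = new URL(req.url).pathname.slice(1);
    const obj = await env.ASSETS.get(key);
    if (!obj) return new Response("not found", { status: 404 });

    // Served from R2 with no origin round trip and no egress charge.
    return new Response(obj.body, {
      headers: { ETag: obj.httpEtag, "Cache-Control": "public, max-age=86400" },
    });
  },
};

export default images;
```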
Render React, Svelte, or Astro pages at the edge, hydrating with data from KV or D1. Faster TTFB than regional SSR for a globally distributed audience.
Workers launched as "run a bit of JS at the edge". What's interesting is how quickly that grew into a set of primitives broad enough to host full applications.
Workers is excellent for what it's excellent at, and miserable for what it isn't. The fastest way to waste a week is to force the wrong shape of workload into an isolate.
A skill node is only as valuable as the other nodes it plugs into. Here's where Workers & Pages touches the rest of the tree — and how an agent loading this context should think about those edges.
A working mental model of the Workers + Pages primitives, their performance envelope, and the places they start to hurt. Use it to answer "should this workload live at the edge?" questions without guessing, and to scaffold a Pages project with the right bindings before touching a real wrangler.toml.
When the agent-context API ships, this node will also expose deploy sequences, common wrangler errors, and the known-good shapes for KV / D1 / R2 bindings.
Official docs first, because Cloudflare's are unusually good. A few background reads for anyone who wants to understand the runtime choices rather than just use them.
Structured context bundle with deploy patterns, bindings, and wrangler.toml scaffolds. Shipping with the know.2nth.ai Worker API.