A regional map of Cloudflare's points of presence across Africa and the Middle East — where Workers execute, where requests terminate, and what the footprint means when you're deploying for users in Nairobi, Cairo, or Riyadh.
A Cloudflare point of presence (PoP) is a physical data center — racks of servers in a carrier-neutral facility or ISP cage — that terminates end-user traffic and runs the full Cloudflare stack. Unlike many CDNs where edge locations only handle static caching, Cloudflare runs every service in every data center. Workers, Pages, R2 reads, KV lookups, D1 queries, Zero Trust policy enforcement, DDoS scrubbing — all of it executes at the nearest PoP.
PoP: A physical data center running Cloudflare's full software stack. Typically sits inside an ISP's facility or an internet exchange.
Airport code: Cloudflare uses IATA airport codes to identify each PoP. NBO = Nairobi, JED = Jeddah, CAI = Cairo, JNB = Johannesburg.
Anycast: The same IP addresses are advertised from every PoP. BGP delivers each user's request to the topologically nearest location.
CF-Ray: The CF-Ray response header exposes which PoP served a given request. Useful when debugging regional routing or asymmetric latency.
For apps serving African or Middle Eastern users, the difference between a nearby PoP and backhauling to Europe is typically 100–250 milliseconds per round trip. Across a page load that makes 20 requests, that's the difference between a product that feels local and one that feels offshore.
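As a back-of-envelope sketch of that arithmetic (the model below is our illustration, not a measurement: a 20-request page load is rarely fully parallel, so assume a handful of serial "waterfall" stages, each paying the extra round trip once):

```typescript
// Back-of-envelope latency model: a page load collapses into a few serial
// waterfall stages (HTML -> CSS/JS -> API calls), and each serial stage
// pays the extra backhaul round-trip time once.
function backhaulPenaltyMs(extraRttMs: number, serialStages: number): number {
  return extraRttMs * serialStages;
}

// 150 ms of extra RTT to Europe across 4 serial stages:
const penalty = backhaulPenaltyMs(150, 4); // 600 ms of added load time
```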
Because every PoP runs Workers, serverless compute executes at the edge closest to the user — not in a single origin region. An API hosted on Workers that a Kenyan user calls runs in Nairobi. The same API called from Johannesburg runs in Johannesburg. No configuration, no region pinning, no architectural effort.
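A minimal Worker sketch makes this visible: Cloudflare attaches a `cf` object to every incoming request, and its `colo` field is the IATA code of the PoP executing the code. The `servingPop` helper here is our own, pulled out so the logic is testable outside the Workers runtime:

```typescript
// Cloudflare attaches a `cf` object to each incoming Request; `cf.colo` is
// the IATA code of the PoP executing the Worker (e.g. "NBO" in Nairobi).
type CfInfo = { colo?: string };

export function servingPop(cf: CfInfo | undefined): string {
  return cf?.colo ?? "unknown";
}

// Wired into a Worker, the same handler answers from whichever PoP the
// user's traffic landed at -- no region configuration anywhere.
export default {
  async fetch(request: Request): Promise<Response> {
    const cf = (request as unknown as { cf?: CfInfo }).cf;
    return new Response(`Served from PoP: ${servingPop(cf)}`);
  },
};
```

Deployed once, this returns "NBO" to the Kenyan caller and "JNB" to the South African one.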
Cloudflare's first African PoP opened in Johannesburg in December 2014 — the company's 30th globally. The network has since expanded to cover every major sub-region of the continent.
Kenya has the deepest East African footprint with both Mombasa (undersea cable landing) and Nairobi (capital / mobile traffic). Egypt is served from Cairo only — Alexandria has been named in expansion plans but isn't live. South Africa has three PoPs providing in-country redundancy across Gauteng, the Western Cape, and KwaZulu-Natal.
The Middle East has denser Cloudflare coverage per capita than most of Europe. The first wave landed in Doha, Dubai, Kuwait City, and Muscat in 2015. Saudi Arabia now has three in-country PoPs — more than any other country in the region.
Three PoPs cover Saudi Arabia's three economic axes: Riyadh (central, capital, government), Jeddah (west coast, Red Sea, Hajj traffic), and Dammam (east coast, Aramco, Gulf). When Jeddah came online in a network that already included Riyadh, median TCP RTT dropped 26% — evidence that adding a second in-country PoP still pays off even after a country is nominally covered.
Cloudflare doesn't assign users to PoPs. The internet does. Every PoP announces the same IP prefixes via BGP, and each user's ISP routes to the topologically nearest announcement. This is why the PoP serving a given user can shift — peering disputes, undersea cable cuts, or capacity management can all change which location "wins" for a region.
1. A user in Nairobi resolves example.com and gets a Cloudflare anycast IP. Every Cloudflare PoP on earth claims to own that IP.
2. The user's ISP selects the shortest AS path. Typically that's a peering connection into the nearest PoP — NBO or MBA in this case.
3. The PoP terminates TLS, runs Workers, serves cached assets, enforces WAF rules, and optionally fetches from origin.
4. If the nearest PoP is overloaded or under maintenance, anycast automatically drains to the next-nearest. Users rarely notice.
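You can observe the resolution step yourself through Cloudflare's public DNS-over-HTTPS JSON endpoint at cloudflare-dns.com (a real API; the parsing helper below is our own sketch). The addresses it returns are the anycast IPs that every PoP announces:

```typescript
// Shape of the JSON answer from https://cloudflare-dns.com/dns-query
// when queried with an `accept: application/dns-json` header.
// Record type 1 is an A record.
type DnsJson = { Answer?: { type: number; data: string }[] };

export function aRecords(body: DnsJson): string[] {
  return (body.Answer ?? []).filter((a) => a.type === 1).map((a) => a.data);
}

// Live lookup (Node 18+ global fetch). The returned IPs are the same from
// anywhere on earth; only the PoP that answers for them differs.
export async function resolveA(name: string): Promise<string[]> {
  const res = await fetch(
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
    { headers: { accept: "application/dns-json" } }
  );
  return aRecords((await res.json()) as DnsJson);
}
```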
The CF-Ray response header returned on any Cloudflare-fronted request ends with the serving PoP's IATA code. curl -sI https://example.com | grep -i cf-ray will tell you where your traffic actually landed.
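The same check in code: a CF-Ray value is a ray ID, a dash, then the colo code (e.g. 8c0a1b2c3d4e5f60-NBO). A small parser, as a sketch:

```typescript
// Pull the serving PoP's IATA code off a CF-Ray header value.
// CF-Ray looks like "8c0a1b2c3d4e5f60-NBO": ray ID, dash, colo code.
export function popFromCfRay(cfRay: string): string | null {
  const dash = cfRay.lastIndexOf("-");
  return dash === -1 ? null : cfRay.slice(dash + 1);
}

// Usage with Node 18+ global fetch against any Cloudflare-fronted host:
export async function whereDidILand(url: string): Promise<string | null> {
  const res = await fetch(url, { method: "HEAD" });
  const ray = res.headers.get("cf-ray");
  return ray ? popFromCfRay(ray) : null;
}
```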
Cloudflare's MEA expansion tracks the maturing of undersea cable capacity and regional peering. The timeline also tracks a shift from one-PoP-per-country coverage to intra-country redundancy.
The breadth of Cloudflare's MEA coverage changes the deployment calculus for apps targeting these users. But it's not always the right call.
2nth spans four pillars — Design, Software, Hardware, Robotics. Cloudflare's edge network is a Software-pillar concern, but it reaches further than that. Here's how the footprint reshapes what one operator can ship.
Low regional latency changes what's feasible in UX. Real-time collaborative tools, instant search, and optimistic UI patterns work in Lagos the same way they work in London. Designs no longer need to degrade gracefully for "the African market."
Workers + Pages + D1 + R2 is a complete stack that deploys globally from a single operator's laptop. For MEA-targeted apps, this replaces multi-region AWS setups that would otherwise need a dedicated SRE to maintain.
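For flavor, a single wrangler.toml can bind D1 and R2 to one Worker and ship it everywhere with `wrangler deploy` (the project name, binding names, and IDs below are placeholders, not real resources):

```toml
# Minimal Workers project config; all names here are hypothetical.
name = "mea-api"
main = "src/index.ts"
compatibility_date = "2024-09-01"

[[d1_databases]]
binding = "DB"            # available as env.DB inside the Worker
database_name = "mea-api-db"
database_id = "00000000-0000-0000-0000-000000000000"

[[r2_buckets]]
binding = "ASSETS"        # available as env.ASSETS
bucket_name = "mea-api-assets"
```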
IoT devices across the region can authenticate and report to the nearest PoP rather than backhauling to a single origin. Durable Objects provide a coordination primitive for fleet state without a regional database.
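A sketch of the coordination logic (the heartbeat shape and merge rule are our illustration; in a real Durable Object the map would live in the object's persistent storage rather than in memory):

```typescript
// Newest-timestamp-wins merge of device heartbeats. A Durable Object gives
// each fleet a single serialized instance, so this merge never races even
// when devices report through different PoPs.
type Heartbeat = { deviceId: string; ts: number; battery: number };

export function applyHeartbeat(
  fleet: Map<string, Heartbeat>,
  hb: Heartbeat
): Map<string, Heartbeat> {
  const prev = fleet.get(hb.deviceId);
  // Reports can arrive out of order; keep only the newest per device.
  if (!prev || hb.ts > prev.ts) fleet.set(hb.deviceId, hb);
  return fleet;
}
```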
Edge compute + local PoPs enables hybrid robotic systems — on-device inference for latency-critical control, edge Workers for coordination and telemetry, remote origin only for training data aggregation.
All PoP listings verified against Cloudflare's public status page. Historical dates and latency figures drawn from Cloudflare's own engineering blog.