portfolio Anshul Bisen

The Cloudflare container platform changes what I would self-host

Edge containers win for stateless batch jobs and lightweight APIs. Self-hosting still wins for stateful services and observability. The line is shifting faster than expected.

Cloudflare’s container platform moved from announcement to open beta last year, and by early 2026 it has matured enough to run real workloads. Not toy demos. Real containers running at the edge with cold start times under 500ms and per-request pricing that makes traditional always-on servers look expensive for bursty workloads.

I have been running workloads on both Cloudflare’s edge and my homelab k3s cluster for over a year. The container platform forces a re-evaluation of where the boundary between self-hosted and managed should sit.


What Moves to the Edge

The workloads that benefit most from Cloudflare containers share three characteristics: they are stateless, they tolerate cold starts, and they have bursty traffic patterns.

  • Lightweight APIs that serve as glue between services. My portfolio site’s chat API is a good candidate: it processes requests independently, maintains no state between requests, and has traffic that spikes when the site is featured and drops to near-zero otherwise.
  • Batch processing that can be triggered on demand. Knowledge base generation, RSS feed parsing, and content transformation jobs that run periodically and do not need to be always-on.
  • Webhook handlers that need global availability but minimal compute. Payment processor webhooks need to respond quickly from wherever the processor sends them. Edge containers handle this naturally.
  • Image optimization and content transformation. Cloudflare already handles this with their Image Resizing product, but custom containers allow arbitrary transformation pipelines at the edge.
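The statelessness requirement is concrete: a handler like the webhook example above can be written so that every request carries everything it needs, and nothing survives between calls. A minimal sketch in Python; the secret handling and event shape are illustrative, not from any real deployment:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; in practice this comes from an env var or secret store.
WEBHOOK_SECRET = b"example-secret"

def handle_webhook(body: bytes, signature: str) -> dict:
    """Process one webhook delivery. No state survives between calls,
    which is exactly what makes this a good fit for an edge container."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return {"status": 401, "body": {"error": "bad signature"}}
    event = json.loads(body)
    # Acknowledge fast; hand real work to a queue rather than
    # holding the request open waiting on downstream services.
    return {"status": 200, "body": {"received": event.get("type", "unknown")}}
```

Every call is a pure function of its inputs, so the platform can spin instances up and down freely and route any request to any region.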

What Stays on the Homelab

The workloads that stay on my homelab k3s cluster are the ones that Cloudflare containers handle poorly or not at all:

  • Stateful services. PostgreSQL, Redis, and any service that maintains persistent connections or on-disk state. Cloudflare containers are ephemeral. State requires durability guarantees that edge computing does not provide.
  • Observability stacks. Grafana, Loki, and Tempo ingest continuous streams of data, store them durably, and query historical data. The data volume alone makes edge hosting impractical. More importantly, I want full control over my observability infrastructure.
  • Long-running background workers. The reconciliation worker that runs 24/7 processing transaction batches needs persistent compute, not per-request pricing. An always-on container on k3s is cheaper and more predictable than edge containers for sustained workloads.
  • Development and staging environments. These need to mirror production closely, including database state and service topology. Edge containers do not replicate the full environment.
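The reconciliation worker above is the archetype of a sustained workload: a loop that never exits. A hedged sketch of the shape (the batch source, and the `max_cycles` escape hatch that makes the loop testable, are made up for illustration):

```python
import time
from typing import Callable, Iterable, Optional

def run_worker(fetch_batch: Callable[[], Iterable[dict]],
               process: Callable[[dict], None],
               idle_sleep: float = 5.0,
               max_cycles: Optional[int] = None) -> int:
    """Always-on worker loop: fetch a batch, process each item, sleep when idle.
    Per-request edge pricing makes no sense here -- the process never exits.
    max_cycles exists only so the loop is testable; production passes None."""
    cycles = 0
    processed = 0
    while max_cycles is None or cycles < max_cycles:
        batch = list(fetch_batch())
        for item in batch:
            process(item)
            processed += 1
        if not batch:
            time.sleep(idle_sleep)  # back off when there is nothing to do
        cycles += 1
    return processed
```

On k3s this runs as a plain Deployment with one replica; the scheduler restarts it if it dies, and the compute it consumes is already paid for.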

The Cost Comparison

The pricing model difference is the key factor:

Edge containers: pay per request plus compute duration. For bursty workloads with long idle periods, this is dramatically cheaper than always-on compute. A webhook handler that processes 10,000 requests per day costs pennies. The same handler running 24/7 on a dedicated container costs dollars per month regardless of traffic.

Homelab k3s: fixed cost of hardware amortized over years, plus electricity. For sustained workloads that run 24/7, the homelab wins on cost. My entire cluster runs for roughly $15/month in electricity. The hardware cost amortizes to about $20/month over three years. For workloads that need always-on compute, $35/month total is hard to beat.

The crossover point comes down to sustained utilization. If a workload runs above roughly 30% utilization consistently, self-hosting is cheaper. Below roughly 10%, edge containers are cheaper. Between 10% and 30%, it depends on the specific workload characteristics.
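The crossover claim can be sanity-checked with a toy model. The per-CPU-second edge rate below is an assumed placeholder, not Cloudflare's actual price sheet; the $35/month homelab figure is the one from above:

```python
# Toy cost model: at what sustained utilization does fixed-cost self-hosting
# beat per-use edge pricing? The edge rate is an illustrative assumption.

HOMELAB_MONTHLY = 35.0           # electricity + amortized hardware, from the article
EDGE_RATE_PER_CPU_SEC = 0.00007  # assumed per-vCPU-second edge rate (not a real price)
SECONDS_PER_MONTH = 30 * 24 * 3600

def edge_monthly_cost(utilization: float) -> float:
    """Edge cost scales linearly with busy CPU-seconds; idle time is free."""
    return utilization * SECONDS_PER_MONTH * EDGE_RATE_PER_CPU_SEC

def breakeven_utilization() -> float:
    """Utilization at which edge cost equals the fixed homelab cost."""
    return HOMELAB_MONTHLY / (SECONDS_PER_MONTH * EDGE_RATE_PER_CPU_SEC)
```

With these made-up constants the breakeven lands near 19% utilization, inside the 10–30% band. The point of the exercise is the shape of the curves: one is flat, one is linear through the origin, and the only question is where they cross.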

The Architecture Shift

The practical implication is a hybrid architecture where the edge handles the request layer and the homelab handles the data layer:

  • Cloudflare Workers and containers handle HTTP routing, authentication, and request processing.
  • The homelab runs PostgreSQL, Redis, and the observability stack.
  • Cloudflare Tunnels connect the two securely without exposing ports.
  • Background processing runs on the homelab because the compute is always available and the database is local.

This is not a novel architecture. It is the standard edge-plus-origin pattern that CDNs have used for years. The difference is that the edge layer can now run arbitrary containers, not just cache rules and Workers.
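The tunnel is the piece that makes the split workable. A `cloudflared` config for the pattern looks roughly like this; the tunnel ID, hostnames, and ports are placeholders:

```yaml
# ~/.cloudflared/config.yml -- hostnames and ports are placeholders
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  # Edge Workers and containers reach the homelab data layer through these
  # hostnames; nothing on the homelab is exposed directly to the internet.
  - hostname: db.example.com
    service: tcp://localhost:5432   # PostgreSQL
  - hostname: grafana.example.com
    service: http://localhost:3000  # Grafana
  - service: http_status:404        # catch-all, required as the last rule
```

The connection is outbound-only from the homelab, so no inbound firewall rules or port forwards are needed.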

What Changes in Two Years


The line between what to self-host and what to push to the edge is moving toward the edge faster than expected. Stateless workloads have already moved. Stateful workloads will follow as edge storage matures. The question is not if but when.

Cloudflare is adding durable storage capabilities incrementally. D1 for SQLite at the edge. R2 for object storage. Workers KV for key-value. When they add a durable relational database with PostgreSQL compatibility at the edge, the argument for self-hosting databases on a homelab weakens significantly. I expect that within two years, the only workloads remaining on my homelab will be the observability stack and services where data sovereignty requires physical control. Everything else will have a credible edge-hosted alternative.

The Cloudflare container platform announcement shifted the economics of self-hosting in a way that matters for small teams. Running containers on Cloudflare eliminates the operational overhead of managing container orchestration, networking, and scaling — the work that consumes engineering time disproportionate to its value for teams under twenty engineers. The platform does not replace Kubernetes for teams that need fine-grained control over their infrastructure. But for teams that treat infrastructure as a means to shipping product rather than an end in itself, managed container platforms represent a genuine reduction in operational burden.