
Kubernetes 1.33 and the features that finally made me stop questioning container orchestration for small teams

Every six months someone says Kubernetes is overkill for a small team. Kubernetes 1.33 continued a trend of sane defaults and reduced overhead that invalidates that argument for teams deploying more than two services.

Kubernetes 1.33 released on April 23, 2025, and I upgraded our k3s staging cluster the same day. This is the fifth Kubernetes release I have deployed since starting at FinanceOps, and each one has reinforced the same conclusion: Kubernetes has gotten dramatically easier to operate for small teams, and the “too complex for small teams” argument is increasingly outdated.

This is where the homelab stopped being a hobby and started acting like a leadership tool. It also builds on what I learned earlier in “The on-call rotation that was just me, and why I finally admitted that was not sustainable.” The infrastructure and ctrlpane work gave me a cheap place to pressure-test release habits, GitOps discipline, and failure modes before I asked the team to trust those defaults at work.

The Complexity Argument

The argument against Kubernetes for small teams goes like this: Kubernetes has a steep learning curve, requires significant operational expertise, and adds infrastructure complexity that a four-person team cannot afford. You should use something simpler like Docker Compose, a PaaS like Railway or Fly.io, or just deploy to a VM with a process manager.

This argument was reasonable in 2020. It was debatable in 2023. In 2025, with k3s, sane defaults, and a mature ecosystem of Helm charts, it is wrong for any team that deploys more than two services. The complexity of Kubernetes has not decreased, but the operational overhead for standard use cases has been automated away.

The real question was never “is Kubernetes complex?” It was “is Kubernetes more complex than the alternative?” When your alternative is Docker Compose with manual service discovery, hand-rolled health checks, no rolling deploys, no automatic restarts, no resource limits, and no audit trail for changes, Kubernetes is not adding complexity. It is replacing ad hoc complexity with structured complexity that has documentation, tooling, and a community.
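To make that concrete, here is a sketch of what those ad hoc pieces look like once declared in Kubernetes. The names, image tags, ports, and resource numbers are all illustrative, not from a real manifest:

```yaml
# Hypothetical manifest: health checks, restarts, resource limits,
# and service discovery, declared in one place instead of hand-rolled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:        # replaces a hand-rolled health check
            httpGet:
              path: /healthz
              port: 8080
          resources:             # explicit limits instead of contention
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web          # other pods reach this as http://web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Crashed containers restart automatically under the default restart policy, and updating the image tag triggers a rolling deploy with no extra tooling.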

What Kubernetes 1.33 Improved

Each Kubernetes release has been moving toward better defaults and reduced configuration burden. Kubernetes 1.33 continued this trend with several changes that directly affect small team operations.

  • Sidecar containers reached stable: init containers declared with restartPolicy: Always now start before the main container and keep running for the full pod lifecycle. This simplifies patterns like log shipping and proxy injection that previously required fragile workarounds.
  • In-place resource resizing for pods graduated to beta and is enabled by default, allowing you to change a container's CPU and memory allocations without restarting the pod. For a team that is still tuning resource requests, this means fewer unnecessary restarts.
  • Job success policy improvements make batch processing more reliable by letting you define exactly what constitutes a successful job completion.
  • Improvements to volume management reduced the number of edge cases where PersistentVolumeClaims get stuck in pending state.
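The stable sidecar pattern is just an init container with restartPolicy: Always. A minimal sketch, with placeholder image and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper          # sidecar: starts before the app and
      image: fluent/fluent-bit   # keeps running for the pod's lifetime
      restartPolicy: Always      # this field is what makes it a sidecar
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```

Before this went stable, the same pattern meant a regular container plus lifecycle hacks to control startup and shutdown ordering.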

None of these are individually game-changing. But collectively, each release removes one more sharp edge that used to require manual intervention or specialized knowledge. The Kubernetes of 2025 requires less babysitting than the Kubernetes of 2022.

Why k3s Changes the Equation

The secret weapon for small team Kubernetes adoption is not any specific Kubernetes release. It is k3s, the lightweight Kubernetes distribution that runs on a single binary and installs in 30 seconds.

  • Single binary: no separate etcd cluster, no complex multi-node control plane setup
  • Default storage class: local-path provisioner included, PersistentVolumes work out of the box
  • Default load balancer: ServiceLB included, no need for MetalLB or cloud provider integrations
  • Automatic TLS: certificates for cluster-internal communication are generated and rotated without manual setup
  • Resource footprint: runs comfortably on 2 GB of RAM for a staging environment
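As an example of how little setup the bundled storage class needs, a PersistentVolumeClaim binds against k3s's local-path provisioner out of the box. The claim name and size here are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path   # provisioner ships with k3s
  resources:
    requests:
      storage: 10Gi
```

On a stock upstream cluster, that same claim would sit in Pending until you installed and configured a provisioner yourself.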

Our staging cluster runs on a mini PC with 16 GB of RAM. It hosts our application, PostgreSQL, Kafka, ArgoCD, the Grafana observability stack, and Cloudflare Tunnel. Total resource usage is about 8 GB. The cluster has been running for six months with zero unplanned downtime. The operational overhead is approximately one hour per month for upgrades and occasional troubleshooting.

The Maturity Indicators

When should a small team adopt Kubernetes? The answer depends not on team size but on workload characteristics. Here are the indicators that told us it was time.

  • More than two services: When you have a web app, an API, a worker, and a database, Docker Compose starts requiring manual orchestration for deploys, health checks, and service discovery.
  • Need for zero-downtime deploys: If restarting a service causes user-visible downtime, you need rolling deploys. Kubernetes handles this natively.
  • Multiple environments: If you need staging and production with the same topology, Kubernetes with Kustomize overlays is cleaner than maintaining parallel Docker Compose files.
  • Audit requirements: If you need to demonstrate who deployed what and when, GitOps with ArgoCD on Kubernetes provides this automatically.
  • Resource contention: If services compete for CPU or memory and you need isolation, Kubernetes resource limits and requests are the standard solution.
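For the multiple-environments indicator, a Kustomize layout keeps one base topology with small per-environment overlays. A sketch of the structure, with hypothetical directory and patch names:

```yaml
# overlays/production/kustomization.yaml -- assumed layout:
#   base/                 shared Deployment, Service, etc.
#   overlays/staging/
#   overlays/production/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml   # e.g. bump replicas for production
images:
  - name: registry.example.com/web
    newTag: "2.1.0"             # pin the production image tag
```

Staging and production share the base manifests, so topology drift between environments becomes a reviewable diff instead of two Compose files slowly diverging.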

The Verdict

Kubernetes in 2025, especially through k3s, is the right choice for any small team that deploys more than two services and cares about reliability. The learning curve is real but it is a one-time investment. The operational overhead is lower than managing the equivalent infrastructure without orchestration. And every Kubernetes release makes the defaults better and the sharp edges fewer.

I have stopped qualifying my Kubernetes recommendation with “but only if you have a big enough team.” The qualifier now is “only if you have more than one service to manage.” For single-service deployments, a PaaS is still simpler. For everything else, k3s on a $300 mini PC gives you production-grade orchestration at homelab prices. The complexity argument needs to be retired.