My first homelab rack: a mini PC, k3s, and the itch to self-host everything
I bought a used mini PC for $180, installed k3s, and spent a weekend migrating my side projects off Vercel free tier.
It started the way every homelab story starts: I wanted to run one thing locally. Just one. A small Grafana instance to visualize some metrics from a side project. I did not want to pay for a cloud dashboard. I did not want to set up a VPS. I just wanted a process running on hardware I owned, in my apartment, accessible when I needed it. Thirty days later I had a mini PC rack-mounted in my closet running twelve services on k3s and I was shopping for a second machine.
A lot of what I got right in that first month came down to infrastructure choices that looked small from the outside. It also builds on what I learned earlier in “Standing up CI/CD from scratch with GitHub Actions and zero budget.” I was building the muscle memory that later fed the infrastructure and ctrlpane projects at home: reproducible defaults, cheap feedback loops, and enough observability that I did not need to guess under pressure.
The Hardware
I bought a used Beelink SER5 off eBay for $180, which is less than three months of the Vercel Pro plan I was about to cancel.
- CPU: AMD Ryzen 5 5560U (6 cores, 12 threads). Overkill for what I needed but the used market priced it the same as weaker options.
- RAM: 16 GB DDR4. Enough for k3s control plane plus a dozen lightweight containers. I hit 60% utilization at peak.
- Storage: 500 GB NVMe. Fast enough for container image pulls and small databases. I added a 1 TB external USB drive for persistent volumes after two weeks.
- Network: Gigabit ethernet. Wi-Fi was tempting but wired connections do not drop out when your router reboots at 2am.
- Power: 15W idle, 35W under load. My electricity cost for running this 24/7 is about $4 per month.
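The $4-per-month figure holds up to back-of-envelope math. A quick sanity check, assuming the box averages roughly 20 W between idle and load and electricity costs about $0.25/kWh (both assumptions; your draw and rate will differ):

```python
# Rough monthly electricity cost for a box idling at 15 W, peaking at 35 W.
avg_watts = 20        # assumed average draw between idle and load
rate_per_kwh = 0.25   # assumed electricity price in USD/kWh

kwh_per_month = avg_watts / 1000 * 24 * 30   # watts -> kWh over 30 days
cost = kwh_per_month * rate_per_kwh
print(f"{kwh_per_month:.1f} kWh/month -> ${cost:.2f}/month")
```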
I briefly considered a Raspberry Pi 4 but the ARM architecture means constant Docker image compatibility issues. The AMD mini PC runs standard x86 images with zero surprises. That compatibility advantage was worth the extra $100 over a Pi.
Why k3s and Not Docker Compose
Docker Compose would have been simpler for twelve services. I chose k3s for one reason: I wanted to learn Kubernetes on real hardware without the cost of a cloud cluster. FinanceOps runs on managed Kubernetes in production and I was tired of only interacting with it through Terraform and CI pipelines. I wanted to understand the system from the bottom up, and a single-node k3s cluster is the cheapest way to do that.
```
# Install k3s in 30 seconds
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --write-kubeconfig-mode 644

# Verify
kubectl get nodes
# NAME        STATUS   ROLES                  AGE   VERSION
# homelab-1   Ready    control-plane,master   30s   v1.29.3+k3s1
```

I disabled Traefik because I prefer Caddy as my ingress controller. k3s ships Traefik by default and it works fine, but I was already using Caddy for TLS termination on my side projects and did not want to learn two reverse proxies. Caddy’s automatic HTTPS with Let’s Encrypt is genuinely delightful.
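To give a sense of why Caddy won: the entire TLS-terminating reverse proxy config for one service fits in a few lines. A minimal Caddyfile sketch, where the hostname and upstream port are placeholders rather than my actual setup:

```
# Caddyfile: Caddy obtains and renews the Let's Encrypt cert automatically
grafana.example.com {
    reverse_proxy localhost:3000
}
```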
The Gotchas on a Single Node
Running k3s on a single node has quirks that the documentation does not emphasize. I hit three of them in the first weekend.
- Storage: The default local-path provisioner creates PersistentVolumes on the node filesystem. If the drive fails, everything is gone. I added nightly rsync backups to the USB drive immediately. A real cluster would use distributed storage like Longhorn.
- Resource limits matter more: On a multi-node cluster, a runaway pod gets evicted and rescheduled elsewhere. On a single node, a runaway pod can starve the control plane and you lose kubectl access. I set CPU and memory limits on every pod after this happened on day two.
- DNS resolution: k3s uses CoreDNS and it occasionally struggled when I had more than ten services with frequent DNS lookups. Increasing the CoreDNS replica count to 2 and adding a node-local DNS cache fixed it.
The resource limits lesson was the most painful. A misconfigured Node.js service with a memory leak ate 12 GB in twenty minutes and the kubelet could not respond to API calls. I had to SSH in and kill the process manually. After that, every deployment manifest includes explicit resource requests and limits. No exceptions.
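In practice the "no exceptions" rule is a `resources` stanza on every container spec. A sketch of what that looks like; the service name, image, and the specific numbers are illustrative, not a recommendation for any particular workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api          # hypothetical service name
spec:
  replicas: 1
  selector:
    matchLabels: { app: example-api }
  template:
    metadata:
      labels: { app: example-api }
    spec:
      containers:
        - name: app
          image: example/api:latest
          resources:
            requests:        # what the scheduler reserves for the pod
              cpu: 100m
              memory: 128Mi
            limits:          # hard caps; exceeding the memory limit gets the pod OOM-killed
              cpu: 500m
              memory: 512Mi
```

On a single node this is what keeps a leaking process from starving the control plane: the pod gets killed at 512 MiB instead of eating 12 GB.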
The Moment I Knew I Needed a Second Machine
Three weeks in, I wanted to run a PostgreSQL instance alongside all my application containers. PostgreSQL wants at least 2 GB of RAM for reasonable performance, and with the k3s control plane, CoreDNS, Caddy, and twelve application pods, I was already at 14 GB of 16 GB. The mini PC was full.
I ordered a second Beelink with 32 GB of RAM. The plan was to join it to the cluster as a worker node, move the stateful workloads (PostgreSQL, Grafana with persistent dashboards) to the second machine, and keep the first machine for stateless application pods. That is a story for another post, but the upgrade from single-node to two-node k3s took about forty-five minutes and worked flawlessly.
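For the curious, joining a worker is close to a one-liner on the new box: k3s exposes the join token on the server at `/var/lib/rancher/k3s/server/node-token`, and the installer switches to agent mode when `K3S_URL` is set. A sketch, with the hostname as a placeholder:

```
# On homelab-1: read the cluster join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new machine: install k3s in agent mode, pointed at the first node
curl -sfL https://get.k3s.io | \
  K3S_URL=https://homelab-1:6443 \
  K3S_TOKEN=<token from above> \
  sh -
```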
The builder phase was less glamorous than people imagine. It was mostly a series of stubborn, unfashionable choices that kept future-me out of 2 a.m. incident calls. I still make the same kind of choices inside infrastructure and ctrlpane.
A homelab is never finished. It is a perpetual side project that teaches you more about infrastructure in a month than a year of reading documentation. Start small, break things, and buy the second machine when the first one runs out of RAM.