Setting up k3s on a mini PC for staging and why every startup needs a junk drawer environment
A $300 mini PC running k3s gave our team a staging environment that mirrors production topology without the cloud bill. The real value was deployment confidence.
In December 2024 I bought a Beelink SER5 mini PC for $299; a comparable cloud staging environment would have cost roughly $400 a month. The mini PC was a one-time purchase that has already paid for itself. But the cost savings are not even the main benefit.
This is where the homelab stopped being a hobby and started acting like a leadership tool. It also builds on what I learned earlier in “React 19 Server Components in production: the migration nobody warns you about.” The infrastructure and ctrlpane work gave me a cheap place to pressure-test release habits, GitOps discipline, and failure modes before I asked the team to trust those defaults at work.
Why Staging Matters More Than You Think
For our first year at FinanceOps, we deployed directly from local development to production with a CI pipeline in between. No staging environment. The CI pipeline ran tests and linting but could not catch environment-specific issues like misconfigured environment variables, missing Kubernetes secrets, incorrect resource limits, or broken health check endpoints. We caught those issues in production, usually at 10 PM.
The argument against staging at a startup is that it is overhead. You have to maintain a second environment, keep it in sync with production, and deal with configuration drift. All true. But the argument for staging is that production incidents at 10 PM are more expensive than maintaining a second environment. Especially when your clients are financial institutions that expect five nines.
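For context, "five nines" leaves almost no room for late-night firefighting. A quick back-of-the-envelope calculation of the downtime budget each availability target implies:

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines            # e.g. 0.99999 for five nines
    budget = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {budget:,.1f} minutes of downtime per year")
```

Five nines works out to a little over five minutes of downtime per year, which is less time than a single 10 PM incident takes to even diagnose.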
What convinced me was not the late-night incidents themselves but the behavioral change they caused. Engineers became risk-averse about deploying on Friday afternoons, which meant features slipped to Monday, which compressed the sprint, which increased pressure midweek. The absence of staging was not just causing incidents. It was shaping our entire delivery rhythm in unhealthy ways.
The Setup
The hardware is a Beelink SER5 with an AMD Ryzen 5 5560U, 16 GB RAM, and a 500 GB NVMe SSD. It draws about 15 watts at idle and 35 watts under load. It sits on a shelf in my home office next to the router.
```bash
# Install k3s with specific options matching our production config
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable servicelb \
  --write-kubeconfig-mode 644
```
```bash
# Install nginx ingress to match production
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider-cloud/deploy.yaml
```
```bash
# Install ArgoCD for GitOps deployment matching production
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

The key decision was matching the production topology as closely as possible. We use nginx ingress in production, so the staging cluster uses nginx ingress. We use ArgoCD in production, so staging uses ArgoCD pointed at the same Git repository but with a staging overlay. The database is a single PostgreSQL instance running in the cluster, not an external managed service, because we do not need production-grade durability for staging data.
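As an illustration, the ArgoCD side of this setup can be a single Application resource pointing at a Kustomize staging overlay in the same repository production deploys from. The repository URL, paths, and names below are hypothetical placeholders, not our actual layout:

```yaml
# Hypothetical ArgoCD Application for the staging overlay.
# repoURL, path, and names are placeholders; adjust to your repo layout.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: financeops-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/financeops-deploy  # placeholder repo
    targetRevision: main
    path: overlays/staging        # Kustomize overlay with staging-only patches
  destination:
    server: https://kubernetes.default.svc   # the k3s cluster ArgoCD runs in
    namespace: financeops
  syncPolicy:
    automated:
      prune: true       # delete resources removed from Git
      selfHeal: true    # revert manual drift on the cluster
```

The staging overlay holds only the deltas from production (replica counts, resource limits, the in-cluster PostgreSQL), so the two environments stay in sync by construction rather than by discipline.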
The Junk Drawer Philosophy
The real value of the staging cluster goes beyond pre-production testing. It became our junk drawer, a place where engineers can try things without consequences. Want to test a Kubernetes resource limit change? Try it on staging. Want to experiment with a new ingress configuration? Deploy it on staging. Want to load test the reconciliation engine with synthetic data? Run it on staging at 2 AM without worrying about client impact.
- New Kubernetes manifest changes get deployed to staging first via ArgoCD
- Database migration scripts run against staging before touching production
- Environment variable changes are validated in staging where a mistake costs nothing
- Engineers can deploy their feature branches to staging for cross-browser testing before opening a PR
- Load testing runs against staging to catch performance regressions before they hit production
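The environment-variable check in particular is easy to automate before a change even reaches staging. A minimal sketch, assuming dotenv-style config files (the helper and example values here are hypothetical, not our actual tooling), that reports keys one environment declares and the other is missing:

```python
def env_keys(text: str) -> set[str]:
    """Parse KEY=value lines (dotenv-style), ignoring blanks and comments."""
    keys = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            keys.add(line.split("=", 1)[0])
    return keys

def env_drift(staging: str, production: str) -> dict[str, set[str]]:
    """Report keys present in one environment but missing from the other."""
    s, p = env_keys(staging), env_keys(production)
    return {"missing_in_staging": p - s, "missing_in_production": s - p}

# Example with inline dotenv content; real usage would read the two files.
staging_env = "DB_URL=postgres://staging\nLOG_LEVEL=debug\n"
production_env = "DB_URL=postgres://prod\nSENTRY_DSN=abc123\n"
print(env_drift(staging_env, production_env))
```

Running a check like this in CI turns "a mistake costs nothing" into "a mistake never leaves the pull request."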
The junk drawer aspect is important because it removes the fear of experimentation. Before the staging cluster, trying a new Kubernetes configuration meant either testing in production or setting up a local kind cluster that did not match production at all. Both options discouraged experimentation. The staging cluster is cheap enough to break and realistic enough to trust.
What I Would Do Differently
The mini PC has been running for three months with essentially zero downtime. The one thing I would change is adding a second mini PC for redundancy. Right now if the hardware fails, staging is gone until I replace it. For a staging environment that is acceptable. If I ever move critical infrastructure to the homelab, redundancy becomes non-negotiable.
The other adjustment is network access. Currently the staging cluster is only accessible from my home network or through a WireGuard VPN. This works for our team of three but adding Cloudflare Tunnel access is on my list for Q1 so engineers do not need VPN credentials just to check a staging deployment.
By this stage the job had changed. I was no longer just picking a tool or fixing a bug. I was carrying the blast radius across product, compliance, sales, and hiring. That is exactly why I kept pressure-testing the same lesson inside infrastructure and ctrlpane.
Every startup needs a junk drawer environment. Not a pristine staging environment that mirrors production perfectly, but a cheap, disposable place where engineers can break things, try things, and learn things without fear. The mini PC running k3s is the best $299 I have spent on infrastructure.
If your team deploys directly to production and wonders why Friday deployments feel risky, the solution is not process. It is a staging environment that makes deploying feel safe any day of the week. A single mini PC can change the entire deployment culture of a small team.