Standing up CI/CD from scratch with GitHub Actions and zero budget
No DevOps person. No budget. I built our entire CI/CD pipeline in a single weekend using GitHub Actions free tier.
The first time I deployed FinanceOps to production, I ran the build on my laptop, copied the output to a server via scp, and restarted the process manager. It took eleven minutes and I held my breath the entire time. The second deploy went the same way. By the third, I knew that if I kept doing this manually, something catastrophic was going to happen. A missed environment variable, a build artifact from the wrong branch, a deploy during a database migration. Manual deploys do not scale past one person doing it once a week.
That Saturday morning I sat down to build a real CI/CD pipeline. The constraint was zero dollars. We had no infrastructure budget beyond our application hosting. GitHub Actions free tier gives you 2,000 minutes per month on private repos. That had to be enough.
A lot of my month-one leadership came through infrastructure choices that looked small from the outside. I was building the muscle memory that later fed the infrastructure and ctrlpane projects at home: reproducible defaults, cheap feedback loops, and enough observability that I did not need to guess under pressure.
The Pipeline Architecture
I wanted four stages and I wanted them fast. Every minute spent waiting for CI is a minute I am not shipping features. The stages, in order: lint and typecheck, run tests, build Docker image, deploy to the target environment.
```yaml
name: CI/CD

on:
  push:
    branches: [main, "feat/**"]
  pull_request:
    branches: [main]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint
      - run: pnpm typecheck

  test:
    needs: quality
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm test
        env:
          # Point tests at the Postgres service container above
          DATABASE_URL: postgres://postgres:test@localhost:5432/test
```

The quality job runs lint and typecheck as back-to-back steps in a single job. If either fails, the pipeline stops immediately; there is no point running tests against code that does not compile. The test job spins up a PostgreSQL service container so integration tests hit a real database. I refused to mock the database layer because our most critical bugs were always at the query boundary.
Docker Layer Caching That Actually Works
The first Docker build in CI took eight minutes and forty seconds. Eight minutes. Most of that time was installing npm packages inside the container because every push invalidated the layer cache. The fix was restructuring the Dockerfile to copy package.json and the lockfile before copying source code.
```dockerfile
FROM node:22-slim AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

FROM node:22-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# corepack shims do not carry across stages, so enable pnpm again here
RUN corepack enable && pnpm build

FROM node:22-slim AS runner
WORKDIR /app
# the runtime stage also needs pnpm available for CMD
RUN corepack enable
COPY --from=build /app/.next ./.next
COPY --from=build /app/public ./public
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./
EXPOSE 3000
CMD ["pnpm", "start"]
```

Combined with the GitHub Actions cache for Docker layers, the build dropped from eight minutes to ninety seconds. The dependency layer only rebuilds when the lockfile changes; source code changes rebuild only the application layer. That single optimization saved us roughly seven minutes per push.
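The "GitHub Actions cache for Docker layers" piece can be wired up with Buildx and the `gha` cache backend. A minimal sketch of a build job, where the `example-org/financeops` image name is a placeholder, not our real registry path:

```yaml
  # Build job sketch: pushes to GitHub Container Registry and reuses
  # Docker layers via the Actions cache backend (type=gha).
  build:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/example-org/financeops:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

With `mode=max`, intermediate stages (including the `deps` layer) are cached, not just the final image, which is what makes the lockfile-gated dependency layer pay off in CI.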
Deployment Strategy
For deployment, I went with the simplest approach that could work: build the Docker image, push it to GitHub Container Registry, SSH into the production server, pull the new image, and restart the container with zero downtime via a blue-green swap.
- Build and push Docker image tagged with the commit SHA.
- SSH into production via a deploy key stored in GitHub Secrets.
- Pull the new image.
- Start the new container on a different port.
- Health check the new container.
- Swap the Caddy upstream to point at the new container.
- Stop the old container.
The health check step is critical. If the new container fails to respond to a health probe within thirty seconds, the deploy rolls back automatically by leaving the old container as the active upstream. This saved us twice in the first month when a bad environment variable made it into a deploy.
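The steps above, including the health-check rollback, can be sketched as a deploy job. This is a simplified illustration: the host name, container names, ports, and the `/healthz` endpoint are assumptions, and a real script alternates ports between deploys rather than hardcoding them:

```yaml
  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Blue-green swap over SSH
        env:
          SSH_KEY: ${{ secrets.DEPLOY_KEY }}
          IMAGE: ghcr.io/example-org/financeops:${{ github.sha }}
        run: |
          install -m 600 /dev/null deploy_key
          printf '%s\n' "$SSH_KEY" > deploy_key
          ssh -i deploy_key -o StrictHostKeyChecking=accept-new deploy@prod.example.com <<EOSSH
          set -eu
          docker pull "$IMAGE"
          # Start the new container on the standby port
          docker run -d --name app-new -p 3001:3000 "$IMAGE"
          # Health check: give the new container ~30 seconds to answer
          ok=
          for i in \$(seq 1 30); do
            if curl -fsS http://localhost:3001/healthz >/dev/null; then ok=1; break; fi
            sleep 1
          done
          if [ -z "\$ok" ]; then
            # Roll back: the old container never stopped serving
            docker rm -f app-new
            exit 1
          fi
          # Swap the Caddy upstream to the new port, then retire the old container
          sed -i 's/127.0.0.1:3000/127.0.0.1:3001/' /etc/caddy/Caddyfile
          caddy reload --config /etc/caddy/Caddyfile
          docker rm -f app-current
          docker rename app-new app-current
          EOSSH
```

The important property is ordering: the old container is only stopped after the health check passes and the proxy has been repointed, so a failed deploy leaves production untouched.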
Six Months and Zero Dollars Later
We averaged about 1,400 CI minutes per month, well within the free tier. The pipeline caught 23 type errors, 8 lint violations, and 4 test failures before they reached production. Every one of those would have been a production bug without CI. The total time investment was one weekend to build it and about two hours per month to maintain it.
That was the pattern of my first months at FinanceOps: I did not have management scar tissue yet, so I earned trust by making technical decisions that stayed boring under pressure. The same bias toward strict defaults still shows up in portfolio, pipeline-sdk, and dotfiles today.
The best CI/CD pipeline is the one you actually build. A simple pipeline that runs on every push beats a sophisticated pipeline that lives in a planning document.
We have since added preview deployments for pull requests and Slack notifications on deploy failures, but the core pipeline is still the same four stages I built that Saturday. It costs nothing, runs in under three minutes, and has never let a broken build reach production.
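The Slack notification is a small follow-on job. A hedged sketch using slackapi/slack-github-action, where the secret name and the `deploy` job name are assumptions:

```yaml
  notify-failure:
    needs: deploy
    if: failure()
    runs-on: ubuntu-latest
    steps:
      - uses: slackapi/slack-github-action@v2
        with:
          webhook: ${{ secrets.SLACK_WEBHOOK_URL }}
          webhook-type: incoming-webhook
          payload: |
            text: "Deploy failed for ${{ github.repository }} at ${{ github.sha }}"
```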
Looking back, the zero-budget constraint forced better engineering decisions than any tool evaluation would have produced. We learned to treat CI configuration as production code, with the same review standards and testing expectations. The pipeline we built in those first weeks survived eighteen months of team growth and feature velocity without a major rewrite. Sometimes the best infrastructure decision is the one you can fully understand and maintain yourself.