Making technology bets: how I evaluate frameworks when the wrong choice costs a year
At a startup, picking the wrong framework is a survival threat. Here is the five-criteria scoring model I use for every significant technology decision.
Six months into FinanceOps, I needed to choose a background job framework. The candidates were BullMQ, Temporal, and our existing PostgreSQL-based queue. I spent four hours reading comparison blog posts, watching conference talks, and browsing GitHub issue trackers. At the end of those four hours, I was more confused than when I started because every source optimized for a different context and none of them were my context.
That was the last time I evaluated technology by vibes. I built a five-criteria scoring model that I now apply to every significant technology decision. It takes about 30 minutes to fill out, and it has prevented at least two decisions I would have regretted.
What felt like a purely technical decision at the time was really me learning management in disguise. It also builds on what I learned earlier in “Full-stack chaos: when the founder, PM, and head of engineering are all you.” I was figuring out, in public, that your first engineering culture is just the pile of defaults you create when nobody else is in the room yet.
The Five Criteria
Each criterion is scored from 1 to 5 and weighted by importance to our specific situation. The weights are not universal. A mature enterprise would weight them differently than a pre-seed startup. Here are ours.
- Community velocity (weight: 3x) — Is the project gaining momentum or losing it? I check GitHub stars trend, monthly commit frequency, issue response time, and whether the core team is actively employed to work on the project. A framework with declining activity is a framework you will be maintaining yourself in two years.
- Escape hatch cost (weight: 3x) — How hard is it to migrate away if this choice is wrong? A database ORM that wraps standard SQL has low escape hatch cost. A full-stack framework that owns your routing, data fetching, and deployment has high escape hatch cost. At a startup, reversibility matters more than optimality.
- Hiring pool depth (weight: 2x) — Can I find engineers who already know this technology? Niche tools with small communities mean longer hiring timelines and higher ramp-up costs. React has a deep hiring pool. Solid.js does not, regardless of its technical merits.
- AI tooling support (weight: 2x) — Do AI coding assistants (Copilot, Claude, GPT-4) produce correct code for this technology? This criterion sounds strange but it is increasingly practical. If I can get a 30% productivity boost from AI-assisted development with Tool A but not Tool B, that is a real competitive advantage. Mainstream technologies have better AI support because the models were trained on more examples.
- Time-to-first-deploy (weight: 1x) — How quickly can I go from zero to a working deployment? This matters less than the other criteria because it is a one-time cost, but it does break ties. A tool with a five-minute quick start beats a tool with a two-day setup guide when all else is equal.
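The mechanics of the model are simple enough to fit in a few lines. Here is a minimal sketch in TypeScript; the `Scores` type and `weightedTotal` helper are illustrative names, not part of any real tooling, but the weights are exactly the ones listed above:

```typescript
// Weights for each criterion, as described above.
const weights = {
  communityVelocity: 3,
  escapeHatchCost: 3,
  hiringPoolDepth: 2,
  aiToolingSupport: 2,
  timeToFirstDeploy: 1,
} as const;

type Criterion = keyof typeof weights;
type Scores = Record<Criterion, number>; // each score is 1-5

// Weighted total: sum of score * weight across all five criteria.
function weightedTotal(scores: Scores): number {
  return (Object.keys(weights) as Criterion[]).reduce(
    (sum, c) => sum + scores[c] * weights[c],
    0,
  );
}

// A candidate scoring 4, 5, 5, 5, 5 (in the order listed above):
const example: Scores = {
  communityVelocity: 4,
  escapeHatchCost: 5,
  hiringPoolDepth: 5,
  aiToolingSupport: 5,
  timeToFirstDeploy: 5,
};

console.log(weightedTotal(example)); // 4*3 + 5*3 + 5*2 + 5*2 + 5*1 = 52
```

With weights of 3, 3, 2, 2, 1 the maximum possible score is 55, so totals are easy to sanity-check by hand.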
Scorecard Example: Background Job Framework
Here is the actual scorecard I filled out for the job framework decision.
- PostgreSQL SKIP LOCKED: Community velocity 4 (PostgreSQL itself is thriving), escape hatch 5 (it is just SQL), hiring pool 5 (every backend engineer knows SQL), AI support 5 (standard SQL patterns), time-to-deploy 5 (already running). Weighted total: 52.
- BullMQ: Community velocity 4 (active project, growing), escape hatch 3 (Redis dependency, queue-specific APIs), hiring pool 3 (known in Node.js ecosystem), AI support 4 (well-documented), time-to-deploy 3 (requires Redis setup). Weighted total: 38.
- Temporal: Community velocity 5 (heavily funded, growing fast), escape hatch 1 (deep lock-in to Temporal platform and workflow model), hiring pool 1 (very niche), AI support 2 (limited training data), time-to-deploy 2 (complex self-hosted setup). Weighted total: 26.
PostgreSQL SKIP LOCKED won by a wide margin despite Temporal being the most technically sophisticated option. Temporal would have been the right choice for a team of twenty with dedicated infrastructure engineers. For a team of two at a pre-seed startup, the escape hatch cost and hiring pool depth made it a poor bet.
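For readers unfamiliar with the pattern: the heart of a PostgreSQL-backed queue is a single claim query using `FOR UPDATE SKIP LOCKED`, which lets concurrent workers grab jobs without blocking on rows another worker has already locked. This is a sketch, not our production code; the `jobs` table and its column names are assumptions for illustration:

```typescript
// Claim the oldest pending job. FOR UPDATE SKIP LOCKED is the key:
// rows locked by other workers are skipped instead of waited on, so
// concurrent workers never contend for the same job.
const claimJobSql = `
  UPDATE jobs
  SET status = 'running', started_at = now()
  WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  RETURNING id, payload;
`;

// With any Postgres client (e.g. node-postgres), a worker loop is
// just: run claimJobSql in a transaction, process the returned job,
// then mark it done or failed. No Redis, no new runtime dependency.
console.log(claimJobSql.trim());
```

That the whole mechanism fits in one query is exactly why it scored 5 on escape hatch and hiring pool: it is plain SQL against infrastructure we already ran.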
The Two Decisions That Surprised Me
I have filled out twelve scorecards since creating this model. Ten of them confirmed the choice I was already leaning toward. Two surprised me.
The first surprise was our ORM choice. I was leaning toward Drizzle ORM because of its TypeScript ergonomics and SQL-first philosophy. The scorecard favored Prisma: community velocity was higher, the hiring pool was deeper (Prisma is the default in most Next.js tutorials), and AI tooling support was noticeably better because Copilot produces more accurate Prisma code than Drizzle code. I went with Prisma and have not regretted it.
The second surprise was our deployment platform. I assumed we would use Vercel because we were building with Next.js. The scorecard showed that Vercel scored poorly on escape hatch cost (deep platform lock-in with serverless functions and edge runtime) and poorly on community velocity for self-hosted alternatives. We went with a self-managed Docker deployment behind Caddy, which scored higher on escape hatch and gave us more control over the runtime environment.
Why This Model Works for Startups
The model works because it forces you to score what matters and ignore what does not. Blog post comparisons optimize for technical elegance. Conference talks optimize for excitement. GitHub stars optimize for popularity. None of those map to “will this choice still be correct in 18 months at a startup that might pivot, grow, or die?”
The two highest-weighted criteria — community velocity and escape hatch cost — are both about the future, not the present. They answer the questions that matter most at a startup: will this tool still be maintained when I need it, and can I change my mind without rewriting the application?
That was the pattern of my first months at FinanceOps: I did not have management scar tissue yet, so I earned trust by making technical decisions that stayed boring under pressure. The same bias toward strict defaults still shows up in portfolio, pipeline-sdk, and dotfiles today.
Technology bets at a startup are not about finding the best tool. They are about finding the tool that leaves you the most options when your assumptions change. Score for reversibility, not for perfection.