The Prisma-to-Drizzle migration we almost did and why we stayed on Prisma
We built a prototype, benchmarked it, and stayed on Prisma because the migration cost exceeded the performance gain for our actual query patterns.
Prisma 7 shipped with its pure TypeScript client, finally removing the Rust query engine that had been the source of most complaints about Prisma performance. The release was good. Our P95 query latency dropped 15% just from upgrading. But the team had already been evaluating Drizzle ORM for three months, and the Drizzle prototype was showing even better numbers.
We almost pulled the trigger on a full migration. Here is why we did not.
By this point I cared less about sounding smart and more about making the tradeoff legible. This evaluation also builds on what I learned earlier in “Upgrading to Next.js 16 inside a Payload CMS monorepo was harder than expected.” The systems had enough history that every database or eventing opinion had receipts behind it. That is the same posture I now bring to longer-lived experiments like ftryos and pipeline-sdk: if the constraint is real, say it plainly and design around it.
The Case for Drizzle
The Drizzle evaluation started because three engineers independently complained about Prisma in the same sprint retrospective. The complaints were specific:
- Prisma’s query builder does not support all PostgreSQL features natively. We were writing raw SQL for window functions, CTEs, and lateral joins. Drizzle’s SQL-like API supports these natively.
- Prisma migrations are opaque. The generated SQL is hard to review, and the migration engine makes assumptions about the migration order that sometimes produce suboptimal DDL.
- The Prisma Client generates a large amount of TypeScript that slows down the language server in large codebases.
- Drizzle’s type inference from the schema definition is more ergonomic than Prisma’s generated types for complex queries.
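To make the first complaint concrete, here is the kind of window-function query we kept dropping to raw SQL for. The table and column names are illustrative, not our real schema; with Prisma this would run through $queryRawUnsafe, while Drizzle can embed the same fragment in its query builder via its sql template tag.

```typescript
// A running-balance query using a window function. Prisma's query
// builder cannot express OVER (PARTITION BY ...) directly, so this
// whole statement lives as a raw SQL string.
const runningBalanceSql = `
  SELECT
    account_id,
    posted_at,
    amount,
    SUM(amount) OVER (
      PARTITION BY account_id
      ORDER BY posted_at
    ) AS running_balance
  FROM ledger_entries
  ORDER BY account_id, posted_at
`;

// Expected row shape for the raw query above. With Prisma this would
// be executed roughly as:
//   const rows = await prisma.$queryRawUnsafe<LedgerRow[]>(runningBalanceSql);
interface LedgerRow {
  account_id: string;
  posted_at: Date;
  amount: number;
  running_balance: number;
}
```

The type annotation on $queryRawUnsafe is a promise to the compiler, not a guarantee, which is exactly the gap the typed wrappers described later were built to close.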
These were legitimate complaints. So we built a prototype. One engineer spent two weeks porting the reconciliation service from Prisma to Drizzle. The reconciliation service is our most query-intensive component, with complex joins, aggregations, and batch operations. If Drizzle worked there, it would work everywhere.
The Benchmark Results
The Drizzle prototype was faster. Raw query execution was 20-30% faster for simple queries and 10-15% faster for complex joins. The improvement was real and consistent across our benchmark suite.
But when we measured end-to-end request latency instead of isolated query execution, the difference shrank to 3-5%. The reason: our queries are not the bottleneck. Network latency to the database, connection pool management, and application logic dominate the request lifecycle. The ORM query execution is a fraction of the total time.
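The shrink from a 20-30% query-level win to a 3-5% request-level win is just Amdahl's law. A rough sketch, with illustrative numbers rather than our exact measurements:

```typescript
// Back-of-the-envelope arithmetic: speeding up a component only helps
// in proportion to the share of total time that component occupies.
function endToEndSpeedup(queryShare: number, querySpeedup: number): number {
  // queryShare: fraction of request time spent in ORM query execution
  // querySpeedup: fractional reduction in query time (0.25 = 25% faster)
  return queryShare * querySpeedup;
}

// If query execution is ~15% of the request and Drizzle makes it
// 25% faster, the request as a whole gets only ~3.75% faster.
const gain = endToEndSpeedup(0.15, 0.25); // 0.0375
```

Plugging in our benchmark range the same way reproduces the 3-5% we observed end to end.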
This was the first signal that the migration might not be worth the cost.
The Migration Cost Analysis
We estimated the migration cost with uncomfortable precision:
- The codebase has 340 Prisma queries across 12 services. Each query needs manual conversion because the APIs are structurally different.
- Our migration history has 87 Prisma migrations. Drizzle would need a baseline migration that recreates the current schema, plus a strategy for future migrations that preserves the history.
- Three services use Prisma middleware for audit logging. Drizzle has a different extension mechanism that requires rewriting the audit layer.
- The team has two years of Prisma knowledge. Drizzle has different patterns and gotchas. The context switch cost is real.
- Estimated engineering time: 6-8 weeks for one senior engineer, or 3-4 weeks for two engineers working in parallel.
For a 3-5% improvement in end-to-end latency, we would spend six to eight weeks of senior engineering time and accept the risk of introducing bugs in the most critical data access layer of our application. The math did not work.
What We Did Instead
We stayed on Prisma and addressed the specific complaints that started the evaluation:
- For unsupported PostgreSQL features, we standardized on Prisma’s $queryRaw for complex queries and created typed wrapper functions that give us the same type safety Drizzle would have provided.
- For migration review, we added a CI check that extracts the generated SQL from Prisma migrations and presents it in a readable diff format in the PR.
- For language server performance, we configured the TypeScript project references to isolate the Prisma generated client into its own compilation unit.
- For type ergonomics, we created a set of utility types that make Prisma’s generated types easier to work with for complex includes and selects.
These fixes took two weeks total. Not six to eight.
The Decision Framework
Earlier in this story I was mostly trying to survive the blast radius myself. Here I was trying to design a system where the team did not need heroics in the first place. The same philosophy now shapes ftryos and pipeline-sdk.
Evaluate ORMs against your actual query patterns, not against benchmarks. The benchmark measures the ORM. The end-to-end measurement tells you whether the ORM is your bottleneck.
The broader lesson is about technology migrations in general. The grass-is-greener evaluation is easy. You benchmark the new thing, see it is faster, and get excited. The hard evaluation is measuring the new thing against the full cost: migration effort, team knowledge loss, risk of introducing bugs in critical paths, and the opportunity cost of what those engineers would have built instead.
Drizzle is a great ORM. For a new project, I would seriously consider it. For a two-year-old codebase with 340 queries, 87 migrations, and a team that is productive with Prisma, the migration cost exceeds the benefit. That is not a criticism of Drizzle. It is a statement about the economics of technology migrations. The right tool is often the one you already have, used better.
The migration we almost did taught us more about technology evaluation than any migration we completed. We spent two weeks benchmarking Drizzle against Prisma on our actual query patterns, profiling cold start times, and mapping the API surface differences. The conclusion was that the performance gains were real but the migration cost exceeded the benefit for our current scale. We documented the decision with benchmarks and a trigger condition — if query latency exceeds a threshold or cold starts become a bottleneck, we revisit. Knowing when not to migrate is as valuable as knowing how to migrate.