The quarterly planning meeting that taught me product managers and engineers solve different problems
I walked into Q2 planning with technical improvements. Product walked in with customer churn data. We talked past each other for two hours before building a framework to bridge both worldviews.
Our Q2 2025 planning meeting was supposed to take 90 minutes. It took four hours and produced nothing usable until the last 45 minutes when we finally realized why we had been arguing. I walked in with a prioritized list of technical improvements: migrate the batch pipeline to streaming, upgrade the Kubernetes cluster, implement automated canary deploys. Product walked in with customer churn data showing that three of our largest clients were at risk because of feature gaps in reporting. We spent two hours debating which list mattered more before someone pointed out that we were solving different problems.
By this phase the work was no longer "just build it." It also built on what I learned in an earlier post, "Our Kafka consumer lag crisis and why I stopped trusting 'it works on my machine' for event-driven systems." Every architecture choice had a people cost, an audit cost, and a recovery cost when production disagreed with the plan. That is roughly when the line between FinanceOps systems and projects like flowscape or ftryos got interesting to me: the design only counts if the operators can live with it.
Two Valid Worldviews
Engineers optimize for system health. When I look at the product, I see infrastructure risks, performance bottlenecks, and architectural debt that will slow us down in six months. My planning instinct is to fix the foundation so we can build faster later. This is not wrong. Our batch pipeline was genuinely limiting our ability to deliver the features clients wanted. The Kubernetes upgrade was blocking security patches. Canary deploys would reduce the blast radius of the bugs we shipped.
Product optimizes for user outcomes. When our PM looked at the same product, she saw clients who were about to churn because their reporting needs had outgrown what we offered. She had call recordings where clients explained exactly what was missing. She had data showing that $180,000 in annual revenue was at risk in the next 90 days if we did not ship specific reporting features. This was also not wrong.
The problem was not that either priority list was incorrect. The problem was that we had no framework for comparing items across the two lists. How do you compare “upgrade Kubernetes” against “add custom report builder”? They exist in different dimensions. One prevents future engineering pain. The other prevents current revenue loss. Both are urgent. Both are real. But they cannot be ranked against each other without a shared evaluation framework.
The Framework We Built
In the last 45 minutes of the meeting, after everyone was exhausted and willing to try something new, we built a simple scoring framework that forced both engineering and product concerns into the same evaluation space.
- Revenue impact: Does this directly affect current revenue or prevent revenue loss? Score 0 to 3.
- Velocity impact: Does this make the team faster at building and shipping features in the next two quarters? Score 0 to 3.
- Risk reduction: Does this reduce the probability or severity of a production incident or client-facing outage? Score 0 to 3.
- Client request frequency: How many clients have requested this in the last six months? Score 0 to 3.
Every item from both lists got scored across all four dimensions. The scoring forced engineering items to articulate their business value and product items to articulate their technical implications. “Upgrade Kubernetes” scored 0 on revenue impact and client requests but 3 on risk reduction and 2 on velocity impact. “Custom report builder” scored 3 on revenue impact and client requests but 0 on risk reduction and 1 on velocity impact.
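To make the mechanics concrete, here is a minimal sketch of the scoring model in Python. The two example items and their scores come straight from the paragraph above; the `BacklogItem` class and its field names are my own illustration, not a tool we actually built.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One backlog item, scored 0-3 on each of the four dimensions."""
    name: str
    revenue_impact: int    # affects current revenue or prevents revenue loss
    velocity_impact: int   # makes the team faster over the next two quarters
    risk_reduction: int    # lowers probability/severity of incidents
    client_requests: int   # how many clients asked for this in six months

    @property
    def total(self) -> int:
        return (self.revenue_impact + self.velocity_impact
                + self.risk_reduction + self.client_requests)

items = [
    BacklogItem("Upgrade Kubernetes", revenue_impact=0, velocity_impact=2,
                risk_reduction=3, client_requests=0),
    BacklogItem("Custom report builder", revenue_impact=3, velocity_impact=1,
                risk_reduction=0, client_requests=3),
]

for item in items:
    print(f"{item.name}: {item.total}")  # -> 5 and 7
```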
The total scores were close, which validated that both lists contained genuinely important work. But the scoring also revealed the sequence. Items with high revenue risk and high client request frequency should ship first because they address immediate business survival. Items with high velocity impact and high risk reduction should ship next because they make everything after them faster and safer.
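Reading that sequencing rule as a sort key: business survival (revenue impact plus client requests) comes first, and enablement (velocity impact plus risk reduction) breaks ties. This is my translation of the rule into code, building on the hypothetical `BacklogItem` above; we never wrote it down as a formula.

```python
def sequence_key(item: BacklogItem) -> tuple[int, int]:
    """Sort the backlog: immediate business survival first, enablement next."""
    survival = item.revenue_impact + item.client_requests
    enablement = item.velocity_impact + item.risk_reduction
    return (-survival, -enablement)  # negate so higher scores sort first

roadmap = sorted(items, key=sequence_key)
# -> [Custom report builder (survival 6), Upgrade Kubernetes (enablement 5)]
```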
What Actually Made the Backlog
The Q2 backlog ended up being roughly 60 percent product features and 40 percent engineering improvements, but interleaved rather than sequential. We did not do “product features first, then engineering.” We alternated: two weeks of reporting features, one week of Kubernetes upgrade, two weeks of dashboard improvements, one week of canary deploy implementation.
- The interleaving prevented either side from feeling deprioritized
- Engineering improvements were sized to fit in one-week sprints between product features
- Product features were sequenced by revenue risk, with the highest-risk client gaps addressed first
- Every sprint had a mix of visible client value and invisible infrastructure improvement
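As a rough sketch of that cadence: alternate two-week product blocks with one-week engineering blocks until both lists are exhausted. The `interleave` helper below is hypothetical; only the item names and the 2:1 rhythm come from our actual Q2 plan.

```python
def interleave(product: list[str], engineering: list[str]) -> list[tuple[str, int]]:
    """Alternate two-week product blocks with one-week engineering blocks.

    Returns (item, weeks) pairs; leftovers from the longer list run at the end.
    """
    schedule: list[tuple[str, int]] = []
    for i in range(max(len(product), len(engineering))):
        if i < len(product):
            schedule.append((product[i], 2))
        if i < len(engineering):
            schedule.append((engineering[i], 1))
    return schedule

q2 = interleave(
    product=["Reporting features", "Dashboard improvements"],
    engineering=["Kubernetes upgrade", "Canary deploys"],
)
# [('Reporting features', 2), ('Kubernetes upgrade', 1),
#  ('Dashboard improvements', 2), ('Canary deploys', 1)]
```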
The Deeper Lesson
The planning meeting taught me that product managers and engineers are not in conflict. They are looking at the same company from different angles and seeing different urgent problems. Both views are correct and incomplete. The Head of Engineering's job is not to argue that technical work matters. It is to build a framework where technical and product priorities can be evaluated in the same space.
This is where the role stopped feeling like senior-engineer-plus. Every decision had a human system wrapped around it: founders, customers, auditors, tired teammates. The same systems thinking bled into ftryos and pipeline-sdk, where defaults matter more than heroics.
When engineers and product managers argue about priorities, the problem is never that one side is wrong. The problem is that they lack a shared language for comparing different kinds of value. Build the language first, then the prioritization follows.
We use the four-dimension scoring framework for every quarterly planning session now. It takes 30 minutes to score everything, and the conversations are dramatically shorter because disagreements become specific. Instead of "we should do technical work" versus "we should do product work," the discussion becomes "this item scores a 2 on velocity impact, but I think it should be a 3 because of the specific dependency it unblocks." That is a resolvable disagreement. The original argument was not.
The framework is not perfect. All scoring is subjective and reasonable people will disagree on numbers. But having a shared structure for the disagreement is infinitely better than having two teams with separate priority lists that cannot be merged. The planning meeting that taught me this lesson was painful. Every planning meeting since has been shorter and more productive.