Anshul Bisen · portfolio

I mass-rejected 200 resumes and hired the person who asked the best questions in the interview

Our first structured hiring round attracted 200+ applications. The person we hired had a weaker resume than a dozen others but asked questions that revealed deep systems thinking.

In January 2025, we posted our first properly structured engineering job listing. Not the rushed Hacker News “Who is Hiring” comment I had used to find our first two engineers, but a real job posting with a clear role description, technical requirements, and compensation range. Within two weeks, we had 214 applications. I spent three weekends screening them. Eighty percent were rejected in the first pass. The person we ultimately hired was not in my top ten on paper.

Evidence beats theater.

This phase is where the title finally started to feel expensive. It also builds on what I learned earlier in “Hiring engineer number three when you can barely keep engineer number one from burning out.” Hiring, planning, founder conversations, and bad weeks in production all piled into the same calendar. A lot of the systems thinking I kept in lifeos and flowscape showed up here too: clarity is not paperwork, it is how you stop uncertainty from leaking into people.

The meeting-room version of the technical scar.

The Resume Screening Trap

When you have 214 applications and three weekends to screen them, you develop heuristics. Mine were: relevant experience in fintech or payments, TypeScript proficiency, evidence of working on systems at some scale, and a cover letter that showed they had actually read our job description. These seem reasonable. They are also heavily biased toward people who have already had opportunities at the right companies.

The first pass rejected 170 applications. Some were clearly spray-and-pray submissions with generic cover letters. Others were experienced engineers whose background was entirely in domains unrelated to our work. About 30 were strong candidates whose resumes checked every box. Among those 30, I moved 12 to phone screens based on resume strength, portfolio quality, and relevant experience.

Deepa was candidate number 37 in my spreadsheet. She had three years of experience at a mid-size SaaS company working on internal tooling. No fintech background. No payments experience. Her resume was clean but unremarkable. I almost rejected her in the first pass. The only reason I did not was a single line in her cover letter: “I want to work somewhere where the engineering decisions have financial consequences that matter to real people.” That sentence got her to the phone screen.

What Happened During the Interviews

Our interview process had three stages: a 30-minute phone screen, a 90-minute technical interview with a system design component, and a 45-minute culture fit conversation with me and our other senior engineer. The phone screens were unremarkable. All 12 candidates could talk about their work, explain technical decisions, and answer basic architecture questions.

The technical interviews separated the field. Our system design prompt was: design a transaction reconciliation system that matches bank statements against internal records with a throughput of 10,000 transactions per hour. It is deliberately open-ended because we want to see how candidates handle ambiguity.
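To make the prompt concrete, here is a minimal sketch of the matching core. Everything in it is illustrative, not our production engine: I am assuming exact matches on a unique bank reference plus amount, and the names (`BankRecord`, `LedgerEntry`, `reconcile`) are hypothetical.

```typescript
// Hypothetical sketch: exact-match reconciliation keyed on a unique
// bank reference. Anything that fails to match goes to a review
// queue rather than being silently dropped.

interface BankRecord {
  reference: string; // unique id from the bank statement
  amountCents: number;
}

interface LedgerEntry {
  reference: string;
  amountCents: number;
}

type Result =
  | { kind: "matched"; reference: string }
  | { kind: "unmatched_bank"; reference: string }
  | { kind: "unmatched_ledger"; reference: string };

function reconcile(bank: BankRecord[], ledger: LedgerEntry[]): Result[] {
  const ledgerByRef = new Map(ledger.map((e) => [e.reference, e]));
  const matched = new Set<string>();
  const results: Result[] = [];

  for (const record of bank) {
    const entry = ledgerByRef.get(record.reference);
    if (entry && entry.amountCents === record.amountCents) {
      results.push({ kind: "matched", reference: record.reference });
      matched.add(record.reference);
    } else {
      // Missing entry or amount mismatch: this is the "what happens
      // when the system gets it wrong" path Deepa asked about.
      results.push({ kind: "unmatched_bank", reference: record.reference });
    }
  }
  for (const entry of ledger) {
    if (!matched.has(entry.reference)) {
      results.push({ kind: "unmatched_ledger", reference: entry.reference });
    }
  }
  return results;
}
```

Even this toy version forces the questions Deepa asked: what counts as a match, and where do the failures go.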

Most candidates dove straight into architecture. They drew boxes on the whiteboard, talked about microservices or event queues, debated database choices. Their designs were competent. A few were impressive. But they all shared a pattern: they started building before they understood the problem.

Deepa spent the first 20 minutes of the 90-minute session asking questions. What does “match” mean? Is it an exact amount match or do we handle partial matches? What happens when a match is ambiguous? How are bank statements delivered? What is the error tolerance? What does the team do when the system gets it wrong? She asked 14 questions before she drew a single box on the whiteboard.

Why Questions Reveal More Than Answers

The questions Deepa asked revealed something her resume could not: she thinks about systems as things that interact with humans and handle failure, not as technical puzzles to solve in isolation. Her question about what happens when the system gets a match wrong was more important than any architectural diagram because it showed she was thinking about the operational reality of running a reconciliation engine in production.

  • Questions about edge cases reveal how someone thinks about failure modes
  • Questions about user impact reveal whether someone connects technical decisions to business outcomes
  • Questions about existing constraints reveal whether someone designs for reality or for a whiteboard
  • The order of questions reveals thinking structure: problem-first thinkers ask about requirements before architecture
  • Questions the interviewer cannot immediately answer reveal depth that goes beyond prepared scenarios

Three of her questions made me pause and think. That had not happened with any other candidate. The ability to ask a question that the interviewer has not considered is a signal that the candidate will find problems in production that the team has not anticipated.

The Screening Rubric I Built After

After we hired Deepa, I rebuilt our screening process to weight the signals that actually predicted job performance rather than the signals that correlate with having a polished resume.

  1. Resume screen: passes if there is evidence of building systems, any domain, any scale. Fintech experience is a bonus, not a requirement.
  2. Phone screen: evaluated on communication clarity and whether they ask about our product before talking about themselves.
  3. Technical interview: scored 40 percent on questions asked, 40 percent on design quality, 20 percent on implementation details.
  4. Culture conversation: evaluated on self-awareness about past mistakes and willingness to admit what they do not know.

Weighting questions-asked at 40 percent of the technical score was the biggest change. Previously we had no formal scoring for the questions candidates asked. We evaluated them entirely on the quality of their design and their ability to discuss trade-offs. That bias toward output over inquiry meant we were optimizing for candidates who could perform well in interviews rather than candidates who could perform well in ambiguous production environments.
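The weighting is simple enough to sketch. Only the 40/40/20 split comes from our rubric; the field names and the 1-to-5 scale below are illustrative assumptions.

```typescript
// Hypothetical sketch of the technical-interview score. Each
// dimension is rated 1-5 by the interviewer; weights are 40/40/20.

interface TechnicalScores {
  questionsAsked: number; // quality of questions, 1-5
  designQuality: number;  // quality of the design, 1-5
  implementation: number; // implementation details, 1-5
}

function technicalScore(s: TechnicalScores): number {
  // Integer weights over 100 keep the arithmetic exact.
  return (40 * s.questionsAsked + 40 * s.designQuality + 20 * s.implementation) / 100;
}
```

A candidate who asks excellent questions but produces a middling design can still outscore one who sketches a polished architecture without ever probing the problem, which is the point of the change.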

Six Months Later

Deepa has been on the team for six months. She ramped up on our reconciliation engine faster than anyone expected because she approached it the same way she approached the interview: by asking questions until she understood the problem before writing code. She found a subtle matching bug in her third week that had been causing false positives for months. Nobody else had caught it because nobody else had asked, “What happens when two transactions have the same amount and timestamp?”
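The failure mode she found can be illustrated in a few lines. This is a toy reconstruction, not our actual engine: the point is only that keying matches on (amount, timestamp) alone is ambiguous whenever two transactions share both values.

```typescript
// Illustrative sketch of the bug class: transactions whose
// (amount, timestamp) key collides cannot be matched unambiguously
// and must be flagged for manual review instead of auto-matched.

interface Txn {
  id: string;
  amountCents: number;
  timestamp: string; // ISO-8601, minute precision in this sketch
}

function ambiguousMatches(bank: Txn[]): string[] {
  const byKey = new Map<string, Txn[]>();
  for (const t of bank) {
    const key = `${t.amountCents}@${t.timestamp}`;
    const bucket = byKey.get(key) ?? [];
    bucket.push(t);
    byKey.set(key, bucket);
  }
  // Any bucket with more than one transaction is ambiguous.
  return [...byKey.values()]
    .filter((bucket) => bucket.length > 1)
    .flatMap((bucket) => bucket.map((t) => t.id));
}
```

An engine that auto-matches on this key instead of flagging the collision produces exactly the silent false positives Deepa caught.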

Good hiring usually looks quieter than people expect.

Operator mode means you inherit every downstream consequence. The code path is only half the story; the other half is how the decision warps planning, trust, and execution speed. I kept relearning that lesson while building lifeos and bisen-apps.

The best predictor of engineering performance I have found is not what someone knows but how they investigate what they do not know. Hire the person who asks the questions nobody else thinks to ask.

I still feel guilty about the 170 resumes I rejected in the first pass. Statistically, some of them were probably as good as or better than the 12 I moved forward. Resume screening at scale is lossy. The best I can do is make the filter less biased over time and keep weighting curiosity over credentials. The person with the best resume is not necessarily the best engineer. The person who asks the best questions probably is.