Quick answer

A good customer interview asks about past behavior, not future intentions. Never ask whether someone would use your product. Ask about the last time they experienced the problem, what they did about it, how much it cost them, and what solutions they already tried. The goal is not to pitch your idea. It is to understand their reality so clearly that you know whether your solution fits it. Run at least 20 to 30 interviews before drawing any conclusions.

CB Insights analyzed 431 VC-backed companies that shut down since 2023. The top root cause of failure, cited in 43% of cases, was poor product-market fit: building something customers did not actually need.

Customer interviews are the cheapest way to prevent that outcome. One honest 30-minute conversation with the right person can tell you something that six months of building cannot. The problem is not that founders skip customer interviews. Most founders know they should do them. The problem is that most founders run them badly and come away with polite encouragement instead of real signal, then build anyway on the false confidence that creates.

This guide covers exactly how to run interviews that give you the truth, not the version of the truth people think you want to hear. Before you read further, it helps to understand where customer interviews sit in the broader picture of why startups succeed or fail. Our guide on why most businesses fail before they start gives you that full context.

Why Most Customer Interviews Fail

The failure mode is almost always the same. The founder sits down with a potential customer, explains their idea, and then asks some version of whether the person thinks it is a good idea, whether they would use it, and what features they would want. The person on the other end, not wanting to be rude, says it sounds interesting, that they would probably use it, and suggests a few features they would like to see.

The founder leaves feeling validated. They have data. People said yes. The problem is that none of what they heard is a business signal.

Rob Fitzpatrick, whose book The Mom Test is the definitive guide to customer interviews, identified three types of bad data that routinely derail early-stage companies: compliments, opinions, and wishlists. None of these are a business signal. All of them feel like one.

Compliments

"This is really cool." "What a great idea." "I wish something like this existed."

Feels like: validation. Is actually: politeness.

Opinions

"I would probably use this." "I think people would pay for that." "That sounds useful."

Feels like: intent. Is actually: speculation.

Wishlists

"It would be cool if it also did X." "You should add Y." "What would make this perfect is Z."

Feels like: product direction. Is actually: imagination.

The three interview mistakes that create this bad data are consistent across almost every founder who runs early-stage interviews poorly. They pitch instead of listening, turning the conversation into a sales call before they have anything to sell. They ask about hypothetical future behavior instead of real past behavior. And they talk to the wrong people, often friends and family who cannot give them honest negative feedback even when they try.

In a 2024 survey by product strategist Lenny Rachitsky, 78% of product professionals said they wished they had done more customer interviews in the early stages of their product. The most common regret: building the wrong thing for 6 to 12 months because they assumed instead of asking. That is an expensive lesson to learn when 20 conversations costing nothing could have taught it first.

The Core Principle: Past Behavior Over Future Intention

Everything in a good customer interview flows from one idea. People cannot reliably predict their own future behavior, but they can accurately describe their past behavior. When you ask someone whether they would use your product, you are asking them to predict the future. They will give you a socially comfortable answer, not a true one. When you ask them what they did the last time they faced the problem you are trying to solve, they can only tell you the truth because it already happened.

The Mom Test by Rob Fitzpatrick makes this concrete: do not ask people about hypotheticals, ask them about the past. A good question sounds like "Tell me about the last time you ran into this problem. What did you do?" A bad question sounds like "Do you think you would use this?" The first gets behavior, which is signal. The second gets opinions, which are nearly worthless.

The critical test for whether a problem is real:

If the person has not tried to solve the problem themselves, it is not a real problem. It is a complaint. Real problems motivate action. People search for solutions, pay for imperfect alternatives, build workarounds, and complain loudly in the communities they belong to. A problem that produces none of those behaviors is not urgent enough to build a business around.

This single test eliminates more bad ideas than any other.

This principle also changes how you interpret silence. If you ask someone to describe the last time they experienced your target problem and they struggle to remember a specific instance, that is data. Important data. It tells you the problem is either infrequent, not painful enough to be memorable, or not present in the form you assumed. For more on how to use this kind of insight to shape your entire strategy before building, read our guide on problem-first vs product-first business strategy.

Who to Interview and How to Find Them

Talking to the wrong people produces worse results than not interviewing at all, because it gives you false confidence. Your interviews are only as good as the match between the people you talk to and the people who would actually need your product.

The right person to interview is someone who currently has the problem, has tried to solve it in some way, and is frustrated enough with existing solutions that they are still looking for something better. People who have struggled enough to look for solutions are your most informative source, even though they are genuinely harder to recruit than friends or acquaintances.

Where to find the right interview candidates

LinkedIn outreach to specific job titles: best for B2B problems.
Niche Reddit communities and subreddits: best for consumer problems.
Industry Slack groups and Discord communities: high signal, warm audience.
Competitor review pages on G2, Trustpilot, Capterra: people already paying for imperfect solutions.
Warm introductions from your network: highest response rate.
Friends and family: avoid; they cannot give honest feedback.

On how many interviews to run: aim for 15 to 30 quality conversations with strangers who match your target customer profile. After 15 to 20 interviews you will start hearing the same patterns. If you do not hear patterns after 20, your customer segment is too broad. Narrow it down and start again.

The sweet spot for each interview is 30 to 45 minutes. Shorter than 30 minutes rarely surfaces the real insight because people give their surface-level polished answer first and only get to the honest and useful version after they feel comfortable. Longer than 45 minutes risks fatigue and tangents that do not inform your core questions.

The Interview Structure That Works

The structure matters almost as much as the questions. A customer interview that jumps straight to questions feels like an interrogation. One that eases in, earns trust, and lets the conversation breathe will consistently produce better and more honest information.

Before the interview

Write down your three riskiest assumptions before every session. Not your questions. Your assumptions. The things you are most likely to be wrong about that would most affect whether your business works. Then design your questions to surface evidence that either confirms or challenges each one. This prevents the most common trap in customer interviewing, which is unconsciously steering the conversation toward confirmation of what you already believe.

Prepare six to eight open-ended questions focused entirely on their past experience, not your idea. And commit to one rule you cannot break: you will not mention your product, your idea, or what you are building during a problem discovery interview. Not once. The moment you mention your solution, the conversation stops being research and starts being a sales call. You will get different answers.

Opening the conversation (first 5 minutes)

Start with context and permission. Tell them you are trying to understand how people currently deal with a specific type of problem, not pitch anything. Then ask a warm-up question about their role and their typical week. This serves two purposes: it builds rapport and it starts them talking in the descriptive mode you need them in rather than the evaluative mode they will default to if you open with your idea.

The core discovery questions (20 to 25 minutes)

These are the questions that do the work. Every one of them is anchored to past behavior, not future intention. Use them in roughly this order but follow the conversation where it leads rather than treating them as a rigid script.

Opening the problem

"Tell me about the last time you experienced [the problem]. What happened?"

Gets you a specific story with real context. Specific stories are far more informative than general descriptions.

Digging for real cost

"How much did you pay to solve it last time? What workarounds have you hacked together?"

Existing spend is your strongest demand signal. If they have never paid anything or built any workaround, the problem is not urgent enough.

Understanding existing solutions

"What did you try first to solve this? What was wrong with it?"

Maps the competitive landscape and tells you where existing solutions fall short, which is where your opportunity lives.

Measuring frequency and cost

"How often does this happen? What does it cost you in time or money when it does?"

Frequency times cost equals the total value of the problem. This is the foundation of your pricing ceiling.
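
To make that arithmetic concrete, here is a minimal sketch with invented numbers; the figures and the 10 to 20 percent value-capture rule of thumb are illustrative assumptions, not findings from any interview.

```python
# Illustrative numbers only; replace them with what your interviews actually surface.
occurrences_per_month = 6        # "How often does this happen?"
hours_lost_per_occurrence = 2.5  # "What does it cost you when it does?"
hourly_cost = 60                 # assumed loaded cost of the person's time, in dollars

# Frequency times cost gives a rough annual value of the problem.
annual_problem_value = occurrences_per_month * 12 * hours_lost_per_occurrence * hourly_cost
print(f"Annual problem value: ${annual_problem_value:,.0f}")  # $10,800

# Assumed rule of thumb: price to capture only a fraction (say 10 to 20%) of the value returned.
print(f"Rough pricing range: ${annual_problem_value * 0.10:,.0f}-${annual_problem_value * 0.20:,.0f} per year")
```

The specific multipliers do not matter; what matters is that the ceiling comes from numbers the interviewee gave you, not numbers you hoped for.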

Understanding the ideal outcome

"If this problem were completely solved, what would that look like for you?"

Gets you the outcome they want, not the feature they think they want. These are very different and the distinction matters.

Closing (final 5 minutes)

End every interview with two questions. First: "Who else should I talk to about this?" If your interview was useful and the person is genuinely engaged with the problem, they will almost always give you a name or two. This compounds your recruiting effort for free. Second: "Is there anything you expected me to ask that I did not?" This consistently surfaces the most important insight of the entire conversation, the thing they were waiting to say that your questions never quite reached.

Questions That Destroy Signal and What to Ask Instead

The gap between a useful customer interview and a misleading one is almost entirely in the wording of individual questions. Here are the most common bad questions, why they fail, and what to ask instead.

Bad question: "Would you use this product?"
Why it fails: Gets social compliance, not truth. People say yes to be polite.
Better question: "Tell me about the last time you faced this problem."

Bad question: "Do you think this is a good idea?"
Why it fails: Gets politeness. Asks them to evaluate your work, not describe their experience.
Better question: "How are you currently handling this? Walk me through it."

Bad question: "Would you pay $X for this?"
Why it fails: Hypothetical pricing is almost always lower than real willingness to pay.
Better question: "How much did you spend trying to solve this last time?"

Bad question: "What features would you want?"
Why it fails: Gets wishlists, not needs. People request features they would never actually use.
Better question: "Walk me through exactly what happens when this problem occurs."

Bad question: "Do you have this problem?"
Why it fails: Leads the witness and gets a yes-or-no answer with no useful context.
Better question: "Describe a recent frustrating experience with X. What happened?"

Bad question: "Doesn't it bother you when X happens?"
Why it fails: Suggests the answer. People will agree with a loaded question almost universally.
Better question: "What is the most frustrating part of dealing with X right now?"

The pattern across all the bad questions is the same: they tell the person what to think rather than asking them what they actually experience. Good questions create space for the person to describe their reality. Bad questions fill that space with your assumptions before they can.

The most important thing you can do in a customer interview is stay silent after asking a question. Let the silence sit for longer than is comfortable. Most of the best insight in a customer interview comes in the second or third sentence of someone's answer, after they have said the polished version and started giving you the honest one. Jumping in to fill silence kills that moment every time.

How to Interpret What You Hear

Running 20 interviews is only valuable if you can distinguish between signal and noise in what you collect. Not everything you hear means something. Not every enthusiastic response is a buying signal. And not every hesitation is a red flag.

The green lights you are looking for are specific and behavioral. Someone using unprompted emotional language about the problem, saying things like "this drives me insane" or "I waste so much time on this." Someone who has paid money, even small amounts, trying to solve the problem with existing tools. Someone who asks, without you raising it, whether they can sign up or be notified when you launch. Someone who immediately refers you to three other people who have the same problem.

The red lights are equally specific. Polite interest with no urgency and no follow-up questions from them about your timeline. No current attempt to solve the problem at all, suggesting they can live with it indefinitely. Asking what your product costs before asking what it does, which suggests they are looking for a cheap alternative to something they already use rather than a genuine solution to a painful problem.

Once you have active users, product-market fit can be measured using the Sean Ellis test. Ask users: "How would you feel if you could no longer use this product?" with three options: very disappointed, somewhat disappointed, or not disappointed. If 40% or more answer very disappointed, you likely have product-market fit. Below that threshold, continued iteration is needed before scaling.
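
For illustration, here is a minimal sketch of scoring that survey against the 40% threshold; the response counts are made up.

```python
# Hypothetical survey responses; in practice these come from your active users.
responses = (
    ["very disappointed"] * 34
    + ["somewhat disappointed"] * 41
    + ["not disappointed"] * 25
)

very_disappointed_share = responses.count("very disappointed") / len(responses)
print(f"{very_disappointed_share:.0%} answered 'very disappointed'")  # 34%

# 40% is the commonly cited Sean Ellis threshold.
if very_disappointed_share >= 0.40:
    print("Consistent with product-market fit; scaling is more defensible.")
else:
    print("Below the 40% threshold; keep iterating before scaling.")
```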

If fewer than 30% of interview subjects show strong interest or engagement after 20 interviews, that is a signal to reassess your customer segment or your problem hypothesis before going further. The problem might be real but your target customer definition might be too broad. Narrow it until the people you are talking to are the ones who feel the pain most acutely.

After the Interview: Turning Conversations Into Decisions

The interview itself is only half the work. The other half is what you do with what you heard. Most founders take scattered notes during interviews, review them once, and then proceed based on a general feeling of whether things went well. That process loses most of the signal.

Write up your notes immediately after each interview while the conversation is still clear in your mind. Capture specific quotes, specific behaviors the person described, specific numbers they gave you, and specific red or green flags from the criteria above. Do this within an hour of each conversation.

After ten interviews, start looking for patterns across your notes. Not impressions. Specific repeating phrases, behaviors, and problems. The patterns that matter are the ones that appear across multiple conversations without you prompting them. After 15 to 20 interviews, you will start hearing the same things from different people. If you do not hear consistent patterns after 20 interviews, your customer segment is too broad. Narrow it down and start another round.
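
One lightweight way to keep this honest is to tag each write-up with the unprompted themes you heard and tally them across interviews, rather than relying on memory. A minimal sketch, with hypothetical notes and theme labels:

```python
from collections import Counter

# Hypothetical write-ups: each interview tagged with the unprompted themes it surfaced.
interviews = [
    {"id": 1, "themes": ["manual data entry", "no single source of truth"]},
    {"id": 2, "themes": ["manual data entry", "slow approvals"]},
    {"id": 3, "themes": ["manual data entry", "no single source of truth"]},
    {"id": 4, "themes": ["slow approvals"]},
]

# Tally theme mentions (each theme is listed at most once per interview).
theme_counts = Counter(theme for interview in interviews for theme in interview["themes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} of {len(interviews)} interviews")
```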

Decision criteria after 20 to 30 interviews

Proceed if you have 3 or more consistent patterns in how people describe the problem, at least 30% who have paid for something trying to solve it, and at least 2 people who asked when they can sign up.

Narrow and repeat if you have mixed signals, inconsistent problem descriptions, or the enthusiasm does not convert to any behavioral signal. Your customer definition is probably too broad.

Pivot the problem if fewer than 30% show strong interest, nobody has paid anything trying to solve it, and the problem does not come up as something urgent or memorable in people's day-to-day experience.
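
If it helps to see the cutoffs in one place, here is a minimal sketch that encodes the three outcomes above. The function name and example numbers are hypothetical, and the judgment calls, such as whether the problem ever comes up as urgent or memorable, are deliberately left out of the code.

```python
# Thresholds mirror the criteria above; the inputs are counts from one interview round.
def interview_decision(total, consistent_patterns, paid_to_solve, asked_to_sign_up, strong_interest):
    paid_share = paid_to_solve / total
    interest_share = strong_interest / total

    # Proceed: 3+ consistent patterns, 30%+ paid for something, 2+ asked to sign up.
    if consistent_patterns >= 3 and paid_share >= 0.30 and asked_to_sign_up >= 2:
        return "Proceed: the problem looks real and urgent for this segment."
    # Pivot: weak interest and no spending (urgency and memorability remain judgment calls).
    if interest_share < 0.30 and paid_to_solve == 0:
        return "Pivot the problem before going further."
    # Everything else: mixed signals, so narrow the segment and run another round.
    return "Narrow and repeat: the customer definition is probably too broad."

# Example: a round of 25 interviews with invented results.
print(interview_decision(total=25, consistent_patterns=4, paid_to_solve=9, asked_to_sign_up=3, strong_interest=14))
```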

Customer interviews do not stop once you have a product. They evolve. The questions change from "does this problem exist" to "are we solving it well enough." But the discipline of asking about past behavior rather than future intention, listening more than talking, and interpreting behavior signals over verbal signals stays constant throughout the life of any company that takes its customers seriously.

For the next steps after your interviews confirm there is a real problem worth solving, our guide on 10 questions every startup must answer before building anything gives you the full pre-build checklist. And for the financial side of validating whether the opportunity is large enough to sustain a business, see our guide on how much money you need to start a business.

Frequently Asked Questions

How many customer interviews should I run?

Most practitioners recommend 20 to 30 structured discovery interviews as the minimum to identify reliable patterns for a specific hypothesis. Start with 5 to 10 to refine your questions and hypothesis, then run another round to validate what you are hearing. Keep going until you stop hearing new problems or patterns, which researchers call thematic saturation. If you do not hear consistent patterns after 20 interviews with your target customer, your customer definition is too broad and needs to be narrowed before continuing.

What questions should I ask in a customer interview?

The most useful questions are: Tell me about the last time you experienced this problem. What happened? How are you currently dealing with it? What have you tried so far and what was wrong with those options? How much does this cost you in time or money? How often does it happen? And at the end: who else should I talk to about this? Every question should be anchored to past behavior and specific experience, never to hypothetical future actions. The worst questions ask whether someone would use your product or what features they would want.

Where do I find people to interview?

The best places to find interview candidates are LinkedIn outreach to specific job titles for B2B problems, niche Reddit communities and industry Slack groups where your target customer already gathers, competitor review pages on platforms like G2 or Trustpilot where people describe frustrations with existing solutions, and warm introductions from your professional network. Avoid interviewing friends and family because they cannot give you honest negative feedback. Cold outreach for interviews typically gets 2 to 10% response rates, so reach out to far more people than you need; at a 5% response rate, landing 20 interviews means contacting roughly 400 people.

What is The Mom Test?

The Mom Test is a book by Rob Fitzpatrick that defines the core principle of useful customer interviewing: ask questions your mom could not lie to you about, meaning questions about past behavior and concrete experience rather than hypothetical future intentions. The name comes from the idea that even your mom, who wants to be supportive, cannot lie when you ask her what she did the last time she faced a specific problem. The book's central contribution is showing that founders almost always ask the wrong kinds of questions, and it explains specifically how to ask better ones.

Should I run customer interviews or surveys?

Customer interviews are live conversations that let you follow unexpected threads, ask follow-up questions, and observe body language and hesitation. They are better for discovery, when you do not yet know what to ask. Surveys are structured questionnaires that scale to many respondents but cannot follow unexpected answers. They are better for quantification, when you already know what you are measuring and want to test whether a pattern you found in interviews holds at scale. For early-stage validation, interviews come first. Surveys confirm at scale what interviews surfaced in depth.

How do I know if my interview data is reliable?

Your interviews are giving you reliable data when the same specific problems and phrases appear across multiple conversations without you prompting them, when people describe behaviors rather than opinions, and when the enthusiasm they express is connected to something they have already done rather than something they say they would do. Unreliable data looks like: everyone is enthusiastic but nobody has tried to solve the problem, every interview goes differently with no consistent themes, or the most engaged people are friends and colleagues rather than strangers who match your target customer profile.

Want more guides like this?

Browse all free business guides on Groundwork.
