Summary 

This article highlights the dangers of misusing qualitative research and of overly long project timelines in UX, and calls for a reevaluation of practices to ensure that resources are used efficiently and that businesses stay agile.

Introduction

In the realm of UX research, qualitative interviews are a powerful tool. They provide rich, in-depth insights into user behaviors, motivations, and pain points. However, a growing trend concerns me—a trend where qualitative research is being misused in the pursuit of statistical significance. I recently witnessed a UX team attempting to conduct 80 or more qualitative interviews, believing this would lead to statistically significant findings. This approach is not only a poor use of time and resources but fundamentally misunderstands the purpose of qualitative research.

The Purpose of Qualitative Research

Qualitative research is designed to explore and understand the “why” and “how” behind user behaviors. It’s about uncovering deep, nuanced insights that can inform product strategy, design, and development. Unlike quantitative research, which seeks to measure and quantify, qualitative research is inherently subjective and interpretative. The goal isn’t to reach statistical significance; it’s to gain a deeper understanding of user needs, motivations, and pain points.

The Pitfall of Chasing Numbers

Conducting 80 or more qualitative interviews in pursuit of statistical significance is not just misguided; it's a colossal waste of time and resources. In practice, thematic saturation tends to set in after roughly 12 to 15 interviews: by that point, most of the key themes and insights will have emerged. Additional interviews rarely surface new information; they mostly confirm what's already been discovered.
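The diminishing-returns intuition can be sketched with a toy model (illustrative numbers only, not empirical UX data): suppose each interview surfaces a handful of coded observations drawn from a fixed pool of themes with skewed, Zipf-like frequencies, so common pain points come up in almost every session while rare ones appear only occasionally. The expected number of distinct themes then flattens quickly:

```python
def expected_themes(interviews, num_themes=20, codes_per_interview=5):
    """Expected count of distinct themes surfaced after `interviews` sessions.

    Toy model with assumed parameters: each interview yields
    `codes_per_interview` coded observations drawn independently from
    `num_themes` themes with Zipf-like frequencies, so frequent themes
    dominate early sessions and rare themes trickle in slowly.
    """
    harmonic = sum(1.0 / k for k in range(1, num_themes + 1))
    theme_probs = [(1.0 / k) / harmonic for k in range(1, num_themes + 1)]
    total_observations = interviews * codes_per_interview
    # P(theme k seen at least once in n draws) = 1 - (1 - p_k)^n
    return sum(1.0 - (1.0 - p) ** total_observations for p in theme_probs)

for n in (5, 15, 40, 80):
    print(f"{n:3d} interviews -> ~{expected_themes(n):.1f} of 20 themes expected")
```

Under these assumed parameters, the curve climbs steeply through the first dozen or so interviews and then flattens: the marginal theme yield of interviews 16 through 80 is a fraction of what the first 15 delivered, which is the diminishing-returns point made above.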

Moreover, pursuing statistical significance in qualitative research is a fundamental misunderstanding of the methodology. Qualitative research doesn’t require large sample sizes to be valid. It’s not about generalizing findings to a broader population; it’s about understanding the specific experiences and perspectives of a select group of users.

The Cost of Misguided Research

When resources are poured into unnecessary interviews, the waste isn't limited to time; there's also a significant financial cost. More importantly, it delays the delivery of actionable insights that drive product decisions. This delay can have severe consequences in the fast-paced world of product development, where speed and agility are crucial. It can hinder the ability to innovate, slow down time to market, and ultimately impact the product's success.

This approach can also demoralize teams. Researchers and designers alike may feel frustrated by the lack of new insights, leading to a sense of futility and disengagement. The focus shifts from generating valuable, actionable insights to simply churning out data—data that may not even be relevant or useful.

A Call to Action

UX research is more than just collecting data—it delivers insights that drive meaningful product decisions. Let’s ensure that our research practices are aligned with this goal. Instead of chasing statistical significance in qualitative research, let’s focus on delivering the deep, meaningful insights that qualitative methods are uniquely suited to provide. By doing so, we can ensure that our research is effective and efficient, delivering real value to our teams and the products we create.

Beware of Long Timelines: How Delays Can Cause You to Lose

Another red flag that businesses need to be cautious of is long timelines. If a design company is quoting 10-, 12-, or even 16-week timelines before you start seeing results, you're already at a disadvantage. In today's fast-paced market, such drawn-out processes are a sure way to lose momentum, miss opportunities, and ultimately fall behind competitors.

The product development cycle must be agile and responsive. Long timelines create a gap between research, design, and real-world validation. The longer it takes to act on research and start building or iterating on a product, the more irrelevant the insights may become as market conditions, user behaviors, and competitive landscapes shift.

The expectation should be to see tangible results (prototypes, tested iterations, or actionable insights) within weeks, not months. Agility and the ability to adapt quickly to honest user feedback are what set successful teams apart. In an environment where speed is often the key to success, any company that isn't delivering insights early and fast puts your business at risk.

If you want to ensure that your product hits the mark, both for your business and your users, I invite you to learn more about my approach at UX Sprint Lab.