
Building Better Products Using Quantitative & Qualitative Data

Disclosure: This was an internal post I wrote and shared with my colleagues back in 2015. While cleaning out my Confluence space, I came across it and thought that I'd share it with everyone.

There is a much more recent and polished post by my good friend Glenn Block on Mind the Product that you can read here: https://www.mindtheproduct.com/2018/01/need-quantitative-qualitative-data/

What Data Is/Not

  • Data cannot compensate for bad design or replace good design, but it can inform designers.
  • Data cannot compensate for listening to actual users, but it can inform the kind of user research we conduct.
  • Data cannot tell us what to build, but it can inform what we should investigate.

Example

Picture a funnel report in which visitors leave a buying process at various stages of the journey. Quantitative data like this can show you where you may have a problem.

In this example, the quantitative data (QT) tells you that 36 people are falling out of the cycle at the Payment Info page/stage.
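
To make the idea concrete, here is a minimal sketch of reading a funnel quantitatively. The stage names and visitor counts are hypothetical, chosen only so that 36 visitors drop out at the Payment Info stage, mirroring the example above:

```python
# Minimal sketch of drop-off analysis on a funnel report.
# Stage names and counts are hypothetical; only the 36-visitor drop at
# Payment Info mirrors the example in the post.
funnel = [
    ("Product Page", 100),
    ("Cart", 80),
    ("Payment Info", 64),
    ("Confirmation", 28),
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    lost = count - next_count
    print(f"{stage} -> {next_stage}: lost {lost} visitors "
          f"({lost / count:.0%} of those who reached {stage})")
```

The biggest relative drop (here, Payment Info) is where QT points you, but it still cannot tell you why people leave at that step.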

Quantitative and Qualitative Data

QT is useful for identifying that there is a problem, but it doesn't tell you why the problem exists or specifically what the problem is.
Qualitative data (QL) is useful for telling you why there is a problem.
So it can be helpful to think:
  • QT = What
  • QL = Why

A Process for Using Both QT and QL

  1. Identify the Biggest Problem you need to solve. You can use QT to inform you of what this may be. Look at the number of users affected, the largest opportunity loss, the highest count of critical failure points, etc. An example of a good QT tool is a funnel report.
  2. Once you've identified the problem area, use QL, such as observational research, to understand what is causing the problem.
  3. Based on the QL, propose a solution hypothesis. For example: "We see 36% of the people falling out of the funnel. We observed that a lot of people leave the site when they get to the promo code box and search online for promo codes, then end up finding the same product offered on other sites and purchase it there. So one proposed solution is to remove the promo code box." Of course, you have to understand why things were done in the first place before making cavalier proposals.
  4. Learn & Iterate. A/B test the solution (see the sketch after this list). Repeat and tweak until satisfied.
  5. Return to step 1.
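
For step 4, one lightweight way to read an A/B test is a two-proportion z-test on conversion rates. This is a minimal sketch with hypothetical visitor and conversion counts, not a prescription for any particular testing tool:

```python
import math

# Minimal sketch: two-proportion z-test for an A/B test.
# Visitor and conversion counts are hypothetical.
def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rate between variant A (control) and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control keeps the promo code box; the variant removes it.
z, p = conversion_z_test(conv_a=280, n_a=1000, conv_b=330, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be noise, though sample size and how long you run the test matter just as much as the statistic itself.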

Another Process for Using Both QT and QL

  1. Review QL data/observations to formulate a problem hypothesis, e.g. "5 customers don't like this feature."
  2. Use QT to study the relevant metrics related to the hypothesis.
  3. Follow steps 3-5 above.

Why invert 1 and 2? Because you might want to use QT to support or refute QL findings. Sometimes a squeaky wheel is not representative of the real problem.
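
As an illustration of pressure-testing a QL finding with QT, here is a minimal sketch with hypothetical numbers, weighing a handful of complaints against overall adoption and repeat usage of the feature:

```python
# Minimal sketch of sanity-checking a QL hypothesis against QT metrics.
# All numbers are hypothetical: 5 customers complained about a feature,
# so we look at how the feature performs across the whole user base.
complaining_customers = 5
monthly_active_users = 12_000
feature_users = 7_800              # users who touched the feature this month
feature_repeat_usage = 0.64        # share of feature users who came back to it

complaint_rate = complaining_customers / feature_users
print(f"Complaints cover {complaint_rate:.2%} of feature users")
print(f"Feature adoption: {feature_users / monthly_active_users:.0%}, "
      f"repeat usage: {feature_repeat_usage:.0%}")
# Healthy adoption and repeat usage suggest a squeaky wheel; weak numbers
# suggest the QL finding deserves deeper research.
```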
