
Building Better Products Using Quantitative & Qualitative Data

Disclosure: This was an internal post I wrote and shared with my colleagues back in 2015. While cleaning out my Confluence space, I came across it and thought that I'd share it with everyone.

There is a much more recent and polished post by my good friend Glenn Block on Mind the Product that you can read here: https://www.mindtheproduct.com/2018/01/need-quantitative-qualitative-data/

What Data Is/Not

  • Data cannot compensate for bad design or replace good design, but it can inform designers.
  • Data cannot compensate for listening to actual users, but it can inform the kind of user research we conduct.
  • Data cannot tell us what to build, but it can inform what we should investigate.

Example

Visualize a funnel report in which visitors leave a buying process at various stages of the journey. Quantitative data like this can tell you where you may have a problem.

In this example, the quantitative data (QT) tells you that 36 people fall out of the funnel at the Payment Info page/stage.
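To make the funnel arithmetic concrete, here is a minimal sketch in Python, using hypothetical stage counts rather than a real report. It walks the stages and reports the drop-off at each step; the Payment Info step mirrors the 36-person drop described above.

```python
# Hypothetical funnel stage counts -- illustrative only, not real report data.
funnel = [
    ("Product Page", 1000),
    ("Cart", 420),
    ("Payment Info", 100),
    ("Confirmation", 64),
]

# For each stage, report how many visitors entered it but did not reach the next stage.
for (stage, entered), (_, advanced) in zip(funnel, funnel[1:]):
    dropped = entered - advanced
    print(f"{stage}: {dropped} of {entered} dropped off ({dropped / entered:.0%})")
```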

Quantitative and Qualitative Data

QT is useful for identifying that there is a problem, but it doesn't tell you why the problem exists or specifically what the problem is.
Qualitative data (QL) is useful for telling you why there is a problem.
So it can be helpful to think:
  • QT = What
  • QL = Why

A Process for Using Both QT and QL

  1. Identify the Biggest Problem you need to solve. You can use QT to inform you of what this may be. Look at the number of users affected, the largest opportunity loss, the highest count of critical failure points, etc. An example of a good QT tool is a funnel report.
  2. Once you've identified the problem area, use QL, such as observational research, to understand what is causing the problem.
  3. Based on the QL, propose a solution hypothesis. For example: "We see 36% of the people falling out of the funnel. We observed that a lot of people leave the site when they get to the promo code box and search online for promo codes, then end up finding the same product offered on other sites and purchase it there. So one proposed solution is to remove the promo code box." Of course, you have to understand why things were done the way they were in the first place before making cavalier proposals.
  4. Learn & Iterate. A/B test the solution. Repeat and tweak until satisfied.
  5. Return to step 1.
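For step 4, one common way to evaluate an A/B test is a two-proportion z-test on conversion rates between the control (promo code box present) and the variant (box removed). The sketch below is illustrative: the visitor and conversion counts are hypothetical, and a real experiment would also need upfront sample-size planning.

```python
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (observed lift, two-sided p-value) comparing variant B to control A."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_b - rate_a, p_value

# Hypothetical counts: control keeps the promo code box, variant removes it.
lift, p = two_proportion_z_test(640, 10_000, 710, 10_000)
print(f"Observed lift: {lift:.2%}, p-value: {p:.3f}")
```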

Another Process for Using Both QT and QL

  1. Review QL data/observations to formulate a problem hypothesis, e.g. "5 customers don't like this feature."
  2. Use QT to study the relevant metrics related to the hypothesis.
  3. Follow steps 3-5 above.
Why invert steps 1 and 2? Because you might want to use QT to support or refute QL findings. Sometimes a squeaky wheel is not representative of the real problem.
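As one illustration of using QT to sanity-check a QL finding, the sketch below (all numbers hypothetical) asks whether the complaining customers represent a meaningful share of the feature's users and whether the feature's own metrics corroborate the complaint.

```python
# Hypothetical QT figures used to sanity-check the QL hypothesis
# "5 customers don't like this feature."
complaints = 5
feature_users = 4_200          # monthly active users of the feature
feature_abandon_rate = 0.05    # share of users who start the feature flow and quit
baseline_abandon_rate = 0.06   # abandonment across comparable flows

complaint_share = complaints / feature_users
print(f"Complaining customers: {complaint_share:.2%} of feature users")
print(f"Feature vs. baseline abandonment: {feature_abandon_rate:.0%} vs. {baseline_abandon_rate:.0%}")

if complaint_share < 0.01 and feature_abandon_rate <= baseline_abandon_rate:
    print("QT does not corroborate the complaints -- possibly a squeaky wheel.")
else:
    print("QT supports the QL finding -- worth deeper investigation.")
```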
