What are some forward-looking indicators that can demonstrate the efficacy of your user research?

I’ve identified a number of lagging indicators, such as using product analytics to demonstrate that progress towards a product/business result is being made.

Have you found any leading indicators from user research that can be used to anticipate an outcome with a high degree of accuracy? In the product discovery stage, I assume there must be certain questions you can pose to customers or the market that will reveal trends.

6 Likes

Could you clarify what you’re looking for? Do you want to demonstrate the value of research as a practice? Do you want to show progress towards a particular long-term company goal? Do you want to know sooner whether you should stick with a course of action or change it?

6 Likes

Are you asking for best practices to validate your current product idea?

Are you trying to gauge the effectiveness of your own research?

Are you trying to sell to your leadership that more research will be valuable?

Are you asking for anecdotes from the group when user research helped to predict a business metric?

Are you asking if it is possible to accurately predict quantitative outcomes using both quantitative and qualitative research?

I’m so confused by this question…but I’ll try my best:

  • Surveys can help with total addressable market sizing (see the rough sketch after this list).
  • Interviews can help you gauge price elasticity and identify pain points/opportunities for target customers.
  • Usability tests can help you accurately identify pain points so you can scope and optimize your experiences, which in turn helps you predict revenue/engagement boosts.
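
As a rough illustration of the survey-based sizing, here is a minimal Python sketch that extrapolates stated purchase intent from a survey to a ballpark TAM figure; every number and name in it is hypothetical, not from this thread.

    # Rough TAM sizing from survey data (all inputs are made-up examples).
    def estimate_tam(target_population: int,
                     survey_respondents: int,
                     respondents_with_intent: int,
                     expected_annual_spend: float) -> float:
        """Extrapolate the share of respondents expressing purchase intent
        to the whole target population, times expected annual spend."""
        intent_rate = respondents_with_intent / survey_respondents
        return target_population * intent_rate * expected_annual_spend

    # 500k target businesses, 400 surveyed, 60 expressed intent, $1,200/year
    # expected spend -> roughly $90M TAM, before discounting stated intent.
    print(estimate_tam(500_000, 400, 60, 1_200))
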
5 Likes

The effectiveness of user research can be measured by several leading indicators, including:

  1. Increased user satisfaction: If your user research leads to a better understanding of your users’ needs and preferences, it should result in increased user satisfaction with your product or service.
  2. Reduced user complaints: If your research identifies pain points and you address them, you should see a reduction in user complaints or negative feedback.
  3. Improved product adoption rates: If your research leads to changes in your product or service that align with user needs, you may see increased adoption rates.
  4. Enhanced user engagement: If your research helps you create more engaging user experiences, you may see increased user engagement, such as longer session times or more frequent visits.
  5. Better user retention: If your research leads to improvements in your product or service that keep users coming back, you should see better user retention rates.
  6. Increased revenue: If your research leads to a better product or service that meets user needs, you may see increased revenue as a result of higher user adoption rates and greater customer satisfaction.
  7. Positive word-of-mouth: If your research leads to a better product or service that meets user needs, you may see increased positive word-of-mouth, which can be a strong indicator of user satisfaction and loyalty.

It’s essential to track these leading indicators over time and continuously evaluate the effectiveness of your user research to ensure that you’re meeting your goals and objectives. Hope this helps.
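
As a loose illustration of what tracking a couple of these indicators over time might look like, here is a minimal Python sketch that computes adoption and 30-day retention from a hypothetical list of per-user activity records; the field names, dates, and thresholds are assumptions for the example, not anything from this thread.

    from datetime import date

    # Hypothetical per-user activity records: signup date plus visit dates.
    users = [
        {"signed_up": date(2024, 1, 3), "visits": [date(2024, 1, 4), date(2024, 2, 10)]},
        {"signed_up": date(2024, 1, 5), "visits": []},
        {"signed_up": date(2024, 1, 9), "visits": [date(2024, 1, 9)]},
    ]

    def adoption_rate(users) -> float:
        """Share of signed-up users who used the product at least once."""
        return sum(1 for u in users if u["visits"]) / len(users)

    def retention_rate(users, min_days: int = 30) -> float:
        """Share of users who came back at least `min_days` after signing up."""
        retained = sum(
            1 for u in users
            if any((v - u["signed_up"]).days >= min_days for v in u["visits"])
        )
        return retained / len(users)

    print(f"adoption:  {adoption_rate(users):.0%}")   # 67%
    print(f"retention: {retention_rate(users):.0%}")  # 33%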

5 Likes

Around 1 in 5 people who indicate they would buy something actually do, on average. So if 100 people express interest in buying something, only about 20 of them will follow through and make the purchase.

It’s obviously not an exact figure, but I use it as a general rule of thumb to remind folks to sharply temper their expectations about how many of the people who show interest in a product will actually buy it. Every executive, marketer, businessperson, etc. needs to hear that constantly.
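
Spelled out as a quick calculation in Python (the 20% follow-through rate is just this rule of thumb, not a measured constant):

    def expected_buyers(people_expressing_interest: int,
                        follow_through_rate: float = 0.20) -> float:
        """Discount stated purchase intent by an assumed follow-through rate."""
        return people_expressing_interest * follow_through_rate

    print(expected_buyers(100))  # 20.0 -> of 100 interested people, expect ~20 buyers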

4 Likes

I built user research software (focused on remote moderated studies to uncover priority user needs and to design/develop simple, effective solutions) after conducting about 100 interviews, chats, and usability studies with Product Managers (and after a year of experience as a PM). I used product analytics (session recording/replay) as a lagging indicator that a product outcome had been met.

But I’m trying to figure out whether there are any leading indicators I could take into account. I do have one: when customers prototype or test a solution with my tool, they can run unmoderated observation studies of their own users using that solution before it is implemented.

3 Likes

We’re doing something similar to you guys.

You’re seeing a problem that most UXRs may not have the systems view to see, yet they are the ones PMs actually hand this work off to, depending on their level.

I think a company like Sprig is the closest but still far away, even Maze.

The iteration has to happen in the real environment, that is, in app, which I think your company understands.

My approach to this is to let anyone augment the frontend code, like Optimizely does, then run the test and stand up some mechanism to gather qualitative feedback to test demand first.

Quant, quant in the actual work environment.

Nevertheless, my product is primarily targeted at customers or lean teams that take a daily shipping-and-testing approach.

2 Likes

In the process of discovery? No. People consistently lie. That is the purpose of MVPs: to give you this knowledge as quickly and inexpensively as possible so you can decide whether to build something or scrap it and start over.

1 Like

Sorry, but this is incorrect. MVPs are designed to test solutions as quickly and inexpensively as possible. Problem discovery doesn’t require an MVP and is used to determine whether you should test a solution at all.

1 Like

What indicator other than a successful MVP would show that research was successful? To answer OP’s question, I don’t think there is a leading indicator. An MVP is a lagging indicator of your research and a leading indicator of your solution.

1 Like

In my opinion, you don’t have enough data to conclude that there isn’t a leading indicator. A leading indicator of what? Declaring “research is effective” is not a narrowly defined statement, and the OP hasn’t really established what they want the indicator to lead to.

Regarding your initial remark, though: discovery can be conducted in an organised manner and used as a predictor of future business KPI performance. How many teams practise this? In my experience, relatively few.

1 Like

Agreed, there’s not enough clarity to answer cleanly; appreciate the insight.

1 Like

Maybe I should write something about this, but I was able to accurately model and predict the quarter-by-quarter KPI output of my product teams by looking at their problem discovery data. My predictions for a given quarter were usually within a relative ±10% of the actual results. It’s something any individual PM team can do for themselves if they want to.

1 Like

Please write it. Or, if you can, explain it now as an MVP of the actual write-up.

1 Like

The model is actually quite easy to describe, so I can do that here quickly. Keep in mind this is for a team that is improving some KPI as the primary outcome of improving their product and the business. If you are doing 0→1 or trying to manage an end-of-life product, you would use a different model.

There are only a handful of things that you can change that would increase your output in a given period of time. They are:

  • the number of experiments you ran
  • the success rate (or confirmation rate) of experiments
  • the relative average impact of each experiment
  • the average collateral impact on other teams

Thinking in terms of ROI, you must also consider:

  • the cost of the team
  • other costs for experiments

These elements can be decomposed into further factors; for example, the number of experiments you run is affected by the average time to complete one and by the availability of team members and audiences to run against.

Once I have a grasp of what a team can do in a given timeframe, most of the leverage comes from the quality of the discovery we do. Better discovery should improve our success rate. Overkill discovery will reduce the number of experiments we can run. Is our discovery random, or are we progressively working towards a theme that suggests the success rate will increase? And so on.
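
Here is a minimal Python sketch of that decomposition; the multiplicative form and the example numbers are my reading of the post (assumptions), not the author’s exact model.

    def expected_output(experiments_run: int,
                        success_rate: float,
                        avg_relative_impact: float,
                        avg_collateral_impact: float = 0.0) -> dict:
        """Expected relative KPI movement for a period, per the decomposition above."""
        wins = experiments_run * success_rate
        return {
            "own_kpi_lift": wins * avg_relative_impact,       # lift on this team's KPI
            "collateral_lift": wins * avg_collateral_impact,  # average spillover to other teams
        }

    def roi(monetary_value_of_lift: float, team_cost: float, experiment_costs: float) -> float:
        """Classic ROI: net return divided by total cost (team plus experiments)."""
        total_cost = team_cost + experiment_costs
        return (monetary_value_of_lift - total_cost) / total_cost

    # e.g. 12 experiments in a quarter, 30% confirmation rate, ~2% average lift per win
    print(expected_output(12, 0.30, 0.02))  # own_kpi_lift ≈ 0.072 (7.2%), collateral_lift = 0.0

Summing the per-experiment lifts linearly keeps the decomposition visible; compounding them multiplicatively would be slightly more accurate when individual lifts are large.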
