How do you set OKRs for and measure 'quality of life' metrics?

I am a Product Manager at an EdTech company. Our main OKR for the Product and Engineering organization is average first-month user activity (which I think serves as a good overall indicator at our current stage).

I have broken OKRs further down for my team already around ease of content creation and learning. These objectives were relatively straightforward to quantify into metrics.

User Research, however, indicates that many of our users’ pain points/requests currently lean more towards users’ ability to organize and structure the content they are creating.

I now find it hard to create quantifiable metrics around content organization. Estimating the potential impact of ‘organization features’ on user activity is also tricky compared to other potential features we could work on.

The only metric I have so far comes from a recently set up, regularly run user survey, which includes an ‘ease of organising’ question. But we only run this survey every three months, and I am wondering if there is something we could work towards and measure success against while iterating and experimenting.

Any recommendations or best practices on how you keep track of impact in these more ‘fuzzy’ kinds of product areas, where there are no clear conversion funnels to optimize for?



Perhaps apply concepts of process management to this? Your key result is that you cut down the clicks/time to accomplish [task] by x%.

What metrics do you have for ease of creation and learning? Couldn’t you apply the same metrics for ease of organization and structuring?


@Priya, For ease of creation I can track the quantity of content created per user as a metric, and for learning, the time spent and progress made in different learning modes.


This is very hard to measure and points to a real problem when taken to the extreme with some product organizations. I have worked in organizations that require ROI justification on every change, and it results in a culture that is less customer-focused and more focused on red tape and process.

Push yourself to measure everything you can, but at the end of the day, you also need to prioritize things that are going to delight customers. When you can not improve a janky UX because you don’t know how to measure the ROI on the change, you’ve lost the script as a product team.

Sometimes you have to use soft OKRs to get this done, like, “Release organization design V2 by X date”. Depends on the OKR culture at your company.

BTW – don’t read my message as “don’t try to measure it”, there is likely a metric here around the adoption of the feature, # of specific actions per user, etc. I don’t know your product… just responding to “only do things you can measure” mantras.


@Nathan, Thank you for your feedback. Luckily I am not pushed into this, just trying to find a way of holding myself and my team accountable for prioritizing the right things and making progress measurable.

In this case I would then probably prefer not to set an OKR at all (or tie it to the survey only), rather than putting something purely output-related in place like the ‘X by Y’.


@MariaWilson, In that case, good stuff! Definitely always push yourself to uncover meaningful metrics. Lmk if you want a job! :)


Make sure your KRs are truly measurable and you can test against them before you do the work.

Try to understand the impact of how users organize their information in relation to their ability to do their job, and then formulate and test hypotheses around how altering that experience impacts other health metrics.

For example, in my experience working with content creation tools, organizing content is seldom a problem for new users, but organization and retrieval become difficult as the library grows. Say you start seeing an increase in support inquiries after users amass a certain number of content items, and your discovery interviews uncover user pain around content organization.

Now you can set an Objective to improve the ease of content organization. Your KR is a decrease in organization-related support calls by X%. You can do further testing around the best way to move that bar, and then measure the impact of your incremental improvements.


If people can’t organize their content well enough, what problem does it cause? Without being familiar with your product, I’d suspect people may start taking longer and longer on some type of content list or search to try and find their stuff. That should be measurable, to see if there is some subset of users that have this problem.
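To make that concrete, here is a minimal sketch of how "time spent finding content" could be computed per user from an event log. The event names (`open_library`, `open_item`) and the log shape are hypothetical; adapt them to whatever your analytics pipeline actually emits.

```python
from datetime import datetime

# Hypothetical event log: (user_id, timestamp, event) tuples, sorted by time.
# The event names are made-up placeholders for your real analytics events.
events = [
    ("u1", datetime(2024, 1, 1, 9, 0, 0), "open_library"),
    ("u1", datetime(2024, 1, 1, 9, 1, 30), "open_item"),
    ("u2", datetime(2024, 1, 1, 10, 0, 0), "open_library"),
    ("u2", datetime(2024, 1, 1, 10, 4, 0), "open_item"),
]

def time_to_find(events):
    """Average seconds between opening the library and opening an item, per user."""
    pending = {}    # user_id -> timestamp of their last "open_library"
    durations = {}  # user_id -> list of search durations in seconds
    for user, ts, event in events:
        if event == "open_library":
            pending[user] = ts
        elif event == "open_item" and user in pending:
            delta = (ts - pending.pop(user)).total_seconds()
            durations.setdefault(user, []).append(delta)
    return {u: sum(d) / len(d) for u, d in durations.items()}

print(time_to_find(events))  # {'u1': 90.0, 'u2': 240.0}
```

Tracking this over time (or segmented by library size) would show whether a subset of users is struggling, and whether an organization feature actually shortens the search.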

Once you’ve identified the problem and measured it, you can do experiments to address the problem, and then potentially decide on a solution to implement.


Moving target. Especially when it comes to improving “ugly” UI. Like, is the measure that people quit bitching about it? If so, what about when they bitch about the new improvements? How does this lead to higher revenue?

I feel your pain and in six years haven’t found a great answer outside of “we should do this” and tacking it on with other stuff with hard ROI.


Step 1: rewrite your survey. You can’t measure results if you aren’t capturing the user story.

Analyze the entire epic of content organization. Where do those user stories have a direct impact on users’ day-to-day, and how do they relate to the customer’s problem statement?

Then write 2-3 questions about that specific problem and put them in your survey. Also add an open-ended question and try to pull out the common critiques.

The other technical metrics will come with time and by analyzing this data.


It’s hard to answer this question without knowing more about your product, goals, user JTBD, etc.

Often when users complain about complexity, what they really mean is that something is taking too much time. If your goal is to have users create more classes, a worthy metric might be “average time to publish a class after starting the process”.
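A time-based metric like that is usually summarized by percentiles rather than a plain average, since a few slow outliers can swamp the mean. A sketch, using Python's stdlib `statistics` module and made-up durations:

```python
import statistics

# Hypothetical minutes from "start class creation" to "publish",
# one value per completed class; pull these from your real analytics events.
publish_times_min = [12.5, 8.0, 45.0, 10.2, 9.7]

def time_to_publish_summary(times):
    """Median and 90th-percentile time-to-publish, in minutes."""
    return {
        "median_min": statistics.median(times),
        "p90_min": statistics.quantiles(times, n=10)[-1],  # 90th percentile
    }

summary = time_to_publish_summary(publish_times_min)
print(summary["median_min"])  # 10.2
```

The median tells you about the typical user; the p90 tells you whether the "it takes forever" complaints are coming from a measurable tail.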

If it’s not an issue of complexity, but of flexibility, you can implement a new feature and see what % of users engage with the new feature.


@Mario, For the sake of this question you could think of it like we are building an operating system. I can track how many new files users create and how often they open them.

But now users want to create folders, sort files by date of creation or last use, set the order of these files and folders manually, and improve our existing tagging system.

So this is around flexibility and handling the complexity of their own ‘files’ in this analogy.

Looking at how often users use folders or sort by a specific view would work. But what I am trying to find is some kind of overarching metric that would allow me to compare the usefulness and impact of these multiple features, which ultimately serve the same purpose, against one another.

But I am not sure if there is something I could use for that purpose, or if I should just accept the ambiguity of the topic.


@Maria, But why do they want to do these things? What is the user’s goal?


@Mario, To better organize the content they have created. They set up a lot of ‘files’ they need to study, and at some point it either becomes challenging to keep an overview, or they want to sort their material to better sequence their learning process.

Again the OS example could be applied. Imagine you are building an early version of Windows. Users can already create, open, rename and delete files. But quickly things become complicated. Users end up with cluttered desktops. And they are asking for better ways to organize their files.

If you were an early PM at Microsoft, how would you have gone about prioritizing the introduction of folders, sorting, list views etc. against other features? How would you have measured your progress, or placed an ROI against these features?


@Maria, I think you are jumping to solutions before spending time in the problem space.

It sounds to me like the problem is that people want to learn things, but they have a hard time keeping track of what they have learned or what’s next?

In that case, what are potentially better ways of organizing information?

And if you are just going to make folders, why is it better than the user storing content on their system?

At your core, you need an engagement metric. If you go with folders, % of users with at least x folders demonstrates that people are finding value.
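For illustration, that engagement metric is just a threshold share over per-user counts. A minimal sketch (the user data and the threshold of 2 are hypothetical):

```python
# Hypothetical per-user folder counts, e.g. queried from the product database.
folder_counts = {"u1": 0, "u2": 3, "u3": 5, "u4": 1}

def adoption_rate(counts, threshold):
    """Share of users with at least `threshold` folders."""
    qualified = sum(1 for c in counts.values() if c >= threshold)
    return qualified / len(counts)

print(adoption_rate(folder_counts, 2))  # 0.5
```

The same function works for any organization feature (tags applied, manual sort used) by swapping in that feature's per-user counts, which makes the adoption rates roughly comparable across features.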


@Mario, Thanks, that is what I am trying to do. Assessing different potential solutions against one another to solve for a common, underlying problem - and measure if we are making progress.

Users are asking for us to implement folders, different sorting mechanisms, change the colors of tags.

All of these requests point to users currently not being able to organize their content the way they need to in order to keep an overview as they add more content over time. We already give users the ability to archive content, but that doesn’t seem to be sufficient.

Before we start implementing any of the requested features (or the additional ideas we have to solve the underlying problem), I would have loved to put a measurable OKR in place to see which ideas actually make a difference as we test and improve on MVPs. Just tracking e.g. usage of folders would not allow for that, as that metric would already be tied to a solution instead of measuring progress against the problem.