We built a web app for our 1200+ analysts, 200+ data scientists, and 300+ machine learning engineers.
It effectively functions as a custom, highly integrated data catalogue, and it was created to:
Help data workers find the data they're looking for quickly, using search and browsing features.
Let them not only access tables and other information but also understand it, thanks to detailed documentation that includes statistics and linkages between datasets.
Give data workers all the information they need before they begin requesting or using the data.
Historically, the team has only ever measured:
monthly active users
net promoter score (calculated from a monthly survey that asks who's happy with our app)
I feel like we could get more granular with the metrics we collect and report. Any ideas?
Everything you mentioned sounds awesome, but if they’re not finding it useful, then it means nothing.
Number of searches could be a barebones KPI to start with, but what is the end result of data workers using your tool's search and browse capabilities? If you've built it with a specific persona in mind, their success is your success in this context. It'd be good to know what their KPIs are, to figure out how your tool factored into them hitting their targets.
Might also be a good idea to track and log search requests to build up a "Top 10 hitters" list of which datasets/documentation your internal customers look at the most, to figure out potential optimizations and the future roadmap.
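If you do log searches, a minimal sketch of that "Top 10 hitters" count could look like this (the file name and column names here are made up, not from your actual schema):

```python
# Rough sketch, assuming searches are already logged to a file with
# hypothetical columns: user_id, query, dataset_clicked, searched_at.
import pandas as pd

searches = pd.read_csv("search_log.csv", parse_dates=["searched_at"])

# "Top 10 hitters": which datasets internal customers land on most often.
top_datasets = (
    searches.dropna(subset=["dataset_clicked"])
            .groupby("dataset_clicked")
            .size()
            .sort_values(ascending=False)
            .head(10)
)
print(top_datasets)
```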
Happy to share my thoughts, but first: what do you feel success would look like for each of the 3 bullets you highlighted? If you asked the users who did each of the tasks above whether the tool did its job, what would they say? How do they know it did or did not?
That should be your basis for your core metrics.
You could throw in additional satisfaction tracking, similar to how search engines or e-commerce sites do it, with "did you find what you were looking for?" or "how satisfied are you with your results?"
And MAUs make sense, but are users only expected to use it once a month? If more often, then you should be tracking at that granularity: if they should be using it a few times a week, then WAUs; if daily, DAUs; etc.
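A rough sketch of how you might count active users at each granularity, assuming you have a usage event log (file name and columns are assumptions for illustration):

```python
# Rough sketch, assuming a usage event log with hypothetical columns
# user_id and event_time (one row per action in the catalogue).
import pandas as pd

events = pd.read_csv("usage_events.csv", parse_dates=["event_time"])

# Count distinct active users per day / week / month.
dau = events.groupby(events["event_time"].dt.date)["user_id"].nunique()
wau = events.groupby(events["event_time"].dt.to_period("W"))["user_id"].nunique()
mau = events.groupby(events["event_time"].dt.to_period("M"))["user_id"].nunique()

print(dau.tail(), wau.tail(), mau.tail(), sep="\n\n")
```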
I work in a similar space, at a SaaS solution provider. We started with simple metrics like "number of searches", but this tends to be "too few is bad, too many is bad". So we dug a bit deeper and started to define "positive actions" like clicking a link, exporting data, or forwarding an email. These often happen off our platform, but we have used them to define some new features, map some more granular user journeys in our analytics, and set some clearer KPIs.
For example, in our research product we are aiming for an average search-to-click ratio per session of between 1 and 5 (every login has a search, every search has 1-5 results clicked). In our analysis product we want an average of 1 data export per session, within 30 minutes. In our monitoring tool we want an average of more than 2 opens per opened alert and a click ratio of 1 (people who opened an alert come back to it or forward it at least once, and every open leads to a click).
The KPIs above are early hypotheses, so we expect them to be BS. We just like to start with something and prove ourselves wrong. Hope it helps!
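For what it's worth, the per-session search-to-click ratio above can be computed from a plain event log; this is just a sketch under assumed event and column names, not how any particular product actually instruments it:

```python
# Rough sketch of the per-session search-to-click ratio, assuming an event
# log with hypothetical columns: session_id, event_type ('search' or
# 'result_click'), event_time.
import pandas as pd

events = pd.read_csv("catalogue_events.csv", parse_dates=["event_time"])

# Count searches and result clicks per session.
per_session = (
    events.pivot_table(index="session_id",
                       columns="event_type",
                       values="event_time",
                       aggfunc="count")
          .fillna(0)
)

# Clicks per search, per session; sessions with zero searches are left out.
per_session["clicks_per_search"] = (
    per_session["result_click"]
    / per_session["search"].where(per_session["search"] > 0)
)
print(per_session["clicks_per_search"].describe())
```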
I never found a custom catalog or anything similar helpful. Unless someone needs it or works with it every day, it won't be useful, especially not to all 1700+ data people.
Adoption: monthly or weekly new users, traffic from Confluence links or direct traffic
Activation: time to first search since session start
Engagement: I also like the suggestion from Crisgarher about search to click ratio per session, queries saved via positive actions
Retention: cohort retention in the first week, first 30 days, 60 days, 3 months out (rough sketch at the end of this comment). What's the natural usage pattern of the tool? Do you expect usage to decrease over time because people got access to the data they needed or got familiar? Does it correlate with the number of active projects and new employees?
High-level business impact:
Scale knowledge and cut down onboarding time for new tech employees or new projects.
Or time to insights and discovery. It could also indirectly reduce erroneous queries and server cost, and make migration easier.
It could save developer hours and enable data-centric AI… it would be more useful if it were also integrated with a data quality or monitoring tool.
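A rough sketch of the cohort retention idea mentioned above, assuming a usage event log (the file name and columns are made up for illustration):

```python
# Rough sketch of weekly cohort retention, assuming a usage event log with
# hypothetical columns user_id and event_time.
import pandas as pd

events = pd.read_csv("usage_events.csv", parse_dates=["event_time"])

# Integer week index so cohort offsets are simple subtraction.
events["week_idx"] = events["event_time"].dt.to_period("W").apply(lambda p: p.ordinal)
events["cohort"] = events.groupby("user_id")["week_idx"].transform("min")
events["weeks_out"] = events["week_idx"] - events["cohort"]

# Distinct users per cohort per week, normalized by the cohort's week-0 size.
retention = (
    events.groupby(["cohort", "weeks_out"])["user_id"].nunique().unstack(fill_value=0)
)
retention = retention.div(retention[0], axis=0)  # share of cohort still active
print(retention.round(2))
```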
Just a few thoughts, might be a bit overkill, but here you go:
Task Completion (not just # of searches but the number of UUs/visits that ended with and without a search; rough sketch after this list)
Most popular searches (if the searches aren’t sensitive), this could also be used to drive new requirements, such as prebuilt reports or recommended searches
Time Spent per search, page, visit, user
Engagement/adoption, new/returning UUs, # sessions/visits per UU per day/week/month
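A minimal sketch of the task-completion split, assuming an event log with made-up column names:

```python
# Rough sketch, assuming an event log with hypothetical columns:
# user_id, session_id, event_type.
import pandas as pd

events = pd.read_csv("catalogue_events.csv")

# Did each visit (session) include at least one search?
had_search = (
    events.assign(is_search=events["event_type"].eq("search"))
          .groupby("session_id")["is_search"]
          .any()
)
print("visits with a search:   ", int(had_search.sum()))
print("visits without a search:", int((~had_search).sum()))
print("share with a search:    ", round(had_search.mean(), 3))
```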
Adding to the other comments on how you know whether the search or content discovery yielded successful results: it might be worth adding a feedback feature (nothing fancy, just a simple yes/no), which can be used alongside qualitative feedback from a staff survey.
A few metrics you could try, depending on what’s most important for your tool / users:
Successful search %: Of the number of searches made, how many clicked through to one of the search results?
Null searches: % of searches that return 0 results (a sketch of this and the previous metric follows this list)
Content quality: Instrument your content pages with a Y/N survey question like "Was this information helpful for you?" and follow up with "How could it be improved?"
User success: Intervene at a certain point in the journey, e.g. 15 minutes after a session has started, after viewing 10 pages, or when they try to close the window. Ask something in line with your product's goals, e.g.:
Have you found the data you’re looking for and everything you need to know before you start querying / using the data?
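To make the successful-search % and null-search % above concrete, here's a minimal sketch; the log format and column names are assumptions, not your actual instrumentation:

```python
# Rough sketch, assuming a search log with hypothetical columns:
# search_id, result_count, clicked (True if the user clicked any result).
import pandas as pd

searches = pd.read_csv("search_log.csv")

successful_pct = searches["clicked"].mean() * 100        # clicked through to a result
null_pct = (searches["result_count"] == 0).mean() * 100  # searches returning 0 results

print(f"successful search %: {successful_pct:.1f}")
print(f"null search %:       {null_pct:.1f}")
```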
I like time spent until action, i.e., how much time someone spends until they click into a final table/page, download a file, export a dataset, etc.
Then balance that with sessions with no action: how often are people searching and leaving without finding what they were looking for? (100% doesn't need to be the target here, but it's a valuable metric to show you made a difference when you change something.)
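A rough sketch of both, assuming an event log where "action" events are whatever you decide counts (the event names, file name, and columns below are made up):

```python
# Rough sketch of time-until-action and no-action sessions, assuming an
# event log with hypothetical columns: session_id, event_type, event_time.
import pandas as pd

events = pd.read_csv("catalogue_events.csv", parse_dates=["event_time"])
ACTIONS = {"table_click", "download", "export"}  # hypothetical action events

# First event of each session vs. first "action" event of each session.
session_start = events.groupby("session_id")["event_time"].min()
first_action = (
    events[events["event_type"].isin(ACTIONS)]
        .groupby("session_id")["event_time"].min()
)

time_to_action = (first_action - session_start).dropna()
print("median time to first action:", time_to_action.median())

no_action_share = 1 - len(first_action) / len(session_start)
print("sessions with no action:", round(no_action_share, 3))
```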
With that many data scientists and ML engineers, you have to have some sort of productivity stats, right? If you measure how long it takes to build a model, or how many models are made, look for that to improve.
If it doesn’t, there are (at least?) three possibilities: one, they’re not using your tool — look to the usage statistics others have mentioned; two, your tool doesn’t help much — look to the satisfaction stats others have mentioned; three — people are using the same time to produce better work.
That last one is a nice but difficult place to be. If you're not measuring quality of work, then it's something worth thinking about. All of this boils down (I think) to: talk to the DS and ML managers. They're your customers, while the people who work for them are your users, not your customers. Whatever metrics the managers measure against, you should work to help them improve.
Typically, with internal tools, where people are essentially compelled to use the tool for their job, or there’s not really “competition”, I find it much more helpful to rely on qualitative data over quantitative.
So your NPS survey is a good start. Can you ask more questions? Can you segment users into use cases and ask specific questions that pertain to the different personas?