I work for a B2B SaaS product, building an MVP, and I barely have 6-8 interviews. My manager suggested I transcribe them and pull insights. That does help with deep insights, but it's so much work, and I can't use a transcription tool because participants spoke in their native language.
Here’s my (very generalized) approach. It’s based on the assumption that your usability evaluations have specific tasks that you’re asking people to attempt (rather than a “free form” study).
For each participant I create a matrix with tasks as columns and rows that capture the following:
could the person complete the task (objective yes or no)
how confidently did they complete the task (subjective, based on your perspective, but you could balance this with a more objective measure like how long it took)
sound bites: word-for-word quotes from the participant
observations: your objective notes on what you saw happen (they don’t need to be exhaustive, just anything that seems important)
hypotheses: your own subjective notes on what happened and why the person struggled with the task (I like to push myself for more than one hypothesis rather than narrowing down)
ideas: any specific changes that might resolve the problems if the person attempted the task again (tied back to the hypotheses)
For some participants there may be blank cells if nothing stands out and that’s fine.
Then aggregate this info for all participants and focus on areas where there’s a lot of commonality.
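To make the structure concrete, here's a minimal Python sketch of the per-participant matrix and the aggregation step. All task names, field names, and sample data below are hypothetical, just to illustrate the shape of the method:

```python
from collections import Counter

# Hypothetical per-participant matrices: one dict per participant,
# keyed by task; the fields mirror the rows described above.
participants = [
    {
        "create_report": {
            "completed": True,   # objective yes/no
            "confidence": 2,     # subjective 1-5 rating
            "sound_bites": ["I wasn't sure which button saved it"],
            "observations": ["hovered over the export icon twice"],
            "hypotheses": ["save vs. export labels are ambiguous"],
            "ideas": ["rename 'Export' to 'Save report'"],
        },
        "invite_teammate": {
            "completed": False,
            "confidence": 1,
            "sound_bites": [],
            "observations": ["gave up after searching settings"],
            "hypotheses": ["invite flow buried in settings"],
            "ideas": ["surface invite action on the dashboard"],
        },
    },
    {
        "create_report": {
            "completed": True,
            "confidence": 4,
            "sound_bites": [],
            "observations": [],   # blank cells are fine
            "hypotheses": [],
            "ideas": [],
        },
        "invite_teammate": {
            "completed": False,
            "confidence": 1,
            "sound_bites": ["where do I even add people?"],
            "observations": ["gave up after searching settings"],
            "hypotheses": ["invite flow buried in settings"],
            "ideas": ["surface invite action on the dashboard"],
        },
    },
]

# Aggregate across participants: completion rate per task and
# recurring hypotheses, to spot areas with a lot of commonality.
completion, attempts, hypotheses = Counter(), Counter(), Counter()
for matrix in participants:
    for task, cell in matrix.items():
        attempts[task] += 1
        completion[task] += cell["completed"]
        hypotheses.update(cell["hypotheses"])

for task in attempts:
    print(f"{task}: {completion[task]}/{attempts[task]} completed")
for hyp, count in hypotheses.most_common():
    if count > 1:
        print(f"recurring hypothesis ({count}x): {hyp}")
```

With only 6-8 participants the counting is trivial, but keeping the objective fields (`completed`, quotes, observations) separate from the subjective ones (`hypotheses`, `ideas`) is the part that pays off.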
I like taking a very structured approach and deliberately separating out the objective, the subjective, and the ideas for changes. I have seen people take a more general approach to insights, and they tend to inject a lot of their own bias into their observations.
If you have 6-8 interviews, watch them all, each one 10 times.
Don’t just listen to what they say about the tool; watch the looks on their faces and their body language. Transcription misses those details.
Users tend to lie. You have to learn to spot when they do by digging into the why behind certain comments they give. It’s very hard to go back in time and do this.
The best approach: read The Mom Test, then go back and do more interviews. I think 20-30 is a healthy number.
I’m a UX Researcher and I use Dovetail for analyzing interviews. Saves tons of time. The tool will transcribe the video recordings and then you can highlight the transcription. You can also add in your own terms that Dovetail will remember when transcribing interviews to help with the native language thing. Then you can tag and highlight transcriptions and notes you take and compare notes between interviews to create insights. The insights will embed the pieces of dialog, notes, and video clips for reference.
Well, there’s the short and long game for interviews. However, you’re right, interviews take time to facilitate, transcribe, and regurgitate. Unfortunately that’s just the nature of that particular beast.
Like any data point, you’re building a body of evidence for patterns and analysis. So for your B2B product workflow, realize that you’re going to be doing more than one stint of 6-8 interviews. More likely you’ll do three rounds of eight over the course of one major release: beginning, middle, and end.
As mentioned in a previous comment, build a matrix of inputs for key junctions in your product workflow: the columns would be observation instances and the rows the user list. Keep using the same table inputs so you can set a benchmark of improvement over time.
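The benchmarking idea above can be sketched roughly like this, assuming you record the same yes/no task outcomes for each round of interviews (the round and task names here are made up):

```python
# Hypothetical completion outcomes (1 = completed, 0 = not) from
# three interview rounds, using the same tasks each time so the
# rounds stay comparable.
rounds = {
    "beginning": {"create_report": [1, 0, 1, 1], "invite_teammate": [0, 0, 1, 0]},
    "middle":    {"create_report": [1, 1, 1, 1], "invite_teammate": [1, 0, 1, 0]},
    "end":       {"create_report": [1, 1, 1, 1], "invite_teammate": [1, 1, 1, 0]},
}

def success_rate(outcomes):
    """Fraction of participants who completed the task in a round."""
    return sum(outcomes) / len(outcomes)

# Print per-round rates so improvement over the release is visible.
for name, tasks in rounds.items():
    rates = {task: round(success_rate(o), 2) for task, o in tasks.items()}
    print(name, rates)
```

The point is less the code than the discipline: if the table inputs change between rounds, you lose the benchmark.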