Has anyone worked on workflow automation (with or without AI)? I’m curious whether and how you identified and prioritized what to automate. Was it qualitative or quantitative? And if the latter, what did you measure and how (time, money, or something else)?
I’ve worked on both internal workflow tools and B2B workflow SaaS.
The very first question I’d ask is: what’s the desired outcome of the automation, in a broad sense? Are you optimizing for internal efficiency (e.g. staff time/$) or customer outcomes (e.g. turnaround time for a consumer external to the workflow)?
Thanks. Good question; the answer is both. We run reports for clients, and there are manual steps that should / can be automated to decrease the turnaround time. I usually take an organic approach: staff will tell you, or you can just observe where the bottleneck is. But I’m wondering if there are more structured approaches that people have successfully used.
- I’d start with the workflow and its value to the intended audience.
- What’s the impact of a workflow that’s completed with quality?
- What’s the damage of a workflow that fails to complete, or completes without quality?
- What context does the audience come in with as they start the workflow, and what is the next step when they complete it? Context could be some trigger in the real world, or something they do in another app/product area. Same for the next step upon completion.
- Find gaps where automation can help, and be mindful of the potential damage per the damage question above.
Value could be a reduction in human error around best practices; automation can offer guardrails for the user. Outcomes of pushing towards best practices are hard to measure in $$$ and time.
This is a long reply to say that your north star is still the problem that the workflow is solving for. Automation is just another solution layer for that problem. You don’t need new KPIs.
It’s a good exercise to flesh out that problem again and put those measurements at the forefront when you evaluate the automation implementation.
This caught my eye: “Outcomes of pushing towards best practices are hard to measure in $$$ and time.” Agreed, they are hard to measure. But are they impossible to measure? I’m curious to see if and how this has been done.
To tug on that thread a bit, I’m not sure that outcomes of pushing towards best practices are hard to measure in $$$ and time. A counterpoint to consider is that if some aspirational state can’t be measured (regardless of whether it’s $$$, time, or something else that’s more important), then is it a worthwhile investment right now?
And I don’t mean that in the sense of incremental thinking and becoming paralyzed by data. To the “north star” point Guy made, some aspirational state should absolutely be associated with an observable outcome.
Yep.
The outcome means that you’re successfully using the workflow to solve the problem it was intended for.
If the automation is automating for best practices, then perhaps the “before” state can give you a leading indicator for that outcome.
So:

1. Define what “best practice” means both digitally and non-digitally (flesh this out with the business stakeholders to get alignment and buy-in).
2. Find what represents a “best practice usage” digital state for that workflow. It may live in another app/workflow (e.g. input box X is populated with Y when the form content is Z). It doesn’t have to correlate exactly with the real-world best practice; it just has to be agreed as “representative” by the business stakeholders.
3. Measure the frequency of current workflow use.
4. Break current usage down into instances of “best practice usage” and “not best practice usage”.
5. Communicate this number prior to go-live of the automation.
6. Check the number after go-live and communicate it if it’s good.

If the best-practice digital state is directly covered by the proposed automation, then you don’t need to wait for go-live to know the “after” numbers.
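As a contrived illustration, here’s a minimal sketch of that before/after measurement, assuming you can export workflow events with a flag for the agreed best-practice state (the event shape, field names, and go-live date are all made up):

```python
# Hypothetical sketch: compare "best practice usage" rate before and
# after the automation's go-live. Event shape and date are assumptions.
from datetime import date

GO_LIVE = date(2024, 6, 1)  # assumed go-live date

def usage_rates(events):
    """events: iterable of dicts like {"date": date, "best_practice": bool}."""
    def rate(group):
        # share of instances that hit the agreed best-practice state
        return sum(e["best_practice"] for e in group) / len(group) if group else None
    before = [e for e in events if e["date"] < GO_LIVE]
    after = [e for e in events if e["date"] >= GO_LIVE]
    return {"volume_before": len(before), "rate_before": rate(before),
            "volume_after": len(after), "rate_after": rate(after)}

# Fabricated events, purely for illustration:
events = [
    {"date": date(2024, 5, 20), "best_practice": False},
    {"date": date(2024, 5, 28), "best_practice": True},
    {"date": date(2024, 6, 3), "best_practice": True},
    {"date": date(2024, 6, 9), "best_practice": True},
]
print(usage_rates(events))
# {'volume_before': 2, 'rate_before': 0.5, 'volume_after': 2, 'rate_after': 1.0}
```

Steps 3–5 are the “before” numbers; step 6 is the “after” check.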
I agree that the aspirational state can be associated with an observable outcome, and the current state is measurable as well. When I read “hard to measure” I was thinking hard = expensive. I could try to get staff to measure some of these tasks, but I don’t know if I can get buy-in. Capturing the data will slow them down when they are already striving to turn things around quickly for clients.
The value vs cost of measurement is definitely an important nuance to consider. This is where thinking in broader strokes can actually help.
As a contrived example, consider a customer service team whose workflows you’re considering automating. The team may not have instrumentation to track their time, and introducing time tracking at a granular enough level to figure out what to automate could be unreasonable.
You might, however, have visibility into the created and resolved timestamps of tickets, and ultimately what you’re after is creating better customer experiences through more expedient resolution of customer issues. So you might spend a little time observing the customer service team, noting what seem to be the more manual processes, and start to form rational hypotheses about what will have leverage over that overall outcome of better customer service experiences.
You may never be able to observe the direct impact of the automations in terms of time spent by the CS team, but you should be able to observe the total time to resolve customer issues, and you may ultimately even be able to observe whether customer satisfaction improves over time as a lagging indicator. That lagging indicator is of course also the guardrail you might consider in this case, to ensure you’re not over-automating (like automating away the good experience of authentic human interactions).
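As a rough sketch of that, assuming you can export ticket created/resolved timestamps (the CSV file and column names here are hypothetical):

```python
# Derive resolution times from timestamps you already have, instead of
# asking the team to track their time. File/column names are assumptions.
import csv
from datetime import datetime
from statistics import median

def resolution_hours(path):
    """Read a CSV with ISO-8601 'created' and 'resolved' columns."""
    hours = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row.get("resolved"):
                continue  # skip still-open tickets
            created = datetime.fromisoformat(row["created"])
            resolved = datetime.fromisoformat(row["resolved"])
            hours.append((resolved - created).total_seconds() / 3600)
    return hours

# Assumes at least one resolved ticket in the export:
hours = resolution_hours("tickets.csv")
print(f"n={len(hours)}  median={median(hours):.1f}h  max={max(hours):.1f}h")
```

Tracking the median per week before and after each automation ships gives you the lagging signal without adding any work for the team.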
That’s the current approach; always nice to get some confirmation. I have high-level visibility, but I have been wondering if there are ways to get more granular to help prioritize. The thoughts came up in the context of determining what order to tackle opportunities to automate a workflow, or part of one.
In your example, say there is one common kind of customer issue that is very complex and expensive to resolve, but also very expensive to automate. There is another common customer issue that is not complex or time-consuming and is easy (i.e. cheap) to automate. (Let’s assume the high-impact, inexpensive automations are done first, and the low-impact, high-cost ones are ignored.) What are ways to figure out prioritization? I like the idea of going back to Guy’s north star for guidance here if you don’t have granular data to help quantify the costs and benefits.
Is your team already fully invested in the automation, and the question at this point is the order of automating?
Yes. I have buy-in from management and staff.
Do you have a high-level estimate for how long completing all of the automation work will take?
Not really; the work will never really be done, whether we fully automate or semi-automate solutions. I think if gains are being shown, the appetite will remain. So that is a good question, as maybe the smaller, quicker wins will increase and sustain the appetite for more work.
Exactly! And eventually you’ll just run out of leverage, because the gains you get from automation won’t be observable anymore, and it won’t make sense to invest more in automation until perhaps a later time, when you can reevaluate because something has changed or you’ve grown considerably.
Totally agree. Properly sizing work has been a challenge and a learning process.
For measurement when you don’t have a super direct connection to revenue, I’ve usually settled on measuring the micro and the macro, and leaning on qualitative signals for the messier middle.
Usually it’s very clear and measurable whether the automation reduces the amount of time a specific task takes. Then extrapolate from 30 seconds / task to get the bigger-picture impact. At some lagging interval that effect should start to show up in overall volume / person.
So for prioritizing projects, I look at them through the prism of estimated time reduction x confidence.
Yep, I am setting up that micro/macro measurement too. What do you mean by “x confidence”? Do you mean probability of success?
Yeah, confidence in the effect.
Sometimes it’s super clear; sometimes it depends on hard-to-predict aspects of user behavior.
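E.g. a back-of-the-envelope version of that scoring; all the task names and numbers here are made up:

```python
# Expected minutes saved per week, discounted by confidence in the effect.
candidates = [
    # (task, seconds saved per run, runs per week, confidence 0..1)
    ("copy data into report template", 30, 400, 0.9),
    ("triage complex client issue", 600, 20, 0.4),
    ("rename output files", 15, 300, 0.95),
]

def score(task):
    _, sec_saved, runs_per_week, confidence = task
    return sec_saved * runs_per_week * confidence / 60  # expected min/week

# Highest expected payoff first:
for task in sorted(candidates, key=score, reverse=True):
    print(f"{task[0]:<32} {score(task):6.1f} expected min/week")
```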
Ah, got it. Right now there are a lot of well-defined pain points, so confidence is very high that what we automate will help.