Case study: designing your UI for feature discovery with user research data


#1

Hey all - I've created a new thread for this, following on from our discussion on remote UT sample sizes.

I recently published an article about some UI optimisation work that I did with remote user research tools. Check it out here: https://io.usabilityhub.com/improving-navigation-elements-a-ui-optimization-case-study-with-user-research-data-e08648fd1011

Would love to hear if you have done similar research or if you have feedback about the article - I’m new to the writing game so all feedback is welcome! Thanks!


#2

This is a great write-up and a really nice example of UI optimization. Most folks would just leave this to A/B testing, but I think you're able to capture much more nuance by rolling it out the way you have.

Nice work Hugo!

And re: UsabilityHub, how are the rest of the tools that it offers? Would love to hear more about your experiences with it.


#3

Thanks Ryan! That was the goal - to show that A/B testing isn’t the only way to get data on this kind of design challenge.

Re: UsabilityHub, I’ve been a customer on and off for years, and they’re a client of mine at present, so I’m very biased :slight_smile: I think I’d be a big fan regardless, though. Being able to generate data quickly and at low cost is valuable for me as a practitioner.

I’ve used it mostly for UI optimisation and IA/label testing with click tests, surveying with design concepts, and iterating on comms/messaging using five-second tests. All of these are useful in the enterprise world because you can take the risk out of design decisions super quickly – less than an hour in most cases. It’s good for startups too, because you can run basic test sets with decent n values for ~$100 or less.

But because it’s unmoderated, it’s easy to misuse as well. You’ve got to be pretty clear and concise with questions and tasks, and it’s not possible to follow up on interesting nuggets of insight from participants. And you can’t quickly change course in the middle of a test, like you might in a moderated scenario. All the usual pitfalls of remote testing, really.

Also, because a lot of the results you get from it are free text, there’s a temptation to take what people say at face value rather than combining it with objective measures of a design’s real performance – or digging deeper and re-running tests with more/different/‘why-why-why’ questions informed by your initial ones. So be prepared to re-run tests a few times, and to dig into the results as closely as you would with in-person testing data.