Thanks Ryan! That was the goal - to show that A/B testing isn’t the only way to get data on this kind of design challenge.
Re UsabilityHub, I’ve been a customer on and off for years, and they’re a client of mine at present, so I’m very biased. I think I’d be a big fan regardless, though. Being able to generate data quickly and at low cost is valuable for me as a practitioner.
I’ve used it mostly for UI optimisation and IA/label testing with click tests, surveying alongside design concepts, and iterating on comms/messaging using five-second tests. All useful in the enterprise world because you can take the risk out of design decisions super quickly, in less than an hour in most cases. Good for startups too, because you can run basic test sets with decent n values for ~$100 or less.
But because it’s unmoderated, it’s easy to misuse as well. You’ve got to be clear and concise with questions and tasks, and you can’t follow up on interesting nuggets of insight from participants. Nor can you quickly change course in the middle of a test, like you might in a moderated session. All the usual pitfalls of remote testing, really.
Also, because a lot of the results you get on there are free-text based, there’s a temptation to take what people say at face value, rather than combining that with objective measures of a design’s real performance – or digging deeper and re-running tests with more/different/‘why-why-why’ questions informed by your initial ones. So be prepared to re-run tests a few times, and to scrutinise the results as closely as you would in-person testing data.