In modern times, telemetry can show how well new designs work. The industry never forgot how to measure and do user research for UI changes; we've only gotten better at it.
I've had an alternate theory for a while. Prior to verbose metrics, UIs could only be designed by experts and via small samples of feedback sessions. And UIs used to be much, much better. I suspect two things have happened:
- With a full set of metrics, we're now designing toward the bottom half of the bell curve, i.e., toward the users who struggle the most. Rather than building UIs which are very good but must be learned, we're now building UIs which must suit the weakest users. This might seem like a good thing, but it's really not. It's a race to the bottom, and it robs those novice users of ever having the chance to become experts.
- Worse, because UIs must always serve the interests of the bottom of the bell curve, we get constant UI churn. What's worse than a bad UI? 1,000 bad UIs which each change every 1-6 months. No one can really learn a UI if it's always churning, and the metrics and the novice users falsely encourage teams to constantly churn their UIs.
I strongly believe that you'd see better UIs either with far fewer metrics, or with products that have smaller, expert-level user bases.
I don’t believe either is the primary driver of modern UI design. Cynical as it may be, I think the only things that get any level of thought are:
1. Which design is most effective at steering the most users to the most lucrative actions
2. What looks good in screenshots, presentations, and marketing
The rest is tertiary or an afterthought at best. Lots of modern UI is actually pretty awful for those aforementioned bottom-of-the-bell-curve users, and not much better for anyone else in terms of being easy to use or serving the user's needs.
Proper use of analytics might be of assistance here, but analytics are also primarily used to figure out the most profitable usage patterns, not what makes a program more pleasant or easier to use. They're also often twisted or misused to justify whatever course of action the PM in question wants to take, which is often to degrade the user experience in some way.
There's a much simpler explanation. At some point, the UI becomes about as good as it can be. It can't really be improved any further without changing the whole paradigm, and just needs to be maintained.
But product managers inside the large corporations can't get promoted for merely maintaining the status quo. So they push for "reimagining" projects, like Google's "Material Screw You" UI.
And we get a constant treadmill of UI updates that don't really make anything better.
Just because they're measuring doesn't mean they're measuring the same things as before.
The goal in 1995 might be "The user can launch the text editor, add three lines to a file, and save it from a freshly booted desktop within 2 minutes".
The goal in 2015 might be "We can get them from a bare desktop to signing up for a value-add service within 2 minutes".
I'd actually be interested to know whether there's much "regression testing" for usability: whether teams re-run old tests on new user cohorts, or whether they assume "we solved XYZ UI problem in 1999" and don't revisit it in spite of changes around the problem.
Telemetry may tell you the "what" but, at best, it will only allow you to infer the "why". It may provide insights into how people do things, yet it will say nothing about how they feel about it. Most of all, telemetry will only answer the questions it is designed to answer. The only surprises will be in the answers (sometimes). There is no opportunity to be surprised by how the end user responds.