Jakob Nielsen

UX Roundup: World Usability Day | Bits to Atoms | Doing vs. Learning | AI Denial

Summary: World Usability Day November 14 | From bits to atoms: when do we get super-human robots? | Doing vs. learning with AI | Older managers in denial about AI progress

UX Roundup for November 11, 2024. (Ideogram)


World Usability Day

World Usability Day is November 14 this year. There are usually many exciting events to celebrate the day around the world. Check the World Usability Day website for events near you!


Whether or not you attend a dedicated event, I suggest celebrating the day by posting one small appreciation of usability to your social medium of choice: instead of complaining about bad usability (which, admittedly, I do all the time), try to recognize a design (whether a complete product or just a single screen or piece of UX writing) that has improved your user experience and made something a little easier.


Happy World Usability Day. (Midjourney)


From Bits to Atoms: When Do We Get Super-Human Robots?

Most attention on AI has been on the software side. For sure, that’s my personal interest, given my background in software usability. Software is eating the world, and AI-driven software is eating it faster, with the potential to double the world economy, lifting billions of people out of poverty and finally giving them decent healthcare and education.


However, there’s a second side to AI: it’s not just bits, but also atoms, to use Nicholas Negroponte’s old analogy. (Negroponte mainly talked about the move from atoms to bits, because he was an old-schooler like me, coming of age during the dot-com bubble. However, his dichotomy runs both ways.)


In a recent interview, Elon Musk (head of electric-car maker Tesla) discussed Tesla’s humanoid robot, Optimus (Latin for “the best”; Musk has never been modest). He mentioned several interesting points:


  • In making the next version of the robot, they decided to hew more closely to the design of the human body. For example, motors to move the fingers were originally in the hand but are now being moved to the forearm. This allows more force and precision in gripping items. I'm not surprised that human anatomy has much to teach robot designers: evolution spent millions of years experimenting with the best way to make a tool-using creature, from the evolution of the apes through the hominins and ultimately to Homo sapiens.

  • The upcoming robot release will cost about $40K because only a few thousand will be made, but Musk predicts that the price will drop to $25K once robots enter mass production with millions of robots produced each year. This is along the same lines as car pricing: mass production makes physical objects cheaper, especially if a big part of the expense is the engineering cost of designing the first copy. Amortizing this cost across millions of copies drives the unit cost close to the bill of materials, which is about $25K.

  • Robot performance will exceed that of human workers in 5 years, due to AI advances. At a cost of only $25K per robot, this means that virtually all physical labor will be performed by robots in rich countries where the minimum wage (plus overhead costs) exceeds $25K.
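The pricing logic in the second bullet is simple amortization arithmetic: unit cost equals the bill of materials plus the fixed engineering cost divided by the number of units produced. Here is a minimal sketch with hypothetical numbers; the $75M engineering figure is my assumption, back-solved from the $40K and $25K price points mentioned above, not a figure from the interview.

```python
def unit_cost(bom: float, engineering_cost: float, units: int) -> float:
    """Amortized cost per robot: bill of materials plus the
    one-time engineering cost spread across all units produced."""
    return bom + engineering_cost / units

BOM = 25_000           # bill of materials per robot (from the interview)
ENGINEERING = 75e6     # hypothetical one-time engineering cost (my assumption)

# A few thousand units: engineering cost dominates the price.
print(unit_cost(BOM, ENGINEERING, 5_000))       # 40000.0
# Millions of units: cost approaches the bill of materials.
print(unit_cost(BOM, ENGINEERING, 2_000_000))   # 25037.5
```

The same curve explains car pricing: as volume grows, the fixed-cost term shrinks toward zero and the unit cost converges on the bill of materials.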


As to the last point, I personally think it’s more likely to take 10 years before robots exceed human performance. Elon Musk is the best businessman of our generation, so it’s not wise to bet against him. He has done incredible things with space launches and satellite Internet, besides his cars. I’m not shorting Tesla. Maybe they can pull it off. But the world of atoms has a nasty habit of surprising us with difficulties as we try to go that last step. Autonomous cars are a good example: they are definitely happening, and they are already safer than human drivers, but it took many years to iron out the edge cases.


Old-school industrial robotics: not shaped like a human, limited to very specific tasks for which it is designed. (Midjourney)


Elon Musk thinks that the future of industrial robots is to make them mimic humans to a large extent. Once humanoid robots become cheap enough in 5–10 years, we can have entire assembly lines staffed with flexible robots that can change their procedures as the product changes. (Midjourney)


I’ve seen people say they want AI to take over their drudgery housework. Well, Claude or Midjourney won’t do that, but humanoid robots will. Working in a human residence likely requires a humanoid robot, whereas industrial assembly lines could be reconfigured for other form factors if they prove more productive. 5 years until you can buy a robot to do everything in your house, according to Elon Musk. Or maybe 10 years, according to me. (Midjourney)


Even if it takes 10 years to perfect humanoid robots for home use, that will hopefully be soon enough for me personally to avoid the nursing home. In-home senior care will be a killer app for these robots.


Doing vs. Learning With AI

AI is a great learning tool. In particular, it serves as a seniority accelerant in the workplace due to its ability to provide just-in-time training of job skills, aided by the fact that AI workflows can be easier to use than old-school workflows. (AI narrows skill gaps, according to a huge body of controlled studies.)


What about traditional education? Several new studies have the same general conclusion. (Hat tip to Ethan Mollick for alerting me to these papers.)



Three different educational levels. Three different continents. Two different topics. And of course, three different research teams. This level of variability dramatically increases the credibility of the findings. (Any single research paper can be wrong, for reasons ranging from outright scientific fraud, through bad research practices like “p-hacking,” to simply being irrelevant for practical situations due to study design limitations.)


Several research studies assessed students learning math and coding from middle school to university with and without AI help. Despite the different settings, the findings were the same. (Midjourney)


Two basic findings are the same across studies:


  • Students who used AI did in fact learn more, compared with a control group that was taught the old-school way (without AI).

  • The performance of the AI-taught students dropped when they were tested without access to the AI. Thus, they depended on AI for their higher performance and didn’t retain as much learning.


However, there’s a twist to that second bullet. Students did get long-term benefits from using AI when they used the AI as a personal tutor, “conversing about the topic and asking for explanations.” The students who experienced short-term performance gains but long-term reduced learning were those who used AI as a co-worker to do the “job” of solving practice exercises for them. This second group felt that they were learning because they were making progress on the exercises, but since they didn’t put in the work of problem solving, they didn’t learn as much.


If you follow behind as AI goes through the metaphorical maze of solving practice problems, you will not retain as much learning as if you work through the exercises yourself. (Leonardo)


I have two almost opposite takes on this research.


On the one hand, the studies prove the relevance of the 4 different metaphors for using AI. (Intern, coworker, teacher, coach.) To truly learn a topic, use AI as a teacher, not as a coworker who’s performing the work for you.


On the other hand, maybe it doesn’t matter so much whether students’ performance dropped when tested without AI. If this were a real-world work situation, employees would not have to perform their jobs without the tools they used to learn the job.


On balance, I feel that the takeaway differs depending on the student’s life stage and the topic being learned:


  • If students are young and still learning broad fundamental skills and if the topic is one such fundamental skill (say, math in general, as opposed to how to perform a specific calculation), then use AI as a teacher: to guide you through the material and clarify your misunderstandings. Don’t have AI solve practice problems for you.

  • If students are mature (in the workforce) and need to learn a very specific skill to supplement their broad understanding, then do take the shortcut and treat AI as a coworker.


If learning fundamentals, treat AI as a teacher. If learning a specific procedure that supplements those fundamentals, treat AI as a coworker and let it do the hard work while you relax. (Leonardo)


Science fiction has led people to fear becoming dependent on AI. But I ask, when would you need to do the job without your main tool? If you’ll always have AI at hand, don’t worry about leaning on it. (Midjourney)


Middle-Aged Managers in Denial About AI Progress

During the hippie era, a popular slogan exclaimed, “Don’t trust anybody over 30.” Regarding AI, we may amend this to, “Don’t trust anybody over 40,” at least according to Garry Tan, CEO of Y Combinator, the leading startup accelerator in Silicon Valley.


Y Combinator recently published an informative video about the path to superintelligent AI (YouTube, 33 min. video). Several interesting points, including that OpenAI lost the monopoly on useful AI it had when GPT 4 launched in early 2023. Among the startups they have recently funded, Claude has gone from a 5% market share half a year ago to 25% now, and Llama has gone from 0% to 8%. Such growth rates in this short time are unprecedented in the experience of the Y Combinator leadership in the video.


At the same time, OpenAI is not out of the game. Its new o1 model enabled some startups to increase the accuracy of their AI features from around 80% to 99% in two weeks. This one added feature (inference-time reasoning) has made feasible some AI applications that suffered from too many hallucinations for mission-critical use just a few weeks ago.


These fast changes are what caused Garry Tan to discuss the problem with older corporate decision-makers. Managers over 40 are used to being able to take their time to scope out technology revolutions, because a decade would pass between the invention of a new technology and the time it became critical for mainstream companies to adopt it. Cloud-based computing is a prime example of this slow growth of major tech changes.


With AI, though, major changes happen within a few weeks. This is not even counting the revolutionary changes between generations of the foundation models: the AI scaling law predicts that the next generation will be released in a few months and radically change AI's capabilities.


Companies that base their AI strategies on experiments management did last year will completely lose out on these drastic changes.


Technology change is no longer just a race car. With AI, it’s a turbocharged race car with a jet engine. (Midjourney)


In Tan’s experience, the younger startup managers get the new, faster pace and quickly adapt their companies to the new AI realities. Companies with older managers, who established their expectations for the speed of change in earlier times, are not so lucky.


I think there’s a lot to Tan’s insight that many managers in their 40s and 50s are ill-prepared to cope with the dizzying pace of AI advancements. However, I don’t think middle age necessarily dooms you. People over 40 must simply recognize that their hard-won experience counts for nothing in this new age. Adapt or be a dinosaur, that’s the choice for people in this age group, but there’s no reason they can’t decide to adapt.


People who judge AI’s business potential based on experience with last year’s models are dinosaurs in today’s world. (Midjourney)


If this is a selfie of you and your manager, in terms of understanding AI’s potential, the story may not end well. (Midjourney)


As somebody in his 60s, I should add that being truly old in the tech business has an additional advantage: I have lived through so many tech revolutions that I can see clearly how different the world will be once the main AI changes have played out. Before/after PC. Before/after Web. Before/after mobile. I’ve seen the world change many times already. The one new thing is the speed, and once I understood the AI scaling law (which is not obvious by any means but has been well proved), I recognized that the world is changing much faster this time.


I’ve seen it before, many times. AI is a rocket ride, and you need to recognize its increased speed compared to the many previous technology revolutions. (Midjourney)
