Summary: Design metrics that measure user value | Users don’t like to read | Children’s awareness of dark design patterns | Why Chinese design looks cluttered | One name field is better than two | AI does better than humans at solving CAPTCHA challenges | Google gets 7 nuclear power plants
UX Roundup for October 21, 2024. (Ideogram)
Design Metrics That Measure User Value
In a recent podcast about AI for creators, Laura Burkhauser (VP of Product at Descript, which offers a video editing tool) mentioned how the changed business incentives of AI-driven UX have led to better metrics. She previously worked at a social media platform where the primary metric to drive design was MDU, or monetized daily users. This is almost an anti-usability metric, where you are rewarded for wasting more of users’ time.
Measuring how much time users spend with your service: if more time counts as better, you may have an anti-usability metric on your hands. When you design to consume less of users’ time, you respect your customers, and they are likely to return. (Midjourney)
In contrast, now that she’s designing an AI tool for creators, her metrics relate to how fast users can make their videos. Spending less time with her service is good. And the more videos customers can create in less time, the more likely they are to make more videos in the future and continue as paying subscribers.
Two interesting new metrics she mentioned are:
Time to expression. How much time passes from when new users enter the site until they have finished creating their first video?
Editing richness. Unfortunately, Burkhauser didn’t define this in the podcast, but having worked on similar metrics in the past, I can guess that it relates to the extent to which Descript’s UX encourages users to create richer or more advanced videos.
“Time to expression” is a version of the good old usability metric “time on task,” but specialized to focus more on learnability for new users than on efficiency of use for experienced users. (Learnability and efficiency are two of the five usability quality criteria.)
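To make the metric concrete, here’s a minimal sketch of how “time to expression” could be computed from an analytics event log. The event names, the log structure, and the median aggregation are my illustrative assumptions; the podcast didn’t describe Descript’s actual instrumentation.

```typescript
// Sketch: "time to expression" as the median delay from signup to a
// user's first finished video. Event names are assumptions for illustration.

interface UserEvent {
  userId: string;
  name: string;      // e.g., "signup" or "first_video_exported" (assumed names)
  timestamp: number; // Unix time in milliseconds
}

function timeToExpression(events: UserEvent[]): number | null {
  // Sort chronologically so a signup is always seen before the export.
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);

  const signupTimes = new Map<string, number>();
  const durations: number[] = [];

  for (const e of sorted) {
    if (e.name === "signup" && !signupTimes.has(e.userId)) {
      signupTimes.set(e.userId, e.timestamp);
    } else if (e.name === "first_video_exported") {
      const start = signupTimes.get(e.userId);
      if (start !== undefined) {
        durations.push(e.timestamp - start);
        signupTimes.delete(e.userId); // count only the first video per user
      }
    }
  }

  if (durations.length === 0) return null;
  durations.sort((a, b) => a - b);
  return durations[Math.floor(durations.length / 2)]; // median, in milliseconds
}
```

Using the median rather than the mean keeps a few stragglers from dominating the number, which matters for a learnability metric that should describe the typical new user.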
Creators juggle many sources of content and inspiration. If your tool leads to richer outcomes by encouraging advanced editing, creators (and their audiences) will be happy. (Midjourney)
Having a low time to expression is particularly important to combat the problem of “tourist users” that plagues AI services. People hear about an AI service, it sounds cool, and they go there to check it out. But if these users don’t get useful results very quickly, they’ll give up and never become paying customers. There are so many new AI tools out there that people adopt a tourism mentality: take a quick selfie with the Mona Lisa and move on to the next attraction.
Creators want to create, not spend time manipulating a user interface. The faster they’re done using an AI service, the more they’ll create and the more they’ll value your service. (Midjourney)
Users Don’t Read (Much)
Users want to get to the point and get their tasks done. They usually like to minimize reading and seldom read instructions. (Ideogram, using its new “design” style)
Children's Awareness of Dark Design Patterns
Dark design is one of the more insidious trends in modern user interface design, mostly driven by “free” products that need to acquire their money in ways other than charging an old-school purchase price. See my overview of the most common dark design patterns.
Targeting adult users with dark design is bad enough. But targeting children is worse. However, a new research study indicates that kids may have developed self-defense mechanisms against at least some dark patterns: “We’re Not That Gullible!” Revealing Dark Pattern Mental Models of 11-12-Year-Old Scottish Children, by Karen Renaud from the University of Strathclyde, Glasgow, and 6 colleagues.
The study participants were 188 children aged 11-12, recruited from schools in three different areas of Scotland.
The researchers employed both structured drawing activities and unstructured interviews to gather data. Children were presented with four scenarios, three of which involved dark patterns (tricking users into revealing more personal information than they think they’re sharing, bait and switch, and confirmshaming), whereas one presented a genuine browser warning. This last scenario aimed to see whether children would raise false positives by being suspicious of this warning instead of realizing it was legitimate.
The children were asked to create drawings depicting the steps and consequences of interacting with these scenarios, followed by discussions to capture their understanding.
The children demonstrated an awareness of dark patterns and online deception but lacked precision in distinguishing between different types of deceptive practices. 53% of the participants expanded on the scenarios, imagining consequences such as tracking, hacking, and privacy breaches. Many anticipated worst-case outcomes like hacking or financial loss, but 31% mirrored the scenarios back with little elaboration. Privacy-related risks were often misunderstood, and genuine warnings were sometimes perceived as deceptive. Children frequently cited “hacking” and “scams” as potential dangers but were unclear about the exact mechanisms or motivations behind these actions.
The good news from this study is that most of these rather young children had an awareness of dark design and were able to spot it. On the other hand, almost a third of kids did not exhibit sufficient awareness.
Two pieces of bad news:
Many children misclassified some of the dark design patterns and believed that everything was in the category of “bait and switch.” The researchers speculate that this was caused by a preponderance of this dark design pattern in the participants’ prior experience. It may not matter that much whether children are capable of accurately analyzing dark design patterns as long as they recognize that something bad is happening and exhibit caution before proceeding.
There were many false positives in the study, where children thought the legitimate design was a dark pattern. While it’s good that the kids have learned from awareness campaigns or their own experiences, it’s bad if they are so scared of legitimate designs that they limit the benefits they could gain from technology.
The researchers conclude that children need a more nuanced understanding of the possible downsides of online systems and also that they should develop more realistic expectations of the consequences of dark design.
The classic fairy tale of Little Red Riding Hood visiting Grandma’s house and realizing that the wolf has dressed up as her grandmother serves as a role model for spotting dark design patterns. (Leonardo)
Why Chinese Design Seems Cluttered to Westerners
Two interesting analyses of Chinese design compared with Western design:
Juliette Xing created two carousels with top 10 design guidelines (with examples) for Chinese user interfaces: Carousel 1 and Carousel 2.
Phoebe Yu produced a video about Chinese app design (YouTube, 11 min. video) that has a bit of cultural theory but is entertaining and also rich with examples.
The two designers basically agree, but I recommend both since they illuminate the question of Chinese UI design in different ways, meaning that they complement each other well.
The big question for foreigners like myself when we see Chinese UI design is always, “Why is it so cluttered?” I admit that 20 years ago, when the web was fairly young, I thought the reason was that Chinese designers were still suffering under the same bad design habits that plagued the first American web designers in the late 1990s. However, Chinese design keeps looking busy to foreigners, even now that the early days are long gone. Clearly, these UIs look this way because it works.
The Chinese physical environment can seem overwhelmingly busy to the foreign eye, but Chinese people are used to it. Their expectations transfer to the digital realm. (Midjourney)
The main explanations for the “busy” Chinese design seem to be:
China is a high-context culture with an increased interest in relationships and holistic understanding, whereas the USA and Europe are low-context cultures preferring a tight focus. This means that Chinese people naturally prefer to see more information.
The physical environment in China is very dense and rich with signals that bombard you at every step. For one, it’s a densely populated country (at least in the main cities, which set the tone), and the people like seeing a lot of information, as just mentioned. This experience from everyday living translates into expectations for the digital world.
Chinese characters create a more dense and compact presentation of information than Latin characters, as shown in the following example.
The same text in English and Chinese. Note how the Chinese text is much shorter due to the more complex characters. Also note how the concatenated square characters seem to give less structure to the content, at least if you don’t read Chinese. In this example, the English text requires 55% more space than the Chinese. (And a German version would be even more spaced out, requiring 13% more characters than the English version.)
One Name Field Is Better Than Two
I don’t know how many times I have said this, but we still need to push for the design guideline to let users enter their name into a single text field instead of forcing them to split their name across multiple fields.
Also, what’s a person’s “first name” anyway? In China, it’s the family name. Single-field name entry doesn’t just simplify your UI; it automatically adds internationalization.
For that matter, on Java (Indonesia) many people use a mononym (single name) as their legal name. They have nothing to enter into a second text box. The same is true for King Charles III (no, “III” isn't His Majesty’s last name), though most websites probably don't have him as a user.
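To make the guideline concrete, here’s a minimal sketch of a signup model with a single name field. The field name `fullName` and the validation rules are my illustrative assumptions, not taken from any specific framework or from the Baymard video linked below:

```typescript
// Sketch: single name field. One field handles mononyms, Chinese name
// order, and suffixes without any culture-specific splitting logic.

interface SignupForm {
  fullName: string; // the only name field
  email: string;
}

function validateName(fullName: string): string | null {
  const trimmed = fullName.trim();
  if (trimmed.length === 0) {
    return "Please enter your name.";
  }
  // Deliberately no "must contain a space" rule: a mononym is a valid name.
  // Also no attempt to split into first/last: the split is culture-specific.
  return null; // null = name accepted
}

// All of these pass, which is the point:
console.log(validateName("李小龍"));       // Chinese: family name written first
console.log(validateName("Suharto"));      // Javanese mononym
console.log(validateName("Charles III"));  // regnal name; "III" is not a last name
```

The deliberate omission is any rule requiring the name to be splittable into first and last parts: that is exactly the assumption that breaks for mononyms and for cultures with a different name order.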
See also: a nice 4-minute video about name field usability from the Baymard Institute.
How do you expect a poor mononymic King to use your website if you require his name to be split across two name fields? (Midjourney)
AI Does Better Than Humans at Solving CAPTCHA Challenges
CAPTCHA (“Completely Automated Public Turing test to tell Computers and Humans Apart”) is an incredibly annoying user interface element that asks users to solve a puzzle before they are allowed to use a system. Bloomberg is a notable offender, shoving a CAPTCHA in my face almost every time I want to read an article, which has me close to canceling my subscription. But many other services use CAPTCHA, so it’s probably unfair of me to single out Bloomberg. (I mention them because they annoyed me again this morning, so the example is fresh in my mind.)
An example that’s particularly crazy is British Airways’ inflight Wi-Fi. To use Wi-Fi as a passenger on the plane, one has to recognize distorted characters, which is so difficult that I usually fail the first few tries. Why bother? Who’s going to be accessing Wi-Fi on an airplane flying across the Atlantic, other than passengers?
The idea behind CAPTCHA is to prevent computers from pretending to be human and thus gaining unauthorized access to online services. This worked well in 2003, when the idea was introduced.
Now, AI is better than humans at solving CAPTCHA puzzles. Specifically, Andreas Plesner and colleagues from ETH (Switzerland’s elite technical university) have published a paper showing that their machine learning system can solve 100% of the challenges posed by reCAPTCHAv2 (Google’s CAPTCHA service). They also ran a study with human test participants and found that the humans needed 29% more attempts than the AI to convince Google they were human. In other words, it’s easier for AI to trick Google into thinking that it’s human than it is for poor humans to be accepted as human.
It’s time to stop using CAPTCHA. You’re annoying your customers (and sometimes preventing legitimate humans from accessing your service, especially disabled users), while AI beats the challenge anyway. (And it’ll only become easier for AI to do so with future model generations.)
Was the CAPTCHA puzzle solved by AI or by a human? You can’t tell, so stop annoying users with these disruptive tests. (Midjourney)
Google Gets 7 Nuclear Power Plants
Google has signed a deal to buy electricity for an AI data center from 7 new nuclear power plants to be built in the United States. The reactors are expected to come online between 2030 and 2035.
Of course, it’s nice to see Google sponsor emissions-free power for AI. However, the more important point is the timeline: the last of the plants covered by this contract won’t start generating electricity for another 11 years. This again implies that Google expects the AI scaling law to continue for at least another decade, leading to the need for bigger AI data centers until at least 2035.
Now, Google doesn’t know everything, especially about the future, but it has one of the world’s leading AI research labs, so I think this bet on future AI scaling is worth noting.
Google expects to need such powerful AI data centers in 11 years that it’s funding 7 new nuclear power plants. (Midjourney)