Summary: The Rise of AI-First Products | Dark Design, the Musical | Designing a non-deterministic UI for Perplexity AI | Jakob live discussion on ADPList on February 22 | User research job for Adobe’s Firefly AI product
UX Roundup for January 29, 2024. (Leonardo)
The Rise of AI-First Products
Greg Nudelman wrote a great article about the concept of “AI First” products. He points out that most of the currently popular products, such as Microsoft Copilot, are traditional designs that have had a dose of AI retrofitted on the side to improve productivity. We do know from early controlled experiments that this generation of “AI-enhanced” software creates immense productivity advances, on the order of 40% for the tasks performed with the AI-enhanced products. This gain is worth taking! Especially since it’s much faster to retrofit AI onto an existing software infrastructure than to build an entirely new one.
That said, Nudelman is right that the truly big gains will come from rethinking the way work is performed and organized and from building a new generation of AI-first products.
Nudelman mentions three AI-first products that, in my opinion, have all failed or are headed that way:
Amazon Alexa
The Humane AI Pin
The Rabbit r1
Alexa failed because it was a voice-only UI, and voice is a poor third cousin to graphical user interfaces. There’s a very small number of use cases where Alexa wins, such as “Alexa, what’s the weather,” or “Alexa, set a timer for 30 minutes.” For the vast majority of applications, Alexa adds insufficient value. In fact, it usually presents a subpar user experience, even compared with today's mediocre websites.
The Humane Pin and the Rabbit r1 have not officially failed in the marketplace yet, but they are likely premature attempts at creating a physical product that runs on AI. They may find a niche audience, just like Zuckerberg’s Metaverse or the Apple Vision Pro AR goggles. Not useless, but more like Google Glass than iPhone in terms of customer base and application breadth.
Yes, I would like something like a voice-activated Perplexity AI within reach at all times when I’m traveling. But I think it’s more likely that I would prefer to keep it on my smartphone (my current use, taking advantage of the relatively big screen for scanning the answers) or possibly access it through a smartwatch than to pin an awkward Pin onto my shirt all day.
Consider just one difference between screen-based output and speech-synthesized output: 2 dimensions vs. 1. Since I’m on the record as having little faith in the added value of 3-D user interfaces, you might think that my position generalizes to 2-D. Not so. A spatial layout of information on a 2-D screen affords scanning and fast random access to any piece of info, whereas a 1-D UI requires linear access, which is insufferably slow.
The user can visually scan a 2-D computer screen for fast access to any piece of information and for an overview of everything at once. We can even add more screens for rapid access to even more info. In contrast, an audio-driven UI is linear, making information access slow and reducing users’ sense of place. You don’t get an overview of the full information until after listening to everything. Slow. (Midjourney)
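To make the speed penalty of linear access concrete, here is a back-of-envelope sketch. The item count and per-item timings are illustrative assumptions of mine, not measurements from the article:

```typescript
// Back-of-envelope comparison of information-access time.
// All numbers are illustrative assumptions, not measured data.

const N_ITEMS = 20;                // pieces of information in the answer
const SECONDS_PER_ITEM_AUDIO = 5;  // assumed time to hear one item read aloud
const SECONDS_VISUAL_SCAN = 3;     // assumed time to visually locate one item on screen

// 1-D audio: items arrive in order, so on average the user must listen
// to about half of them before reaching the one they want.
const expectedAudioSeconds = (N_ITEMS / 2) * SECONDS_PER_ITEM_AUDIO;

// 2-D screen: the eye skips irrelevant regions, so the cost grows far more
// slowly with the number of items; treat it as roughly constant here.
const expectedVisualSeconds = SECONDS_VISUAL_SCAN;

console.log(`Audio (1-D, linear access): ~${expectedAudioSeconds} s`); // ~50 s
console.log(`Screen (2-D, visual scan):  ~${expectedVisualSeconds} s`); // ~3 s
```

Under these assumptions, the spoken answer takes roughly an order of magnitude longer to navigate than the on-screen one, and the gap widens as the amount of information grows.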
I encourage you to read Nudelman’s full piece, which is very insightful. He lists 7 requirements for the (hopefully) forthcoming generation of AI-First products:
1. Smooth, simple, seamless
2. Personalization
3. Data privacy
4. Use existing phone, watch, earbuds, glasses, tablet, headphones, etc.
5. Security of transactions
6. Non-voice is more important than voice
7. Avoid cutesy form factors
Do you want this guy as your AI copilot? (Midjourney)
I agree with all seven of Nudelman’s recommendations and would add a “Zeroth Law” (in the spirit of Isaac Asimov’s Laws of Robotics):
0. Make sure it’s useful (it has use cases that cover real user needs) and usable (people can accomplish those use cases without a steep learning curve and with high efficiency)
As a final thought, I have often warned against the dangers of “Mobile-First” design. We must remember the large amount of business revenue driven by desktop users and design so that users get a good experience optimized for both classes of devices.
Does this caution transfer to AI-First? For sure, it’s a warning sign and worth keeping in mind. There are quite likely cases where users will want to employ a non-AI UI for some tasks for many years to come — maybe indefinitely. However, I think the two situations are fundamentally different. The point of AI First is more like my frequent exhortation to “Design for the Medium.” An early example was that websites are not brochures that happen to be downloaded over the Internet. Another example that relates to the importance of “Mobile-Focus” (as opposed to “Mobile-First”) is that smartphones don’t work with a design consisting of regular web pages shrunk to 1/4 size.
In other words, for AI to double our productivity (as opposed to the current 40% productivity gains), we should design software that’s deeply integrated with AI and conceived from the beginning to leverage its strengths while minimizing its weaknesses.
Dark Design, the Musical
I published an article about the 12 most common dark design patterns in ADPList’s newsletter. Don’t fall for the temptation to create evil design!
To celebrate the article, I generated a short Broadway musical song about dark design with Suno:
Here is my more conventional way of illustrating evil design. Tell me in the comments which format you prefer: song or image.
Don’t be an evil designer. Avoid the 12 dark design patterns listed in my article. (Midjourney)
A cuter version of the evil designer, also created with Midjourney. Pro tip: the first image used a “stylize” parameter of 200, whereas this image used a value of 80. Higher values of this parameter generate more complex images with Midjourney. I rarely go below 60: that’s too simple, even for me.
Designing Non-Deterministic UX
If you’ve read my past writings, you know that I am a severe critic of most AI companies for their blatant disregard of basic usability principles and the entire UX process. However, I am a fan of Perplexity, which is an AI tool with one function: to answer your questions. Perplexity is currently the best example of how AI is replacing search engines by serving up tailor-made answers to users’ questions on a silver platter.
Your butler — sorry, Perplexity AI — has the answer you requested, Sir. (Made with Midjourney v.6 — if you click the above link, you can compare with the answer-carrying butler I made with Midjourney v.5 in November. Progress in AI is stunning: In November, I simply couldn’t get a butler, a tray, and a document all in one image.)
The “Design MBA” podcast has produced an interview with Perplexity’s Head of Design, Henry Modisett. It is well worth the 38 minutes of listening to hear what he did with this product, which has turned out very well so far.
Modisett discussed the need for designers to adapt to designing products that aren’t based on static user interfaces but are dynamic and generated algorithmically. This involves a shift from traditional design practices to ones that can accommodate the non-determinism of AI: designers must get comfortable with the inherent ambiguity and dynamism of AI-driven systems. Modisett suggested focusing on learning how to design for and communicate about non-deterministic outcomes.
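As one concrete illustration of what designing for non-determinism can mean in code (my own sketch, not something Modisett or Perplexity describe; all type and function names are hypothetical): the UI cannot assume the model’s answer arrives in a fixed shape, so the front end validates the output and maps it to a small set of well-defined states that the design accounts for explicitly.

```typescript
// Hypothetical shape of a model response. Real AI output may omit fields,
// truncate, or come back unparsable, so every field is treated as optional.
interface AiAnswer {
  text?: string;
  citations?: { title?: string; url?: string }[];
}

// The deterministic view model the UI actually renders: one of three
// well-defined states, each with its own designed presentation.
type AnswerView =
  | { kind: "answer"; text: string; citations: { title: string; url: string }[] }
  | { kind: "partial"; text: string }      // usable text, but unreliable metadata
  | { kind: "fallback"; message: string }; // nothing trustworthy came back

// Defensive mapping from non-deterministic output to a deterministic view model.
function toView(raw: unknown): AnswerView {
  const answer = raw as AiAnswer | null;
  if (!answer || typeof answer.text !== "string" || answer.text.trim() === "") {
    return { kind: "fallback", message: "No answer was generated. Try rephrasing your question." };
  }
  // Keep only citations that are complete enough to render as links.
  const citations = (answer.citations ?? []).filter(
    (c): c is { title: string; url: string } =>
      typeof c?.title === "string" && typeof c?.url === "string"
  );
  return citations.length > 0
    ? { kind: "answer", text: answer.text, citations }
    : { kind: "partial", text: answer.text };
}
```

The design question then becomes what each of these states should look and feel like, rather than hoping the model always returns a perfect answer.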
Jakob in Live Discussion with Sarah Gibbons on ADPList February 22
I will have a live discussion about current UX trends with Sarah Gibbons, hosted by ADPList on February 22. Gibbons is the best designer I have met in my 41-year career. Usability and design together, that’s the ticket.
🎟️ Speaking of tickets: they’re free, but advance registration is required.
User Research Job: Adobe Firefly
Adobe Firefly is a leading product for AI-driven image generation. Much to Adobe’s credit, they are now hiring a dedicated Senior Experience Researcher to improve this product. Unfortunately, the position appears to be limited to San Francisco or Seattle, but if you’re willing to live in one of these overpriced cities, do apply. (It would have been better for Adobe, not to mention the candidates, to hire worldwide.)
Hat tip to Laura Herman for making me aware of this job opening.