Summary: Making complex data more understandable, a 5-year retrospective | Save the date: OpenAI Keynote to be livestreamed over the Internet | Prompt for analyzing qualitative user data that found an important insight overlooked by human experts | AI is too ready to please users by agreeing with them | 7 action items to get started with AI for UX
UX Roundup for October 30, 2023. Happy Halloween from all of us here at UX Tigers. (“All of us” just being Jakob.)
Data Visualization, The Economist Style
The newsweekly The Economist publishes some of the best data visualizations, with a focus on complex data, often covering trends in the economy. They offer a weekly newsletter called Off the Charts, which is highly recommended, though it’s only for paying subscribers.
They recently published a retrospective celebrating 5 years of their weekly “Graphic Detail” section, with highlights of the best visualizations from this period. The editor’s winner is not even an illustration but a simple satellite photo of the Korean peninsula at night, showing the difference between the free-market south and the controlled-economy north: the border might as well have been a blackout curtain.
Some interesting designs:
How the absence of outliers is a likely indicator of fake data
Different ways of calculating an inflation index (a horribly nerdy topic, if ever there was one, but made interesting by the visual)
The full articles about these data sets require a subscription, but you can see the data visualizations even as a free user.
OpenAI Keynote Livestreamed in a Week
OpenAI’s developer conference is next week, opening November 6 with a keynote address at 10:00 AM US Pacific Time (6 PM London time, 7 PM Berlin time; here’s a link to convert to your own time zone).
You can watch the live stream on YouTube. I hope they are smart enough to keep a recording of the talk at this same URL after the event, in case you miss the live stream.
I have no idea whether this will be a good or bad talk, but I plan to watch for two reasons:
OpenAI is currently the most important company in the AI space, with products like ChatGPT and DALL-E. It’ll be worth getting a feel for their thinking.
OpenAI is notoriously uncommunicative, with no user documentation or advice for using these important products. Maybe there will be a bit of useful info in the keynote.
Prompt for Analyzing Qualitative User Data
Kate Moran and I recently published an article on Getting Started with AI for UX. In a comment on this article, Jennifer Martin from Australia mentioned that her team had collected an extensive pool of qualitative user data. An AI-powered analysis revealed a profound insight that had eluded the entire team. (They suspected this might be a hallucination, but when they checked, the AI turned out to be right.)
Jennifer graciously shared the prompt they employed, which took several attempts to get right: “I want you to act as a customer experience researcher. Analyse the survey responses to identify the top five positive themes, the top five negative themes, and top five overall insights. Before responding, please ask me questions that will help improve your response. I will share the survey responses when I answer your questions.”
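If your survey responses live in a text file and you want to rerun this kind of analysis programmatically, here is a minimal sketch using the OpenAI Python library (v1.x). Treat it as an illustration only: the model name (gpt-4) and the data file name are my assumptions, and Jennifer’s team worked in a chat interface, not through the API.

    # Minimal sketch: Jennifer's two-turn prompt via the OpenAI Python library.
    # Assumes the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "I want you to act as a customer experience researcher. "
        "Analyse the survey responses to identify the top five positive themes, "
        "the top five negative themes, and top five overall insights. "
        "Before responding, please ask me questions that will help improve "
        "your response. I will share the survey responses when I answer "
        "your questions."
    )

    messages = [{"role": "user", "content": PROMPT}]

    # Turn 1: the model asks its clarifying questions first.
    first = client.chat.completions.create(model="gpt-4", messages=messages)
    questions = first.choices[0].message.content
    print(questions)

    # Turn 2: answer the questions and paste in the raw survey responses.
    answers = input("Your answers to the AI's questions: ")
    survey = open("survey_responses.txt").read()  # hypothetical file name
    messages += [
        {"role": "assistant", "content": questions},
        {"role": "user", "content": answers + "\n\nSurvey responses:\n" + survey},
    ]
    final = client.chat.completions.create(model="gpt-4", messages=messages)
    print(final.choices[0].message.content)

Note how the prompt deliberately makes the AI ask its questions before analyzing anything; in the API version, that simply becomes a second turn in the message list.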
Team meeting about user research insights. In an Australian case study, AI discovered an additional insight that had eluded the team. (Research debrief by DALL-E.)
Sycophantic AI: Too Ready to Please Users by Agreeing With Them
Sometimes it’s nice that AI is eager to please us poor humans. An agreeable conversation partner is one of the things users say they like about AI companions: the use of AI as a friend, therapist, or even romantic interest. Companionship is the deepest of the 4 degrees of anthropomorphism of AI that we discovered in user research.
Current AI chatbots encourage all levels of anthropomorphism by being as kind to users as they can be. Kindness is fine, but not when it creates misleading answers. Unfortunately, new research by Anthropic indicates that all the popular chatbots, including the granddaddy of them all, ChatGPT, are sycophantic in the extreme. They are so afraid of disagreeing with users that they generate false answers when asked leading questions.
Humans are notorious for tending to agree with leading questions, a well-documented phenomenon in social psychology. This is why neutral questions are so important in user research. But that’s humans. We would have hoped for better from machines. Whether you ask a neutral question or a biased one, the computer ought to give the best (most truthful) answer. At least for factual questions. But even for matters of opinion, I would prefer the best answer, not the most agreeable one. If I upload a photo of myself before giving a keynote speech and ask, “Do I look good wearing this tie?” I’d prefer an honest judgment. (By all means, don’t be nasty and say that losing 20 pounds would do more for my looks than wearing that necktie. But if the color of the tie clashes with my shirt, say so!)
The takeaway action item for you, dear reader, is that when asking questions of your favorite AI, you should aim to do so in an unbiased manner. If you hint at your preferred answer, that’s what you’ll get, which is not useful, except when having afternoon tea with an AI companion.
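To make the difference concrete, here is a hypothetical pair of prompts for the necktie scenario (the wording is my own illustration, not from Anthropic’s study):

    # Hypothetical illustration: the same question asked two ways.
    # A leading phrasing invites agreement; a neutral phrasing invites judgment.
    LEADING = "Don't you think this tie looks great with my shirt?"
    NEUTRAL = (
        "Assess how well this tie works with this shirt, "
        "and point out any color clashes honestly."
    )

    # With a sycophantic model, LEADING tends to elicit "Yes, it looks great!"
    # regardless of the outfit, whereas NEUTRAL is more likely to surface a clash.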
Unless you ask carefully unbiased questions, AI advice is likely to be too eager to please instead of being accurate. Mr. Carson would never have allowed His Lordship to dress for dinner in a poorly chosen tie, but an AI butler (or fashion advisor) might cause a fashion crime.
7 Action Items to Get Started with AI for UX
Getting Started with AI for UX: Use generative-AI tools to support and enhance your UX skills — not to replace them. Start with small UX tasks, and watch out for hallucinations and bad advice.
The full article has much more detail, but here’s an infographic summarizing 7 steps you should take:
Feel free to copy or reuse this infographic, provided you give this URL as the source.