Summary: 2 new user behaviors found when using generative AI for text: Accordion Editing & Apple-Picking | Slash your digital prose by half | Pronominal dilemma: Should UX writing refer to the user in the first or second person? | Design for existing data and legacy system integration | The cognitive toll of screen exposure in early childhood — most detrimental at age one, less so at age two (but no TV during dinner!)
This week I celebrate India surpassing the UK to become the country with the third-most subscribers to my newsletter.
New User Behaviors with Generative AI: Accordion Editing & Apple-Picking
We finally have a qualitative usability study with detailed observations of individual users as they perform tasks with ChatGPT. You should read the full article, but here’s a summary of the two new user behaviors discovered from watching people use generative AI:
Accordion Editing is basically playing with your chatbot’s output the same way you’d play a real accordion: sometimes you want it to expand with more detail, and sometimes you just want the short version. Users iteratively ask the AI to contract or expand its outputs, employing tactics like forced ranking and word-count reduction. Accordion Editing is particularly evident when generating low-to-mid-fidelity outputs such as plans, ideas, or lists.
We named “Accordion Editing” after the musical instrument, which is played by alternating expansion and contraction. (Accordion player by Midjourney.)
Apple-Picking. Picture this: you’re scrolling up and down your chat looking for that one line the bot wrote that was actually cool. You find it, and you either tweak it or use it as a springboard for your next great idea. The catch? Prepare to scroll like you’re doomscrolling your Twitter feed. Apple-Picking is a workaround for the strictly sequential nature of AI chatbot interfaces: users need to locate previously generated elements they wish to edit or use as context for their subsequent queries. This behavior is notably labor-intensive, since it requires scrolling back through long chat windows to find the relevant points, making it a significant friction point in the user experience.
Apple-picking: We may have to reach all over the tree to pick the ripest apples. (Apple tree by Midjourney.)
Conversational User Interfaces Are Awkward: Some analysts claim that conversational user interfaces (CUI) streamline the user experience by minimizing complexity. Yes, chatbot UI is linear and employs scrolling as a major interaction technique. While it seems easy enough when the only options are to go up or down, observing users reveals that they often get lost while scrolling through massive walls of poorly differentiated text with variations of AI output. CUI necessitates multi-step interactions, complicating task completion, and continuous scrolling impedes task management. Yep, every rose has its thorns: don’t believe the hype about CUI. It has plenty of usability problems, so do your own user testing if you plan to deploy this interaction style.
These 3 insights about AI use were discovered in a simple qualitative usability study with 8 users. If you run any type of AI project, there is no excuse for going without user research. If you don’t have the budget to test 8 users, don’t pursue advanced technology.
Cut the Words
My recommendation has always been to cut 50% of the words when writing for online media, defying the verbose norms imbibed during our formal education. (I am not a role model of concise writing myself.)
Taylor Dykes has 7 actionable writing tips for reducing word count. Though her advice is reminiscent of an English professor’s, she gives us a helpful checklist for economical eloquence.
“Brevity is Brilliance” by Ideogram.
Pronominal Quandary: My Stuff vs. Your Stuff
Perennial UX writing dilemma: What word to use when referring to the user’s stuff in a UI design?
“My stuff” — the user reads the label and recognizes that the first-person pronoun refers to the person doing the reading; the label thus leads to the user’s own stuff, not the computer’s stuff, even though it’s the computer that put those words on the screen.
“Your stuff” — the computer addresses the user, speaking through the words it draws on the screen. Therefore, the computer is employing the second-person pronoun.
In these UI designs, “stuff” can be a variety of items we manipulate on computer systems. For example, the current shopping cart, the list of previous orders, the user’s favorites, subscriptions, contacts, the library of books on an e-reader, etc.
My take is that both work and have good usability according to Jakob’s Law since both are frequently encountered in a wide range of services and websites. Thus, the user will recognize and understand either word.
Still, you must pick one option and stick to it, according to Usability Heuristic #4, Consistency and Standards. So, which word should you use for a specific website? Oh, the agony of being a UX writer, where a single word can be a full day’s work.
Yuval Keshtcher offers a brief overview of the prevailing best practices for choosing between “my” and “your”: consider how the user perceives the interface. Is it an extension of them (use “my”) or a separate entity (use “your”)?
Keshtcher perceptively observes that sometimes omitting the pronoun can be the most elegant solution. Shorter labels rule if they can be used without confusion. For example, the Amazon Kindle uses the one-word label “Library” for the user’s on-device collection of books. No need to call it “My Library” when you’re already holding the e-reader in your hands and the label links to the library stored on that device.
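Picking one pronoun is far easier to enforce when the labels live in a single place instead of being hardcoded screen by screen. Here is a minimal TypeScript sketch of that approach (the module and label names are hypothetical illustrations, not taken from Keshtcher’s article): the pronoun decision is made exactly once, and every screen imports it.

```typescript
// uiLabels.ts — hypothetical central string resource.
// The possessive pronoun is chosen once, so the entire UI
// stays consistent (Usability Heuristic #4).
const POSSESSIVE = "Your"; // switch to "My" to flip the whole UI

export const uiLabels = {
  cart: `${POSSESSIVE} Cart`,
  orders: `${POSSESSIVE} Orders`,
  favorites: `${POSSESSIVE} Favorites`,
  // Per Keshtcher's tip: omit the pronoun where ownership is
  // already obvious from context, as with the Kindle's "Library."
  library: "Library",
} as const;

// Usage elsewhere: render uiLabels.cart as a page heading, etc.
```

Centralizing the strings also pays off later: switching the pronoun, A/B testing the two options, or localizing into languages with different pronoun conventions becomes a one-file change.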
Design for Existing Data and Legacy System Integration
Bob Goodman explains the importance of UX designers engaging in data discovery and internal research with legacy systems. He argues that UX practices often overlook the integration of existing data structures and systems, mainly due to siloed data architecture and an overemphasis on new or fresh UI design. Goodman lists valuable questions to ask about data roles, lifecycle, and search capabilities, aimed at fostering collaboration between designers and technical experts to enhance the user experience in data-driven products.
Screen Use by Toddlers: Avoid During Meals, Less Bad at Other Times
I have updated my article “Infant Screen Use Leads to Reduced Cognitive Skills at Ages 4 & 9” with the results of a new research study that assessed children at a slightly later age. The new study of 13,763 French children focused on screen use when the kids were 2 years old. The other studies discussed in my article assessed infants aged 1 and found a substantial negative impact from screen use at that age on cognition and development when the same children were tested later in life.
In contrast, the new study finds only a minor negative impact from screen use at age 2 when the same children were tested at age 5.5. An interesting detail from the new research is that screen exposure during family meals (when the children were 2) had a much worse impact on their cognition at age 5.5 than screen use at other times of the day. This is true even if the “screen use” was a TV running in the background. The effect of watching TV during meals was more than 10 times worse than the effect of a 1-hour increase in daily screen time.
The research highlights that TV during family meals disrupts children’s auditory and visual focus, reducing effective communication between parents and children. This hampers children’s ability to understand language sounds and to articulate themselves.
Several key differences between the new research and the previous studies:
The French study concerned screen use at age 2, whereas the Japanese and Singaporean studies concerned screen use at age 1. Taken together, these studies suggest that screen use at age 1 is much more damaging than screen use at age 2.
The French study controlled for more factors than the two Asian studies. In particular, the French researchers measured the children’s intelligence at age 2 and collected various parenting measures unavailable in the two Asian studies. Controlling for these additional factors reduced the size of the effect attributed to screen use at age 2.
The new study had a less-detailed metric for screen-time exposure, lumping all use of 2 hours or more per day into a single category. This may have caused it to miss the impact of extensive screen use of 4 hours or more per day, which was especially detrimental in the two earlier studies.