
UX Roundup: 2025 UX Themes | Put New AI Models to the Test | AI Slide Creation | AI Analyzing Usability Tests | AI Job Creation | AI & UX Jobs

Summary: 6 themes for UX in 2025 | Testing new AI with your own challenging tasks | AI can transform prose content into condensed slide presentations | AI middling at analyzing usability tests | AI creates more jobs than it destroys | AI’s impact on UX jobs the next 5 years

UX Roundup for January 6, 2025. Happy New Year! Let’s seek out new adventures in this new and promising year. (Midjourney)


UX Trends for 2025

I made two videos with my top 6 trends for user experience in 2025:



The music video is clearly the more entertaining of the two and shows off AI creation the best, but presenting complex topics in rhyming song lyrics is less straightforward and requires more engagement on the listener’s part. (Which may be good for learning but bad for quick hits.) Thus, I recommend watching the explainer video before listening to the song.


I reused my “Norwegian television presenter” avatar in the explainer video but made a new “C-Pop Idol” avatar for the music video. The Norwegian avatar received the most clicks of any of my videos in 2024, so I hope it will also do well in 2025. For the music video, I could have reused my existing K-Pop avatar, which I made last year because I’m a big K-Pop fan. But since I have many more Chinese followers than Korean ones, I decided to spend a little time making a new avatar for this song.


It's still unclear whether it’s better to retain a few consistent avatars across all videos or to create new ones for each video. Making an avatar takes about half an hour, including waiting time while HeyGen chews on the new design. So, pending more data, I am tempted to reuse most avatars while experimenting with new ones to see if they attract more engagement.


It's equally unknown whether it’s better to have each avatar specialize in certain content types: for example, using the “TV presenter” solely for explainer videos and the “C-Pop” avatar solely for songs.


(Many of the questions I’ve raised here seem ripe for a master’s thesis. If you do such research, please let me know the results.)


Educational songs about non-fiction content are a more entertaining way of communicating the material than plain text or even avatar explainer videos. But rhyming lyrics are less concrete and are at a slight metaphorical remove from the basic facts, which requires listeners to pay close attention and think about what’s being sung. (Midjourney)


Put New AI Models to the Test

We all know that current AI can’t do everything. We also know that upgraded AI tools are released every week. This means that even if a certain task isn’t worth doing with AI today, it might be perfectly feasible to do it with AI next week.


Furthermore, it’s not just a matter of improving your currently preferred AI tool. (Say, Claude Sonnet going from version 3.5 to 4.0 sometime in 2025.) New AI products are constantly being built: maybe you should transfer your loyalty to one of the new entries. How do you know?


As suggested by AI guru Ethan Mollick, one simple idea is to maintain a set of tasks that would be useful for your job or your company if AI could perform better. Then, when something new comes out, you try these tasks with the new model and see if it’s in fact better than what came before.
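To make this concrete, here is a minimal sketch of such a personal benchmark in Python. It assumes the openai client library and an API key in the environment; the task prompts and model name are my own placeholders, to be swapped for the challenging tasks from your actual work:

# Minimal sketch of a personal AI benchmark: run a fixed set of tasks that
# today's tools handle poorly against each new model, and save the answers
# for side-by-side comparison. Assumes the `openai` Python package and an
# OPENAI_API_KEY environment variable; tasks and model names are placeholders.
import json
from openai import OpenAI

TASKS = [
    "Summarize this usability-test transcript into a prioritized issue list: ...",
    "Rewrite this error message for a non-technical audience: ...",
    "Turn the article at <URL> into a 7-slide outline.",
]

client = OpenAI()

def run_benchmark(model: str, outfile: str) -> None:
    results = []
    for task in TASKS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": task}],
        )
        results.append({"task": task, "answer": response.choices[0].message.content})
    with open(outfile, "w") as f:
        json.dump(results, f, indent=2)

run_benchmark("gpt-4o", "results-gpt-4o.json")  # rerun whenever a new model ships

Comparing the saved answer files from two models on the same tasks quickly tells you whether the newcomer deserves your loyalty.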


You should have a set of test tasks ready for trying out new AI models with challenges from your work: mainly problems that are hard or impossible for your current tools. (Leonardo)


If your company doesn’t have a systematic process for assessing how to use emerging AI capabilities, and whether new AI products are better than old ones, your executive committee probably looks like this. Dinosaurs! You can’t rely on your conclusions from evaluating AI two months ago. (Midjourney)


AI Transforms Articles into Slides

I used the AI service Gamma to automatically create a short slide deck based on existing long-form content (Instagram, 7-slide carousel).


I fed it my article “Design Leaders Should Go 'Founder Mode'” simply by pasting the URL to the article on the UX Tigers website.


Gamma then automatically created the slides: it extracted the main points, summarized them in a few words, added some degree of visualization, and designed the slides.


I think the AI did well at extracting the main points from my article and turning them into simple slides. At first, Gamma produced slides in a very wide-screen aspect ratio, but it took only a click to have the AI auto-reformat them into the square format I posted to Instagram. If I had wanted slides for a lecture, the standard HD projector format of 16:9 was also available one more click away, with auto-redesigned slides.


One of the slides Gamma auto-generated from my article. This is a simple visualization, but the AI was able to extract the meaning correctly and transform it into a visual. (Gamma)


Usability was middling for anything beyond bare-minimum slide creation: I struggled to get my UX Tigers logo onto the slides, and I gave up on getting it onto slides with a banner image. But basic usability was good: creating a slide deck really required no work beyond pasting the URL.


I am not happy with the stock photography in the slides. I could have created better images with Midjourney and Ideogram. But of course, these images came automatically, with no work on my part!


The body text font size is much too small: I can’t imagine projecting these slides and expecting somebody in the back of the room to be able to read the text.


Conclusion: if you want quick slides based on an existing document, Gamma is a good AI service that delivers. I didn't even need any paid features, but there are promising options for making text longer or shorter if you take out a subscription. I’m impressed with how much it could do for free.


AI can now automatically create slide presentations from your content. (Leonardo)


The ability to automatically transform 1,558 words of dense prose into 7 simplified slides that present the main insights in 174 words (11% of the original word count) is a great example of an advanced AI-native capability: remixing content into new modalities, reusing one set of original insights in many styles.


(As further illustrations of this point, I’ve also created an 8-minute podcast and a 2-minute avatar presentation summarizing my “Founder Mode” article. I still need to make a music video! I’ll tackle that next.)


AI Middling at Analyzing Usability Tests

Emily Kuang and three coauthors from the Rochester Institute of Technology and other universities conducted a research study in which ChatGPT 3.5 analyzed the transcripts of usability test sessions.


In total, recordings from three different test sessions were analyzed, originating from usability studies of a desktop website, a smartphone app, and a VR headset. After ChatGPT had analyzed the written transcripts of the test users’ verbatim comments, the researchers had 24 junior to mid-level UX specialists (with a mean of 4 years of experience) review the videos together with ChatGPT’s report on the usability problems in each of the tested designs.
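For a sense of what the study automated, here is a minimal sketch of this kind of transcript analysis in Python, assuming the openai client library; the prompt wording and model name are my own illustrative choices, not the researchers’ actual setup:

# Minimal sketch of AI-based usability-test analysis in the spirit of the
# study: feed a session transcript to a model and ask for a report of
# usability problems. Assumes the `openai` Python package; the prompt and
# model name are illustrative, not the study's configuration.
from openai import OpenAI

client = OpenAI()

with open("session_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # the study used GPT 3.5; substitute your current best model
    messages=[
        {"role": "system",
         "content": "You are a UX researcher analyzing a think-aloud usability test."},
        {"role": "user",
         "content": "List the usability problems evident in this transcript, "
                    "with severity ratings and supporting quotes:\n\n" + transcript},
    ],
)
print(response.choices[0].message.content)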


Across the three designs, AI identified 14 usability problems. The 24 UX specialists who reviewed the study videos and the AI findings agreed with 78% of these findings. In other words, AI was reasonably accurate in identifying actual usability problems as opposed to raising false alarms that would waste the design team’s time.


However, the UX specialists identified many more usability problems than the AI had discovered. Assuming that the humans were right, the AI found only 41% of the total set of usability problems in the designs. (Note, though, that comparing the AI’s performance with the collective performance of 24 UX specialists is a little unfair. In practical projects, usability study sessions are usually reviewed by one or two UX researchers, and it would be prohibitively expensive to employ a team of 24.)
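As a back-of-envelope reading of these numbers (my interpretation, treating the 78% expert agreement as precision and the 41% hit rate as recall):

# Back-of-envelope arithmetic on the study's reported numbers. Treating the
# 78% expert agreement as precision and the 41% hit rate as recall is my
# interpretation; the paper may define the measures differently.
ai_reported = 14                              # problems the AI flagged
precision = 0.78                              # share of AI findings the experts confirmed
recall = 0.41                                 # share of all real problems the AI found
confirmed = round(ai_reported * precision)    # about 11 confirmed problems
implied_total = round(confirmed / recall)     # about 27 real problems across the 3 designs
print(confirmed, implied_total)               # 11 27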


This study is interesting, and I’m glad the researchers did it. However, the AI model used was GPT 3.5, whereas today one would use o1-Pro (the $200 subscription is easily recovered by saving a few hours of expensive UXR staff time). This is not a criticism of the scientists, because all they (or anybody) could do was use the best AI available at the time of the study. However, the findings will not transfer to 2025 usability work. For example, GPT 3.5 was restricted to analyzing a text transcript of the user sessions. In contrast, today we would use AI’s multimedia capabilities to have it watch the screen recordings and listen to the audio (including emotional analysis of the users’ voices).


Richer data and more intelligent AI will almost certainly push performance substantially above the 41% level found in this study. Tomorrow’s AI will score even higher.


Also, even if today’s AI still doesn’t spot 100% of usability findings (likely), that doesn’t make it useless. It is much easier for a human usability analyst to watch for suggested usability problems when reviewing a session video, editing the AI’s analysis as needed, than it is to identify those problems from scratch and write up each description. Spending less time and effort on the AI-identified usability problems frees up human time to look for more subtle (but maybe deeper and more important) issues.

(Hat tip to Marc Busch for alerting me to this study.)


AI analyzing a usability test recording vs. a human UX specialist analyzing the same recording with help from the AI report. The study reported here used a primitive AI from 2023 which could not see the user’s actions but was restricted to analyzing a written transcript of the user’s comments. (Leonardo)


AI Creates More Jobs Than It Destroys

In a recent paper, Elina Mäkelä and Fabian Stephany from the University of Oxford analyzed 12 million American online job postings from 2018 to 2023 to see whether AI helps or hurts jobs. The findings are not super-surprising, but it’s great to have them empirically validated from a dataset of this magnitude.


The study found a decline in jobs characterized by skills for which AI can easily substitute, such as customer service, summarization, or text review. The remaining jobs in this category also had a drop in compensation.


Conversely, the study found an increase in demand for jobs with skills that complement AI technologies (digital literacy, teamwork, or resilience and agility), alongside a rising compensation for these skills within AI roles.


The bottom line is an estimate that AI has led to a net increase in jobs, since the growth in jobs with complementary skills was about 50% larger than the decline in jobs with substitution skills.


(Hat tip to Ross Dawson for alerting me to this paper.)


According to the new Oxford study, people who can work collaboratively with AI and have skills that complement AI’s strengths have seen job growth and salary increases. (Midjourney)


People whose skills can easily be performed wholly or partly by AI saw fewer jobs and lower salaries. (Leonardo)


AI Impact on UX Jobs

We can use the Oxford study summarized above to predict the likely impact of AI on jobs in the product design profession. Skills that are relatively easy for AI to perform either partly or fully will be the first to be substituted by AI.


The following jobs will likely see strong declines over the next two years (listed in sequence, starting with the jobs that will be substituted the earliest):


1. UX writing

2. Visual design

3. UI design

4. Facilitating user research studies

People within these 4 job categories will likely be unemployed by 2027 unless they gain the skills to be among the smaller percentage of staff (initially about half, dropping to 25% by 2030) who will still be employed to orchestrate the AI tools and communicate between AI and the rest of the company. Or, a better choice: they can upskill to product design jobs that rely more on complementary skills than on substitutable skills.


Working collaboratively with AI while cultivating complementary skills is essential for future jobs. (Midjourney)


On the other hand, the following job categories may grow over the next 5 years, though they will likely also be partly substituted by AI after 2030. (Again, they are listed in order, with the roles that will be substituted early listed first.)


5. Interaction design

6. Feature design

7. Analyzing user research findings

8. Content strategy

9. Service design

10. UX architecture and strategy


Despite this prediction, I persist in my claim that AI will not cause unemployment, not even among UX staff. On the contrary, I expect the world to have about 5 times as many UX professionals by 2040. The reason: the demand for quality design will increase hugely as (a) AI doubles GDP, and (b) software keeps eating the world. I predict 20 times more design work by 2040, meaning that even if AI does 3/4 of it, the remaining quarter (20 × 1/4 = 5) still requires 5 times today’s human staff.


The roles with smaller numbers in the above lists (i.e., those that will be substituted by AI the soonest) may be 95% or more done by AI by 2040, whereas roles with higher numbers may be less than half done by AI, even by 2040. Roles numbered 1-3 may thus actually see reduced employment, whereas roles numbered 8-10 could see 10x employment. My prediction of 5x the UX staff by 2040 is therefore an average across the many roles within the larger field.


I don’t expect 1930s-style dole lines of unemployed product design professionals, even though AI will take over large parts of their current jobs. Instead, much more design work will be done. (Ideogram)
