UX Roundup: AI as General-Purpose or Cultural Tech | AI for Teamwork | Jakob’s Law Manga | 3 Ages of UX | Action Figure
- Jakob Nielsen
- Apr 7
- 8 min read
Summary: Is AI a general-purpose technology or a cultural force | AI boosts teamwork | Jakob’s Law as a manga | 3 ages of UX | Usability action figure

UX Roundup for April 7, 2025. (Midjourney)
Is AI a General-Purpose Technology or a Cultural Force?
In a recent video, Jensen Huang of Nvidia and Arthur Mensch of Mistral AI discussed the proper framing of AI as a technology and whether every country needs its own “Sovereign AI,” as it was termed.
I have previously discussed the point that AI is a general-purpose technology that applies to almost every activity in the economy. In fact, it’s hard to conceive of anything that is more general-purpose than intelligence.
In this regard, AI is comparable to general-purpose inventions like electricity and the printing press. Arthur Mensch explained that AI changes how software is built and how machines are used, enabling the creation of agents that can operate across any industry, including services, public services, agriculture, and defense. Jensen Huang concurred, stating that it can be used in any vertical and thus is naturally a priority for every state, necessitating a dedicated National AI strategy.
Following these historical analogies, countries do need their own domestic power plants and printing presses, but it doesn’t matter to their national identity whether those tools are bought from foreign vendors.
However, Huang recommended that all countries should engage with AI and build their own capabilities, as no one will care more about a specific nation's culture, language, and ecosystem than that nation itself.
This leads to the concept of AI as a cultural infrastructure, as opposed to being generic for all purposes. While the underlying technology is general-purpose, making it useful requires specialization in various verticals, cultures, and national agendas through partnerships between horizontal providers and vertical experts.
Several key points in the discussion supported the idea of AI as a cultural force:
Content Production and Social Construct: AI is a content-producing technology (text, images, voice) that interacts with society, thereby becoming a social construct that carries the values of an enterprise or a country.
Preservation of Values: If nations want their values to remain intact and not adopt a foreign provider’s values, they need to engage with AI more deeply than with technologies like electricity.
Digital Intelligence and Workforce: A nation’s digital intelligence and digital workforce are new infrastructure layers that should be shaped and potentially controlled at the national level, rather than being entirely outsourced.
Local Customization: To be truly effective and aligned with national needs, general-purpose AI models need to be customized with local norms, data, and values. This involves both “soft” encoding of culture and preferences into models through continuous training (e.g., using local content in the local language) and “hard” enforcement of policies and rules.
Cultural Reflection: AI systems inevitably reflect the values of their creators. Centralized AI models may struggle to encode universal values and expertise effectively, necessitating specialization based on specific populations’ preferences and expectations.
Sovereign AI and Digital Colonialization: If a nation doesn’t retain sovereignty over its AI, given its role as cultural infrastructure and a digital workforce, the stakes are equivalent to modern digital colonialization, where external entities could control a nation’s digital capabilities and values.
Huang and Mensch agreed that while AI’s underlying technology is broadly applicable, its real value and alignment with national interests depend on deliberate efforts to specialize and imbue it with local cultural nuances, knowledge, and values. This requires nations to actively participate in developing and shaping their AI infrastructure and talent.
To analyze this discussion, let’s first recognize that both panelists have a vested interest in “sovereign AI.” Jensen Huang runs NVIDIA, which is the largest provider of the AI chips that will be consumed in copious volumes if all countries build their own AI training clusters. And Arthur Mensch runs Mistral AI, which is the leading AI model in Europe — possibly the only frontier model outside the U.S. and China. He has a strong interest in convincing European politicians to prioritize his company over American and Chinese AI providers.
However, just because people have an interest in promoting one side of an argument doesn’t make their arguments invalid or their position compromised. We should simply remember to take what they say with a grain of salt.
I find the arguments for AI as a cultural force to be convincing but not decisive. I don’t think that all countries need to build the full AI stack locally. The analogy with the printing press comes in handy: it really doesn’t matter whether a country imports all its printing presses — or for that matter, whether they import all their personal computers. What matters is to retain a local value system for the newspapers printed on those presses (or read through web browsers on those computers).
If foundation models can be kept sufficiently general-purpose, it should be possible to build local applications on top of that base layer of AI, and those applications can retain the domestic values that matter. On the other hand, if the AI labs persist in using strict “safety” rules for their foundation models that enforce the values of their creators, maybe everybody does need to train their own local foundation models rather than importing those foreign values.
I honestly don’t know the answer. It’s above my pay grade as a usability specialist. I do agree that national leaders need to engage with AI sooner rather than later, no matter what the best decision may turn out to be.

Should the development of local “sovereign AI” be a priority for countries? This was the topic of a recent debate between Jensen Huang of Nvidia and Arthur Mensch of Mistral. (ChatGPT native image model)
I made a short song about Sovereign AI (YouTube, 2 min.)
AI Boosts Teamwork
There’s much research showing that AI increases the productivity of knowledge workers, especially in software development (most recently causing the rise of “vibe coding”) and creative tasks like ideation. However, that research has mostly focused on the performance of individuals working with and without AI assistance.
In real businesses, most important projects are done by teams. Yes, many tasks are done by individuals who report to the team, but team-based tasks are also important, especially in idea-development phases where AI’s ideation strength might shine.
A new study by Fabrizio Dell’Acqua from Harvard University and a large team of co-authors from Harvard, the Wharton School, and Procter & Gamble (including my favorite AI academic, Ethan Mollick — hat tip to him for the reference) now provides data on AI’s ability to enhance team performance.
Participants were 776 business professionals from Procter & Gamble (P&G — a consumer goods company famous for brands like Tide, Pepto-Bismol, and Gillette). The task was to develop new product ideas and business strategies for the specific business unit where the participants worked.
The study employed a 2x2 experimental design:
Professionals working alone versus working in two-person teams
With or without help from AI (GPT-4)

Experiment design for this study.
The main dependent variable was the business quality of the resulting business proposals, as scored by independent human business experts.
The two “teams” conditions (with and without AI) used interdisciplinary teams consisting of one commercial business professional and one R&D specialist, which both reflects real-world product development and was expected to improve ideation. Interestingly, the paper reports that when interviewing senior P&G executives, the executives agreed on the importance of cross-functional teams but stated that such teams were challenging to convene in practice due to time constraints and cultural differences between divisions.
The study was conducted between May and July 2024, using GPT-4 for AI assistance. We might already expect better results if using the newer GPT-4.5, which has improved creativity — let alone the next-generation AI models soon to be released. (Watch my video about ChatGPT 4.5.)
Taking individuals working without AI as the baseline (control group), the other three conditions scored as follows, in standard deviations for the “business quality” metrics:
Teams without AI: +0.24 SD
Individuals with AI: +0.37 SD
Teams with AI: +0.39 SD
All three conditions relative to the control group are statistically significant, though the differences between the three treatment conditions fall within the confidence intervals.
Two main conclusions: First, without AI, teams do indeed outperform individuals. Second, providing an individual with AI assistance enables him or her to perform as well as (or possibly even better than) a team without AI. In effect, AI works as a “virtual teammate” and provides some of the alternative perspective that a human from a different discipline would offer.
Some people might speculate that replacing team members with AI has a dehumanizing effect. However, the study found an increase in positive emotions and a decrease in negative emotions for those participants who worked with AI, both in the case of individuals with AI (+0.46 SD positive emotions and −0.23 SD negative ones) and teams working with AI (+0.64 SD positive emotions and −0.24 SD negative ones).
The authors also assessed the very best solutions to the problem, defined as proposals ranked in the top 10% by the independent experts. In a real business, people will always propose many more new business ideas than can practically be brought to market, so only the top proposals really matter. The bottom ones just eat up executive time before being rejected.
Both individuals and teams working without AI had comparatively few top proposals, with only 5.8% of their proposals being in the overall top 10%. In contrast, teams working with AI had 15% of their proposals in this elite group of winners. In other words, the ability to produce a winning proposal was almost 3x better for teams using AI.
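As a quick sanity check of the “almost 3x” figure, here is a minimal arithmetic sketch using the two hit rates quoted above (5.8% and 15%):

```python
# Hit rates for the "elite" group: share of proposals ranked in the overall
# top 10% by the independent experts (figures as reported in the study).
no_ai_hit_rate = 5.8        # % for individuals and teams working without AI
teams_with_ai_hit_rate = 15.0  # % for teams working with AI

# Ratio of the two hit rates: how much more likely a team with AI was
# to produce a top-10% proposal.
ratio = teams_with_ai_hit_rate / no_ai_hit_rate
print(f"Teams with AI were {ratio:.1f}x more likely to produce a winning proposal")
```

The ratio works out to about 2.6, which rounds up to the “almost 3x” stated in the text.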

AI has boosted individuals to the level of small teams and also improved the performance of teams, especially on the most successful projects. (ChatGPT native image model)
Participants in the AI conditions completed their proposals approximately 15% faster than those without AI. This is a smaller gain than usually seen for AI use and probably reflects the open-ended nature of the task.
In conclusion, I can do no better than to quote Mollick’s own writing about the results: “AI doesn't just automate existing tasks, it changes how we can think about work itself. The future of work isn't just about individuals adapting to AI, it's about organizations reimagining the fundamental nature of teamwork and management structures themselves. And that's a challenge that will require not just technological solutions, but new organizational thinking.” (OK, he is a business school professor, so he is biased in favor of organizational thinking, but I agree.)
Jakob’s Law as a Manga

Jakob’s Law of the Internet User Experience, as drawn by ChatGPT's native image model.
The 3 Ages of UX
The 3 Ages of UX: Goodbye Handmade Design, Hello Superintelligence! (5-minute video, YouTube).
Usability Action Figure

The native image mode in ChatGPT can be used for endless styles, not just comic books. Here’s an action figure toy I made.
The more serious use case for “action figure” images like this would be to make such images for team meetings or product launches. Or even as persona illustrations. However, mainly they are just for fun. It’s great that we can now use computers and AI simply to amuse ourselves.
Prompt: “Draw an action figure toy of the person in this photo. The action figure should be full-figure and displayed in its original blister pack packaging. On top of the box is the name of the toy "XXX" across a single line of text. In the blister pack packaging, next to the figure, show the toy's accessories, including YYY.” (You would replace XXX and YYY with something appropriate to your scenario. You can also include instructions to change the person’s outfit to something more fun than the photo.)
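If you want to reuse the prompt for different people or scenarios, a minimal templating sketch like the following can fill in the XXX and YYY slots. The toy name and accessories below are hypothetical example values, not from the article:

```python
# Template for the action-figure prompt, with {name} and {accessories}
# standing in for the XXX and YYY placeholders from the article.
TEMPLATE = (
    "Draw an action figure toy of the person in this photo. "
    "The action figure should be full-figure and displayed in its original "
    'blister pack packaging. On top of the box is the name of the toy "{name}" '
    "across a single line of text. In the blister pack packaging, next to the "
    "figure, show the toy's accessories, including {accessories}."
)

# Example values (hypothetical — substitute your own scenario).
prompt = TEMPLATE.format(
    name="Usability Hero",
    accessories="a stopwatch and a clipboard",
)
print(prompt)
```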
As with most AI projects, I recommend an iterative approach. For this action figure, my original prompt was to have a notebook and pen included in the accessories, but the notebook looked too boring in the picture, so I swapped it out with a stopwatch. (Which, indeed, we used to run usability studies in the old days, before using recording services like UserTesting.com that automatically timestamp everything.)
I made a small video of this action figure toy in action (YouTube, 1 minute). The video version seems more tangible than the still image, thanks to the 3D animation.