AI vs. Metaverse: Which Is the 5th Generation UI?

By Jakob Nielsen

Summary: What’s hype, what’s real in next-generation user interfaces? Alternate titles for this article were: “Why Zuck Went Wrong” and “Cool Demos Don’t Predict Uptake,” which are good summaries.

Fads plague the high-tech business, and almost every year, something new is being hyped to the skies:

  • E-commerce and the entire dot-com bubble from 1995 to 2001

  • VR/AR and the “metaverse” (Mark Zuckerberg’s detour from focusing on his business)

  • NFTs (non-fungible tokens, the darling of 2022)

  • AI tools (the “big thing” in 2023)

How can we tell what’s hype and what’s real? NFTs have been a clear bust, whereas e-commerce is a clear win (accounting for 15.1% of retail sales in the United States as of May 2023, or about a trillion dollars per year — any time the big T enters the picture, it can’t be a fad). When either was at peak hype, you would have been hard-pressed to tell the difference based on their press coverage.


My suggestion for separating the wheat from the chaff is to look at what works for customers in user testing. Does the hyped idea help real users perform useful tasks more efficiently or pleasantly? If so, real-world uptake is likely. Does it look fabulous in demos but fail to help people perform tasks? If so, it’s probably hype that will suffer a well-deserved death within the year.


On my list of four hyped items, two have met history’s judgment, and I will also claim that it was clear at the time that e-commerce was useful, whereas NFTs were a useless fad. (The question about e-commerce, circa 2000, was not whether online shopping benefited consumers, but whether one could construct efficient supply chains to ship things like dog food profitably.)


What about the metaverse and AI? Both are shooting for the skies, but one of those rockets is exploding on the launchpad.


To give away the conclusion, AI tools are real, and the metaverse is hype. Let’s see why.


The Metaverse Seduction

The first 4 generations of mainstream user interface styles were:

  1. Batch processing: the entire workflow was handed off at a single point in time, making for a zero-dimensional UI. 1945-1964.

  2. Command lines (e.g., Unix, DOS): textual commands typed one line at a time, making for a one-dimensional UI. 1964-1980.

  3. Full-screen terminals (e.g., IBM mainframes): still text-only, but now a screenful at a time, and the user could move the cursor around on the screen, for example, to fill in form fields. This was a two-dimensional UI. 1971-1984. (Note the overlap between generations 2 and 3, which were in use in parallel.)

  4. Graphical user interfaces (GUI, such as the Mac and Windows), which also used a two-dimensional computer screen, but with overlapping windows, adding a bit of z-axis dimensionality, making for a two-and-a-half-dimensional UI. 1984-2024.

(For more on these UI generations, see my article on the 3 paradigms of user interfaces.)


Given this historical progression of 0 -> 1 -> 2 -> 2.5 dimensions, it’s understandable that many people thought that the 5th UI generation would complete the count and be three-dimensional, as represented by virtual reality — or the “metaverse” as it’s being hyped.

I never thought so: see my 1998 article, 2D is Better Than 3D.


3D fails to be superior to 2D because most of the information we manipulate on computers is N-dimensional, where N tends to be much bigger than 3. Consider, for example, financial planning and investment management. How many dimensions are there for different investment strategies and different instruments? Plus, additional dimensions for each investor’s circumstances and preferences. It doesn’t matter much if you’re squeezing this many dimensions down to 2 or 3: usability will depend on your visualization skills and not on the difference between 2D and 3D. But a 3D UI will usually be inferior because it’s much harder to manipulate and scan visually than a 2D representation of the same data and features.


Instead of 3D, the next UI generation will be a hybrid of intent-based outcome specification (driven by AI) combined with a visualization-based graphical user interface and traditional command-driven interaction design, mainly in 2D. We won’t move into more geometrical dimensions but into more conceptual dimensions, combining multiple interface styles.
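To make this hybrid concrete, here is a minimal sketch in TypeScript. All interfaces and names are hypothetical illustrations, not any real product’s API; the point is only the contrast between specifying steps and specifying outcomes.

```typescript
// Command-driven interaction: the user specifies *how*, one step at a time.
interface ChartCommands {
  setType(t: "bar" | "line"): void;
  setX(field: string): void;
  setY(field: string): void;
  render(): void;
}

// Intent-based outcome specification: the user states *what* result they
// want; an AI back end plans and executes the steps.
interface IntentAssistant {
  fulfill(intent: string): Promise<ChartCommands>; // returns an editable result
}

async function example(chart: ChartCommands, assistant: IntentAssistant) {
  // Old style: spell out every step.
  chart.setType("bar");
  chart.setX("quarter");
  chart.setY("sales");
  chart.render();

  // New style: state the outcome; the steps are inferred.
  const draft = await assistant.fulfill(
    "Show quarterly sales by region and highlight outliers"
  );

  // The hybrid: the user refines the AI's draft with ordinary GUI commands.
  draft.setType("line");
  draft.render();
}
```

Neither style replaces the other: intent specification produces a first draft quickly, while the familiar command-driven controls remain for verifying and fine-tuning the result.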

Of course, 3D is helpful in some applications: medical surgery planning, repairing complex equipment like airplane engines, and definitely computer games. So we’ll have some 3D interfaces that supplement the predominating 2D/AI hybrid designs.


But even a use case like choosing a hotel room that seems to benefit from a 3D visualization of various rooms will likely be primarily non-3D, relying more on extensive photos and some video and less on VR. Most websites will make more money by investing in better photography than by deploying VR. After all, making more money is the goal of UX in business. It’s not your job to ship cool design that costs more to implement while bringing in less revenue.


Apple’s “Vision Pro” AR headset has superior hardware, with better resolution and no lag. It also reportedly has an exemplary user interface for controlling the 3D environment. (Though I defer my final judgment until I have seen the results of independent user testing, as opposed to journalists reporting on carefully staged demos.) It’s too expensive and heavy for real use, but the next release will be cheaper and lighter. And version 3 might be good enough for affluent consumers in rich countries to buy for watching 3D movies. I doubt many more use cases will be sufficiently appealing: the launch demos are amazingly uncompelling. And even 3D movies might be a minor use case if we go by the experience of this genre so far: other than the novelty effect, people rarely watch 3D movies, and even some of Hollywood’s best creators fail when trying their hand at 3D movies. (By “fail,” I mean that the story is just as engaging when watched in 2D, giving you no premium for locking yourself inside goggles or headsets.)


(As an aside, we used to say about Microsoft that one should wait for Release 3.0 of any of their products before buying. Today, this is what I say about Apple. In contrast, Microsoft has released several recent products that were good enough for productive use in v.1, including Bing Create and Bing Chat. Though since the latter is powered by GPT-4, the release count might approach 3 if we take the average of the front-end UI and the back-end large language model.)


Every time 3D has been tried for general business applications or non-game consumer applications, it’s failed.


How could Mark Zuckerberg make such a blunder? All reports indicate he has a high IQ and a solid technical background. Likely below Bill Gates on both counts, but having a lower IQ than Bill is no shame (I’m pretty sure I belong to the IQ<billg club myself).

I don’t have any insider information from Facebook/Meta. (If I did, I would not write about them.) But I think there are two explanations: one that goes back to childhood and one more current.


3D is seductive because it’s the dominant UI in many science fiction movies and TV shows, most prominently Star Trek and Star Wars. We nerds, including Zuck, grew up with a steady diet of tempting 3D products. Except, of course, that they were not genuine products (with real customers and actual use cases) but simply fictional devices to move the plot along. And I can’t deny that 3D UI looks good on film. However, SF designs are not user interfaces; they are audience interfaces. They don’t exist to support task performance but to support narrative goals. Very seductive. Very misleading for what will work in an earthly business, as opposed to the starship Enterprise.


3D is seductive because it is indeed persuasive in demos. Watch somebody else navigate a 3D UI while explaining how the visualizations work, and you will easily be convinced that it’s a great design. Looks cool. Must be good. Stop that zombie thinking and apply some critical analysis, please!


A critical analysis based on decades of UX experience tells us that demos are irrelevant for judging usability and utility. Actual use by representative users performing real business tasks is what matters. Watch users, not demos. That’s where the metaverse fails again and again. People get a headset, use it for a few fun experiences, and then it sits on a shelf while they conduct their business on a flat screen.


AI Tools Work

In contrast to the metaverse, AI tools work for genuine business use cases. I recently surveyed 3 studies of AI tools used for business tasks: customer support agents answering customer inquiries, business professionals writing standard business documents, and programmers implementing real code. In all 3 cases, the AI tools led to large, statistically significant increases in user productivity. Across the 3 case studies, productivity increased by 66% when using the AI tools. This is what separates hype from reality: business gains of that magnitude make AI real.


The 66% productivity increase was realized with previous-generation AI tools based on GPT-3.5. The current release (GPT-4) is already much better, and we should expect still better performance from the next release. Furthermore, in 2 of the 3 case studies, users were measured only during their first exposure to the AI tools. User interfaces benefit from a learning curve: people get more productive with more experience. In sum, the 66% productivity gain is a conservative estimate of what we will see next year: many business implementations of AI tools are likely to double employee productivity. (AKA, cut costs in half.)
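To spell out that last equivalence, here is the back-of-the-envelope arithmetic, under the simplifying assumption that labor cost per unit of output scales inversely with productivity (fixed costs ignored):

```latex
% Unit labor cost after a productivity gain g:
\[
\frac{C_{\mathrm{new}}}{C_{\mathrm{old}}} = \frac{1}{1+g}, \qquad
g = 0.66 \;\Rightarrow\; \frac{1}{1.66} \approx 0.60, \qquad
g = 1.00 \;\Rightarrow\; \frac{1}{2} = 0.50 .
\]
```

So the measured 66% gain already corresponds to roughly a 40% cut in unit labor cost, and doubling productivity (g = 1) is exactly the “cut costs in half” claim.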


Making money is how you can tell that AI is no longer hype (even though it was in past decades). Any company that doesn’t have an AI strategy will be toast in a few years. The same goes for you, dear reader. Get with the program and learn how to utilize AI in your own job to double your work performance.


The future of UI is 2D, likely multimodal, likely hybrid, likely multi-device. Limited 3D. (“Futuristic UI” image by Midjourney.)


Conclusion: Productivity Gains Win the Day

Real use is the differentiator between hype and reality.


Use cases that improve profitability for a broad range of businesses indicate what’s real. The same is true for the ability of representative users to perform useful tasks in usability testing.

In contrast, fancy demos don’t predict actual use. In fact, a technology that you only ever see employed for demos is likely to be hype.


Imagine placing a Zoom call to any executive anywhere on Earth: USA, China, Germany, it doesn’t matter. You tell him or her, “We have an AI tool for your industry that’ll cut your costs in half.” Will the executive take that sales call and, more important, listen to your pitch? You betcha.


Imagine another Zoom call to a handful of executives in a few select industries: “We have an XR tool for your industry that’ll make your maintenance engineers 20% more productive.” Will those executives take the call? Probably yes, and they may even buy the tool if your VR/AR is actually 20% better than what they have.


Two differences between these scenarios:

  • You can go on an AI sales call to any company worldwide — at least 10 years from now, when the specialized applications have been developed. Your metaverse salesperson can only call on prospects in a handful of industries.

  • Your pitch is 100% vs. 20% — bigger savings close more sales.

So here are your guidelines for separating reality from hype:

  • Do you have realistic use cases for profitable use in real businesses?

  • Are these use cases broadly generalizable, or will they stay narrow?

  • Does the technology perform better in demos or better in user research?

  • How much do you move the needle, as estimated by measurement studies of actual tasks?

Applying these guidelines to the metaverse and AI-driven applications makes for a straightforward conclusion: to stay ahead of the curve, forget the metaverse and embrace the power of AI. The metaverse is hype. AI is real. You heard it here.
