Jakob Nielsen

4 Metaphors for Working with AI: Intern, Coworker, Teacher, Coach

Summary: Viewing AI as an eager but unskilled intern was appropriate in early 2023, but this metaphor is now too limiting for effective AI use and for your own professional growth and learning. There’s no single new metaphor. Instead, think of AI in different ways, depending on the problem to be solved.

 

It’s rather an overused cliché to liken AI to an eager intern who still has much to learn and needs close supervision. I first saw this metaphor in May 2023 in articles by Ethan Mollick, including On-boarding your AI Intern. The “intern” analogy was an excellent metaphor for using AI in early 2023, but by late 2024, it had become limiting. Thinking of AI as an intern leads to a naïve understanding of its potential for helping humans grow. AI can now do much more for us than being a helpful but error-prone intern, and I’ll discuss several more fruitful metaphors in this article.


I hasten to say that in criticizing Mollick’s metaphor, I’m not criticizing him. AI-as-intern was indeed a great insight in May 2023, only two months after the start of the modern AI era with the launch of ChatGPT-4 in March 2023. Mollick has continued to produce insightful analyses of AI as it has evolved, and to this day, I recommend subscribing to his email newsletter, “One Useful Thing,” as the single best source of AI insights. (It takes a lot for me to praise a business school professor, but he’s that good: very clued-in to practical use cases in real businesses and not at all an irrelevant academic, unlike most of his colleagues.)


AI as an eager intern who requires close supervision to avoid blunders. This was a great metaphor in early 2023 but is naïve and limiting in late 2024 and will be misleading when the next generation of AI launches in 2025. (Midjourney)


A key insight is that AI proceeds through generations of capability, driven by the AI scaling law. We need an order of magnitude more training compute to build a fundamentally different AI, and this takes time. Generations are currently about two years apart: the present generation is at GPT-4-level intelligence, and all the current frontier models are about equally good, remaining at that level. The next generation (whether it’ll be called GPT-5 or something different) is expected by early 2025, or possibly December 2024, which is when Elon Musk has announced that Grok 3 will finish its training run.


These are the expected AI generations:


  • 2023 generation: ChatGPT-4 and all the best frontier models in use as of September 2024. AI has the capabilities of a smart high school graduate.

  • 2025 generation: AI will be as good as a smart college graduate with a slew of BS and BA degrees.

  • 2027 generation: AI will be like a team of 200 Ph.D.s.

  • 2030 generation: super-intelligence (ASI). AI will be better than any human.


Next-generation AI will be much more capable by 2025 than the intern-level AI we had when Ethan Mollick formulated his intern metaphor. The intern framing may still be appropriate for some tasks that will remain beyond AI’s ability to do on its own until the subsequent generation (2027) or even the super-intelligence expected to arrive between 2030 and 2033. One task I think will remain intern-level until 2027 is the analysis of qualitative usability studies: don’t just have AI watch recordings of remote tests and report its findings and redesign recommendations to the design team without oversight. (Though as early as 2025, I think it will be more fruitful to treat AI as a coach, as discussed below, even for tasks it can’t do satisfactorily on its own.)


AI as Coworker

There are two forms of tasks AI can do as a coworker:


  • Independent assignments: You ask AI to complete a task, and it gives you the deliverable.

  • Collaboration: You and AI work together to complete the task, alternating who takes the lead, depending on whose strengths each subtask plays to.


 A coworker AI either completes an assignment independently (as in this image) or collaborates closely with its human. (Midjourney)


AI can already perform a wide range of tasks independently. In UX, examples include mass-analyzing qualitative data to turn it into quant data, and tracking content strategy metrics over time. Both of these examples leverage AI’s strengths without being overly exposed to its weaknesses:


  • Once you have set up the AI to perform the task, you can let it loose to do the work at scale. Analyzing a million customer support tickets? AI can do it, but a human could never read more than a few hundred. Two million tickets? Makes no difference to AI, other than maybe running for a few more hours that night.

  • Hallucinations or infrequent errors don’t matter. If you’re doing sentiment analysis of millions of social media postings, it doesn’t matter if a few hundred get misclassified. That won’t even change the first decimal place of your metric, though it could change the second. But if you make different decisions depending on whether a metric was 32.71% or 32.72%, you don’t deserve to be let anywhere near any numbers. Similarly, suppose AI analyzed a million support tickets to pull out the 200 assessed to describe critical redesign opportunities for human attention. It doesn’t matter if 5 of those turn out to be unimportant. (Yes, you wasted a few minutes considering those tickets in vain, but without AI, you would never have seen any of the important needles hiding in that million-straw haystack.) A minimal code sketch of this kind of bulk triage appears just below.


The proverbial haystack. If something is hiding inside, AI can find it, but humans can’t. (Midjourney)
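To make the independent-assignment mode concrete, here is a minimal sketch of the bulk ticket triage described above. It is only an illustration: the OpenAI Python client, the model name "gpt-4o", and the label set are my own assumptions, not a prescription; any frontier model with a chat API would work the same way.

```python
# Minimal sketch: let AI triage a huge batch of support tickets on its own.
# Assumptions (not from the article): the OpenAI Python client, the model
# name "gpt-4o", and the label set below are illustrative choices only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["critical redesign opportunity", "routine issue", "positive feedback"]


def classify_ticket(ticket_text: str) -> str:
    """Ask the model to assign exactly one label to a single ticket."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the support ticket into exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip()


def triage(tickets: list[str]) -> list[str]:
    """Return only the tickets flagged for human attention."""
    return [t for t in tickets if classify_ticket(t) == "critical redesign opportunity"]
```

A few misclassifications in a run over a million tickets would not change the conclusions, which is exactly why this kind of task suits AI working independently.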


On the other hand, if every detail must be correct, you can’t let current AI do the task independently. This will probably remain true for the 2025 generation of AI as well. After 2027, we can hope to get a handle on hallucinations, or maybe they’ll always be with us. However, remember that humans aren’t perfect either, to say the least. People are notoriously error-prone, so the appropriate criterion is to assess whether humans or AI make more mistakes on any given task.


The second model is close collaboration for tasks that AI can’t perform autonomously. This is typically how I recommend approaching design ideation: alternate who takes the lead and proceed through the prompting diamond toward the final design. AI is brilliant at generating creative ideas, including many you would never have thought of. But many of its ideas will never work, so AI creativity needs to be tempered by human judgment. The two together make a beautiful team.

AI as Teacher

In the teacher metaphor, AI guides the user through the acquisition of new skills and knowledge. It doesn’t actually matter whether AI can do the task on its own (the independent coworker model). All AI has to do is teach the material, and you will proceed to do the work once you have learned enough.


AI as a teacher: it’s great at instructing people in new material and guiding them through a proper learning path. (Midjourney)


There’s a famous saying, “Those who can, do. Those who can’t, teach.” (And those who can’t even teach, consult.) While there’s some accuracy to this saying, it unfairly undervalues teaching, which is a skill in its own right. How would a new person ever get to the “can” stage without first passing through a “learn” stage? But when you don’t know what you don’t know, learning can be hard without a teacher’s assistance.


Climbing the tower of learning alone, without a teacher, is hard. (Midjourney)


AI is perfect for constructing lesson plans for anything you care to learn. It’s also usually great at explaining complex issues in a simple manner. The cliché prompt asks AI to “explain XXX as if I were a 5-year-old.” Maybe you don’t need it quite that simple, but you can certainly ask for a simplified explanation, and if that doesn’t make sense, issue a follow-up prompt to simplify even more. And if there’s one particular point you don’t understand, ask for that to be explained.


Here’s Claude explaining Jakob’s Law as if I were a 5-year-old:


“Imagine you have a favorite playground that you love to visit. You know where all the fun things are — the swings, the slide, and the monkey bars. You feel happy and comfortable there because you know how everything works.
Now, Jakob's Law is like saying that when you go to a new playground, you expect it to be similar to your favorite one. You look for swings in the same place, and you hope the slide works the same way.
On the internet, it's the same idea. When people use websites or apps, they like them to work in ways they already know. If things are too different or strange, it can make people confused or frustrated, just like you might feel if you went to a playground where everything was upside down!
So, people who make websites try to make them work in ways that most people are already used to. This way, everyone can have fun and find what they need easily, just like at their favorite playground.”

This is a good explanation, but not how I would want a UX newbie to learn about my law. Asking for more and more simplified explanations, alternatives, and additional examples until you understand is the way to go.
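If you want to wire this “keep simplifying until it clicks” loop into a small tool of your own, the sketch below shows one way it might look. It is only a minimal illustration: the OpenAI Python client and the model name "gpt-4o" are assumptions of mine, and any other chat-capable model would serve equally well.

```python
# Minimal sketch of an iterative "explain it simpler" tutoring loop.
# Assumptions (not from the article): the OpenAI Python client and the
# model name "gpt-4o"; substitute whatever LLM you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def tutor_session(topic: str) -> None:
    """Keep asking for explanations until the learner is satisfied."""
    messages = [
        {"role": "system",
         "content": "You are a patient tutor. Explain clearly, briefly, and with concrete examples."},
        {"role": "user", "content": f"Give me a simplified explanation of {topic}."},
    ]
    while True:
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        explanation = reply.choices[0].message.content
        print(explanation)
        follow_up = input("Follow-up (e.g., 'simplify further', 'another example'), or Enter to stop: ")
        if not follow_up:
            break
        # Resend the whole history so the tutor remembers what was already explained.
        messages += [
            {"role": "assistant", "content": explanation},
            {"role": "user", "content": follow_up},
        ]


# Example: tutor_session("Jakob's Law of the internet user experience")
```

The point is not this particular script, but the interaction pattern: the AI has infinite patience, so keep asking until the explanation lands.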


A human teacher might despair at having to explain things again and again in different ways until you get it. AI has infinite patience.


In the past, rich people had personal tutors. (Famously, Alexander the Great received private tutoring from Aristotle as a young prince.) Even though it’s the best form of instruction, one-on-one teaching is economically infeasible with human teachers. But AI is yours alone, and it’ll teach you at exactly your optimal pace. It also provides just-in-time teaching, including the teaching of micro-topics just when you need to know them.


Alexander the Great had the world’s greatest philosopher as his private tutor growing up. With AI as a tutor, you can receive the same quality of learning and become great, too. (Midjourney)


AI as a teacher is already a great model. But the puzzle pieces are coming together for even better AI teaching:


  • Advanced voice mode from OpenAI (and similar voice models from other AI labs) is a natural and engaging learning method.

  • Animated AI avatars present a compelling visual, engaging you even more. (The halo principle says that humans pay more attention to attractive and tall speakers because those are the people who would have been good hunters in the evolutionary past and, therefore, worth listening to when trying to pick up hunting tricks.)

  • Particularly starting in 2025, AI will have a vast knowledge base, far beyond any human teacher. AI will be able to synthesize lessons from across many disciplines. This makes it particularly good at teaching material from fields where you have little current knowledge.


AI as Coach

Whereas AI as a teacher is good for learning new material, AI as a coach is for honing the skills you already have. Even in areas where your performance far exceeds what current-generation (or even next-generation) AI could deliver as a coworker, AI as a coach can help you reach new heights.


This is similar to how a gymnastics coach might be too old and fat to make the jump and stick the landing, yet can still fine-tune the details of their young athletes’ performance and make them dramatically better.


Floor exercise as envisioned by Midjourney. I’m not a gymnastics coach, but I would probably advise this athlete to keep the left leg straight. A true coach would offer more precise and detailed adjustments.


When treating AI as a coach, you perform the job the usual way, but ask AI to follow along and criticize you. Even just having AI ask you questions can help you identify weaknesses in your current approach. “Did you consider X and Y?” Maybe X is completely irrelevant to the task at hand (which is why you don’t give it to AI as a coworker), but Y is essential and something you missed.


AI as coach: it follows you on your path and gives hints on how to do better at each step. (Midjourney)


I was inspired to include the coaching metaphor in this article by a post by Hamel Husain, who discussed ways to use AI to become a better programmer. (For example, instead of asking AI to write a program, you can feed it your own code and ask, “Are there ways of refactoring this code to be more modular, readable, and robust?” Maybe some of the AI’s refactoring ideas are bad, but others will spur you to unprecedented improvements.)
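As a concrete illustration of that coaching prompt, here is a minimal sketch that feeds one of your own source files to a model and asks for critique rather than a rewrite. The OpenAI Python client and the model name "gpt-4o" are my own assumptions; the prompt wording follows the refactoring question quoted above.

```python
# Minimal sketch: use AI as a coach on your own code rather than as the author.
# Assumptions (not from the article): the OpenAI Python client and the model
# name "gpt-4o"; the file name in the example is a placeholder.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COACH_PROMPT = (
    "Act as a coach reviewing my work. Are there ways of refactoring this code "
    "to be more modular, readable, and robust? List concrete suggestions with "
    "the reasoning behind each one; do not rewrite the whole file for me."
)


def coach_me(source_file: str) -> str:
    """Return the model's critique of a source file I wrote myself."""
    code = Path(source_file).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": COACH_PROMPT},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content


# Example: print(coach_me("my_module.py"))
# Judge each suggestion yourself; some will be bad, others may spur real improvements.
```

The same pattern carries over to the UX examples below: you do the work, paste your deliverable into the prompt, and ask the AI to question it.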


In UX, you could give AI a usability test plan you wrote and ask it to list ways in which a task could be misinterpreted. (Writing unambiguous tasks is especially important for unmoderated studies, where an entire session can be wasted if the user attempts to do something different than what you wanted to study.)


Or, for a design, ask it to recommend things to remove.


Or, for information architecture, ask, “Are there ways to restructure this navigation to make it more intuitive? How can we improve the information hierarchy based on user flows?” (Many of these AI ideas may not improve the IA, which is why you don’t assign it the job of making up an IA, but it’ll give you a good deal of fodder for thought.)


Or, for UX writing, ask, “Analyze the microcopy in this design. Are there opportunities to make it clearer, more friendly, or better aligned with user intent?” (Same caveat: you may not use the AI rewrites, but it’ll point out opportunities for improvement.)


All of these coaching prompts will give you suggestions for improving your current deliverable. But more importantly, by being challenged to think in new ways, you’ll grow as a professional.


Conclusion: Multiple Roles for AI

My main recommendation in this article is that you should move beyond the obsolete metaphor of AI as nothing more than an eager but unskilled intern. While this perspective can still be useful in a few cases, the other 3 metaphors will likely be more helpful to your project and your personal growth.


I don’t recommend any one metaphor as the one to rule them all. Rather, all of these metaphors have their place, depending on the task at hand and the evolving capabilities of the coming generations of AI.


The 4 metaphors for AI discussed in this article, clockwise from upper left: Intern (rabbit), coworker (bear), teacher (owl) and coach (alligator). (Midjourney) Feel free to reproduce this image in articles or presentations, as long as you credit this article as the source.


A final point: one of the most important imperatives for professional growth is to get better at employing AI to improve your productivity and the quality of your work. These improvements come from engaging with AI in ever-more advanced ways. A limiting metaphor for how to use AI will ultimately limit you.


The more different ways you think about AI, the more different ways you will discover to use it.


Just viewing the enchanted land of AI from afar won’t accomplish anything. However, AI itself can act as your tour guide: use it in many different roles. (Midjourney)


Podcast Version of This Article

I used Google’s new NotebookLM service to automatically make a podcast (8 min. video) based on this article.


I made the podcast for two reasons:


  • To serve fans who prefer to listen, rather than read.

  • To experiment with NotebookLM. I find it rather impressive how it transformed a written article into a bantering podcast. It did take it upon itself to make up some new examples that were not in my article, but overall, it did a good job.


The two podcasters had a great time delving into my article. They don’t exist, so I had Midjourney draw them.


I added the images to the video since NotebookLM currently only creates audio. It's not a big leap to expect them to add generated video of the two podcast hosts at a later date.


The ability to use AI to automatically produce an entirely new media form (bantering spoken conversations between two presenters instead of an article that is decidedly based on written language) is a great example of AI remixing. The same source material can be repurposed in as many different media forms as you care to have made, including infographics. (I made the following infographic manually, which is probably why it’s a little boring.)


Quiz: The 4 Metaphors for Working with AI

Here are 10 questions (plus a bonus question) about this article, to help you check your understanding of these important concepts.


The answers are at the end, after the infographic. No peeking!


Take the test! (Midjourney)


Q1: What is Jakob Nielsen’s main criticism of the “AI as an intern” metaphor?

A) It is demeaning to interns.

B) It does not accurately reflect the capabilities of modern AI.

C) It is not useful for any tasks.

D) It was only relevant in the 1990s.

 

Q2: What is the key driver of AI’s increasing capabilities?

A) Moore’s Law

B) The AI scaling law

C) Quantum computing

D) The development of new programming languages

 

Q3: Which task does Jakob Nielsen believe will likely remain beyond the capabilities of AI until at least 2027?

A) Analyzing qualitative usability studies

B) Mass-analyzing qualitative data

C) Tracking content strategy metrics over time

D) Turning qualitative data into quantitative data

 

Q4: What are the two forms of tasks AI can perform as a coworker?

A) Teaching and coaching

B) Research and development

C) Independent assignments and collaboration

D) Data analysis and creative brainstorming

 

Q5: What is the best approach for design ideation with AI?

A) Let AI handle the entire process.

B) Collaborate closely with AI, alternating who takes the lead.

C) Use AI only for generating initial ideas.

D) Use AI only for refining human-generated ideas.

 

Q6: In the teacher metaphor, what is AI’s primary role?

A) Performing the task for the user.

B) Providing entertainment and distraction.

C) Evaluating the user’s performance.

D) Guiding the user in acquiring new skills and knowledge.

 

Q7: How does Jakob Nielsen suggest using AI to learn a new subject effectively?

A) Rely on AI to create a single, perfect explanation.

B) Focus on memorizing facts and figures provided by AI.

C) Start with complex explanations and work your way down to simpler ones.

D) Ask for simplified explanations, alternatives, and examples until you understand.

 

Q8: What historical figure does Jakob Nielsen use to illustrate the benefits of one-on-one instruction?

A) Albert Einstein

B) Alexander the Great

C) Leonardo da Vinci

D) Marie Curie

 

Q9: What is the primary benefit of using AI as a coach to critique your work, even if its overall performance is inferior to yours?

A) It can help you to automate tedious tasks.

B) AI can often identify errors that humans miss.

C) It forces you to justify your decisions and consider alternative approaches.

D) It can provide you with positive reinforcement.

 

Q10: What is Jakob Nielsen’s overall recommendation regarding the use of AI metaphors?

A) Avoid using metaphors altogether.

B) Develop your own unique AI metaphors.

C) Choose one metaphor and stick with it.

D) Use different metaphors depending on the task and AI’s capabilities.

 

Bonus Question: Why did Jakob Nielsen use animals to represent the 4 metaphors for using AI?

  • Intern: rabbit

  • Coworker: bear

  • Teacher: owl

  • Coach: alligator

Jakob Nielsen used 4 animals to represent the 4 metaphors for working with AI.


Infographic Summarizing This Article

Feel free to reproduce this infographic, providing you credit this article as the source.


Quiz Answers


Q1: What is Jakob Nielsen’s main criticism of the “AI as an intern” metaphor?

Correct answer: B) It does not accurately reflect the capabilities of modern AI.


Q2: What is the key driver of AI’s increasing capabilities?

Correct answer: B) The AI scaling law


Q3: Which task does Jakob Nielsen believe will likely remain beyond the capabilities of AI until at least 2027?

Correct answer: A) Analyzing qualitative usability studies


Q4: What are the two forms of tasks AI can perform as a coworker?

Correct answer: C) Independent assignments and collaboration


Q5: What is the best approach for design ideation with AI?

Correct answer: B) Collaborate closely with AI, alternating who takes the lead.


Q6: In the teacher metaphor, what is AI’s primary role?

Correct answer: D) Guiding the user in acquiring new skills and knowledge.


Q7: How does Jakob Nielsen suggest using AI to learn a new subject effectively?

Correct answer: D) Ask for simplified explanations, alternatives, and examples until you understand.

 

Q8: What historical figure does Jakob Nielsen use to illustrate the benefits of one-on-one instruction?

Correct answer: B) Alexander the Great, who had Aristotle as his personal tutor


Q9: What is the primary benefit of using AI as a coach to critique your work, even if its overall performance is inferior to yours?

Correct answer: C) It forces you to justify your decisions and consider alternative approaches.


Q10: What is Jakob Nielsen’s overall recommendation regarding the use of AI metaphors?

Correct answer: D) Use different metaphors depending on the task and AI’s capabilities.


Bonus Question: Why did Jakob Nielsen use animals to represent the 4 metaphors for using AI?

Answer: I used animals to represent the four AI metaphors because I didn’t want to draw a robot four times. Robots are usually the best way to depict AI in illustrations; see my article on how to visualize AI.
