As go the young, so goes society. Young adults were early adopters of cell phones, social media, and the internet. Now all of these technologies are universal. So how are members of Gen Z using generative AI today? How do they feel about it? What promising use cases have they discovered? And what are the implications for employers?
When it comes to gen AI, the habits, attitudes, and ideas of Gen Z are a harbinger of the future of work—and how the rest of us will feel when we get there.
In October 2025, we partnered with Gallup and the Walton Family Foundation to survey a representative sample of nearly 2,500 U.S. adults between the ages of 18 and 28. Our goal was to produce more comprehensive, precise, and unbiased insights than any prior study. (More on our methodology below.)
The data we collected reveal strikingly different patterns from those suggested in online forums and by tech companies. While OpenAI CEO Sam Altman recently claimed that “older people use ChatGPT as a Google replacement” whereas “people in their 20s and 30s use it like a life advisor,” our data tell a story of young users prioritizing productivity over social uses.
What’s more, we uncovered deep ambivalence about AI. And in respondents’ open-ended comments, we found suggestions for how employers should approach integrating AI into their workplaces.
Our survey reveals that Gen Z’s relationship with AI is more pragmatic than personal. While headlines suggest young people treat chatbots as confidants and companions, the data tell a different story.
Three out of four (74%) young adults in the U.S. used an AI chatbot at least once in the last month. This represents a considerable jump from the 58% of young adults in the U.S. who reported “ever” using ChatGPT in a February 2025 Pew survey. However, our estimates track a more recent (July 2025) study conducted by NORC at the University of Chicago showing 74% of young adults use AI to find information “at least some of the time.”
Two out of three (65%) respondents reported using an AI chatbot as a replacement for Google searches in the last month. Just over half (52%) used AI to “help [them] with [their] work.” And 46% used AI to “help [them] with writing” (which may overlap with work use).
Roughly a third (32%) of young adults reported turning to AI for help with their personal life, including “advice about relationships or life decisions.” Nearly one in four (23%) reported using chatbots “as a friend,” and one in ten (10%) said they had used an AI chatbot “as a girlfriend or boyfriend.”
This pattern contradicts recent popular reports that companionship and therapy are among the leading uses of generative AI. But it aligns with the usage logs of Claude and ChatGPT released by Anthropic and OpenAI, respectively, which both show that conversations about work outnumber those about advice or companionship.
One in six (16%) young adults in our survey said they’d used AI for tasks when they were “specifically told not to.”
By definition, cheating is breaking rules dishonestly. Leaving aside whether employers should ban the use of AI, our results indicate that many young workers are using AI anyway. Thus, the relevant question isn’t whether young employees will use AI, but rather whether they will hide it from their employers.
Our survey reveals that Gen Z’s relationship with AI is fraught. Even as they use AI extensively, they harbor concerns about its long-term effects on human capability. It may be that—as young adults observe themselves and their peers offload more and more cognitive work to AI—they wonder whether convenience today brings diminished capacities tomorrow.
Specifically, 79% of young adults expressed concern that AI makes people lazier, and 62% worried it makes people less smart.
In open-ended responses, young adults in our survey elaborated on these concerns. Three major worries stood out. Notably, concerns about AI hallucinations and misinformation (“AI is often wrong or misleading but presents information very confidently…”) were raised by only 15% of respondents. As we explain below, Gen Z anxiety about gen AI is valid, as demonstrated by separate empirical research.
Sixty-eight percent of Gen Z adults were anxious that offloading cognitive tasks to AI means missing out on the skill-building that comes with effortful engagement: “Bots do the work for people, so they don’t have to learn anything.” “The mind is a muscle like any other. When you don’t use it […] that muscle atrophies incredibly fast. Any regular use of AI to outsource thinking […] is as bad for you as a pack of cigarettes or a hit of heroin.”
Relatedly, a widely publicized study from MIT’s Media Lab concluded that using AI induced “cognitive debt.” Although the sample was small (only 54 undergraduates were randomly assigned to write an essay either with AI, with Google, or on their own), the results were dramatic: When writing with AI, EEG scans indicated decreased brain activity compared to other conditions, and the majority of students who’d used AI were later unable to quote a sentence verbatim from what they’d written.
Sixty-five percent of respondents suggested that AI discourages engaging with ideas and information in a deep or critical way: “[AI] promotes instant gratification, not real understanding.” “Chatbots allow you to access information, not process it.”
A new experiment led by Shiri Melumad from The Wharton School substantiates this concern. All participants were asked to do research on planting a garden. Half were given access to AI; half were given access to a standard Google search. AI users expended less effort on the research task and generated shallower recommendations.
Finally, 61% of respondents said they worry that AI displaces learning from people—including peers and mentors: “It replaces conversation with real people.” “You learn less when you isolate yourself with AI.”
It is no secret that, coincident with an increase in screen time, in-person socializing has plummeted over the last decade, particularly among young adults (and adolescents). The advent of AI as a co-pilot, a coach, and even a counselor may exacerbate these trends, and like many of the respondents in our survey, we worry about a dystopian workplace where more and more hours are spent “alone together.”
In sum, Gen Z worries that AI is an ever-improving substitute for human effort, human thinking, and human-to-human interaction, and research suggests that these concerns are decidedly valid.
Is deskilling an inevitable consequence of using AI? We don’t think so. Though they were in the minority, some Gen Z adults in our sample volunteered several capacity-building AI use cases, all of which resonated with our own personal experience.
AI sometimes offers novel viewpoints: “I can see perspectives outside my own circle.” “I’m seeing a perspective that isn’t mine or [in] my circles that involves […] experiences I would’ve otherwise not been able to get.”
AI can break complex problems into simpler components, each of which is more learnable: “If you use it as something that teaches you step by step, it can help you.” “[AI] can enhance your knowledge by helping you understand things by breaking them down further.”
AI can free up time for more complex and meaningful work by handling tedious tasks: “AI can make you more efficient and able to work on ‘smart’ things while it handles the tedium.” “I generally use AI to make my administrative work fast, so I have more time for my analytical work.”
In addition to these use cases, in our own research we’ve found an unexpected benefit of using AI to get work done. Sometimes AI does things better than we can, and just as when looking over the shoulder of a more capable colleague, we find inspiration in its example.
The potential for AI to upgrade the learning environment was recently revealed to us in a random-assignment experiment in which everyone completed a tutorial on evidence-based principles for professional writing and then practiced writing cover letters with or without an AI tool that enabled copy-pasting. Consistent with Gen Z worries, working with AI as a co-pilot discouraged effort—the writing task took less time, required less typing, and felt easier. However, when tested a day later (without AI), participants who’d practiced with AI had improved their writing skills more than those who’d practiced without it. How? Our follow-up experiments suggest that AI can teach by example. Showing the user a well-written letter gave participants a concrete example they could reverse-engineer.
In short, Gen Z and our own research findings are telling us that AI has the potential to improve the overall quality of the learning environment. And though how much we learn is undeniably a function of how many hours we devote to learning, it is also a function of the rate at which we learn. Put differently, it is not only the quantity, but also the quality of time that determines our progress. Hence the adage, often attributed to Stephen Covey, that some people get 20 years of experience, while others get one year of experience 20 times.
The trends we document in our Gallup survey of young adults lead to four practical recommendations for employers:
- Acknowledge AI ambivalence. Concerns that as AI gets smarter and more efficient, human users will grow less intelligent and lazier are legitimate.
- Don’t ban AI. Banning workers from using platforms like ChatGPT or Claude is unrealistic: many young workers use AI even when explicitly told not to.
- Remove fear of the unknown. In our survey, people who used AI more frequently worried less about its effects on motivation and intelligence. Why? It could be that techno-skepticism deters experimentation with AI. However, it’s also likely that experiencing firsthand “aha moments” with AI reduces anxiety.
- Highlight use cases for AI that enhance human capability rather than erode it. In particular, look for ways that AI can improve the quality of the learning environment. And consider how outsourcing tedious tasks to AI can free up time for what AI will never substitute for: authentic human-to-human interaction.
Nelson Mandela was fond of reminding the world that “the future belongs to our youth.” This has always been true—and never more relevant than today. As leaders endeavor to navigate the new world of gen AI, we recommend listening carefully to, and in some cases following the lead of, Gen Z.
METHODOLOGICAL NOTE

We believe this snapshot of how the next generation is using generative AI is more accurate than prior attempts to characterize AI use (in any age group). All such attempts rely on one of three sources of evidence.

First and most directly, usage logs have been made available by companies like Anthropic and OpenAI. While usage logs directly capture behavior, they are restricted to a single platform. Characterizing conversations with Claude or ChatGPT is interesting, but given the multiplicity of AI chatbot platforms, it does not illuminate how AI chatbots are being used in general, nor how many individuals are not using AI at all. An additional limitation is that usage logs do not reveal attitudes toward AI, either among active users or among those not using AI. This omission is important because behavior and attitudes can diverge; for example, many people spend hours on social media but express misgivings about doing so. Finally, usage logs lack information on user demographics, so they cannot distinguish how use differs by age.

Second, comments on Reddit and Quora (sometimes termed “social listening”) provide a window into concerns and questions users have about AI. However, Reddit and Quora users are by definition self-selected by issue (e.g., Reddit users who ask about using AI as a therapist) and therefore cannot be assumed to indicate how “most people” are using AI. Apart from not being representative, this type of data also lacks information on user demographics and attitudes.

Finally, there is survey data. This type of data is also imperfect, but for different reasons. Most obviously, survey respondents may deliberately fake their data or fail to accurately recall their own behavior. A less obvious but important limitation concerns survey design. Some surveys use ambiguously worded questions (e.g., asking about using AI chatbots as “a companion” without specifying what that means, or asking about “using AI” without specifying chatbots in particular, an important distinction given how pervasive the integration of AI algorithms is into nearly every aspect of life). But in our view, the primary limitation of most surveys on AI use concerns the size and representativeness of the sample.

To arrive at precise estimates of AI usage and attitudes among young adults in the U.S., we collaborated with the Gallup Organization to recruit a very large and representative sample of Gen Z adults. Weeks of cognitive interviews and pilot testing prior to launching the survey ensured comprehension of items asking about AI habits and attitudes, as well as demographic characteristics like age.
