question: Many people are concerned about AI replacing jobs. How do you think AI is being designed and used in the workplace?
answer: It is unlikely that any single occupation will experience rapid job losses. As with previous waves of rapid technological change, most new technologies modify tasks rather than eliminate entire jobs, allowing work activities to evolve over time. And technology by itself does not determine outcomes; institutional choices, business models, policies, governance, and power relationships do.
A key question is: Who will shape the use of AI in the workforce? The same AI tool can have very different outcomes depending on who is involved in making design, deployment, and governance decisions. For example, we see how AI can deskill and hollow out jobs and intensify surveillance, or augment the workforce, reduce drudgery, and improve the quality of work.
The most pressing risks are not mass layoffs, but algorithmic management and electronic surveillance, increased work intensity and loss of autonomy, and racial and gender bias in scheduling, evaluation, and discipline. These dynamics reflect earlier waves of automation, but AI expands and obfuscates control in new ways.
question: AI is being implemented in many different ways in many fields. Do you see any patterns, challenges or new opportunities emerging?
answer: I think automation will increase in many service and blue-collar tasks, leading to changes in activities and responsibilities within existing jobs. However, many of the current changes are focused on professional, technical, and white-collar occupations because large language models (LLMs) affect writing, analysis, interpretation, and communication. These are precisely the occupations in which people have more flexibility in their work responsibilities and are therefore more likely to adapt to changing work activities.
At the same time, we are overlooking the potential of AI to make invisible skills visible, especially in caregiving, education, and communication jobs. Some of the most economically and socially important jobs cannot be easily automated, such as child care, early education, elder care, health support, and guidance.
The problem isn’t that these jobs lack skills. It’s that we’re bad at recognizing, measuring, and rewarding the quality of relational, interactive work. Ironically, generative AI could help here by providing a deeper understanding of communication, pedagogy, and care practices; supporting training, feedback, and professional development; and making tacit knowledge more visible, without replacing human judgment.
question: Your research examines some of the broader changes that occur during labor and social transitions. How do you think AI can reshape social structures related to work?
answer: AI highlights the need to rethink social support related to employment. This shift may prompt people to question why their primary social supports are tied to their jobs, especially health insurance, but also retirement benefits, access to training, income, and security.
AI systems are built on collective knowledge gleaned from the texts, images, and videos of millions of workers, writers, teachers, artists, and caregivers. If AI generates widespread productivity gains from this collective inheritance, fundamental questions of fairness arise. Why not treat some of those benefits as shared social goods? This opens the door to ideas like universal AI dividends, stronger social wages, and universal access to health care and lifelong learning separate from employment.
question: How can we ensure that AI is used effectively in the workplace?
answer: We need to focus on worker-centric innovation and ask ourselves what would happen if AI were designed to improve jobs. This could include using AI to support training, mentoring, and skill development; support better scheduling, safety, and career paths; and reduce administrative burden so employees can focus on relationships and creative work.
However, achieving these types of outcomes requires not only market adoption, but also worker voice, industry standards, and public interest governance. That means treating AI governance as a public policy issue, rather than just a technical or corporate issue.
Now, we should focus on increasing worker participation in AI design and deployment decisions and updating labor standards to address algorithmic control and surveillance. We can also invest in learning infrastructure, including community-based and employer-embedded learning, as well as individual reskilling.
AI will not decide the future of work on its own. The real question is whether we treat this as a transition to new extractive technologies, or as an opportunity to rebuild institutions, norms, and governance around work in ways that center dignity, equity, and learning.
