Depending on who you ask, AI is either your most helpful colleague, a glorified search engine, or wildly overrated.
And no one seems to agree on which is correct.
Tech executives championing AI have long woven a narrative that the technology will revolutionize jobs and usher in a new industrial revolution. Skeptics dismiss it all as marketing hype, while some researchers and executives are sounding the alarm over safety concerns.
The divergence in how people view AI was perhaps never more evident than last week, when a viral essay by an AI CEO and investor claimed that AI will be able to do every job that involves sitting in front of a computer.
But there may be a simpler explanation for why people take such divergent positions: they are using very different types of AI in very different ways, yet calling it all the same thing.
“There’s a huge range of how exposed people are to technology and how much they’ve used it,” said Matt Murphy, a partner at Menlo Ventures who has led investments in AI companies such as Anthropic. “And that’s changing pretty quickly, too.”
Paid AI vs. the free version
Those who use free AI for basic tasks like making grocery lists or planning vacations may only be seeing one side of the technology. A Menlo Ventures report released last June estimated that only 3% of AI users are paying members, but Murphy told CNN he expects that to change soon.
Those who pay, however, get access to more. It's not just a chatbot generating responses; it's an agent that can handle some of the work for you, with higher usage limits.
For example, Anthropic’s Claude Cowork agent is only available on Pro plans priced at $20 or more per month. The same is true for OpenAI’s Codex coding agent.
It is this type of AI that has led to growing concerns about its impact on jobs, including a controversial argument brought up in a viral essay by Matt Schumer, an AI startup investor and former CEO.
“I tell the AI, ‘I want to build this app, and here’s what it should do and roughly what it should look like. Understand the user flow, the design, all of that.’ And it does. We write tens of thousands of lines of code,” Schumer wrote.
He further claimed that the AI was able to test the app and exercise its own preferences and judgment. And if AI could write code that well, he reasoned, it might start improving itself.
(AI researchers accused Schumer in 2024 of exaggerating the performance of his AI models. Schumer apologized at the time, telling CNN that it was the “biggest mistake” of his “professional life” and that he had learned from it.)
Some experts are skeptical that the use cases Schumer outlined are possible even on paid plans, given the ambiguity about which model Schumer used and what kind of apps the AI built. Schumer told CNN that he primarily uses OpenAI’s GPT-5.3 codex tools and is working on “moderate to high complexity apps” for testing purposes.
Still, Emily Desjou, a professor at Carnegie Mellon University who teaches a course on the use of AI in business, said the free versions of AI apps don’t paint a complete picture of what the technology can do. She said it was “misguided” to make assumptions about AI capabilities based solely on free AI services.
Oren Etzioni, professor emeritus at the University of Washington and former CEO of the Allen Institute for Artificial Intelligence, described the gap between the free and paid tiers of AI as the difference between an enthusiastic but inexperienced intern and a seasoned, hardworking one. While free AI tiers are great for creating summaries and generating content, users typically have to pay to use AI for deep research or advanced document creation.
Free AI “gives you amazingly good advice” and “engages you in amazingly sophisticated conversations, but you wouldn’t want to use that AI as your lawyer or paralegal,” he said.
But AI companies are trickling increasingly advanced features into the free tier, which is one reason why James Landay, co-founder of the Stanford Institute for Human-Centered AI, said there’s not much difference between free and paywalled AI. Case in point: Anthropic launched a new model on Tuesday called Sonnet 4.6. The company says it brings performance closer to the more advanced Opus models, which are only available on paid plans.
Tensions rise over AI and work
Software stocks plunged in early February after AI company Anthropic released a tool that tailored its AI helpers to specific industries, such as legal and financial analysis. This announcement, and Schumer’s subsequent essay, raised concerns that AI will eventually broadly automate knowledge work, just as it is beginning to streamline software engineering jobs.
But there is growing skepticism about whether AI lives up to the lofty declarations often made by technology executives with a financial stake in the technology’s success. Some studies throw cold water on the actual capabilities of AI and the speed with which it is being adopted.
Last year, a group of researchers at the Center for AI Safety and Scale AI found that leading AI models produce flawed results when assigned tasks such as data visualization or video game coding. Model Evaluation and Threat Research, an organization that tests AI models, found in July that developers using AI took 19% longer to complete coding tasks. However, that study was based on tools from early 2025.
Landay also said the essay overstated the role AI plays in software development. Although AI is a useful tool that programmers use to speed up development, it is still error-prone and does not build AI models on its own. And while experts broadly agree that AI will change many industries, he said, its coding abilities should not be taken as a sign that it will handle other professions as readily.
“[Coding] is also a logical construct, and machines are very good at testing code to see if it works,” he said. “A lot of people’s jobs aren’t structured that way.”
Lisa Airdichko
