After Emily Chariot wrote a blog post about work-life balance for her employer, she did what many people do before publishing anything online: she pasted her draft into an AI tool, hoping to get back a stronger post.
But Chariot said she spent nearly half the time she spent writing the original on reviewing the new version, rather than taking the revision at face value. There's a good reason for that. The AI added a sentence saying she had recently blocked time on her calendar to attend her daughter's school play.
“I don’t have a daughter, and I didn’t have any school play,” said Chariot, chief operating officer of Kilo Code, a remote AI coding startup, and mother of three young boys.
AI tools are helping employees complete all kinds of tasks faster than ever before, but many are discovering drawbacks. The output still needs to be carefully reviewed for errors and hallucinations, and that review eats into the very time the technology is supposed to save.
New survey data reveals just how much. Research shows that nearly 40% of the value of AI is lost through rework and misalignment, and only 14% of employees consistently see clear and positive results from the technology.
Emily Chariot is a mother of three boys. She doesn’t have a daughter, despite recent claims from an AI tool. Cirrus Gold Creative
The survey was conducted in November by Hanover Research on behalf of HR and finance software provider Workday. Respondents included 1,600 leaders and full-time employees from companies around the world with annual revenues of $100 million or more.
Workday executive Aashna Kircher said AI is still a time-saver, and that as the technology advances, checking its output will become less of a hassle. Employees, she added, will also receive more training on how to write prompts and apply critical thinking to AI-generated work.
“We believe that organizations need to empower their employees to better evaluate output and make the right decisions about how to use it,” she said.
How training can help
In the Workday study, 66% of leaders cite skills training as a top priority, but only 37% of employees say addressing AI-driven rework is a top priority at their organization. The findings also show that fewer than half of employees' job descriptions have been updated to reflect AI capabilities, leaving employees to balance the same expectations for accuracy, judgment, and risk with faster AI-driven output.
Other research suggests that AI output routinely requires human intervention and cannot be fully trusted. A global survey of 2,000 CEOs found that only a quarter of AI initiatives delivered the benefits leaders expected. The research was conducted by the IBM Institute for Business Value and Oxford Economics between February and April last year.
Similarly, an MIT study based on reviews of published AI initiatives and executive interviews from January to June of last year found that 95% of organizations reported no measurable ROI from AI.
Chariot, who works at Kilo Code from her home in Columbus, Georgia, isn't giving up on AI. She finds writing about as unpleasant as folding laundry, and said the speed at which these tools work outweighs the time spent carefully checking for errors.
“I think where people get into trouble is they just take the output of the AI agent and don't review it closely and just pass it on,” she said. “At the end of the day, you are still responsible for the output, whether it is produced by an AI agent or not.”
