It starts with an email. The senior person forwards an article to the team with one line: “Thoughts?”
The person receiving it stops what they’re doing, reads it carefully, tries to work out what is being asked of them, and formulates a considered response. More often than not, that response is, “Interesting!” And nothing comes of it. All the while, they have de-prioritized their actual work.
That dynamic has always existed. Most leaders don’t realize they’re doing it.
Now layer AI on top of that.
Instead of a short article, it’s a 12-page “strategy.”
AI is good at producing things that sound coherent. You have far less confidence in judging whether the result is correct, new, or worth pursuing. You can rerun a prompt 10 times and get something that feels sharper each time, but it may still be built on a flawed premise. And in my experience, the leaders most at risk aren’t the ones still deciding whether to engage with AI at all. They’re the ones who embrace it wholeheartedly: it sounds so certain about everything, so why not?
AI can make people feel like they’ve thought something through when they haven’t. Going back and forth with the tool creates a sense of progress and ownership. The polished output makes it easier to feel like you’ve landed somewhere solid. But the hardest part of thinking, properly interrogating an idea and determining whether it actually holds up, hasn’t happened yet.
I’ve seen detailed proposals for market “gaps” that don’t exist. Apply real-world constraints or ask a few simple follow-up questions, and the strategy falls apart. And these are often produced by smart people who have already asked the same AI, or another one, to sense-check the work.
The problem is that if you’re working outside your area of expertise, you have no way of knowing what the tool got wrong.
The output gets sent along with a message: “Please take a look.” “Would welcome your thoughts.” “Is this something we should pursue?”
And now someone has to step away from their actual job. They’re no longer just reacting to an idea. They’re trying to work out what matters in a much longer, more complex document, without having iterated through and tested the assumptions themselves.
That takes real time. It displaces more important work. Yet no one pushes back, because the request came from a senior person. So it keeps happening.
Organizations are investing significant money in AI tools and diligently measuring whether those tools deliver value. But the more troubling cost rarely shows up on a dashboard: the cumulative time talented people spend away from their actual priorities, responding to, sense-checking, or trying to make sense of output that wasn’t ready to be shared. It’s not an AI problem. It’s a leadership problem that AI accelerates.
The cost of getting this wrong isn’t measured in document length. It’s measured in how many people had to stop what they were doing to deal with it.
None of this is a reason not to use AI. But it raises the bar.
If you wouldn’t forward a rough draft from a junior team member to colleagues or clients as-is, you shouldn’t forward AI output as-is either.
Three questions still matter before you hit send. Can you explain the idea clearly in your own words? Do you know what you actually want from the other person? And is it worth someone else stopping what they’re doing to engage with it?
If the answer to any of these is no, don’t hit send.
AI does not eliminate the need for good leadership fundamentals. In an environment where everyone can produce more, judgment is what sets leaders apart.
