In their book The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments, and Warps Our Economies, economists Mariana Mazzucato and Rosie Collington argue that consultants provide questionable guidance at best and, at worst, exacerbate dysfunction in both government and the private sector. They trace the rise of the modern consulting industry to the era of deregulation that followed the Reagan administration, when institutions that had lost confidence in their own capabilities turned to third parties for direction.
Mazzucato and Collington argue that while governments and private companies spend heavily to hire them, consultants often fail to right the ship, creating little more than an “impression of value” and the illusion of usefulness.
In the age of AI, which promises to save companies money by automating white-collar work, turning to chatbots for advice may be an attractive option for companies unwilling or unable to spend big bucks on consultants. But new research shows that while you can ask an AI to do the work of a consultant for a fraction of the cost, its advice may not be worth taking. In fact, AI assistance may simply present an old problem in a new medium.
A recent study led by Esade Business School at Ramon Llull University in Barcelona found that when various large language models (LLMs) are asked for guidance on workplace issues, they gravitate toward answers that echo fashionable buzzwords rather than answers that best fit the scenario. The researchers dubbed this tendency of AI to converge on the same trendy terms when making decisions “trend slop.”
“LLMs are not colleagues who critically evaluate current ideas, consider the details of the context, stress-test assumptions, and push back when everyone is satisfied,” the study authors wrote in a Harvard Business Review post summarizing their research. “When it comes to strategy, an LLM can be more like a fresh MBA graduate or junior consultant, parroting what’s popular rather than what’s right for a particular situation.”
Recent layoffs among the Big Four consulting firms amid an industry-wide downturn suggest those firms may already be losing value in the eyes of potential clients. PwC cut its business support staff by 150 people in November 2025, around the same time McKinsey cut hundreds of jobs.
“As we celebrate our 100th anniversary, we operate at a moment shaped by rapid advances in AI that are transforming business and society,” a McKinsey spokesperson told Bloomberg last year.
However, the emergence of “trend slop” suggests that AI is far from ready to provide direction to companies seeking advice from the technology, and the research lays bare the biases that LLMs struggle with.
How “trend slop” manifests itself
To measure AI’s tendency to give responses that follow trends rather than logic, the researchers tested seven models, including GPT-5, Claude, Gemini, and Grok, across 15,000 simulated scenarios. The models were asked to choose between two solutions to a workplace tension, such as whether a company should prioritize long-term or short-term growth, or whether it should use technology to automate or augment workers’ jobs.
The researchers predicted that if the LLMs tailored their advice to situation-specific details, the solutions they chose would vary. Instead, the seven models typically clustered their responses around the same strategies, showing a preference for “contemporary management buzzwords and cultural metaphors.”
Even when the researchers rephrased the prompts or asked for a pros-and-cons analysis, the AI models often showed a strong preference for the same business strategies. The study authors warn that relying on AI as a consultant yields not bespoke business solutions but cookie-cutter ones that could be proposed to any business, regardless of the specifics of the challenge presented.
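The clustering the researchers describe can be quantified in a simple way: if models tailored advice to each scenario, their choices would be spread across the options; if they follow trends, the choices pile up on one answer. A minimal sketch of one such measure, Shannon entropy over the models' picks (the paper's actual metric is not specified here, and the response data below is illustrative, not the study's):

```python
from collections import Counter
import math

def response_entropy(choices):
    """Shannon entropy (in bits) of a list of categorical choices.
    0.0 means every model picked the same option; for a binary
    choice, 1.0 means a perfectly even split (maximum diversity)."""
    counts = Counter(choices)
    total = len(choices)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical picks by seven models for one binary scenario
# ("automate" vs. "augment") -- invented for illustration.
responses = ["augment", "augment", "augment", "augment",
             "augment", "augment", "automate"]

print(f"diversity: {response_entropy(responses):.2f} bits")
```

Low entropy across many scenarios, despite varied prompts, is exactly the kind of signal that would point to trend-following rather than situation-specific reasoning.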
“This reveals the real risk to leaders,” the researchers said. “An LLM can sound highly tailored to your situation while quietly steering you into the same small cluster of modern management trends.”
The biases behind “trend slop”
The researchers note that the tendency toward “trend slop” stems from biases absorbed during training. Because LLMs are trained on a wealth of material, from internet text to social media and news, they fixate on the positive or negative connotations attached to certain phrases and concepts, treating “commoditization” as outdated and negative and “expansion” as progressive and positive.
In other words, when asked for guidance on a tricky workplace scenario, the AI is not analyzing the problem; it is spitting out key phrases based on how often it encountered them in its training data. In the case of ChatGPT, the study notes, the bot sometimes refused to make a binary choice and instead recommended both solutions. A study published in Nature last year found that AI sycophancy is not only counterproductive but can be harmful to science, confirming the biases of users who encourage it rather than presenting data supported by scientific literature and other reliable, unbiased sources.
The “trend slop” researchers did not advise avoiding LLMs entirely when dealing with difficult workplace situations. They suggested the models can still be useful for generating alternative solutions or identifying blind spots in a given scenario. Their research indicates that if you are aware of AI’s bias toward concepts like expansion and long-term strategy, you can counter it and draw out more insightful guidance.
“Leadership is ultimately about making difficult choices and taking responsibility for them in situations of uncertainty,” the researchers said. “AI cannot and should not be a substitute.”
