While enterprise generative AI can improve productivity and free up employees to focus on higher-value work, problems arise when staff rely too heavily on these tools without fully understanding the content being generated or verifying its accuracy.
“AI slop” is a term increasingly used to describe low-quality content generated by artificial intelligence. Although the term is most often applied to content flooding the wider internet, its implications within the workplace are arguably more serious.
Without clear guidelines, tools designed to increase productivity can easily backfire. Over-reliance on generative AI can lead to “work slop,” or work that appears sophisticated but lacks accuracy, originality, and critical thinking.
Unchecked, inaccurate, or hallucinated AI content can erode trust and credibility among colleagues and customers, leading to reputational damage. Employers may also face intellectual property risks if AI-generated content is not original and is derived from other sources.
To address this issue, it is important to train employees on the proper use of AI. But organizations must also take steps to protect their employees and themselves should things go wrong.
Public generative AI tools: the dangers of misuse
One of the main threats lies in the use of public AI tools by employees. A Microsoft study found that 71% of UK employees use unapproved consumer AI tools at work, with 51% of them doing so on a weekly basis.
This widespread unauthorized use means the organization's exposure is much greater than many realize. Employees using public AI platforms may enter sensitive or legally privileged business or personal data into them, and that data can be stored, disclosed, and even used to train future models.
This poses significant risks for organizations, including loss of confidentiality and legal privilege, data leaks, and regulatory compliance violations. Such behavior not only affects the organization, but can also lead to disciplinary action, reputational damage, and even legal repercussions for individuals.
In-house enterprise AI tools also carry risks. Employers should be aware that personal or sensitive information entered into an AI system may have to be disclosed if an individual subsequently makes a data subject access request. In responding to such requests, employers may be required to disclose the personal data they hold, including the inputs to and outputs of AI tools. Such material may also come to light in future employment tribunal proceedings.
Manage employee usage
Clear policies on employee AI use are no longer optional; they are essential for companies grappling with AI's growing role in the workplace. Policies should give employees clear guidance on which AI tools they may use and for which tasks. They should make clear that AI is a partner rather than a substitute for judgment and creativity, and that human oversight is always required to check the accuracy of output. They should also specify what information may and may not be entered into such tools, and clearly set out the consequences of misuse.
Strong governance must also include training and clear employee communication. Organizations need to consider how AI will affect the quality of work and organizational culture, and ensure that its use fits into wider people strategies covering inclusion, wellbeing, behavior, and values.
Learning and development must align with your AI strategy and include risk training for both your HR team and the broader workforce. This is essential to embed responsible use, enhance business value, and prevent a culture that tolerates work slop.
Hannah Mahon is a partner in the employment, labor and pensions group, and Rebecca Denvers is a principal associate professional support lawyer, both at Eversheds Sutherland.
