- Last week, customers were furious over reports that Slack's AI and ML efforts use customer data.
- Despite what its policy wording suggests, the company says it does not use customer data to train generative AI models.
- For traditional machine learning, Slack does aggregate customer data to train systems that recommend emoji and channels.
Confusion over Slack's use of customer data for AI and ML training has forced the company to clarify its position.
Last week, Corey Quinn, chief cloud economist at Duckbill Group, called out Slack's data management policy for being incredibly poorly worded.
"To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement," the section read.
We say "read" because the section caused enough outrage that Slack has since amended the policy to make it clearer.
First, despite what the policy's wording implies, Slack says its AI and ML models are only trained to do things like recommend emoji reactions and channels relevant to users, or use timestamps to suggest archiving inactive conversations.
"We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any kind of customer data," Slack explained, adding that its traditional ML models use anonymized, aggregate data and do not access message content in DMs, private channels, or public channels. Customers can opt out, but the company maintains the models pose no risk of data exposure and simply deliver a better product experience.
The company has now explained how it uses AI and ML and what the models do, but given how vastly different the two explanations are, people remain on the defensive.
Slack also said it uses third-party large language models (LLMs), and that those LLMs are not trained on customer data. The company further emphasizes that it maintains control over all data passing through its platform.
However, one of the key points of contention is that customers must manually opt out of their data being used to train the non-generative machine learning models. This involves sending an email to the Slack Customer Experience team, and the request is processed manually. As many have pointed out, this process should be opt-in, not opt-out by default.
Opting out doesn't turn the features off; it only prevents Slack from adding your data to the aggregate sources used to train its machine learning models. The company warns that if you opt out, "your unique patterns of usage will no longer be optimized."
I understand that Slack uses data to improve the platform, but throwing AI into the mix and treating acceptance of the terms as handing over the keys to the kingdom, with no further notice, is what made people furious.
When it comes to generative AI, Slack is adamant that it does not use customer data to train its models.
"Slack does not train LLMs or other generative models on Customer Data or share Customer Data with LLM providers," the company states. Generative AI is also a premium add-on for Slack, so these data practices may not apply to all Slack users in the first place.
With AI being the latest buzzword, it may be worth taking another look at the policies, terms of service, and user agreements you've accepted, to make sure your data isn't being used to pay someone else's salary.