Salesforce's Slack has responded to criticism from users outraged that its privacy principles allow the messaging service to slurp customer data for AI training, insisting that the data never leaves the platform and is not used to train "third-party" models.
The app maker said its ML models operate at the "platform level" for features such as channel and emoji recommendations and search results, and that the principles have been updated "to better explain the relationship between customer data and generative AI in Slack."
It said it wanted to clarify the following:
The privacy principles were overhauled in 2023 to include language stating: "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack."
Yes, that's right: the principle that so angered customers permitted Slack to analyze messages, and indeed pretty much anything else dropped into a Slack channel, to train its models. The implications were far-reaching, and users who saw the penny drop were vocal in their criticism.
Slack has acknowledged that customer data is used in its global models, but has maintained that data does not leak between workspaces. Messages within a workspace, however, were apparently fair game.
The principles have since been tweaked slightly and now read: "Our systems analyze customer data to develop non-generative AI/ML models for features like emoji and channel recommendations."
A Slack spokesperson told The Register: "Please note that we have not changed our policies or practices; we have simply updated the language to provide greater clarity."
The data slurping is also turned on by default, which could raise the eyebrows of regulators. To turn it off, Slack requires workspace owners to email its customer experience team requesting an opt-out, and there is no indication of how long that request will take to process for customers who do not want their data used to train Slack's global models.
Opting out means customers can still enjoy the benefits of globally trained models without contributing their own data to them.
The Register has asked Slack why it did not choose an opt-in model, and will update this article if we receive an explanation.
Slack says it uses this data to better parse queries, assist with autocomplete, and come up with emoji suggestions.
According to the company's privacy principles, “Thoughtful personalization and improvements like this are only possible if we study and understand how our users interact with Slack.”
Over on Threads, a person claiming to be an engineer at Slack said the company does not train LLMs on customer data. Aaron Maurer, whose LinkedIn profile lists him as head of ML and AI at Slack, wrote: "The org-level policy is that we do not train LLMs on customer data, as we have stated in numerous locations, and we're adding another: https://slack.com/help/articles/28310650165907-Security-for-Slack-AI." As a general rule, however, Slack's terms of service would appear to permit it.
Matthew Hodgson, CEO of Element, told The Register it was "absolutely surprising" that Slack was "offering to use customers' private data to train its AI."
“It’s bad enough that cloud vendors like Slack and Teams have access to unencrypted data, but feeding it into an opaque and unpredictable LLM model is scary.”
For context, Slack isn't the only service that uses customer data to train its models; Reddit cozying up to OpenAI and feeding its forum posts into ChatGPT is another example. Still, customers paying a subscription to use Slack could be forgiven for being a little surprised to learn that, unless they opt out, their data is being used as training material for global models.
Slack's changes date back to 2023. The uproar highlights users' need to see what their data is being used for as AI hype continues to build in the tech industry. ®
