- WeTransfer users were furious when updated terms of service seemed to imply the company would use their data to train AI models.
- The company moved quickly to assure users that uploaded content is not used for AI training.
- WeTransfer has rewritten the clause in clearer language.
The file-sharing platform WeTransfer has reassured users that it does not intend to train AI models on uploaded files, after updated terms of service suggested that anything sent through the platform could be used to create or improve machine learning tools.
The problematic language, buried in the terms of service, stated that by using WeTransfer, users granted the company the right to use their data for the purposes of improving the "service" or new technologies or services, such as improving the performance of machine learning models that enhance the content moderation process, in accordance with the privacy and cookie policy.
The broad, sweeping wording around machine learning appeared to suggest that WeTransfer could do whatever it wanted with users' data, with no qualifying language to provide safeguards or dispel doubt.
Perhaps unsurprisingly, many WeTransfer users, including many creative professionals, were upset by what this seemed to imply. Some posted plans to switch from WeTransfer to rival services, while others warned people to either encrypt their files or fall back on old-school physical delivery methods.
[Embedded tweet, July 15, 2025: a user announcing they will stop using @wetransfer as of August 8th over the belief that the company would own transferred content to power AI — pic.twitter.com/syr1jnmemx]
WeTransfer took note of the fierce backlash over the language and quickly tried to put out the fire. The company rewrote the TOS section and published a blog post addressing the confusion, repeatedly promising that user content will not be used without permission, especially not to train AI models.
"From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We've since updated the terms further to make them easier to understand," WeTransfer wrote in its blog post. "We've also removed the machine learning reference, as it's not something WeTransfer uses in connection with customer content and may have caused apprehension."
While the new text still grants WeTransfer a standard license to improve the service, it omits any reference to machine learning and instead sticks to the familiar scope needed to run and improve the platform.
Clear privacy
If this feels like déjà vu, it's because something very similar happened about a year and a half ago with another file-transfer platform, Dropbox. Changes to the company's fine print implied that Dropbox was taking content users uploaded to train AI models. After a public outcry, Dropbox apologized for the confusion and fixed the problematic boilerplate.
That it has happened again in such a similar way says less about the convoluted legal language software companies use than about users' knee-jerk distrust of these companies when it comes to protecting their information. Where there is uncertainty, assuming the worst is the default posture, and companies must make an extra effort to defuse those tensions.
That sensitivity to the appearance of data misuse runs especially deep among creative professionals. In an age when tools like DALL-E, Midjourney, and ChatGPT are trained on the work of artists, writers, and musicians, the stakes are very real. Between suspicion of corporate data use, lawsuits, and boycotts by artists over how their work is used, tech companies will likely want to offer the kind of assurance WeTransfer did, and offer it early.
