WeTransfer makes it clear that it doesn't train AI on your files

Applications of AI


WeTransfer was forced to respond this week after triggering a major backlash from users who believed that a change in its Terms of Service (TOS) allowed the service to access users' files to train AI.

"We do not use any form of machine learning or AI to process content shared via WeTransfer, nor do we sell any content or data to third parties," a WeTransfer spokesperson told BBC News on Tuesday.

WeTransfer issued the clarification after users noticed recent changes to its TOS page, which originally stated that the following policy would come into effect in August (as captured by the Wayback Machine on July 14, 2025).

You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content, including to improve the performance of machine learning models that enhance our content moderation process, in accordance with the Privacy &amp; Cookie Policy.

The language appeared to imply that WeTransfer could use users' data and files to train either its own or third-party AI models. User anger spread quickly, particularly among independent artists who rely on WeTransfer to send large files such as film footage and music.


Users took to social media, vowing to call attention to the changes and switch to other services. The use of user content as AI training data is a controversial issue that has become increasingly common as companies race to develop their own AI models and capabilities, especially since these tools can automate creative work and are already affecting the job market. Users are wary of newly updated terms of service because they could mean signing their data over to AI models built to automate them out of work.

Similar confusion has occurred on other platforms, such as a policy update for CapCut, and Adobe had to clarify policy changes last year that made it sound as though it was training its Firefly model on creators' content without permission. Meanwhile, companies like Google and Meta do rely on user data to train their models.

In any case, WeTransfer has since changed the language in the content section of its policy, after confirming to BBC News that the previous update "may have caused confusion for customers." The company further clarified to the outlet that the original language was intended to cover the possibility of using AI to improve content moderation, with the aim of identifying harmful content.

The section now reads:

You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy &amp; Cookie Policy.

In both versions of WeTransfer's TOS, the company says users own and retain all rights to their work, which should clear up any confusion that WeTransfer was trying to take ownership of it. WeTransfer does hold a license to users' files, but for content moderation purposes rather than for training AI models.

Topics: Artificial Intelligence, Privacy
