WeTransfer Denies Using User Files for AI Training After Backlash

by Bustop TV News

File-sharing service WeTransfer has clarified that it does not use user-uploaded content to train artificial intelligence models, following a wave of online criticism over recent changes to its terms of service.

Concerns were raised when some users interpreted a newly added clause as granting the company permission to feed uploaded files into machine learning systems. However, a company spokesperson told BBC News:

“We don’t use AI or machine learning to analyze content shared on our platform, nor do we sell user content or data to third parties.”

The updated terms had originally included language about using content to improve the performance of AI models for content moderation — such as identifying harmful material. This led to confusion among users, some of whom feared their files might be used for broader AI training purposes.

To address the backlash, WeTransfer has revised the language in its terms of service, aiming to make it clearer. As of 8 August, the new clause (Section 6.3) states:

“You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”

Despite the clarification, the initial phrasing raised alarms in the creative community. Several artists and actors expressed concerns on social media platform X (formerly Twitter), with some suggesting they might switch to alternative file-sharing tools.

WeTransfer stressed that the purpose of the clause was limited to enhancing internal tools, such as moderation systems, and not for external data exploitation.

This incident echoes similar concerns raised against Dropbox in late 2023, when users suspected it of using uploaded content for AI training. The company later denied doing so, but the public response highlighted growing distrust in tech firms’ data practices.

Mona Schroedel, a data protection lawyer at Freeths, told BBC News that changes to service terms often carry hidden risks, especially as many tech companies look to leverage user data to fuel AI development.

“Companies are eager to benefit from the AI boom, and data is the fuel,” she said. “What might seem like a routine update can quickly become a method for expanding data use under the guise of service improvements.”

She added that users who rely on these services for professional work may find themselves in a bind when terms change unexpectedly, often with little choice but to accept them.
