SoundCloud CEO Eliah Seton, who’s responded to the ongoing controversy about his platform’s AI training policies. Photo Credit: SoundCloud
The SoundCloud head addressed the fiasco in a more than 650-word open letter today. This lengthy follow-up arrives on the heels of criticism from artists as well as a few not-so-helpful statements from the company.
As we previously recapped, an early 2024 update to SoundCloud's terms (which was presumably implemented without email notification and only recently entered the media spotlight) compelled artists to "explicitly agree that" their music uploads "may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies."
The company subsequently claimed it hadn’t actually trained or allowed training on protected works – albeit while leaving the door open for related initiatives down the line.
In keeping with prior remarks from SoundCloud, which owns Musiio, the exec described the relevant TOS clause as an effort to power "smarter recommendations, search, playlisting, content tagging, and tools that help prevent fraud."
“More broadly,” the CEO deflected, “we use AI to identify emerging talent, personalize the platform experience, and support real-time customer service, all designed to support human artists and engage real fans.”
Getting to the heart of the matter, Seton then acknowledged that the February 2024 TOS section “was too broad and wasn’t clear enough.”
Instead, the adjusted TOS, which Seton disclosed in full, would seemingly give artists the chance to expressly consent to certain types of AI training beforehand. Nevertheless, even the modified language doesn't close the aforementioned training door altogether.
“We will not use Your Content to train generative AI models that aim to replicate or synthesize your voice, music, or likeness without your explicit consent, which must be affirmatively provided through an opt-in mechanism,” the proposed alteration reads in part.
"For the avoidance of doubt," the important text continues, "neither SoundCloud nor any third party is allowed to use, copy or reproduce any Content delivered to the Platform under separate agreements…for the purposes of informing, training, developing (or as input to) artificial intelligence technologies without authorization from the applicable rightsholders."
The proposed revision hasn't quieted all critics, though, including Ed Newton-Rex.

"SoundCloud have changed their terms on AI in response to user backlash, but the change doesn't go nearly far enough," Newton-Rex wrote.
“Their new terms will say they won’t train gen AI models that replicate your voice / style,” he continued. “But they leave the door open to the much more likely gen AI training: models trained on your work that might not directly replicate your style but that still compete with you in the market.
“If they actually want to address concerns, the change required is simple. It should just read ‘We will not use Your Content to train generative AI models without your explicit consent,’” the former Stability AI exec proceeded.
The way Newton-Rex sees things, if SoundCloud decides to leave the fresh terms “unchanged” from here, “we can only assume” the new approach “is intentional.”
Suffice it to say that the multifaceted issue won't be going away anytime soon. For SoundCloud in particular, future AI announcements, whether pertaining to training or not, will be closely scrutinized. And it remains to be seen how different DSPs will approach training moving forward.