We’ve previously covered how streaming services like Spotify and YouTube are being overrun by AI music. Some estimates forecast that “AI artists” could pull in a staggering $4 billion a year by 2028, up to a quarter of all music streaming revenue. If you think that’s a problem, well, Spotify also thinks so. Sort of.
This month, Spotify revealed that it had removed a whopping 75 million tracks from its platform in the past year. The reason was a flood of AI-generated spam and deceptive uploads.
“Spam tactics … have become easier to exploit as AI tools make it easier for anyone to generate large volumes of music,” Spotify said in a statement.

AI’s Double-Edged Sword
Musicians have always embraced technology — from electric guitars to Auto-Tune. But generative AI is different. With a few clicks, anyone can churn out endless songs. Some are creative experiments; most are just noise. Worse, scammers are using AI to impersonate real artists, flood algorithms, and siphon royalties.
“At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners,” Spotify said in a policy update.
Spotify isn’t trying to ban AI entirely, however. On the contrary, executives say they believe AI can be a powerful creative tool if used responsibly. The real concern is when AI is deployed at scale to exploit the system.
“We’re not here to punish artists for using AI authentically and responsibly,” said Charlie Hellman, Spotify’s VP and Global Head of Music, as per TechCrunch. “But we are here to stop the bad actors who are gaming the system.”
So What’s Spotify Actually Doing?
Tech companies in general have taken a laissez-faire attitude toward AI: as long as the content keeps flowing and the views and listens keep coming, all is well. Except, of course, all is not well. AI content often impersonates real people or bands, or invents fictitious acts to trick listeners. Spotify’s new crackdown is designed to restore trust in the music it streams.
First, it’s launching a music spam filter. This system will identify patterns of abuse (mass uploads, duplicate tracks, SEO hacks, or ultra-short audio designed to game payouts) and tag them before they pollute the recommendation algorithms. Basically, if you try to game the system, you’ll likely get flagged. If flagged, Spotify will stop promoting the songs and, in many cases, remove them.
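Spotify hasn’t published how the filter works under the hood, but the patterns it names suggest simple, stackable heuristics. Here’s a minimal sketch in Python of what such checks might look like; the signals and thresholds are assumptions for illustration, not Spotify’s actual system.

```python
# Illustrative sketch of upload-spam heuristics. Spotify has not published its
# filter's actual signals or thresholds; everything below is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Upload:
    title: str
    duration_sec: float          # track length in seconds
    uploads_today: int           # tracks this account pushed in the last 24 hours
    duplicate_fingerprints: int  # near-identical copies already in the catalog

def spam_flags(u: Upload) -> list[str]:
    """Return the abuse patterns this upload matches, if any."""
    flags = []
    if u.uploads_today > 100:        # mass uploads
        flags.append("mass-upload")
    if u.duplicate_fingerprints > 0: # duplicate tracks
        flags.append("duplicate")
    if u.duration_sec < 31:          # barely long enough to register a royalty-eligible play
        flags.append("ultra-short")
    if len(u.title.split()) > 15:    # keyword-stuffed, SEO-style title
        flags.append("title-stuffing")
    return flags

track = Upload(
    title="rain sounds sleep relax study chill lofi ambient white noise calm focus zen spa deep meditation",
    duration_sec=30.5,
    uploads_today=400,
    duplicate_fingerprints=12,
)
print(spam_flags(track))  # ['mass-upload', 'duplicate', 'ultra-short', 'title-stuffing']
```

In practice, a flagged track would presumably be routed for review rather than deleted on the spot, which matches Spotify’s stated approach of demoting first and removing when warranted.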
Second, Spotify is strengthening its rules around vocal deepfakes. From now on, AI-generated voices that imitate real artists are allowed only if the original artist gives explicit permission. This strikes much closer to the heart of what music actually is.
“Unauthorized use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work,” the company stated.
Spotify is also testing new tools to stop “profile mismatches”—a practice where someone uploads a song, fake or otherwise, under the name of a better-known artist to ride their fame. This kind of impersonation has become easier with AI, and Spotify now says it will allow artists to report these issues even before a song is published.
How Big Is the Problem?
In 2023, Spotify quietly changed its royalty rules. Now, a song must be streamed more than 1,000 times before earning money. The update was partly designed to combat scammers who uploaded thousands of short AI tracks that gamed the 30-second play threshold for royalty payouts.
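The economics of that scam are easy to sketch. The per-stream rate below is an illustrative assumption (real payouts vary by market and subscription tier), but it shows why a 1,000-stream floor guts the long-tail strategy:

```python
# Back-of-the-envelope math for the scam the 2023 royalty change targets.
# PAYOUT_PER_STREAM is an illustrative assumption; real rates vary widely.
PAYOUT_PER_STREAM = 0.003  # USD per royalty-eligible stream (assumed)
TRACKS = 10_000            # short AI-generated tracks uploaded by one operation
STREAMS_PER_TRACK = 900    # bot plays of 30+ seconds, spread thin to avoid detection

old_rules = TRACKS * STREAMS_PER_TRACK * PAYOUT_PER_STREAM
# Under the new rules, a track with fewer than 1,000 streams earns nothing.
new_rules = old_rules if STREAMS_PER_TRACK > 1_000 else 0.0

print(f"old rules: ${old_rules:,.0f}")  # old rules: $27,000
print(f"new rules: ${new_rules:,.0f}")  # new rules: $0
```

Spread across enough tracks, even a fraction of a cent per play added up. The threshold forces scammers to concentrate streams on fewer tracks, where the same bot activity is far easier to spot.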
Even with these efforts, the scale of the problem is daunting. Spotify says the 75 million spam tracks removed either never made it online—caught by filters—or were taken down after the fact. That number is nearly three-quarters the size of its active catalog of 100 million songs. Yet Spotify insists this flood of junk hasn’t changed the way people actually listen to music.
“Engagement with AI-generated music on our platform is minimal and isn’t impacting streams or revenue distribution for human artists in any meaningful way,” the company said, as per The Guardian.
To address growing confusion about what’s real and what’s not, Spotify is adopting a new industry standard for music credits developed by DDEX, a music-industry metadata standards body. The standard allows artists, labels, and distributors to state clearly whether and how AI was used in a song’s creation: in the vocals, the instruments, or post-production.
“This industry standard will allow for more accurate, nuanced disclosures,” Sam Duboff, Spotify’s Global Head of Marketing and Policy told TechCrunch. “It won’t force tracks into a false binary where a song either has to be categorically AI or not AI at all.”
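For a sense of what “nuanced” means here, a disclosure might look something like the sketch below. To be clear, this is not the real DDEX schema (the actual standard is an XML metadata format with its own vocabulary); the field names are invented purely to illustrate the per-component, non-binary idea Duboff describes.

```python
# Hypothetical AI-usage disclosure, loosely inspired by the per-component idea
# behind the DDEX standard. All field names here are invented for illustration.
disclosure = {
    "track": "Example Song",
    "artist": "Example Artist",
    "ai_usage": {
        "vocals": "none",                  # fully human performance
        "instrumentation": "ai_generated", # e.g. AI-generated backing stems
        "post_production": "ai_assisted",  # e.g. AI-assisted mastering
    },
}

# The point of per-component credits: the track is neither
# "categorically AI" nor "not AI at all".
for component, usage in disclosure["ai_usage"].items():
    print(f"{component}: {usage}")
```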
The disclosures are optional, at least for now. But Spotify says they’re a first step toward a more transparent streaming landscape, one where listeners can better understand what they’re hearing.
“This change is about strengthening trust across the platform,” Spotify noted. “It’s not about punishing artists who use AI responsibly.”
The Next Act for Streaming
AI is threatening to upend the music industry entirely (among many other things).
Spotify’s recent moves echo wider tensions and hint at much greater turmoil in the music industry. Streaming rival Deezer recently revealed that nearly 30,000 of the tracks uploaded to its platform each day, roughly one in five, are now fully AI-generated. And that number is growing rapidly.
Meanwhile, experimental projects like Velvet Sundown, a self-described “synthetic music project,” continue to find audiences. Spotify hasn’t taken the band down because it hasn’t broken any rules, but the case has fueled public pressure for mandatory AI labeling. On YouTube, fake lo-fi and instrumental playlists are already commonplace.
Spotify’s executives say they don’t create or own any of the music on their platform. But as gatekeepers to hundreds of millions of listeners, their policies shape the industry in powerful ways. The challenge now is to strike a balance: protecting artists, informing audiences, and preserving room for innovation, without letting the slop take over.