Recently on TikTok, various versions of well-known songs have been going viral, but the singers aren't the ones holding the mic. Instead, it's other famous artists singing through the power of AI. Every sample is computer-generated to sound just like the singer it imitates.
TikTok users may have seen the viral videos of famous musicians covering other artists' songs. The catch is, the artists aren't actually singing. Instead, an artificial intelligence version of each artist's voice makes it appear that the musician is singing along to another well-known track. Michael Jackson seems to have risen to cover a track by The Weeknd, while Drake covers Colbie Caillat.
Of course, the artists aren't the ones holding the power; it all comes from AI and one college student in Florida. His work is causing quite the storm, generating millions of views. His first video to blow up featured Drake, Kendrick Lamar and Ye (Kanye West) singing "Fukashigi no Karte", a theme song from an anime series. Not something you'd typically expect to hear from them, but it got more than 12 million views.
Ever since that clip took off, the student, Chavez, has been creating videos on a regular basis, and the views keep pouring in. He pulls it off by playing a cappella versions of songs through an AI system built on models trained to sound like well-known music artists. It's working for him as a creator too, because TikTok goes crazy for the results. With the videos being quick to make, Chavez is on to something.
Should AI music be removed from platforms?
"It's honestly kind of scary how easy these things are to do," Chavez has said. At the moment, all of his videos are still live on TikTok, but it might not always be this way. Currently, labels and well-known artists don't have the power to stop him or others offering a similar service. If they ever gain it, he'll be instructed to remove the videos or face the consequences.
Music companies are trying their best to get AI-generated music removed from streaming services, citing the threat of copyright infringement. However, the counterargument stands: it isn't actually the artist singing, so is it infringement at all? Creators aren't taking the artist's recordings and using them for their own gain; they're using AI-generated sound. Yes, it sounds like the artist, but as it isn't the artist, who is in the right here?
Their concern is that at some stage AI will be able to release tracks that haven't been published by the artists themselves. If an AI gets hold of unreleased music and shares it with listeners, who owns the rights to it? Even if the track is pulled, what can the artist do? It has already been heard, undoing all the hard work. Adding AI into the picture means the artist could lose the opportunity to profit from their own music.
Where do the royalties go?
Who earns the royalties for a track that sounds just like the artist themselves? Surely putting someone's name to music is enough to claim copyright, even if it isn't their voice? You can't promote a track as Drake if Drake isn't on it; surely that's misleading, a misrepresentation? There seem to be a lot of grey areas, and it doesn't seem anyone was prepared with answers to these questions.
After all, at the rate AI technology has taken off, no one has had the time to figure out all the rights and wrongs; it came about and took over the world almost overnight. With various models popping up in different areas and niches, it's hard to write a rule book for what can and can't be done. The models can access almost everything, so how do you monitor what the people building them are allowed to use them for?