Deepfake AI and the Music Industry: Creating Music With AI


Article by: Amanda Nechesa

Publication date:

In 2023, a deepfake AI audio track called Heart On My Sleeve, featuring the voices of popular American artists Drake and The Weeknd, went viral on the internet. The song had everything fans of both artists would expect from such a collaboration: a fusion of hip-hop and R&B beats, the swanky lyricism of Drake and the deep but witty delivery of The Weeknd. It even featured a Metro Boomin beat at the beginning.

[Image: AI and music. Made on Canva.]

Basically, Heart On My Sleeve, which had been released on Spotify by an artist named GhostWriter but has since been taken down, checked every box to convince fans it was real. Except that, of course, it was not. The entire song was generated by deepfake AI.

This replication of human qualities by a machine is unsettling. The dangers of deepfake AI are many: impersonation, identity theft and the spread of misinformation, to mention but a few. And in the music industry, the fact that AI can clone your voice down to the inflections of your singing, then generate lyrics in exactly your style and on the topics you sing about, is a clear violation of your voice as an artist.

Around the time Drake and The Weeknd’s fake AI song was released, other AI-generated songs started to emerge on the internet. There was Frank Sinatra singing the hip-hop song “Gangsta’s Paradise”, Johnny Cash singing the pop single “Barbie Girl”, and a Nigerian producer called EDashMan even started producing AI covers of American artists like Kanye West singing Naija bangers from Nigerian Afrobeat artists Burna Boy, Asake and Wizkid.

AI-generated music quickly became an internet sensation, but beneath the novelty, one has to step back and ask: what does this mean for the music industry?

Deepfake AI and the music industry

There is a popular meme about AI going around social media. The meme is a quote that loosely reads:

The problem with pushing all things AI is that it’s going in the wrong direction. I want AI to do my laundry and taxes so I can create art and poetry, not for AI to create my art and poetry so that I can do my laundry and taxes.

When it comes to the intersection of AI and creativity, this statement could not be more right. And when deepfake AI is used to create entirely new music by mimicking artists' voices, it becomes a question of replacing human talent. To what end? Most people wonder, and for artists, that wonder quickly becomes a worry that their contribution will become insignificant within a few decades.

Upon the release of deepfakes of his voice, Drake took to social media to say it was the “final straw”. He is not the only artist who shares this sentiment. Frank Ocean, an American R&B artist whose voice has suffered a similar deepfake fate, has now taken to carrying around physical copies of his music to prevent it from being leaked or regenerated by AI.

This same worry has spilt over to major music labels. In April 2023, Universal Music Group (UMG), the world’s leading music company, sent letters to music streaming platforms like Spotify and Apple Music asking them to block artificial intelligence platforms from training on the melodies and lyrics of their copyrighted songs.

These letters were followed by a petition to cease the use of AI to infringe upon and devalue the rights of artists, signed by several major UMG artists, including American pop sensations Billie Eilish and Nicki Minaj, as well as the estate of Frank Sinatra.

However, it’s important to note that these sentiments were shared last year. In 2024, with the realisation that AI is here to stay and growing bigger than ever, some of them have changed. A case in point is, once again, the American artist Drake, who, as previously noted, was against the use of deepfake AI to create music.

But in May 2024, during his highly publicised rap beef with rapper Kendrick Lamar, Drake used deepfake AI to recreate the voice of the late renowned rapper Tupac Shakur in a song dissing his nemesis. The song has since been removed from streaming platforms at the request of Tupac’s estate, but this is not the first time AI has been used to recreate the voice of a deceased artist.

The famous ‘60s band The Beatles also released their final song, Now and Then, featuring their late bandmate John Lennon, and it was all possible thanks to AI. AI was used to isolate John Lennon’s voice from an old recording, and the surviving band members then added their own new parts, which, according to band member Paul McCartney, made it sound almost as if his long-gone friend were playing in the room with them once again.

Universal Music Group, together with instrument maker Roland, has also had a change of heart when it comes to integrating AI with music. Recently, UMG and Roland announced a partnership that seeks to explore innovation at the intersection of music and technology.

Creating with AI 

As the saying goes, there are two sides to every coin, and deepfake AI in music is no exception. On one side are the artists concerned about the illegal replication of their voices; on the other are artists who are starting to recognise the opportunities AI can offer and choosing to work with it instead of against it.

One artist in the latter camp is Eclipse Nkasi, a Nigerian producer and creative entrepreneur. Formerly the Head of Promotions at the prominent Nigerian record label Chocolate City Music, Nkasi released the first AI-generated Afrobeats album, Infinite Echoes, on 1st May 2023.

The album contains nine tracks, including a spoken-word intro and two interludes. It features Eclipse Nkasi, producer and sound engineer David Wondah, actor and singer Nnamdi Agbo, and an AI-generated artist, Mya Blue. Listening to the album, whose songs mix Pidgin English and Igbo, you cannot tell that it was made entirely using AI.

In a YouTube documentary, Nkasi and his friends take a deep dive into how they made Infinite Echoes, showing snippets of how they prompted AI to create a storyline, lyrics in their native language and a spoken-word piece, and how they built the entirely AI-generated artist Mya Blue. But the creation process, fascinating as it is, is not the major takeaway here.

As a tech enthusiast and a producer, Nkasi knows the struggle that comes with producing an album. Liaising with artists, paying them and managing them takes a lot of time and money, and that is before the actual production even begins. With Infinite Echoes, however, Nkasi was able to produce a complete project in only three days, and it cost him as little as $500.

In the documentary, he speaks as though he may have just found a loophole in how AI and creativity can intersect.

“The biggest conversation in AI today as far as I am concerned is surrounding the ethical use of AI. I think that also stems from the reason why creatives have struggled for a long time. When it comes to art, people have some sort of sentimental attachment to how they create things. There is a need for originality for most people. There is a need to create art that feels unique to you. And these are the things that have begged the question: what is originality? What is creativity? Isn’t it just people being inspired by other pieces of content that they have consumed over time and then finding a way to create something?

In essence, I think AI is based on a similar system where it’s processing a lot of information based on pre-existing data and then interpreting that and saying, hey, based on these similarities and these differences, here is something that is sufficiently new. And I don’t think that is different from what we do as humans.” 

However, in the same documentary, Nkasi also acknowledges the risks of using artists’ copyrighted material to create work that is then monetised. He admonishes people who rip off creators by producing covers of artists singing songs they never actually made.

What sets Eclipse Nkasi apart is that, in creating his album Infinite Echoes, a name the AI also came up with, he has found a way to use AI as a tool to make something genuinely new, steering his audience in a new direction. Creating with AI, he calls it.

But while Infinite Echoes might be the first AI-generated Afrobeats album, it is not the first time an artist has used AI in an entirely new way for their art. American artist Holly Herndon created a vocal deepfake of her own voice called Holly+, allowing anyone to transform their voice into hers. With Holly+, Herndon has surrendered her voice entirely to AI, paving the way for anyone, herself included, to explore the limits of what AI can reach when it comes to creativity.

Whether those limits will stretch beyond comprehension is a bridge we can cross when we get there. For now, all we need to do is sit back, put some music on, and connect with what we are listening to, whether it was created by a human or an AI.