Imagine dropping a highly anticipated new single, only to watch artificial intelligence instantly hijack your voice and flood the internet with unauthorised, studio-quality knockoffs. As the music industry battles a growing deepfake crisis, a team of researchers has developed a new weapon for artists: an invisible digital shield that forces AI cloning models to choke on the audio and spew out distorted noise.
Artificial intelligence models can now clone a human voice using just a few seconds of audio. For musicians, this growing crisis goes beyond intellectual property rights; it can lead to significant lost revenue and take a heavy emotional toll on artists who pour their hearts into their work.
To combat this, researchers at Binghamton University, State University of New York, in collaboration with the startup Cauth AI, have developed “My Music My Choice” (MMMC). This new digital safeguard lets artists protect their original songs from generative AI cloning.
My Music My Choice protects tracks by adding small, imperceptible changes directly to a song’s waveform:
- To human ears: When a person plays the song back, the vocals sound exactly the same as the original.
- To artificial intelligence: When an AI model attempts to replicate the protected song, it only produces distorted noise.
From the AI’s perspective, these slight, targeted shifts make the audio sound like an entirely different vocal track, causing the voice-cloning system to fail.
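The article doesn’t detail MMMC’s algorithm, but safeguards like this are typically built as adversarial perturbations: a tiny, bounded change to the waveform is optimised to confuse a cloning model’s internal representation of the voice. The sketch below illustrates that general recipe only; it is not MMMC’s implementation. `StubSpeakerEncoder` is a hypothetical stand-in for a real voice-cloning model’s speaker encoder, and `cloak` runs a simple projected-gradient search under an inaudibility budget.

```python
# A minimal sketch of waveform "cloaking" via adversarial perturbation.
# NOTE: this is NOT the MMMC implementation, only an illustration of the
# general technique. StubSpeakerEncoder is a placeholder for the speaker
# encoder inside a real voice-cloning model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubSpeakerEncoder(nn.Module):
    """Hypothetical stand-in for a voice-cloning model's speaker encoder."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=400, stride=160)
        self.proj = nn.Linear(32, 128)

    def forward(self, wav):                       # wav: (batch, samples)
        h = F.relu(self.conv(wav.unsqueeze(1)))   # (batch, 32, frames)
        h = h.mean(dim=-1)                        # average over time
        return F.normalize(self.proj(h), dim=-1)  # unit-norm voice embedding

def cloak(wav, encoder, eps=0.002, steps=100, lr=1e-4):
    """Find a small perturbation delta (||delta||_inf <= eps) that pushes
    the track's speaker embedding away from the original, so a cloning
    model 'hears' a different voice while a human hears the same song."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)                   # attack the input, not the model
    with torch.no_grad():
        target = encoder(wav)                     # embedding of the clean vocal
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((wav + delta).clamp(-1.0, 1.0))
        # Maximise embedding distance => minimise cosine similarity.
        loss = F.cosine_similarity(emb, target, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)               # keep the change inaudible
    return (wav + delta.detach()).clamp(-1.0, 1.0)

# Usage: protect ~3 s of 16 kHz audio (random noise as a stand-in track).
wav = torch.rand(1, 48000) * 2 - 1
protected = cloak(wav, StubSpeakerEncoder())
print(f"max sample change: {(protected - wav).abs().max().item():.4f}")
```

The `eps` bound caps how much any individual audio sample may move, which is what keeps the perturbation below human hearing thresholds while still steering the machine’s embedding off course.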
Stopping nefarious deepfakes
Umur Aybars Ciftci, a research assistant professor at Binghamton University, partnered with Cauth AI CEO Ilke Demir to build the tool.
“Even though this AI technology has been developed for fun and entertainment, a lot of people are using it for nefarious purposes,” Ciftci said. “You can easily take someone’s voice and make them sing something that they normally don’t sing, or steal someone’s songs and make it look like it is your song to begin with.”
Ciftci explained that the team’s overarching goal is to minimise the impact on human listeners while maximising disruption to machines. Moving forward, musicians could apply the safeguard to a new track just before releasing it to the public to prevent theft.
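In the adversarial-examples literature, that trade-off is often written as a constrained optimisation (a generic formulation, not necessarily MMMC’s exact objective): choose a perturbation $\delta$ that maximises the cloning model’s failure while an inaudibility budget $\varepsilon$ limits how far the audio may move from the original:

$$\max_{\delta}\; \mathcal{L}_{\text{clone}}(x + \delta) \quad \text{subject to} \quad \lVert \delta \rVert_{\infty} \le \varepsilon$$

where $x$ is the original waveform and $\mathcal{L}_{\text{clone}}$ measures how badly the cloning model reproduces the voice.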
The research team, which included Binghamton students Gerald Pena Vargas, Alicia Unterreiner, and David Ponce, has already successfully tested the tool on 150 music tracks across multiple genres. The findings were recently presented at a workshop at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025).