AI voice cloning.
Photo credit: Binghamton University, State University of New York

Imagine dropping a highly anticipated new single, only to watch artificial intelligence instantly hijack your voice and flood the internet with unauthorised, studio-quality knockoffs. As the music industry battles a massive deepfake crisis, a team of researchers has developed a powerful new weapon for artists: an invisible digital shield that makes AI cloning models produce only distorted noise.

Artificial intelligence models can now clone a human voice using just a few seconds of audio. For musicians, this growing crisis goes beyond intellectual property rights; it can lead to significant lost revenue and take a heavy emotional toll on artists who pour their hearts into their work.

To combat this, researchers at Binghamton University, State University of New York, in collaboration with the startup Cauth AI, have developed “My Music My Choice” (MMMC). This new digital safeguard empowers artists by explicitly protecting their original songs from generative AI cloning.

My Music My Choice protects tracks by adding small, imperceptible changes directly to a song’s waveform.

  • To human ears: When a person plays the song back, the vocals sound exactly the same as the original.
  • To artificial intelligence: When an AI model attempts to replicate the protected song, it only produces distorted noise.

From the AI’s perspective, these slight, targeted shifts make the audio sound like a completely different vocal track, causing the voice-cloning system to fail completely.
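The researchers have not published the exact algorithm behind My Music My Choice, but the idea described above matches a well-known family of techniques: adversarial perturbations, where a tiny, bounded change is added to each audio sample so that a machine-learning model misreads the signal while humans hear no difference. The sketch below is a minimal, hypothetical illustration of that general idea, not the MMMC method itself. The `protect_waveform` function and the stand-in "surrogate gradient" are assumptions for demonstration; a real system would compute the gradient from an actual voice-cloning model.

```python
import numpy as np

def protect_waveform(audio, grad, epsilon=1e-3):
    """Apply a bounded adversarial perturbation (generic FGSM-style sketch).

    audio:   original waveform, float samples in [-1, 1]
    grad:    gradient of a surrogate cloning model's loss w.r.t. the audio
             (hypothetical stand-in -- MMMC's actual objective is not public)
    epsilon: per-sample perturbation budget, kept tiny so the change is
             inaudible to human listeners
    """
    perturbation = epsilon * np.sign(grad)
    # Clip so the protected track stays a valid waveform.
    return np.clip(audio + perturbation, -1.0, 1.0)

def snr_db(original, modified):
    """Signal-to-noise ratio of the perturbation, in decibels.

    A high value means the added 'noise' is far quieter than the music,
    i.e. effectively imperceptible on playback.
    """
    noise = modified - original
    return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))

# Toy demo: a 1-second 440 Hz tone at 16 kHz, with a random stand-in
# gradient (a real attack would backpropagate through a cloning model).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)
grad = np.random.default_rng(0).standard_normal(sr)

protected = protect_waveform(audio, grad, epsilon=1e-3)
print(f"Perturbation SNR: {snr_db(audio, protected):.1f} dB")
```

With a budget of `epsilon=1e-3`, the perturbation sits roughly 50 dB below the signal in this toy example, which is far below the threshold of audibility for music playback, illustrating how such a safeguard can be transparent to listeners while still altering every sample the AI model sees.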

Stopping nefarious deepfakes

Umur Aybars Ciftci, a research assistant professor at Binghamton University, partnered with Cauth AI CEO Ilke Demir to build the tool.

“Even though this AI technology has been developed for fun and entertainment, a lot of people are using it for nefarious purposes,” Ciftci said. “You can easily take someone’s voice and make them sing something that they normally don’t sing, or steal someone’s songs and make it look like it is your song to begin with”.

Ciftci explained that the team’s overarching goal is to minimise the impact on human listeners while maximising disruption to machines. Moving forward, musicians with a new track could easily apply this safeguard to their songs right before releasing them to the public to prevent theft.

The research team, which included Binghamton students Gerald Pena Vargas, Alicia Unterreiner, and David Ponce, has already successfully tested the tool on 150 music tracks across multiple genres. The findings were recently presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop.
