Project lead Elizabeth Perry demonstrating the poisoned data image outcome. Photo credit: Angkit Thapa Magar

Monash University and the Australian Federal Police (AFP) are developing a new “data poisoning” tool designed to stop criminals from creating malicious AI-generated content, including deepfakes and child abuse material. The tool, called ‘Silverer’, works by subtly altering images before they are uploaded.

The tool is being built at the AiLECS Lab, a collaboration between the AFP and Monash. Data poisoning involves altering pixels in a way that is invisible to humans but “tricks” AI models trained on the data. When criminals try to use this poisoned data, the models produce inaccurate, skewed, or unrecognisable results.

“Before a person uploads images on social media or the internet, they can modify them using Silverer,” said Project Lead Elizabeth Perry. “This will alter the pixels to trick AI models and the resulting generations will be very low-quality, covered in blurry patterns, or completely unrecognisable.”
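The article does not disclose Silverer’s actual algorithm, but the general idea of an imperceptible pixel perturbation can be sketched in a few lines. The function below is a hypothetical toy illustration only: it shifts each colour channel by at most a couple of levels out of 255, a change a human viewer will not notice, while injecting structured noise into the data a model would train on.

```python
import numpy as np

def poison_image(pixels: np.ndarray, strength: int = 2, seed: int = 0) -> np.ndarray:
    """Toy sketch of imperceptible pixel perturbation (NOT the Silverer algorithm).

    Each channel of an 8-bit RGB image is shifted by +/- `strength` levels,
    which is invisible to humans but alters the data a model trains on.
    """
    rng = np.random.default_rng(seed)
    # High-frequency +/- pattern applied to every pixel and channel.
    perturbation = rng.choice([-strength, strength], size=pixels.shape)
    poisoned = pixels.astype(np.int16) + perturbation
    # Clip back into the valid 8-bit range.
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# The perturbed image differs from the original by at most 2 levels per channel.
original = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = poison_image(original)
```

A real poisoning tool would use perturbations optimised against specific model architectures rather than random noise; this sketch only shows why the change can stay below the threshold of human perception.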

Harmful images and videos

The AFP has identified an increase in AI-generated child abuse material. Digital forensics expert and AiLECS Co-Director Associate Professor Campbell Wilson said the generation of fake images is a growing problem. “Currently, these AI-generated harmful images and videos are relatively easily created using open source technology and there’s a very low barrier to entry for people to use these algorithms,” Associate Professor Wilson said.

AFP Commander Rob Nelson said the tool could also help investigators by cutting down the volume of fake material to wade through.

“We don’t anticipate any single method will be capable of stopping the malicious use or re-creation of data, however, what we are doing is similar to placing speed bumps on an illegal drag racing strip,” Commander Nelson said. “We are building hurdles to make it difficult for people to misuse these technologies.”

The ‘Silverer’ prototype has been in development for the last 12 months, and discussions are under way about deploying it internally at the AFP. The project’s goal is to create an easy-to-use tool that ordinary Australians can use to protect their data.
