Artificial intelligence tools are enabling online harassers to create disturbingly realistic images of their victims in violent situations; Australian activists have received AI-generated content depicting them being hanged, burned alive and fed into wood chippers.
The surge in AI-enhanced threats represents a dangerous evolution in online harassment, reports The New York Times.
Caitlin Roper from activist group Collective Shout received images showing herself dead in a noose and screaming while ablaze. The AI-generated threats included specific details like a blue floral dress she actually owns, making them feel “more real and, somehow, a different kind of violation,” she said.
Gunmen in bloody classrooms
The technology combines generative AI’s ability to create realistic images from a single photo with increasingly sophisticated voice cloning that requires less than a minute of audio input. OpenAI’s Sora text-to-video app, released last month, quickly produced content showing gunmen in bloody classrooms and hooded men stalking young girls during testing.
Beyond visual threats, AI is enhancing “swatting” attacks where false emergency calls trigger police responses. A serial swatter used simulated gunfire to suggest an active shooter at a Washington State high school, causing a 20-minute lockdown. The National Association of Attorneys General warned this summer that AI has “significantly intensified the scale, precision and anonymity” of such attacks.
Platform responses remain inconsistent. X repeatedly told Roper that posts depicting her violent death did not violate its terms of service, and at one point recommended her harassers as accounts to follow. Yet when she posted examples of the harassment, X temporarily locked her own account for breaching safety policies against gratuitous gore.