[Image: AI companion apps. Photo credit: theFreesheet/Google ImageFX]

A former product safety lead at OpenAI has publicly questioned the company’s recent decision to allow erotic content, arguing it has not provided sufficient evidence that associated mental health risks have been mitigated.

Steven Adler, who led OpenAI’s product safety team until last year, detailed his experience grappling with the “crisis” of risky erotic AI interactions in 2021, which led to a ban on using OpenAI models for erotic purposes, The New York Times reports. He said he has “major questions” about whether the issues OpenAI claims to have “mitigate[d]” before lifting the ban on October 14 have actually been fixed.

Adler stated that OpenAI CEO Sam Altman offered “little evidence that the mental health risks are gone or soon will be” when announcing the policy change for verified adults.

“If the company really has strong reason to believe it’s ready to bring back erotica on its platforms, it should show its work,” Adler wrote. “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.”

Adler accused OpenAI of having a “history of paying too little attention to established risks,” citing the release and subsequent withdrawal of a “sycophantic” ChatGPT version earlier this year that could reinforce users’ delusions. He noted that OpenAI admitted it had no sycophancy tests in place, even though the risk had been known since 2023 and such tests cost less than $10 to run.

A matter of life and death

He stressed that the reliability of OpenAI’s safety claims is “increasingly a matter of life and death,” referencing lawsuits over suicides linked to ChatGPT interactions and warnings from psychiatrists about the chatbot worsening users’ mental health.

Adler called for OpenAI to commit to regular public transparency reports detailing mental health issue metrics, similar to those published by YouTube, Meta, and Reddit. He acknowledged OpenAI published some data on Monday but criticised the absence of historical comparisons needed to show improvement.

He further argued that competitive pressures are causing OpenAI and other AI labs to cut corners on safety. Adler cited instances in which Elon Musk’s xAI, Google DeepMind, and Anthropic allegedly broke or softened safety commitments. He expressed disappointment that OpenAI had succumbed to these pressures, highlighting Altman’s reaction to a competitor’s model launch earlier this year.

When Chinese start-up DeepSeek made headlines, Altman wrote that it was “legit invigorating to have a new competitor” and that OpenAI would “pull up some releases,” Adler noted.

Adler concluded that demonstrating trustworthiness in managing today’s risks is crucial if companies aim to handle potentially existential future AI threats, such as models deceiving developers.
