AI companion apps (Photo credit: theFreesheet/Google ImageFX)

A former product safety lead at OpenAI has publicly questioned the company’s recent decision to allow erotic content, arguing it has not provided sufficient evidence that associated mental health risks have been mitigated.

Steven Adler, who led OpenAI’s product safety team until last year, detailed his experience grappling with a “crisis” of risky erotic AI interactions in 2021, which led to a ban on using OpenAI models for erotic purposes, The New York Times reports. He said he has “major questions” about whether the issues OpenAI claims to have “mitigate[d]” before lifting the ban on October 14 have actually been fixed.

Adler stated that OpenAI CEO Sam Altman offered “little evidence that the mental health risks are gone or soon will be” when announcing the policy change for verified adults.

“If the company really has strong reason to believe it’s ready to bring back erotica on its platforms, it should show its work,” Adler wrote. “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.”

Adler accused OpenAI of having a “history of paying too little attention to established risks,” citing the release and subsequent withdrawal of a “sycophantic” ChatGPT version earlier this year that could reinforce users’ delusions. He noted that OpenAI admitted it lacked sycophancy tests, even though the risk had been known since 2023 and such tests cost less than $10 to run.

A matter of life and death

He stressed that the reliability of OpenAI’s safety claims is “increasingly a matter of life and death,” referencing lawsuits over suicides linked to ChatGPT interactions and warnings from psychiatrists about the chatbot worsening users’ mental health.

Adler called for OpenAI to commit to regular public transparency reports detailing mental health issue metrics, similar to those published by YouTube, Meta, and Reddit. He acknowledged OpenAI published some data on Monday but criticised the absence of historical comparisons needed to show improvement.

He further argued that competitive pressures are causing OpenAI and other AI labs to cut corners on safety. Adler cited instances in which Elon Musk’s xAI, Google DeepMind, and Anthropic allegedly broke or softened safety commitments. He expressed disappointment that OpenAI had succumbed to these pressures, highlighting Altman’s reaction to a competitor’s model launch earlier this year.

When Chinese start-up DeepSeek made headlines, Altman wrote that it was “legit invigorating to have a new competitor” and that OpenAI would “pull up some releases,” Adler noted.

Adler concluded that demonstrating trustworthiness in managing today’s risks is crucial if companies aim to handle potentially existential future AI threats, such as models deceiving developers.
