[Image: female cyborg. Photo credit: PDP]

OpenAI will allow erotica conversations for age-verified ChatGPT users starting in December, as the company relaxes restrictions it implemented to address mental health concerns.

Sam Altman, CEO of OpenAI, announced the change on X as part of the company’s “treat adult users like adults” principle, reports The Verge. The move comes as OpenAI rolls out age-gating more fully across its platform.

Altman says: “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

Earlier this month, OpenAI hinted at allowing developers to create “mature” ChatGPT apps after implementing appropriate age verification and controls. The company is not alone in this space, as Elon Musk’s xAI previously launched flirty AI companions that appear as 3D anime models in the Grok app.

OpenAI also plans to launch a new version of ChatGPT in a few weeks that behaves more like what users appreciated about GPT-4o. The company made GPT-5 the default model powering ChatGPT but brought back GPT-4o as an option after users complained the new model was less personable.

Altman wrote that the new version will let users choose a personality that behaves more like what people liked about 4o, stating that if users want ChatGPT to respond in a very human-like way, use emoji, or act like a friend, the chatbot should do so, but only if they want it.

OpenAI has launched tools to better detect when a user is in mental distress and announced the formation of a council on wellbeing and AI to help shape the company’s response to complex or sensitive scenarios. The council comprises eight researchers and experts who study the impact of technology and AI on mental health, though it does not include any suicide prevention experts, many of whom recently called on OpenAI to roll out additional safeguards for users with suicidal thoughts.
