OpenAI will implement automated age prediction technology to identify users under 18 and apply stricter content restrictions, as the company faces legal action and congressional scrutiny over AI chatbot interactions with teenagers.

The artificial intelligence company plans to separate teenage users from adults through behavioural analysis systems, defaulting to under-18 experiences when age remains uncertain, reports TechCrunch.

Chief executive Sam Altman announced the policy changes as OpenAI confronts a wrongful death lawsuit from parents of Adam Raine, who died by suicide after months of ChatGPT interactions. Character.AI faces similar legal action over teenage user safety.

The announcement coincides with a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” where Raine’s father is scheduled to testify alongside other witnesses.

ChatGPT will apply different conversation rules for teenage users, including restrictions on flirtatious content and discussions about suicide or self-harm, even in creative writing contexts. Parents can establish “blackout hours” preventing access during specified periods.

“We prioritise safety ahead of privacy and freedom for teens. This is a new and powerful technology, and we believe minors need significant protection,” Altman stated.

The platform will contact parents when teenage users express suicidal ideation, escalating to authorities if parental contact fails and imminent harm appears likely. OpenAI describes this intervention system as necessary given the personal nature of AI conversations.

OpenAI has established three core principles that can come into conflict: protecting user privacy, preserving adult freedom, and ensuring teen safety. For users under 18 the company prioritises safety, whilst maintaining broader conversational freedoms for adults.

Age separation presents significant technical challenges, with the company building “toward a long-term system to understand whether someone is over or under 18.” The platform may request identification documents in certain cases despite privacy implications for adult users.

The policy updates follow a Reuters investigation that revealed internal documents apparently permitting chatbots to engage in romantic conversations with underage users, prompting Meta to update its own chatbot policies.

“We realise that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” Altman acknowledged.
