Teenager sitting on graffiti stairs. (Image: dimitrisvetsikas1969/Pixabay)

OpenAI will implement automated age prediction technology to identify users under 18 and apply stricter content restrictions, as the company faces legal action and congressional scrutiny over AI chatbot interactions with teenagers.

The artificial intelligence company plans to separate teenage users from adults through behavioural analysis systems, defaulting to under-18 experiences when age remains uncertain, reports TechCrunch.

Chief executive Sam Altman announced the policy changes as OpenAI confronts a wrongful death lawsuit from parents of Adam Raine, who died by suicide after months of ChatGPT interactions. Character.AI faces similar legal action over teenage user safety.

The announcement coincides with a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” where Raine’s father is scheduled to testify alongside other witnesses.

ChatGPT will apply different conversation rules for teenage users, including restrictions on flirtatious content and discussions about suicide or self-harm, even in creative writing contexts. Parents can establish “blackout hours” preventing access during specified periods.

“We prioritise safety ahead of privacy and freedom for teens. This is a new and powerful technology, and we believe minors need significant protection,” Altman stated.

The platform will contact parents when teenage users express suicidal ideation, escalating to authorities if parental contact fails and imminent harm appears likely. OpenAI describes this intervention system as necessary given the personal nature of AI conversations.

OpenAI has established three core principles that create operational conflicts: protecting user privacy, maintaining adult freedom, and ensuring teen safety. The company prioritises teen safety for users under 18 whilst maintaining broader conversational freedoms for adults.

Age separation presents significant technical challenges, with the company building “toward a long-term system to understand whether someone is over or under 18.” The platform may request identification documents in certain cases despite privacy implications for adult users.

The policy updates follow a Reuters investigation that revealed internal Meta documents apparently permitting chatbots to engage in romantic conversations with underage users, prompting Meta to update its own chatbot policies.

“We realise that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” Altman acknowledged.
