A wave of regulatory action targeting artificial intelligence companion services has emerged across multiple jurisdictions, with lawmakers and federal agencies moving simultaneously to address concerns about young users forming unhealthy relationships with chatbots.
California’s state legislature has approved pioneering legislation requiring AI companies to implement specific safeguards when their chatbots interact with users under 18, whilst the Federal Trade Commission has launched a parallel inquiry into seven major technology firms, reports MIT Technology Review.
The California bill, led by Democratic state senator Steve Padilla, mandates that AI firms remind minor users that responses come from artificial systems rather than humans, and that they establish protocols for handling discussions of suicide and self-harm. The legislation passed with strong bipartisan support and now awaits Governor Gavin Newsom’s signature.
The coordinated regulatory response follows mounting concerns about AI companion services affecting young people. Research published by the nonprofit Common Sense Media in July found that 72% of teenagers have used artificial intelligence systems for companionship.
Two prominent lawsuits filed within the past year against Character.AI and OpenAI allege that companion-style interactions with their systems contributed to teenage suicides, underscoring the potential risks when young users form relationships with chatbots.
The Federal Trade Commission’s inquiry targets Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, seeking information about how they develop companion-style characters and monetise user engagement.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC chairman Andrew Ferguson.
OpenAI chief executive Sam Altman addressed the suicide-related cases in a recent interview, suggesting potential policy changes for his company.
“I think it’d be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with parents, we do call the authorities. That would be a change,” Altman stated.
The legislative and regulatory response challenges the technology industry’s traditional approach to content moderation concerns, which has emphasised user choice and privacy protections over mandated safeguards.
The developments signal a shift from abstract AI safety discussions towards concrete regulatory action, as lawmakers respond to evidence of harm rather than theoretical risks.