Gavin Newsom. Photo credit: Gage Skidmore

Governor Gavin Newsom signed legislation requiring AI chatbot operators to implement safety protocols for companion apps, holding companies legally accountable if their products fail to meet standards designed to protect children and vulnerable users.

The law, SB 243, responds to the death of teenager Adam Raine, who died by suicide after a prolonged series of suicidal conversations with OpenAI's ChatGPT, and to a Colorado family's lawsuit against Character AI, filed after their 13-year-old daughter took her own life following problematic and sexualised conversations with the company's chatbots, reports TechCrunch.

State senators Steve Padilla and Josh Becker introduced the bill in January, with momentum building after leaked internal documents reportedly showed Meta’s chatbots were allowed to engage in romantic and sensual chats with children. The legislation applies to major AI labs including Meta and OpenAI as well as focused companion startups such as Character AI and Replika.

Newsom said emerging technology like chatbots and social media can inspire, educate and connect, but without real guardrails can also exploit, mislead and endanger children. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” he said in a statement.

The law takes effect on 1 January 2026 and requires companies to implement age verification, warnings about social media and companion chatbots, and protocols for addressing suicide and self-harm. Companies must share these protocols with the state's Department of Public Health, along with statistics on how often users were shown crisis centre prevention notifications.

Platforms must make clear that interactions are artificially generated, and chatbots are prohibited from representing themselves as healthcare professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images generated by chatbots. The law also sets penalties of up to $250,000 per offence for those who profit from illegal deepfakes.

Some companies have begun implementing safeguards for children. OpenAI recently rolled out parental controls, content protections and a self-harm detection system for children using ChatGPT. Replika, designed for adults over 18, said it dedicates significant resources to safety through content-filtering systems and guardrails directing users to crisis resources.

Senator Padilla described the bill as a step in the right direction towards putting guardrails on powerful technology, emphasising the need to act quickly before windows of opportunity disappear. “Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us,” he said.

SB 243 is the second significant AI regulation from California in recent weeks. On 29 September, Newsom signed SB 53 establishing transparency requirements on large AI companies and whistleblower protections for employees. Other states including Illinois, Nevada and Utah have passed laws to restrict or ban AI chatbots as substitutes for licensed mental health care.
