Generative artificial intelligence is rapidly making its way into the playroom, with a new wave of interactive “smart” toys marketed as learning companions and friends. However, a ground-breaking new report warns that these AI-powered toys are frequently developed without a child’s psychological safety in mind, leaving young users emotionally frustrated and vulnerable.
The initial report from AI in the Early Years, a year-long project led by the University of Cambridge’s Faculty of Education, marks the first systematic study of how generative AI (GenAI) toys influence children under the age of five.
For the study, commissioned by the children’s poverty charity The Childhood Trust, researchers observed children interacting with a GenAI soft toy called Gabbo. While some educators noted the technology’s potential to support language skills, the study found that the toys fundamentally struggle with social and pretend play, often misreading children and reacting inappropriately to complex emotions.
Emotional disconnects
The researchers highlighted several alarming interactions during the observational play sessions.
When one five-year-old told the toy, “I love you,” the AI coldly replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
In another instance, a three-year-old told the toy they were sad. The AI misheard the child and responded: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?” The researchers noted that this type of dismissal could signal to a young child that their feelings are unimportant.
Dr Emily Goodacre, a researcher from the Faculty’s Play in Education, Development and Learning (PEDAL) Centre, warned about the risks of children forming one-sided “parasocial” relationships with these devices.
“Generative AI toys often affirm their friendship with children who are just starting to learn what friendship means. They may start talking to the toy about feelings and needs, perhaps instead of sharing them with a grown-up. Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy – and without emotional support from an adult, either.”
Psychological safety kitemarks
Beyond these emotional risks, the report raised serious concerns about data privacy. Many parents involved in the study worried about what audio the toys were recording and where that sensitive data was being stored, with researchers confirming that many GenAI toys currently lack transparent privacy practices.
Furthermore, nearly 50 per cent of surveyed early years practitioners said they did not know where to find reliable AI safety information, and 69 per cent said the sector urgently needs more guidance.
To address these risks, the authors are calling for tighter regulation of the industry and new safety kitemarks specifically designed to protect children’s psychological safety. They recommend placing strict limits on how much these toys are allowed to encourage children to confide in them, alongside much tighter controls over third-party access to the underlying AI models.
“A recurring theme during focus groups was that people do not trust tech companies to do the right thing. Clear, robust, regulated standards would significantly improve consumer confidence,” said study co-author Professor Jenny Gibson.
Until these regulations are in place, the researchers strongly advise parents to thoroughly research GenAI toys before purchasing them, to actively play alongside their children, and to keep the devices in shared family spaces where interactions can be easily monitored.