Parents are generally willing to accept artificial intelligence in children’s books, provided the text remains human-authored and the images undergo expert review. Children, however, are proving surprisingly adept at spotting emotional inconsistencies in the artwork.
Researchers worked with 13 parent-child groups to analyse their reactions to stories featuring AI-generated illustrations. While parents focused on safety concerns and real-world accuracy, children aged four to eight picked up on subtle disconnects where the AI had failed to interpret emotional cues in the narrative.
“We found children were more sensitive than their parents to the emotional content of the illustrations and were more likely to notice any disconnect between the emotions being conveyed by the images and the emotions being conveyed by the text,” says Qiao Jin, first author of a paper on the work and an assistant professor of computer science at North Carolina State University.
Unsafe behaviour
The study highlighted distinct priorities between generations. Parents and older children raised concerns about real-world accuracy in realistic or scientific stories. Older children specifically noticed when AI images contained size or behaviour errors, whilst parents worried about errors that might inadvertently encourage unsafe behaviour.
Despite these concerns, most parents expressed openness to AI-generated images if experts in children’s literature screened them. However, the majority were uncomfortable with the idea of AI generating the story text itself.
The researchers also experimented with transparency measures by placing small labels under each image. Most participants neither noticed nor used these labels, and several reported finding them distracting.
“Parents preferred a clear notification on the cover of the story making clear whether AI had been used to create a story’s images so they could make an informed decision about whether to purchase the book,” says Jin.