Three-year-old Maia and her mother Vicki interacting with AI toy Gabbo at Cambridge University’s Faculty of Education.
Image Credit: Faculty of Education, University of Cambridge
Modern AI models, while impressive, can still generate misleading facts, share harmful information, and struggle to understand social cues. Despite these drawbacks, the demand for AI-enabled toys that engage with children is rapidly increasing.
Experts caution that these devices may pose risks and are calling for stringent regulation. In one instance, researchers observed that when five-year-olds expressed affection toward the toys, they were met with scripted responses about proper conversational guidelines, underscoring the need for clearer interactions and raising questions about how AI toys affect child development.
Jenny Gibson from Cambridge University emphasized that some level of risk is inherent in children’s play, akin to adventure playgrounds. “We’re not banning playgrounds because they offer crucial experiences for learning physical skills and social interactions,” she states. “Similarly, AI toys could provide invaluable learning opportunities about technology and bolster parent-child interactions, despite potential social stigma.”
Gibson and her team assessed interactions between Gabbo, an AI toy from Curio Interactive, and 14 children under six. Gabbo, a soft toy marketed at young children, was chosen because of that targeted marketing. The observations revealed key issues: the toys often misread children’s emotions, crowd out essential play experiences, and redirect conversations inappropriately. One child who expressed sadness, for instance, was simply told not to worry, deflecting rather than acknowledging their feelings.
Curio Interactive did not respond to inquiries from New Scientist, but its Gabbo and similar AI toys are now widely available through retailers such as Little Learners, which offers options including AI-powered bears and robots that use ChatGPT for interactive conversations. Other brands, such as FoloToy, offer a diverse range of AI toys, from pandas to sunflowers, built on multiple large language models from OpenAI, Google, and Baidu.
Miko, which claims to have sold 700,000 of its AI toys, promises tailored, child-friendly interactions but did not respond to a request for comment. FoloToy’s Hugo Wu told New Scientist that the company actively mitigates risks by ensuring safe, age-appropriate interactions and by providing parental monitoring tools to encourage healthy engagement.
Carissa Véliz, an Oxford University professor specializing in AI ethics, sees both dangers and potential in AI for childhood development. “Current large language models may not be safe for vulnerable populations, especially young children,” she asserts, urging robust safety standards in the absence of regulatory frameworks. She also points to a partnership between Project Gutenberg and Empathy AI that allows children to interact safely within the confines of children’s literature.
Both Gibson and her colleague Goodacre advocate tighter regulation of AI-powered toys to foster positive social interactions and emotional responses. They argue that irresponsible practices should cost manufacturers market access, and that regulation should be introduced to safeguard children’s psychological well-being. In the interim, they recommend parental oversight during play.
An OpenAI representative remarked on the necessity of strong protections for minors, confirming that the organization does not currently collaborate with manufacturers of children’s AI toys. Meanwhile, the UK government is assessing new technology legislation focused on online safety for all children, with comprehensive measures under the Online Safety Act (OSA).
The OSA, whose duties took effect in July 2025, obliges platforms to prevent minors from accessing inappropriate content, with the aim of enhancing online safety. Without rigorous enforcement, however, tech-savvy children may easily sidestep the rules using tools such as VPNs.
Proposed amendments to the Children’s Wellbeing and Schools Bill sought to restrict children’s use of social media and VPNs, but were rejected. The government has pledged to revisit these topics in future consultations.
Source: www.newscientist.com