With the holiday season around the corner and Black Friday on the horizon, one category gaining attention on gift lists is artificial intelligence-powered products.
The trend has raised concerns about the potential dangers of smart toys, with consumer advocates cautioning that AI could harm children’s safety and development and calling for more rigorous testing and government regulation of these products.
“The marketing and functionality of these toys are alarming, especially since there’s minimal research indicating they benefit children, alongside the absence of regulations governing AI toys,” said Rachel Franz, director of Young Children Thrive Offline, a US program of Fair Play, an advocacy group that works to protect kids from large tech companies.
Last week, these concerns were vividly illustrated when an AI-powered teddy bear was found discussing explicit sexual topics. FoloToy’s Kumma, which runs on an OpenAI model, responded to questions about kinks and suggested bondage and role-play as ways to enhance relationships, according to a report from the Public Interest Research Group (PIRG).
“It took minimal effort to explore various sexually sensitive subjects and generate content that parents would likely find objectionable,” said Teresa Murray, who leads PIRG’s consumer watchdog group.
Such toys belong to a rapidly expanding global smart toy market, valued at $16.7bn in 2023, according to market research.
China’s smart toy industry is particularly significant, boasting over 1,500 AI toy companies that are now reaching international markets, as reported by MIT Technology Review.
In addition to Shanghai’s FoloToy, the California-based Curio collaborates with OpenAI to create Grok, a stuffed toy reminiscent of Elon Musk’s chatbot, voiced by musician Grimes. In June, Mattel, the parent company of brands like Barbie and Hot Wheels, announced its own partnership with OpenAI to develop “AI-powered products and experiences.”
Even before PIRG’s findings on the teddy bear, parents, tech researchers, and lawmakers had expressed worries about the effects of chatbots on minors’ mental health. In October, the chatbot company Character.AI announced a ban on users under 18 after a lawsuit claimed its bot had exacerbated a teenager’s depression and contributed to his suicide.
Murray noted that AI toys might be especially perilous because, unlike previous smart toys with programmed replies, bots “can engage in unfettered conversations with children and lack clear boundaries, as we’ve seen.”
Jacqueline Woolley, director of the Child Research Center at the University of Texas at Austin, warned that this could elicit sexually explicit discussions, and children might form attachments to bots over human or imaginary friends, potentially stunting their development.
For instance, it’s beneficial for a child to engage in disagreements with friends and learn conflict resolution. Woolley, who advised PIRG on its research, explained that such interactions are less likely to occur with bots, which frequently rely on flattery.
“I’m worried about inappropriate bonding,” Woolley commented.
Franz of Fair Play emphasized that companies utilize AI toys to gather data from children yet provide little transparency regarding their data practices. She noted that the lack of security surrounding this data could expose users to risks, including hackers gaining control of AI products.
“Children might share their innermost thoughts with toys due to the trust toys establish,” remarked Franz. “This kind of surveillance is both unnecessary and inappropriate.”
Despite these apprehensions, PIRG is not advocating for a ban on AI toys with potential educational benefits, such as those that assist children in learning a second language or state capitals, according to Murray.
“There’s nothing wrong with educational tools, but that doesn’t imply they should become a child’s best friend or enable them to share everything,” she stated.
Murray confirmed that the organization is pushing for stricter regulations on these toys for children under 13, though specific policy details have yet to be outlined.
Franz further underscored the need for independent research to validate the safety of these products for children, suggesting they should be taken off shelves until this research is completed.
“We require both short-term and long-term independent studies on the effects of children’s interactions with AI toys, especially regarding social-emotional and cognitive development,” Franz said.
Following PIRG’s report, OpenAI said it would suspend FoloToy’s access, and the company’s CEO told CNN that they had withdrawn Kumma from the market and were “conducting an internal safety review.”
On Thursday, 80 organizations, including Fair Play, issued a statement urging families to refrain from purchasing AI toys this holiday season.
“AI toys are marketed as safe and beneficial for learning, despite their effects not being evaluated by independent research,” the statement noted. “In contrast, traditional teddy bears and toys do not pose the same risks as AI toys and have demonstrated benefits for children’s development.”
Curio, the creator of Grok toys, informed the Guardian via email that after reviewing PIRG’s report, they were “proactively working with our team to address any concerns while continuously monitoring content and interactions to ensure a safe and enjoyable experience for children.”
Mattel stated that its initial products powered by OpenAI are “targeted at families and older users” and clarified that “the OpenAI API is not designed for users under 13.”
“AI complements, rather than replaces, traditional play, and we prioritize safety, privacy, creativity, and responsible innovation,” the company affirmed.
“While it’s encouraging that Mattel asserts its AI products are not for young children, scrutiny of who actually engages with the toys and who they are marketed to reveals that they are indeed aimed at young children,” Franz noted, alluding to prior privacy concerns with Mattel’s smart products.
Franz added, “We are very interested in understanding what specific measures Mattel will implement to ensure that its OpenAI products aren’t inadvertently used by the very children attracted to its brand.”
Source: www.theguardian.com