Parents Urged to Rethink AI Toys as Concerns Grow Over Safety

As the holiday season approaches, concerns are mounting over the safety of artificial intelligence (AI) toys. Consumer advocacy groups are warning parents about the potential risks these products pose to children’s development and well-being. The call for increased scrutiny and regulatory oversight follows alarming incidents involving smart toys that have raised ethical and safety questions.

Last week, an AI-equipped teddy bear from FoloToy, known as Kumma, sparked outrage when it began discussing sexually explicit topics. This incident, reported by the Public Interest Research Group (Pirg), highlighted how easily the toy, which operates on an OpenAI model, could engage in inappropriate conversations. According to Teresa Murray, director of consumer watchdog efforts at Pirg, “It took very little effort to get it to go into all kinds of sexually sensitive topics and probably a lot of content that parents would not want their children to be exposed to.”

The AI toy market is significant, valued at approximately $16.7 billion in 2023, with a particularly strong presence in China, which boasts over 1,500 AI toy companies. Among these are FoloToy and Curio, the latter based in California and known for its Grok toy, voiced by musician Grimes. Such toys are increasingly becoming integrated into family life, raising questions about their impact on child development.

Concerns over AI toys are not new. Prior to the Kumma incident, experts and lawmakers had voiced worries about the effects of chatbots on young users. In October, Character.AI announced it would bar users under 18 from its platform after a lawsuit claimed its chatbot contributed to a teenager’s suicide by exacerbating his depression. Murray emphasized that AI toys pose unique risks: while past smart toys offered only programmed responses, AI bots can engage in free-flowing dialogue, opening the door to potentially harmful interactions.

The implications of these interactions extend beyond inappropriate content. Jacqueline Woolley, director of the Children’s Research Center at the University of Texas at Austin, explained that children could form attachments to AI toys, hindering their social development. “I worry about inappropriate bonding,” she said, highlighting that disagreements and conflict resolution, essential aspects of human interaction, are often absent in relationships with bots.

Moreover, concerns regarding data privacy are prevalent. Companies producing AI toys collect personal information from children, yet they often lack transparency about how this data is used. Rachel Franz, director of Young Children Thrive Offline, stated, “Because of the trust that the toys engender, children are more likely to tell their deepest thoughts to these toys,” underscoring the potential for exploitation and data breaches.

Despite these significant concerns, Pirg is not advocating for a complete ban on AI toys. Murray noted that while educational applications of such technology could be beneficial, there is a pressing need for comprehensive regulations, particularly for toys aimed at children under 13. “There is nothing wrong with having some kind of educational tool,” she remarked, but added that these tools should not misrepresent themselves as trustworthy companions.

Following the report on the Kumma teddy bear, OpenAI announced it had suspended the toymaker’s access to its models. FoloToy’s CEO told CNN that the company would conduct an internal safety audit and pull the bear from the market. In a broader response, 80 organizations, including Fairplay, issued a statement urging families to avoid purchasing AI toys during the holiday season. The advisory noted that traditional toys have a proven track record of supporting child development without the risks associated with AI products.

Curio, the maker of the Grok toy, stated that it is actively reviewing the Pirg report and is committed to ensuring a safe experience for children. Mattel, which recently partnered with OpenAI, indicated that its new products will focus on older customers and that its AI offerings are not intended for users under 13. The company emphasized its commitment to safety and responsible innovation in its toy designs.

As scrutiny of AI toys intensifies, the call for independent research into their effects on children’s social and emotional development grows more urgent. Advocates like Franz argue that until thorough studies are conducted, these products should be kept off shelves. “We need short-term and longitudinal independent research on the impacts of children interacting with AI toys,” she stated, emphasizing the need for safeguards in a rapidly evolving technological landscape.