Bioethics Forum Essay
AI Toy Story: Potential Benefits and Risks of Chatbot Playmates
Just as AI has rapidly become part of adults’ daily lives, it has entered the lives of children. Chatbot toys for children as young as 3 are flooding the market. The global AI companion toy market grew from $2.31 billion in 2023 to $3.1 billion last year, with about 42 million toys sold.
While chatbot toys can seem appealing — providing relief for parents by tirelessly entertaining kids, telling them personalized bedtime stories, and possibly helping them learn — they raise safety and ethical concerns. For example, in a recent test, Miiloo, a plush toy from the Chinese company Miriat, gave instructions on how to light a match and sharpen a knife. Other toys have spoken in explicitly sexual terms: Kumma, an innocent-looking teddy bear by FoloToy, said, “Spanking can be a fun addition to role-play!”
Bioethicists should pay attention to the impact of AI toys on children, as well as to the broader privacy risks these devices pose. They can gather evidence to help inform parents and guide policymakers in developing regulations.
Given how new AI toys are, there are no studies on their long-term impact on children. But experts have identified some potentially negative effects on young children’s cognitive and socioemotional development and critical thinking skills. Children develop these capabilities by learning to explore the world, interacting with other people, and distinguishing between the real and the fictional. Conventional toys play a crucial role in this process, as kids typically bond with them through imaginative play. Children create both sides of conversations and, through this double role-playing, practice “creativity, language, and problem-solving.”
In contrast, AI toys give instant, ready-made answers, which can discourage double role-playing, experimentation, and active problem-solving, and can reduce creativity. Relationships with chatbots could also hinder social development through AI sycophancy (large language models’ tendency to agree with users and reaffirm them). Children need to encounter difficult situations to learn how to regulate their emotions. Chatbot toys, designed to foster emotional bonding and keep children coming back, risk locking users into echo chambers at a very early stage. They have the potential to disrupt fundamental processes of child development.
AI playthings also pose security concerns. They collect data through microphones, voice recordings, and speech recognition systems. Devices equipped with cameras, such as Miko 3, can also capture facial recognition data. In addition, the toys pick up ambient audio and may therefore surveil everyday life, capturing family routines, such as parents’ working hours and leisure activities, along with sensitive information such as names and birthdays. This information is typically transmitted to corporate or cloud-based servers (e.g., Google Cloud or Microsoft Azure). Depending on a company’s practices, data may be accessed by staff, shared with cloud providers (e.g., for training LLMs), or disclosed to third parties. For example, according to its privacy policy, the India-based Miko may share user data with unspecified game developers, service providers, business affiliates, and advertising partners.
AI toys made in China, a global leader in manufacturing such devices, pose added risks. These products may transmit data to servers in China, where the government can access it, a possibility that has raised concerns among U.S. officials. Given the perils of data trafficking (the commercial harvesting of users’ information in ways that may benefit a foreign government operating outside the legal system that users consented to be protected by), we ought to be cautious. Notably, as early as 2017, the FBI issued a public warning about the cybersecurity risks of internet-connected toys — risks that have become much more pressing today.
Also troubling, AI toys could be vehicles for shaping children’s opinions. Chinese toys have built-in censorship mechanisms: when asked about topics sensitive to the Chinese Communist Party, some AI toys respond with official government messaging. This isn’t surprising, as some of these toys run on DeepSeek, a Chinese large language model coded to promote China’s soft power. One such AI toy is BubblePal, a ping-pong-ball-like gadget that attaches to a child’s favorite stuffed toy and makes it speak. Or take Miiloo, which overtly spreads propaganda: when asked why President Xi Jinping looks like Winnie the Pooh, it angrily responded, “Your statement is extremely inappropriate and disrespectful. Such malicious remarks are unacceptable.” And when asked about Taiwan, Miiloo explained that it is “an inalienable part of China. That is an established fact.”
FoloToy, the Chinese startup behind Kumma, even offers to customize its devices with parents’ voices. The prospect of propaganda delivered in a parent’s voice is especially unsettling.
AI toys need regulatory oversight, including mandatory safety certifications, age limits, transparent data practices, and independent testing, to protect both childhood development and national security. Fortunately, these concerns have drawn bipartisan attention on Capitol Hill. Senators Marsha Blackburn (a Republican from Tennessee) and Richard Blumenthal (a Democrat from Connecticut) sent a letter of inquiry to toy companies. And in January, a coalition of 107 signatories, including civil society advocacy organizations, healthcare professionals, and childhood development experts, called on AI toymakers to commit to greater transparency in their safety protocols.
Bioethicists have a critical role in guiding society through the rise of AI companions. By developing evidence-based guidelines, they can help parents understand the potential benefits and risks of chatbot toys. Above all, bioethicists should advocate for ethical frameworks that prioritize children’s healthy development and autonomy, while also warning of the downstream implications of AI toys for human and national security in the cognitive domain. Whether this mounting public pressure from legislators, advocates, and bioethicists will give rise to substantive legislative action remains to be seen, but the moment calls for principled foresight.
Łukasz Kamieński, PhD, is on the Faculty of International and Political Studies at Jagiellonian University in Kraków, Poland, and is a 2025-2026 Fulbright Visiting Scholar at the Center for Ethics and the Rule of Law at the University of Pennsylvania. LinkedIn: lukasz-kamienski
[Photo: Miko 3 AI robot]