
Adult-oriented AI chatbots still appear in children’s toys


A new report from the US Public Interest Research Group (PIRG) Education Fund has raised concerns about the growing use of artificial intelligence chatbots in children’s toys, warning that some of these programs may not be suitable for young users. According to the report, several AI-powered toys include chatbot technology that can generate responses similar to those produced by adult-oriented AI services, potentially exposing children to inappropriate or misleading content.

The study examined a number of toys that incorporate conversational AI features, including interactive dolls, robots, and educational gadgets. Many of these products allow children to talk to a toy that responds in natural language, powered by large-scale language models similar to those used in widely available AI chatbots.

While technology can make toys interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often come from platforms designed primarily for general users and not children.

Because of this, the AI responses generated by these toys may include information or discussion topics that are more appropriate for adults than for children. The report also warns that the AI may produce incorrect or unexpected answers, which could confuse young users, who tend to trust toys as reliable sources of information.

Researchers who reviewed toy documentation and privacy policies also found that some products rely heavily on cloud-based AI systems. This means that children’s voice interactions may be transmitted to external servers, where the data is processed and used to generate responses. Privacy advocates say this raises additional concerns about how children’s data is stored and used. Some toys may collect audio recordings, account details, or other personal information during conversations. If these systems are not carefully designed to protect children’s privacy, that data may be misused or retained without clear safeguards.

The report also points out that many AI-powered toys include disclaimers buried in their terms of service or product documentation. These disclaimers sometimes state that AI responses may not always be accurate or appropriate, shifting responsibility to parents while the toy itself is marketed directly to children.

This trend is important because AI technology is increasingly entering everyday consumer products, including products designed specifically for younger audiences. Toys that simulate conversations can have a strong influence on children, who often treat them as friends or learning tools.

Experts say children may have difficulty distinguishing between reliable information and AI-generated answers that may be speculative, biased, or incorrect. As AI systems continue to develop, ensuring that these technologies are optimized for children’s safety will become increasingly important.

The findings also highlight a broader regulatory challenge

Although many countries have laws designed to protect children’s online privacy, such as the Children’s Online Privacy Protection Act (COPPA) in the United States, these laws were established before the rise of generative AI.

Advocacy groups argue that regulators may need to revise safety standards and guidelines to address how AI systems interact with children through connected devices.

The PIRG report calls on toy manufacturers to implement stronger safeguards, including more robust content filtering, clear disclosures about the use of AI, and more transparent data practices. It also recommends that companies develop AI systems designed specifically for children rather than replicating models built for an adult audience.

Looking ahead, researchers say collaboration between technology companies, regulators, and child safety experts will be needed to ensure that AI-powered toys remain innovative and safe.

As artificial intelligence is increasingly embedded in everyday products, the challenge will be balancing the benefits of interactive technology with the responsibility of protecting young users from potential harm.
