An AI toy exposed 50,000 logs of its chats with children to anyone with a Gmail account


Even now that the data has been secured, Margolis and Thacker say the incident raises questions about how many people within the companies that make AI toys have access to the data they collect, how that access is monitored, and whether their credentials are protected. “This has cascading consequences for privacy,” says Margolis. “All it takes is one employee with a bad password, and we’re back to the same starting point, where everything is exposed to the public internet.”

Margolis adds that this kind of sensitive information about a child’s thoughts and feelings could be used for horrific forms of child abuse or manipulation. “To be honest, it’s a kidnapper’s dream,” he says. “We’re talking about information that would allow someone to lure a child into a really dangerous situation, and it was essentially available to anyone.”

Margolis and Thacker point out that beyond the accidental data exposure, Bondu also appears, based on what they saw in its admin console, to use Google’s Gemini and OpenAI’s GPT-5, and may therefore share information about children’s conversations with those companies. Bondu’s Anam Rafid responded to this point in an email, stating that the company uses “third-party enterprise AI services to generate responses and perform certain security checks, which involves the secure transmission of relevant conversation content for processing.” But he adds that the company takes precautions to “minimize what is sent, use contractual and technical controls, and operate in enterprise configurations where vendor prompts/outputs are not used to train their models.”

The two researchers also warn that part of the risk for AI toy companies may lie in the fact that they are more likely to use AI to build their products, tools, and web infrastructure. They suspect that the insecure Bondu console they discovered was itself “vibe coded,” built with generative AI programming tools that can introduce security flaws. Bondu did not respond to WIRED’s question about whether the console was built with AI coding tools.

Warnings about the risks AI toys pose to children have mounted in recent months, but they have largely focused on the threat that a toy’s conversations will veer into inappropriate topics or even steer kids toward dangerous behavior or self-harm. NBC News, for example, reported last month that the AI toys its reporters chatted with offered detailed explanations of sexual terms and advice on how to sharpen knives, and even appeared to echo Chinese government propaganda, declaring, for example, that Taiwan is part of China.

Bondu, on the other hand, appears to have at least attempted to build safeguards into the AI chatbot that children interact with. The company even offers a $500 bounty for reports of an “inappropriate response” from the toy. “We’ve had this program for over a year and no one has been able to get it to say anything inappropriate,” the company’s website says.

Yet at the same time, Thacker and Margolis found that Bondu was leaving all of its users’ sensitive data fully exposed. “It’s a perfect juxtaposition of safety and security,” says Thacker. “What does ‘AI safety’ matter when all of the data is exposed?”

Thacker says that before digging into Bondu’s security, he had considered giving AI-enabled toys to his own children, just as his neighbor had done. Seeing Bondu’s data exposure changed his mind.

“Do I really want this in my house? No, I don’t,” he says. “It’s kind of a privacy nightmare.”


