Meta has faced serious questions over how it allows underage users to interact with its AI chatbots. Most recently, internal communications obtained by the New Mexico Attorney General’s office revealed that while Meta CEO Mark Zuckerberg opposed chatbots having “explicit” conversations with minors, he also rejected the idea of placing parental controls on the feature.
Reuters reported that in an exchange between two anonymous Meta employees, one wrote: “we tried hard to get parental controls to turn off GenAI – but GenAI execs pushed back on Mark’s decision.” In its statement to the publication, Meta accused New Mexico’s attorney general of “cherry-picking documents to paint an imperfect and inaccurate picture.” New Mexico is suing Meta over accusations that the company “failed to stem the tide of harmful sexual material and sexual propositions directed at children”; the case is expected to go to trial in February.
Though they have only been available for a relatively brief period, Meta’s chatbots have already racked up a number of behaviors that veer into the offensive and even the illegal. In April 2025, The Wall Street Journal published an investigation that found Meta’s chatbots could engage in fantasy sexual conversations with minors, or could be asked to imitate a minor and engage in sexual conversation. The report claimed that Zuckerberg had wanted looser guardrails around Meta’s chatbots, but a spokesperson denied that the company had neglected the protection of children and adolescents.
An internal review document revealed in August 2025 detailed several hypothetical situations outlining which chatbot behaviors would be allowed, and the line between sensual and sexual seemed quite blurry. The document also permitted chatbots to argue racist positions. At the time, a representative told Engadget that the offending passages were hypothetical examples rather than actual policies (which doesn’t seem like much of an improvement) and that they had been removed from the document.
Despite the multiple cases of questionable chatbot behavior, Meta only decided to suspend teen accounts’ access to them last week. The company said it was temporarily removing access while it developed the parental controls that Zuckerberg had allegedly refused to implement.
“Parents have long been able to see if their teens are chatting with AI characters on Instagram, and in October we announced our plans to go further, creating new tools to give parents more control over their teens’ experiences with AI characters,” a Meta representative said. “Last week, we once again reinforced our commitment to delivering on our promise on parental controls for AI, by completely suspending teen access to AI characters until the updated version is ready.”
New Mexico filed its lawsuit against Meta in December 2023 over allegations that the company’s platforms failed to protect minors from adult harassment. Internal documents that surfaced early in the case indicated that 100,000 child users were harassed daily on Meta’s services.
Updated, January 27, 2025, 6:52 p.m. ET: Added a statement from Meta’s spokesperson.
Updated, January 27, 2025, 6:15 p.m. ET: Corrected incorrect timing of New Mexico lawsuit, which was filed in December 2023, not December 2024.