The National Center for Missing & Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of this content was flagged by Amazon, which found the material in its own training data through an internal investigation, Bloomberg reported. Amazon said only that the inappropriate content came from external sources used to train its AI services and claimed it could not provide further details about where the CSAM originated.
"It's really an outlier," Fallon McNulty, executive director of NCMEC's CyberTipline, told Bloomberg. The CyberTipline is where many types of U.S.-based businesses are legally required to report suspected child sexual abuse or exploitation. "Receiving such a high volume throughout the year raises many questions about where the data comes from and what protections are in place." McNulty added that, unlike Amazon's, the AI-related reports the organization received from other companies last year included actionable data it could pass on to law enforcement. Because Amazon doesn't disclose its sources, she said, its reports proved "unactionable."
"We take a deliberately cautious approach to analyzing core model training data, including public web data, to identify and remove known [child sexual abuse material] and protect our customers," an Amazon representative said in a statement to Bloomberg. The spokesperson also said Amazon aims to over-report to NCMEC rather than risk missing cases, and that the company removed suspected CSAM before the training data was fed into its AI models.
Child safety has become a major concern for the artificial intelligence industry in recent months. AI-related CSAM reports have skyrocketed in NCMEC's records: against the more than 1 million such reports the organization received in 2025, the 2024 total was 67,000, and 2023 saw only 4,700.
Beyond the use of abusive content to train models, AI chatbots have also been involved in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers planned their suicides using the companies' platforms. Meta is likewise being sued for allegedly failing to protect teenage users from sexually explicit conversations with chatbots.




