The National Center for Missing & Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The “vast majority” of that content was reported by Amazon, which found the material in its training data, according to an investigation by Bloomberg. However, Amazon said only that the inappropriate content came from external sources used to train its AI services, and claimed it could not provide further details about where the CSAM originated.
“This is really an outlier,” Fallon McNulty, executive director of NCMEC’s CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.” She added that the AI-related reports the organization received from other companies last year included actionable data it could pass along to law enforcement. Because Amazon isn’t disclosing its sources, McNulty said, its reports have proved “inactionable.”
“We take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known [child sexual abuse material] and protect our customers,” an Amazon representative said in a statement to Bloomberg. The spokesperson also said that Amazon aimed to over-report its figures to NCMEC in order to avoid missing any cases. The company said that it removed the suspected CSAM content before feeding training data into its AI models.
The safety of minors has emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM reports have skyrocketed in NCMEC’s records: the organization received more than 1 million such reports last year, up from 67,000 in 2024 and just 4,700 in 2023.
Beyond abusive content turning up in training data, AI chatbots have been implicated in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers planned their suicides using the companies’ platforms. Meta is also being sued for alleged failures to protect teen users from sexually explicit conversations with chatbots.
Update: Jan 30, 4:00am ET:
An Amazon spokesperson has shared the following statements with Engadget:
“Amazon is committed to preventing CSAM across all of its businesses, and we are not aware of any instances of our models generating CSAM. In accordance with our commitments to responsible AI and the Generative AI Principles to Prevent Child Abuse, we take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known CSAM and protect our customers. While our proactive safeguards cannot provide the same detail in NCMEC reports as consumer-facing tools, we stand by our commitment to responsible AI and will continue our work to prevent CSAM.”
“We intentionally use an over-inclusive threshold for scanning, which yields a high percentage of false positives.”
“When we set up this reporting channel in 2024, we informed NCMEC that we would not have sufficient information to create actionable reports, because of the third-party nature of the scanned data. The separate channel ensures that these reports would not dilute the efficacy of our other reporting channels. Because of how this data is sourced, we don’t have the data that comprises an actionable report.”
