MANILA, Philippines – Artificial intelligence-enabled child sexual abuse material (CSAM) increased in 2025 compared to the previous year, according to the latest report from the Internet Watch Foundation (IWF), a UK-based charity working to minimize the availability of CSAM hosted online.
The 2026 report, titled Harm without limits: AI child sexual abuse material through the eyes of our analysts, found that in 2025 the IWF assessed 8,029 AI-generated images and videos as showing realistic child sexual abuse, with the imagery appearing both on regular online platforms and on the dark web. This represents a 14% increase in criminal AI content from the previous year.
This AI-enabled CSAM is increasingly sophisticated: 65% of it, or 2,233 pieces in total, is "realistic full-motion AI video content" falling under classification A of the UK's grading system.
Classification A material, the most offensive type of CSAM, involves "penetrative sexual activity, sexual activity with an animal, or sadism."
IWF analysts have identified AI-generated child sexual abuse images being shared on AI chatbot services. These services may encourage users to act out simulated child sexual abuse scenarios and, because the generative models behind them are trained on photographed abuse imagery, can directly re-victimize survivors of sexual abuse.
Artificial intelligence tools have also grown advanced enough that creating AI CSAM now takes far less effort. The IWF says a convergence of AI tools "can now generate abusive imagery with minimal effort, removing the need for technical expertise and significantly lowering barriers to entry."
Alongside the report, polling conducted by Savanta for the IWF showed that more than four in five UK adults want regulation to ensure AI is safe by design.
The full IWF report is available here. – Rappler.com