Protecting Children from AI-Generated Child Sexual Abuse Material

Artificial intelligence is becoming a bigger part of children’s lives – from helping with homework to powering games, search engines, and even personalised chatbots. But while AI brings many opportunities, it is also being misused in serious and harmful ways.
One of the most concerning developments is the creation of child sexual abuse material (CSAM) using AI tools. Some of these images are generated entirely from scratch. Others are altered from real photos or videos of children, manipulated into something abusive. Either way, this type of content poses a real and growing threat to child safety – and many people still aren’t aware that it exists.
This article explains what AI-generated CSAM is, how it’s being used, and what parents and caregivers can do to help protect children.
What is AI-generated CSAM?
AI-generated CSAM refers to images or videos created using artificial intelligence tools that depict child sexual abuse. Some of these images may look incredibly realistic, even though no real child was involved. Others are altered from real photographs – sometimes taken from social media or other online sources – and changed in ways that are abusive or exploitative.
Worryingly, this type of material is becoming easier to create, and some people are now using AI image generators to make illegal and harmful content at scale. In some cases, even young people have been involved in creating and sharing this material, often without fully understanding the consequences.
Is it really happening?
Yes. Unfortunately, this isn’t just a future risk – it’s happening right now.
An international police operation
In February 2025, Europol announced the arrest of 25 people worldwide as part of a crackdown on AI-generated CSAM. The suspects had used AI tools to create synthetic abuse material and share it via encrypted platforms. (Europol, 2025)
A school community in distress
In September 2024, news broke that AI-generated nude images of teenage girls had circulated within a high school in Pennsylvania. The images were fake, but the distress caused to the girls was very real. (AP News, 2024)
An exposed AI image database
In 2024, a South Korean AI image-generation company leaked a massive database containing more than 95,000 images, including illegal content such as AI-generated CSAM. The company had been offering explicit AI imagery on demand. (Wired, 2024)
Growing concern from experts
The Internet Watch Foundation (IWF), which works to remove child sexual abuse images from the internet, reported a significant rise in AI-generated imagery on one dark web forum between October 2023 and July 2024. Alarmingly, a large proportion of the images they assessed fell into Category A – the most serious category of abuse. (IWF, 2024)
What does the law say?
In the UK, it is illegal to create, possess, or share AI-generated CSAM. These materials are treated under the same laws that apply to non-AI-generated CSAM – and law enforcement can take action, even if the image was created entirely by artificial intelligence.
Many other countries have similar legislation or are introducing it. However, the law is still catching up with the technology: because AI capabilities are developing so quickly, many legal systems are still deciding how best to define and regulate this type of abuse.
How parents and caregivers can help protect children

You don’t need to be an expert in AI to help keep your child safe. Here are some practical ways to reduce the risks:
Talk openly and often
Keep the conversation about online safety going as your child grows. Let them know they can come to you if they ever feel unsure, upset, or pressured by anything they see or are asked to do online.
Be cautious about sharing images
It’s normal to want to share happy moments with friends and family – but it’s also a good idea to limit how many photos of your child you post publicly. AI tools can be used to turn innocent pictures into something harmful. Where possible and age-appropriate, ask your child for consent before posting images of them online.
Know what they’re using
Check the websites, games, and apps your child uses regularly. Set parental controls where appropriate, and talk together about privacy settings.
Help them think critically
Explain that not everything online is what it seems. AI can be used to create convincing – but completely fake – images and videos. Teach your child to question what they see and to tell you if something doesn’t feel right.
Know where to report
If you ever come across images or videos that seem abusive, even if they look fake, report them. In the UK, you can do this anonymously through the Internet Watch Foundation. If you believe a child is at immediate risk, call the police and contact the NSPCC.
Final thoughts
AI-generated CSAM is a rapidly growing problem. Even though the images may be fake, the harm they cause – and the risk they pose – is very real.
As parents, carers, and educators, we all have a role to play in protecting children. By staying informed, talking openly, and being cautious about how we share images and interact online, we can help build a safer future for the next generation.
At SAIFCA, we advocate for stronger safeguards, better regulation, and greater transparency in the development and use of AI systems.
To stay up to date with our work – and discover ways you can help – please sign up to our newsletter.
If you’d like to support our mission financially, we would be incredibly grateful for your donation.