
Teachers given new guidance on dealing with AI-generated child sexual abuse material

Guidelines on how to deal with AI-generated child sexual abuse material (CSAM) have been issued to 38,000 teachers and staff across the UK. The guidelines are an attempt to help people working with children tackle the "highly disturbing" rise in AI-generated CSAM.

They have been issued by the National Crime Agency (NCA) and the Internet Watch Foundation (IWF). The AI-generated content is illegal in the UK and is treated the same as any other sexual abuse imagery of children, even if the imagery isn't photorealistic.

"The rise in AI-generated child sexual abuse imagery is highly disturbing and it is vital that every arm of society keeps up with the latest online threats," said safeguarding minister Jess Phillips. "AI-generated child sexual abuse is illegal and we know that sick predators' activities online often lead to them carrying out the most horrific abuse in person.

"We will not allow technology to be weaponised against children and we will not hesitate to go further to protect our children online," she said. The guidelines suggest that if young people are using AI to create nude images from each other's pictures - known as nudifying - or creating AI-generating CSAM, they may not be aware that what they're doing is illegal.

Nudifying is when a non-explicit picture of someone is edited to make them appear nude and is increasingly common in "sextortion" cases - when someone is blackmailed with explicit pictures. "Where an under-18 is creating AI-CSAM, they may think it is 'just a joke' or 'banter' or do so with the intention of blackmailing or harming another child," suggests the guidance.

"They may or may not recognise the illegality or the serious, lasting impact their actions can have on the victim." Last year, the NCA surveyed teachers and found that over a quarter weren't aware AI-generated CSAM was illegal, and most weren't sure their students were aware either. More than half of the respondents said guidance was their most urgently needed resource.

The IWF has seen an increasing amount of AI-generated CSAM as it scours the internet, processing 380% more reports of the abuse in 2024 than in 2023. "The creation and distribution of AI-manipulated and fake sexual imagery of a child can have a devastating impact on the victim," said Derek Ray-Hill, interim chief executive at the IWF.

"It can be used to blackmail and extort young people. There can be no doubt that real harm is inflicted and the capacity to create this type of imagery quickly and easily, even via an app on a phone, is a real cause for concern." Multiple paedophiles have been sent to jail for using artificial intelligence to create child sexual abuse images in recent years.

Last year, Hugh Nelson was sentenced to 18 years in jail for creating AI-generated CSAM that police officers were able to link back to real children. "Tackling child sexual abuse is a priority for the NCA and our policing partners, and we will continue to investigate and prosecute individuals who produce, possess, share or search for CSAM, including AI-generated CSAM," said Alex Murray, the NCA's director of threat leadership and policing lead for artificial intelligence.

In February, the government announced that AI tools designed to generate child sex abuse material would be made illegal under "world-leading" legislation. In the meantime, however, campaigners called for guidance to be issued to teachers.

Laura Bates, the author of a book on the spread of online misogyny, told MPs earlier this month that deepfake pornography "would be the next big sexual violence epidemic facing schools, and people don't even know it is going on". "It shouldn't be the case that a 12-year-old boy can easily and freely access tools to create these forms of content in the first place," she said.
