
New law targets AI-generated child sex abuse images

Groups tackling AI-generated child sexual abuse material could be given more powers to protect children online under a proposed new law.

Organisations like the Internet Watch Foundation (IWF), as well as AI developers themselves, will be able to test the ability of AI models to create such content without breaking the law. That would mean they could tackle the problem at the source, rather than waiting for illegal content to appear before acting, according to Kerry Smith, chief executive of the IWF.

The IWF deals with child abuse images online, removing hundreds of thousands every year. Ms Smith called the proposed law a "vital step to make sure AI products are safe before they are released".

How would the law work?

The changes are due to be tabled today as an amendment to the Crime and Policing Bill. The government said designated bodies could include AI developers and child protection organisations, and it will bring in a group of experts to ensure testing is carried out "safely and securely".

The new rules would also mean AI models can be checked to make sure they don't produce extreme pornography or non-consensual intimate images. "These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk," said Technology Secretary Liz Kendall.

"By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought." AI abuse material on the rise The announcement came as new data was published by the IWF showing reports of AI-generated child sexual abuse material have more than doubled in the past year. According to the data, the severity of material has intensified over that time.

The most serious category A content - images involving penetrative sexual activity, sexual activity with an animal, or sadism - has risen from 2,621 to 3,086 items, accounting for 56% of all illegal material, compared with 41% last year.

The data showed girls have been most commonly targeted, accounting for 94% of illegal AI images in 2025.

The NSPCC called for the new laws to go further and make this kind of testing compulsory for AI companies. "It's encouraging to see new legislation that pushes the AI industry to take greater responsibility for scrutinising their models and preventing the creation of child sexual abuse material on their platforms," said Rani Govender, policy manager for child safety online at the charity.

"But to make a real difference for children, this cannot be optional. "Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.".

