UK Tech Firms and Child Safety Officials to Test AI's Ability to Create Exploitation Content
Tech firms and child protection agencies will be given the power to test whether artificial intelligence tools can generate child exploitation images under new UK laws.
Significant Increase in AI-Generated Illegal Material
The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow approved AI companies and child safety organizations to inspect AI systems – the underlying technology for chatbots and image generators – and verify they have adequate safeguards to prevent them from creating images of child exploitation.
"Fundamentally about stopping abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the risk in AI systems promptly."
Addressing Regulatory Challenges
The amendments were introduced because it is illegal to create and possess CSAM, which meant AI developers and others could not generate such content even as part of safety testing. As a result, officials previously had to wait until AI-generated CSAM appeared online before taking action.
This law is designed to prevent that problem by helping to stop the creation of such images at the source.
Legislative Structure
The government is introducing the changes as amendments to the crime and policing bill, which will also establish a ban on possessing, producing or distributing AI models designed to generate child sexual abuse material.
Practical Consequences
This week, the minister visited the London base of a children's helpline and listened in on a mock-up call to counsellors involving a report of AI-enabled abuse. The scenario depicted a teenager seeking help after being blackmailed with a sexualised deepfake of himself created using AI.
"When I learn about young people experiencing extortion online, it is a source of extreme anger in me and rightful anger amongst parents," he stated.
Alarming Data
A leading internet monitoring organization said reports of AI-generated abuse material – each of which can refer to a web page containing many images – had risen sharply so far this year.
Instances of the most severe category of material increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, depicted in 94% of illegal AI images in 2025
- Depictions of children aged two and under rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step in ensuring AI tools are safe before they are released," said the head of the internet monitoring organization.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, giving offenders the ability to create potentially limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which additionally commodifies victims' suffering, and makes children, particularly female children, more vulnerable on and off line."
Counselling Session Information
The children's helpline also released details of counselling sessions where AI has been referenced. AI-related risks discussed in the sessions include:
- Using AI to rate weight, body shape and appearance
- AI chatbots dissuading children from telling trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.