UK Tech Companies and Child Safety Agencies to Test AI's Capability to Create Exploitation Content
Under new UK laws, tech firms and child safety organizations will be authorized to test whether artificial intelligence tools are capable of producing child sexual abuse material.
Significant Increase in AI-Generated Illegal Content
The announcement came as a child protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will permit designated AI developers and child safety organizations to examine AI models – the technology that underpins conversational AI and image-generation tools – and verify that they have adequate safeguards to stop them from producing depictions of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI models early."
Tackling Legal Challenges
The amendments address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and others could not generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
The legislation aims to avert that problem by allowing the creation of such material to be stopped at the source.
Legislative Framework
The government is introducing the amendments as revisions to its criminal justice legislation, which will also prohibit possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
Recently, the minister visited the London headquarters of a children's helpline, where he listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents," he said.
Concerning Statistics
A leading online safety organization said that reports of AI-generated exploitation content – each report can cover a web page containing multiple images – had more than doubled so far this year.
Instances of category A content – the most severe form of exploitation material – rose from 2,621 images or videos to 3,086 over the same period.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a vital step to guarantee AI products are safe before they are released," commented the head of the online safety organization.
"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a few clicks, providing offenders the ability to create possibly endless amounts of advanced, photorealistic exploitative content," she continued. "Material which additionally exploits victims' suffering, and renders young people, particularly female children, less safe both online and offline."
Support Session Information
Childline also published details of support sessions in which AI was mentioned. AI-related harms raised in those sessions include:
- Using AI to evaluate weight, body and appearance
- AI assistants dissuading young people from talking to trusted guardians about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and related topics were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and turning to AI therapy apps.