British Tech Firms and Child Protection Officials to Examine AI's Ability to Create Abuse Content

Tech firms and child protection organizations will receive permission to evaluate whether artificial intelligence systems can produce child exploitation images under new British legislation.

Significant Increase in AI-Generated Illegal Content

The announcement coincided with findings from a safety watchdog showing that reports of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the authorities will permit designated AI developers and child safety groups to inspect AI models – the underlying technology behind chatbots and image-generation tools – and ensure they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.

"This is fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems early."

Tackling Legal Challenges

The amendments have been implemented because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is aimed at preventing that problem by helping to halt the production of those materials at their origin.

Legal Framework

The amendments are being added by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI models designed to generate exploitative content.

Real-World Impact

Recently, the minister visited the London base of a children's helpline and listened to a mock-up conversation with advisers featuring an account of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about young people experiencing blackmail online, it causes extreme frustration in me and justified anger among families," he said.

Concerning Statistics

A leading internet monitoring foundation reported that instances of AI-generated exploitation content – including webpages that may each contain numerous images – had significantly increased so far this year.

Cases of the most severe content – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to guarantee AI products are secure before they are released," commented the head of the online safety foundation.

"Artificial intelligence systems have made it possible for survivors to be victimized repeatedly with just a few simple actions, giving criminals the ability to produce potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' trauma and makes young people, particularly female children, more vulnerable both online and offline."

Support Interaction Information

The children's helpline also published details of support interactions where AI has been referenced. AI-related harms discussed in the sessions include:

  • Employing AI to rate body size and appearance
  • AI assistants discouraging children from talking to safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline conducted 367 counselling interactions in which AI, chatbots and related terms were discussed – significantly more than in the same period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Denise Mitchell

A digital content strategist passionate about gaming and live streaming innovations, with years of experience in community building.