British Tech Companies and Child Safety Officials to Examine AI's Ability to Generate Abuse Content
Technology companies and child safety agencies will receive authority to assess whether AI tools can produce child exploitation material under new UK laws.
Significant Rise in AI-Generated Illegal Material
The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will allow approved AI companies and child safety groups to examine AI models – the foundational systems for conversational AI and visual AI tools – and ensure they have adequate protective measures to stop them from creating depictions of child exploitation.
The changes are "fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect the risk in AI models early."
Tackling Regulatory Obstacles
The changes address a legal obstacle: because it is against the law to produce or possess CSAM, AI developers and others could not generate such content even as part of an evaluation regime. Previously, officials could act only after AI-generated CSAM had been uploaded online.
This legislation aims to avert that problem by enabling approved experts to halt the production of such material at source.
Legal Framework
The amendments are being added by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, creating or sharing AI systems designed to generate exploitative content.
Real-World Consequences
Recently, the official visited the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The interaction depicted a teenager requesting help after being blackmailed using an explicit deepfake of himself, created with AI.
"When I hear about young people experiencing extortion online, it is a source of intense frustration in me and rightful anger amongst parents," he stated.
Concerning Statistics
A prominent internet monitoring foundation stated that instances of AI-generated exploitation material – each instance may be a web page containing numerous images – had significantly increased so far this year.
- Instances of category A content – the gravest form of exploitation – increased from 2,621 visual files to 3,086
- Girls were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring foundation.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving criminals the ability to create possibly endless amounts of advanced, lifelike exploitative content," she added. "Content which further exploits victims' suffering, and renders young people, particularly female children, more vulnerable on and off line."
Counseling Interaction Data
Childline also published data from support sessions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to assess weight, body shape and appearance
- Chatbots dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, Childline conducted 367 counselling interactions where AI, conversational AI and associated topics were mentioned, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.