UK Tech Companies and Child Protection Officials to Test AI's Capability to Create Abuse Images
Technology companies and child protection agencies will be granted authority to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced UK legislation.
Significant Increase in AI-Generated Harmful Content
The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will allow designated AI companies and child protection groups to examine AI models – the underlying systems behind chatbots and image generators – and verify that they have adequate safeguards to stop them from creating images of child sexual abuse.
"Ultimately, this is about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under strict protocols, can now identify the danger in AI systems early."
Addressing Legal Challenges
The amendments address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and others could not generate such content even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before they could act.
The law is designed to avert that issue by helping to halt the production of such images at source.
Legal Framework
The changes are being introduced as amendments to criminal justice legislation, which will also prohibit possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Impact
This week, the minister visited the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse. The interaction depicted an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children facing blackmail online, it is a source of intense frustration to me and of rightful concern to parents," he stated.
Concerning Data
A prominent online safety organization reported that cases of AI-generated abuse content – in the form of web pages that can each contain multiple images – had more than doubled so far this year.
Cases of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a crucial step to ensure AI products are safe before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it possible for survivors to be victimised all over again with just a few clicks, giving criminals the capability to create potentially limitless amounts of advanced, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and renders young people, particularly girls, less safe both online and offline."
Counseling Session Data
The children's helpline also published details of counselling interactions in which AI was referenced. AI-related risks mentioned in the conversations include:
- Employing AI to evaluate body size and appearance
- AI assistants discouraging young people from consulting trusted guardians about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline conducted 367 counselling interactions in which AI, conversational AI and related terms were discussed – significantly more than in the equivalent period last year.
Half of the references to AI in the 2025 interactions were connected with mental health and wellbeing, including using chatbots for support and AI therapeutic applications.