British Tech Firms and Child Safety Agencies to Examine AI's Capability to Generate Abuse Images
Tech firms and child safety agencies will be permitted to test whether artificial intelligence tools can generate child sexual abuse images under new UK legislation.
Substantial Increase in AI-Generated Illegal Content
The announcement coincided with figures from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have risen sharply in the past twelve months, from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the authorities will permit approved AI companies and child protection organizations to inspect AI models – the foundational systems behind chatbots and image generators – to ensure they have sufficient safeguards against producing images of child exploitation.
"It is fundamentally about preventing exploitation before it happens," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems early."
Tackling Legal Obstacles
The amendments were needed because producing and possessing CSAM is illegal, meaning AI developers and other parties could not generate such images as part of an evaluation regime. Previously, authorities could act only after AI-generated CSAM had been published online.
This legislation aims to avert that problem by helping to stop the production of such material at its source.
Legislative Framework
The changes are being introduced as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.
Practical Impact
This week, the minister toured the London base of a children's helpline and listened to a simulated call with advisors involving a report of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about young people experiencing extortion online, it causes intense frustration in me and justified concern amongst families," he said.
Alarming Data
A leading online safety organization reported that cases of AI-generated abuse material – each of which may be a webpage containing numerous files – have risen significantly so far this year.
Instances of category A material – the gravest form of abuse content – increased from 2,621 to 3,086 visual files.
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of children aged newborn to two years old increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a vital step to ensure AI tools are safe before they are released," said the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few clicks, giving criminals the capability to produce potentially limitless amounts of sophisticated, lifelike abuse content," she added. "Content which further commodifies survivors' suffering, and renders young people, especially girls, more vulnerable both on and offline."
Support Session Data
The children's helpline also released data from counselling sessions in which AI was mentioned. AI-related risks raised in those conversations include:
- Using AI to evaluate body size, physique and looks
- AI assistants dissuading children from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling sessions where AI, conversational AI and associated terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.