UNICEF Warns of Growing Threat from AI-Generated Sexualised Images of Children

NEW YORK: UNICEF has raised the alarm over a sharp increase in the creation and circulation of AI-generated sexualised images of children, warning that so-called “deepfakes” constitute a serious and rapidly growing form of child sexual abuse.

In a recent statement, UNICEF said it is increasingly concerned by reports that photographs of children are being manipulated using artificial intelligence to create sexually explicit content, including through “nudification” tools that digitally remove or alter clothing to fabricate nude or sexualised images.

“Deepfakes – images, videos, or audio generated or manipulated with Artificial Intelligence designed to look real – are increasingly being used to produce sexualised content involving children,” UNICEF said. “Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”

New evidence highlights the scale of the threat. According to a joint study by UNICEF, ECPAT and INTERPOL conducted across 11 countries, at least 1.2 million children reported that images of them had been manipulated into sexually explicit deepfakes in the past year. In some countries, this equates to one in every 25 children — roughly one child in a typical classroom.

The study also found that children themselves are acutely aware of the risks posed by artificial intelligence. In several of the countries surveyed, up to two-thirds of children said they were worried that AI could be used to create fake sexual images or videos of them. Levels of concern varied widely, underscoring the urgent need for stronger awareness, prevention and protection measures.

UNICEF stressed that AI-generated or manipulated sexualised images of children must be recognised as child sexual abuse material (CSAM).

“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material,” the statement said. “When a child’s image or identity is used, that child is directly victimised.”

Even in cases where no identifiable child appears, UNICEF warned that AI-generated CSAM normalises the sexual exploitation of children, fuels demand for abusive material, and creates major challenges for law enforcement in identifying and protecting victims.

While welcoming efforts by some AI developers to adopt safety-by-design approaches and stronger safeguards, UNICEF cautioned that protections remain inconsistent. The risks are particularly acute when generative AI tools are integrated into social media platforms, where manipulated images can spread rapidly and widely.

To confront the escalating threat, UNICEF called for urgent action on multiple fronts. The agency urged governments to expand legal definitions of child sexual abuse material to explicitly include AI-generated content and to criminalise its creation, possession and distribution. It also called on AI developers to implement robust guardrails to prevent misuse of their technologies.

In addition, UNICEF urged digital companies to take proactive steps to prevent the circulation of AI-generated child sexual abuse material, rather than relying solely on removal after harm has occurred. This includes investing in detection technologies to ensure such content can be identified and removed immediately.

“The harm from deepfake abuse is real and urgent,” UNICEF said. “Children cannot wait for the law to catch up.”
