UNICEF urges action as AI deepfakes drive child sexual exploitation


The United Nations Children’s Fund (UNICEF) has sounded the alarm over a rapid surge in AI-generated sexually explicit material.

The agency warns that the manipulation of children’s photographs into fabricated abuse imagery constitutes a fast-growing global threat.

UNICEF reveals millions of child victims in past year

In a press statement, the agency stated that artificial intelligence is increasingly weaponised to create deepfakes of minors. This includes the use of nudification tools to alter clothing in ordinary images, producing nude or sexualised content.

New evidence from a collaborative study by UNICEF, ECPAT International, and INTERPOL across 11 countries confirms the alarming scale. The data reveals at least 1.2 million children suffered the manipulation of their images into sexually explicit deepfakes within one year alone.

“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF emphasised.

It also noted that deepfake abuse is abuse, and that there is nothing fake about the harm it causes.

Agency demands criminalisation and stricter tech safeguards

UNICEF warned that such material directly victimises the child whose identity is used and fuels demand for abusive content. It also presents severe challenges for law enforcement in identifying real victims.

While welcoming safety initiatives by some AI developers, UNICEF criticised the uneven environment where many models lack adequate safeguards. The integration of generative AI tools into social media platforms, where manipulated images can spread rapidly, compounds the danger.

The agency further issued an urgent call for governments to criminalise all AI-generated child sexual abuse material. It demanded that AI firms implement “safety-by-design” guardrails.

The agency also urged digital companies to prevent the circulation of such content through stronger moderation and detection technologies.

Dr. Burhanettin Demircioglu, a social security engineering expert and academic, endorsed UNICEF’s alert. He declared such AI-generated material to be abuse, stating the technology must never make the exploitation of children acceptable.

In another development, the Molly Rose Foundation (MRF) issued a public warning about online networks known as “the com” that target vulnerable children for sexual abuse, self-harm, and suicide. The UK-based charity released the alert following publication of a comprehensive report detailing the scale and nature of these threats.

Similarly, children’s charities and online safety experts have repeatedly warned about the growing risks of digital exploitation facing vulnerable young people. Concerns have centred on the need for stronger safeguarding measures and coordinated responses to prevent harm facilitated through online platforms.
