Indonesia’s AI-Generated Child Sexual Abuse Threat

Indonesia must adapt its laws to close loopholes that undermine effective deterrence of synthetic sexual abuse targeting children. Credit: Road Ahead/Unsplash

Worrying Development

In March 2025, chilling news emerged from Ngada Regency, East Nusa Tenggara: the regency’s police chief was found to have sexually abused three children, recorded his acts and uploaded the videos to an Australia-based pornography website. The Australian authorities alerted Indonesia’s Ministry of Women’s Empowerment and Child Protection after detecting the illicit footage online.

The perpetrator, AKBP Fajar Widyadharma Lukman Sumaatmaja, was later stripped of his position and charged with sexual abuse and drug offences.

This revelation underscores three intertwined offences: physical sexual violence against minors, the production of child sexual abuse material (CSAM) and its digital dissemination.

Though this case did not involve sophisticated technology, it demonstrates how readily abuse imagery circulates online and thus points to the imperative of criminalising AI-generated CSAM in Indonesia. A pertinent question follows: to what extent should the law evolve to address artificial child abuse imagery so as to strengthen prevention, legal enforcement and child protection?

Despite the gravity of CSAM-related crimes, current charges under the Information and Electronic Transactions Act (ITE Law 1/2024) and Articles 55–56 of the Criminal Code merely carry a maximum penalty of six years’ imprisonment or a fine of up to Rp1bn.

Meanwhile, Article 45(1) of the ITE Law addresses the distribution of child pornography; Article 4(1) of the Pornography Law 44/2008 tackles the production of pornographic content; and Article 4(2)(c) of the Sexual Violence Law 12/2022 covers the same conduct.

However, these provisions are applied in isolation, resulting in fragmented sentencing and legal loopholes. Notably, no statute explicitly criminalises synthetic or AI‑generated CSAM, leaving law enforcement ill‑equipped to tackle digitally fabricated child sexual abuse images.

Criminalisation is a Must

Cesare Beccaria, a pioneer of modern criminology, maintains in his magnum opus, On Crimes and Punishments, that the ultimate aim of punishment must be deterrence rather than vengeance. He posits that individuals are rational actors who weigh the potential benefits of wrongdoing against the likelihood and severity of punishment. When sanctions are both certain and prompt, the perceived risk of detection outweighs any criminal gains.

In the context of CSAM, his insights underscore the necessity of a legal framework in which consequences are clearly defined, consistently applied and swiftly enforced—thereby deterring would‑be offenders before any harm occurs.

The production and consumption of CSAM form a self-reinforcing loop: perpetrators generate imagery by abusing children, and this emboldens consumers of such material to commit new offences. There is thus an imperative to break this chain of exploitation. By disrupting the creation, distribution or possession of CSAM, a jurisdiction can suppress the market for such material and protect vulnerable children.

Some International Responses

Compounding the problem, recent years have seen an alarming spike in AI-generated CSAM. The Internet Watch Foundation reported that, from October 2023 to July 2024, roughly 3,500 newly produced AI-generated images depicting child sexual abuse appeared on monitored dark-web forums. Although the overall volume of content has ebbed, the amount of material classified as criminal has steadily climbed.

These synthetic depictions, though created without direct contact with real children, nonetheless perpetuate exploitative narratives and pose serious challenges to existing legal definitions of abuse.

Article 34 of the United Nations Convention on the Rights of the Child (UNCRC) obligates each of its 196 state parties to shield minors from all forms of sexual exploitation, including those enabled by novel technologies. As paraphrased from an article by media scholar Sonia Livingstone and her colleagues, the UNCRC’s preventive mandate must be interpreted in light of digital transformations, ensuring that emerging forms of harm – real or simulated – fall within its protective scope.

Thus, criminalising AI‑generated CSAM is aligned with the prevailing international commitment to prevent – rather than merely address – abuses against children.

Despite its artificial origin, AI-generated CSAM can cause genuine trauma. A report by Nepal-based ChildSafeNet illustrates how highly realistic, AI-crafted pictures and videos can trigger deep psychological distress among victims and the broader community, even when no actual child was harmed in their production.

Furthermore, a United Nations Interregional Crime and Justice Research Institute (UNICRI) study reveals that some generative models are trained on datasets containing illicit material, effectively recycling real-world abuses into new, synthetic content. This cycle of reproduction exacerbates victimisation, as the constant availability of seemingly authentic imagery can re-traumatise survivors and perpetuate the stigma of abuse.

In response to these evolving threats, the United Kingdom has enacted legislation criminalising the creation, possession and distribution of AI-generated CSAM. By removing the need for law-enforcement agents and prosecutors to distinguish between real and synthetic content – a task growing ever more difficult as generative algorithms advance – the United Kingdom’s approach fortifies child protection and streamlines judicial processes.

This development serves as an international precedent, demonstrating how preventive criminal law can adapt to encompass new technological modalities.

Indonesia’s Position

Indonesia, which ranked fourth globally and second in ASEAN for CSAM distribution, reported more than 5.5 million cases over the past four years. With 89% of children over the age of five using the internet primarily for social media, according to Indonesia’s Central Statistics Agency (2021), the nation confronts heightened risks of online exploitation.

The Ministry of Communication and Digital Affairs (Komdigi) has already established a digital child-safety working group, reflecting its commitment to combating CSAM. Yet, to stay ahead of emerging threats – particularly synthetic content – Komdigi must expand its mandate beyond age checks to encompass technology-driven harms.

First, Komdigi should spearhead a regulation explicitly outlawing AI-generated CSAM. This may be a quicker route than pushing the House of Representatives (DPR) to formulate and pass a new law. By integrating such provisions into national law, Indonesia would signal zero tolerance for any depiction of child sexual exploitation, real or artificial.

Second, Komdigi should pursue partnerships with social media and technology platforms to deploy advanced detection tools. Industry initiatives like the Robust Open Online Safety Tools consortium illustrate how public–private collaboration can accelerate the identification and removal of harmful content at scale.

Finally, Komdigi must convene expert working groups to draft comprehensive, future-proof regulations. Anticipating the next wave of digital offences, such as deepfake abuse, will ensure that policies remain effective as offenders adopt ever more sophisticated means.

Conclusion

Focusing solely on age verification risks neglecting the broader technological landscape in which predators operate. Generative AI not only produces new CSAM but also empowers abusers to evade detection and to re-traumatise victims through eerily authentic imagery. By criminalising AI-generated content, fostering cross-sector collaborations for rapid removal, and preparing robust, adaptable legislation, Indonesia can strengthen its national duty to protect children and uphold its international commitments in the digital age.


The views expressed are those of the authors and do not necessarily reflect those of STRAT.O.SPHERE CONSULTING PTE LTD.

This article is published under a Creative Commons Licence. Republication minimally requires 1) crediting the authors and their institutions, and 2) crediting STRAT.O.SPHERE CONSULTING PTE LTD and including a link back to either our home page or the article URL.

Author

  • Hanif Abdul Halim obtained his Bachelor of Law from the International Program, Universitas Islam Indonesia. He is currently pursuing a Master’s degree in Law and Technology at Utrecht University. His experience in the legal field includes stints as a corporate lawyer and as in-house counsel at technology and telecommunications companies. He is also active as a researcher at Pusat Studi Hak Kekayaan Intelektual Universitas Islam Indonesia (PSHKI UII). Hanif is building on his strong interest in Law and Technology, Data Privacy, AI Governance, and Intellectual Property (IP) law.