Indonesia’s AI-Generated Child Sexual Abuse Threat
Indonesia must adapt its laws to close loopholes that undermine effective deterrence of synthetic sexual abuse material targeting children. Credit: Road Ahead/Unsplash

Worrying Development

In March 2025, chilling news emerged from Ngada Regency, East Nusa Tenggara: the regency’s police chief was found to have sexually abused three children, recorded his acts and uploaded the videos to an Australia-based pornography website. The Australian authorities alerted Indonesia’s Ministry of Women’s Empowerment and Child Protection after detecting the illicit footage online.

The perpetrator, AKBP Fajar Widyadharma Lukman Sumaatmaja, was later stripped of his position and charged with sexual abuse and drug offences.

This revelation underscores three intertwined offences: the physical sexual violence against minors, the production of child sexual abuse material (CSAM) and its digital dissemination.

Though this case did not involve the use of sophisticated technology, it points towards the imperative to criminalise AI‑generated CSAM in Indonesia. A pertinent question is this: to what extent should the law evolve to address artificial child abuse imagery to boost prevention, legal enforcement and child protection?

Despite the gravity of CSAM-related crimes, current charges under the Information and Electronic Transactions Act (ITE Law 1/2024) and Articles 55–56 of the Criminal Code merely carry a maximum penalty of six years’ imprisonment or a fine of up to Rp1bn.

Meanwhile, Article 45(1) of the ITE Law addresses the distribution of child pornography; Article 4(1) of the Pornography Law 44/2008 tackles the production of pornographic content; and Article 4(2)(c) of the Sexual Violence Law 12/2022 targets the same conduct.

However, these provisions are applied in isolation, resulting in fragmented sentencing and legal loopholes. Notably, no statute explicitly criminalises synthetic or AI‑generated CSAM, leaving law enforcement ill‑equipped to tackle digitally fabricated child sexual abuse images.

Criminalisation is a Must

Cesare Beccaria, a pioneer of modern criminology, maintains in his magnum opus, On Crimes and Punishments, that the ultimate aim of punishment must be deterrence rather than vengeance. He posits that individuals are rational actors who weigh the potential benefits of wrongdoing against the likelihood and severity of punishment. When sanctions are both certain and prompt, the perceived risk of detection outweighs any criminal gains.

In the context of CSAM, his insights underscore the necessity of a legal framework in which consequences are clearly defined, consistently applied and swiftly enforced – thereby deterring would‑be offenders before any harm occurs.

The production and consumption of CSAM form a self‑reinforcing loop. Perpetrators generate imagery by abusing children, emboldening consumers of such material to commit new offences. There is an imperative to cut off this chain of exploitation. By disrupting the creation, distribution or possession of CSAM, a jurisdiction can suppress the market for such material and protect vulnerable children.

Some International Responses

Compounding the problem, recent years have seen an alarming spike in AI‑generated CSAM. The Internet Watch Foundation reported that from October 2023 to July 2024, roughly 3,500 newly produced AI‑generated images depicting child sexual abuse appeared on monitored dark‑web forums. Although the overall volume of content has ebbed, the number of materials classified as criminal has steadily climbed.

These synthetic depictions, though created without direct contact with real children, nonetheless perpetuate exploitative narratives and pose serious challenges to existing legal definitions of abuse.

Article 34 of the United Nations Convention on the Rights of the Child (UNCRC) obligates each of its 196 state parties to shield minors from all forms of sexual exploitation, including those enabled by novel technologies. As paraphrased from an article by media scholar Sonia Livingstone and her colleagues, the UNCRC’s preventive mandate must be interpreted in light of digital transformations, ensuring that emerging forms of harm – real or simulated – fall within its protective scope.

Thus, criminalising AI‑generated CSAM is aligned with the prevailing international commitment to prevent – rather than merely address – abuses against children.

Despite their artificial origin, AI‑generated CSAM images can cause genuine trauma. The Nepal ChildSafeNet report illustrates how highly realistic, AI‑crafted pictures and videos can trigger deep psychological distress among victims and the broader community, even when no actual child was harmed in their production.

Furthermore, a United Nations Interregional Crime and Justice Research Institute (UNICRI) study reveals that some generative models are trained on datasets containing illicit material, effectively recycling real‑world abuses into new, synthetic content. This cycle of reproduction exacerbates victimisation, as the constant availability of seemingly authentic imagery can re-traumatise survivors and perpetuate the stigma of abuse.

In response to these evolving threats, the United Kingdom has enacted legislation criminalising the creation, possession and distribution of AI‑generated CSAM. By removing the need for law‑enforcement agents and prosecutors to distinguish between real and synthetic content – a task growing ever more difficult as generative algorithms advance – the United Kingdom’s approach fortifies child protection and streamlines judicial processes.

This development serves as an international precedent, demonstrating how preventive criminal law can adapt to encompass new technological modalities.

Indonesia’s Position

Indonesia, which ranked fourth globally and second in ASEAN for CSAM distribution, reported more than 5.5 million cases over the past four years. With 89% of children over the age of five using the internet primarily for social media, according to Indonesia’s Central Statistics Agency (2021), the nation confronts heightened risks of online exploitation.

The Ministry of Communication and Digital Affairs (Komdigi) has already established a digital child‑safety working group, reflecting its commitment to combating CSAM. Yet, to stay ahead of emerging threats – particularly synthetic content – Komdigi must expand its mandate beyond age checks to encompass technology‑driven harms.

First, Komdigi should spearhead a regulation explicitly outlawing AI‑generated CSAM. This might be a quicker route than pushing the House of Representatives (DPR) to formulate and pass a law. By integrating such provisions into its regulatory framework, Indonesia would make clear its zero-tolerance position on any depiction of child sexual exploitation, real or artificial.

Second, Komdigi should pursue partnerships with social media and technology platforms to deploy advanced detection tools. Industry initiatives like the Robust Open Online Safety Tools consortium illustrate how public–private collaboration can accelerate the identification and removal of harmful content at scale.
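As a rough illustration of how such detection tooling works, the sketch below checks an uploaded image’s perceptual hash against a shared blocklist. Everything here is hypothetical: real deployments rely on proprietary algorithms such as PhotoDNA and vetted hash lists maintained by child-protection bodies, not an ad-hoc set like the `KNOWN_HASHES` placeholder used here.

```python
from PIL import Image
import imagehash  # third-party: pip install ImageHash

# Hypothetical blocklist of perceptual hashes. A real platform would load
# a vetted hash list supplied by a child-protection body, not hard-code one.
KNOWN_HASHES = {
    imagehash.hex_to_hash("fd01010101017f7f"),
}

MAX_DISTANCE = 5  # Hamming-distance threshold for a "near match"

def is_flagged(path: str) -> bool:
    """Return True if the image perceptually matches a blocklisted hash."""
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_flagged("upload.jpg"))
```

Unlike a cryptographic hash, a perceptual hash tolerates resizing and re-compression, which is why this family of techniques underpins large-scale matching of known harmful imagery across platforms.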

Finally, Komdigi must convene expert working groups to draft comprehensive, future‑proof regulations. Anticipating the next wave of digital offences, such as deepfake abuse, will ensure that policies remain effective as offenders adopt ever more sophisticated means.

Conclusion

Focusing solely on age verification risks neglecting the broader technological landscape in which predators operate. Generative AI not only produces new CSAM but also empowers abusers to evade detection and re-traumatise victims through eerily authentic imagery. By criminalising AI‑generated content, fostering cross‑sector collaborations for rapid removal, and preparing robust, adaptable legislation, Indonesia can strengthen its national duty to protect children and uphold its international commitments in the digital age.

AI Development for the Global South
There is a large gap between the Global North and Global South in terms of AI adoption. Credit: Martin Sanchez/Unsplash.

Introduction

Artificial Intelligence (AI) is reshaping the world in profound ways, revolutionizing industries, economies and societies. Yet, as AI’s influence extends, it becomes increasingly imperative to ensure that its development and deployment are not only technically proficient but also ethically sound and inclusive.

Inclusive AI development signifies not only technological advancement but also the infusion of ethical considerations into the AI landscape. The ethical dimensions of AI span far and wide, including fairness in algorithms, mitigating biases, transparency, accountability and safeguarding against discriminatory practices. These considerations are not only critical from a moral standpoint but are also central to the sustained advancement and societal acceptance of AI technologies.

Concurrently, the Global South – comprising diverse nations across Africa, Latin America, Asia and other regions – stands at the cusp of an AI revolution. As AI promises unparalleled transformative potential, it is crucial to explore how this technology can be harnessed to address the unique challenges and opportunities faced by the Global South.

It is also essential to ensure that the benefits of this transformative technology are shared by all, transcending geographical and societal boundaries. Inclusive and equitable access to AI is not a mere aspiration but an imperative that will define AI’s path through the 21st century.

Inclusive AI Development

Inclusive AI development is a holistic approach that goes beyond technical excellence to incorporate ethical, societal and human-centered considerations. It aims to harness the power of AI for the betterment of all, while mitigating the potential risks and challenges associated with these technologies. This approach is essential for building AI systems that are not only cutting-edge but also responsible, equitable and aligned with human values.

The development of AI excites not only the Global North but also people in the South. The positive implications AI might bring are long-awaited by the private sector, governmental bodies and researchers from multiple disciplines. AI has the potential to empower people at the grassroots level by providing access to innovative solutions and services. For instance, micro, small and medium enterprises (MSMEs) can leverage AI for tasks such as automated customer support, inventory management and personalized marketing, enabling them to operate more efficiently and compete in the digital economy.

AI is also seen as a catalyst for economic development in the Global South, which comprises many developing nations. These countries naturally want to experience the benefits AI offers. However, several challenges – limited funding, less sophisticated infrastructure and a shortage of skilled personnel – leave AI development in the South lagging behind the North. Thus, despite AI’s rapid advancement, its adoption by many stakeholders in the South remains a distinct challenge.

Ethical AI and Data Governance

One of the main pillars of AI development is the large amount of data fed into its systems. It is not just a matter of quantity: the better the quality of the data given to an AI system, the better the outcomes it will be able to produce.

Data governance can therefore be a major barrier to ethical AI development that avoids harming people. The barriers can be mapped into two broad parts: 1) treating data as confidentially as possible; and 2) giving clear guidance on the standards for datasets fed into AI systems.

The first is vital because privacy is a fundamental human right. Violating it can lead to worse outcomes, such as the undermining of one’s dignity or safety. Thus, safeguarding the data fed into an AI system is crucial.

On the second, policymakers at the national or regional level can work together to establish clear guidance on the standards to be met when feeding data into an AI system during its training phase. Under the General Data Protection Regulation (GDPR), for instance, data subjects even have the right to be forgotten from a data controller’s system.

Currently, the Global South has no legally binding instrument offering clear guidance on when, how and under what circumstances a person may request that their data be deleted from an AI system. This gap can erode public trust in the data collected, processed and presented by AI systems, such as the Large Language Models (LLMs) underpinning ChatGPT.

In addition, data governance for AI must also uphold the intellectual property in every dataset an AI developer collects. Not all data on the internet is free for use in the first place.

Data such as written works, a collection of chords from a song, or even complex backend source code may already be registered as someone’s copyright in national or international jurisdictions.

The use of huge amounts of data collected and stored in an AI system during its training phase could deprive the initial creator of economic value. If such a thing happens frequently and collectively, it will impact economic growth, especially in the creative industry, where appreciation of one’s work depends on intellectual property and the economic value within it.

Bridging the Global AI Gap

The AI advances we see and experience today are largely a Global North story. Qualified personnel, adequate infrastructure, abundant funding and policy frameworks nearing finalization provide major capital for AI’s rapid development.

Apart from that, the heavy use of AI in Western countries also means demand for investment is high. The large number of early AI adopters appeals to investors, and the liquidity available (both venture capital and private equity) to support AI development continues to grow.

AI is increasingly being adopted in the public sector as well. Belgium’s CitizenLab, a civic technology company, aims to empower civil servants with machine-learning-augmented processes that help them analyze citizen input, make better decisions and collaborate more efficiently internally. CitizenLab’s platform uses Natural Language Processing (NLP) and Machine Learning (ML) techniques to automatically classify and analyze thousands of contributions collected on citizen participation platforms.
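CitizenLab has not published its models, but a minimal sketch of this kind of text-classification pipeline might look as follows; the categories, training texts and the scikit-learn baseline are illustrative assumptions, not CitizenLab’s actual stack.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data; a real platform would train on thousands of
# citizen contributions, likely with a multilingual model.
texts = [
    "The bike lanes on Main Street need repair",
    "Bus 12 is always late in the evening",
    "Please add more benches to the riverside park",
    "The playground equipment is unsafe for toddlers",
]
labels = ["mobility", "mobility", "public_space", "public_space"]

# TF-IDF features plus a linear classifier: a common baseline for
# routing free-text input to the relevant policy team.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Potholes everywhere on the cycle path"]))  # e.g. ['mobility']
```

The value of even a simple classifier here is scale: it lets a handful of civil servants triage thousands of free-text contributions that would otherwise be read one by one.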

Canada offers another example. Transport Canada is the department responsible for the Government of Canada’s transportation policies and programs: it promotes safe, secure, efficient and environmentally responsible transportation. Transport Canada is adopting AI to enhance processes and procedures, thereby freeing up employees to work on higher-value tasks.

The department started by exploring the use of AI for risk-based reviews of air cargo records, which could be scaled to other areas if successful. To achieve this, the department assembled a multi-disciplinary team consisting of members of Pre-load Air Cargo Targeting (PACT), the department’s Digital Services and Transformation division, one of Canada’s Free Agents, and AI experts from an external IT firm. As a result, the team was able to use AI to automatically generate accurate risk indicators.
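Transport Canada’s actual model is not described here; purely as a hedged sketch, a risk-based review tool could score cargo records with a supervised classifier whose predicted probability serves as the risk indicator. The feature columns and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features per cargo record: shipper history score, route risk
# rating, declared-weight discrepancy, missing-documentation flag.
X_train = np.array([
    [0.9, 0.1, 0.0, 0],
    [0.2, 0.8, 0.3, 1],
    [0.7, 0.4, 0.1, 0],
    [0.1, 0.9, 0.5, 1],
])
y_train = [0, 1, 0, 1]  # 1 = record was escalated for manual review

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The predicted probability ranks records for human reviewers; it
# supports, rather than replaces, the targeting officers' judgement.
new_record = [[0.3, 0.7, 0.4, 1]]
print(f"risk indicator: {model.predict_proba(new_record)[0, 1]:.2f}")
```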

Meanwhile, in Indonesia, the largest country in Southeast Asia, the use of AI is still led by the private sector. One example is McEasy, a company that provides Software-as-a-Service (SaaS) for logistics and transportation operators. Its innovation was made possible by funding secured at the end of last year for route optimization and fleet management, adding value for consumers and expanding its presence in second- and third-tier cities, with business growth of 300 percent over the previous year.

Conclusion

Reflecting on the success stories of AI use above, despite all the barriers and risks, there lies hope for a better, AI-enhanced future. However, certain steps must be pursued.

First, we must ensure a flow of investment funds from the Global North to the Global South. AI development is very complex: it ranges from developing infrastructure and acquiring technical expertise to establishing clear legal and regulatory frameworks and seizing research and development opportunities. All of these require substantial funding. The relatively limited funds available to developing countries in the Global South only add more barriers to joining the skyrocketing AI growth in the North.

Second, we need to encourage the participation of the Global South in AI development and adoption through a clear multilateral cooperation framework. This can be achieved more quickly by involving regional organizations such as ASEAN or the South Asian Association for Regional Cooperation (SAARC).

Such collaboration must also be able to accelerate other United Nations agendas, such as the Sustainable Development Goals (SDGs), which have the appeal to attract cooperation from various UN member states. Moreover, cooperation between regional bodies can be a fast way to realize multi-layered, standardized, ethical and responsible use of AI. It is therefore hoped that policies such as the AI Act will be adopted not only in the European Union but also in the ASEAN and SAARC regions.
