The EU’s New AI Regulation Will Have Global Implications

Illustration of a robot. Credit: hobijist3d / Unsplash

Introduction

Two months ago, the European Parliament finally passed the long-awaited Artificial Intelligence (AI) Act, first proposed in 2021. The regulation was endorsed by Members of Parliament with 523 votes in favor, 46 against, and 49 abstentions.

The EU’s AI Act is considered the final piece of technology-related legislation passed under the 2019-24 European Parliament and Commission, part of their mission to create a “Europe fit for the Digital Age”.

The Act aims to create a “future-proof” legal framework for AI regulation across all sectors, but pertinent questions remain. How will the Act be implemented? Which key stakeholders will be most affected by the regulation? And will the legislation influence AI governance beyond the EU?

Impacts of the Act

Countries take markedly different approaches to governing AI. The United States prioritizes national competitiveness in AI development, often at the expense of individual rights and privacy. China, by contrast, uses AI to maintain social harmony and control, notably through its social credit system. In both countries, however, public scrutiny of AI systems remains limited, which hinders the development of trustworthy and accountable AI.

Meanwhile, the EU’s identity is grounded in political values such as freedom and democracy, setting it apart from other global actors like the United States, China, Russia, and the United Kingdom.

The EU AI Act aims to regulate the use of AI according to human-centric and ethical principles. It is designed to address policy problems such as potential violations of fundamental rights by AI systems, including breaches of privacy, bias, inequality, and security risks.

The Act adopts a broad definition of AI: “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Concerningly, this definition appears rigid and could be detrimental to the legislation’s adaptability, given the constant evolution of AI.

The adoption of this landmark law will have several legal consequences.

First, the new rules prohibit certain AI systems that pose threats to citizens’ fundamental rights, such as biometric categorization systems. Any AI application that manipulates human behavior or exploits vulnerabilities will be banned. Examples include social scoring systems, emotion recognition, and predictive policing.

Second, the Act imposes transparency obligations on the use of AI and sets certain restrictions on the use of general-purpose AI models. If an AI system is designed and deployed to interact with humans, its provider and deployer must inform human users – in a clear and distinguishable manner – that they are interacting with an AI system.

The Risk-Based System

The Act attempts to balance innovation with a risk-based approach, retaining a degree of flexibility to ensure adaptability and legal certainty. To do so, it sorts AI systems into categories according to the level of risk they pose: unacceptable risk, high risk, limited risk, and low or minimal risk.

Applications and systems deemed an unacceptable risk will be banned outright. These include real-time biometric identification in public areas, systems that encourage dangerous behavior in children, and social scoring systems that classify people based on their behavior, socio-economic status, or personal characteristics. Systems that pose a serious but manageable threat to people fall under “high risk” and are subject to strict obligations rather than an outright ban.

Additionally, AI systems are always considered high-risk if they profile individuals based on collected data – that is, if they automatically process personal data to assess aspects of a person’s life, such as work performance or education. Examples include systems that determine access, admission, or assignment to educational and vocational training, and systems used for recruitment or selection, such as placing targeted job advertisements, analyzing and filtering applications, and evaluating candidates.

Chatbots and generative AI text are considered “limited risk” and are subject to transparency obligations. “Minimal” or “no risk” systems, such as AI-enabled video games or spam filters, remain free to use and are subject only to a voluntary code of conduct.

Implementing the Act

Once published in the EU’s Official Journal, the AI Act will enter into force after 20 days and become fully applicable two years later, with certain exceptions. Prohibitions will take effect six months after entry into force, while governance rules and obligations for general-purpose AI models will apply after 12 months. Rules for AI systems embedded in regulated products will apply after 36 months.

To ease the transition to the new regulatory framework, the Commission has introduced the AI Pact, a voluntary initiative encouraging AI developers worldwide to adhere to the key obligations of the AI Act ahead of its full implementation.

Additionally, the EU has established the “European AI Office” to oversee the Act’s enforcement and implementation across member states. The Office will have the authority to evaluate general-purpose AI models, request information and corrective measures from model providers, and apply sanctions. It will collaborate with member states, the expert and scientific community, industry, and civil society in executing its mandate – a testament to the EU’s multi-stakeholder approach to AI governance.

Implications for Non-EU Member States

A so-called “Brussels Effect” is expected to follow the adoption of the EU AI Act: the phenomenon whereby the EU’s laws shape the international business environment and standards worldwide. An obvious example is the General Data Protection Regulation (GDPR), which set a benchmark for data protection rules around the world, including in Indonesia.

The EU AI Act will highlight the importance of public scrutiny of AI applications in daily life, such as surveillance, health, education, and law enforcement. It will prompt other countries to assess whether AI systems already deployed within their territories have caused harm or imposed risks on their citizens.

One certain outcome is that the Act will serve as a strong statement that the EU can regulate AI while still protecting economic interests – a manifestation of the EU’s underlying legal and policy framework, which has always been grounded in trade liberalization. This will push companies and investors in AI systems in the EU to adapt and comply with the Act. The EU is likely to become a global standard-setter for technology regulation, which could lead to a greater degree of global coordination on AI.

Implications for Indonesia

While other countries have already progressed in drafting policies on AI governance, Indonesia’s progress appears to be on pause due to the recent presidential election. Outgoing President Joko “Jokowi” Widodo’s administration introduced the National AI Strategy, but it is up to President-elect Prabowo Subianto to carry this agenda forward.

The concept of trust in AI is important for Indonesia. It involves strategically framing the narrative to address societal skepticism toward AI while acknowledging its importance for national development. Like the EU’s, Indonesia’s AI initiatives must be guided by national values, emphasizing trustworthiness and a human-centric approach. Indonesia’s AI governance must focus on ensuring that AI programs align with overarching goals of not only economic progress but also digital and citizen welfare, thereby emphasizing ethical considerations and societal well-being.


The views expressed are those of the authors and do not necessarily reflect those of STRAT.O.SPHERE CONSULTING PTE LTD.

This article is published under a Creative Commons Licence. Republication minimally requires 1) crediting the authors and their institutions, and 2) crediting STRAT.O.SPHERE CONSULTING PTE LTD and including a link back to either our home page or the article URL.

Author

  • Haekal Al Asyari is a lecturer at the Faculty of Law, Universitas Gadjah Mada and a doctoral candidate at the University of Debrecen.