Written by Alistair Simmons and Matthew Rostick | Edited by Josh Lee Kok Thong

Introduction

In recent months, many jurisdictions in the Asia-Pacific (“APAC”) have adopted or are considering various forms of AI governance mechanisms. At least 16 jurisdictions in APAC have begun some form of AI governance, and this number will likely continue to increase. This paper scans the different AI governance mechanisms across a number of APAC jurisdictions and offers some observations at the end. 

This paper segments AI governance mechanisms into four categories:

  • Direct AI regulations are enforceable rules that regulate the development, deployment or use of AI directly as a technology, and consequently have regulatory impact across multiple sectors.
  • Voluntary frameworks cover voluntary, non-binding guidance issued by governmental entities that directly addresses the development, deployment or use of AI as a technology. Unlike sector-specific measures, these frameworks typically address the use of AI across multiple sectors.
  • Indirect regulations (data & IP) are also enforceable legal rules, but they do not regulate the development, deployment or use of AI directly as a technology. They are rules of more general applicability that nevertheless have an impact on the development, deployment or use of AI. As the scope of this category is potentially broad, we have focused on data protection/privacy and intellectual property laws in this paper.
  • Sector-specific measures refers to binding and non-binding rules and guidelines issued by sector regulators that are relevant to the development, deployment or use of AI in an industry. To avoid getting bogged down in the specifics of whether particular rules and guidelines are technically binding, we have presented them together.

For the avoidance of doubt, this paper addresses legal governance mechanisms only. There may be other initiatives afoot to drive alignment and good practices from a technical perspective; we do not seek to address technical measures in this paper.

ASEAN Guide

Before looking at the AI governance mechanisms of individual jurisdictions, we note that there has been a regional effort to develop a set of AI guidelines applicable to the region as a whole. Specifically, in January 2024, the Association of Southeast Asian Nations (“ASEAN”) released a set of guidelines on AI Governance and Ethics for all member nations. ASEAN Member States comprise Singapore, Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Thailand, and Vietnam.

Titled the ASEAN Guide on AI Governance and Ethics (“ASEAN Guide”), the document serves as a guide for organisations in the region that wish to design, develop and deploy “traditional AI technologies” in commercial and non-military or dual-use contexts. In this context, the ASEAN Guide notes (at page 54) that “traditional AI technologies” refer to AI technologies that are “primarily used for tasks such as recommending, filtering, and making predictions as a primary output”. The ASEAN Guide is thus not intended to apply to generative AI systems. However, ASEAN is working on adapting the ASEAN Guide for use in contexts involving generative AI.

The ASEAN Guide is aimed at encouraging alignment in ASEAN on AI governance, and fostering interoperability of AI frameworks across jurisdictions. It also includes national-level and regional-level recommendations for ASEAN governments to consider implementing. Further, the ASEAN Guide is billed as a “living” document, which means that it will be periodically reviewed and updated to keep up with regulatory and technological developments. 

The ASEAN Guide is structured around seven Guiding Principles: Transparency and Explainability; Fairness and Equity; Security and Safety; Robustness and Reliability; Human-centricity; Privacy and Data Governance; and Accountability and Integrity. These Guiding Principles are then applied to four key components of the AI design, development and deployment lifecycle to produce actionable recommendations. Specifically, the four components are: internal governance structures and measures; determining the level of human involvement in AI-augmented decision-making; operations management; and stakeholder interaction and communication. Astute readers may note that this structure of applying AI governance principles to an AI lifecycle is substantially similar to that of Singapore’s Model AI Governance Framework (described further below). This is perhaps a reflection of the leadership role that Singapore’s AI governance initiatives have played in the development of the ASEAN Guide.

The development of the ASEAN Guide is particularly notable for its ambition: it seeks to establish practical alignment on AI governance in a region whose countries are as diverse in culture, history and legal frameworks as they are in economic development. It is perhaps in large part for this reason that the ASEAN Guide is voluntary: mandating requirements in such a diverse region may be an exercise in futility. The ASEAN Guide thus takes a diametrically different approach from the EU AI Act, which categorises AI systems into risk categories and sets out compliance requirements for each category.

The ASEAN Guide is expected to advance regulatory interoperability in the region, and can serve as a common baseline for all ASEAN jurisdictions – especially for jurisdictions that have yet to develop their own AI governance policies. In the meantime, however, several Southeast Asian jurisdictions have already begun developing their own AI governance mechanisms. These mechanisms, along with the AI policies of other APAC jurisdictions, are set out below.

Singapore

Singapore’s AI policy consists of voluntary frameworks, indirect regulation (data & IP), and sector-specific measures.

Voluntary frameworks 

Singapore promotes responsible AI development by releasing sector-agnostic guidelines, such as the Model AI Governance Framework, the Implementation and Self-Assessment Guide for Organisations (“ISAGO”), and two editions of the Compendium of Use Cases. The Model AI Governance Framework sets out two key guiding principles: (1) organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent and fair; and (2) AI solutions should be human-centric. Embedded in the Model AI Governance Framework are references to other key ethical principles, such as transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness, data governance, accountability, human agency and oversight, inclusive growth, and societal and environmental well-being. ISAGO provides guidance to help companies implement the Model Framework, and the Compendium of Use Cases demonstrates how organisations have applied the Model Framework in practising AI governance. Singapore also offers non-binding guidance and technical tools, such as AI Verify, to assist companies in responsible AI development. The AI Verify Foundation was recently set up to encourage open-source contributions from industry to promote best practices and standards for AI.

Singapore also released a discussion paper on Generative AI: Implications for Trust and Governance, which highlights the risk of relying on a small number of large language models (“LLMs”) as foundation models, encourages greater disclosure around LLMs, promotes standards for labelling generative AI content, and calls for transparency into data quality.

Indirect regulations (Data & IP)

Singapore’s National AI Strategy (published in November 2019) includes a plan to review IP legislation “to ensure that our laws support the development and commercialization of new AI technologies.” This was given effect, amongst other things, through Section 244(4) of Singapore’s Copyright Act 2021, which provides that the use of copyright works for computational data analysis does not amount to a breach of copyright.

Singapore’s Personal Data Protection Act (“PDPA”) would also apply to personal data used in training AI systems. Companies are required to comply with the PDPA’s data protection obligations, such as legitimate purpose, consent, accuracy, protection, and transfer limitation. The Personal Data Protection Commission (“PDPC”) recently published its Advisory Guidelines on use of Personal Data in AI Recommendation and Decision Systems, following a public consultation on the proposed guidelines, to clarify how the PDPA applies to the collection of data used to train AI models. The PDPC has also published a Guide on Responsible Use of Biometric Data in Security Applications to help safeguard the collection of biometric data.

Sector-specific measures

Singapore has also released sector-specific guidelines for the use of AI in different industries. Jointly with the Ministry of Health, Singapore’s Health Sciences Authority (“HSA”) and the Integrated Health Information Systems (“IHiS”) released the Artificial Intelligence in Healthcare Guidelines, which provide guidance on the development of AI medical devices. The Monetary Authority of Singapore (“MAS”) released Guidelines on Provision of Digital Advisory Services, offering guidance for digital advisers on online trading. In 2018, MAS also co-developed with the financial industry the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector (“FEAT Principles”), and this year released a Toolkit for Responsible Use of AI in the Financial Sector to help financial institutions (“FIs”) carry out the assessment methodologies for the FEAT Principles. These principles serve as business best practices to promote responsible use of AI by strengthening internal data governance.

Singapore has also made efforts to standardise AI regulation internationally. In June 2023, Singapore’s Minister for Communications and Information Josephine Teo and UK Deputy Prime Minister Oliver Dowden signed two Memoranda of Understanding to improve trade and security. The first is a Memorandum of Understanding on Emerging Technologies, which commits to sharing telecommunications innovation, promoting AI business partnerships, identifying trustworthy AI, aligning technical AI standards, and promoting new joint research on how AI can improve health services. The second is a Memorandum of Understanding on Data, which commits to increasing digital trade; discussing domestic data regulation, protection, and international transfer; sharing research data; improving public services; creating new standards for publishing government data sets; and finding best practices on data sharing between government and industry. Singapore was also one of 193 UNESCO member states that adopted the Recommendation on the Ethics of Artificial Intelligence, an ethical framework on the use of AI, in November 2021.

Japan

Japan’s policy on AI consists of voluntary frameworks, indirect regulation (data & IP), and sector-specific measures.

Voluntary frameworks

Japan’s voluntary framework approach to regulating AI promotes innovation that supports human-centricity and accountability. In 2019, Japan released the Social Principles of Human-Centric AI, which advocate the following ethical principles: (1) human-centricity; (2) education/literacy; (3) privacy protection; (4) ensuring security; (5) fair competition; (6) fairness, accountability, and transparency; and (7) innovation. Japan does not support horizontal regulation: the AI Governance in Japan Ver. 1.1 report published by the Ministry of Economy, Trade, and Industry (“METI”) pronounced that “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment.” Instead of legal enforcement, Japan provides organisations with guidance for implementing AI frameworks. METI’s Governance Guidelines for Implementation of AI Principles summarise the action targets for implementing the Social Principles and provide specific examples of how to achieve them.

Japan encourages the development of AI in areas that align with its national interest. The Japan National AI Strategy highlights how AI can help deal with national crises, address the decline in national strength caused by a shrinking population, and create significant value. The National AI Strategy promotes three core principles for developing AI: dignity for people, diversity, and sustainability. Its goals are to improve AI reliability, enhance the data supporting AI utilisation, develop an environment for securing human resources, promote AI utilisation in the government, and integrate AI in fields where Japan has strengths. Japan’s initiatives strive for “responsible AI”, such as “explainable AI”, in addition to AI being “human-centric”. The Japanese government also established an Artificial Intelligence Strategy Council on 11 May 2023 to discuss issues surrounding AI and a future legal framework for AI. In light of this, METI and the Ministry of Internal Affairs and Communications (“MIC”) integrated and updated the existing related guidelines and compiled the Draft AI Guidelines for Business.

Indirect regulations (Data & IP)

Japan’s Act on the Protection of Personal Information No. 57 of 2003 (“APPI”) establishes privacy protections, security controls, accuracy standards, and proper methods for handling personal information. Japan’s privacy law places restrictions on how data can be collected to develop AI models. The Personal Information Protection Commission (“PPC”), the main regulatory body administering the APPI, published a “Precautionary notice on use of generative AI services” on 2 June 2023, in which it stated that any entity that handles personal data for its business should ensure that personal information is entered into generative AI services only to the extent necessary to achieve the purpose of use of such information. The PPC also stated that entities should ensure that no personal data entered into a generative AI service will be used for machine learning purposes by the AI service provider, and cautioned that the use of personal data for secondary purposes other than providing responses to a user prompt (such as training the AI system) could breach the APPI. The precautionary notice also includes some specific instructions to OpenAI, the provider of the ChatGPT service.

Japan has facilitated AI innovation by increasing access to large amounts of reliable data. Two Japanese real-world databases, Medical Data Vision and the Japanese Medical Data Center, register anonymised healthcare data and share it with pharmaceutical companies to assist in the development of smart health and AI-assisted medicine. In 2018, Japan amended the Copyright Act to allow data mining of copyrighted material when training AI. Japan has also supported international efforts to promote transparent data flows: during the G7 Summit in Hiroshima, the Leaders’ Communiqué reiterated “the importance of facilitating Data Free Flow with Trust (DFFT) to enable trustworthy cross-border data flows.” In short, Japan has taken measures to remove barriers to accessing data for training AI models.

Sector-specific measures

Japan has pre-existing sector-specific laws that may have an impact on AI compliance. The Digital Platform Transparency Act places requirements on online malls, app stores, and digital advertising businesses to promote transparency and fairness in transactions with business users, such as disclosing the factors that determine search rankings. The Financial Instruments and Exchange Act requires businesses engaging in algorithmic high-speed trading to register with the government, establish a risk management system, and maintain transaction records. The Japan Fair Trade Commission analysed the risks of cartels and unfair trade enabled by algorithms and concluded that many issues could be addressed by the existing Antimonopoly Act. The Product Liability Act reduces a victim’s burden of proof when claiming liability for defective products, including products with AI installed in them. METI has also published the Contract Guidelines on Utilization of AI and Data (see Appendix 6), which address data transfer for AI development with model clauses.

Japan has also developed tools to create benchmarks and standards for AI. To evaluate the quality of AI, METI provides the Machine Learning Quality Management Guideline, which establishes quality benchmark standards for machine learning-based products and services. The University of Tokyo has also developed the Risk Chain Model to analyse areas of bias risk in an algorithm.

China 

China’s AI policy consists of direct AI regulations, voluntary frameworks, indirect regulations (data & IP), and sector-specific measures.

Direct AI regulations

While China does not have a comprehensive AI law yet, it does have direct AI regulations that regulate specific types of AI technologies. The Interim Measures for the Management of Generative Artificial Intelligence Services, released by the Cyberspace Administration of China (“CAC”), took effect from 15 August 2023. The Interim Measures apply to the use of generative AI technologies to provide services to the public in China, and impose various obligations on “generative AI service providers” (i.e., organisations or individuals that use generative AI technology to provide generative AI services), including obligations to:

  • carry out security assessments and file algorithms with the central authorities for generative AI services that possess “public opinion properties or social mobilisation capacity” (i.e., the capacity to shape public opinion or to influence the actions of members of the public);
  • uphold core socialist values, including not generating content that endangers national security;
  • protect personal data in compliance with the Personal Information Protection Law (“PIPL”);
  • ensure non-infringement of others’ IP rights when handling training data; and
  • employ measures to address illegal content, including removing it and adopting model optimization training.

The Interim Measures, however, are not applicable to industry associations, enterprises, education and research institutions, public cultural bodies and related professional bodies that simply research, develop and/or use generative AI technologies without providing such services to the public.

In addition, China’s Administrative Provisions on Deep Synthesis in Internet-based Information Services, which took effect from 10 January 2023, seek to address AI technologies that produce “deepfakes”. The Provisions set out the obligations as well as data and technology management standards for “deep synthesis service providers” (i.e., organisations or individuals that provide deep synthesis services) and “technical support providers” (i.e., organisations or individuals that provide technical support for deep synthesis services). Obligations of deep synthesis service providers include:

  • not using such services to produce or disseminate information, or engage in activities, prohibited by laws;
  • establishing and improving management systems for user registration and real identity authentication, algorithmic mechanism review, data security and personal information protection;
  • adopting technical or manual methods to audit input data and synthesised results;
  • taking timely measures to dispel rumours, keeping relevant records and reporting to the relevant competent departments if false information has been disseminated using the services; and
  • ensuring that labels are added to content generated using deep synthesis technology.

The Internet Information Service Algorithmic Recommendation Management Provisions, which took effect from 1 March 2022, regulate the use of “recommender” or similar content decision algorithms in apps and websites used in Mainland China, as well as algorithmic recommendation mechanisms and services implemented by third parties. The Provisions set out the service norms and obligations of “algorithmic recommendation service providers”, such as:

  • taking primary responsibility for algorithmic security;
  • establishing and completing management systems and technical measures for algorithmic mechanism examination and verification, user registration, information dissemination examination and verification, security assessment and monitoring, security incident response and handling, data security and personal information protection;
  • not setting up algorithmic models that violate laws and regulations or ethics and morals (e.g., by leading users to addiction or excessive consumption);
  • not using algorithms to falsely register users, illegally trade accounts, or manipulate user accounts; and
  • not using algorithms to carry out monopolistic or unfair competition acts.

Voluntary frameworks

China has further developed voluntary frameworks to regulate AI. For instance, the Secretariat of the National Information Security Standardization Technical Committee (“NISSTC”) released the Practice Guidelines for Cybersecurity Standards – Content Identification Method for Generative AI Service in August 2023, outlining methods for content identification by generative AI services when providing generated text, graphics, audio and video. The China Academy of Information and Communications Technology (“CAICT”) released the Evaluation Methods for Generative AI Technology and Product series of standards in March 2023, which provide multiple indicators, such as technical capabilities and product capabilities, for systematically assessing generative AI technologies and products. The Ministry of Science and Technology (“MOST”) issued the Circular on Supporting the Construction of Pilot Application Scenarios for the New Generation of AI in 2022, which pinpoints the first batch of pilot application scenarios for AI in ten key areas, including smart factories, smart homes, autonomous driving, and smart diagnosis and treatment. The National New Generation Artificial Intelligence Governance Specialist Committee under MOST published the New Generation Code of Ethics (an unofficial English translation is available) in September 2021, which aims to incorporate ethics into the entire AI life cycle and provide ethical guidance to entities engaged in AI-related activities. Earlier, in 2019, MOST released the Principles on Governing the New Generation of AI: Developing Responsible AI, which include: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open cooperation, and agile governance.

Indirect regulations (Data & IP)

China also has indirect regulations that seek to regulate AI through data in particular, including:

  • The PIPL (effective from 1 November 2021) which regulates personal information processing activities and authorises the CAC to develop special rules and standards for personal information protection in relation to AI and other new technologies and applications.
  • The Data Security Law (effective from 1 September 2021) which regulates data processing activities, data development and data utilisation and restricts the flow of “important data” outside of the Mainland China.
  • The Cybersecurity Law (effective from 1 June 2017) which sets out the general obligations of “network operators” (i.e., owners and administrators of networks and network service providers) to adopt technical and other necessary measures to ensure the security of personal information and prevent such information from being divulged, damaged, or lost, and sets out the specific obligations of “critical information infrastructure operators” (critical information infrastructure being information infrastructure whose data breach, network compromise or system malfunction could endanger national security, national strategy and civil welfare).

Sector-specific measures

Sector-specific initiatives, rules and guidelines have also been actively formulated in areas such as healthcare, education, livestreamed marketing, and robotics. For instance, in September 2023, the Draft Academic Degree Law was submitted to the Standing Committee of the National People’s Congress for review and discussion, proposing legal responsibilities for the use of AI in writing dissertations. In January 2023, a total of 17 central government departments jointly issued the Robotics+ Application Action Implementation Plan, which aims to expand the use of robotics and encourages the application of AI, and its integration with robotics, in the areas of healthcare, elderly care, and education. The Detailed Rules for the Supervision and Administration of Internet Diagnosis and Treatment (for Trial Implementation) (effective from 8 February 2022) provide that AI software shall not replace doctors in providing diagnosis and treatment services, and that using AI to automatically generate medical prescriptions is strictly prohibited. The Administrative Measures for Livestreaming Marketing (for Trial Implementation) (effective from 25 May 2021) set out specific requirements for the use of virtual images displayed using AI, digital vision, virtual reality, speech synthesis and other technologies in livestreaming marketing activities.

India

India’s AI policy consists of voluntary frameworks, indirect regulation (data & IP), and sector-specific measures. India does not have horizontal laws regulating AI, but is in the process of developing further regulation.

Voluntary frameworks

India has released a proposal for the Digital India Act (“DIA”), which proposes global-standard cyber laws to guide the development of the AI industry. The DIA seeks to overhaul India’s ageing framework under the Information Technology Act, 2000, now more than two decades old, to address the complexities of the internet and enable the growth of emerging technologies (such as AI) while also managing associated risks. Most notably, the DIA proposes distinct rules for the regulation of AI intermediaries and specifically seeks to define and regulate high-risk AI systems. The DIA also proposes creating a specialised adjudicatory mechanism for online civil and criminal offences.

India is a founding member of the Global Partnership on Artificial Intelligence (“GPAI”). Additionally, when presenting the Union Budget 2023-24, India’s Finance Minister called for realising a vision to “Make AI in India and Make AI work for India”, announcing (a) that three centres of excellence are to be set up in top educational institutions; and (b) the government of India’s plan for industry partnerships in AI research and development.

India has released AI frameworks to assist businesses and policymakers with regulation and compliance. NITI Aayog, the apex think tank of the government of India, released the National Strategy for Artificial Intelligence in 2018, which proposes to develop an ecosystem for the research and adoption of AI. NITI Aayog thereafter published Responsible AI for All – Approach Document for India – Part 1 – Principles for Responsible AI, which establishes ethical principles for the design, development, and deployment of AI. The ethical framework establishes the following seven principles: (1) Principle of Safety and Reliability; (2) Principle of Equality; (3) Principle of Inclusivity and Non-discrimination; (4) Principle of Privacy and Security; (5) Principle of Transparency; (6) Principle of Accountability; and (7) Principle of Protection and Reinforcement of Positive Human Values. The framework also addresses systemic considerations, such as the black box problem, inaccurate data, bias, privacy risks, security risks, job loss, and psychological profiling. NITI Aayog also released Responsible AI for All – Approach Document for India – Part 2 – Operationalizing Principles for Responsible AI to guide companies on implementing the ethical principles.

Indirect regulations (Data & IP)

India has a new data protection law which may function as an indirect regulation of the development, deployment or use of AI. India’s recently-enacted Digital Personal Data Protection Act, 2023 (“DPDP Act”) seeks to regulate the automated processing of personal data. Once implemented, the DPDP Act will require organisations to collect and process personal data lawfully, with users’ consent or for certain legitimate uses.

India also has the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which impose due diligence obligations on intermediaries, requiring them to take steps to ensure that their users operate in compliance with the law and prohibiting content and activities that are opposed to public policy.

Sector-specific measures

India has implemented AI rules and initiatives in specific industries. The Securities and Exchange Board of India (“SEBI”) issued a circular to financial institutions in January 2019 on the use of AI. India has promoted AI innovation through its National Semiconductor Mission and its draft National Data Governance Framework Policy. The Telecom Regulatory Authority of India has also issued Recommendations on “Leveraging Artificial Intelligence and Big Data in Telecommunication Sector”, which include non-binding recommendations that the government of India immediately develop a sector-agnostic framework to regulate AI.

Thailand

Thailand’s approach to regulating AI comprises draft direct AI regulations, voluntary frameworks, and indirect regulations (data & IP).

Direct AI regulations

The Draft Royal Decree on Business Operation Using Artificial Intelligence System was proposed by the Office of National Digital Economy (“ONDE”) between August and October 2022 for limited public consultation. Another draft law, the Draft Artificial Intelligence Innovation Promotion and Support Act, proposed by the Electronic Transactions Development Agency (“ETDA”), was put to public hearing from 31 March to 14 April 2023, with the latest version heard from 18 July to 20 August 2023.

Voluntary frameworks

The Thai Ministry of Digital Economy and Society (“DES”) drafted the AI Ethics Guideline, whose ethics principles promote responsibility, fairness, human-centricity, and sustainability. The DES also released a second AI ethics guideline, which provides examples of use cases to assist organisations in complying with AI ethical principles. The National Science and Technology Development Agency (“NSTDA”) also released an AI Ethics Guideline, intended for use by NSTDA personnel and affiliated researchers. Thailand further promotes AI innovation and development: Thailand’s National AI Strategy and Action Plan seeks to prepare Thai society, law, and regulation for AI application, and also plans to promote AI development, expand AI education, increase the use of AI in the public and private sectors, and use AI to support inclusive growth.

Indirect regulations (Data & IP)

The Royal Decree on Digital Platform Service Businesses that Require Notification, B.E. 2565 was published on 23 December 2022. The Royal Decree places restrictions on digital platforms that manage information between businesses and consumers, requiring digital platform services to notify the ETDA annually and prior to commencing operation.

Thailand also has privacy laws which function as indirect regulation of the development, deployment or use of AI. Thailand’s Personal Data Protection Act (“PDPA”) requires companies processing personal data to obtain consent from data subjects or rely on other legal bases. The PDPA also provides data subjects with the right to withdraw consent and the option to opt out of certain processing, for instance, processing on the basis of legitimate interest for direct marketing. Thailand’s PDPA is similar to the EU’s General Data Protection Regulation (“GDPR”): both require companies processing personal data to obtain user consent or rely on other legal bases (e.g., performance of a contract), impose legal obligations, have extraterritorial reach, and recognise legitimate interest as a legal basis. It appears that Thailand’s regulatory approach is influenced by that of the EU.

The Philippines

The Philippines is seeking to regulate the development, deployment or use of AI through indirect regulations (data & IP), but further regulations could be forthcoming.

Indirect regulations (Data & IP)

The Philippines passed the Data Privacy Act of 2012, which created the National Privacy Commission (“NPC”) to enforce the protection of personal information. The Act requires that data be collected for “legitimate purposes”, and that processing be based on the data subject’s consent or another legal basis. The NPC has released Notifications regarding Automated Decision-Making Operations, which require companies engaged in personal data processing activities involving automated decision-making or profiling to register their data processing systems with the NPC. These safeguards help prevent harmful use of consumer data by AI systems.

There is political momentum in the Philippines for regulating AI. The Senate has called for regulation of AI and its impact on job losses. Estimates from the Philippine government predict that AI could replace 6 million agriculture jobs, 3.4 million retail jobs, and 2.4 million manufacturing jobs. We anticipate that the passage of the AI Act in the EU will spur local initiatives on AI regulation, from both IT and data protection perspectives, since the Philippines has historically used EU legislation as a guidepost for local legislation on these matters.

The Philippines’ Department of Trade and Industry (“DTI”) has launched an AI Roadmap. The roadmap addresses four dimensions of AI readiness: (1) Digitization and Infrastructure; (2) Research and Development; (3) Workforce Development; and (4) Regulation. The roadmap also supports the creation of the National Center for AI Research (“NCAIR”), which houses full-time scientists and AI engineers. Although there are no existing AI ethics principles, there is advocacy for developing a national AI framework.

Malaysia

Malaysia currently seeks to regulate AI by voluntary frameworks, indirect regulations (data & IP), and sector-specific measures. 

Voluntary frameworks

Malaysia has an AI Roadmap, issued by the Ministry of Science, Technology and Innovation, with the following priorities: Strategy 1: Establishing AI Governance; Strategy 2: Advancing AI R&D; Strategy 3: Escalating Digital Infrastructure to Enable AI; Strategy 4: Fostering AI Talents; Strategy 5: Acculturating AI; and Strategy 6: Kick-Starting a National AI Innovation Ecosystem. With the adoption of generative AI, Malaysia is currently determining whether further horizontal regulation is needed. While the AI Roadmap is not itself a voluntary AI governance framework, its stated intent of establishing AI governance may give rise to relevant developments in the future.

Indirect regulations (Data & IP)

Malaysia passed the Personal Data Protection Act 2010 (“PDPA”), which established the Personal Data Protection Department. The PDPA would also apply to personal data used in training AI systems. Companies processing personal data need to comply with the PDPA’s data protection obligations, including, among others, having a legal basis for collecting and processing personal data, notifying data subjects of the processing of their personal data, and putting in place appropriate technical and organisational measures to protect personal data.

Sector-specific measures

Malaysia also regulates AI through sector-specific measures. On 1 August 2023, the Malaysian Securities Commission issued its Guidelines on Technology Risk Management, which, among other things, regulate the use of AI technology by capital market entities. In particular, capital market entities are required to adopt the following guiding principles when using AI: (1) Accountability; (2) Transparency and Explainability; (3) Fairness and Non-Discrimination; and (4) Practical Accuracy and Reliability.

Hong Kong

Hong Kong regulates AI through voluntary frameworks, indirect regulations (data & IP), and sector-specific measures.

Voluntary frameworks

Hong Kong’s Office of the Government Chief Information Officer released an Artificial Intelligence Framework that complements the guidance issued by the Privacy Commissioner for Personal Data (“PCPD”). Specifically, it sets out 12 ethical AI principles to be followed when designing and developing AI applications, consisting of 10 general principles and 2 performance principles, namely transparency and interpretability on the one hand, and reliability, robustness and security on the other. The framework provides use cases and an assessment template to guide organisations in implementing AI. It also introduces concepts such as the AI lifecycle and provides details on how to conduct AI impact assessments.

Indirect regulations (Data & IP)

The Personal Data (Privacy) Ordinance (“PDPO”), which took effect in 1996, grants the PCPD certain statutory powers. The PCPD has issued the Guidance on the Ethical Development and Use of Artificial Intelligence (“AI Guidance”) and the Guidance on Collection and Use of Biometric Data. The objectives of the AI Guidance are to facilitate the healthy development and use of AI in Hong Kong and to assist organisations in complying with the provisions of the PDPO in their development and use of AI. The AI Guidance also sets out seven ethical principles for AI: Accountability; Human Oversight; Transparency and Interpretability; Data Privacy; Fairness; Beneficial AI; and Reliability, Robustness and Security, each of which falls under one of three data stewardship values, namely that AI should be respectful, fair, and beneficial. The Guidance also sets out recommended practices for organisations developing and using AI throughout the life cycle of their business processes, focusing on four business processes: AI Strategy and Governance; Risk Assessment and Human Oversight; Development of AI Models and Management of AI Systems; and Communication and Engagement with Stakeholders.

Sector-specific measures

Hong Kong has also released sector-specific AI guidance. The Hong Kong Monetary Authority (“HKMA”) published the High-level Principles on Artificial Intelligence, which cover governance, application design and development, and ongoing monitoring and maintenance obligations. The HKMA has also released Artificial Intelligence in Banking: The Changing Landscape in Compliance and Supervision and Artificial Intelligence and Big Data in the Financial Services Industry: A Regional Perspective and Strategies for Talent Development to inform business strategy and assist in compliance. In addition, the HKMA has issued a set of Guiding Principles on Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions to provide guidance to banks on the use of AI. These principles focus on four key areas, namely governance and accountability, fairness, transparency and disclosure, and data privacy and protection.

New Zealand

New Zealand’s policy on AI consists of voluntary frameworks and indirect regulations (data & IP).

Voluntary frameworks

New Zealand provides voluntary guidance frameworks to safeguard the use of AI without stifling innovation.

First, New Zealand, along with 41 other jurisdictions, has adopted the Organisation for Economic Co-operation and Development (“OECD”) Principles on Artificial Intelligence, an international voluntary framework aimed at ensuring that AI systems are “robust, safe, fair and trustworthy”. The OECD’s values-based principles for AI include: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.

Next, New Zealand has partnered with the World Economic Forum on a project titled Re-imagining regulation in the age of AI. The goals of this partnership are to develop a roadmap that guides policymakers in developing AI regulation, to shape national and international conversations on regulating AI, and to scale innovative regulatory approaches for AI.

Further, New Zealand has entered into a Digital Economy Partnership Agreement (“DEPA”) with Singapore and Chile that establishes new rules and guidelines for digital trade, including the use of AI in e-commerce and digital trade. Article 8.2 of the DEPA, which addresses Artificial Intelligence, promotes the adoption of ethical AI governance frameworks and proposes principles for responsible AI. It also provides that AI should be fair, transparent and explainable, and reflect human-centred values.

There has also been interim generative AI guidance for the public service jointly published by data, digital, procurement, privacy and cyber security system leaders. This guidance is intended to support agencies to make more informed decisions about using generative AI, balancing benefits and risks. It is expressed as being the first collective effort of system leaders to help agencies start the trial and use of this new class of technology safely, ethically and in privacy-protecting ways, and to provide “guardrails” supporting safe learning of generative AI tools.

The government is not the only organisation in the country providing AI guidance: non-government bodies have taken industry initiative to create non-binding best-practice frameworks for AI. Organisations such as the AI Forum of New Zealand have created “Trustworthy AI in Aotearoa: AI Principles”, and the University of Otago’s Centre for Artificial Intelligence and Public Policy (“CAIPP”) created the Artificial Intelligence and Law in New Zealand Project, which analyses AI policy.

Indirect regulations (Data & IP)

New Zealand regulates AI through indirect regulations, as existing data privacy laws provide the legal framework for how personal information can be used. New Zealand’s Privacy Act 2020 (“Privacy Act”) defines how personal information can be collected, stored, processed and disclosed. The Privacy Act gives legally binding effect to international privacy obligations, including the OECD Privacy Guidelines and the International Covenant on Civil and Political Rights. The Act provides the foundation for the use of data in AI technologies.

The Privacy Commissioner, appointed under the Privacy Act, has published guidance on his office’s expectations in relation to the use of generative AI. These guidelines are non-binding but are a clear indication of the matters the Privacy Commissioner’s office will take into consideration when assessing the Privacy Act compliance of any generative AI tools that an agency may have implemented and used.

Taiwan

Taiwan’s AI policy consists of voluntary frameworks, indirect regulations (data & IP), and sector-specific measures.

Voluntary frameworks

In 2022, Taiwan released an AI Readiness report that provides an overview of where Taiwan stands in the AI ecosystem. In April 2023, Taiwan released the AI Action Plan 2.0, which builds upon the first version of the AI Action Plan that ended in 2021. Along with this plan, the report lays out Taiwan’s current regulatory framework, including the establishment of the Ministry of Digital Affairs (“MODA”) in 2022, which is responsible for promoting Taiwan’s overall digital policy and innovation.

Taiwan has also provided some voluntary frameworks with respect to the use of AI. In 2019, the Ministry of Science and Technology (“MOST”), since renamed the National Science and Technology Council (“NSTC”), released the AI technology R&D guidelines, which focus on three core values: human-centricity, sustainable development, and diversity and inclusion. The guidelines further set out eight ethical principles: (1) common good and well-being; (2) fairness and non-discrimination; (3) autonomy and control; (4) safety; (5) privacy and data governance; (6) transparency and traceability; (7) explainability; and (8) accountability and communication.

On 31 August 2023, the Executive Yuan approved guidelines for the use of generative AI by the Executive Yuan and its subordinate agencies. On 15 August 2023, the Financial Supervisory Commission (“FSC”) sought public feedback on draft principles and policies regarding the use of AI in the financial industry. The NSTC is expected to propose a draft Artificial Intelligence Fundamental Act, but it is uncertain when this will be released.

Indirect regulations (Data & IP)

Taiwan has also established legally binding indirect regulations which focus on inputs, through data protection and cybersecurity laws. These include Taiwan’s Personal Data Protection Act (“PDPA”), pursuant to which government agencies have introduced personal data security management systems and non-government agencies are expected to adopt measures for securing personal data. Taiwan relies on these frameworks as the legal basis for governing AI in lieu of standalone AI regulation. The frameworks and relevant laws are laid out in Taiwan’s AI Readiness report.

The groundwork for AI regulation began in 2005 with the Freedom of Government Information Law which, according to Taiwan’s AI Readiness report, “lays out relevant operating principles and specifications to facilitate people to share and fairly utilise government information, protect people’s right to know, further people’s understanding, trust and overseeing of public affairs, and encourage public participation in democracy.”

Taiwan has also joined the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules System.

Finally, the National Development Council (“NDC”) established MyData, a service platform that allows data subjects to access a copy of personal data stored on MyData and provide it to government agencies or trusted companies.

Sector-specific measures

Taiwan has sector-specific AI regulation for the healthcare industry, having passed AI-specific regulation on medical devices. The Ministry of Health and Welfare (“MOHW”) has been advancing the regulatory system for AI medical devices, including providing guidelines for the inspection and registration of such devices, to ensure safety when AI devices are used on patients. In addition, the MOHW created the Smart Medical Device Office, which helps ICT manufacturers, hospitals, and research institutions get AI and machine learning devices approved for medical use. Taiwan also has a national health insurance database that supports smart health technology and access to AI-assisted health insurance.

Vietnam

Vietnam’s current AI policy consists of indirect regulations (data & IP). However, Vietnam is also working to pass direct, horizontal AI regulation.

Direct AI regulations

The National Standard on Artificial Intelligence is a proposed framework for AI regulation; as of 20 April 2023, Vietnam’s Ministry of Information and Communications has requested public feedback on the draft. The draft National Standard establishes “the concept of AI module lifecycle, consisting of the conception, development, deployment, operation, and retirement of AI modules”. The Standard aims to develop and establish a safe and private environment for the use of AI. It also highlights that the first necessary step is to determine the level of risk and what should be considered in assessing that risk. However, while requiring an evaluation of ethical conduct in the development of an AI module, the document does not establish a precise risk evaluation process or an ethical design framework.

Vietnam also has a National Strategy on R&D and application of Artificial Intelligence, which focuses on increasing the use of AI and encouraging innovation. It mentions the need for AI regulation to avoid abuse and infringement of the rights of both organisations and individuals. The strategy also directs the Ministry of Information and Communications to develop regulations on sandboxes to create a favourable AI testing space in promising industries, and to develop standards and technical regulations for AI technologies and products.

Vietnam also has other AI-related regulations in the pipeline, including a proposed Telecommunications Law. It is unclear whether data centres and cloud computing services will be defined as telecommunications services; if they are, a string of new rules would apply to both cloud and data centre services. Most AI models run on large data centres and cloud services, so this telecommunications law could alter how AI is used within Vietnam.

Indirect regulations (Data & IP)

Vietnam also issued a personal data protection decree that went into effect on 1 July 2023. The Personal Data Protection Decree is the first of its kind in Vietnam and provides the groundwork for personal data protection for agencies, organisations, and individuals both inside Vietnam and internationally.

Further, the newly issued Law on Protection of Consumers’ Rights, which will take effect on 1 July 2024, requires operators of large digital platforms to periodically evaluate their use of AI and fully or partially automated solutions. Under the draft Decree guiding the Consumer Law, this AI evaluation must be conducted every six months and submitted to a competent agency under the Ministry of Industry and Trade. The draft Decree also defines a digital platform as a large digital platform if it meets certain thresholds concerning (i) the total number of monthly visits and users; (ii) the total value of yearly online transactions in the Vietnamese market; (iii) whether the platform operator is in a market-leading or dominant position; or (iv) whether it is so defined under e-transaction laws.

Indonesia

Indonesia’s AI policy comprises voluntary frameworks and indirect regulations (data & IP).

Voluntary frameworks

Indonesia has introduced AI-specific governance frameworks, although its longer-term regulatory strategy is presently unclear. In December 2023, Indonesia’s Ministry of Communication and Information (“KOMINFO”) released a circular providing guidance to private and public sector organisations on the ethical use of AI. The Circular is intended to serve as a reference for the ethical use of AI by business actors, and by public and private sector operators of electronic systems. While non-binding, it draws its legal basis from several laws, such as the Law on Electronic Information and Transactions and the Personal Data Protection Law.

Contemporaneously, KOMINFO issued a press release stating that the Indonesian government is preparing a Presidential Regulation on the use of AI. The regulation would reportedly be aimed at providing stronger protections against potential AI risks while fostering the development of Indonesia’s national AI ecosystem. It remains unclear when the Presidential Regulation will come to be passed, with anecdotal reports suggesting that it will not likely happen in the near future. 

Prior to these developments, Indonesia released a National AI Strategy, the “Strategi Nasional Kecerdasan Artifisial” (“Stranas KA”), in 2020. The strategy focuses on four main areas: Ethics and Policy; Talent Development; Infrastructure and Data; and Industrial Research and Innovation. It also names five national priorities: health services; bureaucratic reform; education and research; food security; and mobility and smart cities. Notably, the National AI Strategy does not provide any indication of Indonesia’s AI regulatory plans, nor does it set out any particular principles or approaches that future AI regulatory initiatives would follow. The Circular could thus be seen as laying the groundwork for future regulatory initiatives.

Indirect regulations (Data & IP)

Presently, given the non-binding nature of the Circular, Indonesia regulates AI broadly through several laws and regulations. The main laws regulating AI include Law No. 11 of 2008 on Electronic Information and Transactions, as amended by Law No. 19 of 2016 (“EIT Law”), and its implementing regulation, Government Regulation No. 71 of 2019 on the Implementation of Electronic Systems and Transactions, which apply to all “electronic agents”, including AI technologies. The EIT Law defines electronic agents as “automated electronic means that are used to initiate an action to certain Electronic Information”. This means that AI systems must comply with the same regulations as electronic agents, including assuming responsibility for electronic transactions. AI is also covered under Presidential Regulation No. 95 of 2018 on Electronic-based Government Systems, Presidential Regulation No. 122 of 2020 on the Updated Government’s Work Plan of 2021, and the Making Industry 4.0 paper by the Ministry of Industry.

The recently passed Personal Data Protection Law, Law No. 27 of 2022, which came into effect in October 2022, also regulates AI and the use of Indonesian persons’ data. This law provides the groundwork for how personal data is used in Indonesia, including rights over users’ data, such as the rights to information, rectification, access, erasure, withdrawal of consent, not being subject to decisions based solely on automated processing, compensation, and data portability. Beyond these rights, the law further details data controller obligations, cross-border transfers, and penalties.

South Korea 

South Korea currently regulates AI through voluntary frameworks and indirect regulations (data & IP). However, this may quickly change as South Korea is working to adopt direct AI regulations using a risk-based approach. In addition, more sector-specific regulations are in the making.

Direct AI regulations 

South Korea is working to create horizontal AI-specific regulations that target different risks. On 14 February 2023, the Science, ICT, Broadcasting and Communications Committee of the Korean National Assembly approved a proposed Act on Promotion of AI (Artificial Intelligence) Industry and Creation of Trustworthy Basis. This legislation would consolidate prior AI regulation into one framework to provide clear rules for organisations creating and operating AI in South Korea. The full text of the bill has yet to be released to the public, but key points include allowing anyone to develop new AI without government approval, creating a “high risk” definition for certain AI systems for which additional levels of trust must be demonstrated, and setting out a basic guide for policy formulation moving forward.

Recently, the Ministry of Science and ICT (“MSIT”) announced the “Digital Bill of Rights”, which includes principles relating to freedom and rights in the digital environment, fair access and opportunities, and safety.

In addition, the Intelligent Robots Development and Distribution Promotion Act regulates intelligent robots, defined as “a mechanical device (including software required for operating the mechanical device) that perceives the external environment for itself, discerns circumstances, and moves voluntarily”, so this Act could also be applicable to AI relating to robots.

Voluntary frameworks 

South Korea has also issued frameworks and ethics-based guidelines that serve as voluntary recommendations for the AI industry.

MSIT announced its strategy to realise trustworthy artificial intelligence. This framework highlights the vision, goals, and strategies for developing trustworthy AI. The three key features of this strategy are to create an environment for realising reliable AI, lay the foundations for the safe use of AI, and spread AI ethics.

South Korea, along with 41 other jurisdictions, has adopted the OECD Principles on Artificial Intelligence, an international voluntary framework aimed at ensuring that AI systems are “robust, safe, fair, and trustworthy.” South Korea also supports the OECD values-based principles on AI.

Indirect regulations (Data & IP)

The 2011 Personal Information Protection Act (“PIPA”) provides a basis for AI regulation. PIPA sets out baseline data protection rules for all persons within Korea or involved with Korean industry.

Sector-specific measures

Further sector-specific regulations are under development, including a new set of standards on AI and copyright.

Australia

Australia does not currently have AI-specific laws and so relies on existing laws (including data protection, discrimination, workplace and employment, and consumer laws). Nevertheless, Australia does have voluntary frameworks relating to AI. The Australian Federal Government is also considering sector-specific AI regulation, and has sought feedback on the possible need for horizontal regulation. Several State and Territory Governments in Australia are also considering whether AI-specific laws are needed.

Voluntary frameworks

Australia also has a set of AI Ethics Principles, a voluntary framework that is aspirational in nature and intended to complement existing AI practices. The framework comprises eight principles: human, societal and environmental well-being; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.

The Australian Federal Government is conducting a consultation on responsible AI management. As part of this, the Government in 2023 released a discussion paper titled “Supporting Responsible AI: a Discussion Paper”, through which it seeks feedback from the Australian public on how AI should be regulated. Feedback will “inform considerations” across the government on appropriate regulation and policy, and will build upon the Australian Government’s investments in growing critical Australian industries. Australia is also conducting a public consultation on copyright enforcement, which could affect access to data for training AI models.

Australia is also working on further AI guidelines. The four government agencies that make up Australia’s Digital Platform Regulators Forum (“DP-REG”) met on 20 June 2023 to review the forum’s progress toward its strategic objectives. DP-REG’s top priorities for the 2023–2024 financial year are impact assessments of algorithms, improving digital transparency, and increasing collaboration between the four members, with a particular focus on assessing the benefits, risks and harms of generative AI. These initiatives build on ongoing efforts, including the establishment of a Digital Technology Working Group, a Codes & Regulation Working Group, and a Data & Research Working Group.

It is also worth noting that some individual municipalities within Australia are taking steps beyond those of the national Government. For example, the city of Darwin has developed a project titled “Switching On Darwin”, which aims to create a smart city using CCTV AI analytics. The city has also adopted Singapore’s Model AI Governance Framework and has a pilot use case featured in Singapore’s Compendium of Use Cases.

Indirect regulations (Data & IP)

Existing data protection laws potentially regulate how government agencies and businesses may use AI in connection with the handling of personal data. Specifically, Australia’s Privacy Act 1988 governs the protection and handling of individuals’ personal data (among other things). While the original intent behind the Act had little to do with AI, its expansion and amendment over the years, including its extension to the business sector in 2001, mean that its relevance to the use of personal data in AI systems has been increasing. Since 2019, the Act has undergone an extensive review process, and draft amendments are expected to be issued in 2024. These draft amendments include aspects directly related to AI, including provisions addressing concerns around AI-based profiling.

Conclusion

In sum, the present state of play is that few APAC jurisdictions have promulgated (or have expressed interest in promulgating) direct AI regulations. Instead, the focus of AI regulatory efforts has mainly been on the other governance mechanisms, such as voluntary frameworks or sectoral governance mechanisms. Data and IP regulations (i.e. indirect regulations (data & IP)) may also play a part by exerting regulatory impact through existing rules on related subject matter. For instance, the data protection regulators of Singapore and Hong Kong leverage the importance of data in AI development and deployment to issue guidelines on the use of personal data in relation to AI. Other existing laws may also have tangential applicability to AI, such as the application of Indonesia’s EIT Law to AI systems as “electronic agents”. We may well see more such developments as other regulators and government agencies begin to realise the potential reach of the regulatory tools in their existing toolbox.

In our view, over the next couple of years, APAC jurisdictions will likely continue along this trend of developing more “soft” regulations as opposed to “hard” or “horizontal” regulations. How this development will play out, however, remains unclear. One issue is how the soft regulations developed by different jurisdictions will interact with one another. For example, while “fairness” is an ethical principle that features in the AI governance frameworks of nine APAC jurisdictions, there is no clarity on how far the jurisdictions’ definitions or interpretations of “fairness” overlap. The next phase of AI governance framework development in the region may thus well be to enhance consistency by creating technical tools and frameworks that can objectively measure, verify and benchmark AI systems’ adherence to these ethical or governance principles.

Also of importance is the potential impact that the European Union’s AI Act will have on the APAC region. The AI Act received political agreement in December 2023 and passed a final vote in the European Parliament on 13 March 2024. In the short to medium term, the AI Act is expected to have a legal and compliance impact on APAC organisations developing and deploying AI systems, given its extraterritorial reach. In the longer term, several APAC jurisdictions may consider adapting and incorporating aspects of the AI Act into local regulations. Such an impact may be accelerated as organisations around the world adjust to the AI Act’s compliance regime and call for greater global harmonisation of AI regulations to reduce the risks of a globally fragmented AI regulatory landscape. This effect, colloquially known as the “Brussels Effect”, was observed when the EU’s General Data Protection Regulation took effect in 2018 and jurisdictions globally – including those in APAC – took inspiration from its provisions in formulating their own privacy or data protection laws.

Notwithstanding the potential “Brussels Effect”, given current regulatory trends, there remains a real risk of a globally fragmented regulatory landscape for AI. A fragmented landscape could increase cross-border compliance costs, reduce the consistency of regulatory oversight, and limit consumers’ ability to use innovative AI technologies across borders. The jury is still out on whether such a scenario will actually play out, but we preliminarily suggest that softer and more flexible regulatory approaches, combined with efforts to enhance regulatory interoperability (such as developing cross-border governance frameworks, cross-mapping existing governance frameworks, and developing AI governance standards, benchmarks and testing capabilities), would be beneficial in reducing compliance costs, offering flexibility and serving as a regulatory standard.

Editor’s note: This article is a guest contribution from Alistair Simmons and Matthew Rostick. LawTech.Asia is grateful to them for contributing this article, as well as to Ken Chia, Eugene Tan, Jacqueline Wong, and the team of lawyers from Baker McKenzie and member firms across the Asia Pacific, Simpson Grierson, JSA and Kim, Choi & Lim for their editorial inputs. Given the constantly changing nature of AI governance and regulation, readers should note that the information in this article has been verified as current as at September 2023. Where possible, we have also provided more recent updates up to March 2024.