Written by Terry Ng Tian Yu | Edited by Josh Lee Kok Thong
LawTech.Asia is proud to collaborate with the Singapore Management University Yong Pung How School of Law’s LAW4060 AI Law, Policy and Ethics class. This collaborative special series is a collection featuring selected essays from students of the class. For the class’ final assessment, students were asked to choose from a range of practice-focused topics, such as writing a law reform paper on an AI-related topic, analysing jurisdictional approaches to AI regulation, or discussing whether such a thing as “AI law” exists. The collaboration is aimed at encouraging law students to analyse issues using the analytical frames taught in class, and apply them in practical scenarios combining law and policy.
This piece, written by Terry Ng, argues that “AI law” as a body of law exists. In doing so, Terry studies the emergence of “hard” AI laws around the world, existing laws that apply to AI and the relevance of “soft” AI law initiatives.
Is there such a thing as AI law? Intuitively, the answer seems to be yes, especially given the recent passage of the EU AI Act. Yet, the nascency of this field has subjected the conceptual existence of AI law to some doubt. Unlike more established domains like criminal law, AI law today exists largely as non-binding principles as governments struggle not to over-regulate a fast-developing technology, raising doubts over whether such principles may be considered “laws”. Some jurisdictions also view AI as just another piece of technology that may be regulated using existing legal principles instead of AI-specific laws. On this view, there is no need to recognise the existence of a distinct “AI law”.
Nevertheless, this essay argues that AI law does exist. In Part I, examples of “hard”, legally binding AI laws from key jurisdictions are presented, showing that AI law already exists in the world today. In Part II, the counterargument that a special “AI law” need not exist because AI may essentially be governed by existing laws is considered and refuted. Part III considers another counterargument – that the non-binding AI regulatory measures found in most jurisdictions today should not be considered AI law – and offers reasons to refute it. Part IV concludes the essay.
Hard AI laws exist
The most straightforward argument for AI law’s existence is that various jurisdictions have already passed binding laws that regulate the development and deployment of AI. The impetus for such laws is clear considering the host of AI-related issues that have arisen lately. For example, AI’s reliance on (personal) data to generate outputs has created privacy concerns.[1] Outputs generated by AI from skewed data sets may also lead to discriminatory outcomes against people from certain demographic groups[2] (e.g., an AI software used by courts suggesting that people of a particular race deserve longer sentences).[3] Then, there is the problem of AI-enabled products making decisions independent of humans and causing harm to others (e.g., accidents caused by autonomous vehicles).[4] Such issues erode people’s trust in AI and may discourage adoption of the technology. For society to maximise the productivity-related benefits that may be reaped from AI, clear laws must be introduced to minimise AI’s harmful risks. Examples of such laws from major economies like the EU, US and China are introduced below.
The EU
The EU AI Act, passed in March 2024 and with most provisions expected to become applicable in 2026,[5] is the world’s first horizontal AI-specific legislation (i.e., it applies across all sectors).[6]
Two types of AI are regulated under the Act – “general-purpose AI models” and “AI systems”. The former refers to AI models capable of being integrated into a variety of downstream applications.[7] OpenAI’s GPT-4 is an example of a general-purpose AI model.[8] The latter refers to autonomous and adaptive machine-based systems that may infer from the inputs they receive how to generate outputs that can influence their environments.[9] AI systems may be built on general-purpose AI models – for example, ChatGPT is an AI system built on GPT-4.
The Act adopts a risk-based approach to AI regulation – the greater the risks posed by the AI, the more stringent the rules applicable. For example, providers of general-purpose AI models with “systemic risk” (e.g., those whose cumulative training compute exceeds 10^25 floating point operations)[10] have additional obligations to mitigate those risks and ensure an adequate level of cybersecurity protection.[11] For AI systems, different levels of obligations apply depending on whether they are categorised as posing unacceptable risk (i.e., they constitute a prohibited AI practice),[12] high risk, limited risk,[13] or minimal risk.[14]
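For readers who prefer a concrete illustration of the compute criterion, the short Python sketch below shows how a provider might flag a model as falling within the systemic-risk presumption based on the 10^25 floating-point-operation training-compute threshold. The threshold figure is taken from the Act; everything else (the function name, its inputs, and the idea that compliance would be checked this way) is an assumption made purely for illustration and forms no part of the Act itself.

```python
# Illustrative only: a hypothetical check inspired by the EU AI Act's presumption
# that a general-purpose AI model poses "systemic risk" when its cumulative
# training compute exceeds 10^25 floating point operations. The threshold is
# drawn from the Act; the function and example figures are assumptions.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True if the model meets the compute-based systemic-risk presumption."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    print(presumed_systemic_risk(3e25))  # True: above the 10^25 FLOP threshold
    print(presumed_systemic_risk(8e24))  # False: below the threshold
```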
Austin said that law is the command of a sovereign backed by threat of sanction.[15] Hence, an important part of a conventional “law” is its penal provisions. The EU AI Act provides for stiff fines of up to 35 million euros or 7% of worldwide annual turnover (whichever is higher) for non-compliance.[16]
The US
In October 2023, the Biden administration issued the legally binding[17] Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. Unlike the EU AI Act, the Order mainly imposes obligations on the government rather than the private sector.[18]
Under the Order, various federal agencies are legally required to, amongst other things, develop standards, tools, and tests to help ensure that AI systems are safe to procure and use.[19] In response, the Office of Management and Budget issued a government-wide memorandum in March 2024 which, amongst other things, requires federal agencies to implement certain “minimum practices”[20] to manage the risks of safety-impacting or rights-impacting AI. These practices include periodically assessing the AI’s impacts on the public, and providing the public with transparency into how the government uses AI.[21]
If such executive orders are disobeyed by federal agencies, the government can seek administrative enforcement of them through a writ of mandamus (i.e., a court order compelling a government entity to perform an act it is legally required to do).[22]
China
In August 2023, the Cyberspace Administration of China introduced laws that explicitly regulate AI in the form of the Interim Measures for the Management of Generative AI Services. The Measures impose numerous obligations on generative AI service providers to achieve or avoid certain outcomes (e.g., to ensure that data used have lawful sources and that intellectual property rights are not infringed).[23] Providers who violate the Measures may face penalties, including an order to suspend the provision of their services.[24]
Objection #1: a special “AI law” does not exist as AI is effectively regulated by existing laws
While Part I has introduced examples from major jurisdictions of legally binding “hard” laws meant to regulate AI, most jurisdictions do not have similar AI-specific laws. This may partially be due to sentiments that AI is no different from other technologies, and that existing legal frameworks suffice for its regulation.[25] A similar argument was offered in the early days of the Internet, when US Judge Frank Easterbrook argued that there was no need for a new, separate law called “cyberlaw” to govern the Internet – it was better to adapt and apply existing general principles to regulate it. He argued that a separate “cyberlaw” would make as little sense as having a “law of the horse” – there are many laws regarding horses (e.g., those governing the sale of horses, injuries inflicted by horses, veterinarians’ standard of care towards horses, etc.), but collecting these disparate strands into a single “law of the horse” would result in missing important unifying principles.[26]
Easterbrook’s argument, however, was criticised by Professor Lawrence Lessig, who argued that cyberlaw was still worth studying because doing so would reveal the limits of hard law as a regulatory tool for cyberspace,[27] a realm distinct from the physical space because of the former’s unique characteristics (e.g., harder to zone certain people off from accessing undesirable products online,[28] less transparency in cyberspace[29], etc.).
While a large part of Lessig’s arguments revolved around alternatives to “law” as a regulatory tool for cyberspace, it is clear that he believed existing laws were insufficient to regulate the then-new technology because of its unique characteristics. In the context of AI, Turner has made similar arguments. He reasoned that new laws need to be created to regulate AI because of two unique traits which AI possesses – AI’s abilities to “make moral choices” and “develop independently”.[30]
Insofar as the objection to AI law’s existence is that existing legal principles are already sufficient to govern AI, this essay disagrees with that argument for three reasons. First, AI poses unique issues that existing laws will find challenging to address. Second, even if existing laws were relied upon, they should be (and have been) adapted to deal with AI. Third (and briefly), the idea that AI law relies upon and adapts a range of existing laws does not mean it cannot exist.
AI poses unique issues that existing laws may find challenging to address, so AI law must exist to resolve them
As mentioned, Turner believes that there are two unique traits of AI that render existing laws insufficient to regulate it. First, AI makes choices that, if taken by humans, would be regarded as having moral significance.[31] For example, if a child were to suddenly run across a road, an autonomous vehicle travelling down it at high speed and unable to brake in time could be faced with the options of either running over and killing the child or swerving and crashing into a barrier, killing the passenger.[32] In such moral dilemmas, a kneejerk reaction could be to fall back on existing laws that apply to humans (who are also moral agents),[33] but human-targeted laws may be inappropriate as they take into account human frailties not shared by AI.[34] For example, a human driver causing a motor accident may be charged with causing hurt by being criminally “rash” in his driving. Yet, rashness involves doing something while being conscious of the risks involved (where it was unreasonable to have taken that risk).[35] Offences committed with a rash mental state often carry lesser penalties than those committed intentionally. Such concessions in penalties according to the offender’s mental state, however, are very much based upon considerations of human frailty. It seems inappropriate to characterise an AI’s decision to cause hurt as “rash” or “intentional” when the AI simply arrived at it after processing the data it was trained upon. The policy of penalising those who are morally blameworthy because they consciously took unreasonable risks falls flat when applied to AI, which is non-living and cannot meaningfully be said to be “conscious” of the undesirability of the risks attached to its decisions. Hence, existing laws may struggle to deal with AI that can make decisions with moral consequences yet lacks moral blameworthiness.
But even if considerations of states of mind are irrelevant (e.g., in strict liability laws), Turner adds that existing laws are insufficient because of AI’s ability to develop independently (i.e., to learn from data in unplanned ways and create improved AI systems).[36] This is particularly relevant when the AI system undergoes unsupervised machine learning – a process where the algorithm is given unlabelled data and the system then groups the data into clusters of similar features.[37] This process may help programmers “discover knowledge” because they need not know what patterns exist in the data – the AI is able to infer these by itself and make decisions based on them.[38]
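As a purely illustrative aside, the short Python sketch below shows the kind of unsupervised clustering described above: an algorithm is handed unlabelled data and groups it into clusters of similar features without being told what patterns to look for. The data, parameters and library choice (scikit-learn’s k-means) are assumptions made for illustration and do not refer to any particular AI system discussed in this essay.

```python
# Illustrative sketch of unsupervised learning: the algorithm receives unlabelled
# data and groups it into clusters on its own, without the programmer specifying
# what patterns exist. The data and parameters here are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Unlabelled data: two loose groups of points, with no labels supplied.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

# The model infers the grouping itself from the features of the data.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

print(model.cluster_centers_)  # the centres the algorithm "discovered" by itself
print(model.labels_[:5])       # cluster assignments for the first few points
```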
In such cases, AI’s autonomy in making decisions without direct human intervention makes it difficult to establish a causal link between the wrong and a legal person (i.e., a natural person or corporation). Even if a link may be established, there is the issue of which legal person to make liable (e.g., the AI system’s deployer, its developer, etc.). To further complicate matters, the information required to establish that causal link between the wrong and a legal person, as well as to understand which legal person is most blameworthy, usually resides in the AI’s proprietary algorithms, which are not amenable to disclosure in court.[39]
Thus, without clear guidance from new AI laws, existing legal doctrines may struggle to provide a certain enough framework capable of plugging the causal gap between a wrong caused by the AI and a legal person on whom liability may be imputed.
Existing laws, if relied upon, should be adapted to deal specifically with AI and be explicitly recognised as “AI law” to assuage society’s AI-related worries
Even if existing laws were used as a foundation for AI regulation, they should be, and often have been, intentionally adapted to deal with AI-related concerns. Opponents of AI law’s existence may argue that if AI law were merely an adaptation of existing legal principles, there is no need to recognise it as a distinct area of law. However, it is submitted that failing to explicitly recognise these adaptations as “AI law” does a disservice to the community. In the face of growing public concern about AI[40] and greater demand from businesses for clarity in AI regulation,[41] it is crucial that the government is seen to be directly addressing AI-related concerns and closing the AI “trust gap” which has emerged in society.[42] Recognising these tailored versions of existing laws as “AI law” (and not as a mere application of old laws) would signal to society that the government has contemplated the new concerns brought about by AI and devised a clear regulatory framework for it, assuaging people’s concerns.
Existing laws have in fact been intentionally adapted by governments around the world to address AI concerns. For example, in Singapore, the Personal Data Protection Commission (“PDPC”) adapted the Personal Data Protection Act (“PDPA”) to the AI context by issuing the “Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems”.[43] These Guidelines were issued for the purposes of (i) providing certainty to businesses on how the PDPA would apply to AI systems; and (ii) assuring consumers that their personal data will be used responsibly by AI systems by setting out baseline guidance on this matter.[44] These two purposes align with the context introduced above of growing public concern regarding AI and greater demand from businesses for clarity in AI regulation. While the Guidelines are not themselves binding,[45] they indicate how the PDPC would interpret the PDPA when AI is used,[46] and are therefore highly persuasive. For example, the Guidelines state that the “Accountability Obligation” may be satisfied if the AI-deploying organisation develops policies outlining how it intends to use AI systems fairly and reasonably, but the level of detail to be included in those policies depends on the risk level, which is affected by factors like how autonomous the AI system is.[47]
Regardless, AI law’s reliance on a range of existing laws does not mean it cannot exist
Finally, the “law of the horse” criticism loses strength when one considers that many areas of law today are widely accepted to exist despite being, in some ways, a collation of existing laws applied to a new target of regulation. The prime example is company law, which is recognised in almost every jurisdiction.[48] In regulating companies, classic doctrines of contract, criminal, and tort law have been applied without threatening the integrity of “company law” as a distinct area of law. For example, a company’s constitution has been treated as a contract amongst its members,[49] and rules on implied contractual terms are applied to it.[50] Tortious principles of vicarious liability, traditionally used to govern a master-servant relationship,[51] are now applied to hold companies liable for employees’ wrongs.[52] Companies may also be held criminally liable for offences through traditional principles of agency law.[53] If company law exists in its own right despite its recourse to older legal principles, there is no reason why AI law may not exist in the same way. Judge Easterbrook’s fear that studying laws applying to a particular creation (e.g., horses, cyberspace, companies or AI) risks missing unifying principles is unlikely to materialise if students already understand those unifying principles (e.g., in contract, tort, or criminal law) before adapting them in their study of the particular (e.g., company or AI) law.
Moreover, since AI is arguably far more versatile and applicable to many more use cases than horses, comparing AI law to a “law of the horse” may not be appropriate. Just because a distinct “horse law” may not be worth having does not mean the same applies to “AI law”, especially given the greater societal concerns over AI today.
Objection #2: Non-binding AI regulation should not be considered AI “law”
Most AI regulations are not legally enforceable and compliance with such measures is often voluntary, making them appear not to be “laws”
As mentioned, many jurisdictions do not have AI-specific legislation like the EU AI Act that is legally binding. Instead, countries have chosen to adopt a more laissez-faire, light-touch approach to AI regulation to avoid unduly stifling innovation.[54] This often involves issuing non-binding guiding principles and governance frameworks for AI (either at the national level through a government, or international level through an intergovernmental organization). Alternatively, AI regulation may also come in the form of self-regulation or co-regulation. The common thread running through these measures is that compliance is voluntary.
An example of a non-binding set of guidance for AI governance is the PDPC’s Model AI Governance Framework, the second edition of which was released in January 2020.[55] Not only does it provide two guiding principles for “responsible AI” (i.e., “human-centric” AI solutions, and “explainable, transparent and fair” AI decisions),[56] it also suggests practical ways these principles may be applied to four areas (internal governance, human involvement in AI decisions, operations management, and stakeholder interaction).[57]
Internationally, an example would be the Organisation for Economic Co-operation and Development (“OECD”)’s Recommendation of the Council on Artificial Intelligence, first adopted in May 2019. It lays out five non-binding principles for “responsible stewardship of trustworthy AI”, and recommends certain national policies (e.g., investment in AI research and development) and international co-operation towards trustworthy AI.[58]
Self-regulation essentially refers to regulatory action voluntarily taken by industry players instead of the government.[59] Self-regulation may arise due to fears that the state would intervene with hard regulation if industry players failed to demonstrate their ability to take care of the public interest while running their businesses.[60] One example is the Frontier Model Forum, a self-regulatory body consisting of key AI players like Google, OpenAI, Microsoft and Anthropic, with the aim of developing safety standards for the responsible development and deployment of AI.[61]
Co-regulation, on the other hand, is similar to self-regulation except that the government is involved in some way.[62] For example, Singapore’s Advisory Council on the Ethical Use of AI and Data was set up by the Infocomm Media Development Authority (“IMDA”) to bring together leaders from 13 industry stakeholders[63] to, amongst other things, assist the government in developing AI governance frameworks for the industry’s voluntary adoption.[64]
However, opponents of AI law’s existence would argue that such non-binding AI regulatory measures can never truly be termed “law” because compliance with them is not legally enforceable. Regulatory measures like “self-regulation” or non-binding “standards” sit on lower rungs of Ayres and Braithwaite’s “regulatory pyramid” than “laws”, a separate category at the apex of the pyramid, indicating that law is the most coercive regulatory measure, requiring the most government intervention.[65]
If we defined “regulation” broadly to be any “process involving the sustained and focused attempt to alter the behaviour of others with the intention of producing a broadly identified outcome”,[66] the above non-binding measures could be considered AI “regulation” since they are aimed at changing players’ behaviour towards more responsible development or deployment of AI, but they are still not AI “law”.
However, the reality of AI regulation is that non-binding measures play an inseparable and pivotal role alongside binding ones, and they are thus essential to one’s understanding of what “AI law” is
Nevertheless, this essay disagrees with the above argument because considering only legally binding measures in studying “AI law” misrepresents the realities of the AI regulatory landscape.
To avoid oversimplifying the AI regulatory landscape, one must consider the pivotal role non-binding measures play alongside legally binding laws for any study of “AI law” to be meaningful. In reality, binding AI laws exist against a backdrop of non-binding guidance which fleshes out the obligations they impose. The effectiveness of binding laws therefore depends on non-binding guidance helping people understand and implement them. For example, the legally binding US Executive Order 14110 requires certain agencies and government officials to develop guidelines and best practices for AI safety.[67] Yet, the guidelines which industry players may refer to are non-binding in nature, such as the National Institute of Standards and Technology’s (“NIST”) AI Risk Management Framework,[68] adoption of which has been explicitly stated to be voluntary.[69]
The relative scarcity of legally binding rules in AI law should not come as a surprise. As with other new technologies, regulators face the “Collingridge Dilemma” and are slow to impose binding measures that may be hard to reverse later. The Dilemma holds that in the early stages of a technology’s development, when intervening is easier and cheaper, there is hesitation to intervene because the technology’s consequences are not yet known and intervention may stifle innovation; yet, by the time the consequences are known, intervening has become too costly.[70] Hence, any AI regulation imposed must be flexible and reversible, so that it can adapt to our changing understanding of AI as the technology develops – making non-binding measures preferable to binding ones.
Thus, an overemphasis on binding laws when contemplating technology regulation is undesirable. Lessig identified four modalities of regulation constantly acting upon and constraining people’s behaviour. Binding laws are only one of them – social norms, architecture (i.e., the physical world), and market prices also play a part.[71] He argued that in technology regulation, binding laws play a limited role compared to the other modalities.[72] For example, non-binding guidelines that are widely accepted could dominate and become “social norms” that constrain AI players’ behaviour more effectively than binding laws do.
In fact, the understanding that people follow rules not merely because of threats of sanction, but because they have become socially accepted norms, is one reason why the modern positivist definition of “law” by Hart has moved away from the Austinian definition (discussed earlier)[73] to one simply based on a “rule of recognition” – as long as the rule (binding or not) comes from a source which society’s officials recognise as having authority to make law, it is law.[74] By this definition of law, most non-binding AI rules are “law”, insofar as they are made by the government (with lawmaking authority). Any study of AI law should therefore not leave non-binding measures out of the picture.
Additionally, the separation between binding and non-binding measures is not always a clean one. For example, jurisdictions may impose a risk-based regulatory framework where higher-risk AI face stricter binding measures, but lower-risk AI are governed by non-binding measures. The EU AI Act imposes many obligations on high-risk AI systems[75] which carry penalties for non-compliance.[76] However, it imposes no obligations on minimal-risk AI systems, only providing that such systems may voluntarily apply the rules for high-risk AI systems.[77] In such cases, both sets of guidance for minimal-risk and high-risk AI systems are explicitly provided for in the EU AI Act, which most would not hesitate to accept as “law”. Yet, only that for high-risk AI systems is binding. It would be artificial to argue that this means the lawmakers’ regulatory approach to minimal-risk AI systems is therefore not part of the “law”. After all, the non-binding nature of the guidance for minimal-risk AI is an integral component illustrating the larger risk-based approach for the EU AI Act.
Elsewhere, the EU AI Act provides that non-binding[78] codes of practice may be relied on by general purpose AI model providers for the purpose of conforming with their obligations.[79] This once again blurs the line between binding and non-binding measures, because adoption of the non-binding code of practice may lead to the legal effect of conformity with the Act.
Finally, as a matter of terminology, it should be noted that non-binding AI regulatory measures have been termed “soft law”. Soft laws are measures that “create substantive expectations that are not directly enforceable by governments”.[80] For example, the PDPC’s “Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems”, while not binding (as explained earlier), create substantive expectations about how the PDPC would interpret the PDPA as applying to AI systems, and are thus soft law.
Therefore, AI law should be understood as consisting not only of binding measures, but also of the large pool of non-binding ones.
Conclusion
This essay has argued that AI law exists. Major jurisdictions have passed binding AI laws. Claims that a special “AI law” does not exist because existing laws are sufficient should be refuted because AI poses new challenges. Recognising adaptations of existing legal principles as “AI law” may help assuage society’s AI-related concerns by signalling that the government has a clear regulatory answer to AI. Claims that non-binding AI regulatory measures should not be considered AI “law” must also be refuted. Such measures are appropriate for regulating technologies like AI and are often an inseparable part of the entire regulatory framework.
Editor’s note: This student’s paper was submitted for assessment in end-May 2024. Information within this article should therefore be considered as up-to-date until that time. The views within this paper belong solely to the student author, and should not be attributed in any way to LawTech.Asia.
[1] Simon Chesterman, We, The Robots? (Cambridge University Press, 2021) (“Chesterman”) at pp 171 – 246.
[2] Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Springer, 2018) (“Robot Rules”) at p 337.
[3] Chesterman, supra n 1, at pp 63 – 82.
[4] Id, at pp 31 – 62.
[5] European Parliament, “Artificial Intelligence Act: MEPs adopt landmark law” (13 March 2024) <https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law> (accessed 12 April 2024).
[6] Sidley, “EU Formally Adopts World’s First AI Law” (21 March 2024) <https://datamatters.sidley.com/2024/03/21/eu-formally-adopts-worlds-first-ai-law/> (accessed 12 April 2024).
[7] Regulation Laying Down Harmonised Rules on Artificial Intelligence 2024 (EU) (“EU AI Act”) at Art 3(44b).
[8] European Parliament, “General-purpose artificial intelligence” (March 2023) <https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf> (accessed 12 April 2024).
[9] EU AI Act, supra n 7, at Art 3(1).
[10] Id, at Art 52(a).
[11] Id, at Art 52(d).
[12] Id, at Art 5.
[13] Id, at Art 52.
[14] Id, at Art 95(1).
[15] Britannica, “Philosophy of Law: The 19th Century” <https://www.britannica.com/topic/philosophy-of-law/The-19th-century> (accessed 12 April 2024).
[16] EU AI Act, supra n 7, at Art 99(3).
[17] American Bar Association, “What Is an Executive Order?” (25 January 2021) <https://www.americanbar.org/groups/public_education/publications/teaching-legal-docs/what-is-an-executive-order-/> (accessed 12 April 2024).
[18] Future of Privacy Forum, “Comparison of State Executive Orders on Artificial Intelligence” (12 June 2023) <https://fpf.org/wp-content/uploads/2023/12/State-AI-Exec-Order-Comparison-Chart-1.pdf> (accessed 12 April 2024).
[19] The White House, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (30 October 2023) <https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/#:~:text=The%20Executive%20Order%20establishes%20new,around%20the%20world%2C%20and%20more> (accessed 12 April 2024).
[20] The White House, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (28 March 2024) <https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf> (“M-24-10”) at s 5(c).
[21] The White House, “FACT SHEET: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence” (28 March 2024) <https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/28/fact-sheet-vice-president-harris-announces-omb-policy-to-advance-governance-innovation-and-risk-management-in-federal-agencies-use-of-artificial-intelligence/> (accessed 12 April 2024).
[22] Robert B. Cash, “Presidential Power; Use and Enforcement of Executive Orders” (1963) 39 Notre Dame Law Review 44 at 51.
[23] Interim Measures for the Management of Generative Artificial Intelligence Services 2023 (China) at Art 7.
[24] Id, at Art 21.
[25] Robot Rules, supra n 2, at p 39.
[26] Id, at p 40.
[27] Lawrence Lessig, “The Law of the Horse: What Cyberlaw Might Teach” (1999) 113 Harvard Law Review 501 (“Lessig”) at 502.
[28] Id, at 503.
[29] Ibid.
[30] Robot Rules, supra n 2, at p 64.
[31] Ibid.
[32] Id, at p 67.
[33] Id, at p 65.
[34] Id, at p 66.
[35] Penal Code 1871 (2020 Rev Ed) s 26E(1).
[36] Robot Rules, supra n 2, at p 70.
[37] Id, at p 72.
[38] Ibid.
[39] Law Reform Committee, Singapore Academy of Law, Report on the Attribution of Civil Liability for Accidents Involving Autonomous Cars (September 2020) at para 5.11 (Chairmen: Charles Lim Aeng Cheng & The Honourable Justice Kannan Ramesh).
[40] Pew Research Center, “Growing public concern about the role of artificial intelligence in daily life” (28 August 2023) <https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/> (accessed 12 April 2024).
[41] Thomson Reuters, “Why AI still needs regulation despite impact” (1 February 2024) <https://legal.thomsonreuters.com/blog/why-ai-still-needs-regulation-despite-impact/#:~:text=Because%20of%20the%20risks%20of,impact%20of%20AI%20through%20regulation> (accessed 12 April 2024).
[42] Workday, “Closing the AI Trust Gap: Three Key Findings” <https://forms.workday.com/en-us/other/closing-the-ai-trust-gap/form.html?step=step1_default> (accessed 12 April 2024).
[43] Personal Data Protection Commission, “Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems” (1 March 2024) <https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/advisory-guidelines/advisory-guidelines-on-the-use-of-personal-data-in-ai-recommendation-and-decision-systems.pdf> (“Advisory Guidelines”) (accessed 12 April 2024).
[44] Id, at [3.1].
[45] Id, at [2.2].
[46] Personal Data Protection Act 2012 (2020 Rev Ed) s 49(1).
[47] Advisory Guidelines, supra n 43, at [10.3].
[48] Cally Jordan, “International Survey of Corporate Law in Asia, Europe, North America and the Commonwealth” (1997) <https://law.unimelb.edu.au/__data/assets/pdf_file/0005/1721174/8-small_International_Survey.pdf> (accessed 12 April 2024).
[49] Companies Act 1967 (2020 Rev Ed) s 39.
[50] Sembcorp Marine Ltd v PPL Holdings Pte Ltd [2013] 4 SLR 193.
[51] Joachim Dietrich and Iain Field, “Statute and Theories of Vicarious Liability” (2019) 43 Melbourne University Law Review 515 at 523.
[52] Skandinaviska Enskilda Banken AB (Publ), Singapore Branch v Asia Pacific Breweries (Singapore) Pte Ltd [2011] 3 SLR 540.
[53] Meridian Global Funds Management Asia Ltd v Securities Commission [1995] UKPC 5.
[54] Benjamin Cedric Larsen, “The geopolitics of AI and the rise of digital sovereignty” (8 December 2022) <https://www.brookings.edu/articles/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/> (accessed 12 April 2024).
[55] Personal Data Protection Commission, “Singapore’s Approach to AI Governance” <https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework> (“Singapore’s Approach”) (accessed 12 April 2024).
[56] Personal Data Protection Commission, “Model Artificial Intelligence Governance Framework Second Edition” (21 January 2020) <https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf> (accessed 12 April 2024) at [2.7].
[57] Id, at [1.1].
[58] Organisation for Economic Co-operation and Development, “Recommendation of the Council on Artificial Intelligence” (8 November 2023) <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449> (accessed 12 April 2024).
[59] Collins Dictionary, “Self-regulation” <https://www.collinsdictionary.com/dictionary/english/self-regulation> (accessed 12 April 2024).
[60] The Media Freedom Internet Cookbook (Christian Möller and Arnaud Amouroux gen ed) (Organisation for Security and Co-operation in Europe, 2004) at p 63.
[61] Frontier Model Forum, “How We Work” <https://www.frontiermodelforum.org/how-we-work/> (accessed 12 April 2024).
[62] Glen Hepburn, “OECD Report: Alternatives to Traditional Regulation” <https://www.oecd.org/gov/regulatory-policy/42245468.pdf> at [0.12].
[63] Singapore’s Approach, supra n 55.
[64] Infocomm Media Development Authority, “Composition of the Advisory Council on the Ethical Use of Artificial Intelligence (“AI”) and Data” (30 August 2018) <https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/archived/imda/press-releases/2018/composition-of-the-advisory-council-on-the-ethical-use-of-ai-and-data> (accessed 12 April 2024).
[65] Belinda Reeve, “The regulatory pyramid meets the food pyramid: can regulatory theory improve controls on television food advertising to Australian children?” (2011) 19 JLM 128 at 133.
[66] Julia Black, “Critical Reflections on Regulation” (2002) <https://eprints.lse.ac.uk/35985/1/Disspaper4-1.pdf> at p 20 (accessed 12 April 2024).
[67] Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (US) s 4.1.
[68] Id, s 4.3(iii).
[69] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0)” (January 2023) <https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf> (accessed 12 April 2024) at p 2.
[70] David E. Winickoff and Sebastian M. Pfotenhauer, “Chapter 10. Technology governance and the innovation process” <https://www.oecd-ilibrary.org/sites/sti_in_outlook-2018-15-en/index.html?itemId=/content/component/sti_in_outlook-2018-15-en> (accessed 12 April 2024).
[71] Lessig, supra n 27, at 506 – 507.
[72] Id, at 502.
[73] Stephen Perry, “Hart on Social Rules and the Foundations of Law: Liberating the Internal Point of View” (2006) 75 Fordham Law Review 1171 at 1172.
[74] Ibid.
[75] EU AI Act, supra n 7, at ch III ss 2 – 3.
[76] Id, at Art 99(4).
[77] Id, at Art 95(1).
[78] Id, at Art 56(1).
[79] Id, at Art 53(4).
[80] Gary Marchant and Carlos Ignacio Gutierrez, “Soft Law 2.0: An Agile and Effective Governance Approach for Artificial Intelligence” (2023) 24 Minnesota Journal of Law, Science & Technology 375 at 377.