Written by Brendan Tan Liang En | Edited by Josh Lee Kok Thong
LawTech.Asia is proud to collaborate with the Singapore Management University Yong Pung How School of Law’s LAW4060 AI Law, Policy and Ethics class. This collaborative special series is a collection featuring selected essays from students of the class. For the class’ final assessment, students were asked to choose from a range of practice-focused topics, such as writing a law reform paper on an AI-related topic, analysing jurisdictional approaches to AI regulation, or discussing whether such a thing as “AI law” exists. The collaboration is aimed at encouraging law students to analyse issues using the analytical frames taught in class, and apply them in practical scenarios combining law and policy.
This piece, written by Brendan Tan, argues that “AI law” as a body of law exists. In doing so, Brendan explores how “AI law” should be defined, and develops reasons on why “AI law” can be seen as a legitimate social construct.
Introduction
Following great excitement over the approval of the European Union’s (“EU”) Artificial Intelligence Act[1] (“AIA”),[2] some speculate whether the Brussels effect – the EU’s leverage in shaping global market norms – will be felt in global AI regulation.[3] Others respond in the affirmative,[4] concluding that the AIA will influence regulation adopted by non-EU jurisdictions given its image as the “world’s first comprehensive AI law”.[5] Amidst such hype, however, one ought to be more sceptical about whether the concept of “AI law” truly exists, or whether it is merely the next buzzword after AI.[6] This paper therefore aims to demonstrate how AI law has cemented its status beyond mere fiction. Part II elaborates on the definitional difficulties of AI law, while Part III expounds on reasons why AI law could still be deemed legitimate as a social construct. Lastly, Part IV evaluates and refines one’s perspective towards AI law before concluding.
How does one define AI law?
Determining whether AI law truly exists raises the question of how it may be defined. Some opine that the term “AI law” is meaningless, given the uncertainty over what AI law entails. The following therefore considers such difficulties and evaluates whether they truly threaten the notion of “AI law”.
What is AI?
The first core challenge for AI law is establishing a precise definition of AI, which determines the scope of regulation. Legal theorist Lon Fuller outlined eight key elements that laws should possess for a legal system to function and to facilitate individuals’ interaction with the law.[7] In particular, laws must be well-communicated and use workable definitions when describing the subject of regulation. In defining AI, however, complications arise in discerning whether a system is “intelligent”. With little agreement on what intelligence means,[8] the late AI pioneer John McCarthy concluded that its definition is multifaceted, frequently drawing parallels with human intelligence and characteristics like learning, adaptability, and even consciousness.[9] As technology advances, the definition of AI therefore changes with time – giving rise to the quip that “as soon as it works, no one calls it AI anymore”.[10]
That conundrum aside, current attempts to define AI lean towards either the “human-centric” or the “rationalist” approach.[11] Both approaches, however, have their drawbacks. The human-centric approach, first suggested by Alan Turing, focuses on the ability of AI to mimic human behaviour and deceive a human into thinking it is not a machine.[12] The glaring flaw is that intelligence cannot be inferred from simply mimicking human behaviour. Further, such a definition is both over-inclusive (as it would capture simple non-AI systems like automatic car headlights) and under-inclusive (as it would exclude systems that integrate deep neural networks with sophisticated search algorithms, like AlphaGo).[13]
Rationalists, as Stuart Russell and Peter Norvig propose, would define AI as systems that work towards a fixed goal with the aim of achieving an optimal outcome.[14] This general concept, however, meets similar resistance: it may be over-inclusive (imposing needless regulation on innocuous programs such as video-game opponents) and under-inclusive (failing to account for future technology capable of independent goal-setting). In light of these difficulties, some conclude that current definitions of AI are too vague for legal purposes,[15] thus dismissing the notion of “AI law”.
How to design AI law?
Aside from struggling with a definition of AI, there are multiple approaches to fashioning the content of AI law. The form AI law takes would depend on the motives behind such laws, the types of AI systems covered, and the sectors to which the law applies.
The approach regulators take towards AI laws would first depend on the underlying objectives for implementing such laws. The traditional rule-based approach, while promoting clarity and standardisation through specific and detailed rules, is too time-consuming and generalistic given the complex nuances of AI systems. The more principled risk-based approach should therefore be preferred, given that such laws allow for targeted regulation of high-risk AI while minimising regulatory burdens on low-risk systems.[16] The AIA currently adopts such an approach, categorising AI systems into four risk brackets ranging from “minimal risk” to “unacceptable risk” and imposing tailored obligations for each category.[17] Systems falling within the “minimal risk” category include AI-powered video games or spam filters, while systems that aim to manipulate vulnerable groups, as well as real-time remote biometric systems, are labelled “unacceptable risk”.[18] China has similarly announced its intention to follow such a risk-based approach in its new draft AI laws.[19] However, given criticisms that such an approach places insufficient emphasis on data inputs,[20] or may misjudge the risks of AI systems,[21] it remains to be seen whether alternative approaches will be adopted. For example, countries could emulate the US Executive Order and craft laws to avoid discriminatory or biased outcomes, ensuring that the application of AI systems achieves only “equitable outcomes”.[22] Hence, the objectives for enacting AI laws may vary between countries.
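To make the risk-based approach concrete, the minimal Python sketch below models a risk-tiered rulebook. The four tier names mirror the AIA’s categories, but the obligations attached to each tier and the keyword-based classifier are loose illustrative assumptions, not the Act’s actual legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    """The AIA's four risk brackets (see n 17)."""
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Illustrative obligations only -- a loose paraphrase, not the Act's text.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
    RiskTier.LIMITED: ["transparency notices (disclosing that users face an AI)"],
    RiskTier.HIGH: ["conformity assessment", "accuracy and robustness testing",
                    "human oversight measures"],
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
}

def classify(system_description: str) -> RiskTier:
    """Toy keyword classifier; a real regulator applies legal tests,
    not string matching."""
    text = system_description.lower()
    if "biometric" in text or "manipulat" in text:
        return RiskTier.UNACCEPTABLE
    if "credit scoring" in text or "recruitment" in text:
        return RiskTier.HIGH
    if "chatbot" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # e.g. spam filters, video-game AI

for desc in ("spam filter", "real-time remote biometric identification"):
    tier = classify(desc)
    print(f"{desc} -> {tier.value}: {'; '.join(OBLIGATIONS[tier])}")
```

The point of the exercise is structural: under a risk-based regime, tailored obligations attach to categories of risk rather than to all AI systems uniformly.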
Next, regulators would need to decide on the subject of AI laws. AI systems can generally be represented on a spectrum, from narrow (weak) AI to strong (general) AI. Narrow AI, or “Good Old-Fashioned AI”,[23] refers to systems designed to fulfil a designated task. Often referred to as “expert systems”, these deterministic decision-making models guarantee a specific final output for any given input by following a predetermined set of rules, one example being recommendation algorithms.[24] Strong AI, by contrast, has the ability to independently create new goals, learn, and solve problems.[25] Similar to its portrayal in science fiction,[26] strong AI is able to adapt its decision-making strategies to any environment, but no existing system squarely falls within this definition.[27] Rather, neural networks and deep learning applications are simply described as being closer to strong AI on the scale. Therefore, should the scope of AI systems be under-inclusive and fail to account for rapid technological developments, regulators risk laws being yet again inadequate to impose obligations on the use of such AI. Yet, being over-inclusive carries two risks: applying regulations to simple systems such as calculators, and pre-emptively regulating future technology. The latter falls squarely within the “precautionary principle”, which dictates that if the potential consequences of an activity may be severe, preventative measures should be proactively implemented in light of scientific uncertainties.[28] However, such a move may hinder innovation and discourage adoption, potentially undermining a nation’s economic growth and competitive advantage.[29] Hence, regulators would similarly need to grapple with the scope of AI systems to be governed under AI laws.
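Returning to the deterministic “expert systems” described above, the following minimal sketch (with a hypothetical rule table standing in for any real recommender) shows what makes such systems deterministic: the same input always yields the same output, because behaviour is exhausted by a predetermined set of rules rather than learned from data.

```python
# A hypothetical rule-based recommender: a fixed rule table guarantees a
# specific output for any given input, with no learning involved.
RULES = {
    ("comedy", "short"): "recommend a sitcom episode",
    ("comedy", "long"): "recommend a comedy film",
    ("drama", "short"): "recommend a short drama",
    ("drama", "long"): "recommend a drama series",
}

def recommend(genre: str, length: str) -> str:
    # Deterministic lookup: identical inputs always produce identical outputs.
    return RULES.get((genre, length), "no recommendation")

assert recommend("comedy", "short") == recommend("comedy", "short")
print(recommend("comedy", "short"))  # -> recommend a sitcom episode
```

A deep neural network, by contrast, derives its input-output mapping from training data, which is why its behaviour cannot be read off a rule table in this way.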
Lastly, authorities would need to determine how AI laws should regulate this myriad of systems. Countries may choose to regulate horizontally, creating one overarching set of laws that encompasses all applications of a particular system across diverse industries, much like how the AIA pertains to the development and usage of AI systems across all sectors.[30] In contrast, vertical regulation represents a sector-specific or application-specific approach.[31] It tailors regulations to the particular challenges and risks associated with specific AI applications within a given industry, one example being China’s binding regulations on recommendation algorithms and AI-generated media.[32] While this distinction need not be binary, it is yet another consideration for regulators, adding to the uncertainty surrounding “AI law” – and lending more force to naysayers who would dismiss the term altogether.
What is the scope of “law”?
Wrapping up the discussion on the ambiguity of “AI law” is the question of what counts as “law”. Regulators have an array of tools at their disposal to achieve regulatory intervention and enforcement, ranging from hard (regulatory) law to standards and guidelines – otherwise known as soft law.[33] Soft law, unlike traditional top-down binding regulation that dictates behaviour, refers to non-binding legal instruments that instead create expectations for behaviour.[34] These recommended industry standards and ethical guidelines account for the inadequacies of hard law, given the limitations of comprehensive legislation in a fast-paced, complex and pervasive field like AI.[35] Soft law promotes collaboration between academia, industry and non-governmental organisations, while its informality allows for flexible governance and swift adoption. This facilitates the development and implementation of a diverse set of governance approaches, critical for navigating the ever-changing AI environment.[36] However, one shortcoming stems from the absence of legal force and direct government enforcement mechanisms, which may lead to uneven compliance, with the most problematic actors in an industry potentially being the least likely to adhere to voluntary guidelines.[37] Similarly, some worry about the “potential underperformance” of soft law, which stems from the challenges of aligning the interests of industry and government stakeholders, and the difficulty of fully comprehending the incentives that “drive participation in a voluntary regime”.[38]
That said, there has been increasing consensus for a hybrid approach that incorporates both hard and soft law to govern AI. For example, the OECD has promoted utilising both hard and soft law for “agile” governance,[39] while the World Economic Forum emphasises how both forms of law are complementary.[40] Notably, the AIA imposes extensive mandatory obligations on high-risk AI systems (such as complying with accuracy, transparency and human supervision requirements) while encouraging providers of lower-risk systems to apply similar standards.[41] Approaches adopted by countries worldwide may, however, differ. Where the EU and China predominantly rely on hard law, countries like Singapore and the UK depend more on soft law. Singapore’s tripartite effort in launching the Singapore National AI Strategy,[42] the Model AI Governance Framework and its AI governance testing framework (“AI Verify”)[43] reinforces Singapore’s “light touch”[44] voluntary approach to AI regulation while continuing to boost the digital economy.[45] Similarly, the UK’s “pro-innovation”[46] stance on AI regulation prioritises fostering development while managing risks – relying on five non-statutory principles outlined in its 2023 white paper to produce adaptable regulations.[47] Consequently, regulators would need to strike a balance between hard and soft law, adding another element to the uncertainty over what exactly constitutes “AI law”.
Is AI law legitimate?
Having elucidated the difficulties in defining and designing AI law, we turn to how the term “AI law” may be perceived. Social construct theory captures the idea that concepts exist because we agree that they exist. In particular, the Social Thesis construes law as a social phenomenon,[48] relying on social facts to proclaim law as legally valid.[49] Whether AI law can legitimately exist as a social phenomenon hence turns on whether various stakeholders accept AI law as a norm.
Academia
Some academics may experience a feeling of déjà vu when questioned on the legitimacy of “AI law” – and rightfully so, given that such questions could be viewed as the 21st-century rendition of the debate over whether “cyber law” truly existed. Judge Easterbrook famously dismissed cyber law as the “Law of the Horse”,[50] claiming that it was shallow, lacked unifying principles, and amounted to an amalgamation of multiple broader laws referencing a specific subject (the “horse”). He also proclaimed that the “law of cyberspace” was unable to “illuminate the entire law” and was therefore unsuitable as a law course for students.[51]
Professor Lawrence Lessig challenged this belief, asserting that cyber law goes beyond being “old wines in new bottles”.[52] While the “law of the horse” critique was conceptually accurate, there were gaps in the then-existing legal framework that failed to address the problems arising in cyberspace. Cyberspace brought about the unprecedented phenomena of anonymity and data tracking, leading to untraceable defamation, unfettered access to explicit content,[53] and personal data breaches. Coupled with cyberspace’s capacity to transmit information across international borders almost instantly,[54] traditional principles of law were inadequate for tackling these risks – paving the way for a new form of cyber law. Additionally, Lessig saw value in teaching cyber law even if its theoretical foundation did not comprise general legal principles. By comprehending how law is but one of the “Four Modalities of Regulation”,[55] students in a cyber law course would better appreciate how norms, markets and code exert as much influence as legislation in regulating behaviour in cyberspace.
Twenty-five years on, Lessig’s perspective remains relevant to current discussions on AI. The development of AI has manifested two unique problems that warrant a separate body of law.[56] First, AI systems are capable of independent development, boasting the ability to internalise data and adapt in ways unplanned by their designers. This raises questions as to who bears legal responsibility for any mishaps. While tortious concepts like vicarious liability may apply to hold the human principal liable for damages caused by AI agents, they fail to account for the “black box” nature of AI systems.[57] Given the difficulty of fully understanding the internal workings of AI systems like machine-learning algorithms and deep neural networks, existing legal principles such as intent and causation are insufficient to impute liability for AI onto its designers.[58]
Second, generative AI has exacerbated the threat of misinformation and disinformation in cyberspace.[59] By producing fabricated media that superimposes someone’s likeness or voice onto another person,[60] the proliferation of deepfakes has adversely impacted the reputation of individuals, notably causing distress and humiliation to women featured in non-consensual deepfake pornography videos. Additionally, hyper-realistic synthetic voices have significantly contributed to the evolution of cybercrime like “vishing”, where scammers manipulate victims into carrying out unauthorised financial transactions.[61] However, current legal frameworks insufficiently address these concerns. While the tort of defamation may apply to deepfake pornography, its force is significantly weakened by the complete defence of justification should the publisher include a watermark clarifying that the video was doctored.[62] More importantly, the lack of ex ante regulation of such AI, alongside cyberspace’s potential for anonymity, means that deepfake abusers often cannot be traced.[63] Left unregulated, deepfakes could run rampant in cyberspace, implicating crucial spheres of society like politics.[64] These two risks highlight the disconnect between AI and existing legislation,[65] placing AI in a regulatory void – hence laying the foundation for the recognition of AI law as an independent discipline.
Lessig’s “Four Modalities of Regulation” similarly explains why educational institutions see value in offering AI law as a course today. Students taking such courses at established universities like the London School of Economics,[66] Lund University and Singapore Management University would recognise the de-centrality of legislation in the realm of AI,[67] instead understanding how legislation, norms, market powers and AI each have a role in influencing one another and shaping “AI law” – a theme expounded upon in the remainder of this paper.
Companies
Echoing warnings by the Computer & Communications Industry Association that premature regulatory intervention would hinder innovative AI developers,[68] technology companies worry that enforcing an overcautious “AI law” would stifle innovation.[69] Beyond these worries legitimising “AI law”, the term would resonate with companies on two additional fronts.
First, AI law is critical for firms to adapt their behaviour and avoid the heavy financial penalties imposed on companies that misuse AI.[70] Europe’s General Data Protection Regulation (“GDPR”) has imposed hefty fines on technology companies, with the ten biggest fines alone amounting to a staggering $4.1 billion and total fines now surpassing $5.5 billion.[71] Notably, Meta faced a historic €1.2 billion fine under the GDPR for sending data collected from European Facebook users to the United States,[72] showcasing the severe repercussions organisations face for major violations. There are also increasing calls for harsher penalties for AI misconduct,[73] with administrative fines under the AIA reaching up to €30 million or 6% of a company’s worldwide annual turnover.[74] Companies would therefore regulate their practices to avoid contravening AI laws, for fear of such monetary sanctions.
More importantly, AI law helps firms maintain their clientele’s trust amidst the “age of uncertainty”.[75] In the wake of the fourth industrial revolution,[76] firms seek to capitalise on AI to enhance productivity and elevate customer experiences.[77] It is therefore paramount that stakeholders and customers trust that firms will not succumb to the multitude of risks surrounding AI.[78] Companies that abuse this trust and fail to account for AI risks like data leaks suffer devastating long-term consequences.[79] For example, the personal data of 147 million Americans was leaked in the 2017 Equifax data breach, severely tarnishing the company’s reputation. The resulting lack of trust persisted over the following years, undermining the organisation’s success.[80] Similarly, the 2018 Facebook-Cambridge Analytica scandal saw the misuse of 87 million Facebook users’ data to sway the 2016 United States presidential election.[81] This severely damaged trust in Facebook,[82] with only 28% of users at the time trusting the company’s commitment to privacy, while 74% of users altered their engagement with Facebook by changing their privacy settings or deleting their accounts altogether.[83]
Conversely, demonstrating positive regulatory compliance with established guidelines enhances a company’s credibility.[84] Abiding by AI laws reflects the firm’s commitment to prioritising data integrity and privacy, bolstering the company’s reliability – a crucial competitive advantage.[85] With firms acknowledging how AI’s development rapidly outpaces regulation,[86] some have even taken proactive steps to develop internal data and AI use guidelines in anticipation of future regulations.[87] For example, companies may govern their large language models (“LLMs”) through techniques like reinforcement learning, continuous monitoring, human oversight, and stricter data access controls.[88] Therefore, firms that self-regulate in view of upcoming AI laws would bolster trust and resilience for stakeholders navigating an evolving regulatory landscape, maintaining their competitive advantage.[89]
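As a minimal sketch of the “continuous monitoring” technique alone, and assuming a hypothetical policy blocklist and a stub in place of any real LLM endpoint, the wrapper below logs every prompt and response and flags policy-violating outputs for human review.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm-monitor")

# Hypothetical blocklist standing in for a real policy engine.
FLAGGED_TERMS = ["credit card number", "password"]

def monitored(generate: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an LLM call so every prompt/response pair is logged and
    policy-violating outputs are routed to human review."""
    def wrapper(prompt: str) -> str:
        response = generate(prompt)
        logger.info("prompt=%r response_len=%d", prompt, len(response))
        if any(term in response.lower() for term in FLAGGED_TERMS):
            logger.warning("flagged for human review: %r", response[:80])
        return response
    return wrapper

# A stub model stands in for a real LLM endpoint.
fake_llm = monitored(lambda prompt: f"echo: {prompt}")
fake_llm("summarise our AI use policy")
```

Internal guardrails of this kind are voluntary, but they position a firm to absorb binding rules with less disruption when they arrive.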
Governments
As alluded to above, governments worldwide would recognise “AI law” as a discrete body of law, given that it allows them to address the regulatory voids created by AI and to nudge companies towards ethical AI practices through clear guidelines and strong deterrents. Governments would also give weight to “AI law” in two further respects.
First, implementing AI laws gives governments the opportunity not only to partake in, but to gain an edge in, the AI regulatory race. The evolving landscape of AI regulation creates an arena for global competition among governments to find the optimal balance between protecting citizens and fostering innovation. In striving for this balance, countries inadvertently compete through regulation to maintain their competitiveness in the global AI market.[90] For example, the advent of the AIA has placed pressure on the UK to respond, lest its quietude threaten the UK’s position as a frontrunner in European AI innovation despite having hosted the world’s first global AI safety summit in November 2023.[91] Further, countries may utilise the economic concept of the “first-mover advantage” to gain a competitive edge by implementing AI law to attract companies seeking clarity and stability. Countries that adopt new regulatory requirements for AI systems would compel domestic companies to adapt and gain a head start, while foreign companies wishing to operate in the country face additional compliance costs. This may push other countries to enact similar rules, but the first-mover country keeps its advantage as a “rule-setter” owing to its experience and established systems. This can be evinced from the EU’s push for the AIA in the hope of enabling European businesses to “benefit from a competitive advantage”.[92]
Interestingly, countries may attempt to be competitive by crafting AI laws that emphasise different objectives. The AIA reflects a strong commitment to regulation via binding legislation,[93] but some worry that such restrictions would hinder European competitiveness in the global tech sector against China and the US.[94] In contrast, China prioritises the exponential growth of its AI industry by facilitating industry development.[95] China’s new draft AI laws place more weight on innovation through a pro-industry stance towards copyright protections for generative AI outputs and the introduction of policy instruments such as tax incentives.[96] In line with its aim of positioning itself as a world leader in AI,[97] China’s draft AI laws, aimed at warmly welcoming foreign companies to develop AI within its jurisdiction, indicate how countries may utilise AI law as a strategic move to gain an edge in the global AI market.[98]
Second, governments would be alive to the implementation of AI law in order to have a voice in leading international AI governance. While the United Nations General Assembly recently adopted the first global resolution on AI, urging countries to protect personal data, monitor AI risks and safeguard human rights,[99] a multitude of international organisations have continuously served as platforms for collaboration on crafting international AI governance frameworks. For example, Organisation for Economic Co-operation and Development (“OECD”) member states established the first set of international guidelines for ethical AI development in 2019, emphasising the importance of “Trustworthy AI” and aiming to ensure that AI systems are “designed to be robust, safe, fair and trustworthy”.[100] Similarly, the United Nations Educational, Scientific and Cultural Organization (“UNESCO”) unveiled the first global standard on AI ethics in 2021, which all 193 Member States adopted.[101] In light of many other discussions among international bodies,[102] countries would therefore implement domestic AI law while advocating for their regulations to become the international benchmark, further securing their competitive advantage in the AI sphere. This was a huge motivating factor for the European Commission in advocating for the AIA, as it gave the EU a significant opportunity to take the reins in shaping international AI norms, standardisation, and collaborative efforts.[103] Conversely, the absence of US domestic AI laws hinders the US’s ability to set the global agenda and offer a model of global leadership, leaving it playing catch-up to regulations like the AIA.[104] Therefore, AI law would be of significance to governments worldwide in their endeavour to play a pivotal role internationally.
AI law – truth or myth?
The above discussion elucidated how various stakeholders may still acknowledge the term “AI law” despite the difficulties in defining and designing what AI law truly is. Are such difficulties truly salient threats to the term’s legitimacy? Arguably, the widespread debate over forms of AI governance and approaches towards AI law reinforces the construction of the term through the principle of performativity.[105] For example, collaboration between developers, users and academics with the government in forming AI policies shows that stakeholders see value behind such conversations.[106] Similarly, the shared understanding of “AI law” and its ability to influence behaviour further justify its existence.[107] AI law, despite many uncertainties, may therefore simply take a different shape and form across various jurisdictions to account for cultural differences. For example, China’s current laws on generative AI uniquely aim at “promoting core socialist values”[108] and “actively spread[ing] positive energy”.[109] More importantly, AI law would not be the first field of law to exhibit variation across jurisdictions,[110] much like criminal and contract law.[111] Therefore, while AI law certainly exists, there is no one universal AI law.
Seen differently, perhaps the question of AI law’s existence is nothing but a red herring.[112] Just as with historical debates on the validity of “cyber law”, the practical reality is that artificial intelligence is significantly impacting human lives in various aspects, and continues to rapidly evolve for better or worse.[113] Just as cyber law today has evolved significantly from its infancy, the focus should shift towards the future of AI law and how to develop AI regulations effectively and efficiently.[114] Examples include legal personality for AI (whether AI should be granted personhood and incur liability like humans) and copyright infringement concerns involving generative AI (whether copyright infringement applies to image-generating AI models),[115] with extensive academic literature on both topics to date.[116] Therefore, rather than dismissing AI law due to its uncertainty, it is recommended to “just get on with it”[117] and continuously search for optimal ways to enact AI laws amidst this ever-changing digital environment.
Conclusion
In closing, AI law is more than just a buzzword, and it certainly goes beyond simply pointing towards the EU’s AIA. While much uncertainty remains over what is (or can be) AI law, multiple stakeholders in today’s society actively contribute towards shaping AI regulations. This paper has therefore illuminated the nuances behind AI law, exploring what AI law may govern and how it should be designed to achieve its goals. A forward-looking mindset is hence highly encouraged, to envision future streamlined and functional AI laws.
Editor’s note: This student’s paper was submitted for assessment in end-May 2024. Information within this article should therefore be considered as up-to-date until that time. The views within this paper belong solely to the student author, and should not be attributed in any way to LawTech.Asia.
[1] European Parliament News Website <https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law> (accessed 13 April 2024).
[2] Karen Gilchrist, “World’s first major act to regulate AI passed by European lawmakers”, CNBC (13 March 2024) <https://www.cnbc.com/2024/03/13/european-lawmakers-endorse-worlds-first-major-act-to-regulate-ai.html> (accessed 13 April 2024).
[3] Alex Krasodomski, “The EU’s new AI Act could have global impact”, Chatham House (13 March 2024) <https://www.chathamhouse.org/2024/03/eus-new-ai-act-could-have-global-impact> (accessed 13 April 2024).
[4] Charlotte Siegmann, “The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market” <https://cdn.governance.ai/Brussels_Effect_GovAI.pdf> (accessed 13 April 2024).
[5] European Parliament Website <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> (accessed 13 April 2024).
[6] Nicole Goodkind, “AI was the buzzword of 2023. What happens in 2024?”, CNN (18 December 2023) <https://edition.cnn.com/2023/12/18/investing/premarket-stocks-trading/index.html> (accessed 13 April 2024).
[7] Supra n 15, at p 8.
[8] Jonas Schuett, “Defining the scope of AI regulations” (2023), Law, Innovation and Technology, 15(1) at p 63.
[9] Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” (2016), Harvard Journal of Law & Technology, Volume 29, Number 2 at p 359-360.
[10] Supra n 65, at p 62.
[11] Supra n 15, at p 9-13.
[12] Supra n 66, at p 360.
[13] Supra n 15, at p 12-13.
[14] Supra n 69.
[15] Supra n 65, at p 69-71.
[16] Supra n 65, at p 72.
[17] AIA, Supra n 33, at p 3 of “explanatory memorandum”.
[18] AIA, Supra n 33, at p 12 of “explanatory memorandum”. See also <https://www.dnv.com/insights/eu-artificial-intelligence-act/#:~:text=Unacceptable%20risk,-AI%20systems%20that&text=Examples%20include%3A,Real%2Dtime%20remote%20biometric%20systems> (accessed 13 April 2024).
[19] Qiheng Chen, “China’s Emerging Approach to Regulating General-Purpose Artificial Intelligence: Balancing Innovation and Control” Asia Society Policy Institute (07 February 2024) <https://asiasociety.org/policy-institute/chinas-emerging-approach-regulating-general-purpose-artificial-intelligence-balancing-innovation-and> (accessed 13 April 2024).
[20] Martin Kretschmer, “The risks of risk-based AI regulation: taking liability seriously” (September 2023).
[21] Claudio Novelli, “How to evaluate the risks of Artificial Intelligence: a proportionality-based, risk model for the AI Act.” (July 2023).
[22] The White House Website <https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/#:~:text=The%20Federal%20Government%20will%20enforce,and%20other%20harms%20from%20AI> (accessed 13 April 2024).
[23] Supra n 15, at p 18.
[24] Ibid.
[25] Supra n 15, at p 6. See also <https://www.ibm.com/topics/strong-ai> (accessed 13 April 2024).
[26] Such as Marvel’s “Avengers: Age of Ultron”, or “Skynet” in “Terminator Genisys”.
[27] Supra n 82. See also <https://builtin.com/artificial-intelligence/strong-ai-weak-ai> (accessed 13 April 2024).
[28] Daniel Castro, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence” (2019) Information Technology & Innovation Foundation, <https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence/> (accessed 13 April 2024).
[29] Ibid.
[30] Airlie Hillard, “Regulating AI: The Horizontal vs Vertical Approach” Holistic AI (16 August 2022) <https://www.holisticai.com/blog/regulating-ai-the-horizontal-vs-vertical-approach> (accessed 13 April 2024).
[31] Medium, “The AI Revolution: How Vertical and Horizontal AI are Transforming Industries” (01 April 2024) <https://medium.com/@LiatBenZur/the-ai-revolution-how-vertical-and-horizontal-ai-are-transforming-industries-65dea77d2c18#:~:text=While%20vertical%20AI%20focuses%20on,enabling%20data%2Ddriven%20decision%20making> (accessed 13 April 2024).
[32] Matt Sheehan, “China’s AI Regulations and How They Get Made” Carnegie Endowment for International Peace (10 July 2023) <https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117#:~:text=Vertical%20regulations%20target%20a%20specific,applications%20of%20a%20given%20technology> (accessed 13 April 2024).
[33] Professor John Braithwaite’s regulatory pyramid on ResearchGate website <https://www.researchgate.net/figure/The-regulatory-pyramid_fig2_51711050#:~:text=The%20regulatory%20pyramid%20models%20the,a%20mixture%20of%20enforcement%20measures.> (accessed 13 April 2024).
[34] Gary Marchant, “’Soft Law’ Governance of Artificial Intelligence” (2019) UCLA: The Program on Understanding Law, Science, and Evidence (PULSE) at p 3.
[35] Gary Marchant, “Governing Emerging Technologies Through Soft Law: Lessons for Artificial Intelligence” (2020) at p 7.
[36] Supra n 91, at p 4.
[37] Ibid.
[38] Gary Marchant, “Soft Law 2.0: An Agile and Effective Governance Approach for Artificial Intelligence” (2023) 24 MINN. J.L. SCI. & TECH. 375 at p 391.
[39] OECD Public Governance Policy Papers April 2024, see OECD website <https://www.oecd.org/mcm/Recommendation-for-Agile-Regulatory-Governance-to-Harness-Innovation.pdf> (accessed 13 April 2024).
[40] Supra n 95, at p 391-392.
[41] European Parliament Website <https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf> (accessed 13 April 2024).
[42] Ming Yi Ho, “Singapore’s National Strategy in the Global Race for AI” REGIONAL PROGRAMME POLITICAL DIALOGUE ASIA (26 February 2024) <https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai> (accessed 13 April 2024).
[43] AI Verify Foundation Website <https://aiverifyfoundation.sg> (accessed 13 April 2024).
[44] Josh Lee, “AI VERIFY: SINGAPORE’S AI GOVERNANCE TESTING INITIATIVE EXPLAINED” Future of Privacy Forum (06 June 2023) <https://fpf.org/blog/ai-verify-singapores-ai-governance-testing-initiative-explained/> (accessed 13 April 2024).
[45] Angela Tan, “S’pore’s $1b AI boost will help sustain competitive edge in digital era, say business leaders” The Straits Times (21 Feb 2024) <https://www.straitstimes.com/business/s-pore-s-1b-ai-boost-will-help-sustain-competitive-edge-in-digital-era-say-business-leaders> (accessed 13 April 2024).
[46] Cynthia O’Donoghue, “A “light touch” approach to AI regulation in the UK”, ReedSmith (18 April 2023) <https://www.technologylawdispatch.com/2023/04/privacy-data-protection/a-light-touch-approach-to-ai-regulation-in-the-uk/> (accessed 13 April 2024).
[47] Valeria Gallo, “The UK’s framework for AI regulation” Deloitte. (21 Feb 2024) <https://www2.deloitte.com/uk/en/blog/emea-centre-for-regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html/#:~:text=The%20UK%20Government%20has%20adopted,governance,%20and%20contestability%20and%20redress> (accessed 13 April 2024).
[48] Dan Priel, “Law as a Social Construction and Conceptual Legal Theory” (2019), Law and Philosophy, Volume 38, Issue 3, at p 267-287, JSTOR <http://www.jstor.org/stable/45214399> (accessed 13 April 2024).
[49] Andrei Marmor, Law in the age of pluralism (Oxford University Press, 2007) at p 36-37.
[50] Frank H. Easterbrook, “Cyberspace and the Law of the Horse,” (1996) University of Chicago Legal Forum p 207.
[51] Id, at p 208.
[52] Ira Steven Nathenson, “Best Practices for the Law of the Horse: Teaching Cyberlaw and Illuminating Law Through Online Simulations”, (2011) 28 Santa Clara High Tech 657 at p 663.
[53] Lawrence Lessig, “The Law of the Horse: What Cyberlaw Might Teach” (1999), Harvard Law Review at p 504.
[54] Michael Geist, “Cyberlaw 2.0”, (January 2003) Boston College law review vol 44 at p 325.
[55] Supra n 12, at p 506-507.
[56] Jacob Turner, Robot Rules (Palgrave Macmillan Cham, 2018) at p 64.
[57] Yavar Bathaee, “The Artificial Intelligence Black Box And The Failure Of Intent And Causation” (2018) Harvard Journal of Law & Technology Volume 31.
[58] Id, at p 906-927.
[59] Tiffany Hsu, “As Deepfakes Flourish, Countries Struggle With Response”, The New York Times (22 Jan 2023) <https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html> (accessed 13 April 2024).
[60] Harvard Kennedy School, “Deepfakes: The Implications of this Emerging Technology on Society and Governance” (2021) <https://hksspr.org/deepfakes-the-implications-of-this-emerging-technology-on-society-and-governance/> (accessed 13 April 2024).
[61] Stu Sjouwerman, “Deepfake Phishing: The Dangerous New Face Of Cybercrime”, Forbes (23 January 2024) <https://www.forbes.com/sites/forbestechcouncil/2024/01/23/deepfake-phishing-the-dangerous-new-face-of-cybercrime/?sh=109b24a84aed> (accessed 13 April 2024).
[62] Review Publishing Co Ltd v Lee Hsien Loong [2010] 1 SLR 52.
[63] Hyperverge, “Are Deepfakes Illegal? Overview Of Deepfake Laws And Regulations” (06 Feb 2024) <https://hyperverge.co/blog/are-deepfakes-illegal/> (accessed 13 April 2024).
[64] Ali Swenson, “Election disinformation takes a big leap with AI being used to deceive worldwide”, AP news (14 March 2024) <https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd> (accessed 13 April 2024).
[65] Brownsword, Roger, The Challenge of Regulatory Connection (Oxford Academic, 1 Jan. 2009).
[66] London School of Economics and Politics website, <https://www.lse.ac.uk/study-at-lse/summer-schools/summer-school/courses/law/ll204> (accessed 13 April 2024).
[67] Singapore Management University website, <https://computing.smu.edu.sg/newsletter/mitb-new-open-course-july-term> (accessed 13 April 2024).
[68] Pascale Davies, “Could the new EU AI Act stifle genAI innovation in Europe? A new study says it could”, euronews.next (22 March 2024) <https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could> (accessed 13 April 2024).
[69] Alessio Tartaro, “Assessing the impact of regulations and standards on innovation in the field of AI” (2023), Cornell University at p 3.
[70] Nikitha Anand, “The High Cost of Non-Compliance: Penalties Issued for AI under Existing Laws”, Holistic AI (28 March 2024) <https://www.holisticai.com/blog/high-cost-non-compliance-penalties-under-ai-law> (accessed 13 April 2024).
[71] BCG, “How Tech firms can respond to increased regulation” <https://www.bcg.com/publications/2024/how-tech-firms-can-respond-to-increased-regulation> (accessed 13 April 2024).
[72] European Data Protection Board website <https://www.edpb.europa.eu/news/news/2023/12-billion-euro-fine-facebook-result-edpb-binding-decision_en> (accessed 13 April 2024).
[73] Jessica Nall, “United States: Department of Justice announces new corporate compliance directives for AI along with increased penalties for AI-related misconduct”, Global Compliance News (5 April 2024) <https://www.globalcompliancenews.com/2024/04/05/https-insightplus-bakermckenzie-com-bm-antitrust-competition_1-united-states-department-of-justice-announces-new-corporate-compliance-directives-for-ai-along-with-increased-penalties-for-ai-related/> (accessed 13 April 2024).
[74] Regulation of the European Parliament and of the council laying down harmonised rules on Artificial Intelligence, European Commission 2021/0106 COD (Artificial Intelligence Act), (“AIA”), Article 71.
[75] Gary Grossman, “The AI ‘Age of uncertainty’”, VentureBeat (24 September 2023) <https://venturebeat.com/ai/the-ai-age-of-uncertainty/> (accessed 13 April 2024).
[76] Jesse Martin, “The Fourth Industrial Revolution: AI and Automation”, The Observer (October 2017) <https://theobserver-qiaa.org/the-fourth-industrial-revolution-ai-and-automation#:~:text=Today%2C%20the%20Fourth%20Industrial%20Revolution,systems%20to%20conquer%20complex%20tasks> (accessed 13 April 2024).
[77] Temitayo Oluwaseun Abrahams and others, “Mastering Compliance: A comprehensive review of regulatory frameworks in accounting and cybersecurity”, (January 2024) Computer Science & IT Research Journal, Volume 5, Issue 1 at p 131.
[78] Mike Thomas, “12 Risks and Dangers of Artificial Intelligence (AI)”, builtin (01 March 2024) <https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence> (accessed 13 April 2024).
[79] Amina Hassad, “Cybersecurity’s Impact on Customer Experience: An Analysis of Data Breaches and Trust Erosion”, (2023) Emerging Trends in Machine Intelligence and Big Data Vol. 15 No. 9 at p 159.
[80] RepTrak, “The Equifax Breach Is a Reputational Crisis that Will Linger” (12 June 2019) <https://www.reptrak.com/blog/the-equifax-breach-is-a-reputational-crisis-that-will-linger/> (accessed 13 April 2024).
[81] Muhammad Arif Amran, “Trust-Repair Discourse on Facebook’s Cambridge Analytica Scandal”, (2016) ELITE: Journal of English Language and Literature 1 (2) p 74-81.
[82] NBC news, “Trust in Facebook has dropped by 66 percent since the Cambridge Analytica scandal” (11 April 2018) <https://www.nbcnews.com/business/consumer/trust-facebook-has-dropped-51-percent-cambridge-analytica-scandal-n867011> (accessed 13 April 2024).
[83] Amy Binns, “Cambridge Analytica scandal: Facebook’s user engagement and trust decline”, The Conversation (27 March 2018) <https://theconversation.com/cambridge-analytica-scandal-facebooks-user-engagement-and-trust-decline-93814#:~:text=According%20to%20a%20recent%20Reuters,to%20abide%20by%20privacy%20laws> (accessed 13 April 2024).
[84] Trustpath, “Building trust and credibility: How AI compliance elevates your market position?” <https://www.trustpath.ai/blog/building-trust-and-credibility-how-ai-compliance-elevates-your-market-position> (accessed 13 April 2024).
[85] Supra n 38, at p 171.
[86] Xero, <https://www.xero.com/sg/media-releases/small-businesses-concerned-ai-development-adoption-outpacing-regulation/> (accessed 13 April 2024).
[87] Isabella Bousquette, “AI Is Moving Faster Than Attempts to Regulate It. Here’s How Companies Are Coping” The Wall Street Journal (27 March 2024), <https://www.wsj.com/articles/ai-is-moving-faster-than-attempts-to-regulate-it-heres-how-companies-are-coping-7cfd7104> (accessed 13 April 2024).
[88] PYMNTS, “Tech Companies Point to Self-Regulatory Strategies Before Senate AI Hearings” (11 September 2023) <https://www.pymnts.com/news/regulation/2023/tech-companies-point-to-self-regulatory-strategies-before-senate-ai-hearings/> (accessed 13 April 2024).
[89] Emilia Kallioinen, “The Making of Trustworthy and Competitive Artificial Intelligence” (April 2022), Master’s Thesis, University of Helsinki Faculty of Social Sciences at p 33.
[90] Nathalie A. Smuha, “From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence” (2021), Law, Innovation & Technology, Vol. 13 Iss. 1.
[91] Gavin Poole, “Balancing regulation and innovation to ensure the UK remains a global leader in AI” ComputerWeekly.com (28 March 2024) <https://www.computerweekly.com/opinion/Balancing-regulation-and-innovation-to-ensure-the-UK-remains-a-global-leader-in-AI> (accessed 13 April 2024).
[92] Supra n 48, at p 57.
[93] Karen Gilchrist, “World’s first major act to regulate AI passed by European lawmakers” CNBC (13 March 2024) <https://www.cnbc.com/2024/03/13/european-lawmakers-endorse-worlds-first-major-act-to-regulate-ai.html> (accessed 13 April 2024).
[94] EURACTIV, “AI Act’s global effects might be overstated, experts say” (21 March 2024) <https://www.euractiv.com/section/artificial-intelligence/news/ai-acts-global-effects-might-be-overstated-experts-say/> (accessed 13 April 2024).
[95] Johanna Costigan, “China’s New Draft AI Law Prioritizes Industry Development” Forbes (22 March 2024) <https://www.forbes.com/sites/johannacostigan/2024/03/22/chinas-new-draft-ai-law-prioritizes-industry-development/?sh=4a8b6b6c6095> (accessed 13 April 2024).
[96] Liu Caiyu, “Chinese scholars unveil draft on artificial intelligence law” Global Times (17 March 2024) <https://www.globaltimes.cn/page/202403/1308981.shtml> (accessed 13 April 2024).
[97] Graham Webster, “Full Translation: China’s ‘New Generation Artificial Intelligence Development Plan’” DIGICHINA (01 August 2017) <https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/> (accessed 13 April 2024).
[98] Supra n 54.
[99] Aljazeera, “UN approves its first resolution on artificial intelligence” (21 March 2024) <https://www.aljazeera.com/news/2024/3/21/the-un-approves-its-first-resolution-on-artificial-intelligence#:~:text=The%20United%20Nations%20General%20Assembly,the%20safeguarding%20of%20human%20rights> (accessed 13 April 2024).
[100] Supra n 49, at p 20.
[101] Ibid.
[102] See discussions by international bodies such as the G7, the G20 and the Global Partnership on AI (GPAI).
[103] Supra n 48, at p 45.
[104] Joshua P. Meltzer, “The US government should regulate AI if it wants to lead on international AI governance” Brookings (22 May 2023) <https://www.brookings.edu/articles/the-us-government-should-regulate-ai/> (accessed 13 April 2024).
[105] Judith Butler’s concept of performativity, Purdue University website <https://www.cla.purdue.edu/academic/english/theory/genderandsex/modules/butlerperformativity.html> (accessed 13 April 2024).
[106] AI Singapore Website <https://aisingapore.org/wp-content/uploads/2024/02/Responsibility-for-AI_AISG-Roundtable-1_final.pdf> (accessed 13 April 2024).
[107] Li-ann Thio, “Soft constitutional law in nonliberal Asian constitutional democracies” (2010), International Journal of Constitutional Law, Volume 8, Issue 4, at p 771.
[108] Cyberspace Administration of China, Ministry of Industry and Information Technology of the People’s Republic of China Order No. 9, Internet Information Service Algorithm Recommendation Management Regulations.
[109] Cyberspace Administration of China, Ministry of Industry and Information Technology of the People’s Republic of China Order No. 12, Regulations on the in-depth synthesis of Internet information services.
[110] David Nelken, Comparative criminal justice: Making sense of difference (Sage, 2011).
[111] E. Allan Farnsworth, “Comparative Contract Law” (2006), The Oxford Handbook of Comparative Law.
[112] UCLA Law Review, “Emerging Digital Technology and the ‘Law of the Horse’” (19 February 2019) <https://www.uclalawreview.org/emerging-digital-technology-and-the-law-of-the-horse/> (accessed 13 April 2024).
[113] Andrew Murray, “Looking Back at the Law of the Horse: Why Cyberlaw and the Rule of Law are Important”, Scripted (11 April 2023) <https://script-ed.org/article/law-horse-cyberlaw-rule-law-important/> (accessed 13 April 2024).
[114] Sermo Civilis, “The Development of Cyber Law: Past and Future” (04 January 2019) <https://hamzakaroumia.com/2019/01/04/the-development-of-cyber-law-past-and-future/> (accessed 13 April 2024).
[115] Mark Fenwick, “Originality and the future of copyright in an age of generative AI” (2023), Computer Law & Security Review.
[116] Supra n 15, p 175. See also <
[117] Cenerva, “Former Regulator Calls for Agile UN Agency to Regulate ‘AI on Steroids’” (16 February 2024) <https://cenerva.com/articles/former-regulator-calls-for-agile-un-agency-to-regulate-ai-on-steroids/> (accessed 13 April 2024).