Reading time: 16 minutes

Written by Victoria Rui-Qi Phua | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces, which tend to focus exclusively on the law. No doubt this draws on Judge Easterbrook’s famous remark from over two decades ago, which may be paraphrased as: “lawyers will never fully understand tech, so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Spanning a broad range of technology law and policy topics, the collaboration aims to encourage law students to think about where the law stands, and where it should stand, vis-à-vis technology.

This piece, written by Victoria Phua, puts forward an argument for attributing electronic personhood status to “strong AI”. According to her, algorithms trained by machine learning are increasingly performing or assisting with tasks previously exclusive to humans. As these systems provide decision-making rather than mere support, the emergence of strong AI has raised new legal and ethical issues which cannot be satisfactorily addressed by existing solutions. The “Mere Tools” approach regards algorithms as mere tools but does not address active contracting mechanisms. The “Agency” approach treats AI systems as electronic agents but fails to deal with issues of legal personality and consent in agency. The “Legal Person” approach goes further to treat AI systems as legal persons, but has drawn criticism on the basis that AI systems possess neither morality nor intent. To address the question of legal personality for strong AI, Victoria proposes extending the fiction and concession theories of corporate personality to create a “quasi-person” or “electronic person”. This is more satisfactory as it allows for a fairer allocation of risks and responsibilities among contracting parties. It also holds autonomous systems liable for their actions, thereby encouraging innovation. Further, it facilitates the allocation of damages. Last, it embodies the core philosophy of human-centricity.

Introduction

It is 11 November 2025. Grab delivery rider John Lim, aged 30, has just received an invitation from the police to attend another interview over the fifth housebreaking of the year in his neighbourhood (geographically referenced to the group representation constituency in the last national election). Being a former drug offender, John was picked as a suspect by a security and law enforcement system powered by artificial intelligence (“AI”) used by the police.[1] As the prospect of more lost earnings loomed, John’s heart grew heavy. After his discharge from jail for drug offences, John’s multiple loan applications to start a business were rejected by banks that used AI for credit scoring.[2] Recently, he was horrified to receive another rejection from a small financial institution that had seemed open to considering loan applications from former offenders like himself. An AI virtual assistant programmed to assess borrowers’ creditworthiness sent him a short text sealing his ‘unbankable’ status once more. Meanwhile, endless alerts reminded him of the weekly Fast and Easy Testing Rostered Routine Testing (“FET RRT”) required of all Grab riders who wish to avoid restrictions on assignments from Grab.[3] In dejection, he headed to the nearby mall, hoping to cheer himself up by picking up 11.11 bargains. Unfortunately, his access was denied by an automated mall gate as he had prior exposure to a COVID-24 case (a more serious, and hypothetical, variant of COVID-19).[4] Hoping to seek alternative comfort in food, he attempted to purchase a natural chicken sandwich from a vending machine, only to be rejected by the AI-powered machine as he had maxed out his personal carbon allowance (“PCA”) after exhausting his weekly quota of natural meat. He was not impressed by the plant-based and cultured chicken sandwiches offered to him instead. By now fully distraught by this sequence of events, he turned to the Wysa app[5] on his phone to pour out his sorrows to an AI-based mental health chatbot.

This scenario may sound like an apocalyptic movie to some. But an AI-powered world is imminent and real. AI algorithms are expected to increasingly “act on behalf of user and to meet certain objectives or complete tasks without any direct input or direct supervision from user”.[6] They can assist in criminal profiling and predictive policing, making a case for cost-effective security and law enforcement with fewer human errors.[7] In the financial industry, banks and financial institutions rely on AI technologies across their value chain of key processes, from the detection of default risks to the provision of advice on investment products.[8] In public health, machine learning (“ML”) and AI have been used in contact tracing to break the chain of SARS-CoV-2 transmission. ML algorithms can be trained to collect material information on a person’s lifestyle preferences to estimate carbon emissions, track PCAs and even recommend options for carbon mitigation.[9] Chatbots are increasingly being used to support individuals experiencing depression with behavioural therapy, strengthening mental resilience and enhancing emotional wellbeing.[10]

AI systems have evolved rapidly over the past four decades, and their increasing sophistication raises difficult legal and ethical questions in commercial transactions where the human intermediary has been supplanted by algorithms. For example, in Mutnick v Clearview AI, the US AI company Clearview AI collected facial images and scanned biometric identifiers on websites for resale to companies (e.g. Macy’s), and was sued for allegedly violating the Illinois Biometric Information Privacy Act.[11] Separately, Macy’s was sued for using Clearview AI to identify shoppers from the store’s surveillance photos.[12] In the UK, the healthcare regulator has pointed out legal gaps around AI-powered mental health chatbots.[13] In Uber v Uber drivers,[14] the Amsterdam District Court ordered Uber to compensate and reinstate six drivers unfairly terminated by algorithms.

In this light, this paper examines the legal implications of strong AI, defined by Searle as a computer that is “not merely a tool”, but is programmed like a human mind that can “understand and have other cognitive states”.[15] In particular, what are the legal implications of applying electronic personality to AI in commercial transactions? In addressing this question, this paper builds on Ooi’s analysis,[16] which examines AI systems from three conceptual perspectives: the “Mere Tools”, “Agency” and “Legal Person” approaches. The paper adds to the analysis in two ways. First, its scope goes beyond software contracts to study issues arising from human-robot co-existence in the context of strong AI. Second, instead of an extended objective theory of contract, this paper advocates extending the fiction and concession theories of corporate personality to grant third existence personality to strong AI. “Third existence” status is a concept first introduced by Shuji Hashimoto[17] to describe entities that are “neither living/biological (first existence) nor non-living/non-biological (second existence)”.[18] Third existence robots initially referred to those that “will resemble living beings in appearance and behaviour, but they will not be self-aware”.[19] The concept has subsequently been expanded to include next-generation autonomous robots that fall short of human-based intelligence.[20] This approach of granting third existence personality not only addresses Solaiman’s concern about legal personhood for current systems that are generally weak AI,[21] but also offers a reliable vanguard to safeguard the human-centricity of a society and workplace where strong AI is all but inevitable.

Before proceeding, it should be noted that this paper does not suggest that strong AI systems already exist today, nor does it take a position on whether the advent of strong AI is probable or imminent. Rather, its goal is merely to consider a potential future in which such systems are developed.

AI systems as mere tools

The “Mere Tools” theory[22] characterises AI systems as mere tools of their users, such that transactions formed remain binding since any representation by the AI system is treated as that of the user. This approach is attractive in that contracts formed through passive contracting mechanisms (where the software merely functions as a communication device) are readily justified by the Objective Theory of Contract, since parties demonstrate an intention to be bound by the modality.[23] However, when AI systems autonomously determine the terms of contracts, there is arguably a lack of consensus ad idem if it can be shown that the outcome is not sufficiently certain to hold the parties bound. For example, Charles Schwab Corporation, which used its robo-advisor to automatically build, monitor and rebalance portfolios based on investment goals set by the user, had to defend a class action alleging breach of contract and violation of fiduciary duties by putting its own interests above those of its clients. Its share price fell by 3% when the lawsuit was announced.[24]

While the “Mere Tools” approach has merit in allocating risks between parties employing passive contracting mechanisms, it fares poorly as between parties adopting active contracting mechanisms.[25] More recent research has warned that ML algorithms may circumvent anti-trust legislation by enabling tacit price collusion in dynamic pricing models, thereby transforming these algorithms from “mere tools” into “super tools”.[26] In labour law, the potential for biased or discriminatory robo-hiring and robo-firing is a top concern that has drawn heavy criticism.

AI systems as artificial agents

To address the active nature of these contracting mechanisms, the “Agency” approach regards AI systems as non-human electronic agents[27] or artificial agents[28] whose actions are based on embedded knowledge and sequences of data captured by their sensors. Unlike humans, who select their course of action based on desires, preferences or values, artificial agents choose actions that are largely the product of how they have been programmed to respond optimally to different permutations of sensor data.

The notion of an artificial agent has its roots in the law of agency, where an agent is an entity or person distinct from the principal who acts on behalf of the principal within the authority granted to it.[29] AI algorithms can generate an optimal output based on data provided from various sources. Unlike traditional software, which generally responds in a predictable manner based on a pre-determined set of rules, AI algorithms using ML are capable of making self-determined decisions beyond those anticipated by their programmers.

The “Agency” approach offers two key benefits. First, the doctrines of authority[30] and ratification[31] delineate the character of the relationship involving the artificial agent: the former limits the liability of the principal to actions of the agent acting within its authority, while the latter allows the principal to retroactively ratify the agent’s actions despite its lack of authority. Second, economic efficiency can be advanced by allocating the risk of errors by the artificial agent to the party who would incur the least cost to avoid them.[32]

Critics of the “Agency” approach, however, contend that artificial agents lack legal personality,[33] and hence can neither consent to an agency relationship nor be made subject to fiduciary duties. For a meaningful application of agency law in characterising AI systems, scholars argue that major revisions to fundamental aspects of the law and to the legal personality of the artificial agent are required.[34] To circumvent the objection of a lack of legal personality, some scholars suggest extending “the objective intention theory of contract to give validity to contracts formed by software” on “the normative grounds of fairness and commercial certainty” that “can be further augmented by the law of mistake”.[35]

AI systems as legal persons

In patent law, most jurisdictions assume the inventor to be human. However, Stephen Thaler tested this assumption in 2019 by submitting patent applications naming an AI system, DABUS, as the inventor in three patent offices: the European Patent Office,[36] the US Patent Office[37] and the British Intellectual Property Office.[38] All three offices rejected his applications on the ground that an inventor must be a natural person.[39] In Thaler v Commissioner of Patents, an Australian court in 2021 became the first in the world to recognise an AI system, DABUS, as an inventor.[40] The decision was, however, overturned on appeal.[41] This gave rise to the “Legal Person” debate: could AI systems be regarded as legal persons?

Recognising AI as having legal personality presents less difficulty in the case of social robots — those possessing creativity, knowledge about themselves and the outside world, and the ability to communicate with others and work towards specific goals.[42] In 2021, the social robots market reached US$2.6 billion.[43] Social robots are defined by Neumann as “machines that can socially interact and communicate intelligently with humans”.[44] Unlike industrial robots, which are designed to operate only within industrial plants or restricted areas, social robots can navigate complex environments to interact with humans and other AI systems. Robots operating in physical proximity to humans can also cause harm: in 1981, a robot in a Kawasaki manufacturing plant used its powerful hydraulic arm to push a 37-year-old factory worker away, killing him, after erroneously identifying him as a threat to its own operation.[45] He was the first recorded victim of harm inflicted by a robot. Today, social robots are physically embodied autonomous robots widely deployed in retail for COVID-19 safety measures and workload relief,[46] in healthcare for surgical and administrative support,[47] and in education for teaching enhancement.[48] Although current social robots have started with weak AI to facilitate human acceptance, this paper foreshadows a situation where strong AI will increasingly be implemented in social robots as societal adoption grows.

As social robots become more “intelligent” (that is, possessing capabilities and functionalities presently attributable mainly to humans), they will become ubiquitous in homes and workplaces. The social robots market is projected to grow at a compound annual growth rate (CAGR) of 31.3% from 2022 to reach US$13.3 billion by 2027.[49] This raises concerns about the risks of bodily injury and property damage. Building on the analogy between corporations and AI entities in criminal liability, Hallevy[50] submits that AI entities are no less culpable if they are capable of performing the actus reus, and forming the mens rea, of offences for which humans would be punished. However, critics of the “Legal Person” approach have argued that AI entities should not be treated as legal persons as they do not satisfy the requirements of legal personhood: (a) an ability to be a “subject of law”, (b) an “ability to exercise rights and perform duties”, and (c) the exercise of “awareness and choice” in the enjoyment of rights.[51]

New regulatory solutions needed for strong AI

Broadly, the above three approaches appear to suffice for addressing contractual and tortious issues under existing legal frameworks. However, proponents of these approaches have acknowledged the need for new regulatory solutions in cases involving strong AI, where the interactions and interdependencies between humans and AI intensify the complexity of algorithm management.[52][53]

The ultimate solution must uphold the primacy of humans in human-robot co-existence. In particular, there are inherent limitations to conferring full legal status on AI. In 2018, more than 150 European AI experts submitted an open letter to the European Commission contending that “creating a legal personality for a robot is inappropriate whatever the legal status model”.[54] Under the “Natural Person” model, the experts argued that a robot should not be granted legal status since that would mean robots holding human rights (e.g. the right to citizenship and the right to dignity), “in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms”.[55] Under the “Legal Entity” model, the experts refuted the applicability of legal status for a robot by virtue of the requirement for the “existence of human persons behind the legal person to represent and direct it”, which is “not the case for a robot”.[56] Under the “Anglo-Saxon Trust” model, the experts emphasised the similar need for the “existence of a human being as a last resort — the trustee or fiduciary — responsible for managing the robot granted with a Trust”.[57] It would thus be inappropriate to confer certain human rights, such as the rights to citizenship and dignity, on AI. AI probably deserves recognition as sui generis, a “special status of an intermediate between things and human beings”.[58]

Extending the fiction and concession theories of corporate personality 

The “fiction” theory of the personality of corporate bodies reportedly originated with Pope Innocent IV (1243-1254). Under this theory, “corporate bodies are personas fictae”, having “neither a body nor a will”.[59] The “concession” theory of juridical persons arose from the emergence of nationhood and its claim to exercise sovereignty by granting power to corporations. Despite their different origins, both theories share a common concern to limit the power of corporate bodies. As noted by the US Supreme Court, a corporation “is an artificial being, invisible, intangible, and existing only in contemplation of law”.[60] The State, as a supreme power, can grant personality to a corporation for policy reasons or to maintain the integrity of the legislative system. The original requirement of explicitly granting personality through legislation and charter has become something of a ‘mere formality’ in modern society. Chesterman observed that these developments “most closely align with the legislative and judicial practice in recognising personality, and could encompass its extension to AI systems”.[61] In the literature on AI legal personality, arguments have been made against the parallel between corporations and AI systems: the former are made of people who will bear liability for the corporation, while the latter are made by people, as products programmed with limited self-control[62] and currently governed under a product liability regime.[63] While this author recognises that these arguments are persuasive for weak AI systems, which are typically company products, they may not be adequate for strong AI systems that are self-aware, autonomous and likely to operate across multiple organisations. To address this liability gap, this author proposes extending the fiction and concession theories to strong AI systems by granting them status as a “quasi-person” or “electronic person”.

It is submitted that this proposed extension to strong AI systems is ideal for several reasons. First, granting strong AI systems legal status as a “quasi-person” or “electronic person” allows for a fairer allocation of risks among the system’s manufacturer, the system’s owner, the user and the customers. Unlike the “Mere Tools” approach, it does not inflexibly attribute all algorithmic mistakes to users. In the case of price collusion through algorithms deployed as “super tools”, this extension of legal personhood to AI harmonises the application of civil and criminal laws, including antitrust laws.[64] Similarly, where an AI system manages task allocation to human employees through algorithms, the “electronic person” could be made responsible to employees who are prejudiced by its decisions.

Second, the conferment of a legal “quasi-person” or “electronic person” status on strong AI systems is arguably superior to the “Agency” approach as it satisfies the normative justifications of the agency doctrine. It avoids the conceptual incoherence created by accepting a non-legal person as an agent for the sake of protecting the principal and third parties from the risk of algorithmic errors. This would be consistent with Section 107(d) of the Uniform Computer Information Transactions Act (UCITA) introduced in the US, which binds every person who uses an electronic agent to the transaction, regardless of whether the person is aware of or has reviewed the agent’s operations or the results of those operations.[65] A similar provision exists in the Australian Electronic Transactions Act.[66] As an electronic person, a strong AI system could still be sued by the principal or a third party for losses arising from a breach of fiduciary duties to the principal or from unauthorised transactions.[67]

Third, the proposed status of a legal “quasi-person” or “electronic person” for strong AI does not express a new form of natural personality. To this end, the proposed status does not contradict the human rights enshrined in the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms. Instead, it provides an artificial type of legal personality for functional purposes. The creation of a legal electronic personality falling short of a legal natural person is preferable to the “Legal Person” approach; it overcomes the problem of a lack of legal personhood[68] while preserving the necessary redress for harm occasioned by the system’s use.[69]

Last, the concept of electronic personhood is consonant with the European Commission’s proposed legislation for AI regulation (April 2021), which is based on EU values and fundamental rights reflecting a “human-centric approach in which the human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields”.[70] Besides being “a tool for people”, AI should be “a force for good in society with the ultimate aim of increasing human well-being”.[71]

Conclusion

There is no escaping the problem: AI is here to stay, and it will become more pervasive and intrusive. With computer vision and improved bandwidth in data transmission, the potential applications of stronger AI may sound like science fiction. The development of the law is almost certain to lag behind the development of technology. As far as AI is concerned, we can ill afford to be content with tweaking and repurposing laws designed before the relevant technologies existed. In other areas, we have had to create new legislation, such as the Electronic Transactions Act, to recognise digital signatures. As for the emergence of digital assets and national digital currencies, it is only a matter of time before new frameworks must be created to regulate the relationships between actors and the state.

Notwithstanding the flaws of the various approaches canvassed above, these valiant attempts are worthy developments. There can be no easy answer to a disruption as significant as that wrought by AI. Some technological enhancements are clearly mere tools; the impact of others may be disputable. The struggle to fit existing conceptions of law to such an unprecedented displacement of human intervention in daily activities should come as no surprise. The limitations of the approaches are self-evident, but the scholarly attempts expose not so much the inadequacies of the laws as the novelty and scale of the impact that AI has on human affairs. A sui generis problem deserves a bold legal response. Not since the age of industrialisation have legal minds struggled so hard with legal reform to address the impact of technological breakthroughs. This is truly a new age, born out of the confluence of technology, data and connectivity. The future bodes well, but only if the best legal minds remain engaged in the technology space.

This piece was published as part of LawTech.Asia’s collaboration with the LAW4032 Law and Technology module of the Singapore Management University’s Yong Pung How School of Law. The views articulated herein belong solely to the original author, and should not be attributed to LawTech.Asia or any other entity.


[1] AI is based on the input of dispositional and situational data from multiple data sources. See eg State v Loomis 881 N.W.2d 749 (Wis. 2016), where a US court relied on a sentencing algorithm called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) to profile Loomis as a high-risk offender, thereby ruling out probation.

[2] N Mejia, ‘AI for Credit Scoring – An Overview of Startups and Innovation’ Emerj (18 January 2019). 

[3] Grab, ‘Covid-19 updates’, https://help.grab.com/driver/en-sg/360043608652-Learn-more-about-the-Vaccinate-or-Regular-Testing-(VoRT),-and-Fast-and-Easy-Testing-Rostered-Routine-Testing-(FET-RRT)-Regimes.

[4] L Leo, ‘New COVID-19 measures in Singapore’ CNA (14 May 2021).

[5] In recent years, there has been a growing number of AI-powered mental health chatbots that help patients with mental health concerns while ensuring anonymity and privacy. Wysa is one such app, founded by Jo Aggarwal, launched in 2016 and operating in 30 countries. See Wysa, Mental Health Support for Everyone, https://www.wysa.io/.

[6] JJ Borking, BMA Van Eck and P Siepel, Intelligent software agents and privacy (The Hague: Registratiekamer 1999). 

[7] R Jenkins and D Purves, ‘Artificial Intelligence and Predictive Policing: A Roadmap for Research’ (aipolicing.org 2020) 1-31. 

[8] O Subhani, ‘Only 13% of global banking, insurance industry uses AI solutions across bulk of processes’ The Straits Times (18 August 2021).

[9] RK Sullivan et al., ‘Smartphone apps for measuring human health and climate change co-benefits’ (2016) 4 JMIR mHealth uHealth e135.

[10] G Dosovitsky, BS Pineda, NC Jacobson, C Chang, and EL Bunge, ‘Artificial Intelligence Chatbot for Depression: Descriptive Study of Usage’ (2020) 4(11) JMIR Form Res e17065. 

[11] Mutnick v. Clearview AI, Inc., No. 20 C 0512 (N.D. Ill. 12 August 2020).

[12] Carmean v. Macy’s Retail Holdings, Inc. 1:20-cv-04589 (26 January 2021).

[13] N Lomas, ‘UK’s MHRA says it has ‘concerns’ about Babylon Health’ TechCrunch (6 March 2021). 

[14] A Nawrat, ‘HR tech gone wrong? Uber told to reinstate drivers after ‘robo-firing’’ Unleash (14 April 2021). 

[15] JR Searle, ‘Minds, brains, and programs’ (1980) 3(3) Behavioral and Brain Sciences 419.

[16] V Ooi, ‘Contracts formed by software: An approach from the law of mistake’ (2019) Centre for AI & Data Governance 1-19.

[17] S Hashimoto, ‘The Robot generating an environment and autonomous’ in T Ojima and K Yabuno (eds), The Book of Wabot 2 (Chuko 2003) (in Japanese).

[18] YH Weng, CH Chen and CT Sun, ‘Toward the human–robot co-existence society: On safety intelligence for next generation robots’ (2009) 1(4) International Journal of Social Robotics 267, 275.

[19] Weng (n 18) 275.

[20] Weng (n 18) 275.

[21] SM Solaiman, ‘Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy’ (2017) 25(2) Artificial Intelligence and Law 155-179.

[22] E Weitzenboeck, ‘Electronic Agents and the Formation of Contracts’ (2001) 9(3) IJLIT 204. 

[23] J Winn and B Wright, The Law of Electronic Commerce (4th edn, Aspen Law & Business 2000)  

[24] R Neal, ‘Legal woes grow for Schwab robo advisor with new class action lawsuit’ FinancialPlanning (14 September 2021). 

[25] Ooi (n 16).

[26] G Zheng and H Wu, ‘Collusive algorithms as mere tools, super-tools or legal persons’ (2019) 15(2-3) Journal of Competition Law & Economics 123-158.

[27] I Kerr, ‘Ensuring the Success of Contract Formation in Agent-Mediated Electronic Commerce’ (2001) 1 Electronic Commerce Research 183.

[28] SJ Russell and P Norvig, Artificial intelligence: a modern approach (3rd edn, Prentice Hall 2010) 1–5. 

[29] S Chopra and L White, ‘Artificial Agents and the Contracting Problem: A Solution via an Agency Analysis’ (2009) University of Illinois Journal of Law Technology & Policy 363.

[30] Criterion Properties plc v Stratford UK Properties LLC [2004] 1 WLR 1846; [2004] UKHL 28.

[31] Boston Fruit Co v British and Foreign Marine Insurance Co [1906] AC 336.

[32] Chopra and White (n 29) 394.

[33] A Bellia, ‘Contracting with Electronic Agents’ (2001) 50 Emory Law Journal 1047.

[34] Ooi (n 16).

[35] Ooi (n 16) 12 and 14.

[36] Grounds for the EPO Decision of 27 January 2020 on EP 18 275 163 (European Patent Office) para 22.

[37] In re Application No 16/524,350, refused by the US Patent and Trademark Office’s Decision on Petition of 22 April 2020.

[38] Two patent applications GB1816909.4 and GB1818161.0 were refused by UK Intellectual Property Office’s decision BL O/741/19 of 4 December 2019. 

[39] S Chesterman, ‘Artificial intelligence and the limits of legal personality’ (2020) 69(4) International & Comparative Law Quarterly 819-844.

[40] Thaler v Commissioner of Patents [2021] FCA 879, [128].  

[41] Commissioner of Patents v Thaler [2022] FCAFC 62.

[42] Solaiman (n 21).

[43] Imarc, ‘Social Robots Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2022-2027’ (2021) https://www.imarcgroup.com/social-robots-market.

[44] MM Neumann, ‘Social robots and young children’s early language and literacy learning’ (2020) 48(2) Early Childhood Education Journal 157-170.

[45] R Whymant ‘From the archive, 9 December 1981: Robot kills factory worker’ The Guardian (9 December 2014) https://www.theguardian.com/theguardian/2014/dec/09/robot-kills-factory-worker

[46] A Carter, ‘German supermarket Edeka trials new store staffed by robots’ I AM EXPAT (20 February 2021) https://www.iamexpat.de/lifestyle/lifestyle-news/german-supermarket-edeka-trials-new-store-staffed-robots

[47] R Cairns, ‘More than 50 robots are working at Singapore’s high-tech hospital’ CNN (26 August 2021) https://edition.cnn.com/2021/08/25/asia/cgh-robots-healthcare-spc-intl-hnk/index.html

[48] Furhat Robotics, ‘Stockholms Stad to test social robots in school classrooms’ (5 April 2019) https://furhatrobotics.com/press-releases/stockholms-stad-to-test-social-robots-in-school-classrooms/

[49] Imarc (n 43).

[50] G Hallevy, ‘Virtual criminal responsibility’ (2010) 6(1) Orig Law Rev 6.

[51] Solaiman (n 21) 161.

[52] Chesterman (n 39).

[53] Ooi (n 16).

[54] ‘Open Letter to the European Commission: Artificial Intelligence and Robotics’ (April 2018) para 2, 1.

[55] Open Letter (n 54) para 2a, 1.

[56] Open Letter (n 54) para 2b, 1.

[57] Open Letter (n 54) para 2c, 1.

[58] P Nowik, ‘Electronic personhood for artificial intelligence in the workplace’ (2021) 42(105584) Computer Law & Security Review 8.

[59] J Dewey, ‘The historic background of corporate legal personality’ (1925) 35 Yale LJ 655.

[60] Trustees of Dartmouth Coll. v Woodward 17 US 518, 636 (1819).

[61] Chesterman (n 39) 823.

[62] Solaiman (n 21) 174.

[63] A Bertolini, ‘Robots as products: the case for a realistic analysis of robotic applications and liability rules’ (2013) 5(2) Law, innovation and technology 214-247.

[64] Zheng and Wu (n 26).

[65] Uniform Computer Information Transactions Act 2002 § 107, § 107 cmt. 5.  

[66] Australian Electronic Transactions Act 1999 (Act No. 162 of 1999 as amended), s 15(1).  

[67] DC Vladeck, ‘Machines without principals: liability rules and artificial intelligence’ (2014) 89(1) Wash Law Rev 117–150.

[68] Solaiman (n 21).

[69] Nowik (n 58).

[70] European Commission, ‘High-level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI’ (8 April 2019) 10.

[71] See Art 3(1) of the Artificial Intelligence Act and Annex I, which lists approaches involving machine learning, statistical methods, and logic- and knowledge-based methods. European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council’ (21 April 2021).