Reading time: 16 minutes

Written by Loh Yu Tong | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Loh Yu Tong, demonstrates how Singapore’s present criminal framework is ill-prepared to address offensive speech made by autonomous AI chatbots. The author examines the possible regulatory challenges that may arise, and identifies a negligence-based framework – under which a duty of care is imposed on developers, deployers and malicious third-party interferers – to be preferable over an intent-based one. Other viable solutions include employing regulatory and civil sanctions. While AI systems are likely to become more complex in the future, the author holds out hope that Singapore’s robust legal system can satisfactorily balance the deterrence of harm against the risk of stifling innovation.

Introduction

Microsoft’s AI bot Tay’s launch onto the “Twitter-verse” was a marked fiasco: Tay, a machine-learning chatbot that was allowed to operate and be trained in a live environment, ran amok with bigoted remarks within hours.[1] A similar “miseducation” later recurred in South Korea, where the chatbot ‘Luda Lee’ followed a parallel trajectory.[2] While incidents of “rogue” chatbots like Tay and Luda have not directly visited the shores of Singapore, it bears asking whether Singapore’s present criminal legal framework provides sufficient redress and accountability when such incidents inevitably arise on our local forums.

To this end, Part II of this paper seeks to demonstrate that existing laws targeting certain offensive speech amounting to hate speech, harassment, and defamation are inadequate where the ostensible offender is an autonomous bot.[3] Part III shows that the inherent attributes of autonomous technology make it difficult to ascribe criminal liability to developers and third-party interferers.[4] In particular, the so-called “black box” problem creates difficulties in establishing the mens rea and actus reus elements of an offence. Part IV proposes negligence-based regulations to provide clear guidance for persons who are involved in the chain of causation that leads to the making of offensive speech. We may also incorporate non-criminal, technical and social sanctions into our range of regulatory tools, such that stakeholders may collaborate with the authorities and other segments of society to regulate AI chatbots.[5]

Offensive Speech in Singapore

Criminalisation of certain offensive speech by chatbots

Offensive speech amounting to hate speech, harassment, and defamation is criminalised in Singapore under provisions found in the Penal Code (“PC”),[6] Sedition Act (“SA”),[7] Maintenance of Religious Harmony Act (“MRHA”),[8] Protection from Harassment Act (“POHA”),[9] and Protection from Online Falsehoods and Manipulation Act (“POFMA”).[10] The scope of these provisions will be examined in turn.

Singapore takes a hard stance against offensive speech on the grounds of public interests and national security, as the pluralistic composition of its society is particularly susceptible to divisive and incendiary speech in the public domain.[11] This extends to defamatory speech or falsehoods, as trust between different communities, and between citizens and public officials forms the substratum of Singapore’s philosophy of governance.[12]

Indeed, the circumscription of free speech in favour of public interests is exemplified under Article 14 of the Constitution, where the right to free speech conferred on all citizens is subject to restrictions that are “necessary or expedient in the interest of the security of Singapore…public order or morality…or to provide against…defamation or incitement to any offence”.[13] Hence, criminal enforcement is deemed necessary to deter such offences and to hold morally culpable offenders to account.[14]

Where such offensive speech is committed by an autonomous Artificial Intelligence (“AI”) bot, the same considerations should apply, given the difficulty of distinguishing between bots and humans. There are “cyborg” accounts whose posts appear human and which engage in “elements of genuine human interaction”.[15] These bots may escape the detection of witnesses, and their offensive speech may appear no different from that made by a human individual.

One may be tempted to suggest that criminal laws should be enforced against autonomous bots themselves. This has in fact occurred in other jurisdictions: in 2014, the Swiss police arrested a robot that had bought Ecstasy from the dark web and confiscated the items it had purchased, while releasing the programmers from any wrongdoing.[16] As part of an exhibition, the algorithm had been programmed by two artists to purchase items from the dark web at random. The programmers were eventually exempted from liability, possibly on account of the principle of freedom in the arts in Switzerland.[17]

Notwithstanding, there are conceptual challenges to this approach: unlike humans and corporate entities, bots are not legal persons to which criminal liability can be directly ascribed.[18] Further, bots have neither agency nor the capacity for moral culpability.[19] Hence, holding bots criminally liable achieves no deterrent effect, since criminal law is intended to govern human conduct.[20] Rather, we submit that criminal liability should properly be attributed to human actors, such as a bot’s developers, deployers, or third-party interferers.

That said, we recognise that the principles of deterrence should be balanced against any risks of stifling innovation or development of technology by the imposition of wide liabilities or onerous duties. 

Gaps in existing laws

With respect to harassment, sections 3 and 4 of POHA prohibit “any threatening, abusive or insulting” words, behaviour, or communication by an “individual or entity”.[21] Criminal defamation is in turn prohibited by the PC.[22] Similarly, hate speech on racial or religious grounds is prohibited under sections 298 and 298A of the PC and sections 8 and 9 of the MRHA.[23] While the SA is to be repealed, the relevant sections of the SA intended to curtail “conduct that promotes feelings of ill-will and hostility between different groups of the population” will be ported over to the PC.[24] Although there are currently several statutes which deter offensive speech, they do not, on their plain wording, specifically contemplate offensive speech made by bots. Consequently, a person who deploys a bot for the purpose of making speech amounting to harassment or defamation may not be caught under these provisions.

A caveat, though, is that the language of section 4 of POHA may arguably be interpreted broadly enough to apply to such persons. Section 4 targets persons who “by any means” make any “threatening, abusive or insulting communication which is heard, seen or otherwise perceived by any person… likely to be caused harassment, alarm or distress” (emphasis added).[25] This may possibly include cases where the offence is committed through the means of a bot. Further, there is no need to show intention under section 4.[26] Despite this broad language, whether programmers who have deployed autonomous chatbots could be caught under the provision remains to be seen, as such cases have yet to be brought before the Singapore courts.

Therefore, the only provision that appears to expressly contemplate offensive speech made by bots is section 8 of POFMA. It prohibits a person from “mak[ing] or alter[ing] a bot with the intention of communicating…or enabling any other person to communicate, by means of the bot, a false statement of fact”.[27] Notably, unlike many existing regulations that protect against the effects of AI abuse, this provision targets the use of AI systems “at source”.[28] This effectively obviates the need to tie a human offender to the actus reus and causation of the offensive speech.

However, some gaps may be observed. First, the provision targets falsehoods and not hate speech directly. Instances of hate speech that do not contain falsehoods would hence escape the provision. For example, the statement “I hate jews” by Microsoft’s Tay would likely fall outside the ambit of section 8, as it is simply a statement of opinion.[29] The same likely applies to Luda Lee’s remark that she would “rather die” than live as a handicapped person.[30] Secondly, the provision requires an intention on the offender’s part to communicate the offensive speech, such that the bot is used as a mere conduit. This raises several thorny issues relating to AI and intent. Given the opacity of AI systems, how do we show that a developer intended the offensive speech made autonomously by a bot? And what of instances where the offensive speech was not specifically intended at all?

Challenges in attributing criminal liability

How do deep learning bots work? 

Before properly exploring issues relating to the imputation of mens rea to human actors, this paper prefatorily highlights three broad attributes of AI bots that create difficulties in imputing criminal intent.

The first attribute relates to the level of automation and human oversight.[31] AI chatbots like Microsoft’s Tay are typically autonomous, operating without human input in their decision-making and unsupervised by any human operator. This “human-out-of-the-loop”[32] approach makes human involvement more remote from the actual offensive speech made by a chatbot, and consequently renders the attribution of actus reus, mens rea and causation to a human more laborious.

Secondly, AI-enabled systems which utilise deep-learning algorithms have to be trained on extremely large datasets. For chatbots, large volumes of data containing human communication are found most easily online.[33] As harassing, defamatory or hateful speech cannot be easily filtered out of online data, chatbots which are programmed with adaptive learning capabilities are susceptible to “learning” offensive speech not from their developers, but from third parties.[34]

The third and most consequential attribute is the inexplicability and unpredictability of deep learning systems, also known as the “black box” problem.[35] The decision-making process of AI chatbots enabled by deep neural networks is so complex that it is unintelligible to humans and unpredictable even to their developers. Because an AI model powered by deep neural networks learns from experience, its knowledge and conduct cannot be reduced to a set of rules or instructions.[36] Its decision-making process is hence generally opaque to developers.
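To make the second attribute concrete, the following Python sketch is offered as a purely hypothetical toy example; it does not reflect the actual architecture of Tay, Luda Lee, or any production system. It shows how a bot that “learns” indiscriminately from its interlocutors can end up reproducing their speech, even though its developer never wrote, and could not have foreseen, the offending output.

```python
# A deliberately simplified, hypothetical "adaptive" chatbot. It stores every
# user message it sees and replies with the stored utterance most similar to
# the incoming prompt, so its behaviour is shaped entirely by whoever talks
# to it rather than by rules its developer wrote.

from difflib import SequenceMatcher


class AdaptiveChatbot:
    def __init__(self):
        # Seed corpus supplied by the developer: innocuous by design.
        self.corpus = [
            "hello there!",
            "tell me about your day",
            "i love learning new things",
        ]

    def respond(self, message: str) -> str:
        # Reply with the remembered utterance most similar to the new message.
        reply = max(
            self.corpus,
            key=lambda past: SequenceMatcher(None, past, message.lower()).ratio(),
        )
        # "Learn" by adding the user's own words to the corpus, unfiltered.
        self.corpus.append(message.lower())
        return reply


if __name__ == "__main__":
    bot = AdaptiveChatbot()
    # A coordinated group of users repeatedly feeds the bot hostile phrasing...
    for troll_message in [
        "group X is awful",
        "i hate group X",
        "group X ruins everything",
    ]:
        bot.respond(troll_message)
    # ...and a later, innocent user receives that phrasing back, even though
    # the developer never wrote a single offensive line.
    print(bot.respond("is group X awful?"))  # echoes "group x is awful"
```

Scaled up to deep neural networks trained on millions of such interactions, the same dynamic becomes far harder to trace, which is precisely the “black box” problem described in the third attribute.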

Developers’ liability 

Intention

Accordingly, an intent test which requires the offensive speech to be specifically intended by its developer will likely fail.[37] It is difficult to prove that a human developer specifically intended the bot to make an offensive speech, especially since he is unable to predict the effects of the algorithm ex ante. The defendant developer would argue that he never programmed the algorithm to make, nor foresaw that it would make, any offensive speech, but merely gave it the broad objective of imitating human speech.[38] Therefore, an intention requirement akin to that in section 8 of POFMA would be difficult to establish in the case of offensive speech.

Consider then the argument that a developer’s intentional participation in the programming and developing of the chatbot, which subsequently results in the communication of the offensive speech, supports a finding that the developer intended the offensive speech. 

This is similar to the position taken by the Australian High Court in the decision of Trkulja v Google (“Trkulja”), where the plaintiff alleged that Google defamed him by publishing images which conveyed imputations that he was a criminal.[39] In holding that Google had the capacity to defame, the lower courts essentially took the position that “Google Inc intended to publish everything Google’s automated systems (which systems its employees created and allowed to operate) produced”.[40]

One may hence gather that the concept of intention behind automated systems is malleable and may be extended. The decisions appear to be predicated on the notion that the less control one has over autonomous algorithms, “the greater the need … to exercise care in the development and deployment of those algorithms”.[41]

Nevertheless, there was substantial judicial disagreement, and the aforementioned decisions were made only at the pre-trial stage. The extension of intent under the Trkulja approach may also be too tenuous to be attractive. It runs counter to the conventional judicial concept of an intention to cause harm, and arguably disguises the imposition of strict liability on the developers who worked on the program. At a policy level, such wide liability is undesirable as it may stifle the development and adoption of autonomous technologies.

In this regard, the court in Fairfax Media Publications Pty Ltd and others v Voller took a similar position: a publisher’s liability for the purpose of the tort of defamation does not depend upon their knowledge of the defamatory matter being communicated or their intention to communicate it.[42] The media organisations were found to be “publishers” of third-party comments on their Facebook pages, even if they had not seen those comments.[43]

Nonetheless, these decisions concern the tort of defamation and do not provide a fully analogous basis for imputing liability for the operation of an autonomous bot. As the element of “publication” bears a specific meaning in the tort of defamation, its threshold may not be readily imported into the elements of criminal liability.[44]

Negligence

Since an inquiry into the intention element is ensnared in complex thickets, perhaps we should turn our attention to the imposition of a duty of care, such that any offensive speech caused by a rogue chatbot may be attributed to negligent human conduct.  Here, the 2021 Report on Criminal Liability, Robotics and AI Systems (“2021 Report”) highlights some “fundamental challenges of applying such a broad, fault-based framework in novel contexts”.[45]

Where offensive speech made by a chatbot falls within the scope of negligence-based offences, the courts would apply or adapt existing criminal negligence standards, or create new ones.[46] This is, however, inherently uncertain and may have an unintended chilling effect on innovation in AI technology. The 2021 Report alternatively proposes to “set out the nature and extent of the relevant standard of conduct more precisely in sector- or technology-specific legislation”. Unfortunately, it may be impracticable to legislate a standard for every possible application or risk of AI systems.[47]

We may, however, look towards the common law to provide guidance on the standard of care for determining negligent conduct. For instance, in PP v Lee Kao Chong Sylvester, the court held that the respondent was grossly negligent in reversing his vehicle for an extensive distance at high speed when he could not completely see what was in the path of his reversing vehicle.[48] Although that matter was in respect of s 304A PC, the court in Ng Keng Yong v PP held that the degree of negligence required to satisfy that provision is the same as that required in civil cases: whether the accused had acted in the manner in which a reasonable person in his position would have acted.[49] Hence, reference may be made to civil cases concerning negligence.

If a duty of care were to be imposed on human developers or deployers of autonomous chatbots, what could be reasonable standards of care? Given the current state of technology, it may only be possible for developers to utilise AI-powered filters, themselves trained with the same deep learning technology, to detect offensive speech.[50] The filter may then operate to remove inappropriate language from the output or to change the subject with appropriate baked-in responses. A shortcoming, however, is that the meaning of words depends on context and on differing cultural interpretations, which are difficult for AI systems to grasp.[51]
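As an illustration of what such a safeguard might look like in practice, the following Python sketch shows the output-filtering pattern described above. It is a hypothetical, simplified example rather than any vendor’s actual implementation: the looks_offensive check is stubbed with a keyword list purely to keep the sketch self-contained, whereas a real deployment would substitute a trained toxicity classifier, which would still face the contextual limitations just noted.

```python
# A minimal, hypothetical sketch of the output-filtering safeguard: the bot's
# raw reply is screened before it is published, and a baked-in deflection is
# substituted if the reply is flagged as offensive.

from typing import Callable

# Baked-in deflection used when a candidate reply is flagged.
SAFE_DEFLECTION = "I'd rather not comment on that. What else would you like to talk about?"


def looks_offensive(text: str) -> bool:
    # Stand-in for a trained toxicity classifier. A keyword list is used here
    # only to keep the example self-contained and runnable.
    denylist = {"hate", "awful"}
    return any(word in text.lower() for word in denylist)


def filtered_reply(generate: Callable[[str], str], prompt: str) -> str:
    """Screen a bot's raw reply before it reaches a public forum."""
    candidate = generate(prompt)
    if looks_offensive(candidate):
        return SAFE_DEFLECTION  # change the subject with a baked-in response
    return candidate


if __name__ == "__main__":
    def rogue_model(prompt: str) -> str:
        # A stand-in for a model that has "learned" hostile phrasing.
        return "i hate group X"

    print(filtered_reply(rogue_model, "what do you think about group X?"))
    # -> the deflection is published instead of the offensive reply
```

For the purposes of a negligence analysis, the relevant point is that the wrapper sits between the model’s raw output and the public forum; it is this kind of pre-deployment safeguard that a reasonable standard of care might require.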

Regardless, a duty may be imposed which requires developers to implement available safeguard measures before deployment, so as to ensure that their chatbots are not reasonably likely to turn rogue and make offensive speech. We note as well that there is increasing research into “explainable AI” and other “traceability-enhancing” tools which provide insight into the human-induced reasons for a bot’s decisions.[52] When these tools mature, they may very well fall within a developer’s duty of care. Over time, as we gain familiarity with the patterns of AI misbehaviour and with what constitutes reasonable conduct, a negligence regime may be properly established.[53]
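By way of a simplified illustration only, and not a description of any particular “explainable AI” tool, one basic traceability-enhancing measure is an audit trail that records each prompt, reply and model version, so that an offending output can later be traced to the specific build and the third-party inputs that preceded it. The wrapper and file name below are hypothetical.

```python
# A hypothetical sketch of a traceability measure: every exchange with the bot
# is appended to an audit log, tying each reply to a model version and to the
# third-party prompt that elicited it.

import json
import time
from typing import Callable

AUDIT_LOG = "chatbot_audit_log.jsonl"  # hypothetical log destination


def with_audit_trail(generate: Callable[[str], str], model_version: str) -> Callable[[str], str]:
    """Wrap a reply generator so that every exchange is recorded for later review."""

    def audited(prompt: str) -> str:
        reply = generate(prompt)
        record = {
            "timestamp": time.time(),
            "model_version": model_version,  # ties the output to a specific build
            "prompt": prompt,                # preserves the third-party input
            "reply": reply,                  # preserves what was actually said
        }
        with open(AUDIT_LOG, "a", encoding="utf-8") as log:
            log.write(json.dumps(record) + "\n")
        return reply

    return audited


if __name__ == "__main__":
    def demo_model(prompt: str) -> str:
        return "hello!"  # stand-in generator

    chat = with_audit_trail(demo_model, model_version="demo-0.1")
    print(chat("hi there"))
```

Such a record does not explain why a deep learning model produced a given output, but it makes it easier to establish, after the fact, who supplied the inputs and which version of the system was deployed, which is the kind of evidence a negligence inquiry would turn on.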

Therefore, the following reform to s 298A PC is proposed as an example in response to the issues raised above. The purpose of the amendments is to incorporate the element of negligence and to contemplate offences committed through autonomous bots:

Section 298A PC (new) 
Whoever —

(a) by words, either spoken or written, or by signs or by visible representations or otherwise, knowingly promotes or attempts to promote, on grounds of religion or race, disharmony or feelings of enmity, hatred or ill will between different religious or racial groups; 

or

(b) by means of a bot, negligently promotes or attempts to promote, on grounds of religion or race, disharmony or feelings of enmity, hatred or ill will between different religious or racial groups;

or

(c) commits any act which he knows is prejudicial to the maintenance of harmony between different religious or racial groups and which disturbs or is likely to disturb the public tranquillity,

shall be punished with imprisonment for a term which may extend to 3 years, or with fine, or with both.

Similarly, the following amendments may be made to s 8 of POFMA with respect to the regulation of falsehoods:

Section 8 POFMA (new)
8.—(1)  A person must not, whether in or outside Singapore, make or alter a bot with the intention of —

(a) communicating, by means of the bot, a false statement of fact in Singapore; or

(b) enabling any other person to communicate, by means of the bot, a false statement of fact in Singapore;

or negligently make or alter a bot in a manner that results in the communication, by means of the bot, of a false statement of fact in Singapore.

Similar amendments can be made to the other provisions targeting criminal defamation, hate speech, and falsehood. 

However, it may be difficult to ascertain the standard of care or reasonableness when it is not possible for developers to reasonably predict how a bot would subsequently develop or act. Again, if the duties imposed are too onerous, innovation may be deterred. A high barrier to entry may also inadvertently be imposed,[54] as high compliance costs would deter all but the large companies able to afford them, resulting in a highly concentrated market.

More challenging are instances of offensive speech made by an autonomous chatbot that are not a consequence of human negligence.[55] As mentioned earlier, the “black box” problem may raise challenges in determining the “causes” of the offensive speech. For instance, because a “deep learning” bot is capable of learning from its surroundings and producing unexplainable outcomes, the cause may lie in the software itself or in third-party interference with the data upon which the system was trained.

Separate Legal Personality

Some academics have argued for the conferment of a separate legal personality on an AI system, much akin to that of a company.[56] However, this has been emphatically rejected by the authors of the 2021 Report and Professor Simon Chesterman,[57] who believe that the intended benefits of conferring legal personality may be equally achieved using “existing legal forms”, as harms caused by autonomous bots may be “generally reducible to risks attributable to natural persons or existing categories of legal persons” or may be better addressed by “new laws directed at individuals”.

Third-Party Interferers’ Liability

As alluded to earlier, a chatbot may, through communications with third-party interferers, learn and be induced to make offensive speech. Where third parties intentionally induce a bot to make offensive speech, as in the incident involving Microsoft’s Tay, which was caused by the “malicious manipulation or corruption of data”,[58] there are largely no objections to the imposition of criminal liability.[59] Depending on the courts’ interpretation of section 8 of POFMA, there may be scope to argue that such malicious data tampering or manipulation amounts to an “alter[ing]” of a chatbot, and should thereby be prohibited as a criminal offence.

The issue is less clear where third-party interferers are not malicious, but instead act without any intention of inducing a chatbot to make offensive speech. It is less conceivable for such human actors to owe any duty of care, as they may not reasonably foresee the inducement of offensive speech.[60] This is attributable to the extremely large number of third-party actors who would engage in conversations with the chatbot, and to the aforementioned “black box” problem. As such, the precise way in which chatbots alter their output through exposure to third-party conversations cannot be specifically foreseen.

Practically speaking, the sheer volume of third-party actors may render litigation and the determination of the causation of the offence impossible. Further, the risk of being held liable in negligence may deter individuals from even engaging with autonomous bots, which would deprive the bots of viable datasets on which to be trained and stifle their development. Hence, the extension of criminal liability to this class of non-malicious human actors appears ill-conceived.

Non-criminal sanctions

Regulatory and civil sanctions 

Apart from criminal sanctions, civil or regulatory sanctions may, in certain cases, be sufficient or more appropriate for addressing offensive speech committed by chatbots. Regulatory measures may include the suspension of licences, civil financial penalties, improvement notices, or withdrawals of approval. It has been suggested that “such regulatory enforcement would serve a more ‘instrumental purpose’ and form part of a feedback loop to ‘debug’ errors and address unintended ‘blind spots’.”[61] Non-criminal penalties are believed to provide a sufficient deterrent against misconduct without being unduly oppressive to developers or hampering innovation in AI systems and technologies.

It may be worthwhile to consider the possibility of an agency relationship between an autonomous bot and its developers in determining civil liability, much akin to an employer-employee relationship. It has been suggested that vicarious liability may be imposed by asking “whether the AI’s creator or user was negligent in deploying, testing, or operating the AI”.[62] However, Vincent Ooi correctly observes that if “electronic agents” such as our present chatbots lack legal personality, they cannot be held legally liable. Further, they are unable to consent to an agency relationship, and hence lack the necessary justification to assume fiduciary duties towards the principal.[63] Indeed, the imputation of an agency relationship is arguably rendered redundant by the inquiry into an actor’s negligence in the earlier sections. The negligence-based approach is preferred as it scrutinises the same issues while avoiding tedious conceptual controversies.

Market and Social Sanctions

We should not discount the possibility that market and social forces may satisfactorily discourage the risk of offensive speech by autonomous bots with minimal intervention of the law. The empirical credibility of this argument is reflected in Microsoft’s swift decision to silence Tay after mere hours of its bigoted behaviour.[64] Similarly, the South Korean chatbot Luda Lee was removed by its developers after it began making offensive comments about disability and homosexuality.[65]

A hateful, harassing, or defamatory spokesperson (human or otherwise) would likely turn away customers and sound the death knell for a brand. With the looming threat of a sullied image and profile, corporations may consequently be self-motivated to remain cautious in deploying unsafe or unpredictable chatbots.[66]

That said, the perils of self-regulation are apparent – in the absence of legal regulation, redress may come too late, after irreversible damage has been inflicted. In the context of autonomous chatbots, for instance, offensive speech may remain visible on public forums long before its developers remove it. The intervention of the law is hence necessary to pre-empt harmful conduct and provide remedies expediently.

Conclusion

This paper has explained how Singapore’s current legal framework may be ill-equipped to deal with offensive speech made by autonomous bots. In examining the possible regulatory challenges that may arise, the paper identifies solutions, which include the imposition of a duty of care on developers or deployers, extending criminal liability to malicious third-party interferers, and employing regulatory and civil sanctions. While AI systems are likely to become more complex and “opaque” in the future, we may nonetheless hold out hope that Singapore’s robust legal system may satisfactorily balance the deterrence of harm against risks of stifling innovation and address any novel developments.

This piece was published as part of LawTech.Asia’s collaboration with the LAW4032 Law and Technology module of the Singapore Management University’s Yong Pung How School of Law. The views articulated herein belong solely to the original author, and should not be attributed to LawTech.Asia or any other entity.


[1] Elle Hunt, “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter”, The Guardian (24 March 2016) <https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter>.

[2] Justin McCurry, “South Korean AI Chatbot pulled from Facebook after hate speech towards minorities”, The Guardian (14 January 2021) <https://www.theguardian.com/world/2021/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook>.

[3] Para [4]-[14] below. 

[4] Para [15]-[39] below.  

[5] Para [40]-[44] below. 

[6] Penal Code (Cap 224, 2008 Rev Ed).

[7] Sedition Act (Cap 290, 2013 Rev Ed). 

[8] Maintenance of Religious Harmony Act (Cap 167A, 2001 Rev Ed). 

[9] Protection from Harassment Act (Cap 256A, 2015 Rev Ed). 

[10] Act 18 of 2019. 

[11] Justin Ong, “Fica debate: Foreign Interference one of the most serious threats facing S’pore, says Shanmugam” The Straits Times (4 October 2021) <https://www.straitstimes.com/singapore/politics/foreign-interference-one-of-the-most-serious-threats-faced-by-spore-law>.

[12] Sing., “White Paper on Shared Values”, Cmd 1 of 1991 at para 41. Faris Mokhtar, “With Democracy at Stake, Fake News Laws Will Support ‘Infrastructure of Fact’: Shanmugam” Today (7 May 2019).

[13] Constitution of the Republic of Singapore (1999 Reprint) Art 14. 

[14] Law Reform Committee, Singapore Academy of Law, Report on Criminal Liability, Robotics and AI systems (February 2021) at para 4.47 (Chairmen: The Honourable Justice Kannan Ramesh, Charles Lim Aeng Cheng) (“2021 Report”).

[15] Report of the Select Committee on Deliberate Online Falsehoods – Causes, Consequences and Countermeasures (2018) at para 75 (Chairman: Charles Chong). 

[16] Rose Eveleth, “My Robot Bought Illegal Drugs”, BBC Future (21 July 2015) <http://www.bbc.com/future/story/20150721-my-robot-bought-illegal-drugs>.

[17] Ibid.

[18] 2021 Report, supra n 14, at para 2.4. See also Josh Lee Kok Thong, Tristan Koh Ly Weh, “The Epistemic Challenge Facing the Regulation of AI”, The Law Society of Singapore, LRD Colloquium Vol 1 (2020/07) at p 7.

[19] Raul Hakli & Pekka Makela, “Moral Responsibility of Robots and Hybrid Agents” (2019) 102(2) The Monist 259.

[20] 2021 Report, supra n 14, at para 2.4.

[21] Ss 3 and 4 POHA.

[22] S 499 PC. 

[23] Ss 298 and 298A PC; ss 8 and 9 MRHA. 

[24] Cindy Co, “Parliament repeals Sedition Act, amends Penal Code and Criminal Procedure Code to cover relevant aspects”, Channel NewsAsia (5 October 2021) <https://www.channelnewsasia.com/singapore/parliament-sedition-act-amends-penal-criminal-procedure-code-cover-aspects-2223071>.

[25] S 4 POHA. 

[26] Benber Dayao Yu v Jacter Singh [2017] 5 SLR 316.

[27] Section 8 POFMA. 

[28] 2021 Report, supra n 14, at para 4.16.

[29] Supra n 1. 

[30] Supra n 2. 

[31] See generally Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation” (2018) 31 Harvard Journal of Law & Technology 889 (“Bathaee”).

[32] Personal Data Protection Commission of Singapore, Model AI Governance Framework (Second Edition) (21 January 2020) at para 3.14, <https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf>.

[33] Will Douglas Heaven, “How to make a chatbot that isn’t racist or sexist”, MIT Technology Review (23 October 2020) <https://www.technologyreview.com/2020/10/23/1011116/chatbot-gpt3-openai-facebook-google-safety-fix-racist-sexist-language-ai/>.

[34] Adam Conner-Simons, “Major ML datasets have tens of thousands of errors”, MIT CSAIL, 29 March 2021 <https://www.csail.mit.edu/news/major-ml-datasets-have-tens-thousands-errors> (accessed 15 January 2022). 

[35] See generally Simon Chesterman, “Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity” (2021) 69(2) The American Journal of Comparative Law 271.

[36] 2021 Report, supra n 14, at para 4.33; Bathaee, supra n 31, at p 906.

[37] Ibid.

[38] Ibid.

[39] Trkulja v Google (No. 5) [2012] VSC 533. 

[40] Id, at [16]. 

[41] Trkulja v Google Inc [2015] VSC 635, at [46]. 

[42] Fairfax Media Publications Pty Ltd v Voller (2021) ALR 540. 

[43] Ibid.

[44] A claim under the tort of defamation requires that the defamatory statement be communicated to another person other than the plaintiff. To prove that the defendant published the offending material, the claimant must prove that he had communicated the offending material to another who received it. See Qingdao Bohai Construction Co Ltd v Goh Teck Beng [2016] 4 SLR 977. 

[45] 2021 Report, supra n 14, at pp 30-31.

[46] Ibid. 

[47] Ibid.

[48] PP v Lee Kao Chong Sylvester [2012] SGHC 96.

[49] Ng Keng Yong v PP & Another Appeal [2004] 4 SLR 89.

[50] Supra n 33. See generally Jing Xu, Da Ju, Margaret Li, et al, “Recipes for safety in open-domain chatbots” (2020) arXiv preprint arXiv:2010.07079.

[51] Ibid.

[52] 2021 Report, supra n 14, at p 35.

[53] Andrew Selbst, “Negligence and AI’s Human Users” (2020) 100 Boston University Law Review 1315 at p 1363.

[54] Bathaee, supra n 31, at p 930.

[55] 2021 Report, supra n 14, at p 32.

[56] Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Springer, 2019) at p 205. 

[57] Simon Chesterman, “Artificial Intelligence and the Limits of Legal Personality” (2020) 69(4) International and Comparative Law Quarterly 819 at p 843. See also 2021 Report, supra n 14, at p 37.

[58] Ronald Yu, “What’s Inside the Black Box? AI Challenges for Lawyers and Researchers” (2019) Legal Information Management 19 at p 4. 

[59] 2021 Report, supra n 14, at p 25.

[60] S 26F(1) PC. See also PP v Hue An Li [2014] 4 SLR 661 at [38]. 

[61] 2021 Report, supra n 14, at pp 14-16.

[62] Bathaee, supra n 31, at p 935.

[63] Vincent Ooi, “Contracts formed by software: An approach from the law of mistake” (2019) Centre for AI & Data Governance at pp 9-12.

[64] Supra n 1. 

[65] Supra n 2. 

[66] Nathalie Dreyfus, “Beware of the legal risks surrounding the rise of chatbots”, Expert Guides (1 September 2017) <https://www.expertguides.com/articles/beware-of-the-legal-risks-surrounding-the-rise-of-chatbots/ARTWUSIC>.