
Written by Andrea Christine Suki | Edited by Josh Lee Kok Thong

LawTech.Asia is proud to collaborate with the Singapore Management University Yong Pung How School of Law’s LAW4060 AI Law, Policy and Ethics class. This collaborative special series is a collection featuring selected essays from students of the class. For the class’ final assessment, students were asked to choose from a range of practice-focused topics, such as writing a law reform paper on an AI-related topic, analysing jurisdictional approaches to AI regulation, or discussing whether such a thing as “AI law” existed. The collaboration is aimed at encouraging law students to analyse issues using the analytical frames taught in class, and apply them in practical scenarios combining law and policy.

This piece, written by Andrea Christine Suki, examines whether criminal law should evolve or adapt to mitigate a range of harms posed by generative AI, and seeks to provide recommendations where the existing criminal framework is found to be possibly inadequate.

Introduction

Generative artificial intelligence (“GenAI”) refers to AI systems that can generate new content (e.g., text, images, audio) based on existing data.[1] Considering the immense benefits and conveniences of GenAI, its use will likely only become more widespread. However, the rise of GenAI usage does not come without its dangers. Increasingly, we see GenAI being deployed for nefarious purposes and/or causing physical, emotional or financial harm. 

In recent years, there have already been incidents of: 

  1. Scammers using GenAI to mimic the voices of loved ones in distress to commit fraud;[2]
  2. A self-learning chatbot, deployed to interact with Twitter users, parroting those users’ antisocial and offensive comments;[3]
  3. Fraudsters using GenAI to mimic a CEO’s voice and demand a fraudulent transaction by requesting urgent payment of funds;[4]
  4. Circulation of a deepfake video of Prime Minister Lee Hsien Loong endorsing and advocating for a cryptocurrency scheme;[5]
  5. Disinformation researchers raising the alarm on AI Chatbots and their potential to be used for disinformation;[6] and
  6. The circulation of deepfake pornography (“DP”) depicting “hyper-realistic sex videos of non-consenting individuals”[7] causing humiliation and distress.[8]

While not all of these examples incurred or should incur criminal liability, they serve to illustrate the real harms that arise from the use of GenAI. As a caveat, it is important to note that GenAI, much like other forms of AI, should not be demonised because of its potential risks or harms. GenAI has immeasurable potential and benefits to offer. Nonetheless, where harm is caused to humans or society, the question of criminal sanctions inevitably arises.[9]

With this context in mind, it is thus necessary to consider how criminal law should evolve or adapt so as to mitigate these harms. This report aims to build on the Singapore Academy of Law’s Report on Criminal Liability, Robotics and AI Systems[10] (“RAI Report”), and examines the criminal regulation of GenAI. It therefore addresses whether and how Singapore’s criminal framework can be applied to address harms arising from the use of GenAI. Where the existing criminal framework is inadequate, recommendations will be made. 

Theoretical bases of criminal liability 

As noted in previous reform reports, criminal law is primarily concerned with regulating human behaviour. However, a discussion of the regulation of human behaviour with respect to GenAI necessitates a discussion of the distinction between GenAI and other types of AI. While traditional AI is focused on pattern recognition and prediction-making, GenAI “actively creates novel outputs”.[11]

The three aims of criminal law were set out in the RAI Report and remain applicable: (1) punishment of the offender; (2) protection of the community “against those who cause harm and are dangerous”;[12] and (3) protection of offenders, by putting them through a system of criminal justice that aims to impose a punishment proportionate to the seriousness of the crime committed. Criminal liability hinges on the following three elements:

  1. The actus reus of the accused: This refers to the conduct of the accused that constitutes an offence. Notably, “conduct” extends beyond positive acts and applies to omissions (where there is a duty to act).[13] For the purposes of this report, the accused’s conduct must also be voluntary. Finally, the potential or threat of harm is relevant – there is no need for actual harm to be caused for liability to be attracted. 
  2. The mens rea of the accused: This refers to the state of mind of the accused which is needed to establish liability under an offence and asks what was in the mind of the accused at the time of the offence. The different states of mind are: (a) intention to cause harm, (b) knowledge that harm is likely to result, (c) wilfulness, (d) rashness in realising that harm may result, and (e) negligence.[14]
  3. Any applicable defences: This refers to any exculpatory “excuses” or “justifications” that may “mitigate or relieve the accused of criminal responsibility”.[15]

Liability of individuals and/or corporations 

For the purposes of this report, the relevant actors who may cause harms are:

  1. Users of GenAI who cause harm with the generated content (“Users”);
  2. Bad actors who interfere with GenAI systems and cause harm (“Bad Actors”); and
  3. Corporations or individuals who build GenAI systems (“Programmers”).

Notably, although corporations are not humans, criminal liability can be attached to them as legal persons.[16]

Intentional criminal use of or interference with GenAI systems

It is uncontroversial that a person who intentionally uses or causes a GenAI system to cause harm should be liable for the act.[17] In most cases, the harms arising from the use of GenAI will fall within the scope of Singapore’s existing criminal framework. However, our criminal framework is insufficient to address the harms arising from the creation and distribution of non-consensual deepfake pornography (“NCDP”). 

Bad Actors 

Where a bad actor hacks into a GenAI system to cause harm, it is likely that s 4 of the Computer Misuse Act[18] (“CMA”) applies:

Access with intent to commit or facilitate commission of offence 
4.—(1) Any person who causes a computer to perform any function for the purpose of securing access to any program or data held in any computer with intent to commit an offence to which this section applies shall be guilty of an offence. 

However, some harms that arise from interference with GenAI may not fall within the ambit of s 4, as s 4(2) specifies that “offence” under s 4 extends only to offences “involving property, fraud, dishonesty or which causes bodily harm”.[19] Thus, a bad actor who hacks into a GenAI system to induce the generation of harmful output, without more, would not be caught. However, considering the growing reliance on GenAI tools like ChatGPT, bad actors may use GenAI to cause social unrest. On a societal level, a hack may be aimed at spreading misinformation or perpetuating certain propaganda or biases. None of these situations fall within the ambit of s 4(2) of the CMA. Nonetheless, other criminal provisions can apply. 

Recommendation: Existing criminal laws can be adapted to address Bad Actor crimes arising from GenAI

A bad actor who aims to cause harm on a societal level by influencing GenAI to generate output meant to incite hatred on a racial basis may be caught by s 298A of the Penal Code (“PC”).[20] Section 298A criminalises “any act which” an individual “knows is prejudicial to the maintenance of harmony between different racial groups and which disturbs or is likely to disturb the public tranquillity”.[21] The phrase “any act” is broad and should encompass the act of interfering with a GenAI system. Section 17F of the Maintenance of Religious Harmony Act (“MRHA”)[22] also applies. Section 17F(3) criminalises “conduct that incites feelings of enmity…or contempt for or ridicule of” a religious group in Singapore.[23] There is no reason why the act of interference with a GenAI system should not fall under the definition of “conduct” under the MRHA. 

With respect to any other type of harmful misinformation, s 8(1) of the Protection from Online Falsehoods and Manipulation Act[24] (“POFMA”) criminalises the “production, manufacture or alteration” of a “bot” to communicate a false statement of fact in Singapore.[25] “Bot” is defined as “a computer program made or altered for the purpose of running automated tasks”.[26] The alteration of a GenAI system falls within this scope and the act of hacking constitutes “alteration”. This would cover statements of fact meant to incite enmity “between different groups of persons” even if the statement does not concern race or religion[27] (e.g., deliberately influencing a GenAI chatbot to generate misogynistic statements as statements of fact). It would also cover statements that affect public health, safety, tranquillity or public finances.[28]

Users of GenAI

Existing criminal framework adequately addresses scams (calls, messages, phishing) even if GenAI is used 

While the rise of GenAI has resulted in more convincing scam calls or messages, existing provisions and penalties for cheating are sufficient to address the harms arising from scam calls and messages, even considering the use of GenAI. The use of GenAI in the process of the offence does not change the nature of the mischief. Depending on the facts of each case, “scammers can be charged with one of the cheating-related offences under ss 417 to 420A of” the PC.[29]  Thus, a fraudster who sends a message under a falsified identity to defraud an unsuspecting individual would be caught by these provisions, whether the message was crafted with the assistance of a GenAI application or without. Additionally, phishing scams would be caught under s 416A.

Notably, as technology evolves, scammers may use text, images, videos or audio to deceive victims into thinking they are another person (e.g., a loved one, a government official). They may do this by using GenAI to craft more convincing messages free of grammatical and spelling errors,[30] mimicking someone’s voice,[31] or even creating a likeness of someone using deepfake technology. In these cases, scammers may be charged under s 419 of the PC, which criminalises cheating by personation. Section 416 defines “cheating by personation” as cheating “by pretending to be some other person…or representing that he or any other person is a person other than he or such other person really is”.[32] Further, illustration (b) provides that “A cheats by pretending to be B, a person who is deceased. A cheats by personation”.[33] This definition is broad enough to cover situations where GenAI is used to personate another individual, as there is no specification on how the scammer “pretends” to be another individual. 

Deepfake Technology: Existing criminal framework adequately addresses defamation and misinformation, but reform needed to better address non-consensual pornography

Deepfakes are a type of synthetic media (a type of media generated or manipulated using AI).[34] While deepfake technology has been used for positive purposes[35], its ability to “produce content that convincingly shows people saying or doing things they never did or create people that never existed in the first place” has been harnessed for criminal activity.[36]

This sub-section will discuss the following criminal activities:

  1. Defamation; 
  2. Spreading harmful misinformation; and
  3. Non-consensual pornography.

Considering that the growing use of deepfakes is a worldwide phenomenon, Europol’s findings provide a valuable starting point. The following section will thus discuss whether Singapore’s existing criminal framework remains applicable to the use of deepfakes in furtherance of these criminal activities. It is submitted that Singapore’s existing criminal framework adequately addresses defamation and the spread of misinformation arising from the use of GenAI. However, reform of the law is needed to better protect victims of non-consensual deepfake pornography.  

Adequacy of existing criminal framework in addressing defamation arising from the use of GenAI

Cyber-libel may take the form of a defendant impersonating the identity of the victim online and posting content that adversely affects the victim’s reputation (“impersonating post”).[37] With the use of deepfake technology, perpetrators are able to generate convincing pictures or even videos that look like their target person. They may even be able to mimic the target person’s voice and post a video in which the target person seems to be saying things that would affect their reputation (e.g., expressing controversial opinions, using vulgar language, etc.). For instance, in December 2023, then-Prime Minister of Singapore Lee Hsien Loong warned against a deepfake video of him that had been circulating online. The video falsely depicted him as endorsing a cryptocurrency scam in an interview with China Global Television Network, a Beijing-based news outlet.[38] In the video, he promised a guaranteed return on investments and stated that the investment programme had been approved by the Singapore government.[39] Notwithstanding PM Lee’s clarification, had the general public believed that PM Lee had actually endorsed this scam, his reputation would have been adversely affected. This may amount to cyber-libel.  

It is submitted that Singapore’s existing criminal regime adequately addresses harms arising from defamatory deepfake impersonation. Harms arising from defamatory deepfake impersonation will likely attract criminal liability under the PC and/or the Protection from Harassment Act (“POHA”). 

Defamatory impersonation using deepfakes would constitute criminal defamation under s 500 read with s 499 of the PC. The three elements of the offence of criminal defamation are as follows: 

  1. Making or publishing an imputation concerning any person; 
  2. Making such imputation by words either spoken or intended to be read or by signs or by visible representations; and 
  3. Making such imputation with the intention of harming or knowing or having reason to believe that such imputation would harm the reputation of that person.[40]

The element of “publication” has been satisfied by online content.[41] Any defamatory text would constitute “words intended to be read”, and the deepfake image or video itself would constitute a “visible representation”. All that needs to be proven is element (3). 

However, if a poorly-made deepfake is posted, it may be difficult to establish element (3). The defendant may argue that he had no reason to believe the victim’s reputation would be harmed since the deepfake was clearly edited. Further, a perpetrator may be able to avoid criminal liability by including a disclaimer that the video is edited. Nonetheless, emotional harm may still be caused to the victim. 

Nonetheless, even if the PC cannot apply, criminal liability under ss 3 and/or 4 of POHA can likely be made out.[42] Section 3 of POHA criminalises the following behaviour if it is done with the intent to cause harassment, alarm or distress (“HAD”) to another person[43]:

  1. using any threatening, abusive or insulting words or behaviour;
  2. making any threatening, abusive or insulting communication; or
  3. publishing any identity information of the target person or a related person of the target person,

and causing HAD to another person as a result. 

Proof of threatening behaviour, communication or publication is not enough to constitute an offence under s 3 of POHA if intention is not established. However, under s 4 of POHA, proof of threatening, abusive or insulting communication likely to cause a victim to feel HAD is sufficient to constitute an offence.[44] Intention to cause such an effect need not be established.[45]

For a perpetrator of defamatory deepfake impersonation to be found liable under POHA, it would need to be shown that:

  1. Threatening, abusive or insulting communication was made; and
  2. HAD was caused or was likely to be caused as a result.

It is submitted that both elements can be made out. 

An insulting impersonating post likely constitutes “insulting communication” under POHA. Section 2 of POHA defines “communication” as “any words, image (moving or otherwise), message, expression, symbol or other representation that can be seen, heard or otherwise perceived by any person, or any combination of these”.[46] Case law has also shown that posts on social media platforms constitute “communication”. In Lai Kwok Kin v Teo Zien Jackson, the respondent’s posting of adverse reviews of his ex-employer on various platforms, including Facebook, constituted “insulting communication”.[47] Thus, an impersonating post where the target person is depicted to be saying things that would affect their reputation could be considered an insulting “communication”, which would likely bring the post within the ambit of ss 3 and 4 of POHA. 

An insulting impersonating post would also likely cause HAD to a victim. In Ye Lin Myint v Public Prosecutor,[48] a former insurance agent sent anonymous letters and e-mails to his clients and prospective clients, threatening them with harm, joblessness, humiliation and destruction of their reputations. These communications “caused considerable distress” to the recipients.[49] Likewise, having one’s likeness used to promote offensive or vulgar content may cause victims to feel humiliated or to fear that their reputations will be negatively affected. This would similarly cause distress, even if the deepfake was poorly made or carried a disclaimer. In fact, the mere fact that the perpetrator has some form of control over the victim’s likeness would arguably be alarming and distressing, as the victim would have no control over the content generated. Victims would likely worry that the perpetrator could continue to generate impersonating content that may harm their reputation, resulting in feelings of alarm and distress. 

Adequacy of the existing criminal framework in addressing misinformation arising from the use of GenAI

Although GenAI is capable of improving the quality of misinformative content, the use of GenAI in creating content does not preclude the attachment of criminal liability to the act of sharing it. GenAI may be used to generate harmful misinformative content that supports the narratives of extremist groups, stokes social unrest, or disrupts financial markets.[50] These purposes are all accounted for under s 7 of POFMA. Further, POFMA criminalises the doing of “any act…in order to communicate in Singapore a statement”.[51] This is broad enough to include the act of generating misinformative content and subsequently circulating it. 

Reforming the Law to Address Non-Consensual Deepfake Pornography

It is submitted that reform of the law is needed to better protect victims of non-consensual deepfake pornography. This report shall first address the limitations of the existing criminal framework. Then, recommendations for reform will be made. 

A recent study found that 98% of online deepfake videos are pornographic.[52] This highlights one of the most pressing concerns about the use of GenAI. Perpetrators can use GenAI to generate and/or circulate pornographic images or videos of individuals.[53] As it stands, ss 377BE and 377BD of the PC may apply to DP. 

Section 377BE criminalises distributing, or threatening to distribute, an intimate image or recording of an individual.[54] Illustration (a) of s 377BE makes explicit that a doctored or manipulated image or recording is also caught under this provision – it need not actually depict the individual.[55] From this illustration, s 377BE appears to target distributors of DP directly.[56]

However, commentators have pointed out that s 377BE was introduced to deal with “revenge pornography” (a former partner using intimate pictures or recordings as retaliation or blackmail), rather than DP specifically.[57] Thus, Singapore’s criminal regime is insufficient to catch certain harms arising from DP. 

The first issue concerns the exclusion, under s 377BE(5)(b), of images or recordings that are “so altered that no reasonable person would believe that it depicts” the individual. Such images would not fall within the scope of the provision as they are not “intimate recordings”. Much like how an image of the individual’s face edited onto a pornographic cartoon would not constitute an “intimate image”, a perpetrator may argue that a poorly edited deepfake video where the “victim’s face does not blend perfectly with the original video” may likewise not constitute an “intimate image”.[58] Ironically, since less advanced deepfake applications (which generate less convincing videos) are more accessible to perpetrators, the deepfakes most likely to be in circulation are precisely those that escape this provision. Alternatively, if a perpetrator were to simply add a text disclaimer stating that the video is a deepfake, he may argue that no reasonable person would believe it depicted the individual.[59]

The second issue arises where distributors of NCDP have no personal involvement with the victim.[60] Sections 377BE(1)(c) and 377BE(2)(c) of the PC state that the perpetrator must at least have reason to believe that the distribution or threat of distribution is likely to cause humiliation, alarm or distress to the victim.[61] It may be difficult to establish this requirement with respect to a perpetrator who has no connection to the victim.[62] The perpetrator could simply argue that he did not know who the victim was, assumed she was a pornographic actress, and had no reason to believe distribution would cause her humiliation.[63] Since the provision was meant to target “revenge pornography”, this knowledge element is intended to target a perpetrator who was or is in a personal relationship with the victim, and therefore knows that distribution will likely cause humiliation.[64]

For situations where NCDP is created but not distributed, s 377BD of the PC applies. So long as the victim knows of the creator’s possession of such a video and reports it, the creator of the pornography could be held criminally liable. Nonetheless, the same issues as under s 377BE still apply, as knowledge that possession will likely cause humiliation, alarm or distress is a conjunctive requirement under s 377BD(1)(b)(iii).

Recommendations: Modify the definition of an “altered intimate image” under ss 377BE and 377BD; Create a new offence to criminalise distribution of DP for sexual gratification 

Under the current criminal regime, a perpetrator cannot be caught for circulating or creating NCDP if the video is poorly edited, or if he does not know the victim personally. However, regardless of whether these elements can be made out, such a video still humiliates the victim and clearly causes significant harm. Victims of NCDP have described the experience as “dehumanising, degrading [and] violating”.[65] Perpetrators should not be able to capitalise on loopholes in the law to escape liability when real psychological and emotional harm is caused to victims. Failure to protect victims in these scenarios essentially creates “exceptions” in the law, allowing perpetrators to create sexualised content of anyone as long as it falls within one of these scenarios. Considering the magnitude of the harm caused, the creation and circulation of NCDP should attract the same degree of criminal liability as “revenge pornography”. 

As such, it is submitted that ss 377BE and 377BD do not adequately protect victims and ought to be expanded. The UK’s proposed measures for dealing with DP provide valuable guidance in this respect.[66] Our recommendations on this issue are as follows: 

Recommendation: Modify the definition of an “altered intimate image” under ss 377BE and 377BD

The requirement that a reasonable person would believe that the video depicts the victim should be done away with, as it creates a loophole for poorly-edited deepfakes and deepfakes with clarificatory watermarks.[67] The emphasis of the provision should instead be the intention to depict an actual person (the victim), which can be seen from the attempt to mimic realism.[68] This maintains the original (reasonable) exclusion of cartoon images, while expanding the scope to cover the abovementioned exceptions.[69] Further, “revenge pornography” would clearly also meet these requirements. Thus, the provisions would expand in scope while still targeting the initial intended harm. 

Recommendation: Create a new offence to criminalise distribution of DP for sexual gratification

Singapore should also import the UK’s approach of criminalising the distribution of DP for the purpose of sexual gratification[70] (whether of the perpetrator or of someone else). Knowledge of harm need not be proven.[71] Thus, this new offence would protect victims of DP even where the perpetrator does not know them or includes a disclaimer that the video is a deepfake. It has been suggested that the distribution of DP on a pornography website or chat group (such as SGNasiLemak)[72] could be a factor that points towards the purpose of sexual gratification.[73] Nonetheless, following the UK approach, the burden should be on the prosecution to prove the necessary purpose of sexual gratification. Consequently, a defendant could also argue that he was acting with a “different, non-relevant purpose”.[74]

Programmers 

All GenAI platforms have the potential to be used for malicious purposes. However, it is submitted that criminal liability should not be imposed on Programmers unless the main purpose of the platform is to cause harm. In this respect, a Programmer who builds a GenAI platform for the main or sole purpose of generating misinformation may be liable under s 8 of POFMA, as this would constitute the making of a “bot” with the intention of communicating false statements.[75]

With respect to other malicious GenAI platforms (e.g., platforms to generate DP, or platforms to assist with planning crimes), criminal liability may not be appropriate. Considering that such platforms will emerge from all over the world, locating the Programmers responsible would be resource-draining and often virtually impossible. Blocking local access to such websites would be sufficient. This is the approach taken towards pornography websites.

Non-intentional criminal use or interference with GenAI systems

Considering the harms that can arise from GenAI, it is important to consider whether Users and Programmers should be held criminally liable for harms they did not intend to cause and did not know would occur. Non-intentional harms could be addressed through the criminal negligence framework, through the imposition of new criminal offences, or by imposing liability on GenAI systems themselves (separate legal personality). However, it is submitted that non-intentional harms arising from GenAI should not be criminalised, as the imposition of criminal liability will likely have chilling effects on the development and usage of GenAI in the jurisdiction.[76]

“Negligence” is defined in the Penal Code as follows:[77]

Whoever omits to do an act which a reasonable person would do, or does any act which a reasonable person would not do, is said to do so negligently.

Thus, if non-intentional harms are addressed with criminal negligence, it is necessary to determine the standard of “reasonable” conduct (i.e., what a reasonable person would do in a given circumstance)[78]. As noted in the RAI report, this would be for courts in each case to determine.[79] Considering the absence of precedent, new criminal negligence standards would need to be defined. While this allows the law to develop alongside technology, it gives rise to uncertainty.[80] If users and developers lack sufficient certainty on standards to abide by, this may have a chilling effect on the adoption and development of GenAI technology.[81] Further, even if we were to set out the standards for reasonable conduct in legislation to provide some certainty, it may be difficult to define “what a reasonable person should (or should not) do” to mitigate or prevent the harm.[82] This is because it may not be possible to predict how a GenAI system “would act when developed or used in a certain way” with sufficient certainty.[83] Similar arguments can be made against the imposition of new offences to criminalise non-intentional harms arising from GenAI. This unpredictability of GenAI also increases the risk of the inequitable attribution of liability or even scapegoating. 

The possibility of creating a separate legal personality for AI was considered but not recommended in the RAI report.[84] Indeed, since criminal law is intended to correct, deter or punish human behaviour, it is unclear how the imposition of criminal liability would correct the GenAI system itself.[85] At the current state of technology, the argument that we should instead regulate and target the behaviour of those responsible for GenAI systems is more compelling.[86] This can be done using existing civil and criminal legal frameworks. 

From a policy perspective, Singapore strives to attract top industry players and to capitalise on the economic benefits of the GenAI industry. In line with these goals, non-intentional harms should be an issue of civil regulation. 

Conclusion 

This report has explored how harms arising from GenAI can be regulated using Singapore’s criminal framework. In most areas, it seems our criminal regime is sufficient and can be adapted to address intentional harms arising from the use of GenAI. However, reform is needed to provide greater protection for victims of the creation and distribution of NCDP. Considering the prevalence of pornographic deepfake material, action should be taken swiftly to reinforce Singapore’s strong stance against sex crimes. 

With respect to non-intentional harms, this report has established that they should not be regulated using a criminal framework. Criminalising intentional harms while leaving non-intentional harms to be civilly regulated strikes an adequate balance between the need for accountability and our desire not to dampen the development and deployment of GenAI. 

Finally, as technology continues to evolve rapidly, criminal law (or even the law in its entirety) has never been and will never be a panacea for all harms arising from GenAI. Instead of relying on legal frameworks and mechanisms alone, measures like code should also be deployed to regulate behaviour with respect to GenAI in tandem with the law. This will provide more robust protection and guidance for users, developers and society alike.


[1] Emilio Ferrara, “GenAI against humanity: nefarious applications of generative artificial intelligence and large language models” (2024) J Comput Soc Sc at p 2 (“Ferrara”). 

[2] Carter Evans and Analisa Novak, “Scammers use AI to mimic voices of loved ones in distress” CBS News (19 July 2023). <https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/> (accessed 8 April 2024). 

[3] Oscar Schwartz, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation”, IEEE Spectrum (25 Nov 2019) <https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation> (accessed 9 April 2024).  

[4] Catherine Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case”, Wall Street Journal (30 August 2019) <https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402> (“WSJ”) (accessed 9 April 2024). 

[5] Dymples Leong, “Commentary: PM Lee’s deepfake video and the risk when seeing is no longer believing”, Channel News Asia (17 January 2024) <https://www.channelnewsasia.com/commentary/deepfake-pm-lee-scam-danger-disinformation-election-4050821> (accessed 7 April 2024). 

[6] Tiffany Hsu and Stuart Thompson, “Disinformation Researchers Raise Alarms About A.I. Chatbots”, The New York Times (8 February 2023) <https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html> (accessed on 8 April 2024). 

[7] Poon Chong Ming, “Fake porn, real harm: Examining the laws against deepfake pornography in Singapore”, LawTech.Asia (3 October 2022) <https://lawtech.asia/fake-porn-real-harm-examining-the-laws-against-deepfake-pornography-in-singapore/> (accessed 8 April 2024) (“Poon”). 

[8] Justin Sherman, “Completely horrifying, dehumanizing, degrading: One woman’s fight against deepfake porn.” CBS News (14 October 2021) <https://www.cbsnews.com/news/deepfake-porn-woman-fights-online-abuse-cbsn-originals/> (accessed 9 April 2024). 

[9] Law Reform Committee, Singapore Academy of Law, “Report on Criminal Liability, Robotics and AI Systems” (February 2021) (“RAI Report”), at p 4 para 18. 

[10] Id.

[11] Ferrara, supra n 1, at p 2. 

[12] RAI Report, supra n 9, at para 2.2. 

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] See Penal Code s 2, definition of “person”.

[17] RAI Report, supra n 9, at para 4.6. 

[18] Computer Misuse Act 1993 (2020 Rev Ed) (“CMA”) s 4. 

[19] Id, at s 4(2).

[20] Penal Code 1871 (2020 Rev Ed) (“Penal Code”) s 298A. 

[21] Penal Code s 298A(b). 

[22] Maintenance of Religious Harmony Act 1990 (2020 Rev Ed) (“MRHA”) s 17F. 

[23] Id, at s 17F(3). 

[24] Protection from Online Falsehoods and Manipulation Act 2019 (2020 Rev Ed) (“POFMA”) s 8(1). 

[25] RAI Report, supra n 9, at p 27, footnote 44. 

[26] RAI Report, supra n 9, at p 27, citing POFMA s 2(1).

[27] POFMA s 8(3)(e).

[28] POFMA s 8(3)(b). 

[29] Written Reply to Parliamentary Question on Punishments for E-commerce and Online Scams, By Mr K Shanmugam, Minister for Home Affairs and Minister for Law (10 May 2021) <https://www.mha.gov.sg/mediaroom/parliamentary/written-reply-to-pq-on-punishments-for-e-commerce-and-online-scams-by-mr-k-shanmugam-minister-for-home-affairs-and-minister-for-law/#:~:text=Answer%3A,that%20the%20penalties%20are%20adequate.> (accessed 7 April 2024). 

[30] Feng Zengkun, “Stem the scams: Beware the bots to avoid being distraught” The Straits Times (28 April 2023) <https://www.straitstimes.com/tech/stem-the-scams-beware-the-bots-to-avoid-being-distraught> (accessed 8 April 2024). 

[31] Ibid; WSJ, supra n 4. 

[32] Penal Code s 416. 

[33] Id, s 416 Illustration (b). 

[34] Europol, “Facing reality? Law enforcement and the challenge of deepfakes, an observatory report from the Europol Innovation Lab” (2022) (“Europol”), at p 5. 

[35] Ashish Jaiman, “Positive Use Cases of Synthetic Media (aka Deepfakes)”, Towards Data Science (15 August 2020) <https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387> (accessed 8 April 2024). 

[36] Europol, supra n 34, at p 4. 

[37] Gary Chan Kok Yew, “Reputation and Defamatory Meaning on the Internet, Communications, Contexts and Communities” (2015) 27 SAcLJ, at para 4. 

[38] Christine Chiu, “PM Lee warns against responding to deepfake videos of him promoting investment scams”, The Straits Times (4 Jan 2024) <https://www.straitstimes.com/singapore/pm-lee-warns-against-responding-to-deepfake-videos-of-him-promoting-investment-scams> (accessed 13 November 2024). 

[39] Ng Hong Siang and Firdaus Hamzah, “PM Lee urges vigilance against deepfakes after ‘completely bogus’ video of him emerges”, Channel News Asia (28 December 2023) <https://www.channelnewsasia.com/singapore/deepfake-video-pm-lee-investment-scam-4012946> (accessed 13 November 2024). 

[40] Xu Yuanchen v Public Prosecutor and another appeal [2023] 5 SLR 1210, at [19].

[41] See: Xu Yuanchen v Public Prosecutor and another appeal [2023] 5 SLR 1210. 

[42] Protection from Harassment Act 2019 (2020 Rev Ed) (“POHA”) ss 3 and 4. 

[43] Ibid.

[44] Protection from Harassment Act, s 4. 

[45] Gary Chan, “Tort of Defamation: Establishing a Prima Facie Case” in Gary Chan Kok Yew, Lee Pey Woan, The Law of Torts in Singapore (Academy Publishing, 2016) ch 12, at 12.009.

[46] POHA s 2. 

[47] Lai Kwok Kin v Teo Zien Jackson [2019] SGDC 276, at [46]–[47]. 

[48] Ye Lin Myint v Public Prosecutor [2019] 5 SLR 1005.

[49] Id, at [1]. 

[50] Europol, supra n 34, at p 10. 

[51] POFMA s 7. 

[52] Nicholas Kristof, “Why are we not doing more about deepfakes and the online abuse of women and girls?”, The Straits Times (24 March 2024) <https://www.straitstimes.com/opinion/why-are-we-not-doing-more-about-deepfakes-and-the-online-abuse-of-women-and-girls> (accessed 8 April 2024). 

[53] Syarafana Shafeeq, “No recent spike in tech-facilitated sexual harm, but AI poses concern for future: Women’s groups”, The Straits Times (28 Jan 2024) <https://www.straitstimes.com/singapore/no-recent-spike-in-tech-facilitated-sexual-harm-but-ai-poses-concern-for-future-women-s-groups> (accessed 7 April 2024). 

[54] Poon, supra n 7; Penal Code s 377BE. 

[55] Ibid; Penal Code s 377BE Illustration (a).

[56] Ibid.

[57] Id, citing Singapore Parliamentary Debates, Official Report (6 May 2019) vol 94 (‘Second Reading Speech on the Criminal Law Reform Bill’) (K Shanmugam, Minister for Home Affairs).

[58] Poon, supra n 7; Penal Code s 377BE Illustration (b).

[59] Ibid.

[60] Ibid.

[61] Ibid; Penal Code s 377BE(1)(c) and (2)(c). 

[62] Ibid.

[63] Ibid.

[64] Id, citing Singapore Parliamentary Debates, Official Report (6 May 2019) vol 94 (‘Second Reading Speech on the Criminal Law Reform Bill’) (K Shanmugam, Minister for Home Affairs).

[65] Justin Sherman, “‘Completely horrifying, dehumanizing, degrading’: One woman’s fight against deepfake porn”, CBS News (14 October 2021) <https://www.cbsnews.com/news/deepfake-porn-woman-fights-online-abuse-cbsn-originals/> (accessed 5 April 2024). 

[66] Poon, supra n 7; United Kingdom, Law Commission, Consultation paper on Intimate Image Abuse (Paper No. 253, 2021) (“UK Law Commission”) at para 14.7.

[67] Poon, supra n 7. 

[68] Ibid; UK Law Commission at para 14.24.

[69] Ibid.

[70] Ibid; UK Law Commission at para 14.7. 

[71] Ibid.

[72] Ibid; David Sun, “SG Nasi Lemak chat admin jailed and fined; had more than 11,000 obscene photos and videos”, The Straits Times (9 March 2021) <https://www.straitstimes.com/singapore/courts-crime/sg-nasi-lemak-admin-jailed-and-fined-had-more-than-11000-obscene-photos-and> (accessed 7 April 2024). 

[73] Poon, supra n 7. 

[74] UK Law Commission, supra n 66, at para 10.48. 

[75] POFMA s 8. 

[76] RAI Report, supra n 9, at para 4.55.

[77] Penal Code s 26F. 

[78] RAI Report, supra n 9, at para 4.25.

[79] Id, at para 4.26.

[80] Ibid.

[81] Ibid.

[82] Id, at para 4.33.

[83] Ibid.

[84] Id, at para 4.45.

[85] Ibid.

[86] Ibid.