Reading time: 15 minutes

Written by Poon Chong Ming | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces, which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Poon Chong Ming, examines the laws against deepfake pornography in Singapore. Years after its emergence, deepfake pornography remains inadequately dealt with by the law. As a result, it is proliferating with greater prominence, inflicting ever greater harm on victims while leaving them without proper recourse. This paper looks at the issue of deepfake pornography specifically within Singapore, in light of the stark increase in local sexual abuse cases involving technology. The paper first explains the need for a strong legal framework, given the nature of deepfake pornography (hyper-realism combined with ease of production). It then examines the efficacy of current laws in Singapore (civil, criminal, and regulatory measures) in dealing with deepfake pornography. Finally, by looking at measures taken in the United Kingdom, the paper provides suggestions as to the direction of the law in Singapore, with the most viable recommendation being to build upon Sections 377BE and 377BD of the Penal Code.

Introduction 

As a generation raised alongside the Internet, there is always a tendency to wonder: how are we actually perceived online? We proceed to “google” ourselves, hoping to see our achievements listed on school websites, or to be mentioned on a close friend’s blog. Instead, what emerges is far from anything we could have imagined: explicit videos of ourselves that were never recorded. This is precisely the situation that 18-year-old Noelle Martin found herself in. To her, the experience was “completely horrifying, dehumanizing, degrading, and violating”.[1]

Such videos, labelled deepfake pornography, represent one of the most harmful uses of deepfake technology.[2] Hyper-realistic sex videos of non-consenting individuals are created by superimposing their faces onto pornographic videos using an algorithm.[3] Despite the years that have passed since its emergence, deepfake pornography is still not adequately dealt with by the law, and it continues to proliferate with greater prominence.[4] As a result, the harm inflicted on victims grows, while leaving them without proper recourse.

This paper looks at the issue of deepfake pornography within Singapore, in light of the stark increase in local sexual abuse cases involving technology.[5] The paper will first explain the need for a strong legal framework to govern deepfake pornography. It will then examine the efficacy of current laws in Singapore in curbing the issue and providing recourse to victims. Finally, by looking at measures taken in the United Kingdom (“UK”), we will provide suggestions as to the direction of the law in Singapore.

Tackling deepfake pornography

Occurrences of deepfake pornography

Notably, 95% of all deepfake pornography created is non-consensual, and 90% of it features women.[6] The most common instance of non-consensual deepfake pornography is the superimposition of a celebrity’s or influencer’s likeness, published online on pornographic websites for the sexual gratification of viewers. A less common instance is revenge porn, where such material is created with the likeness of a past partner in order to humiliate them. Nonetheless, nobody is safe in this day and age: deepfake pornography can be made with anyone’s likeness, by any person seeking financial gain or sexual gratification.[7]

The need for a strong legal framework 

The rise of deepfake pornography came as technology dramatically lowered the barrier to producing hyper-realistic videos, which in the past was possible only with deep technical know-how and great effort. This development enables widespread harm, which warrants strong legal intervention.

First, the hyper-realistic nature of these videos causes extraordinary harm. Most individuals are unable to tell good-quality deepfakes apart from real videos.[8] Even when they can, the intentional mimicry of realism means that victims of deepfake pornography still feel as though they are being “digitally raped”.[9] Although preceding technology such as photo alteration could create realistic portraits of nude individuals, the harm was never as great, because unlike videos, photos were never a benchmark of authenticity.[10]

Second, the ease of making such videos enables proliferation, leading to widespread harm. In the past, hyper-realistic videos were too technically complicated for anyone other than film studios to produce.[11] With deepfake technology, however, all that is required is a computer and a collection of photos easily obtainable through social media. No robust technical know-how is needed, and the technology is hence available to anyone with a nefarious purpose.

Current legal recourse in Singapore

Although Parliament did not enact a specific provision to criminalise deepfake pornography, there are several provisions in existing legislation that may do so – for instance the Penal Code (“PC”)[12] and the Protection from Harassment Act (“POHA”)[13]. For civil recourse, copyright laws and the torts of defamation or privacy could be potential causes of action.

We will look at the efficacy of the abovementioned laws in turn, as well as any regulatory measures.

Criminal laws 

Penal Code

Section 377BE

Section 377BE PC criminalises the distribution, or the threat of distribution, of an intimate image or recording of an individual (B).[14] These intimate images or recordings include manipulated versions.[15] In this vein, s 377BE PC appears to directly punish the distributor of a deepfake pornography video upon the victim’s report.

However, the provision was introduced in the 2020 PC amendment to deal with “revenge pornography” (where sexual materials are used as retaliation or blackmail by a former partner), rather than to specifically deal with deepfake pornography.[16] Therefore, there are still situations involving deepfake pornography where the perpetrator would not be caught.

The first scenario concerns poor-quality deepfakes. Under the provision, as long as the manipulated video is “so altered that no reasonable person would believe that it depicts B”, it would not constitute an “intimate recording” and would hence not contravene the provision.[17] An example given is the editing of B’s face onto a cartoon.[18] However, a poor-quality “realistic” deepfake pornography video could also result in no reasonable person believing that it depicts B, where the victim’s face does not blend perfectly with the original video.[19] The video nonetheless humiliates B and causes significant harm,[20] yet the distributor is not caught by the provision.

The second scenario is where the distributor is not personally involved with the victim. A requirement is that the distributor must minimally have reason to believe that the distribution would likely harass or humiliate B.[21] This would be difficult for the prosecution to prove if the perpetrator merely distributed the video without knowing who the victim is, since he could argue that he had no reason to believe it would cause her humiliation, having assumed that she was simply a pornographic actress. The knowledge requirement thus reflects the act of “revenge pornography” which Parliament intended to target,[22] where the perpetrator is in a personal relationship with the victim and knows that he or she will be humiliated by their actions.

The third scenario is that the creator of the deepfake pornography is not caught unless he is also a distributor, since the provision solely targets the act of distribution. Nonetheless, s 377BD PC could deal with this issue.

Section 377BD 

As a “sister” provision to s 377BE PC, s 377BD PC criminalises the possession of deepfake pornography rather than its distribution.[23] Under this provision, the creator of the deepfake pornography could potentially be caught, since he would be in possession of the video, assuming that the victim knows of this possession and reports it.[24]

Nonetheless, the other problems of s 377BE also apply here.[25] Therefore, a distant creator, or the creator of a poor-quality deepfake, would not be caught.

POHA

Section 3

Section 3 POHA criminalises the intentional causing of harassment, alarm, or distress (“HAD”) to individuals through any threatening, abusive, or insulting behaviour.[26] The provision may thus be helpful against creators or distributors of deepfake pornography who produced or uploaded the content with the intent to cause HAD to the victim. However, this does not apply to a large number of deepfake pornography cases, since the victim’s HAD is usually a by-product of the perpetrator’s pursuit of sexual gratification, rather than the result of a direct intention to harm.[27]

Section 4

Section 4 POHA serves the same purpose as s 3. The difference is that it does not require the element of intent, but the test for the victim’s harm is objective rather than subjective.[28] This seems equally unhelpful, as the objective standard of harm is unclear and thus unreliable in a deepfake pornography context. An objective viewpoint could determine that a poor-quality deepfake pornography video does not pass the threshold for causing HAD, even though the victim may well have suffered HAD because the video still mimics realism. This mirrors the problem under s 377BE PC, where poor-quality deepfakes are not caught.

Section 7

Section 7 POHA criminalises the causation of HAD to individuals by engaging in a course of conduct involving acts associated with stalking. An act associated with stalking includes making (or attempting to make) communications relating to the victim, or purporting to originate from the victim.[29]

Although the provision merely requires knowledge of a likelihood of harm,[30] and the harm caused can be assessed subjectively,[31] it does require some form of repeated conduct. Therefore, only very niche scenarios of deepfake pornography distribution are captured here, such as communicating the video multiple times to people related to the victim, or impersonating the victim on various channels and uploading the video as though it originated from the victim.[32]

Section 11(1) – Allowing civil claims

It should be noted that POHA provides an avenue for civil recourse through s 11(1). Therefore, even with its gaps, POHA would be very useful if the victim falls within the specific deepfake pornography matrix in ss 3, 4 and 7. Of further note is Division 2 of POHA, which deals with false statements of fact. Specifically, POHA provides several avenues of recourse for private individuals where a false statement of fact (including moving images) was published about them. As these provisions are not the focus of this article, however, they will not be discussed in depth here.

Films Act

Sections 29 and 30

Sections 29 and 30 of the Films Act criminalise the making, distribution, and possession of an obscene film. With no additional mens rea or causation elements (other than the perpetrator having reasonable cause to believe the film is obscene),[33] it seems that all deepfake pornography scenarios can be captured under these provisions.

However, it was never the provisions’ intent to punish a perpetrator for purposefully creating or distributing a deepfake pornography video which would greatly hurt the depicted individual. Rather, their goal is merely to regulate the circulation of mainstream pornography by punishing its makers and holders. The punishment under the provisions is therefore usually not severe, with light fines being imposed.[34] Although the court could theoretically sentence a perpetrator of deepfake pornography to the maximum two years’ imprisonment for dealing in obscene films, or six months for possessing them, this is unlikely, since the provisions merely target “obscene” films, and a typical deepfake pornography video would not be regarded as more “obscene” than ordinary pornography so as to warrant a heavier punishment.

With the punishment being incommensurate to the harm caused to a victim of deepfake pornography, the provision would not only fail to deter people from making deepfake pornography, but it would also not provide the victim with justice.

Civil action

Defamation 

In Singapore, one could sue for defamation if someone publishes a defamatory statement (with reference made to the plaintiff) to a third party.[35] On top of potentially being awarded damages, one could seek an interlocutory injunction in strong cases, which prohibits the perpetrator from continuing to publish the allegedly defamatory material pending the Court’s final determination at trial.

Defamation may be useful for deepfake pornography victims since the distributor would usually be liable, as uploading such videos would “prejudice the victim in his profession or trade”, even if it was fake.[36] Not only would victims lose their jobs, but they may also have difficulty in finding new ones.[37]

Nonetheless, a defamation claim here can be severely hampered by the “justification” defence.[38] If the perpetrator places a watermark on the video stating that the video is doctored,[39] or includes a title that says “Not <VICTIM’S NAME>”, he can essentially prove that the substance of the offending material is true,[40] and hence would not be liable for defamation. This is a common practice to avoid defamation liability.

Copyright law 

The Copyright Act in Singapore protects the expression of ideas – which includes films. Upon alleged infringement, the copyright holder could first initiate take-down notices against network service providers to withdraw or disable access to the infringing materials,[41] while seeking an injunction and damages.[42]

Unfortunately, this may not be very viable for the deepfake pornography victim, as they do not appear to hold any copyright in the produced video. The “performance” in the deepfake would be owned by the pornographic actors in the original video,[43] while the victim has no copyright claim to her “likeness”.[44] As a result, the only solution may be for the pornographic actors to take down the videos purely out of goodwill.[45] Even then, the claim may be vulnerable to the fair dealing defence, where the perpetrator would argue that the video has been effectively transformed.[46]

As such, the Intellectual Property Office of Singapore has also acknowledged that intellectual property laws are insufficient to tackle deepfakes, suggesting that there should be a multi-disciplinary approach involving criminal and tort law.[47]

Tort of privacy

The tort of privacy probably comes closest to enabling the victim to seek proper civil redress, due to the fit of “false light publicity” and the “misappropriation of likeness”.

The tort of false light requires the publishing of information about the victim which places him or her in a misleading light.[48] The additional criterion is that the publicity must be “highly offensive to a reasonable person”, and the publisher must have disregarded this offensiveness.[49] This is highly applicable in a deepfake pornography scenario since, unlike in defamation, the falsity of the video does not defeat the claim.[50]

On the other hand, misappropriation of likeness involves the publisher appropriating the victim’s likeness to their advantage, commercially or otherwise, without the victim’s consent and with resulting injury to the victim. It thus covers the shortcoming of copyright law, as the victim has a direct claim arising from his likeness, rather than having to rely on copyright material he does not own.

Nonetheless, the tort of privacy is recognised only in jurisdictions such as the United States, Canada, and New Zealand. As of today, it remains unrecognised in the UK and Singapore, with no precedent cases. For torts that have not been judicially recognised in Singapore, it has been said that the ideal route is to allow Parliament to delineate and legislate the law.[51] It may thus be futile for the victim to pursue these causes of action in court.

Regulatory measures 

Like many other jurisdictions, Singapore does not allow pornography to be on mainstream online platforms. Internet Content Providers (“ICPs”) are regulated through the Broadcasting (Class License) Notification, where they have to ensure that their content is in compliance with the Internet Code of Practice.[52]

In addition, over 100 pornographic websites are banned, with Internet Service Providers (“ISPs”) required to restrict public access to websites containing offensive or harmful content in furtherance of the same guidelines.[53]

Beyond these general measures regulating pornography, the Infocomm Media Development Authority (“IMDA”) does not take further action to specifically sift out deepfake pornography and remove it from the online space. Deepfake pornography can therefore still reside on pornographic websites, banned or unbanned.

Specific developments in the UK

In the UK, the Law Commission has reviewed existing criminal laws relating to the taking, making, and sharing of intimate images without consent, and suggested four new offences in response.[54] In the context of deepfake pornography, this first includes a “base” offence which prohibits the sharing of the material without consent (and without reasonable belief in consent).[55] Two graver offences follow, which prohibit the commission of the base offence with the intention to humiliate, alarm, or distress the victim, or for the purpose of obtaining sexual gratification for oneself or another.[56] Lastly, one is prohibited from threatening to share the material, either intending to cause the victim to fear that it will be shared, or being reckless as to whether such fear is caused.[57]

Notably, the criminalisation of the mere creation of deepfake pornography was purposefully left out by the Commission, as there was insufficient evidence of the prevalence of simple making and of the harm it causes.[58]

As to what constitutes deepfake pornography, the requirement is simply videos (or images) which have been digitally altered to appear intimate, with no threshold for realism.[59]

Moving forward in Singapore

Suggestions for regulatory measures

First, as a preliminary regulatory measure, IMDA could order ISPs to expand their pornographic website restrictions to cover websites which specifically promulgate deepfake pornography.[60] This would prevent the public from easily accessing such content and curb distribution. Nonetheless, this solution is imperfect, given the prevalence of bypass measures such as VPNs.

A second regulatory measure would involve the development of sophisticated deepfake-detection AI,[61] which IMDA could run on the various pornographic sites to sieve out deepfake pornography for immediate takedown upon upload. However, this would be challenging given the sheer number of pornographic sites online, and it is not a given that site owners will cooperate, since most of them do not reside within Singapore’s jurisdiction.
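For readers curious about what such an automated screening measure might look like in practice, the sketch below illustrates the triage logic only. The detection model itself (`deepfake_score`), the review threshold, and all names are hypothetical placeholders, not any actual IMDA system or policy; a real deployment would substitute a trained classifier and a threshold calibrated against its error rates.

```python
# Illustrative sketch of a regulator-run screening pipeline that flags
# suspected deepfake uploads for human review. Everything here is a
# hypothetical placeholder, not a description of any real system.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Upload:
    """A video newly uploaded to a monitored site (metadata only)."""
    url: str


def deepfake_score(upload: Upload) -> float:
    """Stub for a trained detection model returning P(video is a deepfake).

    A real detector would analyse blending artefacts, facial landmarks,
    and temporal inconsistencies across frames; here we simply return 0.
    """
    return 0.0


# Assumed policy threshold for escalating to a human reviewer.
REVIEW_THRESHOLD = 0.8


def triage(uploads: List[Upload],
           score_fn: Callable[[Upload], float] = deepfake_score) -> List[Upload]:
    """Return the subset of uploads whose score warrants human review."""
    return [u for u in uploads if score_fn(u) >= REVIEW_THRESHOLD]
```

The design keeps the model behind a plain scoring function, so the legal question (what score justifies a takedown or referral) stays separate from the technical question of how the score is produced.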

Suggestions for liability measures

With regard to civil claims, one suggestion would be for Parliament to legislatively introduce a tort of privacy, similar to how it enacted the tort of harassment. Nonetheless, this may be an unlikely outcome. On top of the significant time it would take to introduce such a tort legislatively,[62] the tort of privacy also overlaps greatly with other civil measures, and there may hence be insufficient grounds for such enactment.

Regardless, the lack of civil recourse may not be a big problem, since such remedies are often costly and time-consuming, and thus not accessible to most victims.[63] A robust criminalisation framework could thus perhaps be more efficient in tackling deepfake pornography.

At the moment, the best tools that Singapore has are ss 377BE and 377BD PC, which directly target the non-consensual distribution and possession of manipulated pornographic videos. However, as mentioned, the two provisions do not adequately cover the range of scenarios relevant to deepfake pornography.

As such, it is submitted that ss 377BE and 377BD PC should be expanded and altered in line with the UK’s proposed measures. First, the definition of an altered intimate image should be modified. Instead of requiring that a reasonable person believe that the video depicts the victim (which excludes poor-quality deepfakes and deepfakes with watermarks clarifying that the video is fake), the focus should be on the mere intention to depict an actual person (the victim), as evidenced by the video’s attempt to mimic realism. This would capture the abovementioned exclusions while maintaining the original exclusion of cartoons.

Second, the other parts of s 377BE PC should be kept as they are, since they are in line with the UK’s first grave offence and the threat offence. Notably, the mens rea threshold is much lower in Singapore for both offences, which is consistent with our strong stance against sex crimes.

Third, we should import the UK’s second grave offence, which criminalises the distribution of deepfake pornography for the purpose of either the perpetrator’s or someone else’s sexual gratification. This would capture the general deepfake pornography matrix, rather than requiring knowledge of harm (which slants towards revenge pornography). The only issue is that sexual gratification may be difficult to prove. Perhaps distribution on a pornographic website or through a sinister chat group would meet this element, since it would obviously be for the purpose of someone else’s sexual gratification.

Residual problems 

Despite the above recommendations, there may be several insurmountable issues in pursuing legal recourse.

First, it remains extremely difficult even to begin identifying the creator and distributor of the deepfake video. The metadata relevant to ascertaining a deepfake’s provenance is usually insufficient to identify the creator, while the distributor would likely have rerouted his IP address using a virtual private network.[64] This leaves the victim with no target to take action against, and perpetrators are not deterred, knowing that they can hide behind their anonymity.

Second, even if there is a perpetrator to take action against, he is more often than not located outside Singapore, which causes him to be beyond the effective reach of our legal processes.[65]

Third, measuring the victim’s harm remains a challenge. It is uncertain whether the harm begins upon the creation of the video, upon its distribution, or upon the victim’s own contact with it. This may create problems both in claiming damages in civil cases and in justifying criminalisation.

Conclusion

At present, the legal measures in Singapore are insufficient to tackle the full spectrum of deepfake pornography cases, which calls for an improvement of the legal regime here. Nevertheless, the law alone cannot be a perfect measure. Measures outside the law, such as code, should also be utilised to regulate behaviour in relation to deepfake pornography.[66]

This piece was published as part of LawTech.Asia’s collaboration with the LAW4032 Law and Technology module of the Singapore Management University’s Yong Pung How School of Law. The views articulated herein belong solely to the original author, and should not be attributed to LawTech.Asia or any other entity.


[1] Justin Sherman, “Completely horrifying, dehumanizing, degrading: One woman’s fight against deepfake porn.” CBS News (14 October 2021) <https://www.cbsnews.com/news/deepfake-porn-woman-fights-online-abuse-cbsn-originals/>.

[2] Mika Westerlund, “The emergence of deepfake technology: A review”, Technology Innovation Management Review 2019; 9(11): 39-52. See 40-41 for an explanation of deepfake technology.

[3] Ibid.

[4] Tamsin Selbie, “Deepfake pornography could become an ‘epidemic’, expert warns” BBC News (27 May 2021) <https://www.bbc.com/news/uk-scotland-57254636>.

[5] Goh Yan Han, “New website set up to help victims amid 36% increase in sexual abuse involving tech: Aware” The Straits Times (14 July 2021) <https://www.straitstimes.com/singapore/with-36-per-cent-increase-in-sexual-abuse-involving-technology-new-website-can-help>.

[6] Karen Hao, “Deepfake porn is ruining women’s lives. Now the law may finally ban it.” MIT Technology Review (12 February 2021)<https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/>.

[7] Drew Harwell, “Fake-porn videos are being weaponized to harass and humiliate women: Everybody is a potential target” The Washington Post (30 December 2018) <https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/>.

[8] Pavel Korshunov, “Deepfake detection: humans vs machines” arXiv:2009.03155 (2020).

[9] Sophie Maddocks, “From non-consensual pornography to image-based sexual abuse: Charting the course of a problem with many names.” Australian Feminist Studies 2018; 33.97; 345-361.

[10] Drew, supra n 7.

[11] Ibid.

[12] Penal Code (Cap 224, 2008 Rev Ed).

[13] Protection from Harassment Act (Cap 256A, 2015 Rev Ed).

[14] For the definition of intimate images, see the PC at s 377BE(5)(a).

[15] Id, at s 377BE(5)(b).

[16] Singapore Parliamentary Debates, Official Report (6 May 2019) vol 94 (‘Second Reading Speech on the Criminal Law Reform Bill’) (K Shanmugam, Minister for Home Affairs).

[17] PC, supra n 15.

[18] Id, at s 377BE(5)(b) Illustration B.

[19] Another example is a good quality video with the addition of a watermark claiming that the video is doctored.

[20] Justin, supra n 1. 

[21] PC, supra n 12, at s 377BE(1)(c) and s 377BE(2)(c).

[22] Parliamentary Debates, supra n 16.

[23] PC, supra n 12, at s 377BD.

[24] Also note that possession-based criminalisation serves as poor deterrence due to the understanding that our digital possessions are not routinely surveyed.

[25] See paras 12 and 13 above.

[26] POHA, supra n 13, at s 3.

[27] Drew, supra n 7. 

[28] POHA, supra n 13, at s 4. 

[29] Id, at S 7(3).

[30] Id, at S 7(2)(c)(ii).

[31] Id, at S 7(2)(b).

[32] PP v Lim Teck Guan [2021] SGMC 2 at [2]. The accused was found guilty of s 7(1) for impersonating the victim through ‘Wechat’ and ‘Instagram’ by uploading compromising pictures of the victim. 

[33] Films Act (Cap 107, 1998 Rev Ed) ss 29 and 30.

[34] See PP v. Chandran s/o Natesan [2013] SGDC 33; PP v Chong Hou En [2015] 3 SLR 222.

[35] Golden Season Pte Ltd v Kairos Singapore Holdings Pte Ltd [2015] 2 SLR 751 at [35].

[36] Tharpe v Lawidjaja, 8 F. Supp. 3d 743 (W.D. Va. 2014) at 786. The case involved a defendant distributing photographs which were altered to portray the plaintiff acting in a sexually explicit manner.

[37] Danielle Citron, “Sexual Privacy” (2019) 128 Yale LJ 1870 at 1927. 

[38] Review Publishing Co Ltd v Lee Hsien Loong [2010] 1 SLR 52.

[39] Amelia O’Halloran, The Technical, Legal, and Ethical Landscape of Deepfake Pornography (2021) (Doctoral dissertation, Brown University) at p 29.

[40] Ibid.

[41] Copyright Act (“CA”) (Cap 63, 2006 Rev Ed) s 193DB.

[42] Id, at s 119(2)(a) and 119(2)(b).

[43] Amelia, supra n 39, at p 31.

[44] Kelsey Farish, “Do deepfakes pose a golden opportunity? Considering whether English law should adopt California’s publicity right in the age of the deepfake.” Journal of Intellectual Property Law & Practice 15.1 (2020) 40-48. Although copyright images might have been used to generate this “likeness”, once thousands of them have been blended by the algorithm, ascertaining the relevant plaintiffs may be impossible.

[45] Amelia, supra n 39, at p 31.

[46] CA, supra n 41, at s 35(2).

[47] IPOS Response to WIPO’s Request to participate in the Public Consultation on Artificial Intelligence and Intellectual Property Policy < https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/call_for_comments/pdf/ms_singapore.pdf>.

[48] Amelia, supra n 39.

[49] Ibid.

[50] Id, at p 30.

[51] AXA Insurance Singapore Pte Ltd v Chandran s/o Natesan [2013] 4 SLR 545 (“AXA”) at [8]. Such an approach was taken in the case with regards to the tort of harassment. 

[52] Infocomm Media Development Authority website <https://www.imda.gov.sg/regulations-and-licensing-listing/content-standards-and-classification/standards-and-classification/internet>.

[53] Ibid.

[54] United Kingdom, Law Commission, Consultation paper on Intimate Image Abuse (Paper No. 253, 2021) at para 14.7.

[55] Ibid.

[56] Ibid.

[57] Ibid.

[58] Id, at para 7.106.

[59] Id, at para 14.24.

[60] Johannes Tammekänd, “Deepfakes 2020: The Tipping Point”, The Sentinel (October 2020) <https://thesentinel.ai/media/Deepfakes%202020:%20The%20Tipping%20Point,%20Sentinel.pdf> at p 82.

[61] Rei Kurohi, “AI Singapore launches $700k competition to combat deepfakes” The Straits Times (15 July 2021) <https://www.straitstimes.com/tech/ai-singapore-launches-700k-competition-to-combat-deepfakes>.

[62] Note the 13-year gap between Malcomson Nicholas Hugh Bertram v Mehta Naresh Kumar [2001] 3 SLR(R) and the enactment of the tort of harassment.

[63] Law Commission, supra n 54, at para 1.14.

[64] Bobby Chesney and Danielle Citron, “Deep fakes: A looming challenge for privacy, democracy, and national security.” (2019) 107 Calif. L. Rev. 1759 at 1792.

[65] Ibid.

[66] Lawrence Lessig, Code: Version 2.0 (Basic Books, 2006). Note the ‘Four modalities’ at p 135.