Reading time: 14 minutes

Written by Lim Hong Wen, Amelia | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Covering a broad range of technology law and policy topics, the collaboration aims to encourage law students to think about where the law is and what it should be vis-à-vis technology.

This piece, written by Lim Hong Wen, Amelia, seeks to analyse three key concerns that may arise from the use of digital surveillance: privacy, harassment, and algorithmic bias. The paper then examines how the four modalities expounded by Lawrence Lessig (the law, architecture, social norms, and the market) come into play in regulating the use of digital surveillance. Part II first explores developments in the use of digital surveillance by the state, employers, and individuals. Digital surveillance has transformed over the years, and current laws may be insufficient to protect individuals against certain unwanted forms of it. Part III identifies the inadequacies of current laws in addressing the key concerns identified earlier (privacy, harassment, and algorithmic bias). Given the lack of legal recourse available, Part IV analyses how the use or misuse of digital surveillance can be regulated by the remaining three modalities (architecture, social norms, and the market).

Introduction

With the ubiquity of digital technology and Internet usage, data is collected daily at an unprecedented rate. From our online viewing habits to our physical footprints in real space, any and every action we take is surveilled and tracked.[1] More than a decade ago, the UK’s Information Commissioner’s Office (“ICO”) had already made the following observations in a report on the surveillance society, a fitting description of present-day society:[2]

“We live in a surveillance society. It is pointless to talk about surveillance society in the future tense. In all the rich countries of the world everyday life is suffused with surveillance encounters, not merely from dawn to dusk but 24/7. Some encounters obtrude into the routine […] [but] the majority are now just part of the fabric of daily life.”

UK INFORMATION COMMISSIONER’S OFFICE

Since then, digital surveillance has permeated every aspect of our lives, including our workplaces and homes. More recently, there has been greater awareness of the scope and scale of digital surveillance.[3] Thus, this paper focuses on three key concerns that individuals may face: (1) Privacy; (2) Harassment; and (3) Algorithmic bias.

Evolving Trends of Digital Surveillance

Use of digital surveillance technology by states

Digital surveillance has evolved to become an essential tool in various states’ governing strategies.[4] Closed-circuit television (“CCTV”) surveillance is now the most prevalent crime prevention measure used globally.[5] In recent years, cities in China have emerged as some of the most surveilled in the world.[6] Even so, China is not alone: the global number of surveillance cameras in use was set to exceed 1 billion by the end of 2021.[7] In Singapore, at least 90,000 CCTVs have been installed in public areas. By 2030, the number of cameras deployed islandwide will increase to at least 200,000.[8]

As video surveillance technology becomes increasingly sophisticated and ubiquitous, a growing number of states have begun to leverage Artificial Intelligence (“AI”) technology to track and surveil citizens for a wide range of purposes.[9] For example, the latest CCTVs are equipped with biometric technology, such as facial recognition technology (“FRT”). FRT is software that takes an image or video captured by a camera and matches it against other images in a dataset.[10] Such biometric technology has been employed to identify suspected rioters involved in the United States (“US”) Capitol attack.[11] Likewise, in China, FRT has been used by law enforcement, though its use has also extended to other areas such as payment services.[12]

However, the use of FRT in law enforcement has been plagued by public backlash and dissent. FRT presents the risk of producing inaccurate and biased results based on sensitive personal attributes such as gender or race.[13] For example, studies have shown that FRT is less accurate at identifying African Americans.[14] Further, the use of FRT may result in imperfect matches being made. When such erroneous algorithmic analysis influences law enforcement decisions, these disparities can lead to varying outcomes for different communities.[15] Thus, absent regulation and oversight over the use of FRT, its flaws may harm the social fabric of society.
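To make the matching mechanism concrete, the sketch below shows, in highly simplified form, how an FRT match decision is typically described: a captured face is reduced to a numerical “embedding” and compared against a gallery of known faces, with a match declared only if the similarity clears a threshold. This is a minimal illustration under stated assumptions; the embedding step, threshold value, and gallery are hypothetical and do not reflect any specific vendor’s system.

```python
# Minimal, hypothetical sketch of an FRT match decision. Real systems derive
# embeddings from trained neural networks; the embeddings, threshold, and
# gallery below are invented purely for illustration.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, gallery, threshold=0.8):
    """Return the most similar gallery identity, but only if it clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    # The threshold embodies a trade-off: set too low, innocent people are
    # "matched" (false positives); set too high, genuine matches are missed.
    # If the underlying model is less accurate for some demographic groups,
    # these error rates will also differ across groups.
    return best_id if best_score >= threshold else None

# Hypothetical usage with random stand-in embeddings.
rng = np.random.default_rng(0)
gallery = {"person_A": rng.normal(size=128), "person_B": rng.normal(size=128)}
probe = gallery["person_A"] + rng.normal(scale=0.5, size=128)  # noisy capture
print(match_face(probe, gallery))  # may return "person_A", None, or the wrong identity
```

The point of the sketch is that a “match” is ultimately a probabilistic judgment about similarity, which is why the imperfect matches and demographic disparities described above can arise.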

Prevalence of workplace surveillance 

Digital surveillance technology has also crept into our workplaces. The COVID-19 pandemic has propelled the use of surveillance and monitoring systems to track employees’ productivity as the world shifted to remote work practices. To ensure that employees are working during office hours, companies have turned to employee monitoring software, such as Pragli and Hubstaff.[16] Programmes such as these track every keystroke, snap pictures using the computer’s camera, and rank employees’ productivity levels.[17]
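As a rough illustration of how such rankings might be produced, the hypothetical sketch below aggregates invented activity logs into a crude “productivity” score. The metric, weights, and data are assumptions made purely for illustration and do not reflect how Pragli, Hubstaff, or any other product actually works.

```python
# Hypothetical sketch of turning activity logs into a "productivity" ranking.
# The inputs and weights are invented; real monitoring products use their own
# proprietary signals (keystrokes, screenshots, idle time, and so on).
from collections import defaultdict

activity_log = [  # (employee, keystrokes, active_minutes) per tracked hour
    ("alice", 1200, 52), ("bob", 300, 20), ("alice", 900, 48), ("bob", 700, 41),
]

scores = defaultdict(float)
for employee, keystrokes, active_minutes in activity_log:
    # Crude proxy: weight typing volume and active time equally.
    scores[employee] += 0.5 * (keystrokes / 1000) + 0.5 * (active_minutes / 60)

ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # e.g. [('alice', 1.88...), ('bob', 1.00...)]
```

Reducing a working day to such a score is, of course, a blunt proxy for actual work, which is part of why the practice attracts scrutiny.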

That said, workplace surveillance software is not a new phenomenon. Prior to the COVID-19 pandemic, Barclays’ London office installed on employee work desks a device called ‘OccupEye’,[18] which tracked the duration employees spent at their desks.[19] With the shift towards remote and online working, employee surveillance today is performed on work computers through discreet software programmes.

The growth of private surveillance 

Our homes are also not spared from the pervasive use of digital surveillance. With more devices being connected to the Internet, individuals are being surveilled through smartphones and even home appliances.[20] These Internet-connected devices collect users’ personal information, including shopping patterns and lifestyle preferences. The information is then used for commercial purposes, such as targeted advertising.[21]

Another form of private surveillance involves the use of ‘spyware’, which is now easily available on the market. Spyware, in the form of “spy” apps, can be installed on an individual’s electronic devices to remotely collect private messages, location information, and web browsing data.[22] Some apps, such as Flexispy, even allow users to access the device’s camera and microphone to listen in on unsuspecting victims.[23]

In recent years, surveillance has evolved from electronic devices such as CCTVs into software – something more invisible and inconspicuous. This brings into question whether the current state of our laws is sufficient to protect individuals.

Legal Issues Arising from Digital Surveillance

No Right of Privacy in Singapore

The proliferation of digital surveillance has given rise to several legal issues, one of which is individuals’ privacy. In a paper titled ‘Privacy and the Internet’, Suzan Balz discussed three aspects of privacy that digital technology threatens, in particular the right to communicate without surveillance.[24] Many countries have recognised a constitutional right of privacy, and as a result, the violation of privacy rights by the state is limited to specific circumstances, such as national security interests.[25] For example, the European Convention on Human Rights (“ECHR”) prescribes a right of privacy,[26] and private correspondence should not be interfered with unless it is for public order or crime prevention purposes.[27]

As for workplace surveillance, countries such as the US have laws that protect individuals’ privacy while providing exceptions that allow employers to surveil their employees, such as the right to manage one’s business.[28] In the European Union (“EU”), the right of privacy under the ECHR also applies to employers.[29] Hence, employers must refrain from intruding into individuals’ privacy and intercepting their private correspondence.

While Balz focuses on the regulations imposed by certain countries, her article omits countries that do not recognise a fundamental right to privacy. Unlike the EU, Singapore does not recognise a fundamental right of privacy for individuals.[30] Although Singapore has enacted the Personal Data Protection Act (“PDPA”), the PDPA only recognises a right to data protection.[31] This difference is crucial, as the lack of a fundamental right of privacy may shape public discourse on digital surveillance. Moreover, the lack of legal sanctions against certain unwanted surveillance means that individuals are potentially left without recourse in the event of serious privacy intrusions. Singapore has been slow in demarcating acceptable forms of surveillance from unreasonable and oppressive ones. Still, there are other forms of legal redress available to individuals.

Limited Recourse Under the Protection from Harassment Act

The Protection from Harassment Act (“POHA”) governs conduct that causes harassment, alarm, or distress, regardless of whether the act was committed in cyberspace or in real space.[32] Although the phrase “harassment, alarm or distress” is not defined in the POHA, the Singapore courts have suggested a commonsensical meaning of the term “harassment”.[33] In Hayes v Willoughby, the UK Supreme Court held that “harassment” is a “persistent and deliberate course of unreasonable and oppressive conduct […] which is calculated to and does cause that person alarm, fear or distress”.[34] Consequently, surveillance of an individual must be repetitive over a prolonged period of time, and cause that individual distress, for it to constitute “harassment”.[35] An example that comes to mind is the use of spyware in workplaces and homes, which would arguably cause an individual to feel harassed and distressed.

Yet, recourse under the POHA is limited to very specific acts of surveillance. Firstly, the POHA applies only to individuals and companies, to the exclusion of public agencies.[36] Therefore, individuals have no recourse against the state even if they feel harassed, alarmed, or distressed by, for example, the daunting number of CCTVs in their neighbourhoods.[37] If every CCTV in Singapore were equipped with WiFi sniffers, thermal-imaging cameras, or FRT,[38] individuals would likely feel disturbed by the omnipresence of surveillance at every street corner. Arguably, the exclusion of public agencies from the POHA is justifiable, as the state has an interest in detecting and preventing crime. However, this brings the issue of privacy in public spaces to the forefront. Certain jurisdictions, such as Canada, have recognised a reasonable expectation of privacy in public or semi-public spaces.[39] It is unclear, however, whether such a reasonable expectation of privacy exists in Singapore, as Singapore does not at present recognise a general tort of privacy.[40]

Secondly, an alleged perpetrator could, as a defence, prove that the surveillance was reasonable.[41] Hence, it is arguable that employers may reasonably surveil their employees, though what constitutes “reasonable” depends on the circumstances.[42] Even so, employee monitoring will probably not fall within the purview of the POHA. The underlying rationale for the enactment of the POHA was to criminalise acts such as online harassment, cyberstalking, and cyberbullying.[43] Therefore, workplace surveillance to ensure employees’ productivity would not be conduct that Parliament intended the POHA to govern. If, however, an employer names and shames employees with low productivity rankings, as revealed by the surveillance software, it is questionable whether such conduct would fail the threshold of “reasonableness” and constitute “harassment”.

Lack of Legal Recourse for Algorithmic Bias

The inadequacies of current legislation in regulating unwanted forms of surveillance are amplified in situations where the use of technology, such as FRT, leads to discriminatory practices because of inaccurate and biased algorithmic analysis. For example, in the US, Robert Julian-Borchak Williams was arrested on the basis of a faulty facial recognition match.[44] Fortunately, Williams was released with no serious repercussions. Had this occurred in Singapore, and had Williams been prosecuted on the basis of the erroneous algorithmic match, current laws might be unable to provide sufficient remedies for him.

No Recourse Against the State 

In Singapore, a court may grant an order for compensation if the prosecution of an acquitted person was frivolous or vexatious.[45] In Parti Liyani v Public Prosecutor,[46] the High Court clarified when a prosecution is “frivolous or vexatious”. If, for example, a prosecution were based solely on an inaccurate FRT match, without any other substantial evidence to prove that the accused had indeed committed the crime, the prosecution would likely be “frivolous or vexatious”. However, the threshold for such a claim is high. Given the public interest element at play in criminal litigation, it is unlikely that accused persons have any available recourse against the state’s use of digital surveillance.

Lack of Regulation Against Discrimination in Digital Surveillance

Certain jurisdictions have opted to enact anti-discrimination laws. For example, EU Member States are required to ensure that there is no racial or ethnic discrimination.[47] The UK has also enacted the Equality Act in accordance with the EU Directives.[48]

As for algorithmic bias in FRT, certain cities have gone so far as to ban the use of FRT by government agencies.[49]

In Singapore, the current legal framework on racial and ethnic discrimination is scattered across the Maintenance of Religious Harmony Act[50] and the Penal Code.[51] Acts that promote hostility and enmity between different religious and racial groups are criminalised. However, in situations where the use of data-driven decision-making software, such as FRT or employee monitoring software, leads to the indirect discrimination of certain racial or religious groups,[52] it is unclear whether the user or the developer of such software will be criminally liable for racial or religious offences.

The discussion of the key areas of privacy, harassment, and algorithmic bias shows that there are inadequacies in the law’s protection of individuals from unwanted surveillance. The PDPA, the POHA, and the Penal Code prohibit wrongs committed against individuals, but their effectiveness is limited by the language of their provisions. How, then, should digital surveillance be regulated?

Other Modalities May Come into Play in Regulating Digital Surveillance 

Lessig explained that behaviours in cyberspace can be regulated by four modalities – “the law, social norms, the market, and architecture”.[53] When the law is inadequate in regulating online behaviour, the other three modalities may play a greater role and influence the use of digital surveillance.

Cyberspace architecture

Lessig argued that the code or architecture sets the features of cyberspace, and “they constrain some behaviour by making other behaviour possible or impossible”.[54] However, as the “scope and pervasiveness of digital technologies [opened] up new areas of social vulnerability”, the architecture in fact affords easier invasion of private spaces.[55] For example, employee monitoring software and spyware can effortlessly intrude into individuals’ private spaces, even though such software is limited by its code. In other words, the architecture can limit the extent of surveillance of an individual but, at the same time, enable the user to infiltrate other individuals’ private spaces. This demonstrates the duality of technology.

At times, the architecture may also constrain consumers from controlling what data is being collected, used, and disclosed. For example, the algorithms in consumer products such as Google Home collect individuals’ information by default, and can accumulate individuals’ behavioural patterns to create digital identities.[56] Individual users lack the means and capability to break away from such surveillance, which means that they have almost no say in how their data is used.[57]

To this end, Singapore has introduced several governance frameworks to guide the use of digital technology by corporations and individuals. For example, the Personal Data Protection Commission has published guidelines on the appropriate use of CCTVs and drones.[58] The Model AI Governance Framework also informs organisations on how AI can be implemented responsibly, including by assessing the risks of bias in data.[59] Here, soft law complements the architecture of cyberspace in regulating behaviour and minimising the potential misuse of surveillance software.

Markets

When it comes to unwanted forms of surveillance, financial barriers have only a small deterrent effect in minimising the misuse of digital surveillance. Spyware, initially deployed as a tool of espionage by nation-states, has now been commoditised.[60] CCTVs and spy cameras can be bought at low cost from electronics malls or online sites.[61] Furthermore, a large number of “spy” apps, such as mSpy, can either be purchased on the Internet directly from vendors or installed from Google’s Play Store or Apple’s App Store.[62]

The market may also, in fact, incentivise and encourage the use of digital surveillance by the state and corporations. With vast amounts of data being collected every day, data is now commoditised by corporations seeking to understand consumers’ needs and desires.[63] For example, automakers and social media platforms are increasingly keen to profit from the sale of personal data generated by their users.[64] Thus, market forces play a limited role in curbing the disproportionate and abusive use of digital surveillance.

Social Norms 

Social norms are normative constraints that “members of a community impose on each other”, and “deviation from which makes [one] socially abnormal”.[65] The widespread use of digital surveillance by the state and employers has led to growing distrust within societies. In 2013 and 2014, the “Stop Watching Us” and “The Day We Fight Back” campaigns were organised in the US as part of a movement to curtail online surveillance.[66] These came after former US National Security Agency (“NSA”) contractor Edward Snowden leaked classified documents detailing the scope of surveillance conducted by the US government. Shortly after these two campaigns, the USA Freedom Act was enacted to rein in the mass collection of Americans’ Internet data by the NSA.[67]

On the issue of algorithmic bias, social norms may also be effective in mitigating racial or ethnic bias in AI. John Villasenor explained that racial bias can be built into datasets and, subsequently, into data-driven decision-making software.[68] For example, African Americans are disproportionately targeted in policing, and arrest record statistics are therefore shaped by race. This could affect the sentencing recommendations made by an AI system that relies on prior arrests as an input. Following the Black Lives Matter movement, which saw protests in various countries, US Democrats proposed sweeping legislation in 2020 to combat police misconduct and racial bias in policing.[69]
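Villasenor’s point can be illustrated with a deliberately simple sketch: a toy “risk score” that weights prior arrests will mechanically reproduce whatever bias is baked into the arrest data, even though the formula itself says nothing about race. The weights, inputs, and scoring rule below are invented for illustration and do not represent any real sentencing or policing tool.

```python
# Toy illustration (not any real sentencing or policing tool): a risk score
# that weights prior arrests inherits whatever bias is embedded in arrest records.
def risk_score(prior_arrests: int, age: int) -> float:
    # Hypothetical weights chosen only for illustration.
    return 0.6 * prior_arrests + 0.4 * max(0, 30 - age) / 30

# Two people with identical underlying conduct: one lives in a heavily policed
# neighbourhood and therefore has more *recorded* arrests. The scores diverge
# because the bias enters through the data, not through any explicit rule.
print(risk_score(prior_arrests=3, age=25))  # ~1.87 (heavily policed area)
print(risk_score(prior_arrests=1, age=25))  # ~0.67 (same conduct, less policing)
```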

As the different social movements above show, social norms can influence and compel changes to the law that protect individuals and, in turn, regulate and prohibit unwanted behaviours.

Conclusion

This paper has reviewed the developments and legal issues associated with digital surveillance in the key areas of privacy, harassment, and algorithmic bias. Digital surveillance is a significant technological development, not least because it is often inconspicuous and goes unnoticed. This poses new challenges for the protection of individuals against the misuse of digital surveillance. Absent constitutional protection or legal recourse for invasions of privacy, harassment, or racial discrimination, the use of digital surveillance may remain unfettered, with its technological capabilities potentially bypassing parliamentary scrutiny. Current legislation is also at risk of becoming obsolete. Thus, there is a need to consider the adequacy of the existing legal framework and oversight mechanisms before new technologies are implemented. Otherwise, the remaining three modalities may come into play to regulate the use of digital surveillance.

This piece was published as part of LawTech.Asia’s collaboration with the LAW4032 Law and Technology module of the Singapore Management University’s Yong Pung How School of Law. The views articulated herein belong solely to the original author, and should not be attributed to LawTech.Asia or any other entity.


[1] Andrew Ferguson, The Rise of Big Data Policing (NYU Press, 2017), at page 7.  

[2] Murakami Wood, D. and Ball, K., “A Report on the Surveillance Society”, ResearchGate (2006) <https://www.researchgate.net/publication/241917099_A_Report_on_the_Surveillance_Society> (accessed 16 November 2021).

[3] Tredinnick, L. and Laybats, C., “Workplace surveillance” Business Information Review 2019; 36(2): 50-52.

[4] Feldstein, S., “The Global Expansion of AI Surveillance”, Carnegie Endowment for International Peace (2019) <https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847> (accessed 16 November 2021).

[5] Piza, E., Welsh, B., Farrington, D. and Thomas, A., “CCTV surveillance for crime prevention” Criminology & Public Policy 2019; 18(1): 135 – 159, at page 136.

[6] Bischoff, P., “Surveillance Camera Statistics: Which City has the Most CCTV Cameras?”, Comparitech (2021) <https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities/>.

[7] Purnell, L., “A World with a Billion Cameras Watching You Is Just Around the Corner” Wall Street Journal (6 December 2019).

[8] Baharudin, H., “More police cameras to be installed islandwide as Home Team taps tech to enhance ops” The Straits Times (1 March 2021).

[9] Feldstein, S., supra n 4.

[10] Symanovich, S., “What is facial recognition? How facial recognition works”, Us.norton.com (2021) <https://us.norton.com/internetsecurity-iot-how-facial-recognition-software-works.html>.

[11] Finklea, K., “U.S. Capitol Attack and Law Enforcement Use of Facial Recognition Technology” Congressional Research Service (2021) <https://crsreports.congress.gov/product/pdf/IN/IN11614>.

[12] Yuan Yang, “Smile to enter: China embraces facial-recognition technology” Financial Times (2017) <https://www.ft.com/content/ae2ec0ac-4744-11e7-8519-9f94ee97d996>.

[13] Martin, C., “Facial Recognition in Law Enforcement” Seattle Journal for Social Justice 2020; 19(1): 309-346.

[14]  Garvie, C., Bedoya, A. and Frankle, J., “The Perpetual Line-Up” Georgetown Law (2016) <https://www.perpetuallineup.org/>.

[15] Martin (2020), supra n 13, at page 317 – 319.

[16] Drew Harwell, “Managers turn to surveillance software, always-on webcams to ensure employees are (really) working from home” Washington Post (2020).

[17] Adam Satariano, “How My Boss Monitors Me While I Work From Home” New York Times (2020).

[18] Alex Christian, “Bosses started spying on remote workers” Wired UK (2020).

[19] Ibid.

[20] Nicholas Frick, “The perceived surveillance of conversations through smart devices” Elsevier ScienceDirect Journals 2021; 47: 101046.

[21] Nicholas Frick (2021), ibid.

[22] Diarmaid Harkin, “The commodification of mobile phone surveillance” Crime Media Culture 2020; 16(1): 33-60, at page 34-36; See also Karen Levy, “Intimate surveillance” Idaho Law Review 2015; 51(3): 679-694, at p 686.

[23] Karen Levy, “Intimate Surveillance”, ibid.

[24] Suzan Balz, “Privacy and the Internet: Intrusion, Surveillance and Personal Data” International Review of Law, Computers & Technology (1996); 10(2): 219-234. 

[25] Ibid, at p 222 – 224.

[26] Id.

[27] Id.

[28] Id, at p 224-226.

[29] Id, at p 226.

[30] Warren Chik, “The Meaning and Scope of Personal Data Under the Singapore Personal Data Protection Act” (2014) 26 SAcLJ 354.

[31] Personal Data Protection Act 2012 (No. 26 of 2012); Warren Chik (2014), ibid, at p 362.

[32] Protection from Harassment Act (Cap 256A, 2015 Rev Ed) ss 2, 3, and 4; See also Goh Yihan & Yip Man, “The Protection from Harassment Act 2014 – legislative comment” (2014) 26:2 SAcLJ 700, at p 706.

[33] Ibid.

[34] Hayes v Willoughby [2013] 1 WLR 935 at [1].

[35] Chee Siok Chin and others v Minister for Home Affairs and another [2006] 1 SLR(R) 582.

[36] POHA, supra n 32, s 2.

[37] Nadine Chua, “More than 200,000 police cameras to be installed islandwide by 2030” The Straits Times (10 October 2021).

[38] See Jessica Batke & Mareike Ohlberg, “State of Surveillance: Government Documents Reveal New Evidence on China’s Efforts to Monitor its People” ChinaFile (2020) <https://www.chinafile.com/state-surveillance-china>.

[39] Hubbard, R.W., Magotiaux, S., & Sullivan, M., “The State Use of Closed Circuit TV: Is There a Reasonable Expectation of Privacy in Public?” Criminal Law Quarterly 2004; 49(2): 222-250.

[40] Lanx Goh & Jansen Aw, Data Protection Law in Singapore – Privacy and Sovereignty in an Interconnected World (Academy Publishing, 2nd Ed, 2018) at Chapter 4.

[41] POHA, supra n 32, ss 3 and 4.

[42] Goh Yihan & Yip Man (2014), supra n 32, at p 710 – 711.

[43] Singapore Parliamentary Debates, Official Report (13 March 2014) vol 91 (K Shanmugam, Minister for Law).

[44] Kashmir Hill, “Wrongfully Accused by an Algorithm” New York Times (24 June 2020).

[45] Criminal Procedure Code (Cap 68, Rev Ed 2012), s 359.

[46] Parti Liyani v Public Prosecutor [2021] SGHC 146.

[47] Race Equality Directive 2000/43/EC (2000) Official Journal L180. 

[48] Equality Act 2010 (c 15) (UK).

[49] Purnell, L. (2019), supra n 7.

[50] Maintenance of Religious Harmony Act (Chapter 167A, Rev Ed 2001).

[51] Penal Code (Cap 224, Rev Ed 2008) Chapter XV.

[52] Indrė Žliobaitė (2017), at 1063.

[53] Lawrence Lessig, Code 2.0 (New York: BasicBooks, 2006), at Chapter 7, at p 123. 

[54] Lawrence Lessig (2006), supra n 53, at p 124 – 125.

[55] Jewkes, Y, “Policing the Net: crime, regulation and surveillance in cyberspace” Dot.cons: Crime, Deviance and Identity on the Internet (Cullompton: Willan Publishing, 1st Ed, 2003), at p 24.

[56] Yong Jin Park, Jae Eun Chung, and Dong Hee Shin, “The Structuration of Digital Ecosystem, Privacy, and Big Data Intelligence” American Behavioral Scientist 2018; 62(10): 1319 – 1337.

[57] Ibid.

[58] Personal Data Protection Commission, “Advisory Guidelines on the PDPA for Selected Topics” (24 September 2013) at p 32 – 39, and p 56.

[59] Personal Data Protection Commission, “Model Artificial Intelligence Governance Framework” (21 January 2020).

[60] Diarmaid Harkin (2020), supra n 22.

[61] Wong Pei Ting, “Hook camera used by man to film in Starbucks toilet easily available online, at Sim Lim Square” Today Singapore (26 August 2018).

[62] Diarmaid Harkin (2020), supra n 22. 

[63] Knowledge@Wharton, “Your Data Is Shared and Sold…What’s Being Done About It?” Knowledge@Wharton (2021) <https://knowledge.wharton.upenn.edu/article/data-shared-sold-whats-done/>.

[64] Ryotaro Yamada & Kotaro Abe, “Honda Joins $400bn Gold Rush to Monetize Smart Car Data” Nikkei Asia (2021) <https://asia.nikkei.com/Business/Technology/Honda-joins-400bn-gold-rush-to-monetize-smart-car-data>; Leetaru, K., “What Does It Mean For Social Media Platforms To “Sell” Our Data?” Forbes (2018) <https://www.forbes.com/sites/kalevleetaru/2018/12/15/what-does-it-mean-for-social-media-platforms-to-sell-our-data/>.

[65] Lawrence Lessig (2006), supra n 53, at p 340. 

[66] Till Wäscher, “Framing Resistance Against Surveillance” Digital Journalism 2017; 5(3): 368-385, at p 368 – 369.

[67] Ibid, at p 381.

[68] John Villasenor, “Artificial Intelligence and bias: Four key challenges” Brookings (3 January 2019) <https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/>.

[69] Anthony Zurcher, “US Democrats introduce sweeping legislation to reform police” BBC News (9 June 2020) <https://www.bbc.com/news/world-us-canada-52969375>.