
Written by Alvin Chen and Stella Chen (Law Society of Singapore)

Editor’s Note: This article was first published in the August 2018 issue of the Singapore Law Gazette, the official publication of the Law Society of Singapore. Reproduced with permission.

In December 2017, DeepMind, a leading AI company, sent ripples through the AI world when it announced that it had developed a computer program (known as “AlphaZero”) which taught itself to play three games – chess, shogi and Go – from scratch, given only their rules, and defeated a world-champion computer program in each game within 24 hours of self-learning.1 What was remarkable about DeepMind’s achievement was the program’s “tabula rasa” or clean-slate approach, which did not draw on any games played by human players or other “domain knowledge”.2 Yet, DeepMind’s program was able to develop an unconventional and, some say, uncanny3 methodology that surpassed current computer understanding of how to play the three games.

Referring to an earlier version of DeepMind’s program (“AlphaGo”), which defeated the (human) world champion in Go in 2016, the legal futurist Richard Susskind considers such innovative technologies to be “disruptive”. In his international bestseller Tomorrow’s Lawyers: An Introduction to Your Future (“Tomorrow’s Lawyers”), Susskind defined “disruptive” as something that would “fundamentally challenge and change conventional habits”.4

Defining “Disruptive” in the Legal Profession 

In an earlier book, The End of Lawyers? Rethinking the Nature of Legal Services (“The End of Lawyers?“), Susskind defined disruptive legal technologies as those that “fundamentally challenge or overhaul” law firms rather than clients.5 He suggested that clients would benefit from disruptive legal technologies, as such technologies would substantially increase law firms’ efficiency in providing legal services.6

Susskind identified 13 disruptive legal technologies in Tomorrow’s Lawyers, including automated document assembly, e-learning, online dispute resolution, machine prediction, legal question answering and online legal guidance.7 Whilst it is beyond the scope of this article to comprehensively compare the different types of disruptive legal technologies, a common implication of some of these technologies is what Susskind calls “disintermediation”, or, simply put, removing lawyers from the supply chain.8

For example, online legal guidance systems, which may be created through websites and chatbots, can provide clients with direct access to legal knowledge, and thereby bypass the tradition of lawyers giving face-to-face legal advice to clients. Susskind observed that the disruptive effect of such online legal guidance systems is clear: clients may find it cheaper to turn to online automated tools than to seek advice from a human lawyer.9
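To illustrate the mechanism, the following is a minimal sketch (in Python; the subject matter, rules and monetary threshold are hypothetical examples, not drawn from any real tool or jurisdiction) of how a rule-based guidance chatbot maps a user’s answers directly to generic legal information, with no lawyer in the loop:

```python
# A minimal sketch of a rule-based legal-guidance chatbot.
# The decision rules and the $20,000 threshold are hypothetical examples.
def tenancy_deposit_guidance() -> str:
    returned = input("Was your rental deposit returned within 14 days? (y/n) ")
    if returned.strip().lower() == "y":
        return "No further action appears necessary."
    amount = float(input("How much is the deposit (in dollars)? "))
    if amount <= 20_000:
        return ("Your claim may fall within a small-claims tribunal's limit; "
                "you could consider filing there without engaging a lawyer.")
    return "Your claim may exceed the tribunal limit; consider seeking legal advice."

if __name__ == "__main__":
    print(tenancy_deposit_guidance())
```

A real ALAT layers natural-language interfaces and richer rule sets on top of this pattern, but the disintermediating logic is the same: the user’s facts flow straight into pre-encoded legal knowledge.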

Professional Ethics and Automated Legal Advice Tools 

Public discourse has primarily focused on the commercial and technological benefits of disruptive legal technologies, with sparse discussion of the ethical issues arising from their use. In the past few years, however, a growing body of literature has begun to explore the ethical ramifications of using one of Susskind’s disruptive legal technologies – online legal guidance systems.

In April 2018, the University of Melbourne’s Networked Society Institute published a discussion paper (the “NSI paper”) on Automated Legal Advice Tools (“ALATs”).10 “ALAT”, defined in the paper as a “specific new technolog[y] supporting and providing delivery of legal advice”,11 is in essence a more sophisticated term for Susskind’s online legal guidance systems. Using selected examples from the UK, US and Australia, the NSI paper classified ALATs into 11 categories,12 including the following:

  • Legal chatbots e.g. Lexi: provides customised legal information and documents through online interactive chat.
  • Legal apps e.g. Picture It Settled: predicts parties’ negotiating patterns and allows parties to refine their settlement strategies.
  • Virtual assistants e.g. FTA Portal: guides farmers in navigating voluminous free trade agreements on exporting goods from Australia.
  • Legal document automation e.g. Clerky: provides startups with automated company incorporation documents.
  • Legal document review e.g. Contract Probe: provides a quality check on contracts, apparently within 15 seconds, and also identifies missing or unusual clauses.
  • Legal artificial intelligence e.g. Compas Core: used by judges to assess the risk that an accused person will commit a new violent crime, re-offend, or pose a flight risk.
  • Legal data analytics and prediction e.g. Premonition: predicts which lawyers will win cases in which courts, even before they attend court.
  • Human-free smart contracts e.g. Agridigital: uses blockchain technologies to assist in transacting and settling agricultural commodities and to manage supply chain risk.
  • NewLaw business models e.g. LegalVision: a law firm which allows users to build their own legal documents on a website.
  • Legal technology companies e.g. Neota Logic System: develops smart applications by combining rules, reasoning, decision management and document automation.

Drawing on numerous recent articles and books, the NSI paper examined a number of regulatory and ethical issues arising from ALATs. A central issue is whether ALATs can be considered to be engaging in the unauthorised practice of law. Whilst giving legal advice has traditionally been viewed as lawyers’ exclusive province, some have justified the use of ALATs in the legal sector on the basis that ALATs claim to provide only “legal information”, which is “generic” and not tailored to “the particular circumstances of the individual”.13 However, the NSI paper noted that the purported dichotomy between legal advice and legal information may not hold in practice. Difficult questions of consumer choice and public interest will also need to be addressed.14

Other professional ethical issues discussed briefly in the NSI paper included the quality of legal advice provided by ALATs and the duty of competence expected of lawyers employing ALATs.15

Wider Ethical Concerns 

The use of ALATs also raises wider ethical concerns that have been part of a larger conversation on the ethics of AI. Two of these concerns will be addressed in this section.

The first is “algorithmic opacity”,16 or what the NSI paper called the “Black Box problem”.17 ALATs employ algorithms whose reasoning lacks transparency, creating an information gap between ALATs and their users (e.g. lawyers or clients).18 It is one thing for humans to fail to understand the reasoning behind AlphaZero’s uncanny moves in a chess game. It is a far more serious matter if lawyers cannot critique a machine’s reasoning when it makes decisions in high-stakes legal matters, even though significant concerns may exist regarding the reliability or validity of the ALAT’s decision.19
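To make the Black Box problem concrete, the following is a minimal sketch (in Python, using the open-source scikit-learn library; the features and data are hypothetical) of an opaque model at work: it produces a confident prediction about a matter, but no human-readable chain of reasoning that a lawyer could interrogate.

```python
# A minimal sketch of algorithmic opacity, trained on hypothetical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical, standardised features for 1,000 past matters:
# claim amount, years in dispute, number of prior judgments.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A confident prediction for a new matter -- with no accompanying rationale.
new_matter = np.array([[1.2, -0.3, 0.8]])
print(f"Predicted probability of success: {model.predict_proba(new_matter)[0, 1]:.2f}")

# The closest thing to "reasoning" the model offers is an aggregate signal:
print("Feature importances:", model.feature_importances_)
```

The model’s “reasoning” is spread across hundreds of decision trees and thousands of split thresholds; at best one can recover aggregate signals such as feature importances, which fall well short of a legal argument that a lawyer could critique.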

Although some have suggested that the Black Box problem can be addressed by allowing a human to ultimately override the algorithm,20 “automation bias” is likely to prevail. This is the tendency of people to defer to the perceived superiority of AI technology,21 and it suggests that the master control switch may in reality be no more than a showpiece. As Shannon Vallor and George A. Bekey point out in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, “[i]f an artificially intelligent system has consistently demonstrated greater competence than humans in a certain task, on what grounds do we give a human supervisor the power to challenge or override its decisions?”22

The second broader ethical concern is that algorithmic opacity may obscure “hidden machine bias”, such as a “harmful racial or gender bias”.23 A recent example is the Compas Core software cited above, which has generated controversy amid claims that it was racially biased and no more accurate at predicting an offender’s risk of recidivism than untrained members of the public.24 As the NSI paper observed, we need to ask critical questions about the assumptions an ALAT makes and the dataset(s) it is trained on, in order to determine whether any biases crept into what it learned from the data.25
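To make such questions concrete, the following is a minimal sketch (in Python, with entirely hypothetical numbers, not data from any real tool) of a basic bias audit: comparing the false positive rates a risk tool produces for two demographic groups, the disparity at the centre of the Compas controversy.

```python
# A minimal bias-audit sketch on hypothetical risk-tool outputs.
# All figures are invented for illustration; none come from Compas.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)      # hypothetical demographic group (0 or 1)
reoffended = rng.random(n) < 0.35       # hypothetical ground truth
# A hypothetical tool that flags group 1 as "high risk" more often:
flagged = rng.random(n) < np.where(group == 1, 0.45, 0.30)

for g in (0, 1):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = flagged[did_not_reoffend].mean()   # share wrongly flagged as high risk
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```

Even this simple audit presupposes access to the tool’s outputs and a labelled dataset – access that algorithmic opacity, and the proprietary claims that often accompany it, may preclude.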

Conclusion 

Although ethics has made a slow start in the legal AI world, it is gradually catching up with disruptive legal technologies such as ALATs. ALATs are likely to proliferate in the next decade, which raises important ethical questions for stakeholders in the legal industry, including regulators, lawyers, legal technologists and clients. Addressing these questions directly and holistically may prove to be as “disruptive” of the existing ethical inertia as the fundamental changes taking place in the field of legal technology.


1. David Silver et al., “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”, arXiv:1712.01815 [cs.AI] (5 December 2017) <https://arxiv.org/abs/1712.01815> (accessed 26 July 2018).
2. Ibid.
3. Will Knight, “AlphaZero’s ‘Alien’ Chess Shows the Power, and the Peculiarity, of AI” MIT Technology Review (8 December 2017) <https://www.technologyreview.com/s/609736/alpha-zeros-alien-chess-shows-the-power-and-the-peculiarity-of-ai/> (accessed 26 July 2018).
4. Richard Susskind, Tomorrow’s Lawyers: An Introduction to Your Future (Oxford: Oxford University Press, 2nd Edition, 2017) (“Tomorrow’s Lawyers”), at p. 15.
5. Richard Susskind, The End of Lawyers? Rethinking the Nature of Legal Services (Oxford: Oxford University Press, 2008) (“The End of Lawyers?”), at p. 99.
6. The End of Lawyers?, supra n 5, at p. 100; Tomorrow’s Lawyers, supra n 4, at p. 44.
7. Tomorrow’s Lawyers, supra n 4, at pp. 44-45.
8. The End of Lawyers?, supra n 5, at p. 6.
9. The End of Lawyers?, supra n 5, at pp. 121-122; Tomorrow’s Lawyers, supra n 4, at pp. 48-49.
10. Judith Bennett et al., “Current State of Automated Legal Advice Tools”, Networked Society Institute Discussion Paper 1 (April 2018) <https://networkedsociety.unimelb.edu.au/__data/assets/pdf_file/0020/2761013/2018-NSI-CurrentStateofALAT.pdf> (accessed 26 July 2018) (“Current State of Automated Legal Advice Tools”).
11. Current State of Automated Legal Advice Tools, Section 1.1, at p. 7.
12. Current State of Automated Legal Advice Tools, Appendix A, at pp. 37-57.
13. Current State of Automated Legal Advice Tools, Section 3.3, at p. 16; Section 5.2, at pp. 30-31.
14. Current State of Automated Legal Advice Tools, Section 3.3, at p. 16; Section 5.2, at pp. 30-31.
15. Current State of Automated Legal Advice Tools, Section 5.3, at p. 31; Section 5.5, at p. 32.
16. Shannon Vallor and George A. Bekey, “Artificial Intelligence and the Ethics of Self-Learning Robots” in Patrick Lin et al. (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (Oxford: Oxford University Press, 2017) (“Robot Ethics 2.0”), at p. 345.
17. Current State of Automated Legal Advice Tools, Section 5.7, at pp. 33-34.
18. Robot Ethics 2.0, supra n 16, at p. 345.
19. Robot Ethics 2.0, supra n 16, at p. 345.
20. “Humans may not always grasp why AIs act. Don’t panic” The Economist (15 February 2018) <https://www.economist.com/leaders/2018/02/15/humans-may-not-always-grasp-why-ais-act.-dont-panic> (accessed 26 July 2018).
21. Robot Ethics 2.0, supra n 16, at p. 349.
22. Robot Ethics 2.0, supra n 16, at p. 345.
23. Robot Ethics 2.0, supra n 16, at p. 346.
24. Ed Yong, “A Popular Algorithm Is No Better at Predicting Crimes Than Random People” The Atlantic (17 January 2018) <https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/> (accessed 26 July 2018).
25. Current State of Automated Legal Advice Tools, Section 5.7, at p. 33.