Asia's Leading Law & Technology Review

Tag: artificial intelligence

TechLaw.Fest 2020 Quick Chats: Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London; Director of Centre for Commercial Law Studies

Reading time: 8 minutes

Interview by Josh Lee, Lenon Ong and Elizaveta Shesterneva | Edited by Josh Lee

TechLaw.Fest 2020 (“TLF”) will take place online from 28 September – 2 October 2020, becoming the virtual focal point for leading thinkers, leaders and pioneers in law and technology. In the weeks leading up to TLF, the LawTech.Asia team will be bringing you regular interviews and shout-outs covering some of TLF’s most prominent speakers and the topics they will be speaking about.

This week, LawTech.Asia received the exclusive opportunity to interview Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and the Director of the Centre for Commercial Law Studies. Ian will be speaking at a panel on “Global Perspectives on Tackling AI Governance” on the second day of TLF (29 September 2020).

The Epistemic Challenges Facing the Regulation of AI

Reading time: 8 minutes

Written by Tristan Koh and Josh Lee

The regulation of artificial intelligence (“AI”) has been a hot topic in recent years. This may stem from increased societal awareness of: (a) the possibilities that AI may deliver across various domains; and (b) the risks that the implementation of AI may cause (e.g., the risk of bias, discrimination, and the loss of human autonomy). These risks, in particular, have led renowned thought leaders to claim that AI technologies are “vastly more risky than North Korea” and could be the “worst event in the history of our civilisation”.

A key challenge facing any policymaker creating regulations for AI (or, for that matter, any new technology), however, is the epistemic (i.e., knowledge-based) challenge – policymakers must have sufficient domain knowledge to appreciate the scope, size, degree and impact of any regulation, and to propose solutions that are effective and pragmatic.[1] In fact, some governments have recognised that subject-matter expertise is lacking when policies or regulations are being crafted.[2] To effectively regulate the development and use of AI, it is clear that policymakers and regulators will need to possess a deep understanding of AI technology and its technical underpinnings.

While a full exposition of AI technology in this short article would not be possible, this article sets out some of the key technical features that policymakers and regulators should consider in the regulation of AI. In particular, this piece focuses on neural networks, a key element in modern AI systems. 
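To give a flavour of the technical detail that such an understanding involves, a neural network can be sketched in a few lines of Python. This is a toy illustration only (the weights and structure below are invented for the example, not drawn from any real system): a network passes inputs through layers of weighted sums and non-linear "activation" functions to produce an output.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass through a tiny fully-connected network:
    inputs -> hidden layer (sigmoid) -> single sigmoid output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Toy network: 2 inputs, 2 hidden neurons, 1 output
hidden_weights = [[0.5, -0.6], [0.1, 0.8]]
output_weights = [1.2, -0.4]
score = forward([1.0, 0.0], hidden_weights, output_weights)
print(score)  # a value between 0 and 1
```

In a real system, the weights are not hand-written but learned from data – and it is precisely this learned, opaque quality that gives rise to many of the regulatory concerns (such as bias and explainability) discussed above.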

LawTech.Asia’s Response to Public Consultation on Model AI Governance Framework

Reading time: 2 minutes

On 23 January 2019, the Personal Data Protection Commission (part of the Infocomm Media Development Authority) (the “PDPC”) published its Model Artificial Intelligence Governance Framework (“Model Framework”). The PDPC also launched a public consultation to receive feedback on the Model Framework.

As an organisation committed to thought leadership in law and technology (with AI regulation a key area of focus), LawTech.Asia produced a response to the public consultation on 24 June 2019.

LawTech.Asia’s response comprised the following two sections:

  1. A framework tailored for the implementation of the Model Framework in the legal technology sector. Tapping on LawTech.Asia’s familiarity with the legal and legal technology sectors, LawTech.Asia produced a customised framework specifically for applying the Model Framework to the legal technology industry. We hope that this customised framework will give legal technology firms deploying AI clearer guidance in aligning their practices with the implementation guidelines set out in the Model Framework.
  2. Comments and feedback on each specific section covered by the Model Framework. These sections are, namely: the overall principles set out in the Model Framework, internal governance measures, determination of the AI decision-making model, operations management, and customer relations management. The common thread tying our comments together is that the Model Framework could go further in elaborating on some of the guidelines it sets out, and in stating more specifically the ends that the Model Framework aims to achieve.

Our response may be downloaded for reference here:

In closing, we emphasise that the views set out within our response are wholly independent. They do not represent the views of any other organisation save for LawTech.Asia.

LawTech.Asia is also grateful to our partner and friend, Ms Shazade Jameson from the World Data Project, for her guidance and assistance in the preparation of our response.

The LawTech.Asia Team

Disruptive Legal Technologies – Is Ethics Catching Up?

Reading time: 6 minutes

Written by Alvin Chen and Stella Chen (Law Society of Singapore)

Editor’s Note: This article was first published in the August 2018 issue of the Singapore Law Gazette, the official publication of the Law Society of Singapore. Reproduced with permission.

In December 2017, DeepMind, a leading AI company, sent ripples through the AI world when it announced that it had developed a computer program (known as “AlphaGoZero” or “AlphaZero”) which learned the rules of three games – chess, Shogi and Go – from scratch and defeated a world-champion computer program in each game within 24 hours of self-learning.1 What was remarkable about DeepMind’s achievement was the program’s “tabula rasa” or clean slate approach which did not refer to any games played by human players or other “domain knowledge”.2 Yet, DeepMind’s program was able to develop an unconventional and some say, uncanny,3 methodology in surpassing current computer understanding of how to play the three games.

Referring to an earlier version of DeepMind’s program (“AlphaGo”) which defeated the (human) world champion in Go in 2016, the legal futurist Richard Susskind considers such innovative technologies to be “disruptive”. In his international bestseller Tomorrow’s Lawyers: An Introduction to Your Future (“Tomorrow’s Lawyers“), Susskind defined “disruptive” as something that would “fundamentally challenge and change conventional habits”.4

E-Discovery: Artificial Intelligence & Predictive Coding – Discovering the Way Forward

Reading time: 5 minutes

Written by Emily Tan | Edited by Jennifer Lim Wei Zhen, Josh Lee, Maryam Salehijam (Resolve Disputes Online)

Introduction

Cases turn on their facts. Lawyers depend on both the law and the specific circumstances of their client’s case to make a convincing argument for their client. This makes the discovery process, where the available information is sifted through to identify relevant evidence, a crucial step in any case.  

However, discovery is by its nature a slow and laborious process. Countless hours are spent digging through documents, emails and other such sources, searching for the key facts which may make or break a case. This “time-drain” has been exacerbated by the digitalisation of work, which has exponentially increased the volume of documents that lawyers have to analyse. In addition, the discovery task is typically delegated to junior lawyers — which explains the television stereotype of young lawyers poring over cartons and cartons of documents late into the night.
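The core idea behind predictive coding can be sketched very simply: documents are scored for likely relevance and reviewed in descending order of that score, rather than at random. The sketch below is a deliberately crude stand-in (the keyword-overlap scoring and the sample documents are invented for illustration; real tools use trained statistical models built from lawyer-reviewed “seed” documents):

```python
def relevance_score(document, seed_terms):
    # Fraction of seed terms appearing in the document --
    # a crude proxy for the trained models in real predictive-coding tools.
    words = set(document.lower().split())
    hits = sum(1 for term in seed_terms if term in words)
    return hits / len(seed_terms)

# Hypothetical documents and seed terms for illustration
docs = [
    "Lunch menu for Friday",
    "Email regarding contract breach and damages claim",
]
seed_terms = ["contract", "breach", "damages"]

# Review queue: most likely relevant documents first
ranked = sorted(docs, key=lambda d: relevance_score(d, seed_terms),
                reverse=True)
print(ranked[0])
```

Even this toy version shows the shift in workflow: instead of junior lawyers reading every document, human review effort is concentrated on the machine-ranked top of the pile.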

