Asia's Leading Law & Technology Review

Tag: artificial intelligence

The Epistemic Challenge Facing the Regulation of AI: LRD Colloquium Vol. 1 (2020/07)

Reading time: 25 minutes

Written by Josh Lee* and Tristan Koh**

Editor’s note: This article was first published by the Law Society of Singapore as part of its Legal Research and Development Colloquium 2020. It has been re-published with the permission of the Law Society of Singapore and the article’s authors. Slight adaptations and reformatting changes have been made for readability.

ABSTRACT

The increased interest in artificial intelligence (‘AI’) regulation stems from increased awareness of its risks. This suggests the need for a regulatory structure to preserve safety and public trust in AI. A key challenge, however, is epistemic. This paper posits that to effectively regulate the development and use of AI (in particular, deep learning systems), policymakers need a deep understanding of the technical underpinnings of AI technologies and of the ethical and legal issues arising from their adoption. Given that AI technologies will impact many sectors, the paper also explores the challenges of applying AI technologies in the legal industry as an example of industry-specific epistemic challenges. The paper then suggests possible solutions: interdisciplinary knowledge-sharing, the introduction of baseline training in technology for legal practitioners, and the creation of a corps of allied legal professionals specialising in the implementation of AI.

TechLaw.Fest 2020 Quick Chats: Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London; Director of the Centre for Commercial Law Studies

Reading time: 8 minutes

Interview by Josh Lee, Lenon Ong and Elizaveta Shesterneva | Edited by Josh Lee

TechLaw.Fest 2020 (“TLF”) will take place online from 28 September – 2 October 2020, becoming the virtual focal point for leading thinkers, leaders and pioneers in law and technology. In the weeks leading up to TLF, the LawTech.Asia team will be bringing you regular interviews and shout-outs covering some of TLF’s most prominent speakers and the topics they will be speaking about.

This week, LawTech.Asia received the exclusive opportunity to interview Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and Director of the Centre for Commercial Law Studies. Ian will be speaking on a panel on “Global Perspectives on Tackling AI Governance” on the second day of TLF (29 September 2020).

The Epistemic Challenges Facing the Regulation of AI

Reading time: 8 minutes

Written by Tristan Koh and Josh Lee

The regulation of artificial intelligence (“AI”) has been a hot topic in recent years. This may stem from increased societal awareness of: (a) the possibilities that AI may deliver across various domains; and (b) the risks that the implementation of AI may pose (e.g., the risk of bias, discrimination, and the loss of human autonomy). These risks, in particular, have led renowned thought leaders to claim that AI technologies are “vastly more risky than North Korea” and could be the “worst event in the history of our civilisation”.

A key challenge facing any policymaker creating regulations for AI (or, for that matter, any new technology), however, is the epistemic (i.e., knowledge-based) challenge – policymakers must have domain knowledge to sufficiently appreciate the scope, size, degree and impact of any regulation, and to propose solutions that are effective and pragmatic.[1] In fact, some governments have recognised that subject-matter expertise is lacking when policies or regulations are crafted.[2] To effectively regulate the development and use of AI, it is clear that policymakers and regulators will need a deep understanding of AI technology and its technical underpinnings.

While a full exposition of AI technology is not possible in this short article, this article sets out some of the key technical features that policymakers and regulators should consider in the regulation of AI. In particular, this piece focuses on neural networks, a key element of modern AI systems.
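To make the point concrete, the sketch below (our illustration, not drawn from the article) trains a minimal two-layer neural network in plain Python/NumPy on the toy XOR problem. It shows the feature that makes such systems hard to scrutinise: the network’s behaviour lives entirely in numeric weights adjusted by training, not in human-readable rules.

```python
import numpy as np

# Toy data: XOR, a classic pattern that no single linear rule captures
# but that a small neural network can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                 # some seeds may need more steps
    # Forward pass: the network's entire "logic" is these matrix products.
    h = sigmoid(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predictions

    # Backward pass: gradient descent on the squared prediction error.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_p
    b2 -= 0.5 * grad_p.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(np.round(p, 2))                   # approaches [[0], [1], [1], [0]]
```

After training, the network answers correctly, yet nothing in W1 or W2 states the rule it applies. This opacity, at vastly greater scale in real deep learning systems, is a small-scale illustration of why policymakers need to understand the technical underpinnings before regulating them.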

LawTech.Asia’s Response to Public Consultation on Model AI Governance Framework

Reading time: 2 minutes

On 23 January 2019, the Personal Data Protection Commission (the “PDPC”), which is administered by the Infocomm Media Development Authority, published its Model Artificial Intelligence Governance Framework (“Model Framework”). The PDPC also launched a public consultation to receive feedback on the Model Framework.

As an organisation committed to thought leadership in law and technology (with AI regulation a key area of focus), LawTech.Asia produced a response to the public consultation on 24 June 2019.

LawTech.Asia’s response comprised the following two sections:

  1. A framework tailored for the implementation of the Model Framework in the legal technology sector. Tapping on LawTech.Asia’s familiarity with the legal and legal technology sectors, LawTech.Asia produced a customised framework tailored specifically for the implementation of the Model Framework in the legal technology industry. We hope that this customised framework will give legal technology firms deploying AI greater guidance in aligning their practices with the implementation guidelines set out in the Model Framework.
  2. Comments and feedback on each specific section covered by the Model Framework, namely: the overall principles set out in the Model Framework, internal governance measures, determination of the AI decision-making model, operations management, and customer relations management. The common thread tying our comments together is that the Model Framework could go further in elaborating on some of the guidelines it sets out, and in specifying the ends it is targeted at achieving.

Our response may be downloaded for reference here.

In closing, we emphasise that the views set out within our response are wholly independent. They do not represent the views of any other organisation save for LawTech.Asia.

LawTech.Asia is also grateful to our partner and friend, Ms Shazade Jameson from the World Data Project, for her guidance and assistance in the preparation of our response.

The LawTech.Asia Team

Disruptive Legal Technologies – Is Ethics Catching Up?

Reading time: 6 minutes

Written by Alvin Chen and Stella Chen (Law Society of Singapore)

Editor’s Note: This article was first published in the August 2018 issue of the Singapore Law Gazette, the official publication of the Law Society of Singapore. Reproduced with permission.

In December 2017, DeepMind, a leading AI company, sent ripples through the AI world when it announced that it had developed a computer program (known as “AlphaZero”) which learned to play three games – chess, shogi and Go – from scratch and defeated a world-champion computer program in each game within 24 hours of self-learning.[1] What was remarkable about DeepMind’s achievement was the program’s “tabula rasa” or clean-slate approach, which did not refer to any games played by human players or other “domain knowledge”.[2] Yet, DeepMind’s program was able to develop an unconventional and, some say, uncanny[3] methodology, surpassing current computer understanding of how to play the three games.
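For readers curious about what “self-learning” means mechanically, here is a deliberately simplified Python sketch of the self-play idea. It is our illustration, not DeepMind’s method: AlphaZero combines deep neural networks with Monte Carlo tree search, whereas this toy agent learns simple Nim (take one to three stones; taking the last stone wins) from an empty value table, purely by playing against itself.

```python
import random
from collections import defaultdict

# A toy "tabula rasa" self-play learner for simple Nim. The value table
# starts empty: no human games, no strategy hints, only the outcomes of
# the agent's own play against itself.
Q = defaultdict(float)          # learned value of (stones_left, move)
EPSILON, ALPHA = 0.1, 0.2       # exploration rate, learning rate

def choose_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                  # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(50_000):                            # self-play games
    stones, trajectory = random.randint(4, 20), []
    while stones > 0:
        move = choose_move(stones)
        trajectory.append((stones, move))
        stones -= move
    # The player who took the last stone wins (+1); the reward's sign
    # alternates as we credit moves backwards through the game.
    reward = 1.0
    for state, move in reversed(trajectory):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# The agent rediscovers the known strategy: leave a multiple of 4 stones.
print({s: max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(5, 8)})
# expected (approximately): {5: 1, 6: 2, 7: 3}
```

After enough games the table converges on the classic winning strategy, knowledge the program was never given. Scaled up enormously, that is the clean-slate dynamic the article describes.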

Referring to an earlier version of DeepMind’s program (“AlphaGo”), which defeated the (human) world champion in Go in 2016, the legal futurist Richard Susskind considers such innovative technologies to be “disruptive”. In his international bestseller Tomorrow’s Lawyers: An Introduction to Your Future (“Tomorrow’s Lawyers”), Susskind defined “disruptive” as something that would “fundamentally challenge and change conventional habits”.[4]
