Asia's Leading Law & Technology Review

Tag: artificial intelligence

The Use of Chatbots as a Way to Create a Two-Step Approach to Providing Legal Services: Case Study: LRD Colloquium Vol. 1 (2020/06)

Reading time: 16 minutes

Written by Elizaveta Shesterneva*

Editor’s note: This article was first published by the Law Society of Singapore as part of its Legal Research and Development Colloquium 2020. It has been re-published with the permission of the Law Society of Singapore and the article’s authors. Slight adaptations and reformatting changes have been made for readability.

ABSTRACT

Chatbots have already been deployed by law firms and Legal Technology (‘LegalTech’) start-ups to perform some law-related activities as a way to provide better assistance to clients. The widespread use of chatbots may further deepen existing issues relating to the scope of legal functions chatbots undertake, the unauthorised practice of law and competitiveness in the legal sector. This paper examines these issues and suggests a two-step approach to providing legal services which incorporates the use of chatbots with help from qualified attorneys. The suggested two-step approach aims at a peaceful collaboration between technology and legal professionals, in which the use of chatbots does not threaten the ‘status quo’ of qualified persons but rather encourages further innovation in the legal profession.

The Future of Artificial Intelligence and Intellectual Property Rights

Reading time: 12 minutes

Written by Samuel Chan Zheng Wen (Associate Author) | Mentored by Lenon Ong | Reviewed by Associate Professor Saw Cheng Lim

LawTech.Asia is proud to have commenced the third run of its popular Associate Author (2020) Programme. The aim of the Associate Authorship Programme is to develop the knowledge and exposure of student writers in the domains of law and technology, while providing them with mentorship from LawTech.Asia’s writers and tailored guidance from a respected industry mentor.

In partnership with the National University of Singapore’s alt+law and Singapore Management University’s Legal Innovation and Technology Club, five students were selected as Associate Authors. This piece by Samuel Chan, reviewed by industry reviewer Associate Professor Saw Cheng Lim (Singapore Management University School of Law), marks the first thought piece in this series. It examines the future of artificial intelligence and intellectual property rights.

The Epistemic Challenge Facing the Regulation of AI: LRD Colloquium Vol. 1 (2020/07)

Reading time: 25 minutes

Written by Josh Lee* and Tristan Koh**

Editor’s note: This article was first published by the Law Society of Singapore as part of its Legal Research and Development Colloquium 2020. It has been re-published with the permission of the Law Society of Singapore and the article’s authors. Slight adaptations and reformatting changes have been made for readability.

ABSTRACT

The increased interest in artificial intelligence (‘AI’) regulation stems from increased awareness of its risks. This suggests the need for a regulatory structure to preserve safety and public trust in AI. A key challenge, however, is the epistemic challenge. This paper posits that to effectively regulate the development and use of AI (in particular, deep learning systems), policymakers need a deep understanding of the technical underpinnings of AI technologies and the ethical and legal issues arising from their adoption. Given that AI technologies will impact many sectors, the paper also explores the challenges of applying AI technologies in the legal industry as an example of industry-specific epistemic challenges. It then suggests possible solutions: the need for interdisciplinary knowledge, the introduction of baseline training in technology for legal practitioners, and the creation of a corps of allied legal professionals specialising in the implementation of AI.

TechLaw.Fest 2020 Quick Chats: Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London; Director of Centre for Commercial Law Studies

Reading time: 8 minutes

Interview by Josh Lee, Lenon Ong and Elizaveta Shesterneva | Edited by Josh Lee

TechLaw.Fest 2020 (“TLF”) will take place online from 28 September to 2 October 2020, becoming the virtual focal point for leading thinkers, leaders and pioneers in law and technology. In the weeks leading up to TLF, the LawTech.Asia team will be bringing you regular interviews and shout-outs covering some of TLF’s most prominent speakers and the topics they will be speaking about.

This week, LawTech.Asia received the exclusive opportunity to interview Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and Director of the Centre for Commercial Law Studies. Ian will be speaking at a panel on “Global Perspectives on Tackling AI Governance” on the second day of TLF (29 September 2020).

The Epistemic Challenges Facing the Regulation of AI

Reading time: 8 minutes

Written by Tristan Koh and Josh Lee

The regulation of artificial intelligence (“AI”) has been a hot topic in recent years. This may stem from increased societal awareness of: (a) the possibilities that AI may deliver across various domains; and (b) the risks that the implementation of AI may cause (e.g., the risk of bias, discrimination, and the loss of human autonomy). These risks, in particular, have led renowned thought leaders to claim that AI technologies are “vastly more risky than North Korea” and could be the “worst event in the history of our civilisation”.

A key challenge facing any policymaker creating regulations for AI (or, for that matter, any new technology), however, is the epistemic (i.e., knowledge-based) challenge: policymakers must have sufficient domain knowledge to appreciate the scope, size, degree and impact of any regulation, and to propose solutions that are effective and pragmatic.[1] Indeed, some governments have recognised that subject-matter expertise is lacking when policies or regulations are being crafted.[2] To effectively regulate the development and use of AI, policymakers and regulators will clearly need a deep understanding of AI technology and its technical underpinnings.

While a full exposition of AI technology is not possible in this short article, it sets out some of the key technical features that policymakers and regulators should consider when regulating AI. In particular, it focuses on neural networks, a key element of modern AI systems.
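For readers who want a concrete sense of what a neural network is, the sketch below trains a tiny feed-forward network from scratch in Python. The toy XOR task, layer sizes and training settings are illustrative assumptions rather than anything drawn from the article; the point is simply that a network is a stack of weighted sums and non-linear activations whose parameters are adjusted by gradient descent.

# A minimal sketch of a feed-forward neural network, assuming a toy XOR
# classification task; layer sizes and settings are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and labels (XOR): the network must learn a non-linear boundary.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, one output unit.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: weighted sums followed by non-linear activations.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of the learned parameters (the "weights").
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # predictions should approach [0, 1, 1, 0] after training

The learned behaviour lives entirely in the numerical weights rather than in human-readable rules, which is one reason understanding such systems poses the kind of epistemic challenge the article describes.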

