Asia's Leading Law & Technology Review


The Landscape of AI Regulation in the Asia-Pacific

Reading time: 32 minutes

Written by Alistair Simmons and Matthew Rostick | Edited by Josh Lee Kok Thong

Introduction

In recent months, many jurisdictions in the Asia-Pacific (“APAC”) have adopted, or are considering, various forms of AI governance mechanisms. At least 16 jurisdictions in APAC have begun some form of AI governance, and this number will likely continue to grow. This paper surveys the different AI governance mechanisms across a number of APAC jurisdictions and offers some observations at the end.

This paper segments AI governance mechanisms into four categories:

- Direct AI regulations are enforceable rules that regulate the development, deployment or use of AI directly as a technology, and consequently have regulatory impact across multiple sectors.
- Voluntary frameworks cover voluntary and non-binding guidance issued by governmental entities that directly addresses the development, deployment or use of AI as a technology. Unlike sector-specific measures, these frameworks typically address the use of AI across multiple sectors.
- Indirect regulations (data & IP) are also enforceable legal rules, but they do not regulate the development, deployment or use of AI directly as a technology. They are rules of more general applicability that nevertheless have an impact on the development, deployment or use of AI. As the scope of this category is potentially broad, we have focused on data protection/privacy and intellectual property laws in this paper.
- Sector-specific measures refer to binding and non-binding rules and guidelines issued by sector regulators that are relevant to the development, deployment or use of AI in an industry. To avoid getting bogged down in whether particular rules and guidelines are technically binding, we have presented them together.

For avoidance of doubt, this paper addresses legal governance mechanisms only. There may be other initiatives afoot to drive alignment and good practices from a technical perspective. We do not seek to address technical measures in this paper.

Victoria Phua: Attributing electronic personhood only for strong AI? 

Reading time: 16 minutes

Written by Victoria Rui-Qi Phua | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Victoria Phua, puts forward an argument for attributing electronic personhood status to “strong AI”. According to her, algorithms trained by machine learning are increasingly performing or assisting with tasks previously exclusive to humans. As these systems provide decision-making rather than mere support, the emergence of strong AI has raised new legal and ethical issues that cannot be satisfactorily addressed by existing solutions. The ‘Mere Tools’ approach regards algorithms as ‘mere tools’ but does not address active contracting mechanisms. The ‘Agency’ approach treats AI systems as electronic agents but fails to deal with issues of legal personality and consent in agency. The ‘Legal Person’ approach goes further in treating AI systems as legal persons, but has drawn criticism because such systems possess neither morality nor intent. To address the question of legal personality in strong AI, Victoria proposes extending the fiction and concession theories of corporate personality to create a ‘quasi-person’ or ‘electronic person’. This is more satisfactory as it allows for a fairer allocation of risks and responsibilities among contracting parties. It also holds autonomous systems liable for their actions, thereby encouraging innovation. Further, it facilitates the allocation of damages. Last, it embodies the core philosophy of human-centricity.

Criminalising Offensive Speech Made by AI Chatbots in Singapore

Reading time: 16 minutes

Written by Loh Yu Tong | Edited by Josh Lee Kok Thong


This piece, written by Loh Yu Tong, demonstrates how Singapore’s present criminal framework is ill-prepared to address offensive speech made by autonomous AI chatbots. The author examines the regulatory challenges that may arise, and identifies a negligence-based framework – under which a duty of care is imposed on developers, deployers and malicious third-party interferers – as preferable to an intent-based one. Other viable solutions include employing regulatory and civil sanctions. While AI systems are likely to become more complex in the future, the author holds out hope that Singapore’s robust legal system can satisfactorily balance the deterrence of harm against the risk of stifling innovation.

The Use of Chatbots as a Way to Create a Two-Step Approach to Providing Legal Services: Case Study: LRD Colloquium Vol. 1 (2020/06)

Reading time: 16 minutes

Written by Elizaveta Shesterneva*

Editor’s note: This article was first published by the Law Society of Singapore as part of its Legal Research and Development Colloquium 2020. It has been re-published with the permission of the Law Society of Singapore and the article’s author. Slight adaptations and reformatting changes have been made for readability.

ABSTRACT

Chatbots have already been deployed by law firms and Legal Technology (‘LegalTech’) start-ups to perform some law-related activities as a way to provide better assistance to clients. The widespread use of chatbots may further deepen existing issues relating to the scope of legal functions chatbots undertake, the unauthorised practice of law, and competitiveness in the legal sector. This paper examines the aforementioned issues and suggests a two-step approach to providing legal services which incorporates the use of chatbots with help from qualified attorneys. The goal of the suggested two-step approach is a peaceful collaboration between technology and legal professionals, where the use of chatbots does not threaten the ‘status quo’ of qualified persons, but rather encourages further innovation in the legal profession.

The Future of Artificial Intelligence and Intellectual Property Rights

Reading time: 12 minutes

Written by Samuel Chan Zheng Wen (Associate Author) | Mentored by Lenon Ong | Reviewed by Associate Professor Saw Cheng Lim

LawTech.Asia is proud to have commenced the third run of its popular Associate Author (2020) Programme. The aim of the Associate Authorship Programme is to develop the knowledge and exposure of student writers in the domains of law and technology, while providing them with mentorship from LawTech.Asia’s writers and tailored guidance from a respected industry mentor.

In partnership with the National University of Singapore’s alt+law and Singapore Management University’s Legal Innovation and Technology Club, five students were selected as Associate Authors. This piece by Samuel Chan, reviewed by industry reviewer Associate Professor Saw Cheng Lim (Singapore Management University School of Law), marks the first thought piece in this series. It examines the future of artificial general intelligence and intellectual property rights.

