Asia's Leading Law & Technology Review


The Landscape of AI Regulation in the Asia-Pacific

Reading time: 32 minutes

Written by Alistair Simmons and Matthew Rostick | Edited by Josh Lee Kok Thong

Introduction

In recent months, many jurisdictions in the Asia-Pacific (“APAC”) have adopted or are considering various forms of AI governance mechanisms. At least 16 jurisdictions in APAC have begun some form of AI governance, and this number will likely continue to increase. This paper scans the different AI governance mechanisms across a number of APAC jurisdictions and offers some observations at the end. 

This paper segments AI governance mechanisms into four categories. Direct AI regulations are enforceable rules that regulate the development, deployment or use of AI directly as a technology, and consequently have regulatory impact across multiple sectors. Voluntary frameworks cover voluntary, non-binding guidance issued by governmental entities that directly addresses the development, deployment or use of AI as a technology; unlike sector-specific measures, these frameworks typically address the use of AI across multiple sectors. Indirect regulations (data & IP) are also enforceable legal rules, but do not regulate the development, deployment or use of AI directly as a technology; they are rules of more general applicability that nevertheless have an impact on the development, deployment or use of AI. As the scope of this category is potentially broad, we have focused on data protection/privacy and intellectual property laws in this paper. Sector-specific measures refer to binding and non-binding rules and guidelines issued by sector regulators that are relevant to the development, deployment or use of AI in an industry; to avoid getting bogged down in whether particular rules and guidelines are technically binding, we have presented them together.

For the avoidance of doubt, this paper addresses legal governance mechanisms only. There may be other initiatives afoot to drive alignment and good practices from a technical perspective. We do not seek to address technical measures in this paper.

TechLaw.Fest 2023: This Is What’s Next

Reading time: 12 minutes

Written by Hannah Loo Yuet Ying and Leong Tzi An (Zaine) | Edited by Josh Lee Kok Thong

The theme of this year’s TechLaw.Fest is ‘This Is What’s Next’. I thought this is very apt in the realm of law and technology. Both are forward-looking and multi-faceted, such that we constantly, even in practice, ask ourselves ‘what’s next’.

Second Minister for Law and Minister for Culture, Community and Youth Edwin Tong S.C.
Opening Remarks at TechLaw.Fest 2023

Introduction

Since the last edition of TechLaw.Fest in 2022, technology has developed at a rapid pace. It is now trite to say that technology touches every aspect of our lives. It has transformed, and continues to transform, how people work, interact and play. This is embodied not only in the rise of large language models (“LLMs”) and generative AI applications such as ChatGPT, but also in questions about the future of cryptocurrency, immersive technologies, and online safety. Amidst rapid technological developments on multiple fronts, it is important to have robust conversations on the workings of these technologies and their impact – positive or negative – on people and society.

As one of Asia’s largest law and technology conferences, TechLaw.Fest is an important forum that brings together industry leaders, government, legal professionals, technologists, academics, and civil society for these robust conversations. As the first fully physical edition of the event since 2019, TechLaw.Fest 2023 brought together thought leaders from various domains to answer “what’s next” in the vast field of law and technology. This article offers a glimpse into the key insights and themes discussed across both days of Singapore’s signature law and technology conference.

An Interview with Professor David B. Wilkins, Lester Kissel Professor of Law, Vice Dean for Global Initiatives on the Legal Profession, Faculty Director of the Center on the Legal Profession, Harvard Law School

Reading time: 9 minutes

Written by Josh Lee Kok Thong

On 3 and 4 August 2023, the Singapore Academy of Law (“SAL”), in conjunction with the Singapore Management University (“SMU”), organized a conference titled “The Next Frontier in Lawyering: From ESG to GPT”. The conference provided participants with an overview of the latest trends in the legal industry, and how these trends posed opportunities and challenges for lawyers and legal professionals. Held at the SMU Yong Pung How School of Law (“SMUYPHSOL”), the conference saw hundreds of attendees learn from global and local legal industry leaders about cutting-edge developments in the legal industry.

One of these global leaders was Professor David B. Wilkins. As the Lester Kissel Professor of Law, Vice Dean for Global Initiatives on the Legal Profession, and Faculty Director of the Center on the Legal Profession at Harvard Law School, Professor Wilkins is a prominent thought leader and speaker on the future of the legal profession, disruptive innovation, and legal industry leadership. He has written over 80 articles on the legal profession in leading scholarly journals and the popular press, and teaches several courses at Harvard Law School, such as The Legal Profession and Challenges of a General Counsel.

At the conference, Professor Wilkins delivered a keynote address titled “From ‘Law’s Empire’ to ‘Integrated Solutions’: How Globalization, Technology, and Organizational Change Are Opening ‘New Frontiers’ for Lawyers, Clients and Society”. His address covered how law is becoming a more collaborative enterprise (with other knowledge domains) in a volatile, uncertain, complex and ambiguous world. While law will remain a domain driven by human capital, Professor Wilkins urged lawyers to learn how to work with and understand technology. At the conference, Professor Wilkins also moderated a discussion on “Technology and the Legal Profession”, which explored how new technologies are transforming how lawyers work and interact with clients.

Following his keynote address, LawTech.Asia (“LTA”) had the valuable opportunity of speaking with Professor Wilkins on his views on the opportunities and impact of technology on the legal industry, the training of future lawyers, how Singapore could strengthen its legal innovation ecosystem, and how legal technology could better serve the underserved and under-represented in society. The interview, set out below, has been edited only for readability and brevity.

Victoria Phua: Attributing electronic personhood only for strong AI? 

Reading time: 16 minutes

Written by Victoria Rui-Qi Phua | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Victoria Phua, puts forward an argument for attributing electronic personhood status to “strong AI”. According to her, algorithms trained by machine learning are increasingly performing or assisting with tasks previously exclusive to humans. As these systems engage in decision-making rather than mere decision support, the emergence of strong AI has raised new legal and ethical issues that cannot be satisfactorily addressed by existing solutions. The ‘Mere Tools’ approach regards algorithms as mere tools but does not address active contracting mechanisms. The ‘Agency’ approach treats AI systems as electronic agents but fails to deal with legal personality and consent issues in agency. The ‘Legal Person’ approach goes further to treat AI systems as legal persons, but has drawn criticism because such systems possess neither morality nor intent. To address legal personality in strong AI, Victoria proposes extending the fiction and concession theories of corporate personality to create a ‘quasi-person’ or ‘electronic person’. This is more satisfactory as it allows for a fairer allocation of risks and responsibilities among contracting parties. It also holds autonomous systems liable for their actions, thereby encouraging innovation. Further, it facilitates the allocation of damages. Last, it embodies the core philosophy of human-centricity.

The value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape

Reading time: 11 minutes

Written by Nanda Min Htin | Edited by Josh Lee Kok Thong


This piece, written by Nanda Min Htin, examines the value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape. Singapore’s data protection framework treats privacy-protected data as anonymised so long as there is no serious possibility that the data can be re-identified. Such data are not considered personal data, and therefore fall outside the protection of Singapore law. In contrast, major foreign legislation such as the GDPR in Europe sets a clearer and stricter standard for anonymised data by requiring re-identification to be impossible; anything less is considered pseudonymised data and subjects the data controller to legal obligations. The lack of a similar intermediate standard in Singapore risks depriving reversibly de-identified data of legal protection. One key example is differential privacy, a popular privacy standard for a class of data de-identification techniques. It prevents the re-identification of individuals at a high confidence level by adding random noise to computational results queried from the data. However, like many other data anonymisation techniques, it does not completely prevent re-identification. This article first highlights the value of differential privacy in exposing the need for an intermediate legal standard for anonymisation under Singapore data protection law. It then explains how differential privacy’s technical characteristics would help establish regulatory standards for privacy by design and help organisations fulfil data breach notification obligations.
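To make the noise-addition idea concrete, here is a minimal sketch of the Laplace mechanism, the textbook construction behind differential privacy for numeric queries. It is illustrative only and not drawn from the article: the function names, the dataset, and the choice of a counting query are all assumptions.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential variables with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query changes by at most 1 when any one person's record is
    # added or removed (sensitivity 1), so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many individuals in the dataset are over 40?
ages = [23, 35, 41, 52, 67, 29, 44]
noisy_count = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

A smaller epsilon means more noise: stronger privacy but a less accurate answer. Crucially, even a heavily noised answer only makes re-identification improbable, never impossible, which is precisely the residual risk that motivates the article's call for an intermediate legal standard.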

