Asia's Leading Law & Technology Review

Category: Artificial Intelligence Page 2 of 11

An Interview with Professor David B. Wilkins, Lester Kissel Professor of Law, Vice Dean for Global Initiatives on the Legal Profession, Faculty Director of the Center on the Legal Profession, Harvard Law School

Reading time: 9 minutes

Written by Josh Lee Kok Thong

On 3 and 4 August 2023, the Singapore Academy of Law (“SAL”), in conjunction with the Singapore Management University (“SMU”), organized a conference titled “The Next Frontier in Lawyering: From ESG to GPT”. The conference provided participants with an overview of the latest trends in the legal industry, and how these trends pose opportunities and challenges for lawyers and legal professionals. Held at the SMU Yong Pung How School of Law (“SMUYPHSOL”), the conference saw hundreds of attendees learn from global and local legal industry leaders about cutting-edge developments in the legal industry.

One of these global leaders and giants was Professor David B. Wilkins. As Lester Kissel Professor of Law, Vice Dean for Global Initiatives on the Legal Profession, and Faculty Director of the Center on the Legal Profession at Harvard Law School, Professor Wilkins is a prominent thought leader and speaker on the future of the legal profession, disruptive innovation, and legal industry leadership. He has written over 80 articles on the legal profession in leading scholarly journals and the popular press, and teaches several courses at Harvard Law School, such as The Legal Profession and Challenges of a General Counsel.

At the conference, Professor Wilkins delivered a keynote address titled “From ‘Law’s Empire’ to ‘Integrated Solutions’: How Globalization, Technology, and Organizational Change Are Opening ‘New Frontiers’ for Lawyers, Clients and Society”. His address covered how law is becoming a more collaborative enterprise (with other knowledge domains) in a volatile, uncertain, complex and ambiguous world. While law would remain a domain driven by human capital, Professor Wilkins also urged lawyers to learn how to work with and understand technology. At the conference, Professor Wilkins also moderated a discussion on “Technology and the Legal Profession”, which explored how new technologies are transforming how lawyers work and interact with clients.

Following his keynote address, LawTech.Asia (“LTA”) had the valuable opportunity of speaking with Professor Wilkins about his views on the opportunities and impact of technology on the legal industry, the training of future lawyers, how Singapore could strengthen its legal innovation ecosystem, and how legal technology could be better oriented to serve the underserved and under-represented in society. The interview, set out below, has been edited only for readability and brevity.

Victoria Phua: Attributing electronic personhood only for strong AI? 

Reading time: 16 minutes

Written by Victoria Rui-Qi Phua | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Victoria Phua, puts forward an argument for attributing electronic personhood status to “strong AI”. According to her, algorithms trained by machine learning are increasingly performing or assisting with tasks previously exclusive to humans. As these systems now make decisions rather than merely support them, the emergence of strong AI has raised new legal and ethical issues that cannot be satisfactorily addressed by existing solutions. The ‘Mere Tools’ approach regards algorithms as ‘mere tools’ but does not address active contracting mechanisms. The ‘Agency’ approach treats AI systems as electronic agents but fails to deal with legal personality and consent issues in agency. The ‘Legal Person’ approach goes further to treat AI systems as legal persons, but has drawn criticism because such systems have neither morality nor intent. To address the legal personality of strong AI, Victoria proposes extending the fiction and concession theories of corporate personality to create a ‘quasi-person’ or ‘electronic person’. This is more satisfactory as it allows for a fairer allocation of risks and responsibilities among contracting parties. It also holds autonomous systems liable for their actions, thereby encouraging innovation. Further, it facilitates the allocation of damages. Last, it embodies the core philosophy of human-centricity.

The value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape

Reading time: 11 minutes

Written by Nanda Min Htin | Edited by Josh Lee Kok Thong

This piece, written by Nanda Min Htin, examines the value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape. Singapore’s data protection framework treats de-identified data as anonymised, and hence not personal data protected under Singapore law, so long as there is no serious possibility that the data could be re-identified. In contrast, major foreign legislation such as the GDPR in Europe sets a clearer and stricter standard for anonymised data by requiring re-identification to be impossible; anything less is considered pseudonymised data and subjects the data controller to legal obligations. The lack of a similar intermediate standard in Singapore risks depriving reversibly de-identified data of legal protection. One key example is differential privacy, a popular privacy standard for a class of data de-identification techniques. It prevents the re-identification of individuals at a high confidence level by adding random noise to computational results queried from the data. However, like many other data anonymisation techniques, it does not completely prevent re-identification. This article first highlights the value of differential privacy in exposing the need for an intermediate legal standard for anonymisation under Singapore data protection law. It then explains how differential privacy’s technical characteristics could help establish regulatory standards for privacy by design and help organisations fulfil data breach notification obligations.
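To make the mechanism concrete: the summary above describes differential privacy as adding random noise to query results so that no individual's presence can be inferred with high confidence. A minimal sketch of the classic Laplace mechanism for a count query follows; the dataset, query, and epsilon value are illustrative assumptions, not drawn from the article.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count query.

    A count has sensitivity 1: adding or removing any one record changes
    the true count by at most 1, so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative data: how many individuals are over 40?
ages = [23, 35, 41, 29, 52, 61, 44]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

The sketch also illustrates the article's caveat: the noisy answer is close to, but not equal to, the true count, and repeated or combined queries consume the privacy budget, which is why differential privacy reduces, but does not eliminate, the possibility of re-identification.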

Criminalising Offensive Speech Made by AI Chatbots in Singapore

Reading time: 16 minutes

Written by Loh Yu Tong | Edited by Josh Lee Kok Thong

This piece, written by Loh Yu Tong, demonstrates how Singapore’s present criminal framework is ill-prepared to address offensive speech made by autonomous AI chatbots. The author examines the regulatory challenges that may arise, and identifies a negligence-based framework – under which a duty of care is imposed on developers, deployers and malicious third-party interferers – as preferable to an intent-based one. Other viable solutions include employing regulatory and civil sanctions. While AI systems are likely to become more complex in the future, the author holds out hope that Singapore’s robust legal system can satisfactorily balance the deterrence of harm against the risk of stifling innovation.

Legal Implications of Digital Surveillance: Individual Protection

Reading time: 14 minutes

Written by Lim Hong Wen, Amelia | Edited by Josh Lee Kok Thong

This piece, written by Lim Hong Wen, Amelia, analyses three key concerns that may arise from the use of digital surveillance: privacy, harassment, and algorithmic bias. The paper then examines how the four modalities expounded by Lawrence Lessig (i.e. the law, architecture, social norms, and the market) come into play in regulating the use of digital surveillance. Part II first explores developments in the use of digital surveillance by the state, employers, and individuals. Digital surveillance has transformed over the years, and current laws may be insufficient to protect individuals against certain unwanted forms of it. Part III identifies the inadequacies of current laws in addressing the key concerns identified earlier (i.e. privacy, harassment, and algorithmic bias). Given the lack of legal recourse available, Part IV then analyses how the use or misuse of digital surveillance can be regulated by the remaining three modalities (i.e. architecture, social norms, and the market).

