
Tag: model ai governance framework

The value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape

Reading time: 11 minutes

Written by Nanda Min Htin | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Covering a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.

This piece, written by Nanda Min Htin, examines the value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape. Singapore’s data protection framework treats de-identified data that could still be re-identified as anonymised, so long as there is no serious possibility that re-identification will occur. As a result, such data are not considered personal data and fall outside the protection of Singapore law. In contrast, major foreign legislation such as the GDPR in Europe sets a clearer and stricter standard for anonymised data by requiring re-identification to be impossible; anything less is considered pseudonymised data and subjects the data controller to legal obligations. The lack of a similar intermediate standard in Singapore risks depriving reversibly de-identified data of legal protection. One key example is differential privacy, a popular privacy standard for a class of data de-identification techniques. It prevents the re-identification of individuals at a high confidence level by adding random noise to computational results queried from the data. However, like many other data anonymisation techniques, it does not completely prevent re-identification. This article first highlights the value of differential privacy in exposing the need for an intermediate legal standard for anonymisation under Singapore data protection law. It then explains how differential privacy’s technical characteristics could help establish regulatory standards for privacy by design and help organisations fulfil data breach notification obligations.
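To make the mechanism concrete, below is a minimal illustrative sketch of differential privacy’s core idea: calibrated Laplace noise added to a query result. It is not drawn from the essay itself; the dataset, function names and the choice of epsilon are hypothetical, and a real deployment would also need privacy-budget accounting across queries.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy for this query.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of individuals in a survey.
ages = [23, 35, 41, 29, 52, 38, 47, 31]

# Query: how many respondents are over 40? Smaller epsilon means more
# noise and stronger privacy; the noisy answer masks whether any single
# individual is present in the data.
print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))
```

Because the noise is random and finite, an adversary with enough repeated queries or auxiliary information can still narrow down an individual’s contribution; differential privacy bounds that risk probabilistically rather than eliminating it, which is precisely why such data sits awkwardly between “anonymised” and “personal” data.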

TechLaw.Fest 2020 Quick Chats: Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London; Director of the Centre for Commercial Law Studies

Reading time: 8 minutes

Interview by Josh Lee, Lenon Ong and Elizaveta Shesterneva | Edited by Josh Lee

TechLaw.Fest 2020 (“TLF”) will take place online from 28 September – 2 October 2020, becoming the virtual focal point for leading thinkers, leaders and pioneers in law and technology. In the weeks leading up to TLF, the LawTech.Asia team will be bringing you regular interviews and shout-outs covering some of TLF’s most prominent speakers and the topics they will be speaking about.

This week, LawTech.Asia received the exclusive opportunity to interview Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and Director of the Centre for Commercial Law Studies. Ian will be speaking at a panel on “Global Perspectives on Tackling AI Governance” on the second day of TLF (29 September 2020).

The Epistemic Challenges Facing the Regulation of AI

Reading time: 8 minutes

Written by Tristan Koh and Josh Lee

The regulation of artificial intelligence (“AI”) has been a hot topic in recent years. This may stem from increased societal awareness of: (a) the possibilities that AI may deliver across various domains; and (b) the risks that the implementation of AI may cause (e.g., the risk of bias, discrimination, and the loss of human autonomy). These risks, in particular, have led renowned thought leaders to claim that AI technologies are “vastly more risky than North Korea” and could be the “worst event in the history of our civilisation”.

A key challenge facing any policymaker creating regulations for AI (or, for that matter, any new technology), however, is the epistemic (i.e., knowledge-based) challenge: policymakers must have domain knowledge to sufficiently appreciate the scope, size, degree and impact of any regulation, and to propose solutions that are effective and pragmatic.[1] In fact, some governments have recognised that subject-matter expertise is lacking when policies or regulations are being crafted.[2] To effectively regulate the development and use of AI, policymakers and regulators will need to possess a deep understanding of AI technology and its technical underpinnings.

While a full exposition of AI technology is beyond the scope of this short article, it sets out some of the key technical features that policymakers and regulators should consider in the regulation of AI. In particular, this piece focuses on neural networks, a key element of modern AI systems.
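As an illustrative aside, the sketch below shows the arithmetic at the heart of a neural network: layers of weighted sums followed by non-linear activations. It is a toy example written for this page, not code from the article; the layer sizes and random weights are hypothetical, and real systems learn their weights through training rather than random initialisation.

```python
import numpy as np

def relu(x):
    # A common non-linear activation: negative values are clipped to zero.
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass through a small fully connected network.

    Each layer computes a weighted sum of its inputs plus a bias, then
    applies a non-linearity (the same one at every layer, for simplicity).
    The model's behaviour lives entirely in these numeric weights, which
    is one reason such systems are hard to inspect and explain.
    """
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(W @ activation + b)
    return activation

# Hypothetical two-layer network: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]

print(forward(np.array([0.5, -1.2, 3.0]), weights, biases))
```

Even this toy network makes the epistemic point: its output is determined by dozens of opaque numeric parameters rather than legible rules, and production models scale this to millions or billions of parameters.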

LawTech.Asia’s Response to Public Consultation on Model AI Governance Framework

Reading time: 2 minutes

On 23 January 2019, the Personal Data Protection Commission (“PDPC”), part of the Infocomm Media Development Authority, published its Model Artificial Intelligence Governance Framework (“Model Framework”). The PDPC also launched a public consultation to receive feedback on the Model Framework.

As an organisation committed to thought leadership in law and technology (with AI regulation a key area of focus), LawTech.Asia produced a response to the public consultation on 24 June 2019.

LawTech.Asia’s response comprised the following two sections:

  1. A framework tailored for the implementation of the Model Framework in the legal technology sector. Tapping on LawTech.Asia’s familiarity with the legal and legal technology sectors, LawTech.Asia produced a customised framework specifically for applying the Model Framework to the legal technology industry. We hope that this customised framework gives legal technology firms deploying AI greater guidance in aligning their practices with the implementation guidelines set out in the Model Framework.
  2. Comments and feedback on each specific section of the Model Framework, namely: the overall principles set out in the Model Framework, internal governance measures, determination of the AI decision-making model, operations management, and customer relations management. The common thread tying our comments together is that the Model Framework could go further in elaborating on some of the guidelines it sets out, and in specifying more precisely the ends it aims to achieve.

Our response may be downloaded for reference here:

In closing, we emphasise that the views set out within our response are wholly independent. They do not represent the views of any other organisation save for LawTech.Asia.

LawTech.Asia is also grateful to our partner and friend, Ms Shazade Jameson from the World Data Project, for her guidance and assistance in the preparation of our response.

The LawTech.Asia Team
