Reading time: 11 minutes

Written by Nanda Min Htin | Edited by Josh Lee Kok Thong

We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces, which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech, so we might as well not try”.

In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series features selected essays from students of the class. Spanning a broad range of technology law and policy topics, the collaboration aims to encourage law students to think about where the law is and what it should be vis-à-vis technology.

This piece, written by Nanda Min Htin, examines the value of differential privacy in establishing an intermediate legal standard for anonymisation in Singapore’s data protection landscape. Singapore’s data protection framework treats privacy-protected data that could still be re-identified as anonymised, so long as there is no serious possibility that re-identification would occur. As a result, such data are not considered personal data and fall outside the protection of Singapore law. In contrast, major foreign legislation such as the GDPR in Europe sets a clearer and stricter standard for anonymised data by requiring re-identification to be impossible; anything less is considered pseudonymised data, and the data controller remains subject to legal obligations. The lack of a similar intermediate standard in Singapore risks depriving reversibly de-identified data of legal protection. One key example is differential privacy, a popular privacy standard for a class of data de-identification techniques. It prevents the re-identification of individuals at a high confidence level by adding random noise to computational results queried from the data. However, like many other anonymisation techniques, it does not completely prevent re-identification. This article first highlights how differential privacy exposes the need for an intermediate legal standard for anonymisation under Singapore data protection law. It then explains how differential privacy’s technical characteristics could help establish regulatory standards for privacy by design and help organisations fulfil data breach notification obligations.
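To make the noise-addition idea concrete, the sketch below shows the classic Laplace mechanism applied to a counting query; this is a minimal illustration of the general technique described above, not any specific implementation discussed in the essay, and all function names here are illustrative.

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5          # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the true count by at most 1. The Laplace
    mechanism therefore adds noise with scale 1/epsilon; a smaller
    epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Each query returns the true count plus random noise, so no single answer reveals with high confidence whether any one individual is in the data, yet repeated or auxiliary information can still narrow the uncertainty, which is why differential privacy reduces rather than eliminates re-identification risk.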