Written by: Marc Lauritsen
Will AI make law better?
Yes.
For whom?
For many on both sides of the legal profession’s moat.
I’ll be brief.
(If you’re looking for verbosity, see my other writings. Links to some decorate this one.)
Written by Alistair Simmons and Matthew Rostick | Edited by Josh Lee Kok Thong
In recent months, many jurisdictions in the Asia-Pacific (“APAC”) have adopted or are considering various forms of AI governance mechanisms. At least 16 jurisdictions in APAC have begun some form of AI governance, and this number will likely continue to increase. This paper scans the different AI governance mechanisms across a number of APAC jurisdictions and offers some observations at the end.
This paper segments AI governance mechanisms into four categories.

Direct AI regulations are enforceable rules that regulate the development, deployment or use of AI directly as a technology, and consequently have regulatory impact across multiple sectors.

Voluntary frameworks are voluntary, non-binding guidance issued by governmental entities that directly addresses the development, deployment or use of AI as a technology. Unlike sector-specific measures, these frameworks typically address the use of AI across multiple sectors.

Indirect regulations (data & IP) are also enforceable legal rules, but they do not regulate the development, deployment or use of AI directly as a technology. They are rules of more general applicability that nevertheless have an impact on the development, deployment or use of AI. As the scope of this category is potentially broad, we have focused on data protection/privacy and intellectual property laws in this paper.

Sector-specific measures are binding and non-binding rules and guidelines issued by sector regulators that are relevant to the development, deployment or use of AI in an industry. To avoid getting bogged down in whether particular rules and guidelines are technically binding, we have presented them together.
For avoidance of doubt, this paper addresses legal governance mechanisms only. There may be other initiatives afoot to drive alignment and good practices from a technical perspective. We do not seek to address technical measures in this paper.
Written by Loh Yu Tong | Edited by Josh Lee Kok Thong
We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces, which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech, so we might as well not try”.
In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series is a collection featuring selected essays from students of the class. Ranging across a broad range of technology law and policy topics, the collaboration is aimed at encouraging law students to think about where the law is and what it should be vis-a-vis technology.
This piece, written by Loh Yu Tong, demonstrates how Singapore’s present criminal framework is ill-prepared to address offensive speech made by autonomous AI chatbots. The author examines the regulatory challenges that may arise, and identifies a negligence-based framework – under which a duty of care is imposed on developers, deployers and malicious third-party interferers – as preferable to an intent-based one. Other viable solutions include employing regulatory and civil sanctions. While AI systems are likely to become more complex in the future, the author holds out hope that Singapore’s robust legal system can satisfactorily balance the deterrence of harm against the risk of stifling innovation.
Written by Pramesh Prabakaran (Associate Author) | Mentored by Huiling Xie | Reviewed by Nydia Remolina
LawTech.Asia is proud to have commenced the third run of its popular Associate Author (2020) Programme. The aim of the Associate Authorship Programme is to develop the knowledge and exposure of student writers in the domains of law and technology, while providing them with mentorship from LawTech.Asia’s writers and tailored guidance from a respected industry mentor.
In partnership with the National University of Singapore’s alt+law and Singapore Management University’s Legal Innovation and Technology Club, five students were selected as Associate Authors. This piece, written by Pramesh Prabakaran and reviewed by industry reviewer Nydia Remolina (SMU School of Law), marks the fourth thought piece in this series. It examines the benefits, risks, and regulatory and legal issues that could arise in relation to the growing trend of telemedicine and AI in Singapore’s context.