Written by: Marc Lauritsen
Will AI make law better?
Yes.
For whom?
For many on both sides of the legal profession’s moat.
I’ll be brief.
(If you’re looking for verbosity, see my other writings. Links to some decorate this one.)
Written by: Alexis Sudrajat and Alexis N. Chun
The inaugural Computational Law Conference (“CLAWCON”) ran from 12 to 14 July 2023. Hosted at Singapore Management University (“SMU”), the event drew speakers and attendees from private, public, regulatory, and academic organisations, some of whom had flown in from around the world. They had come together to discuss the issues surrounding computational law from a multi- and interdisciplinary perspective. The event was organised by SMU’s Centre for Computational Law (“CCLAW”), Singapore’s first and only research centre focused on applied research at the intersection of law and technology.[1]
Two distinguished speakers, Professor Lee Pey Woan, Dean of the Yong Pung How School of Law (“YPHSL”), and Mr Yeong Zee Kin, Chief Executive of the Singapore Academy of Law (“SAL”), delivered the opening keynote addresses of CLAWCON 2023. This article summarises both keynote speeches.
Written by Alistair Simmons and Matthew Rostick | Edited by Josh Lee Kok Thong
In recent months, many jurisdictions in the Asia-Pacific (“APAC”) have adopted or are considering various forms of AI governance mechanisms. At least 16 jurisdictions in APAC have begun some form of AI governance, and this number will likely continue to increase. This paper surveys the AI governance mechanisms across a number of APAC jurisdictions and offers some observations at the end.
This paper segments AI governance mechanisms into four categories:

- Direct AI regulations: enforceable rules that regulate the development, deployment or use of AI directly as a technology, and consequently have regulatory impact across multiple sectors.
- Voluntary frameworks: voluntary, non-binding guidance issued by governmental entities that directly addresses the development, deployment or use of AI as a technology. Unlike sector-specific measures, these frameworks typically address the use of AI across multiple sectors.
- Indirect regulations (data & IP): enforceable legal rules that do not regulate the development, deployment or use of AI directly as a technology, but are rules of more general applicability that nevertheless have an impact on the development, deployment or use of AI. As the scope of this category is potentially broad, we have focused on data protection/privacy and intellectual property laws in this paper.
- Sector-specific measures: binding and non-binding rules and guidelines issued by sector regulators that are relevant to the development, deployment or use of AI in an industry. To avoid getting bogged down in the specifics of whether particular rules and guidelines are technically binding, we have presented them together.
For the avoidance of doubt, this paper addresses legal governance mechanisms only. There may be other initiatives afoot to drive alignment and good practices from a technical perspective; we do not seek to address such technical measures in this paper.