Written by Victoria Rui-Qi Phua | Edited by Josh Lee Kok Thong
We’re all law and tech scholars now, says every law and tech sceptic. That is only half-right. Law and technology is about law, but it is also about technology. This is not obvious in many so-called law and technology pieces, which tend to focus exclusively on the law. No doubt this draws on what Judge Easterbrook famously said about three decades ago, to paraphrase: “lawyers will never fully understand tech, so we might as well not try”.
In open defiance of this narrative, LawTech.Asia is proud to announce a collaboration with the Singapore Management University Yong Pung How School of Law’s LAW4032 Law and Technology class. This collaborative special series features selected essays from students of the class. The essays span a broad range of technology law and policy topics, and the collaboration aims to encourage law students to think about where the law is, and what it should be, vis-à-vis technology.
This piece, written by Victoria Phua, argues for attributing electronic personhood status to “strong AI”. According to her, algorithms trained by machine learning are increasingly performing, or assisting with, tasks previously exclusive to humans. As these systems now make decisions rather than merely support them, the emergence of strong AI raises new legal and ethical issues that existing solutions cannot satisfactorily address.

The ‘Mere Tools’ approach regards algorithms as mere tools but does not address active contracting mechanisms. The ‘Agency’ approach treats AI systems as electronic agents but fails to deal with the legal personality and consent issues inherent in agency. The ‘Legal Person’ approach goes further and treats AI systems as legal persons, but has drawn criticism because such systems possess neither morality nor intent. To address the question of legal personality for strong AI, Victoria proposes extending the fiction and concession theories of corporate personality to create a ‘quasi-person’ or ‘electronic person’. This is more satisfactory as it allows for a fairer allocation of risks and responsibilities among contracting parties. It also holds autonomous systems liable for their actions, thereby encouraging innovation. Further, it facilitates the allocation of damages. Lastly, it embodies the core philosophy of human-centricity.