Written by Deng Haiying | Edited by Josh Lee Kok Thong
LawTech.Asia is proud to collaborate with the Singapore Management University Yong Pung How School of Law’s LAW4060 AI Law, Policy and Ethics class. This collaborative special series is a collection featuring selected essays from students of the class. For the class’ final assessment, students were asked to choose from a range of practice-focused topics, such as writing a law reform paper on an AI-related topic, analysing jurisdictional approaches to AI regulation, or discussing whether such a thing as “AI law” exists. The collaboration is aimed at encouraging law students to analyse issues using the analytical frames taught in class, and apply them in practical scenarios combining law and policy.
This piece, written by Deng Haiying, seeks to tackle challenges that generative AI brings to copyright law. In doing so, Haiying’s paper explores the current state of the law in Singapore on generative AI and copyright, factors to be taken into account when exploring regulatory reforms in this area, and possible regulatory solutions to tackle the copyright challenges posed by generative AI.
Introduction
“Good artists copy, great artists steal”. This 20th-century quote by Pablo Picasso has once again gained prominence in today’s age of Generative Artificial Intelligence (“Gen AI”). While Picasso likely did not envisage this quote extending to the current reality of copyright-infringement lawsuits against Gen AI companies such as OpenAI and Stability AI,[1] the phrase seems anachronistically apposite for modern intellectual property concerns.
Indeed, technological advancements perennially knock on the gates of copyright.[2] Gen AI – which refers to tools (such as ChatGPT) that can generate content such as text, audio-visual material, or even programming code in response to a prompt – is no exception.[3] As the capabilities of Gen AI applications (“GAIA”) develop rapidly and become more accessible, there have been increasing calls for intellectual property offices around the world to clarify pressing copyright issues.[4]
This paper attempts to tackle the notable challenges that Gen AI brings to copyright law. The first section sets out burgeoning issues surrounding Gen AI, namely: (1) whether the training of GAIAs potentially infringes copyright, (2) whether Gen AI outputs can be protected by copyright, and (3) the lack of transparency requirements when training GAIAs. It also discusses the current state of the law in Singapore on these issues. The second section expounds on the factors to be considered when approaching regulatory reform, and posits that any proposed reform will have to (1) be in tandem with Singapore’s broader AI goals, (2) be adaptable and future-proof, and (3) take into account steps taken by global AI players such as the EU, US and South Korea. The third section offers possible regulatory solutions to tackle the current issues, namely: (1) imposing a levy on GAIA providers, (2) providing increased clarity in the Copyright Act through more illustrations and a clearer distinction between AI-generated and AI-assisted use cases, (3) adopting a co-regulatory approach to improve transparency and disclosure of training datasets, and (4) increasing stakeholder dialogue.
Burgeoning copyright issues
Copyright – the protection of the expression of ideas embodied in a medium such as printed text – is rooted in protecting and incentivising the products of human creativity and intelligence.[5] However, this line is increasingly blurred as GAIAs become capable of generating content that is nearly indistinguishable from that created by the most skilled humans.[6] This section sets out emerging copyright issues in relation to Gen AI and how the current state of the law – the Copyright Act 2021 (“Copyright Act”) and judicial case law – is dealing with them.
Whether training of GAIA potentially infringes copyright
With over 100 million active users just three months after its launch, ChatGPT is the fastest growing consumer application in digital history.[7] Such GAIAs, which are built on Large Language Models (“LLMs”), are trained on large volumes of data for machine learning and machine reading purposes in order to discover patterns, create new knowledge, and generate insights.[8] This process is commonly known as text and data mining (“TDM”), and the ingestion and analysis of large datasets trains such models to detect and mimic human responses.[9] These applications are then able to deploy sophisticated natural language processing techniques to communicate fluently with humans.[10]
TDM is a crucial first step in many machine learning and digital humanities applications. The TDM process can generally be described in three steps: (1) access to content, (2) the extraction or copying of content, and (3) the mining of the text and data to extract meaningful insights.[11] Given that the development and fine-tuning of AI solutions is data-intensive, the greater the volume and quality of the data, the more robust and accurate the resulting AI solution.[12]
The actual TDM takes place in the third step, whereas the first two steps are typically the ones that run afoul of copyright, as training an LLM inevitably involves the reproduction of entire works or substantial portions of them.[13]
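For readers less familiar with the mechanics, the following is a minimal, illustrative Python sketch of the three TDM steps described above. The source URLs, corpus directory and word-frequency “mining” are hypothetical simplifications introduced purely for illustration; they do not reflect any particular GAIA provider’s actual pipeline, which would crawl at web scale and train a neural network rather than count words.

```python
import re
from collections import Counter
from pathlib import Path
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical sources, for illustration only; a real LLM pipeline crawls billions of pages.
SOURCES = ["https://example.com/", "https://example.com/article-1"]
FALLBACK_TEXT = "placeholder text used when a source cannot be reached"

def access(url: str) -> str:
    """Step 1: access to content."""
    try:
        with urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8", errors="ignore")
    except (URLError, OSError):
        return FALLBACK_TEXT

def copy_to_corpus(text: str, corpus_dir: Path, name: str) -> Path:
    """Step 2: extraction/copying - the verbatim reproduction that raises
    the copyright questions discussed above."""
    corpus_dir.mkdir(exist_ok=True)
    path = corpus_dir / f"{name}.txt"
    path.write_text(text, encoding="utf-8")
    return path

def mine(corpus_dir: Path) -> Counter:
    """Step 3: the mining itself - here reduced to a toy word-frequency count;
    real systems fit statistical models over the copied corpus."""
    counts: Counter = Counter()
    for path in corpus_dir.glob("*.txt"):
        counts.update(re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower()))
    return counts

if __name__ == "__main__":
    corpus = Path("corpus")
    for i, url in enumerate(SOURCES):
        copy_to_corpus(access(url), corpus, f"doc{i}")
    print(mine(corpus).most_common(10))
```

As even this toy sketch shows, the reproduction occurs in the first two steps, before any analysis takes place; it is this copying, rather than the statistical mining itself, that sits uneasily with copyright.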
Indeed, the New York Times’ (“NYT”) lawsuit against Microsoft and OpenAI is rooted precisely in the argument that the defendants’ LLM tools were built by copying and using millions of copyrighted news articles.[14] Further, NYT argues that ChatGPT causes “commercial and competitive injury by misattributing content” that NYT did not, in fact, publish.[15]
Similarly, on 9 April 2024, Singaporean writers such as Ally Chua and Ng Kah Gay voiced their discontent over their works being used – potentially in breach of copyright – for the National Multimodal LLM Programme, an initiative by the Infocomm Media Development Authority (“IMDA”).[16] This programme seeks to pioneer the development of an AI model trained to understand Singapore’s and South-east Asia’s unique linguistic characteristics and multilingual environment.[17]
Computational Data Analysis
The first exception that may apply to overcome these copyright issues is found in section 244 of the Copyright Act, which covers computational data analysis (“CDA”). With regard to LLMs, it is clear that copying entire or significant portions of works and storing them in computer memory constitutes prima facie infringement under the Copyright Act.[18] However, section 244 of the Copyright Act allows the reproduction of works for the purpose of CDA, which refers to “using a computer program to identify, extract and analyse information or data from the work”.[19] This is synonymous with TDM.[20]
For the exception to apply, there must be “lawful access” to the works copied. The Copyright Act currently provides only two negative illustrations of “lawful access” – breaching the terms of use of a database and circumventing paywalls.[21] It is clear why LLMs such as ChatGPT, which are trained on datasets such as Common Crawl and BookCorpus, would be regarded as having “unlawful” access where such training circumvents paywalls for machine learning purposes.[22]
However, the CDA exception is insufficient in reflecting the realities of GAIA use cases today. First, the illustrations under section 244 of the Copyright Act are of limited usefulness and create ambiguity as to what “lawful access” entails. For example, it is unclear whether the exception applies where the owner has granted lawful access to online content solely for enjoyment and consumption, but the content has nevertheless been mined.[23] Second, the parameters within which LLMs can “make a copy” relate to the “storing”, “retaining” and “communicating” of a work.[24] However, where would more esoteric operations designed to extract data for the purposes of training GAIAs – such as data cleaning, normalisation and feature extraction – fit in?[25] Professor David Tan has criticised the CDA exception for merely providing an “illusion of certainty in a situation that legislators may not completely understand”.[26] Providing increased clarity to reflect the realities of GAIA use cases today would offer some guidance on the first issue of whether the training of GAIAs potentially infringes copyright.
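To make the difficulty concrete, the snippet below is a purely illustrative Python sketch – again, not any provider’s actual pipeline – of what “data cleaning”, “normalisation” and “feature extraction” might look like when applied to an ingested page; the sample HTML and the frequency-based “features” are invented for illustration.

```python
import re
from collections import Counter

# Invented sample of an ingested page, for illustration only.
RAW_PAGE = """<html><body>
  <p>Copyright &copy; 2024 Example Press.</p>
  <p>The  quick brown fox JUMPED over the lazy dog!</p>
</body></html>"""

def clean(html: str) -> str:
    """Data cleaning: strip markup, entities and stray whitespace."""
    text = re.sub(r"<[^>]+>", " ", html)    # drop HTML tags
    text = re.sub(r"&[a-z]+;", " ", text)   # drop HTML entities
    return re.sub(r"\s+", " ", text).strip()

def normalise(text: str) -> list[str]:
    """Normalisation: lower-case and tokenise into a uniform representation."""
    return re.findall(r"[a-z']+", text.lower())

def extract_features(tokens: list[str]) -> dict[str, float]:
    """Feature extraction: derive relative word frequencies - statistics about
    the work rather than a verbatim copy of it."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

if __name__ == "__main__":
    features = extract_features(normalise(clean(RAW_PAGE)))
    print(sorted(features.items(), key=lambda kv: -kv[1])[:5])
```

Operations of this kind transform the ingested text step by step; the open question raised above is where, along that chain, the statutory language of “storing”, “retaining” and “communicating” a work stops applying.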
Fair use
For the fair use exception to apply under section 190 of the Copyright Act, the courts will consider, inter alia, the purpose and character of the use, and the effect of the use upon the market for, or value of, the work (the “first and fourth factors”).[27]
In the seminal US Supreme Court case of Campbell v Acuff-Rose Music Inc, the court’s emphasis on the close linkage between the first and fourth factors has been cited with approval by the Singapore Court of Appeal (“SGCA”).[28] Hence, the more the copying is done to achieve a purpose that differs from the purpose of the original (“transformative purpose”), the less likely it is that the copy will serve as a satisfactory substitute for the original.[29]
However, the concern is that GAIAs may be reproducing certain copyright-protected works without satisfying the requirement of transformative purpose. In Chabon v OpenAI, the court held that ChatGPT’s generation of a summary of a book that it had ingested for training purposes did not qualify as fair use, whereas a critical essay on the book would be the equivalent of a book review and may fall within fair use.[30] This factor is usually satisfied where the work has been used for a transformative purpose and not merely for the reproduction of creative expression from the underlying works.[31]
Indeed, the current GAIA market players such as ChatGPT and Stable Diffusion are mostly highly successful commercial enterprises which merely reproduce creative expression in their outputs.[32] Hence, the fair use exception is unlikely to apply to such LLMs.
Can the output of GAIAs be protected by copyright?
In Singapore, the position appears to be that Gen AI output does not receive copyright protection at all, unless there is an identifiable human author.[33] Under the Copyright Act 2021, there is a distinction drawn between authorial and non-authorial works.
It is clear that “authorial works” must be created by human authors to be protected by copyright.[34] In Asia Pacific Publishing v Pioneers & Leaders (Publishers) Pte Ltd, the SGCA held that without the identification of a human author from whom the work originates, there can be no “original work” capable of copyright protection.[35] Further, the standard for originality follows the “creativity” approach, which the SGCA has described as requiring the application of intellectual effort, creativity, or the exercise of mental labour, skill, or judgment towards the authorial creation.[36]
This view put forth by the SGCA presupposes that human creativity is ontologically exceptional. However, this view may be problematic, as there is no scientific or philosophical consensus on the nature of creativity.[37] It is thus perplexing to argue that only works by humans are creative.[38] This is especially so when influential thinkers such as Marvin Minsky, also known as the “Father of AI”, have likened the way a human mind thinks to a purely mechanical process – in other words, similar to how some AI systems operate.[39]
In addition, this position becomes increasingly uncertain given the distinction between “AI-generated” and “AI-assisted” works. It has been suggested that it is generally easier to argue that copyright subsists in an output that is AI-assisted (in other words, where AI has been used to augment the final creation) than in one that is completely AI-generated.[40] This is because, where Gen AI has merely been part of the creative process, experts have argued that it is arguably no different from using any other tool.[41] On the other hand, for AI-generated creations, the AI exercises “cognitive” abilities (i.e., akin to emulating how a human brain works and thinks) in generating the work with no human intervention.[42]
The question then is when a work ceases to be “AI-assisted” and becomes “AI-generated”. Put simply, how much human contribution to the Gen AI output is sufficient for the output to constitute an authorial work by a human, and therefore be worthy of copyright protection? This is an area that regulators should consider to ensure robust copyright laws.
Lack of transparency requirements
To prove copyright infringement, one must show that an infringer copied a protected work and that there is substantial similarity between the protected and the infringing work.[43] Hence, if a work is not actually copied from a copyright-protected work, there is no infringement regardless of how similar the works are.
What the current Copyright Act captures are outputs that reflect the input data. Thus, if an LLM bypasses a paywall to obtain a work, the original owner of the work can bring a copyright infringement claim and may be awarded an injunction or damages if they successfully prove their case.[44]
However, due to the lack of transparency in disclosing training data, the glaring problem is that it is virtually impossible to prove that a piece of work has been used as training input unless the output exhibits substantial similarity to the original work.[45] This is exacerbated when GAIAs generate works that no longer have any “direct resemblance to a specific pre-existing work”.[46] Take, for example, the outputs of algorithms trained on synthetic data – data artificially generated using real, organically generated data as input.[47] The absence of any mandatory disclosure regulation, coupled with Gen AI companies’ reluctance to disclose their training datasets, makes it nearly impossible for the original copyright holder even to know that their copyrighted work has been infringed in the first place.
Factors to consider when conducting regulatory reforms
Before delving into the possible regulatory solutions, this section discusses some factors to consider when planning regulatory reform. The core dilemma is that protecting copyright too strictly could hinder the development of new tools and of innovation enhanced by GAIAs, while failing to protect those rights could render millions of jobs unsustainable and undermine the rights enjoyed by copyright holders.[48]
While it is worthwhile to assess normatively what the law ought to be, law reform should not be conducted on that basis alone. A whole host of other considerations and stakeholders are at play. It is tempting to assume that, once a position on what the law should be has been reached, reform should simply follow suit. That should not be the case: ultimately, law reform should be conducted carefully and with due consideration of the multiple factors discussed below.
Compatibility with Singapore’s broader AI goals
One such consideration is whether the proposed law reform is compatible with Singapore’s broader AI goals, which have been developed by Parliament and various ministries through stakeholder dialogue and engagement. Ensuring that copyright law reform on Gen AI is in tandem with these broader goals is a practical and essential consideration, as it ensures consistency between law and policy. It also sends a clear and robust signal of Singapore’s position as a global player in the field of AI.
Some of these initiatives include the Gen AI Sandbox, which helps SMEs get a head start in capturing new AI opportunities,[49] and the provision of access to Gen AI expertise and resources to businesses as part of Singapore’s Smart Nation initiative.[50] As reiterated by then-Deputy Prime Minister Lawrence Wong, these initiatives aim to position Singapore as a leader in the field of AI and to ensure that AI is used for the public good.[51]
Adaptable and future-proof laws and regulations
When conducting law reform, it is important to have laws and regulations that are adaptable to new technologies without the need for legislative intervention.[52] Having adaptable laws not only reduces the need for repeated rounds of legal reform every time a new technology emerges, but also spares the legislature from having to predict in advance the precise uses that would come within the scope of the newly enacted regulation.[53]
Further, there will often be a substantial lag before rules can be amended, and hence there is a risk that, in the interim, decisions made by judges and adjudicators on imperfect information will chill socially desirable behaviour.[54]
Taking reference from global AI players
Singapore’s regulatory approach places great emphasis on practicality and on promoting the international interoperability of regulatory frameworks. This is largely because Singapore is still considered a “deployer” of AI rather than a “developer”, and would hence naturally be a price-taker. However, this does not mean that Singapore is without agency in deciding its own regulatory approach; it simply needs to be strategic in conducting such reforms so as to maximise the economic and welfare benefits AI could bring. By considering the approaches taken by global players such as the EU, the US and even South Korea, Singapore can learn from and avoid their pitfalls while evaluating the suitability and advantages of those approaches. It is therefore imperative to consider the approaches of these global players.
Proposed regulatory reform
Regulators need not resort only to hard law; they also have at their disposal wide-ranging tools such as industry codes of conduct, policy co-creation exercises, and non-binding rules meant to induce changes in conduct. As discussed above, there remain gaps in copyright law when it comes to the fast-paced development of Gen AI.
The proposed reforms in this section cover a range of regulatory solutions to help bridge the gap, from the most stringent to the most lenient. With this range, regulators have the flexibility to choose the approach that would be most beneficial to Singapore.
Imposing a levy for human remuneration purposes
As discussed above, the training of GAIAs on copyrighted datasets has left many copyright owners deeply upset. This is because most GAIAs can mimic human works only after having had the opportunity to analyse copyrighted human creations.[55] The livelihoods of existing copyright holders such as authors are being adversely affected, with no safeguards or due consideration put in place. Human authors should thus be remunerated, as this is a “parasitic usurpation of [their] market”.[56]
From a market perspective, failing to protect human-authored works used in the training of Gen AI would reduce the incentive for human creations, which is the sine qua non for copyright laws.[57]
It is proposed that a one-time levy be imposed on GAIA providers. This lump-sum levy could then be used to benefit copyright holders by offering financial support, training opportunities and grants for new projects.[58] Revenue accruing from remuneration payments can be collected by existing copyright associations such as the Copyright Licensing & Administration Society of Singapore, and then paid out through Collective Management Organisations – private entities appointed by rights owners to manage copyright works and collect royalties and payments.[59]
This approach balances equitable remuneration, which fosters and supports human creative work, with continued allowance for TDM uses. Moreover, a general levy offers a standard and reasonable payment that applies equally to all players in the industry.[60] Such an approach would be in line with the Singapore Parliament’s view of the benefits of TDM,[61] as data analysis improves the underlying dataset by making it as complete as possible, which is indispensable to the current digital economy.[62]
At the same time, however, it is conceded that this is potentially too stringent a requirement to impose on GAIA providers, and it may even deter them from entering the Singapore market, which would go against Singapore’s broader initiatives in promoting Gen AI uses.[63] This is exacerbated by the fact that no other major AI regime, such as the EU or the US, has imposed such an onerous standard. Hence, it is recommended that this approach take a backseat in favour of the others for the time being, unless the need for such strict regulation arises.
Increasing clarity under the Copyright Act
As discussed in paragraph 12 of this paper, the Copyright Act should include more illustrations and examples to clarify what constitutes “lawful access” for the purposes of TDM use cases.[64] Moreover, clarifying the line to be drawn between AI-generated and AI-assisted outputs, and the level of human contribution needed, can shed light on the debate over whether the output of a GAIA can be protected by copyright.
The answer to the latter will have consequences for the debate on authorship. If the threshold to qualify as an AI-assisted work is low – such that a user entering a prompt suffices as human contribution and the output hence qualifies for copyright protection – would this mean that authorship is conferred by merely asking a GAIA to generate something creative?[65] While this may not be unfair to the GAIA, as it has no interest in taking credit for the work, it would equate legitimate human creativity with someone simply asking an AI to perform a task – which is unfair to other human authors. Greater clarity is hence welcome.
Adopting a co-regulatory approach to improve transparency and training data disclosure
With regard to the lack of transparency in the disclosure of LLM training datasets, it is suggested that we look towards South Korea’s “Preliminary Adequacy Review System” administered by its Personal Information Protection Commission (“PIPC”). Under this three-step initiative, a business that is uncertain whether its use of personal information in new technologies complies with South Korea’s Personal Information Protection Act can apply for a prior adequacy review by the PIPC.[66]
By pre-emptively assessing compliance, applicants can effectively use the PIPC’s prior approval as a mark of assurance. Should they later face alleged breaches or litigation, the review may serve as a mitigating factor, or simply as proof of adherence to data privacy laws. This effectively incentivises market players to cooperate with regulatory authorities.
Transposing this co-regulatory approach to copyright, the Intellectual Property Office of Singapore could conduct a similar disclosure initiative. It could invite GAIA providers to apply for a review of whether their models have been trained on data of clean and legally permissible provenance that does not violate copyright. If GAIA providers are able to show that their training datasets do not violate copyright, a “checkmark” could be issued to them as proof of cooperation with transparency disclosures. In this way, GAIA providers are incentivised to cooperate with regulators in a manner less draconian than imposing mandatory disclosure of training datasets.
Such an approach can promote greater transparency in the Gen AI ecosystem, allowing copyright holders to know when their rights have been violated. Moreover, it allows regulators to remain flexible and adaptable, as they can work closely with GAIA market players to address future issues more efficiently.
Increasing stakeholder dialogue
In objecting to IMDA’s initiative of using copyrighted works to train its LLM, local writers voiced their displeasure with the way IMDA engaged with stakeholders.[67] In particular, author Ally Chua was unhappy with how IMDA conducted its survey, which seemed to proceed on the foregone conclusion that the use of copyrighted works had already been acquiesced to, leaving only the remuneration to be negotiated.[68] Regulators must be sensitive to contentious topics such as the training of GAIAs on copyright-protected works, as this is far from a settled area of law.
Such incidents highlight the importance of discussing controversial issues with stakeholders, especially where their livelihoods and future career development are concerned. Having meaningful dialogue with the different stakeholders affected by the development of Gen AI is important when implementing reforms, and is also in line with Singapore’s goal of harnessing technology to improve people’s lives.[69]
Conclusion
This paper has discussed the current issues posed by Gen AI to copyright law, as well as some proposed solutions to consider when conducting law reform. Ultimately, the pros and cons of each solution must be weighed before it is introduced in Singapore. Furthermore, given the huge economic and social potential of Gen AI, implementing any proposed changes resembles open-heart surgery: these reforms cannot be undertaken without prior interdisciplinary research and discussion with different stakeholders (copyright holders, software developers, IP offices, key user groups, etc.).
Editor’s note: This student’s paper was submitted for assessment in end-May 2024. Information within this article should therefore be considered as up-to-date until that time. The views within this paper belong solely to the student author, and should not be attributed in any way to LawTech.Asia.
[1] See Sarah Andersen; Kelly Mckernan; Karla Ortiz; individual and representative plaintiffs v Stability AI Ltd 23-cv-00201-WHO (N.D. Cal. Jan 13, 2023).
[2] Kalpana Tyagi, Copyright, text & data mining and the innovation dimension of generative AI, Journal of Intellectual Property Law & Practice, 2024, Vol. 00, No. 00 at p 1. (“Tyagi”)
[3] Sylvie Delacroix, Sustainable Data Rivers? (14 March 2023), Critical AI (Duke University Press), April 2024 issue.
[4] Intellectual Property Office of Singapore, SMU Centre for AI and Data Governance, When Code Creates: A Landscape Report on Issues at the Intersection of Artificial Intelligence and Intellectual Property Law, at p 8. (“Landscape Report”)
[5] Id, at p 38.
[6] Bart Mueller, OpenAI’s Sora & the Role of the US Copyright Office, The Vanderbilt Journal of Entertainment and Technology Law, Vanderbilt University.
[7] Authors Guild v OpenAI (Complaint filed on 19 September 2023) No. 1:23-cv-8292, at para 73.
[8] Tyagi, supra n 2 at p 7.
[9] Authors Guild v OpenAI, supra n 7 at para 55.
[10] Timm Teubner, Christoph M. Flath, Christof Weinhardt, Will van der Aalst & Oliver Hinz, Welcome to the Era of ChatGPT, Bus Inf Syst Eng at p 95.
[11] Tyagi, supra n 2 at p 9.
[12] Legal500 website <https://www.legal500.com/developments/thought-leadership/a-i-copyright-did-singapores-copyright-act-2021-solve-copyright-problems-in-the-training-of-a-i/ > (accessed 1 April 2024).
[13] Tyagi, supra n 2 at p 8.
[14] The New York Times Company v. Microsoft Corporation et al, Case 1:23-cv-11195 (filed 27 December 2023, United States District Court for the Southern District of New York) at para 2.
[15] Id, at paras 134–136.
[16] The Straits Times <https://www.straitstimes.com/life/arts/singaporean-writers-object-to-imda-using-works-to-train-a-large-language-model> (accessed 10 April 2024).
[17] Ibid.
[18] Paul Keller ‘Protecting creatives or impeding progress? Machine learning and the EU copyright framework’ Kluwer Copyright Blog.
[19] Copyright Act 2021, section 243(a).
[20] David Tan, Thomas Lee Chee Seng, Copying Right in Copyright Law, Fair Use, Computational Data Analysis and the Personal Data Protection Act, 2021 33 SAcLJ 1032 at p 1055. (“Tan and Lee”)
[21] Copyright Act 2021, section 244(d), illustrations (a) and (b).
[22] David Tan, Generative AI and Copyright Part 2: Computational Data Analysis Exception and Fair Use, [2023] SAL Prac 25. (“Tan, part 2”)
[23] Tan and Lee, supra n 20 at p 1062.
[24] Legal 500, supra n 12.
[25] Ibid.
[26] Tan and Lee, supra n 20 at p 1062.
[27] Intellectual Property Office of Singapore, Copyright Factsheet on Copyright Act 2021, at p 14. (“Factsheet”)
[28] Campbell v Acuff-Rose Music Inc 510 US 569 at 591 (1994).
[29] Ibid.
[30] Id, at p 13.
[31] David Tan, Generative AI and Copyright Part 1: Copyright Infringement, [2023] SAL Prac 24, at p 12. (“Tan, part 1”)
[32] Id, at p 12.
[33] Simon Chesterman, Good Models Borrow, Great Models Steal: Intellectual Property Rights and Generative AI, Policy & Society Journal Special Issue: Governance of Generative Artificial Intelligence. (“Chesterman”)
[34] Section 9 of the Singapore Copyright Act 2021 defines “authorial work” as a literary, dramatic, musical or artistic work.
[35] Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] SGCA 37 at [75] and [82]. It is to be noted that the claimant in this case was a corporation, but this point is likely to extend to non-human entities such as GAIAs.
[36] Global Yellow Pages Limited v Promedia Directories Pte Ltd [2017] SGCA 28 at [24].
[37] Ryan Abbott, Elizabeth Rothman, Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence, Florida Law Review, Vol 75, at p 1185. (“Abbott, Rothman”)
[38] Ibid.
[39] See generally Stephen L. Thaler, Vast Topological Learning and Sentient AGI, 8 Journal of A.I. and Consciousness 81 (2021).
[40] Landscape Report, supra n 4, at p 8.
[41] Ibid.
[42] Ibid.
[43] See generally Global Yellow Pages Limited v Promedia Directories Pte Ltd and another matter [2017] SGCA 28.
[44] Intellectual Property Office of Singapore <https://www.ipos.gov.sg/about-ip/copyright/infringement-enforcement> (Accessed 1 April 2024).
[45] Tan, part 1, supra n 31.
[46] Tyagi, supra n 2 at p 8.
[47] Khaled El Emam, Accelerating AI with Synthetic Data: Generating Data for AI Projects, Nvidia Report 2020, at p 2.
[48] Chesterman supra n 33 at p 13.
[49] Infocomm Media Development Authority, <https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2024/sg-first-genai-sandbox-for-smes> (Accessed 4 April 2024).
[50] Smart Nation Singapore, <https://www.smartnation.gov.sg/media-hub/press-releases/20240301a/> (Accessed 4 April 2024).
[51] Ibid.
[52] Tan and Lee, supra n 20 at p 1066.
[53] Ibid.
[54] Ibid.
[55] G Frosio, “Should we ban generative AI, incentivise it or make it a medium for inclusive creativity?” in E Bonadio and C Sganga (eds), A Research Agenda for EU Copyright Law (Edward Elgar, Cheltenham, forthcoming).
[56] Ibid.
[57] Martin Senftleben, Generative AI and Author Remuneration, IIC – International Review of Intellectual Property and Competition Law, at p 1537.
[58] Ibid.
[59] See section 459 and 460 of the Copyright Act 2021.
[60] Tyagi, supra n 2 at p 16.
[61] Gavin Foo, Trina Ha, Sustaining Innovation and AI in a data-driven climate. Industry and Critical Reception to Singapore’s Computational Data Analysis, [2023] SAL Prac 28, at para 24.
[62] Ibid.
[63] See paragraphs 27–28 of this paper.
[64] See paragraph 12 of this paper.
[65] Abbott, Rothman, supra n 37 at p 1199.
[66] Personal Information Protection Commission <https://www.dataguidance.com/news/south-korea-pipc-announces-pilot-preliminary-adequacy> (Accessed 7 April 2024).
[67] See paragraph 9 of this paper.
[68] Ibid.
[69] Smart Nation Singapore <https://www.smartnation.gov.sg/about-smart-nation/transforming-singapore/#:~:text=The%20goal%20is%20to%20improve,friction%20that%20the%20region%20faces.> (Accessed 9 April 2024).