Written by Alyssa Asha Minjoot | Edited by Josh Lee Kok Thong
LawTech.Asia is proud to collaborate with the Singapore Management University Yong Pung How School of Law’s LAW4060 AI Law, Policy and Ethics class. This collaborative special series is a collection featuring selected essays from students of the class. For the class’ final assessment, students were asked to choose from a range of practice-focused topics, such as writing a law reform paper on an AI-related topic, analysing jurisdictional approaches to AI regulation, or discussing whether such a thing as “AI law” existed. The collaboration is aimed at encouraging law students to analyse issues using the analytical frames taught in class, and apply them in practical scenarios combining law and policy.
This piece, written by Alyssa Minjoot, explores and analyses South Korea’s approach to AI regulation. It examines how South Korea has been able to take a forward-thinking, proactive and novel approach in formulating AI policies and guidance, while examining the need for clearer and more stringent AI regulations to deal with higher-risk AI systems.
Introduction
Eminent astrophysicist Stephen Hawking famously cautioned that the “development of full artificial intelligence (“AI”) could spell the end of the human race”.[1] Other prominent figures, including Elon Musk and Bill Gates, echoed this sentiment,[2] exemplifying a shared wariness of the consequences of creating something that surpasses human capabilities. Hence, although the rise of AI has been decades in the making, its recent breakthroughs have given rise to opposing schools of thought – either excitement about AI’s potential to revolutionise entire industries, or fear of the unknown regarding how AI could irreversibly alter, and even harm, our current ways of life.
This paper examines a jurisdiction that has embraced the former optimistic view towards AI due to its potential to boost growth, revitalise economies and substantially improve the efficacy levels of daily processes. South Korea (“Korea”) stands out as a prime example of such a jurisdiction due to its substantial investments in AI developments and ambition to cement its status as a global AI powerhouse.
Preliminary framework
This paper will examine Korea’s legal and policy approaches to AI, and how it has addressed ethical concerns arising from AI’s growing influence. While Korea boasts numerous AI policies and guidelines, it is presently in the final stages of drafting comprehensive AI legislation (which has yet to be enacted).[3] Accordingly, Korea’s existing soft laws regulating AI will be examined first, before delving into its progression towards enacting more stringent regulations.
Ultimately, this paper aims to commend Korea for its innovative, efficient and novel AI policies and guidelines, which have distinguished it from other jurisdictions. However, it also highlights the need for more stringent AI regulations, specifically to address issues posed by high-risk AI. These suggestions for improvement will be thoroughly discussed in the final section of this paper.
In addition, the analytical framework deemed appropriate for assessing the efficacy and utility of Korea’s AI initiatives is to examine each initiative’s strengths, weaknesses, opportunities and threats – also known as the SWOT framework.[4] This analysis will aid in grasping the fundamental tenets of each scheme, as well as assessing their longevity amidst the ever-evolving landscape of AI.
Korea’s soft laws regulating AI
At this juncture, it is apposite to define which of Korea’s policies and guidelines fall under the category of soft laws. Here, soft laws encompass quasi-legal instruments which lack binding legal force, or carry weaker binding force than traditional laws.[5]
National Strategy for AI and National Guidelines for AI Ethics
The first important policies to examine are Korea’s National Strategy for AI,[6] and the National Guidelines for AI Ethics.[7] Established in 2019 and 2020 respectively, these documents encapsulate Korea’s plans to achieve AI competitiveness whilst protecting basic human rights,[8] and to emerge as an AI world leader by 2030.[9] However, to better understand these documents, it is imperative to first discuss the catalytic events that significantly influenced Korea’s interest in integrating and regulating AI.
Key events which influenced Korea’s AI strategies
A defining event which catapulted Korea’s AI policies was the AlphaGo match of March 2016.[10] In this match, which pitted Korea’s Go grandmaster Lee Sedol against DeepMind’s Go computer program AlphaGo, AlphaGo emerged the champion.[11] This win for AI came as a tremendous shock, since Go is a highly intricate board game in which players must rely on their intuition, and not merely the rules of the game.[12]
Just two days after the AlphaGo match, the Korean government introduced a plan to invest 1 trillion won into AI research.[13] Five other major AI plans were initiated in subsequent years, and the then-president declared that it was “thanks to the AlphaGo shock” that Korea had realised the significance of AI in the future global climate.[14]
Accordingly, it is crucial to keep in mind the AlphaGo shock when analysing Korea’s subsequent AI initiatives. The immediacy with which Korea formulated plans to regulate and invest in AI underscores the government’s determination to establish its position as a global leader in AI, ensure it remains competitive in the global race to drive AI innovation, and to harness its potential for economic growth.
Korea’s commitment to use technology to drive economic progress is solidified by its inclusion of “technology as an engine of growth” in the Constitution of the Republic of Korea.[15] Evidently, this “rhetoric” and “real policy tenet” has significantly influenced Korea’s AI policies,[16] as it prominently features in the National Strategy and guides the formulation of ethical guidelines to avoid compromising this central focus.
National Strategy for AI
This national policy document outlines Korea’s core objectives to enhance AI competitiveness, maximise AI utilisation, and leverage AI to boost the economy and improve citizens’ quality of life.[17] The strength of Korea’s strategy to ensure it spurs innovation levels is evident in its substantial investment in technologies that will “become the core competitiveness in the AI ecosystem”.[18] This also highlights Korea’s ability to be forward-looking and to capitalise on current opportunities in research and development (“R&D”) which will propel its AI innovation and potential.
Additionally, this emphasis on innovation led to a revision of Korea’s AI laws into a negative regulatory system, whilst employing an “approve first, regulate later” approach.[19] Clearly, the government largely prioritises accelerating innovation as it frames a “future-oriented legal system” as one that supports the AI era rather than requiring AI to abide by existing data protection laws and practices.[20] While this approach may foster innovation, it raises concerns that a negative regulatory system could fall short in enforcing sufficient data protection laws and safeguarding human well-being in the long run.
In another vein, the National Strategy has been critiqued as too reliant on dated narratives of “growth-oriented technology promotion”, perhaps to its detriment.[21] However, this paper argues that while the National Strategy unmistakably emphasises AI’s optimistic effect on economic growth, it also addresses immense AI security threats with proposed solutions. To this end, the document proposes the establishment of AI-based information protection technologies to mitigate AI dysfunction.[22] Additionally, the National Strategy recognises the need for Korea to define ethical standards consistent with global norms,[23] thereby addressing any concern of appearing to fall behind in global AI regulation efforts. It is posited that these efforts are reinforced by the Korean government, as demonstrated by the introduction of the following two policies.
National Guidelines for AI Ethics
A year after the National Strategy was implemented, Korea adopted the National Guidelines for AI Ethics, which set out ten essential requirements for the implementation of human-centred AI.[24] These requirements are stipulated to uphold human dignity, promote the common good of society, and ensure the appropriate use of technology.[25] As aforementioned, the introduction of these guidelines demonstrates the Korean government’s readiness and ability to fulfil its stated commitment to establish ethical guidelines suitable for the influx of AI. It also signifies its proactive approach in promptly addressing potential threats posed by AI to basic human rights.
Nonetheless, the guidelines have faced criticism for their apparent nuanced approach towards AI ethics in industrial technological promotion.[26] This is evident from the guidelines’ preface, which emphasises the necessity not to impede R&D efforts and industrial growth in AI, despite being intended as an ethical guideline which should focus on highlighting societal benefits and mitigating potential harms.[27] While it is not disputed that Korea’s ethical guidelines represent progress in protecting citizens’ rights and welfare, caution is advised to ensure that prioritising AI as a growth mechanism does not overshadow other equally paramount considerations.
Digital Bill of Rights
Korea’s commitment to remain current in its AI policies is further exhibited through its unveiling of the Digital Bill of Rights in 2023.[28] This is a landmark initiative to establish a “new digital order” and offers a foundational framework for novel and universal digital norms.[29] Chiefly, the Korean government showcases its undeniable strength in developing a novel approach to address various issues which are commonly overlooked – such as literacy and disparities caused by AI.[30] Additionally, it goes “beyond ethical and normative discussions” to distinguish certain principles and rights which should be safeguarded through international solidarity and cooperation.[31] This forward-looking approach thus ensures the protection of both foreseeable and unforeseeable rights which may be compromised, amid the highly transient nature and rapid development of AI.
Practically, the Digital Bill of Rights serves as a guiding compass and vision for the development, use and regulation of digital technology,[32] rather than as a legislative document with binding force.[33] Notwithstanding, it seeks to definitively influence Korea’s “legislative, regulatory and operational approaches to AI”,[34] and to set out key principles which can be relied upon as executive guidance.[35]
The efficacy of this bill is bolstered by the Korean government’s determination to integrate the bill’s principles into its policy-making. The National Assembly has since introduced legislative proposals such as the AI Industry Promotion and Trust Base Establishment Act, which embodies the principle of “permission first, subsequent regulation”.[36] Importantly, it identifies AI related to human life and safety as “high-risk AI” – which mandates prior notification and adequate reliability measures.[37]
While this connotes a tremendous step forward for Korea in its mission to set the “global normative order in the digital age”,[38] it must be caveated that this proposed Act does not cover every aspect of the bill.[39] To solidify its position as a global AI leader, Korea may need to adopt more rigorous regulatory practices, especially given recent efforts by the European Union (“EU”) and the USA to enact hard laws regulating AI. This is exemplified by the EU’s recent approval of the AI Act as the world’s first comprehensive AI law,[40] and the Biden Administration’s Executive Order regulating AI standards for safety and security.[41]
Korea’s hard laws regulating AI
In the following section, hard laws are defined as legally binding instruments which confer precise obligations on the parties involved,[42] and may be legally enforced before a court. This section explores Korea’s existing data privacy laws which are applicable to AI, their enforcement mechanisms, and considers the potential enactment of a comprehensive AI Act in Korea.
Amendments to the Personal Information Protection Act
In 2011, Korea enacted the Personal Information Protection Act (“PIPA”) as its data privacy legislation, regulating how personal data is collected and processed.[43] Although the PIPA serves as Korea’s general data protection law, its recent amendments apply to AI-related concerns, showcasing the government’s agility in responding to AI’s evolving landscape.
The Data 3 Act
The first significant amendment involves Korea’s revision of three major data privacy laws in 2020, including amendments to the PIPA, collectively referred to as “the Data 3 Act”.[44] According to the Ministry of Science and ICT (“MSIT”), these amendments primarily aim to concretise the criteria for assessing anonymous information, and seek to develop a “data economy” by introducing the concept of “pseudonymised data”.[45] Importantly, pseudonymised data may be processed and utilised without obtaining the data subject’s consent if its purpose falls within the boundaries of statistics, scientific research and keeping public records.[46]
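To make the concept concrete, pseudonymisation can be sketched in a few lines of code. The sketch below is purely illustrative – the field names, salt handling and hashing method are this author’s assumptions, not a method prescribed by the PIPA or the PIPC. Direct identifiers are replaced with irreversible tokens, while non-identifying attributes useful for statistics and research are retained.

```python
import hashlib

# Illustrative sketch of pseudonymisation (not a PIPA-prescribed method).
# A secret salt is combined with the direct identifier before hashing, so
# records can be linked by token without revealing the identifier itself.
SECRET_SALT = "example-salt"  # hypothetical; in practice kept secret and managed separately

def pseudonymise(record: dict) -> dict:
    token = hashlib.sha256(
        (SECRET_SALT + record["national_id"]).encode("utf-8")
    ).hexdigest()
    return {
        "token": token,                  # replaces the direct identifier
        "age_band": record["age_band"],  # retained for statistical use
        "region": record["region"],
    }

original = {
    "national_id": "900101-1234567",  # hypothetical identifier
    "name": "Hong Gildong",
    "age_band": "30-39",
    "region": "Seoul",
}
print(pseudonymise(original))
```

Because the same salt and identifier always yield the same token, pseudonymised records remain linkable for research purposes, yet the name and resident registration number never leave the controller – which also illustrates why, as discussed below, critics argue that pseudonymisation alone may not suffice where the salt or auxiliary data could re-identify individuals.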
Strengths of the amendments
Korea has put key measures in place to ensure that it achieves the intended effects of the amendments whilst continuing to safeguard data subjects’ privacy. Stringent penalties await companies that misclassify personal data as pseudonymised.[47] Companies must also adequately justify why their use of personal information does not require the data subjects’ consent; falling short of this threshold exposes them to fines and even criminal penalties.[48] Accordingly, the Data 3 Act mitigates the threat of companies misusing data by ensuring transparency and accountability in data-handling practices.
Critiques of the amendments and the strategies employed to address them
Nonetheless, the enactment of these amendments has not been without critique and controversy. The Data 3 Act was enacted to better align the PIPA with the EU’s General Data Protection Regulation (“GDPR”),[49] which permits the non-consensual use of personal data in order to spur societal benefit in the form of publishing research findings, and furthering scientific and statistical research.[50] These reasons lend strong justification to the use of citizens’ personal data whilst avoiding the burden of obtaining additional consent.[51] In contrast, the PIPA amendments were passed with a central focus on data pseudonymisation, instead of on whether such “non-consensual processing” would be beneficial to society.[52] Academics have countered that pseudonymising data may not be an adequate sole precondition for permitting the non-consensual use of personal data.[53]
In addition, although the pseudonymisation of data was introduced to further advancements in AI through data sharing, its inherent complexities and the lack of regulatory guidance led to its rare utilisation by businesses.[54] However, Korea was swift to analyse the reasons behind businesses’ infrequent use of pseudonymisation, and introduced the “Guidelines for Processing Pseudonymous Information” to promote safe data processing.[55] More recently, in 2024, the Personal Information Protection Commission (“PIPC”) refined these standards and established a more comprehensive set of guidelines to aid companies in their safe utilisation of unstructured data.[56] This evidences Korea’s admirable commitment to continually enhance its AI regulatory and innovative capabilities so as to remain at the forefront of AI’s evolving landscape.
In a separate vein, it is equally crucial to examine how Korea has addressed breaches of the PIPA. In December 2020, ScatterLab launched its AI chatbot, “Lee Luda”, which garnered traction for its ability to closely imitate women’s speech patterns.[57] However, it faced criticism for facilitating hate speech and sexual objectification towards minority groups.[58] ScatterLab’s data practices also directly violated the PIPA: the PIPC determined that the company had illegally collected personal information without sufficient consent, and utilised sensitive information without safeguards.[59] Thus, the PIPC imposed penalties of approximately 100 million won on ScatterLab for failing to pseudonymise Lee Luda’s training data.[60]
Notwithstanding, this case study underscores a notable critique of the PIPA amendments: there remains ambiguity regarding the precise threshold for personal information to fall within the legal scope of permitted pseudonymised data. To this critique, this paper posits that Korea has striven to reduce these legislative uncertainties through the PIPC’s initiation of the Policy Plan for Safe Utilization of Personal Information in the era of AI – which sets out guidelines on how to correctly interpret and apply the current PIPA.[61]
Moreover, imposing harsh penalties on ScatterLab serves as a deterrent to other AI companies to ensure their training datasets comply with the PIPA’s data protection standards. It is apposite to note that this was the PIPC’s first sanction of an AI company for indiscriminate personal information processing.[62]
Rights concerning fully automated decision-making
Most recently, on 15 March 2024, an amendment to the PIPA introducing “rights concerning fully automated decision-making” came into effect.[63] This amendment bears great similarity to Article 22 of the EU’s GDPR[64] as it allows individuals to reject decisions made by fully automated systems that significantly affect their rights or duties – subject to certain stipulated exceptions.[65]
Specifically, this amendment empowers data subjects with the autonomy to “demand explanations or reviews of decisions” made by automated processes void of human intervention, before subjects are permitted to reject these decisions where their rights are compromised.[66] However, if data controllers have justifiable concerns that granting these rights could infringe another person’s life, body, property, or interests – they reserve the right to deny the data subjects’ rejection.[67] Nonetheless, data controllers must adhere to similarly stringent requirements to ensure their legitimacy in handling such critical personal information.[68]
This paper puts forth that this recent PIPA amendment highlights Korea’s significant progress in prioritising user safety over AI’s capabilities to drive innovation and economic growth. Indeed, this amendment grants data subjects significant control over AI decisions and returns the final outcome to human hands – be they those of the data subjects, or the data controllers.[69] Further, by seeking to improve data protection levels and ensuring that data controllers are held to higher thresholds of transparency and accountability,[70] this move also allows Korea to align itself more closely with global best practices for regulating AI, such as the GDPR.[71]
Role of the PIPC
In conjunction with the implementation of the PIPA, the PIPC was inaugurated as Korea’s national data protection authority.[72] Though its main role is to enforce the PIPA, the PIPC also operates as an independent regulatory agency with the authority to initiate data protection policies, address issues regarding statutory interpretations, and impose administrative sanctions and fines.[73] Consequently, it holds a unique ability to shape Korea’s data protection practices beyond the scope of simply enforcing the PIPA.
As aforementioned in [28], the PIPC issued guidelines on the Safe Utilization of Personal Information in the AI era in August 2023 to tackle ethical issues arising from AI’s usage in tandem with personal information.[74] Here, the guidelines for interpreting the PIPA to curb regulatory uncertainties were but one of several significant initiatives introduced. In addition, the PIPC launched the AI Privacy Team to assist firms in integrating user privacy considerations from the outset of AI product and service development.[75] Further, the AI Privacy Team would operate under the PIPC’s new Preliminary Adequacy Review System, which preemptively assesses businesses’ compliance with the PIPA through a three-step prior adequacy review.[76]
Hence, the combined effect of these two initiatives in consulting and collaborating with companies during the initial product development stages is that Korea’s AI Privacy Team will serve as the main consultation party for all organisations within Korea,[77] and the Review System provides companies with a mechanism to better adhere to the PIPA’s data privacy standards – thus reducing their exposure to strict liabilities. In sum, these initiatives will bolster Korea’s reputation as an attractive hub for AI innovation as companies are duly supported in their endeavours to comply with Korea’s PIPA.
Turning to the PIPC’s pivotal role in enforcing the PIPA, the PIPC has a track record of imposing large sums of penalty charges for such violations. For instance, a global social media company was dealt a penalty surcharge of 6.7 billion won for providing personal data to a third-party operator without obtaining valid consent from data subjects.[78] In more severe cases, the PIPC has levied penalties of 69.2 billion won and 30.8 billion won respectively on two online platform operators for customizing advertisements, similarly without securing their data subjects’ valid consent.[79]
Accordingly, this paper posits that whilst Korea’s general AI strategies and ethical guidelines perceivably lack strict implementation in favour of its bid to utilise AI to drive economic growth, the PIPC compensates for these “gaps” with its innovative and effective efforts to combat data protection violations. Given AI’s increasingly rapid advancement, stringent and meticulous enforcement is crucial in averting widespread ignorance and infringement of personal data protection laws. Further, the PIPC’s levying of harsh penalties for PIPA violations serves as a strong deterrent against companies being lackadaisical in their use of personal data, and may incentivise them to apply for the PIPC’s prior adequacy reviews.
Ultimately, it is undeniable that the Korean government has seized this opportunity to distinguish itself not only as a prominent jurisdiction in the field of AI, but also as one capable of employing novel methods to address the emerging, unpredictable ways in which AI threatens basic ethical and privacy standards. It is this capacity for creative innovation that marks Korea as one of the forefront jurisdictions in setting global AI standards.
The possibility of Korea implementing an AI Act
Finally, this paper considers the potential enactment of Korea’s comprehensive AI legislation, and how it could transform the country’s AI regulations. In February 2023, the National Assembly’s Science, ICT, Broadcasting and Communications Committee introduced a bill titled “the Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI”, also known as the AI Act.[80] While it is clearly not Korea’s first AI-related bill, if passed at the main floor vote, it would supersede all other AI-related legislation and serve as a comprehensive master plan consolidating all manner of AI laws.[81]
The proposed AI Act introduces various key changes, including establishing a legal definition of AI and comprehensive principles for safety.[82] Additionally, it establishes a statutory basis for developing Korea’s ethical guidelines for AI and a “Basic Plan for AI” as policy roadmaps.[83] Lastly, the enactment of this bill would provide support for innovative AI businesses, and lead to the formulation of a triennial government plan for AI.[84]
Currently, the prospect of the bill’s passage hinges on the outcome of Korea’s general election in 2024. Notwithstanding this, it is this author’s view that the implementation of this bill would be a tremendous boon to Korea’s AI agenda. Not only would this Act synthesise all AI-related legislation into a single statute, but it would provide a legal basis for Korea’s primary AI strategies and goals, which are currently enshrined in its soft law guidelines – thereby better protecting ethical and policy considerations.
Hence, this author is in favour of the adoption of this bill by the Korean government. Doing so would not only improve Korea’s ability to address AI-related threats and challenges, but it would allow Korea to level the playing field with other leading jurisdictions in AI which, as stated in [20], have already enacted hard laws to regulate AI.
Recommendations
In sum, this paper seeks to illustrate that while Korea’s current approach to AI, which emphasises innovation and economic growth without over-regulating AI, is commendable, the evolving landscape calls for the crucial implementation of stricter regulations. These hard regulations are necessary to safeguard fundamental rights and uphold standards of data protection amidst AI’s increasing influence and presence in our daily lives.
To this, the trajectory of Korea’s existing Digital Bill of Rights offers a tangible way forward. Chiefly, it simultaneously provides a soft law framework that outlines national-level principles reflecting the government’s digital policy direction while also permitting the drafting of numerous legislative proposals to enforce these principles definitively, as detailed in [19].[85] Thus, Korea should continue its soft law approach to AI regulation to increase innovation levels, but should not shy away from implementing hard laws for more efficient outcomes.
Specifically, there is an urgent necessity to regulate high-risk AI – defined by the National Assembly as AI used in critical areas significantly impacting human life, safety and basic rights of citizens.[86] In October 2021, it was revealed that two Korean ministries had shared facial information of both citizens and foreigners to private AI companies without obtaining their valid consent.[87] This was done to further a project aimed at developing an AI system to track faces at Incheon Airport’s immigration.[88]
However, the MSIT argued that this facial data processing was legal, citing its classification as biometric information protected under the PIPA.[89] The PIPC concurred with MSIT’s stance in its investigation results and determined that the use of this data fell within the scope of legitimate entrustment, therefore only imposing a minor fine on the MSIT for “failing to disclose the fact of entrustment”.[90] Nevertheless, civil rights groups have opposed this finding, and countered that high-risk AI utilising facial biometric data for large-scale AI training in public institutions jeopardised the privacy and security rights of both citizens and foreigners.[91]
Moreover, the National Human Rights Commission of Korea, an independent agency within South Korea’s executive branch,[92] issued an opinion to the Prime Minister and National Assembly Chairman in 2023 advocating for the enactment of hard laws to safeguard human rights against possible violations by facial recognition technology.[93] The opinion chiefly recommended that such technology should not be indiscriminately adopted, but instead used only where there is a recognised public interest need.[94] Further, the employment of facial recognition technology should be grounded in specific legislation that aligns its usage with human rights principles.[95]
Accordingly, it is contended that high-risk AI demands hard regulations to prevent breaches of fundamental human rights. Corporations utilising such AI must also be held accountable and transparent in maintaining records of their development processes and training datasets. This would greatly reassure the public about the protection of their personal information, and garner increased public support for Korea’s AI regulations and policies.
Conclusion
Given AI’s rapid acquisition of capabilities previously exclusive to humans, it is recommended that Korea consider revising its “approve first, regulate later”[96] strategy so as to prevent the deployment of AI systems which undermine basic human rights and safety standards. Instead, it is respectfully posited that Korea persist with its recent efforts to implement stricter AI laws. This approach will not impede but rather fuel Korea’s journey towards becoming a global AI leader, as more stringent regulations are crucial in addressing AI’s risks.
Notwithstanding these suggestions for enhancement, this paper asserts that Korea has admirably distinguished itself as a jurisdiction which employs innovative practices to both regulate AI and harness its potential as a mechanism to advance economic growth and societal benefit. This, along with the PIPC’s frequent proactive updates and initiation of new AI policies demonstrates Korea’s serious regard for the immense potential yet formidable threats of AI. Ultimately, these efforts stand Korea in good stead to achieve its goal of emerging as a jurisdiction at the forefront of global AI efforts.
Editor’s note: This student’s paper was submitted for assessment in end-May 2024. Information within this article should therefore be considered as up-to-date until that time. The views within this paper belong solely to the student author, and should not be attributed in any way to LawTech.Asia.
[1] Jongheon Kim, “Traveling AI-essentialism and national AI strategies: A comparison between South Korea and France” Review of Policy Research 2023; 40: 705-728, at p 709.
[2] Ibid.
[3] Jay Harriman, “AI Policy and Regulatory Frameworks Take Shape in APAC” Bower Group Asia (19 October 2023) <https://bowergroupasia.com/ai-policy-and-regulatory-frameworks-take-shape-in-apac/#:~:text=Korea%20is%20in%20the%20advanced,%E2%80%9Cthe%20AI%20Act%E2%80%9D> (accessed 11 April 2024).
[4] Susannah Haan, “SWOT analysis” CIPD website (15 March 2024) <https://www.cipd.org/en/knowledge/factsheets/swot-analysis-factsheet/#:~:text=A%20SWOT%20analysis%20is%20a,environment%20in%20which%20it%20operates.> (accessed 11 April 2024).
[5] Bryan H. Druzin, “Why does Soft Law Have any Power Anyway?” Asian Journal of International Law 2017; 7: 361-378, at p 361.
[6] The Government of the Republic of Korea, National Strategy for Artificial Intelligence (17 December 2019).
[7] Ministry of Science and ICT, National Guidelines for AI Ethics (23 December 2020).
[8] Ibid.
[9] Supra n 6, at p 16.
[10] So Young Kim, “Development and Developmentalism of Artificial Intelligence: Decoding South Korean Policy Discourse on Artificial Intelligence” in Imagining AI: How the World Sees Intelligent Machines (Oxford University Press, 2023) ch 20 at p 318.
[11] Id, at p 319.
[12] Ibid.
[13] Id, at p 323.
[14] Mark Zastrow, “South Korea trumpets $860-million AI fund after AlphaGo ‘shock’” Nature (18 March 2016) <https://www.nature.com/articles/nature.2016.19595> (accessed 11 April 2024).
[15] The Constitution of the Republic of Korea (1987) Art 127(1).
[16] Supra n 10, at p 326.
[17] Supra n 6, at p 16.
[18] Supra n 6, at p 22.
[19] Id, at p 24.
[20] Ibid.
[21] Supra n 10, at p 326.
[22] Supra n 6, at p 48.
[23] Id, at p 49.
[24] Supra n 7.
[25] Ibid.
[26] Supra n 10, at p 333.
[27] Id, at p 332-333.
[28] Ministry of Science and ICT, Digital Bill of Rights (26 September 2023).
[29] Kwang Bae Park, Sunghee Chae, Matt Younghoon Mok, “South Korea: Digital Bill of Rights – key takeaways” OneTrust Data Guidance (February 2024) <https://www.dataguidance.com/opinion/south-korea-digital-bill-rights-key-takeaways> (accessed 12 April 2024).
[30] Ministry of Science and ICT, “South Korea presents a new digital order to the world!” Press Release (25 September 2023).
[31] Ibid.
[32] Ibid.
[33] Lee Jae-Lim, “Government hopes ‘Digital Bill of Rights’ will set global standard” Korea JoongAng Daily (6 October 2023) <https://koreajoongangdaily.joins.com/news/2023-10-06/business/industry/Govt-hopes-Digital-Bill-of-Rights-will-set-global-standard/1884940> (accessed 27 November 2024).
[34] Willy Cho, “New Digital Order: A blueprint for mutual prosperity through AI governance in Korea” Microsoft (13 May 2024) <https://news.microsoft.com/apac/2024/05/13/new-digital-order-a-blueprint-for-mutual-prosperity-through-ai-governance-in-korea/> (accessed 27 November 2024).
[35] Ibid.
[36] Supra n 29.
[37] Ibid.
[38] Supra n 30.
[39] Supra n 29.
[40] Caitlin Andrews, “European Parliament approves landmark AI Act, looks ahead to implementation” International Association of Privacy Professionals (13 March 2024) <https://iapp.org/news/a/with-eu-ai-act-on-the-books-lawmakers-look-ahead/> (accessed 12 April 2024).
[41] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (30 October 2023).
[42] Kenneth W. Abbott, Duncan Snidal, "Hard and Soft Law in International Governance" International Organization 2000; 54(3): 421-456, at p 421.
[43] Personal Information Protection Act (2020) Korea Legislation Research Institute.
[44] “South Korea: National Assembly passes Data 3 Act” OneTrust Data Guidance (January 2020) <https://www.dataguidance.com/news/south-korea-national-assembly-passes-data-3-act#:~:text=The%20National%20Assembly%20of%20the,and%20Communications%20Network%20Utilization%20and> (accessed 12 April 2024).
[45] Chris H Kang, Sun Hee Kim, “Recent major amendments to three South Korean data privacy laws and their implications” International Bar Association <https://www.ibanet.org/article/0D5FD702-179C-42A1-B37D-45D12F4556DA> (accessed 12 April 2024).
[46] Supra n 43, Art 28-2(1).
[47] Supra n 45.
[48] Ibid.
[49] General Data Protection Regulation (EU) 2016/679 (2016).
[50] Kyung Sin Park, Natalie Pang, "Data Innovations and Challenges in South Korea: From Legislative Innovations for Big Data to Battling COVID-19" Konrad Adenauer Stiftung (2022) at pp 11-12.
[51] Id, at p 12.
[52] Id, at p 13.
[53] Ibid.
[54] Sinook Kang, Jeong Ho Ahn, Hyein Lee, Ho Sang Yoon, “PIPC’s Amendment to the Guidelines on Processing Pseudonymized Data” Shin & Kim (31 May 2022) <https://www.shinkim.com/eng/media/newsletter/1834> (accessed 27 November 2024).
[55] Ministry of Science and ICT, "In the era of artificial intelligence, standards for pseudonymization of images, videos, voices, and texts have been introduced" Press Release (2 February 2024).
[56] Ibid.
[57] Heesoo Jang, “A South Korean Chatbot Shows Just How Sloppy Tech Companies Can Be With User Data” Future Tense, Slate (2 April 2021) <https://slate.com/technology/2021/04/scatterlab-lee-luda-chatbot-kakaotalk-ai-privacy.html> (accessed 12 April 2024).
[58] Ibid.
[59] Jasmine Park, “South Korea: The First Case Where The Personal Information Protection Act Was Applied To An AI System” Future of Privacy Forum (21 May 2021) <https://fpf.org/blog/south-korea-the-first-case-where-the-personal-information-protection-act-was-applied-to-an-ai-system/> (accessed 12 April 2024).
[60] Ibid.
[61] Supra n 29.
[62] Supra n 59.
[63] Supra n 29.
[64] Supra n 49, Art 22.
[65] Doil Son, “New data protection rules in South Korea” JD Supra (10 April 2024) <https://www.jdsupra.com/legalnews/new-data-protection-rules-in-south-korea-8439959/> (accessed 12 April 2024).
[66] Ibid.
[67] Supra n 43, Art 35(4).
[68] Supra n 65.
[69] Ibid.
[70] International Organisation of Employers Website, “South Korea: New data protection rules” Industrial Relations and Labour Law Newsletter (July 2024) <https://industrialrelationsnews.ioe-emp.org/industrial-relations-and-labour-law-july-2024/news/article/south-korea-new-data-protection-rules> (accessed 27 November 2024).
[71] Supra n 49.
[72] Minchae Kang, “South Korea – Data Protection Overview” OneTrust Data Guidance (July 2023) <https://www.dataguidance.com/notes/south-korea-data-protection-overview> (accessed 12 April 2024).
[73] Ibid.
[74] Supra n 29.
[75] Ibid.
[76] “South Korea: PIPC announces pilot Preliminary Adequacy Review System” OneTrust Data Guidance (12 October 2023) <https://www.dataguidance.com/news/south-korea-pipc-announces-pilot-preliminary-adequacy> (accessed 12 April 2024).
[77] Kuksung Nam, “South Korea launches AI privacy team to address security concerns” The Readable (6 October 2023) <https://thereadable.co/south-korea-launches-ai-privacy-team-to-address-security-concerns/> (accessed 12 April 2024).
[78] Supra n 72.
[79] Ibid.
[80] Supra n 3.
[81] Taeyoung Roh, Ji Eun Nam, “South Korea: Legislation on Artificial Intelligence to Make Significant Progress” Kim & Chang (6 March 2023) <https://www.kimchang.com/en/insights/detail.kc?sch_section=4&idx=26935> (accessed 12 April 2024).
[82] Supra n 3.
[83] National Assembly of the Republic of Korea, "Bill Subcommittee 2 of the Science, ICT, Broadcasting and Communications Committee resolves the 'Metaverse Act' and the 'Artificial Intelligence Act'" Press Release (14 February 2023) <https://www.assembly.go.kr/portal/bbs/B0000051/view.do?nttId=2095056&menuNo=600101&sdate=&edate=&pageUnit=10&pageIndex=1#> (accessed 12 April 2024).
[84] Supra n 3.
[85] Supra n 29.
[86] Yulchon LLC, "Legislative Framework and Practical Implications of the 'Law on Nurturing the AI Industry and Establishing a Trust Basis'" Lexology (20 March 2023) <https://www.lexology.com/library/detail.aspx?g=fa073ec6-81a1-44fd-87ce-c8d3f5f7a706> (accessed 13 April 2024).
[87] Korean Progressive Network Jinbonet, “Controversial Cases on AI in Republic of Korea” Association for Progressive Communications (12 January 2024) <https://www.apc.org/en/pubs/controversial-cases-ai-republic-korea> (accessed 13 April 2024).
[88] Ibid.
[89] Ibid.
[90] Ibid.
[91] Ayang Macdonald, “Rights groups demand halt to South Korea facial recognition surveillance project” Biometrics News (11 November 2021) <https://www.biometricupdate.com/202111/rights-groups-demand-halt-to-south-korea-facial-recognition-surveillance-project> (accessed 13 April 2024).
[92] Jongcheol Kim, Constitutional Law: Introduction to Korean Law (Springer, Berlin, Heidelberg, 2012) at pp 52-54.
[93] Supra n 87.
[94] Ibid.
[95] Ibid.
[96] Supra n 6, at p 24.