
Written by: Peng Huijuan

Introduction

Artificial intelligence (“AI”) has become a transformative force across a range of industries, including healthcare, finance, education, transportation, and entertainment. Its ability to analyse large datasets, automate complex processes, and enhance decision-making has revolutionised these sectors. However, the rapid pace of AI development has outpaced the capacity of existing legal frameworks to provide effective regulation, leading to significant regulatory gaps and uncertainties. These gaps not only hinder innovation but also fail to address critical societal concerns, including privacy, ethical use, and accountability (Calo, 2017). This regulatory lag, often referred to as the “pacing problem,” describes the growing disconnect between the rapid progression of emerging technologies and the slower evolution of the legal and ethical frameworks required to govern them (Marchant, 2011). 

Traditional regulatory approaches, typically rigid and slow to adapt, are ill-suited to rapidly evolving technologies like AI. As a result, there is an urgent need for a novel regulatory paradigm that can keep pace with technological advancements while preserving legal certainty and protecting societal interests. This paper introduces the Evolved AI Regulation Framework (“EARF”), an adaptive regulatory model inspired by evolutionary economics. EARF integrates the principles of variation, selection, and retention with inclusive stakeholder engagement, ethical alignment, and global coordination. By adopting this adaptive approach, the EARF aims to create a regulatory environment that evolves alongside AI technologies, fostering innovation while safeguarding public interests and societal values.

Literature review

Existing legal frameworks applicable to AI are largely reactive and sector-specific, resulting in fragmented and inconsistent regulatory approaches. For instance, initiatives such as the European Union’s General Data Protection Regulation (“GDPR”) focus on data protection but arguably do not directly address AI-specific challenges, including algorithmic transparency and accountability (Veale & Binns, 2017). The EU AI Act entered into force in August 2024 and is now being phased in, representing the first comprehensive attempt to close these gaps (European Union, 2024). However, globally, regulation of AI remains incomplete and fragmented. The complexity and rapid pace of AI technologies present substantial regulatory challenges, such as technological uncertainty, algorithmic bias, opacity in decision-making, and accountability gaps (Barocas & Selbst, 2016; Scherer, 2015).

The global reach of AI technologies highlights the pressing need for international regulatory coordination to avoid regulatory fragmentation. Disparate regulations across jurisdictions not only complicate AI deployment on a multinational scale but also exacerbate ethical and legal inconsistencies (Cath, 2018). This regulatory disparity creates a complex environment in which multinational companies face significant compliance challenges. Effective compliance mechanisms often require a combination of formal enforcement measures, such as legal penalties, and informal mechanisms, including community-based social sanctions. Research suggests that social pressures within communities can sometimes achieve compliance more effectively than legal penalties (Dietz et al., 2003). Therefore, adaptive, multi-layered institutions that incorporate both formal and informal mechanisms are better positioned to address the diverse and dynamic challenges of AI regulation at both local and global levels.

Evolutionary economics, as articulated by Nelson and Winter, provides a potentially valuable theoretical tool for addressing these challenges. Their framework highlights the roles of variation, selection, and retention in shaping economic and institutional dynamics (Nelson & Winter, 1982). Analogous to how firms adapt routines to respond to environmental pressures, legal systems can evolve in response to technological advancements and societal demands. This adaptive perspective suggests that regulatory frameworks for AI could benefit from a similar evolutionary approach, enabling laws to keep pace with rapid technological developments while maintaining coherence across jurisdictions. 

Building on these insights, this paper introduces the Evolved AI Regulation Framework (“EARF”). This framework seeks to address the limitations of current AI regulatory systems by fostering an adaptable regulatory environment capable of responding dynamically to technological evolution and societal needs. By integrating adaptive principles into AI governance, EARF aims to create a more effective and coherent regulatory architecture.

The Evolved AI Regulation Framework

(1) Variation and experimentation

The EARF begins by fostering diverse regulatory strategies to address the emerging challenges posed by AI. This framework facilitates the development and testing of various mechanisms, including guidelines, standards, pilot programs, and regulatory sandboxes. These tools enable regulators to explore a wide range of legal and policy approaches, fostering an environment conducive to experimentation and learning.

Drawing an analogy from evolutionary biology, this method mirrors the role of genetic variation in populations, where diverse traits are introduced and subsequently selected based on their adaptive fitness in specific environments. Similarly, in the regulatory context, such diversity enables policymakers to evaluate which approaches most effectively support innovation while mitigating risks and safeguarding public interests. 

(2) Selection through evaluation

Regulatory proposals within the EARF are subject to rigorous empirical analysis, informed by feedback from a wide range of stakeholders, including government, industry, academia, and the public. Pilot programmes and case studies serve as crucial tools for assessing the impact, efficacy, and ethical soundness of various regulatory approaches. Through this evaluation process, only the most effective, efficient, and ethically robust measures are retained and institutionalised. This selection mechanism resembles natural selection, where advantageous traits become more widespread within a population, enhancing adaptability and resilience in a rapidly evolving technological landscape.

(3) Retention and refinement

The EARF emphasises the continuous monitoring and refinement of successful regulations to ensure they remain responsive to ongoing advancements in AI technologies. Regulators leverage real-time data and insights to introduce adaptive updates, maintaining the framework’s relevance and its ability to address emerging challenges effectively. Ineffective or outdated regulations are systematically revised or phased out, ensuring the framework’s efficiency and alignment with the dynamic realities of technological development. This process parallels how beneficial genetic traits are preserved and gradually optimised within a population over time.

(4) Inclusive stakeholder engagement

Inclusive stakeholder engagement is fundamental to the legitimacy and efficacy of the regulatory process. The EARF employs formal consultation mechanisms, such as public hearings, advisory committees, and collaborative forums, to ensure that diverse perspectives are incorporated. Participants include government agencies, industry representatives, academic experts, civil society organisations, and the public. This inclusivity fosters regulatory acceptance and ensures that policies are comprehensive, addressing the needs and concerns of a broad range of stakeholders. In evolutionary terms, stakeholder engagement functions as an “environmental pressure” that guides the direction of regulatory evolution. This dynamic process strengthens the legitimacy of governance structures and fosters trust among all involved parties.

(5) Ethical alignment

Ethical alignment is essential for fostering the responsible development and application of AI systems. Policymakers should develop guidelines and laws that encourage the ethical development and application of AI, ensuring that these systems align with human values and are deployed responsibly. To achieve this, dedicated ethics oversight bodies can be established to evaluate the ethical implications of AI technologies and their associated regulations. These bodies would address critical concerns such as algorithmic bias, privacy, accountability, and transparency. By embedding ethical considerations into the regulatory process, policymakers can ensure that AI systems not only comply with legal requirements but also uphold societal values and ethical standards. This approach strengthens public trust in AI-driven innovations, creating a foundation for sustainable and socially beneficial technological progress. Ethical alignment thus serves as a crucial pillar of the EARF framework, reinforcing the legitimacy and integrity of AI governance.

(6) Global coordination

Global coordination is critical for harmonising AI regulations across borders, addressing the challenges posed by fragmented regulatory frameworks, and fostering consistency in AI governance. International collaboration facilitates the alignment of standards, which is essential for tackling global issues such as cybersecurity, data privacy, and ethical governance. By creating a cohesive regulatory landscape, global coordination supports seamless cross-border operations for AI technologies while promoting shared ethical principles and best practices. In evolutionary terms, global alignment resembles the interconnectedness of ecosystems, where collaboration and mutual adaptation enhance stability and resilience, supporting the sustainable evolution of regulatory practices.

Discussion

The EARF fosters an environment that promotes technological innovation while safeguarding ethical standards and societal values, ensuring the responsible development and deployment of AI without compromising public trust or welfare. By integrating principles of variation, selection, and retention from evolutionary economics into legal frameworks, regulators can design more adaptive and resilient systems capable of responding dynamically to rapid technological advancements.

A cornerstone of the EARF is its emphasis on inclusivity. By incorporating input from a diverse range of stakeholders, the framework ensures that regulations are both comprehensive and reflective of varied perspectives, thereby enhancing public trust and encouraging compliance. In addition to inclusivity, ethical alignment and global coordination are pivotal to the framework’s success. Establishing dedicated ethics oversight bodies helps ensure that AI systems adhere to societal values and ethical norms, addressing pressing concerns such as bias, privacy, and accountability. Simultaneously, international cooperation facilitates the harmonisation of AI regulations across jurisdictions, reducing the complexities associated with cross-border regulatory inconsistencies and advancing a unified global approach.

Implementing an adaptive legal framework presents several challenges that must be carefully managed to ensure its effectiveness and equity. One significant hurdle is resource constraints, as the continuous monitoring, evaluation, and refinement required for such a framework demand substantial investment. Regulators can mitigate this challenge by prioritising high-impact areas where adaptive regulation is most needed. Additionally, leveraging technological tools, such as AI-driven analytics and automated monitoring systems, can improve efficiency and reduce resource burdens.

Another challenge lies in the dynamic nature of adaptive regulation, which may introduce legal uncertainty. This uncertainty can complicate compliance for regulated entities and hinder consistent enforcement. To address this, regulators must strike a balance between flexibility and the establishment of stable foundational principles. Clear guidelines on the adaptation process and transparent communication of regulatory changes are essential to minimising uncertainty and building trust among stakeholders.

Finally, there is a risk of regulatory capture, where influential stakeholders could unduly influence the regulatory process. To mitigate this, it is crucial to implement checks and balances, promote diversity in stakeholder participation, and ensure transparency throughout the regulatory process. By addressing these challenges proactively, regulators can strengthen the implementation of EARF, ensuring it remains responsive, equitable, and aligned with societal values.

Conclusion

As AI technologies evolve at an unprecedented pace, their governance and regulation demand urgent, strategic, and forward-looking action. The EARF offers a practical and adaptive approach by integrating principles from evolutionary economics, such as variation, selection, and retention, with inclusive stakeholder engagement, ethical alignment, and global coordination. Grounded in Nelson and Winter’s evolutionary theory, the EARF fosters a resilient regulatory environment that adapts to technological advancements while addressing societal needs. This model strikes a critical balance between legal stability and flexibility, promoting innovation while upholding ethical standards and protecting public interests. The EARF serves as a comprehensive and actionable blueprint for lawmakers and policymakers, guiding the responsible development and deployment of AI technologies. By integrating ethical considerations and fostering public trust, the framework ensures that AI becomes a force for societal benefit, contributing positively to shaping a better future for the next generation.

Editor’s note: This paper was submitted to LawTech.Asia as a guest contribution. The views within this paper belong solely to the author, and should not be attributed in any way to LawTech.Asia.


References

Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671.

Calo, R. (2017). Artificial Intelligence Policy: A Primer and Roadmap. U.C. Davis Law Review, 51, 399.

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080

Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science, 302(5652), 1907–1912. https://doi.org/10.1126/science.1091015

European Union. (2024, June 13). Artificial Intelligence Act (Regulation (EU) 2024/1689). Official Journal of the European Union, L 202, 1–165. https://eur-lex.europa.eu/eli/reg/2024/1689/oj

Marchant, G. E. (2011). The Growing Gap Between Emerging Technologies and the Law. In G. E. Marchant, B. R. Allenby, & J. R. Herkert (Eds.), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem (pp. 19–33). Springer Netherlands. https://doi.org/10.1007/978-94-007-1356-7_2

Nelson, R. R., & Winter, S. G. (1982). An Evolutionary Theory of Economic Change. Harvard University Press. https://www.hup.harvard.edu/books/9780674272286

Scherer, M. U. (2015). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29, 353.

Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 1–17. https://doi.org/10.1177/2053951717743530