I Introduction
Given the rapid development of data analytics, computational power, machine/deep learning, and other advances in information and communications technology (ICT), a wide range of artificial intelligence (AI) applications and systems have penetrated every sphere of society, with significant economic, legal, ethical, and political ramifications. A term first coined by John McCarthy and his collaborators at the Dartmouth Summer Research Project in 1955, AI has experienced both booms (AI summers) and busts (AI winters) for seven decades, but now readily promises a new wave of industrial and technological revolution.Footnote 1 Artificial intelligence has reemerged as an umbrella term covering machine/deep learning algorithms, robotics, autonomous systems, and natural language processing, among others, as well as an enabling technology that promises to transform the world through self-driving vehicles, digital assistants, robo-advisors, automated mortgage approval, legal services, law enforcement, and court proceedings, all driving significant changes and disruptions across jurisdictions and sectors.Footnote 2
The rise of AI technologies has also raised concerns and controversies. Algorithmically powered tools have been widely adopted in contemporary workplaces to forge business practices for the purposes of efficiency, productivity, and management.Footnote 3 Camera surveillance, data analysis, and ranking and scoring systems are algorithmic tools that have given employers enormous power over their employees, yet their use also triggers serious controversies surrounding privacy, ethics, labor rights, and due process protections.Footnote 4 Apart from businesses, governments are among the primary users of AI-powered tools to surveil, sort, profile, and rate their citizens, creating a data-driven infrastructure of preferences that condition people’s behaviors and opinions.Footnote 5 China’s social credit system, Australia’s robo-debt program, and the welfare distribution platform of the United States (US) are prime examples of how governments employ automated decision-making to allocate resources and provide public services.Footnote 6 The use of algorithmic risk assessment tools to support judges’ sentencing decisions has been particularly problematic.Footnote 7
A growing body of literature has explored critical dimensions of the promises and perils of AI and the need for regulatory responses. Some have pointed to rule of law deficits in the automation of government functions, especially through the use of preprogrammed rules such as expert systems, as well as predictive inferencing systems trained on historical big data.Footnote 8 Others have emphasized how such technologies systematically fuel inequalities due to the typically unintentional bias against and harm to underrepresented populations.Footnote 9 Still others have argued that a society constantly being scored, profiled, and predicted generally threatens due process, justice, and human rights.Footnote 10 While much ink has been spilled on the imminent and critical issue areas of AI law and policy,Footnote 11 given the nature of AI as an enabling, infrastructural technology situated in an evolving socio-technical complex, systematic and comprehensive research on AI remains, according to Carrillo, “relatively limited,” if not impossible.Footnote 12 Some of the existing scholarship focuses on a specific issue area, such as autonomous vehicles, smart cities, or algorithmic healthcare, and provides an in-depth analysis of legal, policy, and ethical ramifications, paving the way for regulatory reconfiguration.Footnote 13 Other research explores the regulatory trajectories, preferences, and (proposed) models of industrialized jurisdictions, such as the European Union (EU) and the US, alluding to their potential for global reference or normative diffusion.Footnote 14 While some of the existing scholarship discusses global or comparative perspectives, few works touch upon issue areas that directly resonate with the diverse contexts and dynamics of the non-Western world.Footnote 15 Worse, despite the considerable impact of AI on most Asian countries,Footnote 16 there is a significant gap in the literature on AI governance in Asia. The current AI governance literature could benefit from a contextual discussion of how Asian jurisdictions perceive and respond to the challenges posed by AI, as well as how they interact with each other through regulatory cross-referencing, learning, and competition.
This chapter therefore aims to explore various contexts and dynamics of interactions among (East) Asian countries – including China, Japan, Korea, Taiwan, and Singapore – and to examine whether, when, and how the paths of their legal and policy actions cross. The diverse legal systems and practices in East Asian countries may affect how governance models are designed to harness AI. How are governments in East Asia (re)shaping their regulatory approaches and institutional designs in response to the multifaceted challenges of AI? Have any normative interactions (or convergences and divergences) emerged in the region, and if so, why? In answering these questions, this chapter hopes to offer a clearer picture of the origins and trajectories of various AI governance models in (East) Asia, as well as a contextual understanding of the dynamic interactions among key countries in the region.
II The Rise of AI Governance: Key Players and Initiatives in Asia
As AI systems are increasingly deployed in common and consequential contexts, society is paying greater attention to how they should be governed. Many governance initiatives – adopted by governments, industry sectors, and hybrid organizations at domestic, regional, and transnational levels – are being deliberated and implemented with the aim of responding to the challenges posed by AI. This section will identify the key players in AI governance in East Asia and their legal and policy initiatives.
A. First Movers in AI Governance in Asia: China, Japan, and Singapore’s Agile Approaches
In Asia, more and more jurisdictions are developing or experimenting with modes of governance in response to AI’s promises and challenges. Among them, the governments of China, Japan, and Singapore are the most active and agile nodes of governance in the region, as they have acted relatively early, contemplated or tested multiple versions and models of governance, and attempted to influence the law and policy landscape beyond their domestic contexts. As for Korea and Taiwan, while their industries and academia are generally vibrant in the issue area of AI, their governments have demonstrated less regulatory entrepreneurialism, due to their preference for hard(er) law and their pro-innovation orientation, in effect embracing a wait-until-the-time-is-ripe approach. This chapter examines and characterizes these countries’ respective governance initiatives below.
Viewed by many as a “global leader in AI,”Footnote 17 China not only has taken the lead in AI-related research but has also developed a rigorous set of instruments for AI policy and standards, with the broader aim of shaping global rules and securing the dominance of Chinese AI industries.Footnote 18 As early as 2017, China’s State Council announced the “New Generation Artificial Intelligence Development Plan,” which marks 2025 as the year for China to establish a comprehensive system of AI law, policy, and ethicsFootnote 19 and which highlights the need for market trust, innovation, and ex ante regulation.Footnote 20 In the same year, China’s Ministry of Science and Technology created the Office of the Development and Advancement of the New Generation of Artificial Intelligence, which serves as a coordinating body for fifteen other relevant agencies.
The first white paper on AI standardization, highlighting the gaps in AI safety and ethical norms and the need for standards harmonization, was published in January 2018 by the China Electronics Standardization Institute.Footnote 21 In 2019, the National New Generation Artificial Intelligence Governance Expert Committee released “Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence,”Footnote 22 which sets out eight responsible AI principles with “Chinese characteristics” (which may differ from “AI ethics developed in the Western contexts that emphasize more on individual rights and do not always put the collective at the forefront”)Footnote 23 but simultaneously emphasizes the importance of cross-border dialogue and collaboration.Footnote 24 A number of other AI governance policy papers and initiatives were also adopted or revised from 2019 to 2021. In April 2022, the China Academy of Information and Communications Technology (affiliated with the Ministry of Industry and Information Technology) published the “2022 White Paper on Artificial Intelligence.”Footnote 25 This 2022 White Paper stands out from previous governance instruments because it explicitly points out the imminent risks of AI, such as algorithmic black boxes, bias, and discrimination, and makes references to international practices, including those of the US, the EU, Japan, the G20, the Organisation for Economic Co-operation and Development (OECD), and the G7.Footnote 26 In July 2023, the Cyberspace Administration of China released the “Interim Measures for the Administration of Generative Artificial Intelligence Services,” which regulates the development and application of generative AI in light of its social and political implications.Footnote 27
The trajectory of China’s AI governance indicates a gradual shift toward clearer, stricter, harder, and more centralized rules, with the involvement of all relevant agencies.Footnote 28 Having said that, China has yet to enact a unified, comprehensive AI law nationwide,Footnote 29 although many have been proposed.Footnote 30 According to Weixing Shen and Yun Liu, given that certain pressing issues surrounding AI governance (such as data-driven price discrimination) have been addressed by specific laws and regulations,Footnote 31 the soft-law policy papers, principles, and guidelines appear, for the moment, adequate to strike a practical balance among technological innovation, economic growth, and social governance.Footnote 32
Japan is among the earliest movers in AI and has been keen to leverage industry expertise and resources in shaping the country’s AI governance. Beginning in 2016, the Ministry of Internal Affairs and Communications (MIC) initiated the “Conference toward AI Network Society” to engage with industry, academia, and civil society and foster a multi-stakeholder dialogue.Footnote 33 From 2017 to 2022, the outcomes of the Conference’s deliberations were published annually as reports that consolidated developments in AI governance in Japan and internationally.Footnote 34 While such reports do not carry legal force, they hold fairly strong reference power and provide a common basis for policymaking in AI governance, with the Conference serving as a nationwide public–private forum. Notably, this Conference helped the Japanese government prepare the “Draft AI R&D Guidelines for International Discussions” as early as 2017 – which propose nine principles for AI research and development, including collaboration, transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability – to serve as the anchor for deliberations in the G7 and the OECD.Footnote 35 On this basis, in 2019, the MIC further developed and issued the “AI Utilization Guidelines: Practical Reference for AI Utilization,”Footnote 36 and the Cabinet Secretariat’s Integrated Innovation Strategy Promotion Council announced the important “Social Principles of Human-centric AI,” which enumerates fundamental principles on human-centeredness, education, privacy, security, fair competition, fairness, accountability, transparency, and innovation.Footnote 37 Both instruments served as important inputs into the OECD’s normative activities,Footnote 38 as Japan has played a leading role in the OECD’s AI expert group and in the drafting of the OECD Council Recommendation on Artificial Intelligence, which sets out the OECD AI Principles – principles fairly similar to Japan’s own.Footnote 39
The Ministry of Economy, Trade and Industry (METI) has also issued a few instruments aiming to address specific AI challenges. In 2018, it released the “Contract Guidelines on Utilization of AI and Data,” which can serve as a practical template for business transactions and collaboration on AI and big data.Footnote 40 Given AI’s trade, economic, and industrial implications, the METI also adopted a general policy paper in 2021 (updated later that year), taking into account relevant international practices (such as those of the EU, Singapore, the US, the International Organization for Standardization (ISO), and the Institute of Electrical and Electronics Engineers (IEEE)).Footnote 41 The METI’s policy paper noted that “[b]ased on the opinions of industries and the direction of improvement of literacy … legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment.”Footnote 42 The METI later issued its first comprehensive “Governance Guidelines for Implementation of AI Principles”Footnote 43 in 2021, based on a consolidated understanding of existing key codes of conduct, guidelines, and policies, to offer concrete albeit voluntary action targets for the “Social Principles of Human-centric AI,” and again updated these Guidelines in 2022.Footnote 44 In 2024, the METI and the MIC integrated and updated the existing guidelines, compiled expert deliberations, and jointly released the “AI Guidelines for Business Ver1.0.”Footnote 45 Overall, perhaps due to the “techno-animistic tradition,” a relatively open attitude toward technology, robotics, and AI in society,Footnote 46 and greater trust placed in industry self-regulation, Japan’s AI governance strategies have largely been on the soft side, emphasizing the promotion of innovation within a set of fundamental principles rather than the establishment of binding rules.
Singapore has been able to act quickly on AI governance by building on existing international normative benchmarks. At the World Economic Forum in Davos in 2019, Singapore’s Personal Data Protection Commission (PDPC) released the first edition of the “Model AI Governance Framework” with the Infocomm Media Development Authority (IMDA), providing specific implementation guidance for the private sector worldwide and aiming to garner broader adoption and feedback.Footnote 47 A revision of this Governance Framework was issued in 2020, together with the “Companion to the Model AI Governance Framework: Implementation and Self-Assessment Guide for Organisations,”Footnote 48 the latter of which resulted from a partnership between PDPC, IMDA, and the World Economic Forum Centre for the Fourth Industrial Revolution.Footnote 49 Both of these instruments are closely aligned with their international counterparts (in particular, with those adopted by the European Commission’s High-Level Expert Group, the OECD Expert Group on AI, the IEEE, and the Software and Information Industry Association) in upholding the core principles that AI-based decisions should be transparent, explainable, and fair, and that AI-based systems should be “human-centric.”Footnote 50 The Governance Framework also includes recommendations involving internal governance structures and measures, the human role in AI-augmented decision-making, operations management, and stakeholder involvement.Footnote 51 A Compendium of Use Cases was further published by IMDA and PDPC, featuring real-world examples of how various entities have implemented or aligned their practices with Singapore’s Model AI Governance Framework.Footnote 52 The IMDA and PDPC then launched the AI Governance Testing Framework and Toolkit “A.I. Verify” for businesses, which aims to enable businesses to demonstrate alignment with the Model Framework in a verifiable manner. The Model Framework has already garnered initial support from companies such as Google, Microsoft, and DBS Bank.Footnote 53 In May 2022, AI Verify was released as a minimum viable product for an international pilot.Footnote 54 Singapore has also cooperated with the US on the interoperability of their respective AI governance frameworks.Footnote 55 All in all, Singapore has been moving swiftly, even aggressively, to embrace international instruments and to convert them into implementable practices. The primary focus of Singapore’s AI governance appears to be staying closely connected to international benchmarks, helping the private sector identify practical and operational rules, and providing regulatory clarity for the business environment.
B. Korea and Taiwan’s Hard-Law Incrementalism
The Asian Business Council regarded Korea and Taiwan, along with Singapore and Japan, as the “more resilient economies to AI-induced changes,” based on their preparedness in the areas of data openness, AI-related tax credits and subsidies, government funding for AI development, policies targeted at job displacement, social safety nets, and the legal responsibilities of AI systems.Footnote 56 Yet both countries have been relatively incrementalist in AI governance, implementing few initiatives, partly due to their preference for a hard-law approach backed by the legislature rather than for administrative entrepreneurialism.
Korea moved early in adopting a hard-law framework for AI governance at the general level. In 2020, Korea adopted the “Framework Act on Intelligent Informatization,”Footnote 57 a comprehensive amendment that renames and replaces the old “Framework Act on National Informatization of 2009”Footnote 58 (formerly the “Framework Act on Informatization Promotion,” which sought to establish the policy foundation for promoting the ICT industry).Footnote 59 The purpose of the Framework Act on Intelligent Informatization is to realize a smart, well-informed society, to ensure national competitiveness, and to promote the well-being of the population under the fundamental principles of freedom and openness, human dignity and values, safety, privacy protection, and public–private collaboration, among others.Footnote 60 The Act also authorizes the Ministry of Science and ICT to promulgate AI-related policies,Footnote 61 pursuant to which the Ministry and the Presidential Committee on the Fourth Industrial Revolution announced the “Draft of the National AI Ethical Guidelines”Footnote 62 – with three principles (human dignity, public good in society, and fitness for the purpose of technology) and ten general goals – in November 2020, and the “Strategy to Realize Trustworthy Artificial Intelligence”Footnote 63 in May 2021. While the Framework Act on Intelligent Informatization takes a more legalized form, it does not include specific, implementable requirements. Additionally, the softer governance instruments that have been adopted under the framework seem to remain high-level, general, and abstract. More recently, the Science, ICT, Broadcasting and Communications Committee of the Korean National Assembly proposed the “Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI” in February 2023,Footnote 64 which aims to ensure the safety of, and trust in, high-risk AI technologies.Footnote 65 While the proposed Act has not been finalized, it is known to take the “adopting technology first and regulating later” approach, so as to promote AI innovation and economic development.Footnote 66
In 2017, Taiwan’s Ministry of Science and Technology (since reorganized as the National Science and Technology Council) began to pour significant research funding into various fields of AI research and development, including the law and policy groundwork.Footnote 67 Taiwan is also regarded as “a good example for other countries in East Asia,” scoring high in innovation capacity and “having established an R&D framework based on co-innovation between Taiwanese and international companies.”Footnote 68 In some specific issue areas, Taiwan has adopted a hard-law approach to strike a balance between technological innovation and regulatory stability, albeit with a limited scope of application. Noting the disruptions posed by AI, especially as autonomous vehicles were being introduced into the market, Taiwan enacted the “Unmanned Vehicles Technology Innovative Experimentation Act” in 2018.Footnote 69 It authorizes the Ministry of Economic Affairs to set up a sandbox to regulate the development and testing of autonomous vehicles.Footnote 70 Similar developments appear in the financial sector, such as the enactment of the Financial Technology Development and Innovative Experimentation Act in 2018.Footnote 71 In September 2019, after consulting with experts in AI, the Ministry of Science and Technology announced the “AI Technology R&D Guidelines,” which incorporate three core values – human-centric, sustainable, and inclusive AI – as well as eight guidelines.Footnote 72 The AI Technology R&D Guidelines emphasize the importance of alignment with relevant international benchmarks and make explicit reference to the “Ethics Guidelines for Trustworthy AI” prepared by the EU’s High-Level Expert Group on Artificial Intelligence, Japan’s “Social Principles of Human-centric AI,” the OECD AI Principles, and the IEEE’s Ethically Aligned Design.Footnote 73 Nevertheless, the AI Technology R&D Guidelines remain high-level and concise (spanning only three pages for eight guidelines) and do not appear to be readily implementable in practice.
In 2019 and 2020, a proposal for the “Basic Act for Artificial Intelligence Development” was introduced in the Legislative Yuan, referencing models and practices found in the US, the EU, the United Kingdom (UK), China, and Japan, but this proposal went no further.Footnote 74 In 2023, the Executive Yuan again postponed the proposal for the Basic Act, in view of the changing technological landscape and the need to align with other regulatory frameworks worldwide.Footnote 75 Nevertheless, faced with the acute challenges posed by the sudden rise of generative AI, the Executive Yuan issued “Guidelines for the Use of Generative AI by the Executive Yuan and its Subordinate Agencies” in late 2023.Footnote 76 In the same year, the Financial Supervisory Commission issued “Core Principles and Policies for AI Applications in the Financial Industry” to strengthen risk management in financial AI technologies.Footnote 77 Overall, Taiwan, like Korea, has a strong focus on technological and economic development and a modest preference for hard(er) law,Footnote 78 which has resulted in its wait-until-the-time-is-ripe approach to AI governance. While sector-specific regulatory agencies have adopted some guidelines for certain issue areas with a narrow and concrete scope of application, hard-law incrementalism seems to be the preferred path.
III Inter-Asian AI Governance? Normative Trajectories and Interactions
Interactions among East Asian countries in shaping AI governance have merit and should play a role in the development of AI governance in the region. Some interactions can be encouraged through channels of cross-reference – to laws, frameworks, guidelines, principles, and use cases – while others may be facilitated by informal mutual dialogue, the exchange of best practices, and regulatory learning. Regular, dynamic interactions will likely result in constructive normative outcomes across jurisdictions, together contributing to global AI governance. Moving forward, two dimensions are worth noting in terms of Inter-Asian Law (IAL) and AI governance, as discussed below.
A. Promoting Interactions in the Context of Divergent Regulatory Preferences and Technological Competition
Depending on the issue area – climate change, healthcare, financial services, ICT, food safety, or pandemic response – countries with different historical and socioeconomic underpinnings, as well as varying constitutional, administrative, and political environments, tend to diverge in their preferred regulatory approaches.Footnote 79 Nothing demonstrates this more clearly than the regulation of genetically modified organisms, where the EU and the US occupy two regulatory extremes.Footnote 80 Increasingly, regulatory preferences can accumulate and produce path-dependent effects in a given issue area, leading to regulatory divergence and even cross-border conflicts.Footnote 81 In some cases, jurisdictional competition in an issue area shapes and reshapes countries’ regulatory preferences and strategies.Footnote 82
In the area of AI governance, while there are numerous voluntary guidelines and principles, divergences in regulatory practices prevail on the ground,Footnote 83 making normative interactions dynamic. Many Asian countries regard AI as an essential field for socioeconomic development over the coming decades, and the primary objective of governance is therefore to “create an enabling environment for AI development and deployment.”Footnote 84 Apart from the practical value of incorporating and influencing international benchmarks (such as saving costs in contemplating new rules, seeking legitimacy, or competing for leadership in global AI governance), there is a need for Asian countries to examine each other’s governance initiatives, given the differences underlying their respective political systems, legal traditions, and social contexts,Footnote 85 in order to secure better competitiveness. Having said that, established regulatory preferences, legal practices, institutional arrangements, and government–business relationships mean that any path change entails transaction costs,Footnote 86 but more interactions promote mutual understanding and regulatory learning, paving the way for cooperation.
Some East Asian countries have interacted with, and learned from, each other when developing their AI governance frameworks, in addition to referencing relevant international guidelines and principles. Due to the competition among China, Japan, Korea, and Taiwan in the global technology supply chain,Footnote 87 it is crucial for them to stay attuned to each other’s regulatory moves and connected to international governance trends. In their eyes, aligning with common standards at both the IAL and global levels helps promote industry competitiveness, technology development, and an export-oriented economy, contributing to the maintenance of an enabling environment for AI.Footnote 88 For instance, China’s “2022 White Paper on Artificial Intelligence” explicitly refers to international normative instruments adopted in the G20, the OECD, the G7, the EU, the US, and Japan. Japan’s METI policy paper on “AI Governance in Japan” takes into account the international practices of the EU, the US, the ISO, the IEEE, and Singapore. Additionally, Singapore ensures that its practical guidelines closely align with their international counterparts, particularly those adopted by the EU, the OECD, and the IEEE. Taiwan, in turn, looked to most of the above in setting its “AI Technology R&D Guidelines” and its proposed “Basic Act for Artificial Intelligence Development” – the EU, the US, the UK, France, the IEEE, the OECD, Japan, and China. While inter-Asian cross-referencing in AI governance is not yet full-fledged, there has been a growing practice of inter-Asian learning. Practically speaking, some developments and applications of AI touch upon sensitive social and ethical values, as well as public morals, and countries may ultimately need to resort to shared standards in the region, if not globally.
B. Supporting Inter-Asian Institutions as Forums for Governance Dialogue
There is also a need to support regional institutional settings for Asian countries to actively engage with each other (and the world), to forge consensus and collectively invest in substantive norm-making. For instance, the Asia-Pacific Economic Cooperation (APEC) has actively engaged in many AI-related policy activities. In December 2020, China, Chile, Mexico, and New Zealand cohosted the Learning Workshop in Artificial Intelligence: Experiences of APEC Economies to exchange views on various AI governance issues.Footnote 89 Also in 2020, the APEC Business Advisory Council (ABAC) released a report on AI adoption in APEC economies, which noted that APEC members have yet to forge a consensus surrounding critical AI issues, but pointed to the need for greater cooperation on coherent regulatory compliance requirements.Footnote 90 More recently, APEC published two case study policy reports, “Artificial Intelligence in Economic Policymaking”Footnote 91 and “Artificial Intelligence (AI) Policy Recommendation on Digital Transformation for Healthcare Ecosystem – Case Study Report,”Footnote 92 to contribute to regional and global dialogues. Also, in the area of data privacy, Japan, Korea, Singapore, and Taiwan, together with Canada, the Philippines, and the US, jointly adopted the Global Cross-Border Privacy Rules Declaration in late 2022 to ensure the free and secure flow of data.Footnote 93
In addition to APEC, which serves as a forum for inter-Asian AI governance dialogue, it is desirable to foster institutional venues and platforms for Asian countries to exchange best practices, forge a consensus, or draft substantive rules and guidelines. This resonates with the fact that Japan, Singapore, and China have been actively participating in, and sometimes playing leadership roles within, both regional and international platforms. Notably, in November 2022 Japan hosted the Global Partnership on Artificial Intelligence (GPAI) Tokyo Summit, attended by member governments from Asia and beyond, which together adopted the “GPAI 2022 Ministers’ Declaration.” The Declaration confirms key commitments, including “protecting and promoting human-centred values and democracy that underpin an inclusive, development-oriented, sustainable and peaceful society,” “[opposing] unlawful and irresponsible use of artificial intelligence,” and “[upholding the] multistakeholder approach and … stronger cooperation between public and private actors.”Footnote 94
One potential institutional setting for inter-Asian AI governance dialogue is the Digital Economy Partnership Agreement (DEPA), a module-based collaborative framework initiated by Singapore, Chile, and New Zealand in 2020 to forge common standards and practices that will promote data-driven trade and commerce, emerging technologies, and digital inclusion.Footnote 95 The DEPA includes provisions for Parties to cooperate in “developing ethical and governance frameworks for the trusted, safe and responsible use of AI technologies,”Footnote 96 and calls on its Parties to “promote the adoption of ethical and governance frameworks that support the trusted, safe and responsible use of AI technologies (AI Governance Frameworks).”Footnote 97 Despite its “best efforts” wording, DEPA moves well beyond the e-commerce (or data flow) chapters of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership and the Regional Comprehensive Economic Partnership in providing a concrete direction, key values, and a template for AI governance.Footnote 98 These are promising first steps toward institutionalized cooperation in inter-Asian AI governance, especially considering that both China and Korea have applied to join DEPA.Footnote 99 Given its flexible institutional design and inclusive setting, DEPA may, in the long run, serve as an important institutional forum for dialogue, the exchange of best practices, the establishment of a common playbook of general principles, and cooperation among (Asian) countries in AI governance.
IV Conclusion
As AI continues to develop and permeate various aspects of society, the need for robust, contextually aware governance frameworks becomes ever more critical. This chapter has examined the dynamic interactions among key East Asian countries, highlighting their diverse regulatory approaches and the importance of normative interactions such as mutual learning, the exchange of best practices, and institutionalized cooperation. While China, Japan, Singapore, Korea, and Taiwan each exhibit their own strategies for addressing AI governance, shaped by their respective political, legal, and socioeconomic contexts, the evolving nature of AI technology necessitates continuous adaptation and alignment with shared norms and standards, at both the IAL and global levels. The increasing practice of inter-Asian regulatory cross-referencing, joint efforts, and cooperation through international forums underscores the region’s recognition of the benefits of shared standards and collaborative governance. In particular, initiatives like DEPA offer promising platforms for fostering dialogue, exchanging best practices, and developing AI governance. As East Asian countries navigate the complexities of AI governance and continue with regulatory experiments, leveraging regional institutions and facilitating normative interactions remain crucial for developing collective insights and frameworks that can serve as vital references for the global AI governance discourse.