This was written for, and in advance of, the Council of Councils Thirteenth Annual Conference, Session Three: Geopolitics, Diplomacy and AI.
This background memo provides brief responses to the guiding questions outlined in the agenda. The responses provide the basis for a set of policy prescriptions directed toward national governments. The reader is assumed to have some general knowledge of artificial intelligence (AI).
What governmental capacity is needed to execute the diplomacy of managing AI?
First, governments need a firm grasp of AI’s impact on important economic sectors, on various measures of security and well-being, and on issues of global importance, including AI ethics. The “why” of diplomatic management of AI begins with an appreciation of AI’s potential benefits and risks, and of its pathways to societal and economic impact. AI offers a range of potential benefits, including improved societal well-being, global economic prosperity and an increased capacity to address critical global challenges. Conversely, AI raises challenges such as inequality, competition, copyright issues, labour-market shifts, and threats to democracy and human rights.
Second, governments need a chief AI officer position with strategic and technical responsibilities. A chief AI officer holds strategic value if empowered to tackle immediate national priorities (i.e., not just fill an information technology support role). Such a role is needed to modernize government systems, invest in AI commercialization, collaborate with the technology sector, and encourage AI adoption and skill-building. A chief AI officer can drive collaboration across departments on issues such as data privacy, cybersecurity and intellectual property, resulting in the development of more coherent policies. A dedicated AI office can also pool expert knowledge and resources while ensuring the effective implementation of policies.
Third, governments need an enhanced ability to connect with the AI science and technology community. The ability to leverage the technical domain knowledge of a community of experts is critical and urgently needed to assess frontier AI risks. Embedding science translators and facilitating knowledge transfer can boost a country’s diplomatic capacity to manage AI. Attracting and retaining AI workers in government is a critical issue, as demand for AI researchers working to advance frontier systems is growing and outpacing the supply of those involved in AI safety research. This task is not trivial: take, for example, the UN system, which is facing a severe shortage of AI expertise across its different parts.
Fourth, governments need to be engaged in the responsible use and management of AI. The tendency of AI models to fabricate results (i.e., hallucinations) is a well-known problem with significant consequences for trust. AI tools have other limitations: challenges of bias in data (with implications for border control, policing and justice systems); challenges of accuracy (including issues arising from the use of reinforcement learning and synthetic data); and challenges of privacy (including data scraping). Such limitations call for additional capacity within government to develop internal policies and controls, and to cooperate internationally to advance solutions for the responsible use of AI.
Fifth, governments need strengthened measurement and forecasting tools to capture future trajectories of AI computing and their implications. AI demand for computing power is growing exponentially. While the efficiency of AI chips is improving, the rapid rise in computing demand has critical implications for the semiconductor supply chain and the electricity sector. Notably, higher-performing computational models are driving burgeoning data loads with larger environmental footprints. The United States is already experiencing unforeseen demand for data-centre connections due to escalating AI power loads. Careful measurement, planning and analysis will be required for countries seeking to nationalize or expand access to AI computing resources while also meeting their green transition goals.
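To illustrate the kind of forecasting exercise such planning involves, the minimal sketch below projects data-centre electricity demand under assumed rates of compute growth and chip-efficiency improvement. All parameter values are hypothetical placeholders chosen for illustration, not sourced estimates.

```python
# Illustrative sketch only: every figure below is a hypothetical placeholder,
# not a sourced estimate of actual AI compute or electricity demand.

BASE_DEMAND_TWH = 10.0           # assumed current AI data-centre demand (TWh/year)
COMPUTE_DOUBLING_MONTHS = 9      # assumed doubling time for AI compute demand
EFFICIENCY_GAIN_PER_YEAR = 0.20  # assumed annual gain in compute delivered per watt


def projected_demand_twh(years: float) -> float:
    """Net electricity demand after compounding compute growth against efficiency gains."""
    compute_growth = 2 ** (years * 12 / COMPUTE_DOUBLING_MONTHS)
    efficiency_factor = (1 - EFFICIENCY_GAIN_PER_YEAR) ** years
    return BASE_DEMAND_TWH * compute_growth * efficiency_factor


if __name__ == "__main__":
    for year in range(6):
        print(f"Year {year}: ~{projected_demand_twh(year):,.0f} TWh/year")
```

Even under these placeholder assumptions, the projection shows exponential compute growth quickly overwhelming steady efficiency gains, which is why careful measurement and planning matter for both compute access and green transition goals.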
Sixth, governments need diplomacy conducted at or near an accelerated digital pace. AI-accelerated scientific discovery and innovation create a new backdrop for diplomacy and policy making. Technology is undergoing a phase transition: AI combined with synthetic biology is one example, and an array of other technologies, such as quantum, robotics and advanced manufacturing, is poised to deliver breakthrough capabilities when combined with AI. With the hyper-evolution and rapid global diffusion of technology, a new form of governmental capacity is needed to respond at or near digital pace. This could take the form of anticipatory science diplomacy, a model for diplomacy that seeks to act today to address the challenges of the future.
Seventh, governments need geopolitical perspectives informed by the national affiliations of AI model developers. Technology is a strategic asset, driving foreign policy and beneficial outcomes, but also serving as a “sharp weapon of the modern state,” as Chinese President Xi Jinping has declared. Since 2019, the United States has originated the majority of the world’s foundation models (a proxy for frontier AI research), followed by China and the United Kingdom.
How should governments upskill diplomats and public officials?
First, governments should strengthen ties to AI policy, governance and standard-setting activities, and participate in forums that are positioned to influence the development of new international norms and rules related to AI.
AI governance is a rapidly developing area. Through international participation, governments can advance their knowledge of AI topics and relevant implementation schemes. These include best practices in frontier AI governance, risk assessments and model evaluations for the responsible use of AI in diplomacy, and the establishment of oversight bodies. Furthermore, countries can rely on the expertise and models developed by standards organizations to help implement new policies and regulations, while allowing these to be regularly updated through standards rather than cumbersome legislative processes.
Second, governments should bring more AI experts into government directly from universities and other centres of expertise. An effective way to build and sustain knowledge in a rapidly evolving area is to engage directly with those working in the field. Government collaboration with experts in AI, diplomacy and international relations could more fully leverage AI’s potential. Governments can create safe spaces for high-trust skill and knowledge transfer between leading AI developers and regulators, boosting technical understanding through interchanges, visiting or sabbatical assignments, and co-op placements.
Third, governments should introduce AI learning tracks. Governments can provide short courses (for example, through industry and university partnerships linked to onboarding and annual refresher courses), with incentives for employees who complete them. To prepare for an AI-driven future in diplomacy, training should focus on the effects of AI on diplomacy (i.e., not just the basics of AI technology); which technological capabilities are likely to increase; AI ethics, responsible use and issues of global concern; and AI enhancements to public services. Online courses already exist covering topics such as the practical use of ChatGPT in diplomatic work.
Fourth, governments should increase the adoption of AI in diplomacy and in the work of public officials, in accordance with responsible-use principles. Government leadership and skill-building in technology foreign policy are strongest when governments lead by example. The spectrum of AI use cases in diplomacy, sometimes referred to as digital diplomacy, is growing: it includes assisting in data analysis (for example, analyzing global security threats), facilitating communication (for example, real-time AI-powered language translation), and providing virtual platforms for diplomatic negotiations (for example, AI-enhanced virtual reality simulations that analyze behavioural patterns and provide feedback for improvement). Governments can experiment with responsible AI applications in service delivery and diplomatic functions through select pilot projects. The findings of those pilot projects can be captured and shared widely to accelerate adoption.
What are different governments and international organizations doing?
Some states are approaching AI capacity building by establishing digital academies, appointing new technology ambassadors and chief AI officers, and directly hiring AI experts and placing them “as close to the mission as possible.” Mentions of AI in legislative proceedings worldwide nearly doubled between 2022 and 2023, with discussions of AI taking place in at least one country on every continent. Examples of international activities include the Organisation for Economic Co-operation and Development (OECD) AI Principles; the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence; the Global Partnership on AI; China’s Global AI Governance Initiative; the Group of Seven’s Hiroshima AI Process; the United Kingdom’s AI Safety Summit; and the UN Governing AI for Humanity interim report.
Are there lessons from other fields, such as global public health, for ways to build expertise into diplomatic activity?
The COVID-19 pandemic underscored the importance of a global surveillance system. Drawing from this example, AI expertise could be integrated into diplomatic activity through engagement with a global intelligence unit that assists with risk assessment, identifying AI incidents (for example, the OECD AI Incidents Monitor) and acting in response to threats. The pandemic also emphasized the need for building trust in science and scientific translation that moves policy stakeholders to act. Here, one might look to the recently formed Geneva Science and Diplomacy Anticipator (GESDA). GESDA solicits the opinions of science experts to inform a “breakthrough radar” that projects the impact of technology while also equipping leaders to act on such developments.