
The global governance of artificial intelligence: Next steps for empirical and normative research
This analytical essay outlines an agenda for research into the global governance of artificial intelligence (AI). It distinguishes between empirical research, aimed at mapping and explaining global AI governance, and normative research, aimed at developing and applying standards for appropriate global AI governance.

OVERVIEW
The authors observe that AI, given its transformative potential, is increasingly subject to regulatory initiatives at the global level. These regulations aim to harness and spread AI’s benefits while limiting its negative consequences. The researchers argue that there is little systematic knowledge about the nature of global AI regulation, including which interests influence this process and the extent to which emerging arrangements can manage AI’s consequences in a just and democratic manner.
Empirical research
The authors note that empirical research aims to map and explain global AI governance. In answering such questions, researchers can draw on scholarship in international law and International Relations (IR), characterising global AI governance by its regulatory mechanisms and categorising its regulatory arrangements. Research can also benefit from examining the variety of ways in which new regulation may emerge, from the reinterpretation of existing rules to the creation of entirely new frameworks. Conceptualising global AI governance along critical analytical dimensions, such as horizontal versus vertical, centralised versus decentralised, and formal versus informal, could also be beneficial.
Normative research
The normative perspective, by contrast, aims to develop and apply standards for appropriate global AI governance. The researchers suggest taking normative ideals of justice and democracy as the focal point for analysing the governance of AI. Theories of justice and democracy may need to be adapted to address how the highly asymmetric distribution of AI capabilities affects the formation of state interests and the bargaining over institutional solutions. The authors also note that non-state actors, including the tech corporations that develop cutting-edge AI technology, significantly influence these decisions, raising questions about whether different kinds of decision-making require different normative standards and what normative status such actors should hold in decision-making arrangements.
Recommendations
The report highlights four primary avenues for researchers to follow in developing a better understanding of the governance of AI. Research is required to:
- Identify where and how AI is becoming governed globally.
- Explain why AI is being governed globally in specific ways, accounting for the factors that drive and shape regulatory processes and arrangements.
- Determine what normative ideals global AI governance ought to meet.
- Evaluate how well global AI governance conforms to these normative ideals.
Qualitative and quantitative evidence
The authors find that non-state actors, particularly the tech corporations that develop cutting-edge AI technology, significantly influence decisions on global AI governance. They also discuss how the distribution of AI capabilities between producers, who are few, concentrated, and highly resourced, and users and subjects, who are many, dispersed, and less resourced, affects the formation of state interests. Finally, the authors suggest that the normative ideals of justice and democracy be used to evaluate how well global AI governance conforms to these standards.
In conclusion, the authors provide insightful guidance on mapping and differentiating between empirical and normative research into the global governance of AI. While empirical research is necessary to identify and describe global AI governance, normative research is required to propose standards and regulations for appropriate governance.