Tracking national commitments with the Global Index on Responsible AI

As the development and use of AI systems rapidly expands across the globe, the need to cultivate a deeper understanding of what is required to govern these technologies in a manner that protects and upholds human rights is critical.
Authors: Kelly Stone, Director of Research and Capacity Building, Global Index on Responsible AI; Rachel Adams, Principal Investigator and Director, Global Index on Responsible AI; Nicolás Grossman, Deputy Project Director, Global Index on Responsible AI

The adoption of UNESCO’s Recommendation on the Ethics of Artificial Intelligence (Recommendation) in November 2021 established a worldwide ethical standard, marking a significant first step towards building a global governance regime for AI. However, the challenge that lies ahead is ensuring these principles are translated into tangible actions that can be implemented at a national level, which is where they can - and certainly will - have the most impact. 

In many ways, UNESCO anticipated this challenge when it developed the Readiness Assessment Methodology (RAM), a tool that collects national-level data on the capacity of Member States to implement the Recommendation. However, having the capacity to implement the Recommendation does not necessarily mean there will be a commitment by Member States to do so. Therefore, having a measurement system in place to track countries’ commitments and capacities to advance responsible AI is one way to hold countries accountable, by monitoring progress at a national level against a series of human rights-based and ethical principles. 

What is the Global Index on Responsible AI? 

Accordingly, the Global Index on Responsible AI (the Global Index) is a new tool that aims to do this by providing a comprehensive, independent and reliable set of benchmarks to measure - and compare - countries’ commitments to building responsible AI ecosystems over time. 

In this regard, the Global Index rests on four basic assumptions: 

  1. Without strong national and international regulation and governance, AI can negatively impact individual and collective rights and freedoms.  

  2. Without adequate safeguards that are meaningfully implemented, AI can be used for purposes that undermine democracy.  

  3. AI can play a role in advancing progress toward the realisation of development goals, such as those outlined in the SDGs, if appropriate safeguards are in place. 

  4. For AI to benefit everyone in society, broader social, technical, environmental, economic and political conditions need to be in place. 

In September 2023, a pilot of the Global Index was completed in 10 countries: Costa Rica, Kenya, Jamaica, Canada, Serbia, India, Sri Lanka, Palestine, Burkina Faso, and Georgia. Preliminary findings from the pilot study validated the Global Index’s approach and demonstrated its proof of concept, giving the green light to start full data collection in November 2023.  

Data collection is currently being driven by a Global Research Network, composed of more than 140 independent country researchers and 11 Regional Hubs positioned at leading AI technology research organisations around the world. All of the information collected, which consists mostly of primary data, will be made available online and published as open data.  

The 1st Edition of the Global Index will be published in June 2024 and will significantly raise the profile of countries that have been largely excluded from global discussions on AI. In this regard, the Global Index will also be a key mechanism for addressing significant data gaps on responsible AI and for incorporating the knowledge, wisdom and insights of experts from a wide range of countries.  

What exactly does the Global Index measure? 

The Global Index assesses steps both state and non-state actors have taken in relation to responsible AI across three dimensions, nine sub-dimensions, and twenty-nine thematic areas, which are then measured across four pillars: frameworks, government actions, activities of non-state actors, and responsible AI environment.  

Each pillar is built from composite indicators, some derived from primary data and others from secondary data, and the results are aggregated into an overall score. The goal is to develop a comparative assessment of the efforts countries are taking to promote and ensure responsible AI, and to rank countries fairly on that basis. Country-level assessments will take into consideration the respective positioning, interest and capacity of each country to take concrete actions towards responsible AI.  
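To make the aggregation step concrete, the minimal Python sketch below shows how composite indicator scores could, in principle, be rolled up into pillar scores and an overall country score. The four pillar names follow the article; the indicator names, values, weights and the simple weighted-average scheme are illustrative assumptions only, not the Global Index’s published methodology.

```python
# Illustrative sketch only: indicator names, values, weights and the
# weighted-average aggregation are assumptions for demonstration purposes,
# not the Global Index's actual scoring methodology.
from typing import Dict

# Hypothetical composite indicator scores (0-100), grouped by pillar.
pillar_indicators: Dict[str, Dict[str, float]] = {
    "frameworks": {"national_ai_policy": 72.0, "data_protection_law": 65.0},
    "government_actions": {"oversight_bodies": 58.0, "public_sector_guidance": 41.0},
    "non_state_actors": {"civil_society_initiatives": 63.0, "industry_initiatives": 49.0},
    "responsible_ai_environment": {"skills_and_research": 55.0, "infrastructure": 47.0},
}

# Hypothetical pillar weights (sum to 1.0).
pillar_weights: Dict[str, float] = {
    "frameworks": 0.3,
    "government_actions": 0.3,
    "non_state_actors": 0.2,
    "responsible_ai_environment": 0.2,
}


def pillar_score(indicators: Dict[str, float]) -> float:
    """Average the composite indicators within a pillar (equal weights assumed)."""
    return sum(indicators.values()) / len(indicators)


def overall_score(by_pillar: Dict[str, Dict[str, float]],
                  weights: Dict[str, float]) -> float:
    """Weighted average of pillar scores, giving a single country-level score."""
    return sum(pillar_score(ind) * weights[p] for p, ind in by_pillar.items())


if __name__ == "__main__":
    for pillar, indicators in pillar_indicators.items():
        print(f"{pillar}: {pillar_score(indicators):.1f}")
    print(f"overall: {overall_score(pillar_indicators, pillar_weights):.1f}")
```

In this toy setup the overall score is simply a weighted mean of pillar means; the real index may weight, normalise or combine indicators quite differently.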

The Dimensions and Sub-Dimensions of the Index include the following: 

  • Dimension 1 - Responsible AI Governance 

This dimension measures the extent to which countries are establishing and implementing key governance tools for responsible AI. 

  • Sub-dimension 1: Enabling Policies 

  • Sub-dimension 2: Rule of Law 

  • Sub-dimension 3: Technical Standards 

  • Sub-dimension 4: Technology-Specific Regulation 


  • Dimension 2 - Human Rights & AI  

This dimension measures steps countries are taking to protect fundamental human rights and freedoms implicated by AI.  

  • Sub-dimension 1: Civil and Political Rights 

  • Sub-dimension 2: Social and Economic Rights  


  • Dimension 3 - National Responsible AI Capacities 

The third dimension measures whether the competencies required to advance responsible AI exist and are being met at a national level.  

  • Sub-dimension 1: Competencies 

  • Sub-dimension 2: Investments 

  • Sub-dimension 3: Institutions 


How does it relate to the UNESCO Recommendation on the Ethics of AI? 

The underlying premise of the Global Index is that while AI offers many potential benefits for human development, these benefits are not distributed equally across countries, and are especially out of reach for those located in the so-called Global South, or Majority World. As such, not only can the potential benefits of AI not be justly realised without broader social, political, and technical conditions being in place, but the potential harms cannot be adequately mitigated in the absence of legal safeguards and deliberate action from state and non-state actors. This aligns closely with UNESCO’s Recommendation and was instrumental in identifying the areas of responsible AI governance the Global Index would need to include. In turn, the Global Index will offer a tool for inter-governmental actors and other interested parties to monitor implementation of the UNESCO Recommendation at a national level in a way that extends beyond the existence of legal frameworks and delves into the complexity of implementation by assessing the actions of government and the relevant activities of non-state actors. 

Further, the conceptual framework of the Global Index aligns with, and in some respects augments, existing standards and principles on responsible and ethical AI. In particular, the framework aligns with the UNESCO Recommendation and the Organisation for Economic Co-operation and Development (OECD) AI Principles. It also builds on the international human rights canon, establishing human rights-based standards for responsible AI across core human rights groupings, including civil and political rights, socio-economic and cultural rights, children’s rights, labour rights and environmental rights. In this way, the Global Index on Responsible AI can serve as a key tool to monitor the implementation of globally established human rights-based norms and standards for responsible AI.  

How can the Global Index be used to support global governance of AI? 

In addition to monitoring implementation of the UNESCO Recommendation, the Global Index can support international coordination and cooperation on equitable AI development around the world, including by identifying regional and cross-regional capacity gaps. The tool can also support the advancement of interoperable global regulation of AI by setting human rights-based standards for responsible AI and measuring all countries’ efforts toward meeting them.  

Finally, the hope and overall objective of conducting a global study on responsible AI is that with greater knowledge about how societies around the world are using and governing AI, better decisions can and will be made at the level of global governance. 

For more information and updates on the Global Index, be sure to visit   


The ideas and opinions expressed in this article are those of the authors and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.