Using the RAM as a tool to ensure trustworthy AI

Author: Saida Belouali, Professor of AI Ethics, Mohammed First University and lead expert of the RAM exercise, Morocco
UNESCO's Recommendation on the Ethics of AI: Beyond the "List Effect"
UNESCO's Recommendation on the Ethics of AI is a global normative framework grounded in an ethical imperative: a set of values and principles meant to guide behavior.
Applied to AI, this framework reveals itself as a holistic and systemic instrument, not merely a list of principles with no mechanisms to guide action. Operationalized through the Readiness Assessment Methodology (RAM), it allows states to approach the deployment and development of so-called trusted AI. This process helps us understand how to move from an ethical imperative and a set of principles to developing and using AI with confidence.
Operationalization and the implementation of prescriptions
Operationalization is the process of embedding principles and values in institutional and regulatory provisions. The RAM helps to identify where ethical principles must be integrated to ensure the responsible development and use of AI. The operationalization of the ethical imperative is made possible through the assessment of AI readiness.
The RAM's qualitative and quantitative indicators enable a mapping of the country's ecosystem and an assessment of AI readiness, identifying strengths and potential while highlighting areas that need improvement and development. These identified areas are where states should imprint ethical principles and implement prescriptions.
In conducting the RAM exercise, it can sometimes be challenging for actors and stakeholders to understand the link between ethics and law: these are two normative universes that function differently. Ethics is a form of deliberation or reasoning that frames action and the choices to be made when faced with an ethical issue (for example, biased data or sensitive data). It is important to note that, for it to be ethical, the reasoning performed by individuals (moral agents) must draw not only on ethical principles but also on normative rules and laws.
Ethical deliberation and regulation?
It is up to humans to evaluate risk situations, contextual elements, and existing regulations to make appropriate decisions. This is the expression of the idea of "human-centeredness". An important dimension of any ethical deliberation is the mobilization of normative rules shared with the community to which the individual belongs (designer, developer, responsible person, etc.). A choice can only be ethical if it conforms to the laws and normative rules in force and within "just institutions," as Paul Ricoeur would say.
Therefore, the legal dimension is crucial in the philosophy of the Recommendation. The implementation of its normative framework cannot be accomplished without considering the regulatory aspect. Laws are consensuses: today's practices determine tomorrow's norms. But for practices to become norms, the communities concerned must engage in debate and legislate to decide their position on AI regulation.
The point is not so much to create new laws or adapt existing ones as, above all, to be aware that AI regulation, whatever the mechanism chosen, is an urgent requirement for states. Because these technologies move so fast, the longer we delay collective reflection, the greater the societal risks.
The ethical imperative expressed through regulations
The ethical imperative should, in this case, be expressed through adaptive regulation, especially because the phenomenon of AI is, on the one hand, very recent and, on the other, quite complex. Committing to the path of AI regulation through responsible governance is crucial for all states. Delaying reflection, legislation, and debates on this subject risks compromising countries' choices or even their geopolitical positioning or sovereignty. Each country should preserve its freedom to balance between innovation and regulation, taking into account its own choices and priorities; this can only be possible through responsible and engaged debate.
The regulation of AI is certainly an evolving process given the nature of this technology. In addition to changing rapidly, AI is largely unpredictable, so we must regularly adapt to its effects. Regulation should not be synonymous with constraint or with slowing down innovation efforts. Well thought out and contextualized, it should unlock the potential of innovation while ensuring the responsible adoption and use of AI.
Responsible AI means deploying AI while preserving rights and freedoms. In the AI context, new phenomena are emerging and should be taken into account, particularly those not covered by existing laws. For example, states should question social scoring, facial recognition, and intellectual and artistic property rights. Today, artists are forced to protect their works with software tools to prevent them from being plundered by large language models (LLMs).
Countries that have committed to deploying, developing, and using AI in accordance with ethical principles should continuously check whether their tools for regulating this technology remain sufficient or whether, on the contrary, they need to be reviewed, adjusted, or updated. The aim is to ensure respect for fundamental rights and to guarantee development and use that is both responsible and transparent.
Towards Balanced Regulation
Regulators worldwide are constantly faced with the challenges of new technological revolutions, forcing them to be creative and ingenious in developing appropriate measures. These measures must not only promote technological progress but also respect the fundamental principles of law, such as limiting unintended consequences, protecting citizens, promoting fair competition among companies, and preserving digital sovereignty. This perspective underscores the complexity of the task facing regulators.
These new technologies are transitioning the world towards unexpected new models. AI raises new questions, and its impacts may bring about unforeseen consequences. Countries that commit to implementing the Recommendation are aware of the risks brought about by technologies like AI, and commit through this choice to deploy and develop AI responsibly.
The ideas and opinions expressed in this article are those of the author and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the article do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.