News

We need to build agile regulation for AI

AI is at the heart of the most recent industrial revolution. Though humanity has lived through similar processes before, and many of their characteristics are known, there is one ingredient we have never experienced before: its speed.

Author: Elena Estavillo Flores - Founder and CEO of the think tank Centro-I for the Society of the Future. Harvard ALI Fellow 2023. Member of the Expert Group for UN Women CSW67 and member of Women4Ethical AI.   

AI, like other transformative digital technologies, is called an exponential technology for a reason. Exponential speed means a pace of change that increases with each step, a concept that sounds simple but can prove difficult to grasp. And change in AI seems so explosive and self-feeding that we are even reaching for concepts beyond the exponential, like tetration, or hyper-4, which refers to iterated exponentiation: that is, when the exponent itself also grows exponentially. Or, in layperson's terms, a way to write truly unimaginably big numbers.
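To make the contrast concrete, here is a minimal Python sketch of the idea (the function name `tetration` is mine, not from the article), comparing ordinary exponential growth with hyper-4 growth:

```python
def tetration(a, n):
    """Hyper-4: iterated exponentiation, a^(a^(...^a)) with n copies of a."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

# Exponential growth doubles at each step...
print([2 ** n for n in range(1, 5)])           # [2, 4, 8, 16]
# ...while tetration explodes almost immediately.
print([tetration(2, n) for n in range(1, 5)])  # [2, 4, 16, 65536]
```

Even with a base as small as 2, the fifth tetration step already has nearly 20,000 digits, which is why the concept is invoked only to name unimaginably large growth.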

The explosion of generative AI into our phones and laptops, and our daily interactions with it, has swiftly moved public opinion from the once popular "don't interfere with innovation" to the more conscious "we need to regulate".

But there are no easy answers for the who, what, and how. 

Regulators must learn from the tech industry

The exponential pace of innovation has been enabled by the learning-by-doing innovation model used by the AI industry, which follows adaptive, incremental methodologies like agile, Scrum, and continuous improvement.

Agile methodologies rely on learning, iteration, and correction in a continuous process that returns a better outcome at each round. They call for user input, decentralized decision-making, diverse teams, and collaboration.

In contrast, regulators usually work within a one-shot framework, aiming to produce a comprehensive, cutting-edge compendium of rules, without much room for correction or complement in the short run. The regulator's mindset doesn't allow for mistakes because it doesn't consider them part of the learning process.

That is why rulemaking waits for a sufficient understanding of phenomena. It is also why it typically comes late; and even the most thought-through regulations start to become outdated all too soon.

While the AI and tech industries teach developers to experiment, learn from mistakes, and iterate, regulators carry the burden of not making mistakes, since they are entrusted to act for and represent society, and not to put people at risk. A "first, do no harm" approach indeed reflects the high bar set for public institutions; but such extreme risk aversion can in fact turn into active harm when it paralyzes policymakers and regulators altogether.

In the case of AI, regulatory risk cannot be avoided. Failing to regulate carries its own risks, which many believe would be catastrophic.  

The regulation factory has to build in the possibility of making mistakes, along with mechanisms to detect them quickly, correct them, and iterate. Otherwise, it won't move at the same pace as AI. We need regulatory processes that produce fluid and adaptable rules, embedding error, correction, learning, and iteration. This requires a new regulator's mindset prepared to accept errors and learn from them, and an accompanying legal framework that will not punish but encourage this knowledge-creating process.

Regulating through and despite uncertainty

AI has been developed amid a good deal of uncertainty, "moving fast and breaking things". That is one of the reasons it has had unwanted and sometimes serious impacts, e.g., limiting access to jobs for women and other historically discriminated-against groups. Uncertainty is not going away, so the idea of waiting to regulate until we attain a deep understanding of the issue will only keep us on the sidelines watching "things" (like, say, human rights and democracy) being "broken."

Agile regulation responds to systemic uncertainty because it allows us to put rules in use, observe results, learn, and adjust.  

Moreover, agile rulemaking can be more helpful than traditional processes for risk management, by applying a strategic, deliberate approach to avoid major and catastrophic events.  

We are not starting from scratch

Applying the agile model to regulation implies that culture, mindsets, and institutions (organizations, rules, processes, etc.) need to go through major transformation efforts.

But we have been heading in that direction and there are some useful experiences to look at. 

Something can be learnt from dominant-carrier regulation, where, in many countries around the world, a set of specific rules was imposed upon incumbent telecommunications companies. In some cases (e.g., the US and Mexico) the authority established a regular review of all the obligations, in light of the rapid pace of change in this sector. One lesson is that those reviews turned out to be very burdensome and became openly adversarial processes; AI regulation needs to be lean, fluid, and collaborative. Another lesson is that, even if frequent, discrete reviews aren't flexible enough: continuous and dynamic processes are needed.

Another experience worth studying is Internet governance, where global collaboration among multiple stakeholders, with decentralized decision-making, has allowed for innovation, interoperability, and standardization. The ability to make quick, small, decentralized decisions is key to building agile regulation: not putting the whole framework in question all the time, but maintaining an open approach to making the necessary adjustments to any part of it.

Shaping economic incentives

With or without regulation, AI development is unstoppable and will continue to thrive, because digital tech is a very good business. Economic incentives are strong, and investment will keep flowing. Since the driving force of investment is commercial, ethical criteria will very seldom prevail in project development.

On the other hand, economic incentives of the same magnitude don't arise naturally for investing in public interest AI or for making it available to everybody, even though social and ethical concerns are present there.

Thus, our challenges unfold in two parallel, complementary building blocks:  

  • Regulation to prevent and eliminate damages caused by AI. 
  • Public policy to guide AI towards solving society's most pressing challenges, drive investment for public interest AI, and prevent (or ultimately bridge) the AI divide. 

Both regulation and public policy need to be open to interaction. Regulation can foster economic incentives to consider ethics, inclusion, and accountability throughout the technological cycle. Public policy with an ethical compass can direct public and private resources where they are needed, connecting economic, social, and human interests.

This is a way to rise to the challenge posed by technological exponentiality: continuous learning and adjustment, accompanied by the institutions necessary to make them possible, namely authorities and processes for agile decision-making, rulemaking, and enforcement, running at the same pace as technology.

The ideas and opinions expressed in this article are those of the author and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the article do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.