
A recent New Yorker article by Joshua Rothman reporting on the chaos at OpenAI in late 2023 highlights speculation that there had been tension between the AI “accelerationists” and the AI “doomers.”  The article does a fair job of noting that most experts fall somewhere between these two camps.  Nonetheless, there are constant reminders in the media, and from noteworthy experts, about the potential for AI to become an existential threat to humankind.  In the shorter term, there are concerns about AI amplifying fake news, eliminating jobs, infringing on intellectual property rights, and being used for illicit or unethical purposes.  So we need to ask: How can we govern AI’s development and use?

Possible answers to this question need thoughtful consideration because we do not know what AI will look like in a few years, let alone decades into the future.  What we do know is that AI standards are going to be key.  For years the high-tech industry largely focused on interoperability, security, and privacy in the form of technology standards.  For AI, these technical standards will be important, but other types of standards will be at least as important, if not more so.  For example: How do we eliminate bias?  What ethical constraints should we consider essential?  How do we control military use of AI, and should we?  Equally important as adopting appropriate standards is creating a global enforcement mechanism so that we all play by the same rules.

I believe we need to do three things: 

  • Standards Setting.  Create a single global body to vet and approve AI standards developed by a myriad of current and newly forming standards development organizations. Let’s call these standards “Approved Standards.”  
  • Certification. Create and implement a process to test and certify AI applications as conformant with all relevant Approved Standards. 
  • Enforcement. Create and implement a system that enforces conformance.

All three tasks are extremely complicated.  First, we have to consider the evolving nature of AI and how rapidly the technology and its uses are advancing.  We also have to understand how the technology can be used to address different, and in some cases competing, problems; a solution to one problem could create a whole set of new problems.  Problems and solutions may be viewed differently based on culture, geography, and other characteristics of individuals or groups.  Given these complexities, it will be very difficult to complete even one of these tasks.  And when a task is so difficult, we often suffer from paralysis or procrastination.

We cannot leave these tasks up to the world’s governments, or even a few highly influential governments.  Governments are too political and too slow, and generally do not have the expertise needed. That said, government support and participation will be critical.  Much of the expertise will come from the private sector, academia, and independent thought leaders.  A global partnership among universities, research institutions, private sector companies, governments and other thought leaders is needed.  

I am not aware of any existing global model that is appropriate for any of the three tasks.  The UN is a good example of an organization with global participation, but it has little practical impact.  Organizations like ANSI in the US and international bodies like ISO can vet and approve standards, but they are not truly open (participation is costly) or balanced (there is no neutral arbiter), and their processes are influenced by those who have gained the most power within them rather than by the best ideas.

There are also many organizations that have developed testing and certification processes for standards conformance.  Most of these organizations, however, have no processes equipped to handle either the scope of AI or the rapid changes in AI standards and applications.  UL may be able to expand to cover the scope of testing needed, but its governance frameworks should be evaluated to assess whether they are agile enough to respond to the rapid changes in AI technology and its associated policy landscape.

We have no organizations that can enforce standards on a global basis. Individual countries would have to agree to abide by enforcement requirements and be held accountable for failure to do so.  Such an organization would be useful for climate issues and other global concerns, not just AI.  Notwithstanding its usefulness, forming such an organization seems nearly impossible.  As Nelson Mandela said: “It always seems impossible until it’s done.”  

In the past few years there have been numerous summits bringing together a broad spectrum of AI stakeholders and experts, ostensibly focused on AI governance.  The takeaways from these meetings are far more focused on policies or principles than on actual mechanisms or frameworks that could be used for AI governance.  It is time to tackle these challenges, and we should try.  It seems to me we can start by studying the various organizational and governance models that exist, identifying which aspects of these models might work well for each of the three tasks (and specific use cases) and which aspects will not work.  Please share your ideas and feedback by sending email to info@justechlaw.com.
