<March 30, 2024> Today, I submitted feedback on the Interim Report: Governing AI for Humanity, published by the UN's Advisory Body on AI.
I applaud the work of the Advisory Body in articulating many clear and sound policies and goals for the future of AI. In particular, I commended the Advisory Body for recognizing the imminent need for global AI governance. My prior blog post on this topic covers many of the same points made in the Report. To institute a meaningful global governance framework, we will need to narrowly scope the framework’s governance functions, which in turn requires separating policy formulation from governance functions. My comments describe how to separate the two and identify the governance functions that, in my view, should be prioritized. Specifically, I recommended the Advisory Body focus on (i) a representative and inclusive model for stakeholder voting, (ii) the standard or standards, developed in other consensus bodies, that articulate “accepted” red lines for AI, and (iii) a framework for reporting and compliance enforcement.
Separating Policy and Governance Functions
Policy decisions can be developed by numerous entities, including organizations within the United Nations. The Report identifies the need for broader access to data, technology, and talent. Those are worthy policy goals, but they should not be the focus for implementing the institutions needed for AI governance. The Report properly notes that any governance structure must be inclusive of many different groups and interests. It must also be capable of building a global enforcement mechanism. Separating the policy goals from the governance goals will help the Advisory Body develop and implement a global AI governance structure more efficiently and effectively.
Table 1 of the Report lists 15 subfunctions that the 7 identified institutional functions should implement for AI governance. In my view, only subfunctions 7 (inclusive participation), 8 (convening, international learning), 9 (international coordination), 10 (policy harmonization, norm alignment), 11 (standard setting), 12 (norm elaboration), 13 (enforcement), and 15 (monitoring and verification) are clearly needed for global AI governance. Subfunctions 1 (scientific assessment), 2 (horizon scanning), 3 (risk classification), 4 (access to benefits), 5 (capacity building), and 6 (joint R&D) are policy or research functions for informing policy decisions, and they can be developed in a number of other organizations, including UN organizations. Whether subfunction 14 (stabilization and response) should be implemented under the auspices of a global AI governing body will remain unclear until the global AI governance framework is better defined.
In my view, the most critical functions that a global AI governing body must implement are securing global acceptance of certain critical AI standards and enforcing those standards. The Advisory Body aptly notes:
Another category of risk concerns larger safety issues, with ongoing debate over potential “red lines” for AI — whether in the context of autonomous weapon systems or the broader weaponization of AI. There is credible evidence about the increasing use of AI-enabled systems with autonomous functions on the battlefield. A new arms race might well be underway with consequences for global stability and the threshold of armed conflict. Autonomous targeting and harming of human beings by machines is one of those “red lines” that should not be crossed.
Identifying those and other red lines will be key. For guidance, we could look to the ban on human cloning or the non-proliferation of nuclear weapons; there are lessons to be learned from how those “red lines” have been policed and enforced.
An international ban on human cloning has remained elusive for decades. That may be because there is both a scientific and a philosophical component at play. As suggested in the literature, a more bottom-up, consensus-driven approach may have enabled a standard to emerge that banned or restricted certain aspects of human cloning.
Similarly, red lines regarding nuclear weapons have been difficult to enshrine in international law. The failure of the 2022 NPT Review Conference (RevCon) to adopt substantive recommendations may be a case in point. Notwithstanding that failure, 147 countries signed on to a Joint Humanitarian Statement related to the non-proliferation of nuclear weapons, and the International Committee of the Red Cross (ICRC) made concrete proposals to reduce nuclear risks.
These “softer” approaches may serve as reasonable examples of how red lines can be identified. What they do not demonstrate, however, is a viable global enforcement mechanism.
Assuming one or more viable and representative communities, such as ISO and others working on responsible AI policies, can agree on those red lines, there needs to be a way to identify AI programs that cross them and an ability to shut those programs down. As noted above in connection with subfunction 14, it is unclear whether the shut-down switch needs to be controlled exclusively by the global AI governing body or whether some other authority or collection of authorities could be tasked with that responsibility.
Prioritizing Governance Functions
AI is being developed and commercialized at a pace unlike that of any prior emerging technology, so global governance cannot wait for decades, or even years, before it is implemented, or, worse, abandoned. For this reason, I urged the Advisory Body to focus strictly on adopting and harmonizing the most critical standards (limited in scope to specific red lines) that have gained widespread consensus in other institutions. In view of the many organizations already developing important AI policies and standards, any global AI governing body should avoid developing its own unique standards.
Adoption of policies and standards implies a voting structure. That structure must be agile, so that vetting, harmonizing, and approving standards does not take years, and it must be globally inclusive. Governments, industry, academia, interest groups, and other stakeholders must all have an opportunity to weigh in on any enforceable standard, but no entity, individual, or stakeholder group should effectively have a veto.
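To make the no-veto requirement concrete, here is a minimal sketch, in Python, of one way such a rule could work. It assumes a hypothetical two-tier structure in which votes are tallied within each stakeholder group and a standard is adopted when a supermajority of groups approve; every group name and threshold below is illustrative and not drawn from the Report.

```python
from collections import defaultdict

# Illustrative thresholds -- not drawn from the Report.
GROUP_MAJORITY = 0.5          # a group approves if more than half its members vote yes
GROUP_SUPERMAJORITY = 2 / 3   # adoption requires at least 2/3 of groups to approve

def standard_adopted(ballots):
    """Decide adoption under a hypothetical two-tier rule.

    ballots: iterable of (stakeholder_group, approves) pairs. Votes are
    tallied within each group; the standard is adopted when at least a
    supermajority of groups approve, so no single group holds a veto.
    """
    tallies = defaultdict(lambda: [0, 0])  # group -> [yes votes, total votes]
    for group, approves in ballots:
        tallies[group][1] += 1
        if approves:
            tallies[group][0] += 1

    approving = sum(1 for yes, total in tallies.values()
                    if yes / total > GROUP_MAJORITY)
    return approving >= GROUP_SUPERMAJORITY * len(tallies)

# Example: industry votes no as a bloc but cannot block adoption alone.
ballots = [
    ("governments", True), ("governments", True), ("governments", False),
    ("industry", False), ("industry", False),
    ("academia", True), ("academia", True),
    ("civil_society", True), ("civil_society", False), ("civil_society", True),
]
print(standard_adopted(ballots))  # True -- 3 of 4 groups approve
```

Under a rule of this shape, every stakeholder group is heard through its internal tally, yet no single group can block adoption on its own, which is the balance my comments urged.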
The Advisory Body should also consider how a global AI governing body could initiate a reporting and compliance program to enable enforcement of these critical standards. There needs to be a fair and equitable process that considers the reasonable expectations of secrecy held by both governments and commercial entities, as well as the privacy and safety interests of other AI users and developers. These reasonable expectations must be balanced against the legitimate interest in ensuring that red lines are not crossed. I strongly encouraged the Advisory Body to prioritize developing a framework for the reporting and compliance function.
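The Report does not prescribe a reporting mechanism, but as a purely illustrative sketch of the secrecy-versus-verification balance just described, a compliance filing could pair a public attestation with only a cryptographic digest of the confidential evidence. Sensitive details stay secret, yet the governing body can later confirm that the evidence it reviews matches what was filed. All names and fields below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from hashlib import sha256

@dataclass(frozen=True)
class ComplianceReport:
    """Hypothetical red-line compliance filing (illustrative only).

    The public record carries the attestation plus a SHA-256 digest of
    the confidential evidence package, so a government or company can
    withhold sensitive details while the governing body can later verify
    that the evidence it reviews matches what was originally filed.
    """
    filer: str             # reporting government or commercial entity
    red_line: str          # e.g., "no autonomous targeting of humans"
    attests_compliant: bool
    filed_on: date
    evidence_digest: str   # SHA-256 of the confidential evidence

    @staticmethod
    def file(filer: str, red_line: str, attests_compliant: bool,
             evidence: bytes) -> "ComplianceReport":
        return ComplianceReport(
            filer=filer,
            red_line=red_line,
            attests_compliant=attests_compliant,
            filed_on=date.today(),
            evidence_digest=sha256(evidence).hexdigest(),
        )

# Example: file publicly, then verify against the confidential package.
report = ComplianceReport.file(
    "ExampleLab", "no autonomous targeting of humans", True, b"<evidence>"
)
assert report.evidence_digest == sha256(b"<evidence>").hexdigest()
```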
Any global enforcement of compliance will require buy-in from the vast majority of stakeholders. Because we have few, if any, proven models of an effective global enforcement mechanism, it is only reasonable to scope the compliance issues as narrowly as possible. Notwithstanding that narrow scope, the global AI governing body may also have a role in harmonizing and adopting other standards and good practices where those standards and practices are encouraged through non-legally binding mechanisms rather than legally binding ones. This could also be an area of focus for the Advisory Body, but only after it has studied and recommended (i) a representative and inclusive model for stakeholder voting, (ii) the standard or standards, developed in other consensus bodies, that articulate “accepted” red lines for AI, and (iii) a framework for reporting and compliance enforcement.
Because AI is developing so rapidly, the technology, policy, and legal issues relating to AI will also change rapidly. Any global governance structure will need to be organizationally fluid: the structure itself should be re-evaluated frequently and modified as new needs and issues arise.
In sum, while applauding the goals and guiding principles described in the Report, I urged the Advisory Body to separate the policy and governance functions and to prioritize key governance functions. As explained above, I believe the most important governance functions for the Advisory Body to explore are: (i) stakeholder grouping and voting, (ii) approval of one or more red-line policies, and (iii) reporting and compliance.