July 30, 2024. Today the NTIA released its report on Dual-Use Foundation Models with Widely Available Model Weights. The report makes many common-sense recommendations, especially with regard to collecting evidence. It promotes (i) encouraging standards and, where appropriate, measures aimed at conformance; (ii) research into the “safety, security, and trustworthiness of foundation models and high-risk models, as well as their downstream uses”; (iii) research into capabilities and risk-mitigation strategies; and (iv) developing and maintaining risk metrics.
In my previous posts I discussed the importance of global AI governance based on the development of standards and on rational ways to assess compliance with them, while balancing reasonable and legitimate expectations of secrecy and confidentiality for an AI tool’s data, code, documentation, and model weights. I have also strongly supported the idea that access to AI tools and their underlying IP should be based on specific use cases and risk profiles.
Although I find the NTIA report largely helpful and its conclusions sensible at this time, I am concerned that some of the positive statements it makes about open models may be misconstrued or taken out of context in harmful ways. Promoting openness by condemning licensing models that do not meet various definitions of “open” could inhibit innovation, and a one-size-fits-all licensing model should not be advanced for many use cases or risk profiles. Pejorative terms such as “open-washing” have been applied to AI tools that are open enough to allow assessment of a model’s capabilities and risks, simply because the developer’s licensing scheme does not meet certain definitions of “open.” For example, Open RAIL licenses are certainly open, yet they do not meet the definition of open source promoted by the Open Source Initiative (OSI). Moreover, publication of model weights, at least in the US, may eliminate IP rights in those weights even though substantial investments may have been made to produce them. Licensing model weights in a way that preserves their confidentiality while allowing access to a limited number of evaluators may strike a reasonable balance in some cases.
Competition drives innovation, and competition among business models can be just as important in promoting innovation as competition among products. Dividing AI tools into “good” and “bad” based on whether a tool is “open” or “closed” pressures developers to forgo business models they need in order to compete, to reasonably promote safety and security, and to address other legitimate issues unique to AI capabilities and risks.
Such divisions have been problematic in the past. For example, entities with different business models have long fought over whether standards-essential patents should be licensed for free or for reasonable royalties. Companies with service-oriented businesses promote free or low-cost IP licenses for the infrastructure needed to operate their services, while the companies that develop that IP or infrastructure promote licensing regimes that enable them to profit from it. These differences have led to never-ending debates without compromises or solutions.
I hope we can learn from the past and avoid similarly unhelpful divisions as we address emerging AI-related risks. Not every aspect of every AI tool needs to be wide open to everyone’s assessment, analysis, and use in order to promote access for risk management or innovation. I encourage anyone interested in AI and open models to review the NTIA report itself rather than rely on characterizations touting only the benefits of unrestricted open AI tools.