
August 22, 2024. The Open Source Initiative (OSI) has released its latest definition of open source AI. That definition tracks OSI's open source definition (OSD) for software, the definition OSI uses to approve licenses as "open source" licenses. While OSI has appointed itself the arbiter of what constitutes open source in the software arena, not everyone agrees that OSI should be the sole arbiter, or that the restrictive OSD is suitable for many open applications.

OSI says: "These licenses make it easy to collaborate on, share, and reuse code because they provide clarity regarding 'intellectual property' and usage rights."[1] Yet many OSD-approved licenses are anything but clear. It is unclear, for example, whether permissive licenses like the BSD or MIT licenses grant rights to the software licensor's or distributor's patents and, if so, what the scope of those grants is with respect to downstream modifications. The GPLv3, while royalty free, takes 12 pages when downloaded (11-point font) to say that it is free.

The issue is not that OSI is developing a definition for open source AI; it is that OSI will not want anyone to call their AI tools or models "open source" if the applicable licenses do not meet OSI's definition. Software developers have long been reprimanded for calling their software "open source" when their licenses deviate from the OSD. Some open source advocates have also referred to software that is not licensed under an OSI-approved license in pejorative terms.

As a result, developers and businesses have chosen OSI-approved licenses not because those licenses support their technical roadmaps or business interests, but because they believe they must market their software as "open source" and therefore must choose licenses that OSI has declared open source. Doing so, however, can mean that business interests go unmet, discouraging investment in innovation for those entities. This is not to say that OSI-approved licenses do not promote innovation in many cases, but that is simply not the whole picture.

Hindering innovation, however, may not be the only problem with criticizing companies that market their software as open source when it is not licensed under an OSD-approved license, at least where AI tools and models are concerned. There are real risks associated with open AI tools and models: they may make it easier for bad actors to access and use those tools for nefarious and harmful purposes. The NTIA has recently stated that "[M]aking the weights of certain foundation models widely available could also engender harms and risks to national security, equity, safety, privacy, or civil rights through affirmative misuse, failures of effective oversight, or lack of clear accountability mechanisms."[2]

Instead of advocating exclusively for a restrictive set of open source licenses, let's use common sense and select licenses for the various components of AI tools and models that not only align with legitimate business purposes but also support responsible use of those tools and models.

[1] See https://opensource.org/deepdive 

[2] Dual-Use Foundation Models with Widely Available Model Weights, NTIA Report, July 2024, p. 2, at https://www.ntia.doc.gov/sites/default/files/publications/ntia-ai-open-model-report.pdf
