Does the EU’s soft touch on open-source AI open the door to disinformation?
Open-source large language models (LLMs) are altering the landscape of AI development by allowing anyone to modify their underlying weights and behaviour, which could enable the creation of harmful content at scale. Unlike proprietary models such as OpenAI’s ChatGPT or Google’s Gemini, which are accessed through tightly controlled interfaces designed to prevent misuse, open-source models offer a level of accessibility that, while beneficial for innovation and transparency, also presents significant risks. Recent tests on popular open-source models from the Hugging Face platform have shown their propensity to generate credible yet harmful content, such as hate speech and misinformation.
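To illustrate the accessibility described above, the sketch below shows how little code is needed to run an open-weight model locally with the widely used Hugging Face transformers library. The model ID and prompt are hypothetical placeholders rather than references to any specific model tested; the point is simply that once the weights are downloaded, no hosted interface or moderation layer mediates the interaction.

```python
# Minimal sketch: loading an open-weight model for local, unmoderated inference.
# "example-org/open-llm-7b" is a hypothetical model ID used for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-llm-7b"  # hypothetical open-weight checkpoint

# Downloading the full weights gives the user direct control over the model;
# any safety behaviour baked in at training time can later be altered by fine-tuning.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a short news-style paragraph about ..."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Generation runs entirely on the user's own hardware: no hosted content filter,
# usage policy, or API-level refusal sits between the prompt and the output.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is the crux of the contrast with proprietary systems: their safeguards live in the hosted service, whereas an open-weight model's safeguards travel with the weights and can be removed by whoever holds them.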
The European Union’s new Artificial Intelligence Act, designed to regulate AI technologies, unfortunately does not adequately address the specific challenges posed by open-source LLMs. While proprietary AI tools will be regulated under the new framework, open-source models that are not monetized enjoy broad exemptions. This oversight could leave significant gaps in the regulation, allowing these technologies to be misused in ways that threaten public safety, democratic discourse, and fundamental rights.
During the AI Act’s public deliberations, stakeholders including representatives from Hugging Face and GitHub highlighted the advantages of open-source AI, such as enhanced research capabilities and a more level playing field for smaller entities. However, the absence of stringent regulatory measures for open-source AI could ultimately undermine these benefits. It is crucial for the EU to reconsider its stance and close this regulatory loophole to prevent misuse of open-source AI technologies that could harm societal norms and values.
To effectively mitigate the risks associated with open-source AI, the EU must classify these models as high-risk, subjecting them to rigorous obligations similar to those imposed on proprietary models. This approach would ensure a balanced regulation that protects consumers and upholds ethical standards, while still fostering innovation and transparency within the AI sector.
Source: Op-ed: The EU’s soft touch on open-source AI opens the door to disinformation