White House Says No Limits Needed for Open Source AI – Is It the Right Call?

Artificial intelligence is a controversial technology. For some, it needs to be regulated and restricted to reduce risk — for others, it’s an invaluable tool that needs to be left to flourish.

This week, we learned that the White House is firmly on the fence in this debate, after it released a report concluding that current evidence is not sufficient to implement restrictions on AI models with “widely available weights” — or in layman’s terms, open-source AI models.

While this is great news for the open-source AI community, and for the democratization of model development, those who wanted the government to impose safeguards on AI development will likely find the decision insufficient.

The White House’s decision comes less than a year after President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence called for expert recommendations on how the risks of open-source models (models with weights publicly available) could be managed.

Key Takeaways
The White House has decided not to implement any restrictions on open-source AI for now.
This decision is great news for those who want to support open-source AI development.
Going forward, this will help open-source AI compete against proprietary AI.
Other commentators have criticized the government for adopting a "wait and see" approach in the face of significant risks.
AI Regulations: Damned if You Do, Damned if You Don’t
The debate on AI regulation is unforgiving. On one hand, overregulating open-source model development could slow research progress, hand proprietary AI providers greater control of the market, and push innovation toward rival states such as China.

On the other hand, as the White House report concedes, limited regulation on the weights of certain foundation models could create risks to national security, safety, and privacy, due to the lack of oversight or accountability.

For instance, an open model will likely have weaker content moderation than a proprietary model, making it easier to misuse, whether to generate misinformation or deepfakes, or even to launch automated cyberattacks.

So far, many in the AI industry have received the decision warmly.

Sebastian Gierlinger, VP of Engineering at Storyblok, told Techopedia: “The U.S. government’s position that open-source AI projects will not require new restrictions is broadly welcome.

“The danger of applying too many new rules too quickly is that it will have a chilling effect across the industry which [will] severely constrain adoption of AI and inhibit innovation.”

Gierlinger noted that the announcement should provide a short-term boost to the AI sector, but added that confidence in the technology remains reliant on AI companies acting transparently and ethically.

“A high-profile case of an AI product being misused could lead to a public backlash that will potentially make businesses more reluctant to use consumer-facing AI tools. It would therefore be wise for companies within the AI open-source community to not see the US government’s position as a blank cheque,” Gierlinger said.
