
Vinod Khosla, Marc Andreessen debate merits of open-source AI


Vinod Khosla and Marc Andreessen, both founders turned investors, spent part of their weekend debating each other over whether the pursuit of artificial general intelligence—the idea that a machine could become as smart as a human—should be open source.

The debate kicked off with a post from Khosla praising OpenAI and Sam Altman, the company’s CEO.

“We have known @sama since the early days of @OpenAI and fully support him and the company,” Khosla wrote. “These lawsuits are a massive distraction from the goals of getting to AGI and its benefits.” 

Andreessen responded to Khosla’s message by accusing him of “lobbying to ban open source” research in AI. 

Andreessen seemed to take issue with Khosla’s support for OpenAI because the company has walked away from its previous open-source ethos. Since the recent boom in generative AI, Andreessen has been a vocal supporter of open-source AI, advocating it as a safeguard against a select few Big Tech firms and government agencies controlling access to the most cutting-edge AI research. 

Both in this debate and in the past, Andreessen has been dismissive of the concerns raised by some of AI’s biggest critics. He has previously chalked up these worries to fears of disruption and uncertainty rather than the technology being malicious in and of itself—a point he reiterated in his exchange on X.  

“Every significant new technology that advances human well-being is greeted by a ginned-up moral panic,” Andreessen posted on X. “This is just the latest.” 

Khosla, on the other hand, tends to look at AI through a geopolitical and national-security lens rather than through a strictly entrepreneurial one. In the past, Khosla has warned he believes AI competition between the U.S. and China will escalate into a “techno economic war.” At Fortune’s Brainstorm Tech conference in December, Khosla said the U.S. and China’s AI arms race would ultimately decide which of the two superpowers would exert political influence over the world. 

In responding to Andreessen’s claim that he isn’t in favor of open source, Khosla said the stakes were too high. 

“Would you open source the Manhattan Project?” Khosla replied to Andreessen. “This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans.” 

The back-and-forth between Khosla and Andreessen saw the two opine on Sam Altman, OpenAI’s lawsuits, and Elon Musk, who chimed in himself at one point. The debate also explored whether anyone should be allowed to pursue any form of AI research, or whether its most advanced versions should be reserved for the government. So while it may have seemed like just some online sniping among a group of extraordinarily successful Silicon Valley entrepreneurs, it contained a microcosm of the ongoing and critical debate around open-source AI. 

Ultimately, neither camp wants to ban open- or closed-source research outright. But part of the debate around limiting open-source research hinges on concerns it is being co-opted as a bad-faith argument to ensure regulatory capture for the biggest companies already making headway on AI—a point that legendary AI researcher and Meta’s chief AI scientist Yann LeCun made when he entered the fray on X. 

“No one is asking for closed-source AI to be banned,” LeCun wrote. “But some people are heavily lobbying governments around the world to ban (or limit) open source AI. Some of those people invoke military and economic security. Others invoke the fantasy of existential risk.”

Elsewhere in Silicon Valley, famed angel investor Ron Conway asked leading AI companies to sign a pledge committing to “building AI that improves lives and unlocks a better future for humanity.” The letter has so far enlisted the likes of Meta, Google, Microsoft, and OpenAI as signatories.

Andreessen, sticking with Khosla’s Manhattan Project analogy, raised concerns about OpenAI’s safety protocols. He believes that without the same level of security that surrounded the Manhattan Project—such as a “rigorous security vetting and clearance process,” “constant internal surveillance,” and “hardened physical facilities” with “24×7 armed guards”—OpenAI’s most advanced research could be stolen by the U.S.’s geopolitical rivals. 

“In fact, what we see is the opposite—the security equivalent of swiss cheese,” Andreessen wrote on X. “Chinese penetration of these labs would be trivially easy using any number of industrial espionage methods, such as simply bribing the cleaning crew to stick USB dongles into laptops. My own assumption is that all such American AI labs are fully penetrated and that China is getting nightly downloads of all American AI research and code right now.”

Andreessen, though, appears to have been doing more of a thought exercise than arguing a point, writing in response to his own post, “of course every part of this is absurd.” 

Elon Musk enters the debate to criticize OpenAI’s security

At this point, OpenAI co-founder Elon Musk chimed in. 

“It would certainly be easy for a state actor to steal their IP,” Musk replied to Andreessen’s post about security at OpenAI.  

Khosla, too, made mention of Musk, calling his decision to sue OpenAI “sour grapes.” Last week, Musk filed a lawsuit against OpenAI, alleging it breached the startup’s founding agreement. According to Musk, OpenAI’s close relationship with Microsoft and its decision to stop making its work open source violated the organization’s mission. OpenAI took a tack similar to Khosla’s, accusing Musk of having “regrets about not being involved with the company today,” according to a memo obtained by Bloomberg.

Musk responded by saying Khosla “doesn’t know what he is talking about” regarding his departure from OpenAI in 2018.

Khosla’s venture capital firm Khosla Ventures is a longtime backer of OpenAI. In 2019, Khosla Ventures invested $50 million into OpenAI. As such, he didn’t take kindly to Musk’s lawsuit. “Like they say if you can’t innovate, litigate and that’s what we have here,” Khosla wrote on X, tagging both Musk and OpenAI. 

With Musk now involved, the debate continued. Khosla remained adamant AI was more important than the invention of the nuclear bomb and therefore couldn’t afford to be entirely open source—though he did agree with Musk and Andreessen that its top firms should have more rigorous security measures, even relying on the government for assistance. 

“Agree national cyber help and protection should be given and required for all [state of the art] AI,” Khosla wrote. “AI is not just cyber defense but also about winning economically and politically globally. The future of the world’s values and political system depends on it.”

Despite his reservations about making all of AI research open source, Khosla said he did not want development to halt. “[State of the art] AI should not be slowed because enemy nation states are orders of magnitude bigger danger in my view,” Khosla said in response to Andreessen. 

But Khosla and Andreessen did find some common ground on the question of AI alignment, which refers to the values, principles, and ethics that AI models are built and trained to follow. Khosla wondered which groups would get to decide how AI is aligned, before Andreessen chimed in with his own suggestion.



