The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?
On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.
At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.
Meta has taken a very different approach to AI than its competitors OpenAI, Google DeepMind, and Anthropic. Those companies sell access to their AI systems through chatbot websites or programming interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor how their models are used, and bar bad actors. By contrast, Meta has chosen to open-source the “weights” of its Llama models (the numerical parameters that encode everything the underlying neural networks have learned), meaning anybody can freely download the models and run them on their own machines. That strategy has put Meta’s competitors under financial pressure and has won the company many fans in the software world. But it has also drawn criticism from many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.
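In concrete terms, “open weights” means anyone with suitable hardware can fetch the full model from a public repository and run it locally, with no API key and no ongoing oversight from Meta. The sketch below illustrates the idea using the widely used Hugging Face transformers library; the specific model ID and setup are illustrative assumptions, and the actual download is gated behind accepting Meta’s license terms.

```python
# A minimal sketch of what "open weights" means in practice: the full set of
# model parameters is downloaded to the local machine and inference runs there,
# with no API call back to Meta. Assumes the `transformers` and `accelerate`
# packages and a Hugging Face account that has accepted Meta's Llama license;
# the model ID below is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # gated behind Meta's license

# Download the tokenizer and the model weights to local storage.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Generate text entirely on local hardware.
inputs = tokenizer("Open-source AI means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```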
In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.”
But while Zuckerberg’s letter presents Meta as being on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws restricting the development of potentially dangerous AI, even if doing so hampers some innovation. And several pieces of AI legislation around the world, including California’s SB 1047 and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to turn public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of the AI safety group Control AI. “Including catastrophic outcomes.”
The philosophical underpinnings of Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem, in which Meta’s models become the industry standard thanks to their customizability and lack of constraints, will benefit both Meta and those who rely on its models, harming only rent-seeking companies that aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than competing models, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems, and suggests that their warnings about the dangers of open-source, and their lobbying of governments against it, may stem from this conflict of interest.
Addressing worries about safety, Zuckerberg writes that open-source AI will be better than the closed alternative at mitigating “unintentional” harms, because transparent systems are more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.
But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”
Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or, in other words, strengthen attackers while doing little to help defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”
Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”
Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”