A.I. is ‘quite stupid,’ top exec at Facebook parent company Meta insists

Experts may be divided on whether superintelligent A.I. will be the source of humanity’s salvation or extinction—but according to one Big Tech executive, the technology isn’t yet smart enough to be either.

“The hype has somewhat sort of run ahead of the technology,” Nick Clegg, president of global affairs at Facebook and Instagram parent firm Meta, told the BBC’s Today Programme on Wednesday.

“I think a lot of the warnings of existential threats relate to models that don’t currently exist, so-called superintelligent, super powerful A.I. models. In other words, the vision of A.I. where A.I. develops some autonomy and agency and it can think for itself, it can reproduce itself.”

Clegg’s interview came after Meta announced that it would be making its own A.I. large language model (LLM), Llama 2, available for free to the public. Chief executive Mark Zuckerberg said the decision—which is in stark contrast to moves made by other major players including Google and OpenAI—would “unlock more progress,” although critics have argued open sourcing such technology makes it available to bad actors.

Among the critics is ChatGPT creator OpenAI, which U-turned on its own open sourcing decision in March. The company’s chief scientist and co-founder Ilya Sutskever later labeled the decision to make its LLM technology freely available “wrong” and “not wise.”

Clegg, however, said on Wednesday that he “can assert without any fear of contradiction” that Meta’s LLMs are “safer than any of the other A.I. LLMs which have been open-sourced.”

He told the BBC that current A.I. models, such as Llama 2, still had a long way to go before they could realistically be deemed any sort of threat.

“The models that we’re open sourcing are far, far, far shorter [than superintelligent machines] and in fact, in many ways, they’re quite stupid,” Clegg—who served as Britain’s deputy prime minister between 2010 and 2015—said.

“They are in effect vast textual databases, which act like great gigantic autofill tools,” he explained. “They guess at great speed and try to detect patterns across billions of parameters across the internet—they literally try and guess the next word in response to a prompt that you give them. They have no innate autonomous intelligence at all.”

With billions being invested in the development of cutting-edge A.I. technology, many are speculating about how it will disrupt our day-to-day lives—leading to predictions of deadly machines, calls for greater A.I. governance, and forecasts that the world will soon see the dawn of a new A.I. era.

Back in March, 1,100 prominent technologists and A.I. researchers—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter calling for a six-month pause on the development of powerful A.I. systems. They pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Tesla and SpaceX cofounder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.” He has since launched his own A.I. firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Not everyone is on board with Musk’s view that superintelligent machines could wipe out humanity, however.

On Tuesday, more than 1,300 experts came together to calm anxiety around A.I. creating a horde of “evil robot overlords,” while one of the three so-called “Godfathers of A.I.” has labeled concerns around the tech becoming an existential threat “preposterously ridiculous.”