The alarm bells have been rung. In May, computer scientist Geoffrey Hinton, known as the “godfather of AI”, quit his role at Google to warn of the “existential threat” posed by artificial intelligence. The Center for AI Safety followed up with an open letter, signed by Hinton and hundreds of others, warning that advanced AI could destroy humanity. “Mitigating the risk of extinction from AI should be a global priority,” read the statement.
This sudden surge of concern seems to have been motivated by the rapid advance of AI-powered chatbots like ChatGPT and the race to build more powerful systems. The fear is that the tech industry is recklessly racing to build ever more capable AI. All of which sounds scary.
But the warnings are also suspiciously vague. When you scrutinise the scenarios typically put forward for precisely how AI could wipe out humans, it is hard to avoid the conclusion that such fears aren’t well-founded. Many experts are instead warning that fretting over long-term doomsday scenarios is a distraction from the immediate risks posed by existing AIs.
Generally, the people who talk about existential risks reckon that we are on a trajectory towards artificial general intelligence (AGI), roughly defined as machines that can out-think humans. They predict that people will invest advanced AIs with more autonomy, giving them access to vital infrastructure, such as the power grid or financial markets, or even putting them at the forefront of warfare – at which point they could go rogue or otherwise resist our attempts to control them.
But it remains to be seen if AIs will ever reach the kind of super-intelligence…