We must move faster to understand and regulate AI, says Rishi Sunak

Speaking at the end of the UK's AI Safety Summit, prime minister Rishi Sunak said that we don't yet understand enough about AI models to regulate them properly, but work to do so must happen faster

By Matthew Sparkes

2 November 2023

UK prime minister Rishi Sunak at the AI Safety Summit at Bletchley Park on 2 November

Justin Tallis/WPA Pool/Getty Images

Artificial intelligence models must be better understood and subject to testing before any mandatory legislation to oversee the industry can be introduced, UK prime minister Rishi Sunak told the AI Safety Summit at Bletchley Park – but he also said that such efforts must be accelerated.

Sunak announced the establishment of a UK AI Safety Institute last week that will engage with technology companies on a voluntary basis to ensure that their models are safe to roll out to the public. But the body won’t have official regulatory powers and companies won’t be compelled to submit to whatever testing protocols it sets up.

In a press conference that marked the end of the summit, Sunak said that regulation will ultimately be needed, but should be based on evidence. Large technology companies working on AI, including Meta, Google DeepMind and OpenAI, have agreed to engage with the new organisation, he said.

“We now have the agreement we need to go and do the testing before the models are released to the public,” said Sunak. “What we can’t do is expect companies to mark their own homework.”

Sunak said that regulation “takes time, and we need to move faster”, adding that more information on AI must be gathered before effective regulation can be written.

“When the people who are developing it themselves are constantly surprised by what it can do, it’s important that that regulation is empirically based, that it’s based on scientific evidence,” he said.

But he said he believed that the state has a strong role to play in the future of AI. “Fundamentally, it’s only governments that can test the national security risks. And, ultimately, that is the responsibility and knowledge of a sovereign government and – with the involvement of our intelligence agencies, as they have been with all our AI work thus far – that is the job of governments and no one else can do it on behalf of them.”

Around 100 politicians, business leaders and academics spent two days at the AI Safety Summit discussing the potential dangers posed by smarter-than-human AI, which Sunak had previously said could pose a risk on a par with nuclear war.

The event was criticised by some for a lack of transparency after a list of governments and organisations in attendance was published by the UK government – but not the names of all the guests. Reporters at the event were also prohibited from mingling with delegates.

But one notable achievement at the summit was the signing of the Bletchley Declaration by 28 countries, including the US and China, and the European Union. The document states that there are risks from AI and says that countries should continue to research these risks. The declaration also set the calendar for a smaller summit on the same topic in South Korea within the next six months, and another large-scale conference next year.

But progress was panned as being too vague and sluggish by experts. “We’ve already been slow to regulate AI and reach international agreements on it,” says Carissa Véliz at the University of Oxford. “Having another meeting in six months’ time doesn’t seem ambitious enough, given the high stakes and the rapid development and implementation of AI.”

The prime minister was also due to hold a live-streamed conversation on 2 November with Elon Musk, owner of xAI, to be broadcast on Musk’s social media platform X, formerly known as Twitter.