Prime Minister Rishi Sunak has revealed the UK will launch an artificial intelligence (AI) safety institute, following a similar announcement from the US.
The announcements took place during the inaugural AI safety summit, a meeting of world leaders, computer scientists and tech executives held this week at Bletchley Park in Buckinghamshire – the home of codebreaking and computing.
The UK’s AI institute has been presented as an evolution of the Frontier AI Taskforce created in June, under similar leadership: Ian Hogarth has agreed to continue in his role as chair. Celebrated AI researcher Yoshua Bengio will take the lead on producing the institute’s first report.
The institute will “help spur international collaboration on AI’s safe development” by forming partnerships with research bodies such as the Alan Turing Institute, leading AI companies such as Google DeepMind, and nations including Singapore and the US.
Sunak said the institute will “act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology”.
“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people,” he added. “This is the right approach for the long-term interests of the UK.”
At the summit, US secretary of commerce Gina Raimondo stressed the need to regulate AI to prevent harm, and announced the launch of the US AI safety institute.
“I will almost certainly be calling on many of you in the audience who are in academia and industry to be part of this consortium,” she said. “We can’t do it alone – the private sector must step up.”
Raimondo said the institute would facilitate the development of standards for the safety, security and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments to evaluate emerging AI risks.
The two countries were among the signatories of the Bletchley Declaration, described as a “landmark achievement” that signals a starting point in the conversations around the risks of AI technologies. The document has also been signed by representatives from the European Union and 28 countries, including China.
The announcement follows US President Joe Biden’s signing of an executive order that would require AI developers to share the results of safety tests with the US government before they are released to the public. The order also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear and cyber security risks.
The Biden administration has been vocal about its concerns over the rapid development of generative AI, and last year unveiled a Blueprint for an AI Bill of Rights, which outlines five protections internet users should have in the AI age.
It is not yet clear how much funding the UK and US governments will provide to their respective AI safety institutes.