David Dalrymple, a leading AI safety expert, has warned that the world “may not have time” to prepare for the safety risks posed by cutting-edge AI systems. Dalrymple, a programme director at the UK government’s Advanced Research and Invention Agency (ARIA), told The Guardian that the development of AI was moving “really fast” and that these systems could not be assumed to be reliable.
“I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better,” he told the publication.
“We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet,” Dalrymple added.
He went on to describe the consequences of AI’s progress outpacing safety work as the “destabilisation of security and economy”. The researcher stressed the need for more technical work on understanding and controlling the behaviour of advanced AI systems.
“I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective,” Dalrymple said. “And it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.”
Humans are sleepwalking into this transition, says Dalrymple
ARIA is publicly funded but reportedly operates independently of the government. Dalrymple works on developing systems that can safeguard the use of AI in critical infrastructure such as energy networks. He warned that governments should not assume advanced AI systems are reliable.
“We can’t assume these systems are reliable. The science to do that is just not likely to materialise in time given the economic pressure. So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” Dalrymple said.
He also suggested that human civilisation is sleepwalking into the “high-risk” transition now under way with AI.
“Progress can be framed as destabilising and it could actually be good, which is what a lot of people at the frontier are hoping. I am working to try to make things go better, but it’s very high risk and human civilisation is on the whole sleepwalking into this transition,” he said.
Dalrymple also gave a stark warning that AI systems could automate a full day of research and development work by late 2026. This, he said, would lead to “a further acceleration of capabilities” because the technology would be able to improve itself on the maths and computer science elements of AI development.