Artificial intelligence poses an "extinction risk" to human civilisation, an Oxford University researcher has said.
Almost everything about the development of genuine AI is uncertain, Stuart Armstrong of the Future of Humanity Institute said in an interview with The Next Web. That includes when we might develop it, how it could come about, and what it would mean for human society.
But without more research and careful study, it's possible we are opening a Pandora's box. That is exactly the sort of question the Future of Humanity Institute, a multidisciplinary research hub tasked with asking the "big questions" about the future, was set up to tackle.
"One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk," Armstrong told The Next Web.
"If AI went bad, and 95% of humans were killed then the remaining 5% would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."
Above: Student Alejandro Bordallo plays rock-paper-scissors with a robot programmed by scientists to use artificial intelligence to learn strategy as they play
What humanity has to fear is not quite the robots of the Terminator films ("basically just armoured bears") but an incorporeal intelligence capable of dominating humanity from within.
The threats posed by such a powerful computer brain would begin with near-term (and near-total) unemployment, as replacements for virtually all human workers were quickly developed and replicated, and would extend beyond that to genuine, widespread anti-human violence.
"Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent," Armstong said.
"Well it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and and as a side effect no viruses will be sent."
The caveat to all this is that creating genuine AI is difficult, and we are nowhere near doing so. The counter-caveat is that it could happen far sooner than anyone expects, if just one developer came up with a "neat algorithm" that no one else had thought to construct.
Armstrong's conclusion is simple: let's think about this now, particularly in relation to employment, and try to adjust society ourselves before the AI adjusts it for us.
It's fascinating and necessary speculative reading. Head over to The Next Web for the full account.