A computer scientist considered the “godfather of artificial intelligence” has said governments need to establish a universal basic income to address AI's impact on inequality.
Professor Geoffrey Hinton told BBC Newsnight that benefits reform was needed to give every citizen a fixed amount of money, because he was “very concerned about AI taking lots of mundane jobs”.
“I have been consulted by people in Downing Street who have advised me that universal basic income is a good idea,” he said.
He believes AI will increase productivity and wealth, but that the money “will go to the rich and not the people whose jobs get lost, and that's going to be very bad for society”.
Professor Hinton is a pioneer in neural networks, which form the theoretical basis of today's explosion of artificial intelligence.
He worked at Google until last year, but left the tech giant to be able to speak more freely about the dangers of unregulated AI.
The concept of universal basic income means that the government pays a fixed salary to all individuals, regardless of their means.
Critics say it would be extremely expensive, would divert funding from public services, and would not necessarily help to alleviate poverty.
The BBC contacted the government to ask if the possibility of a universal basic income scheme was being discussed.
Professor Hinton reiterated his concern that an extinction-level threat to humanity was emerging.
Developments over the past year, he said, showed that governments were unwilling to curb the military use of AI, while the rapid race to develop products meant there was a risk that tech companies would not put “sufficient effort into safety”.
Professor Hinton said: “My guess is that in between five and 20 years from now there's a probability of about a half that we'll have to confront the problem of AI trying to take over.”
This would pose an “extinction-level threat” to humans because we “may have created a form of intelligence that is just better than biological intelligence… That's very worrying for us”.
He said AI could “evolve to be motivated to leverage itself further” and autonomously “develop sub-goals of gaining control.”
He said there was already evidence of large language models, a type of AI algorithm used to generate text, choosing to be deceptive.
He said recent applications of AI to generate thousands of military targets are the “thin end of the wedge.”
“What I'm most concerned about is when these can autonomously make the decision to kill people,” he said.
Professor Hinton said something similar to the Geneva Conventions, an international treaty that sets legal standards for humane treatment in war, may be needed to regulate the military use of AI.
“But I don't think that will happen until something very troubling happens,” he added.
Asked whether Western countries were in a Manhattan Project-style race, referring to nuclear weapons research during World War Two, with authoritarian states such as Russia and China over the military use of AI, Professor Hinton replied: “[Russian President Vladimir] Putin said some years ago that whoever controls AI will control the world. I think that's why they're working very hard.
“Fortunately, the Western world is probably well ahead of them in research. We're probably still slightly ahead of China, but China is putting more resources in. So in terms of military uses, I think there's going to be a race.”
He said a better solution would be to ban military uses of AI.