A scientist who wrote a leading book on artificial intelligence said experts were “scared” by their own success in the field, comparing the progress of AI to the development of the atomic bomb.
Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believe machines smarter than humans will be developed this century, and he has called for international treaties to regulate the development of the technology.
“The AI community has not yet adjusted to the fact that we are now starting to have a really big impact in the real world,” he told The Guardian. “That just wasn’t the case for most of the history of the field – we were just in the lab, developing things, trying to make things work, and most of the time we couldn’t get things to work. So the question of real-world impact was simply not relevant. And we have to grow up very quickly to catch up.”
Artificial intelligence underpins many aspects of modern life, from search engines to banking, and advances in image recognition and machine translation are among the key developments in recent years.
Russell – who in 1995 co-authored the seminal book Artificial Intelligence: A Modern Approach, and who will deliver this year’s BBC Reith Lectures, titled “Living with Artificial Intelligence,” beginning on Monday – says urgent work is needed to ensure humans maintain control as super-intelligent AI is developed.
“AI was designed with a particular methodology and a sort of general approach. And we are not careful enough to use that kind of system in complex real-world settings,” he said.
For example, asking an AI to cure cancer as quickly as possible could be dangerous. “It would probably find ways to induce tumors in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs,” Russell said. “And that’s because that is the solution to the goal we gave it; we just forgot to specify that you can’t use humans as guinea pigs, and you can’t use the whole world’s GDP to run your experiments, and you can’t do this and you can’t do that.”
Russell said there is still a big gap between today’s AI and that depicted in films such as Ex Machina, but a future with machines smarter than humans is on the cards.
“I think the numbers range from 10 years for the most optimistic to a few hundred years,” Russell said. “But almost every AI researcher would say it’s going to happen in this century.”
One concern is that a machine would not need to be smarter than humans in all things to pose a serious risk. “This is something that’s happening now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have enormous control over our cognitive input.”
The result, he said, is that the algorithms manipulate and effectively brainwash users so that their behavior becomes more predictable with respect to what they choose to engage with, thereby increasing click-based revenue.
Have AI researchers been afraid of their own success? “Yeah, I think we’re more and more scared,” Russell said.
“It reminds me a bit of what happened in physics, where physicists knew atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms,” he said, noting that experts had always stressed that the idea was theoretical. “And then it happened and they weren’t ready for it.”
The use of AI in military applications – such as small anti-personnel weapons – is of particular concern, he said. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck, open up the back, and off they go and wipe out a whole city,” Russell said.
Russell believes the future of AI lies in developing machines that know the true goal is uncertain, as are our human preferences, meaning they must check in with humans – rather like a butler – on any decision. But the idea is complex, not least because different people have different – and sometimes conflicting – preferences, and those preferences are not fixed.
Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted worldwide.
Russell said he hoped the Reith Lectures would emphasize that there is a choice about what the future holds. “It is really important that the public be involved in those choices, because it is the public who will benefit or not,” he said.
But there was another message, too. “Progress in AI is something that will take a while to happen, but that doesn’t make it science fiction,” he said.