‘Godfather of AI,’ ex-Google researcher: AI might ‘escape control’ by rewriting its own code to modify itself
Geoffrey Hinton, the computer scientist known as a “Godfather of AI,” says artificial intelligence-enhanced machines “might take over” if humans aren’t careful.
Rapidly advancing AI technologies could gain the ability to outsmart humans "in five years' time," Hinton, 75, said in a Sunday interview on CBS' "60 Minutes." If that happens, AI could evolve beyond humans' ability to control it, he added.
“One of the ways these systems might escape control is by writing their own computer code to modify themselves,” said Hinton. “And that’s something we need to seriously worry about.”
Hinton won the 2018 Turing Award for his decades of pioneering work on AI and deep learning. He quit his job as a vice president and engineering fellow at Google in May, after a decade with the company, so he could speak freely about the risks posed by AI.
Humans, including scientists like himself who helped build today’s AI systems, still don’t fully understand how the technology works and evolves, Hinton said. Many AI researchers freely admit that lack of understanding: In April, Google CEO Sundar Pichai referred to it as AI’s “black box” problem.
As Hinton described it, scientists design algorithms for AI systems to pull information from data sets, like the internet. “When this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things,” he said. “But we don’t really understand exactly how they do those things.”
Pichai and other AI experts don’t seem nearly as concerned as Hinton about humans losing control. Yann LeCun, another Turing Award winner who is also considered a “godfather of AI,” has called any warnings that AI could replace humanity “preposterously ridiculous” — because humans could always put a stop to any technology that becomes too dangerous.