Elon Musk believes that London research lab DeepMind is a “top concern” when it comes to artificial intelligence.
DeepMind was acquired by Google in 2014 for a reported $600 million. The research lab, led by chief executive Demis Hassabis, is best known for developing AI systems that can play games better than any human.
“Just the nature of the AI that they’re building is one that crushes all humans at all games,” Musk told The New York Times in an interview published on Saturday. “I mean, it’s basically the plotline in ‘War Games.’”
DeepMind declined to comment when contacted by CNBC.
Musk has repeatedly warned that AI will soon become just as smart as humans, and that when it does, we should all be scared because humanity's very existence will be at stake.
The tech billionaire, who profited from an early investment in DeepMind, told The New York Times that his experience working with AI at Tesla means he can say with confidence "that we're headed toward a situation where AI is vastly smarter than humans." He said he believes the time frame is less than five years. "That doesn't mean everything goes to hell in five years. It just means that things get unstable or weird," he said.
Musk co-founded the OpenAI research lab in San Francisco in 2015, one year after Google acquired DeepMind. Set up with an initial $1 billion pledge that was later matched by Microsoft, OpenAI says its mission is to ensure AI benefits all of humanity. Musk left the OpenAI board in February 2018, but he has continued to donate to and advise the organization.
Musk has been sounding the alarm on AI for years, and his views contrast with those of many researchers working in the field. In May, CNBC reported that Musk's relationship with the AI community is complex.
“A large proportion of the community think he’s a negative distraction,” said an AI executive with close ties to the community who wished to remain anonymous because their company may work for one of Musk’s businesses.
Super-intelligent AI
Building machines that are just as smart as humans is widely regarded as the holy grail of AI. But some, including Musk, are concerned that machines will go on to quickly outsmart humans when human-level AI is achieved.
Last October, AI pioneer Yoshua Bengio told the BBC: “We are very far from super-intelligent AI systems and there may even be fundamental obstacles to get much beyond human intelligence.”
At the Beneficial AI conference in 2017, Musk and Hassabis sat on a panel alongside Oxford professor and "Superintelligence" author Nick Bostrom; Skype co-founder Jaan Tallinn; Google engineering director Ray Kurzweil; UC Berkeley computer scientist Stuart Russell; and several others.
At the start of the panel, titled "Superintelligence: Science or Fiction?," everyone except Musk agreed that some form of superintelligence is possible, though he appeared to be joking. Asked whether it would actually happen, everyone said "yes." Asked whether they would like superintelligence to happen, Hassabis said "yes," while the others gave a more nuanced "it's complicated."
In 2016, Bostrom said he believed DeepMind is winning the global AI race. Asked about the matter again earlier this year, Bostrom told CNBC: “They certainly have a world-class, very excellent, large and diverse research team. But it’s a big field so there are a number of really exciting groups doing important work.”
AI consultant Catherine Breslin, who used to work on Alexa at Amazon, told CNBC: "There's an idea that's popular, of raising concerns about AI by imagining a future where it becomes powerful enough to oppress all of humanity. But projecting into an imagined future distracts from how technology is used right now. AI has done some amazing things in recent years."