Stephen Hawking, the renowned physicist, cosmologist and author, said in an interview with the BBC this week that "the development of full artificial intelligence could spell the end of the human race."
The BBC noted that Hawking said the state of artificial intelligence (AI) today poses no threat, but that he is concerned about scientists in the future creating technology that surpasses humans in both intelligence and physical strength.
"It would take off on its own, and re-design itself at an ever-increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Hawking's comments closely follow those made by high-tech entrepreneur Elon Musk, who stirred controversy in late October when he warned an audience at MIT about the dangers of AI research.
"I think we should be very careful about artificial intelligence," said Musk, CEO of electric car maker Tesla Motors, and CEO and co-founder of the commercial space flight company SpaceX. "If I were to guess at what our biggest existential threat is, it's probably that... With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."
Musk, who tweeted this past summer that AI is "potentially more dangerous than nukes," also told the MIT audience that the industry needs national and international oversight.
Musk's comments sparked discussion about the state of artificial intelligence, which today is more about robotic vacuum cleaners than Terminator-like robots that shoot people and take over the world.
Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology, said he was a little surprised that AI is getting so much negative attention, since the fearful talk hasn't been preceded by the creation of any new, potentially scary technology.
"We indeed have not made any huge advances in AI recently that would warrant such concern," Abu-Mostafa told Computerworld. "One factor is that the quick advances in technological products, like cell phones, and their broad availability to everyone, even children, have made it easier for science-fiction-level predictions to be believable by the general population."
While Musk and Hawking are far from typical members of the general population, Abu-Mostafa still doesn't agree with them.
"I am not worried, not only because we are probably decades away from a superior level of machine intelligence, but also because I believe we can control it when we get there," Abu-Mostafa said. "Using the nuclear technology analogy, the fact that we now have the physical ability to destroy the entire world in minutes does not mean that it will just happen. Humans can and do put the safeguards in place to prevent that."
Some scientists do have concerns about artificial intelligence advancing beyond human control, but they note that such technology is likely 50 to 100 years away, leaving plenty of time to prepare for any threatening advances in AI.
"I actually do think this is a valid concern and it's really an interesting one," said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, in a previous interview. "It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."
Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley, said he sees some future danger in artificial intelligence, which is why he is working now, organizing talks and workshops, to educate scientists about it.
Now is the time to start thinking about the issue, he said, before scientists are capable of building such machines.