Bostrom - who leads the Future of Humanity Institute at the University of Oxford and is known for his work on existential risk, human enhancement ethics and superintelligence risks - said there's a "decent probability" that machines will outsmart humans within the next hundred years.
"100 years is quite long," he said at the University of Oxford on Sunday during the annual Silicon Valley Comes to Oxford conference. "We haven't even had computers for 100 years so everything we've seen so far has happened in like 70 years. If you think of the simplest computers, so some simple thing like Pong, and compare that to where we are now, it's a fairly large distance. So it doesn't seem that crazy to say that in 100 years, or indeed much less than that, we will take the remaining steps."
AI can be defined as the intelligence exhibited by machines or software. It could have a profound impact on the world and is an area being pursued by global tech giants such as Google and Facebook.
Bostrom said human civilisation will undergo a fundamental transformation when machine intelligence reaches the same level as human intelligence, adding that it will arguably be the most important thing ever to happen in human history.
"I personally believe that once human equivalence is reached, it will not be long before machines become superintelligent after that," he told an audience of students, aspiring entrepreneurs, academics and business leaders. "It might take a long time to get to human level but I think the step from there to superintelligence might be very quick. I think these machines with superintelligence might be extremely powerful, for the same basic reasons that we humans are very powerful relative to other animals on this planet. It's not because our muscles are stronger or our teeth are sharper, it's because our brains are better."
If humans do create superintelligent machines, Bostrom said our future is likely to be shaped by them, for better or worse.
"Superintelligence could help humans achieve our long term goals and values," he said. "They could be an extremely powerful ally that could help us solve a number of other problems that we face."
But superintelligence could also be "extremely dangerous", said Bostrom, pointing to the extinction of the Neanderthals and the near-extinction of the gorillas when the more intelligent Homo sapiens arrived.
Stephen Hawking
Last week, Stephen Hawking warned that computers will overtake humans in intelligence within the next 100 years.
"Computers will overtake humans with AI at some within the next 100 years," he said at Zeitgeist 2015 in London. "When that happens, we need to make sure the computers have goals aligned with ours."
Hawking, who earlier this year signed an open letter alongside Elon Musk warning against uncontrolled AI development, added: "Our future is a race between the growing power of technology and the wisdom with which we use it."
Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn and several others in signing the Future of Life Institute's open letter.
The signatories state that they "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."
AI technology is already built into devices we use in our everyday lives. Siri, the intelligent personal assistant inside iPhones and iPads, for example, is underpinned by AI developed by Apple, while Google's self-driving vehicles also rely heavily on AI. According to the FT, more than 150 startups in Silicon Valley are working on AI today.