Nvidia chief downplays challenge from Google’s AI chip
Google's Tensor Processing Unit, or TPU, was built specifically for deep learning, a branch of AI through which software trains itself to get better at deciphering the world around it, so it can recognize objects or understand spoken language, for example.
TPUs have been in use at Google for more than a year, including for search and to improve navigation in Google Maps. They provide "an order of magnitude better-optimized performance per watt for machine learning" compared to other options, according to Google.
That could be bad news for Nvidia, which designed its new Pascal microarchitecture with machine learning in mind. Having dropped out of the smartphone market, the company is looking to AI for growth, along with gaming and VR.
But Nvidia CEO Jen-Hsun Huang isn't fazed by Google's chips, he said at the Computex trade show Monday.
For a start, Huang said, deep learning has two aspects -- training and inferencing -- and GPUs are still much better at the training part. Training involves presenting an algorithm with vast amounts of data so it can get better at recognizing something, while inferencing is when the algorithm applies what it has learned to an unknown input.
"Training is billions of times more complicated that inferencing," he said, and training is where Nvidia's GPUs excel. Google's TPU, on the other hand, is "only for inferencing," according to Huang. Training an algorithm can take weeks or months, he said, while inferencing often happens in a split second.
Besides that distinction, he noted that many of the companies that will need to do inferencing won't have their own processors.
"For companies that want to build their own inferencing chips, that's no problem, we're delighted by that," Huang said. "But there are millions and millions of nodes in the hyperscale data centers of companies that don't build their own TPUs. Pascal is the perfect solution for that."
That Google built its own chip shouldn't be a big surprise. Technology can be a competitive advantage for big online service providers, and companies like Google, Facebook and Microsoft already design their own servers. Designing a processor is the logical next step, albeit a more challenging one.
Whether Google's development of the TPU has affected its other chip purchases is tough to know.
"We're still buying literally tons of CPUs and GPUs," a Google engineer told The Wall Street Journal. "Whether it's a ton less than we would have otherwise, I can't say."
Meanwhile, Nvidia's Huang, like others in the industry, expects deep learning and AI to become pervasive. The last 10 years were the age of the mobile cloud, he said, and we're now in the era of artificial intelligence. Companies want to better understand the masses of data they're collecting, and that will happen through AI.