Servers with Nvidia's Tesla P100 GPU will ship next year
Dell, Hewlett Packard Enterprise, Cray and IBM will start taking orders for servers with the Tesla P100 in the fourth quarter of this year, Nvidia CEO Jen-Hsun Huang said during a keynote at the GPU Technology Conference in San Jose, California.
The servers will start shipping in the first quarter of next year, Huang said Tuesday.
The GPU will also ship to companies designing hyperscale servers in-house, and later to outsourced manufacturing shops. It will be available for in-house "cloud servers" by the end of the year, Huang said.
Nvidia is targeting the GPUs at deep-learning systems, in which algorithms aid in the correlation and classification of data. These systems could help self-driving cars, robots and drones identify objects. The goal is to accelerate the learning time of such systems so the accuracy of results improves over time.
Nvidia's GPUs are widely used in supercomputers today. Two of the world's 10 fastest supercomputers use Nvidia GPUs, according to a list compiled by Top500.org.
The Tesla P100 is based on Nvidia's new Pascal architecture and has many new features that could improve overall server performance.
The GPU has 15 billion transistors, and its half-precision floating point performance tops out at 21.2 teraflops.
The chip was made using a 16-nanometer FinFET process, and dies are stacked on top of each other, allowing Nvidia to cram in more features.
The Tesla P100 uses HBM2 (High Bandwidth Memory 2), which provides 256GBps (gigabytes per second) of bandwidth, twice that of its predecessor, HBM.
A new NVLink interface can transfer data at 160GBps (gigabytes per second), five times the bandwidth of PCI-Express.
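The bandwidth multiples above can be sanity-checked with some quick arithmetic. Note that the per-stack HBM figure and the PCIe 3.0 x16 aggregate figure used below are common reference values assumed for illustration, not numbers stated in this article:

```python
# Bandwidth figures, in gigabytes per second.
HBM1_GBPS = 128        # per-stack bandwidth of first-generation HBM (assumed)
HBM2_GBPS = 256        # HBM2 bandwidth cited for the Tesla P100
PCIE3_X16_GBPS = 32    # aggregate bidirectional PCIe 3.0 x16 (assumed)
NVLINK_GBPS = 160      # NVLink bandwidth cited for the Tesla P100

print(f"HBM2 vs HBM: {HBM2_GBPS / HBM1_GBPS:.0f}x")            # prints 2x
print(f"NVLink vs PCIe: {NVLINK_GBPS / PCIE3_X16_GBPS:.0f}x")  # prints 5x
```

Under those assumptions, the ratios work out to the 2x and 5x figures cited.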
However, questions remain about how servers will accommodate NVLink. IBM has said its Power architecture will support the interface, but servers with Intel chips use PCI-Express to connect GPUs to motherboards.
At the conference, however, Nvidia showed a supercomputer called the DGX-1 running on Intel Xeon chips with the Tesla P100 GPU.