The University of Cambridge announced a partnership with Intel earlier this week that will see the chip-maker invest in its high-performance computing (HPC) facilities. Under the deal, Intel will work alongside Dell staff to upgrade the HPC infrastructure that serves research departments within the university, working in areas such as genomics and astronomy, as well as a growing number of businesses with large compute demands.
As an extension to its HPC environment, the university has begun testing Intel's latest Xeon Phi chips to meet fast-growing demand from its users, and plans a larger rollout in 2016. The accelerator co-processors, born out of the now-defunct Larrabee graphics programme, offer similar functionality to GPUs and target repetitive, highly parallel processing workloads. The second-generation Phi processor, dubbed Knights Landing, is expected later this year.
"We have a large test-bed for Xeon Phi where we will be using that to generate demand to help users port their code," Paul Calleja, head of HPC services at the University of Cambridge, told ComputerworldUK.
"When Intel releases its second generation Phi product it will coincide with the increase in demand that we create with the test bed, and we will deploy a large Xeon Phi cluster."
Migration
The university is currently in the process of moving to a new data centre, as part of a £20 million investment to increase its HPC capacity. Its current environment includes 600 Dell servers with a total of 9,600 processing cores on Sandy Bridge-generation Xeon chips. It also runs a 128-node, 256-card Nvidia K20 GPU cluster, claimed to be the fastest in the UK.
Calleja said the university will continue to use its Nvidia GPUs to meet demand from a large and well-established user community.
"It will probably coexist because we have to respond to our users' demands, and they have applications that they have already ported to the GPU. Users have spent the past five years developing GPU code with CUDA, and that code will remain," he said.
"We will probably develop a heterogeneous architecture where we have standard Xeon processors, Xeon Phi and GPUs, so there will be three different types of processing environment. Then users will run their codes on elements of the architecture that give them the best efficiency, because most of the time people are looking at the most cost-effective compute that they can get."
However, Calleja said the Phi offers certain advantages: because it is built on the same core architecture as standard Xeon chips, it is easier to program for.
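As a rough illustration of that point, an ordinary multi-threaded OpenMP loop of the kind below could, in principle, be compiled for a conventional Xeon server or rebuilt for the Phi's many x86 cores without resorting to a vendor-specific kernel language such as CUDA. This is only a sketch; the use of Intel's -mmic native-compilation flag mentioned in the comments is an assumption about the toolchain, not something described by Calleja.

    /* Sketch: the same OpenMP source can target standard Xeon cores, or
       (assumed) be recompiled natively for Xeon Phi, e.g. with icc -mmic. */
    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    static double a[N], b[N], c[N];

    int main(void) {
        for (long i = 0; i < N; i++) {
            a[i] = (double)i;
            b[i] = 2.0 * (double)i;
        }

        /* Plain OpenMP parallel loop -- no GPU kernel language required */
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %.1f using up to %d threads\n",
               N - 1, c[N - 1], omp_get_max_threads());
        return 0;
    }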
The Phi also offers improved power efficiency, he said, which is an important factor for certain workloads the university is targeting, such as a project to support the Square Kilometre Array telescope in Australia and South Africa, and ongoing work to speed up genome processing.
"Our involvement in the SKA needs extreme levels of performance, and because the data centres are going to be in the desert, power is at a premium so they need extreme performance and extreme high power efficiency - so that is one of our drivers for Xeon Phi," said Calleja.
"Also in genomics the demand for parallel processing is huge, and these devices also want to be in a hospital setting where, again, power and large data centres are not suitable - they want to use real estate for patients and not for data. So in those areas where you need high levels of compute in a reasonably small power-constrained environment that drives the use of accelerators like Xeon Phi."