Some innovative university researchers are focusing on cutting the cost of cooling the hot racks of servers in data centers. Last month, the Georgia Institute of Technology launched a project to create one of the world's most efficient data centers on the school's campus, a facility where researchers can test new cooling designs and measure the impact those designs have on power efficiency.
The Georgia Tech researchers aim to analyze power consumption "all the way from the chip to the data center facility," says Yogendra Joshi, a professor of mechanical engineering at the university.
"We are addressing the inefficiencies at all scales," Joshi says. "Some researchers are looking at cooling at the chip level, some are looking at the cabinet level, and some are looking at the facilities level."
Two major trends in the data center sector are driving the interest in cooling. Demand for data centers continues to rise despite the down economy, and Moore's Law (the prediction that processors will roughly double in power every 18 months to two years) means that data centers will produce ever more heat. At the same time, companies looking to build new data centers are finding resources increasingly scarce: power is more expensive, and water for cooling is harder to come by.
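As a loose illustration of why that doubling rate matters for heat, the short compounding check below shows how quickly a rack's output could grow over five years if its heat roughly tracked processor power. This is not a figure from the article, and real hardware improves performance per watt, so treat the numbers as an upper-bound intuition rather than a measurement.

```python
# Rough compounding check (illustrative only): if a rack's heat output
# roughly tracked processor power, and power doubled every 18 months to
# 2 years, how much would the heat output grow over five years?
# Efficiency gains per watt soften this in practice.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth after `years` of repeated doubling."""
    return 2 ** (years / doubling_period_years)

for doubling in (1.5, 2.0):  # 18 months and 2 years
    print(f"Doubling every {doubling:g} years -> "
          f"{growth_factor(5, doubling):.1f}x over 5 years")
```

The result, roughly a 6x to 10x increase over five years, is in the same ballpark as the jump from a few kilowatts per rack to today's figures described below.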
"It is a key cost and a rising one," says Marion Howard Healy, an analyst focusing on data-center cooling for the Broad Group. "The increase in unstructured data means that storage costs are going up. And servers are becoming much more powerful, so (they) require more cooling then they used to."
Five years ago, a typical server rack, which is the size of a household refrigerator, produced between 1 and 5 kilowatts of heat. Today, typical server racks generate around 18 kilowatts, about as much as two average households. The trend towards hotter hardware will only continue: Manufacturers are working on cabinets containing higher-power chips that will produce three times as much heat, or about 60 kilowatts.
That could limit the types of cooling technology that could be used.
"We are getting to the point where you cannot do the cooling from air alone," Joshi says. "We want to do liquid cooling. You could certainly do a 60-kilowatt rack with liquid cooling."
The two trends mean that future data centers need to drastically reduce the cost of cooling to prevent it from overwhelming facility budgets. Typically, the energy required to cool a data center accounts for 30 to 50 percent of the cost of running the facility. In total, 60 percent of the cost of a data center relates to energy, the Broad Group's Healy says. And, with more nations considering some form of carbon tax, companies should expect that figure to move higher.
"All of these things conspire to make sure that you are using your resources in the most efficient way," Healy says.
That's why more efficient cooling has become a key problem for information-technology companies. Different companies are tackling the problem in different ways. Intel has focused on more efficient processors and on-chip cooling methods. Server manufacturers are focusing on creating more compact machines that can be cooled efficiently. And facility architects are finding better configurations that save on cooling costs.
Georgia Tech's Joshi aims to reduce data center cooling costs by more than 15 percent, and his group is making good progress: the researchers have found a way of configuring cabinets in the data center that increases air-cooling efficiency. Rather than long rows of server racks with hot air exiting the cabinets on one side and cool air entering on the other, Joshi and his colleagues found that four cabinets arranged in a plus formation, with cool air entering from the middle, works best.
"Just by changing the arrangement, you can get 20 to 30 percent lower energy costs," he said. "In some cases, it can be even more."
It's a holistic approach to tackling the cooling problem, and one that other researchers are following as well. Working with IBM, Syracuse University has embarked on a project to halve the energy costs for its on-campus data center. Announced on May 29, the project will incorporate on-site power generation and a liquid cooling system that pumps chilled water to heat exchangers on the rear of the server cabinets.
"Energy use is becoming the largest single cost in operating data centers, with $2 billion per year wasted nationally dues to inefficiencies," Vijay Lund, vice president for development and manufacturing operations for IBM, said in a statement announcing the partnership.