Let’s refresh this week.
The clear benefits of a highly integrated and tested system are short deployment times and relatively high reliability. Each component is put through a massive series of tests to ensure it works with every other component, and, done right, you end up with an enterprise-class appliance. Basically, it is almost a data center in a box (we’ll get to the “almost” part in a minute).
Where a traditional solution based on buying components can take months to configure and install, a hyper-converged system can be implemented in weeks, sometimes even days. And because so much work goes into the interoperation of the components, and because the management systems that surround these solutions are designed uniquely around them, much of the complexity that typically comes with managing and assuring a data center is eliminated. That is a huge advantage for those providing a broad range of relatively generic services, whether for their own companies or as a service provider for others.
When I’ve spoken to service providers who have implemented a good hyper-converged solution, it is almost like talking to religious fanatics. They gush about how flexible and surprisingly powerful the result is.
But there are issues.
The big one is performance, and that is what the audience member was explaining to me. His Hadoop deployment required minimal latency and massive performance, and no hyper-converged solution he tested met his needs. He lived on the cutting edge of Intel technology, and when he tried to get a hyper-converged solution based on it, he couldn’t find one. That is because part of creating a hyper-converged solution is massive interoperability testing, which can take months to complete after a new processor and chipset are announced.
The reason these systems deploy so quickly is that this testing is done before they are certified for sale, but you can always buy a cutting-edge server and do the testing yourself. And because Intel has done a stunning job tuning its platform for Hadoop, you get a massive improvement in performance tied to its newest products. So if you are willing to trade interoperation for performance, and you need the absolute highest performance, then, as my audience member pointed out, hyper-converged solutions, at least for this use, are not for you.
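If you do take the do-it-yourself route, the testing itself doesn’t have to be exotic. Here is a minimal sketch, in Python, of the kind of storage-latency check you might run on a candidate server before committing a latency-sensitive Hadoop workload to it. The file path, block size and sample count are hypothetical placeholders you would tune to your own workload; this is illustrative, not a substitute for a vendor’s full interoperability suite.

```python
import os
import statistics
import time

# Hypothetical test parameters -- tune these to your own workload.
TEST_FILE = "/data/latency_probe.bin"  # placeholder path on the device under test
BLOCK_SIZE = 4096                      # bytes per read
SAMPLES = 10_000                       # number of timed reads

def measure_read_latency(path: str, block_size: int, samples: int) -> list[float]:
    """Time individual reads at random offsets, in microseconds.

    buffering=0 avoids Python's userspace buffering; the OS page cache
    still applies, so run against a cold cache for worst-case numbers.
    """
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:
        for _ in range(samples):
            offset = int.from_bytes(os.urandom(8), "big") % max(size - block_size, 1)
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            latencies.append((time.perf_counter() - start) * 1e6)
    return latencies

if __name__ == "__main__":
    lat = sorted(measure_read_latency(TEST_FILE, BLOCK_SIZE, SAMPLES))
    print(f"p50:  {lat[len(lat) // 2]:.1f} us")
    print(f"p99:  {lat[int(len(lat) * 0.99)]:.1f} us")
    print(f"mean: {statistics.mean(lat):.1f} us")
```

The point isn’t this particular script. It’s that when you buy the cutting-edge box yourself, the validation the hyper-converged vendor would have done becomes your job.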
One other thing I find annoying is that hyper-converged should include everything in the data center. I’ve seen folks pitch hyper-converged as commodity servers and storage, leaving out networking entirely. Most annoying of all, only one vendor seems to include telephony, even though we are increasingly turning to VoIP solutions and, in many industries, must apply the same monitoring and controls to phone calls that we place on digital forms of communication.
In addition, for some time now, security breaches have often started with a phone call that an employee believes is coming from a trusted source, often inside the company, but that actually comes from an attacker outside it, who then extracts information on how to penetrate the firm’s data security.
My view is that hyper-converged solutions should at least offer the option to include telephony, if only for security and reporting reasons, yet this is almost never done.
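And the kind of monitoring I’m describing is not exotic. Here is a minimal sketch, in Python, of flagging a possible social-engineering sweep from VoIP call-detail records. The record format, numbers and threshold are all hypothetical placeholders; real CDR schemas vary by PBX.

```python
from collections import Counter

# Hypothetical call-detail records: (caller_number, claimed_identity, extension_reached).
cdr = [
    ("+15550001", "IT helpdesk", "x401"),
    ("+15550001", "IT helpdesk", "x402"),
    ("+15550001", "IT helpdesk", "x403"),
    ("+15550002", "vendor rep", "x200"),
]

# One external number probing many extensions is a classic vishing signal.
ATTEMPT_THRESHOLD = 3  # illustrative cutoff, not a recommendation

attempts = Counter(caller for caller, _, _ in cdr)
for caller, count in attempts.items():
    if count >= ATTEMPT_THRESHOLD:
        print(f"ALERT: {caller} reached {count} extensions; possible social-engineering sweep")
```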
As you would expect, a hyper-converged solution is a really good path to a data center in a box, basically a near-complete enterprise data appliance. But what makes it work is a massive amount of up-front testing, which slows the path of new technology to market. That works against firms whose point requirements push the technology envelope. For those problems, the more traditional approach of deploying cutting-edge servers, storage and even networking will likely provide better results.
In addition, not all hyper-converged solutions are created equal. Start with a solid specification, and you’ll likely find that some vendors perform better against it than others, as the scoring sketch below illustrates.
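As a purely illustrative sketch of what “performing against a specification” can look like, here is a small Python weighted-scoring harness. The criteria, weights, vendor names and scores are all made-up placeholders, not real benchmark results.

```python
# Hypothetical weighted scorecard for comparing hyper-converged vendors
# against your own specification. All names and numbers are illustrative.
SPEC_WEIGHTS = {
    "deployment_time": 0.20,
    "storage_latency": 0.25,
    "management_tooling": 0.20,
    "network_integration": 0.20,
    "telephony_support": 0.15,
}

# Scores on a 1-10 scale, as your evaluation team might record them.
VENDOR_SCORES = {
    "Vendor A": {"deployment_time": 9, "storage_latency": 6,
                 "management_tooling": 8, "network_integration": 7,
                 "telephony_support": 2},
    "Vendor B": {"deployment_time": 7, "storage_latency": 8,
                 "management_tooling": 7, "network_integration": 8,
                 "telephony_support": 6},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores into one number using the spec weights."""
    return sum(SPEC_WEIGHTS[c] * scores[c] for c in SPEC_WEIGHTS)

if __name__ == "__main__":
    ranked = sorted(VENDOR_SCORES.items(),
                    key=lambda kv: weighted_score(kv[1]), reverse=True)
    for vendor, scores in ranked:
        print(f"{vendor}: {weighted_score(scores):.2f}")
```

The weights are where your priorities live: a shop that needs telephony and low latency would weight those criteria up and watch the ranking change accordingly.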