Network Performance Monitoring is dead
Step back and imagine the world of technology 10 years ago. YouTube was in its infancy, the iPhone was more than a year away from release, BlackBerry was the smartest phone on the market, and Twitter was barely making a peep.
While the masses are now glued to their iPhones watching cat videos and pontificating 140 characters at a time, the backend infrastructure that supports all of that watching and tweeting—not to mention electronic health records, industrial sensors, e-commerce, and a myriad of other serious activities—has also undergone a massive evolution. Unfortunately, the tools tasked with monitoring and managing the performance, availability, and security of those infrastructures have not kept up with the scale of data or with the speed at which insight is required today.
There is no nice way to say this: What worked 10 years ago isn’t working now. Today, exponentially more data is moving exponentially faster. IT organizations that cling to the old models of monitoring and management will be at a significant disadvantage to their counterparts that adapt by embracing new technologies.
Take Ethernet, for example. It’s been less than 20 years since the standard for 1Gbps was established, and less than 10 years since 10Gbps started to gain a meaningful foothold. Now, we’re looking at 40Gbps and 100Gbps speeds. It is a different world, and it’s not slowing down. According to Cisco’s Global Cloud Index, “global IP traffic has increased fivefold over the past five years, and will increase threefold over the next five years.” Between 2014 and 2019, IP traffic is expected to grow 23% annually.
Speeds and feeds are not the only forces at work. Server and application virtualization, software-defined networking and cloud computing are also catalysts for IT change, reshaping how infrastructures are architected and resources are delivered.
Increasingly complex, dynamic and distributed, the network is a different place today than it was 10 years ago. Some view that as a problem to be solved. On the contrary, it’s an opportunity to be seized by forward-thinking network professionals.
The reality is that traditional network performance monitoring (NPM) technologies, including packet sniffers, flow analyzers, and network probes for deep packet inspection, can’t scale or evolve to meet this new demand. Capturing, storing, and sniffing packets was relatively straightforward for “fast” Ethernet supporting 100Mbps of throughput. At 100Gbps, capturing and storing terabytes’ worth of packets would require massive investments in time and infrastructure, not to mention hours of a person’s life just to sniff a small subset of those packets.
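To put that arithmetic in perspective, here is a rough back-of-envelope sketch in Python (assuming a fully saturated link and no compression or filtering, which is a simplification) of how quickly full-packet capture fills storage at different line rates:

```python
# Back-of-envelope: how fast does full-packet capture fill storage?
# Assumes a fully saturated link with no compression or filtering -- a simplification.

def capture_volume_gb_per_hour(link_gbps: float) -> float:
    """Approximate gigabytes written per hour at a given line rate."""
    bytes_per_second = link_gbps * 1e9 / 8     # convert bits/s to bytes/s
    return bytes_per_second * 3600 / 1e9       # bytes per hour, expressed in GB

for rate in (0.1, 1, 10, 100):                 # 100Mbps, 1Gbps, 10Gbps, 100Gbps
    print(f"{rate:g} Gbps -> about {capture_volume_gb_per_hour(rate):,.0f} GB per hour")
```

At full utilization, 100Mbps works out to roughly 45GB of raw packets per hour, while 100Gbps works out to roughly 45TB per hour, which is why the capture-everything-and-sniff-it-later model breaks down at modern speeds.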
The market is starting to take notice. In its most recent Magic Quadrant and corresponding Critical Capabilities report for Network Performance Monitoring and Diagnostics, Gartner placed heavy emphasis on operational analytics capabilities that can elevate network data beyond the network team, and even beyond IT. At the same time, the analyst firm also noted stagnating innovation in the space, the result of legacy frameworks built for speeds and architectures that were already being phased out more than a decade ago. Put simply, those legacy architectures are ill-equipped to address the realities of modern IT.
While legacy architectures are stymying technological innovation, marketing innovation abounds. The monitoring and analytics markets have long been fraught with misleading statements, and vendors in these sectors are growing more and more adept at applying the latest buzzwords to antiquated technologies in the hopes of extending their lifespans by a few short years.
Compounding this problem is the lack of transparency in these markets. Apples-to-apples comparisons of competitive offerings are nearly impossible because so few vendors publish their performance numbers. Even when they do, definitions are often fluid, confusing, or outright misleading, making it a massive challenge to put those numbers in context.
As enterprise customers increasingly look for next-generation solutions, it will be critical for them to understand the nuances of vendor terminology and architecture in order to separate actual functionality from marketing gloss.
IT buyers deserve to be able to make a fair, honest comparison of vendor offerings, which is nearly impossible in the current climate, where performance numbers are obscured and terms are loosely defined.
It is time for every vendor in the network performance monitoring sector—and frankly, every vendor in the IT operations management sector—to put our money where our mouth is. Customers deserve the opportunity to make an apples-to-apples comparison of claims around performance, scale, and deployment.
Every IT person should be able to get a real answer to basic questions about how these products actually perform, scale, and deploy in modern environments.
Information technology is a different world than it was 10 years ago, and the demands on the typical organization keep growing. Over the next 18 to 24 months, the massive shift IT is undergoing will start to meaningfully separate the wheat from the chaff in the NPM market, if for no other reason than that the solutions that cannot evolve will start to fail in deployment. It’s only a matter of time now before the lipstick wears off the pig.
Jesse Rothstein is the CEO and co-founder of ExtraHop, the leader in powering data-driven IT operations for the real-time enterprise.