Frustration in IT operations is at an all-time high. It is hard to make informed decisions quickly when you can’t correlate user, network and application data for any client, or group of clients, in real time and across time, location and other dimensions to gain a broader perspective on each user’s actual experience on the network.
But recent advances in IT Operations Analytics (ITOA) now make it possible to passively collect wireless data directly from network elements, such as wireless controllers and access points, without requiring sensors or agents on clients. These solutions perform out-of-band deep packet inspection on real user traffic traversing the network in order to extract information about clients, network services and applications.
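To make the idea concrete, here is a minimal sketch of the kind of metric such passive inspection can extract, assuming a packet capture taken out of band (for example, from a switch mirror port). The capture file name and the use of the scapy library are illustrative assumptions, not a description of any vendor’s implementation.

```python
# Minimal sketch: derive per-client DNS response times from a passive
# capture. Illustrates the general technique, not a specific product.
from scapy.all import rdpcap, DNS, IP

queries = {}    # (client_ip, dns_transaction_id) -> query timestamp
latencies = []  # (client_ip, seconds) pairs

for pkt in rdpcap("capture.pcap"):  # hypothetical out-of-band capture
    if not (pkt.haslayer(DNS) and pkt.haslayer(IP)):
        continue
    dns = pkt[DNS]
    if dns.qr == 0:                             # DNS query from a client
        queries[(pkt[IP].src, dns.id)] = pkt.time
    else:                                       # DNS response to a client
        sent = queries.pop((pkt[IP].dst, dns.id), None)
        if sent is not None:
            latencies.append((pkt[IP].dst, float(pkt.time - sent)))

for client, secs in latencies:
    print(f"{client}: DNS response in {secs * 1000:.1f} ms")
```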
Emerging ITOA systems have been developed to deliver a more complete picture of infrastructure performance by collecting, inspecting, interpreting and analyzing traditionally disparate data from across the network.
To do this, a myriad of data types, varying from vendor to vendor, need to be extracted and analyzed: wireless metrics such as signal strength and channel utilization, client device types and operating systems, response times of network services such as DNS and DHCP, and application performance measures such as web page load times.
Today, when problems occur, operations staff are inundated with volumes of uncorrelated network, client and application data that must be methodically analyzed before anything useful can be accomplished. Traditional IT operations solutions are simply reactive, do not pinpoint problems, and are ill-equipped to address today’s modern mobile network environments. They are able to gather data, but the burden of manually correlating it still falls on IT.
It’s all about the user experience
When a client connects to a network, countless wired and wireless transactions take place across all layers, each of which can affect the user experience.
Today there’s no easy way for network operations staff to obtain a holistic view of the entire network and application stack: one that pinpoints current and potential problems, and analyzes real-time and historical trends to improve client performance and network reliability, and to proactively plan for additional users and network-sensitive applications. Proper configuration and performance at each layer and in every transaction (see figure below) are crucial to reliable application delivery, network operation and optimal user performance.
Historically, each IT operations team has had its own set of responsibilities and tools. This often results in finger-pointing among groups and vendors when performance problems crop up.
While emerging ITOA technologies are a good start at gathering raw wired network data, they don’t analyze data from the perspective of an enterprise end user all the way up the network and application stack, or automatically summarize it into something usable by IT staff who lack the time or expertise to interpret raw output. What’s more, the analytics must meld wireless data with wired data, combining domain knowledge with data science to produce root-cause determinations and the “next steps” to solve any problems.
This means understanding concepts such as wireless signal strength, channel utilization and interference, all of which are essential in tuning today’s mobile enterprise networks, and then marrying this information with client device types and operating systems, response times of protocols such as DNS and DHCP, and application performance measures such as web page load times and MOS scores. It also means surfacing problems, especially client issues seen in one or more customer environments, to all “similar” customer environments. The former requires domain knowledge; the latter requires data science.
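As a rough illustration of the domain-knowledge half, the sketch below merges per-client records from each layer into one cross-stack view and applies a simple diagnostic rule. Every field name, value and threshold here is a hypothetical assumption chosen for illustration.

```python
# Illustrative sketch: correlate per-client records from different layers
# into one cross-stack view. All fields and values are hypothetical.
wireless = {"aa:bb:cc:dd:ee:ff": {"rssi_dbm": -74, "channel_util_pct": 62}}
services = {"aa:bb:cc:dd:ee:ff": {"dns_ms": 38, "dhcp_ms": 120}}
devices  = {"aa:bb:cc:dd:ee:ff": {"os": "Android 14", "model": "Pixel 8"}}
apps     = {"aa:bb:cc:dd:ee:ff": {"page_load_s": 3.2, "voice_mos": 3.1}}

def client_view(mac: str) -> dict:
    """Merge all layers for a single client, keyed by MAC address."""
    view = {"mac": mac}
    for layer in (wireless, services, devices, apps):
        view.update(layer.get(mac, {}))
    return view

record = client_view("aa:bb:cc:dd:ee:ff")

# Domain knowledge turns the merged record into a plain-English finding.
if record.get("rssi_dbm", 0) < -70 and record.get("voice_mos", 5.0) < 3.5:
    print(f"{record['mac']}: poor voice quality likely caused by weak signal")
else:
    print(record)
```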
Many new ITOA systems also require on-site appliances or discrete sensors to gather and index data, and don’t provide the predictive, historical or location-based trend analysis that is critical for IT operations staff to find and resolve problems faster. For most of today’s ITOA systems, data collection is completely divorced from data analysis. Equipment vendors make their SNMP MIBs or system logs available, but this data is not collected or formatted with analytics in mind. In other words, more work is involved to make sense of it all.
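A small example of that extra work: a raw SNMP interface counter is just a monotonically increasing octet count, so even computing simple throughput means polling twice and handling counter wrap-around. The sketch below, using made-up sample values, shows the kind of normalization step that analytics-ready collection would handle for you.

```python
# Sketch of the "extra work": turning raw SNMP ifInOctets counter samples
# into a throughput rate. Poll values here are illustrative only.
COUNTER32_MAX = 2**32  # standard 32-bit SNMP counter rolls over at 2^32

def octets_per_second(prev: int, curr: int, interval_s: float) -> float:
    """Rate from two counter samples, handling 32-bit wrap-around."""
    delta = curr - prev if curr >= prev else (COUNTER32_MAX - prev) + curr
    return delta / interval_s

# e.g., two polls taken 60 seconds apart, with a wrap in between
rate = octets_per_second(prev=4_294_900_000, curr=120_000, interval_s=60.0)
print(f"~{rate * 8 / 1e6:.2f} Mbit/s inbound")
```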
What’s really missing is prioritization and plain-English summarization of all of this information; without that, these tools become yet another information fire hose.
All of this information needs to be collectively correlated to give IT staff a more complete view of a single client, a top-level view across all of the clients in an enterprise, and a view of problems and trends among similar environments. This correlation results in usable, easy-to-understand insight for all levels of the IT department. The interrelationship between data collection and data analysis is vitally important.
It is difficult, if not impossible, to do this in a single box, whether a physical appliance or a virtual machine. This places increased importance on a solution architecture (see figure below) that centralizes the collection and storage of large amounts of data and analyzes that information with data analytics platforms specifically designed for the purpose.
Next-Gen ITOA and Cloud Sourcing
Next-gen cloud-based ITOA solutions typically utilize newer, horizontally scalable technologies for storage and analysis. For example, they use NoSQL databases suited to time-series data, such as Apache Cassandra, as well as big data analytics platforms such as Apache Spark. Moreover, the entire system is managed from the cloud, so the component of the solution that extracts data from enterprise environments can be altered whenever necessary so that it collects the right data to fulfill the overall use case.
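As a sketch of what such horizontally scalable analysis might look like, the hypothetical PySpark job below rolls raw per-client time-series metrics up into hourly aggregates. The input path, column names and schema are assumptions for illustration, not any real product’s data model.

```python
# Minimal PySpark sketch: hourly per-client rollups over time-series
# metrics. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-metric-rollups").getOrCreate()

# Assumed schema: ts (timestamp), client_id, rssi_dbm, dns_ms
metrics = spark.read.parquet("/data/client_metrics")

hourly = (
    metrics
    .groupBy("client_id", F.window("ts", "1 hour").alias("hour"))
    .agg(
        F.avg("rssi_dbm").alias("avg_rssi_dbm"),
        F.avg("dns_ms").alias("avg_dns_ms"),
        F.count("*").alias("samples"),
    )
)

# Horizontally scalable: the same job runs unchanged as clients grow,
# with Spark distributing the work across however many nodes exist.
hourly.write.mode("overwrite").parquet("/data/client_metrics_hourly")
```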
Armed with a complete view of what’s going on in the network, engineers can quickly see the root cause of problems for individual clients or client groups and better predict what will happen as the application environment constantly changes. Moreover, IT staff now have a clearer picture of network performance over time, allowing them to better plan for any network capacity changes that might be needed.
What’s more, using big data techniques in the cloud for network infrastructure data and trending opens the door to something never before possible: “cloud sourcing.”
Cloud sourcing is the ability to securely share and compare infrastructure and client analytics metrics, as well as key performance indicators, between different organizations in a completely anonymized manner. This allows organizations to gain a deeper understanding of network operation best practices and predictive insight into the user experience. Cloud sourcing also lets network engineers know definitively whether a change made to the infrastructure was effective, with real data to back it up.
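One plausible way to achieve that anonymization, sketched below, is to replace identifying fields with a keyed hash before any metrics leave the organization, so results remain comparable across reports without being traceable back to a user or site. The key, field names and values are hypothetical.

```python
# Sketch of anonymized sharing: identifying fields are replaced with a
# keyed hash before KPIs leave the organization; the key never does.
import hashlib
import hmac

SECRET_KEY = b"per-organization secret"  # hypothetical; kept on-premises

def anonymize(identifier: str) -> str:
    """Stable pseudonym: the same input always maps to the same token,
    but the token cannot be reversed without the organization's key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

kpi = {"client": "aa:bb:cc:dd:ee:ff", "site": "HQ-3rd-floor", "avg_dns_ms": 38}
shared = {**kpi, "client": anonymize(kpi["client"]), "site": anonymize(kpi["site"])}
print(shared)  # safe to compare across organizations
```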
Imagine the ability to know if a new version of Android or iOS, or new wireless LAN code, isn’t behaving well with other network services or is negatively affecting the user experience. Cloud sourcing now makes this possible, enabling new levels of insight into best practices, potential problems and infrastructure trends occurring in real time everywhere. Now companies can immediately observe the potential impact of any changes to the network, as well as where, why and what is working in similar environments.
The emergence of new IT analytics technology, leveraging cross-stack correlation of both wired and wireless data, can now automatically characterize network issues related to clients, wired services, wireless connectivity and even application behavior. This promises to radically transform the ability of overwhelmed IT staff to keep users happy and the network humming.
Learn more by visiting Nyansa.com.