Network analysis is like turning over rocks
It all started when I downloaded a trial version of software that analyzes network traffic headed to the Internet. It's a pretty cool product. Much like my Web filtering technology, it uses a database to compare the traffic on my network to known risks, such as file-sharing sites and unapproved cloud services. The way it works is simple: I export my firewall logs to a (rather large) file, import them into the software, and it combs through all the traffic to websites and compares it against the risk database. I thought it would be a good validation of my website-blocking capability -- and I was right. But I expected my website filtering to be a lot more effective than it turned out to be.
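To give a rough sense of that comparison step (this is just a sketch, not the vendor's actual implementation), here is a minimal Python example. It assumes a hypothetical CSV export of firewall logs with a "dest_host" column and a small, made-up risk database mapping hostnames to categories.

```python
# Minimal sketch of the log-versus-risk-database comparison.
# Assumptions (not from any vendor's product): the firewall export is a CSV
# with a "dest_host" column, and the risk database is a dict mapping
# hostnames to risk categories.
import csv
from collections import Counter

RISK_DB = {
    "drive.google.com": "file sharing",
    "www.dropbox.com": "file sharing",
    "mail.yahoo.com": "webmail",
}

def summarize_risky_traffic(log_path: str) -> Counter:
    """Count log entries per risk category found in the firewall export."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            category = RISK_DB.get(row["dest_host"])
            if category:
                hits[category] += 1
    return hits

if __name__ == "__main__":
    for category, count in summarize_risky_traffic("firewall_export.csv").most_common():
        print(f"{category}: {count} connections")
```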
When I got my first report from the software, I thought it must be wrong. Google Drive, Dropbox and other file-sharing services were prominent on the list. But I block those sites! And webmail -- another category that I block -- was being accessed a lot more than I had thought. I also found some usage of remote access services and collaboration sites that can allow remote control of my company's end-user computers. Those should be blocked too. There were quite a few other surprises, including a website that aggregates communications from email, instant messaging, social media and mobile devices -- along with a huge potential for data leakage.
Unfortunately, the report was not wrong. Since it was based on my own firewall logs, there wasn't much question about the integrity of the data itself. I was able to verify that people had indeed been going to the websites in question.
I did some investigating and discovered that my Web filtering product is not 100% effective at categorizing websites. For example, Google Drive has many URLs that aren't in the file-sharing category. It's also not completely effective at blocking access to websites over SSL-encrypted browser sessions. So if my end users know a particular URL, and especially if that URL uses https rather than http, they can get past my filter. And as it turns out, many of my users are especially adept at finding ways around the system.
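To make the SSL gap concrete: with an HTTPS session, a firewall or filter typically sees only the destination hostname, not the full URL path, so category rules keyed on full URLs never get a chance to match. The sketch below uses made-up rules to show the effect; it is not how my filtering product actually stores its categories.

```python
# Illustration only: why full-URL category rules can miss HTTPS traffic.
# These rules and hostnames are hypothetical, not any filtering vendor's data.
CATEGORY_RULES = {
    "drive.google.com/drive": "file sharing",  # rule keyed on a full URL path
    "mail.yahoo.com": "webmail",               # rule keyed on a hostname
}

def categorize(logged_destination: str) -> str:
    """Return the first category whose rule prefixes the logged destination."""
    for rule, category in CATEGORY_RULES.items():
        if logged_destination.startswith(rule):
            return category
    return "uncategorized"

# Over plain HTTP the log may contain the full path; over HTTPS it usually
# shows only the hostname, so the path-based rule never matches.
print(categorize("drive.google.com/drive/my-files"))  # -> file sharing
print(categorize("drive.google.com"))                 # -> uncategorized
```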
So while it's a good thing I went through this exercise to check the effectiveness of my Web filtering, I was a lot happier before I knew the truth.
Now I've come to realize that blocking websites based on categories is like playing whack-a-mole. Every time a company like Google puts a new URL into service, I'm dependent on my Web filtering vendor to find it and add it to the right category. And it seems the vendor is not as efficient at doing that as I had expected.
So for now, I'm going to use the new software I downloaded to keep analyzing the traffic going from my network to the Internet. When I find people getting around the system, I'll manually block the offending sites. But in the long run, I may need to consider a different product, a combination of products -- or maybe even a completely new approach. Blocking known unwanted websites is a "blacklist" approach, which relies on the effectiveness and completeness of the blacklist. A "whitelist" approach, in which I would specify all of the known good websites that have an appropriate (and approved) business purpose, might prove a lot more effective. But it could also prove unmanageable, given the large number of websites in use by my company's employees. The analysis software may be able to help with that too. This is something I'll be thinking about as I plan my next set of security technology improvements.
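One way the analysis data could help with the whitelist question is simply by showing how many distinct destinations are actually in use. As a rough sketch (again assuming a hypothetical CSV export with a "dest_host" column, not any particular product's format), ranking destinations by frequency would give me a candidate allow list to review for legitimate business purposes:

```python
# Rough sketch: gauge whether a whitelist is manageable by ranking the
# destinations users actually visit. The CSV layout ("dest_host" column)
# is an assumption, not a real product's export format.
import csv
from collections import Counter

def candidate_whitelist(log_path: str, top_n: int = 200) -> list[tuple[str, int]]:
    """Return the most frequently visited destinations as allow-list candidates."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["dest_host"]] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for host, hits in candidate_whitelist("firewall_export.csv"):
        print(f"{hits:8d}  {host}")
```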
This week's journal is written by a real security manager, "J.F. Rice," whose name and employer have been disguised for obvious reasons. Contact him at jf.rice@engineer.com.