How much security can you turn over to AI?
Between ecommerce, company websites, email, mobile users and overseas divisions, your company is doing business 24/7; your IT security team, however, probably works business hours. That’s one reason 60 percent of attackers are able to compromise an organization in minutes, according to Verizon’s 2015 Data Breach Investigations Report, while only a third of businesses can detect a breach within a few days.
In Cisco’s 2016 Annual Security Report, less than half of the businesses interviewed were confident they could detect the scope of a network compromise and clean up after it. Hackers routinely use automation – from distributed denial of service attacks run over botnets to exploit kits that help them change malware so it’s harder to detect.
Can machine learning help you detect attacks more quickly and deal with them faster?
There are some ambitious projects using machine learning. Deep Instinct is trying to use deep learning to map how malware behaves, so its appliances can detect attacks in real time, reliably enough to replace a firewall. More realistically, perhaps, Splunk is adding machine learning to its log analysis system to use behavioral analytics to detect attacks and breaches.
“Most organizations lack visibility; if you can’t see it, you can’t protect it. We can detect outliers,” explains Splunk’s Matthias Maier. “We summarize similar users who have similar behavior and then we show that, and if there’s an outlier who has always behaved similarly but is now behaving differently, that’s an anomaly you want to look at.”
Splunk can analyze users, computers, IP addresses, data files and applications for unusual behavior, and you don’t need to hire machine learning experts. “We do a lot of this right out of the box,” says Maier. “Most organizations don’t have the capability to develop this on their own.” Early adopters include John Lewis and Armani’s retail stores.
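Splunk hasn’t published the details of its models, but the peer-group idea Maier describes boils down to comparing each user’s recent activity against the distribution of their peers and flagging the ones that drift. The sketch below only illustrates that approach – the behavior features, numbers and threshold are invented, not Splunk’s implementation.

```python
# Illustrative sketch of peer-group outlier detection (not Splunk's actual model).
# Each user is summarized by daily behavior counts; a value that deviates sharply
# from the peer group's distribution is flagged as an anomaly.
from statistics import mean, pstdev

# Hypothetical daily activity features per user: (logins, files_accessed, MB_uploaded)
baseline = {
    "alice": [(12, 40, 5), (10, 38, 4), (11, 42, 6)],
    "bob":   [(9, 35, 5), (11, 37, 4), (10, 36, 5)],
    "carol": [(10, 39, 6), (12, 41, 5), (11, 40, 4)],
}
today = {"alice": (11, 41, 5), "bob": (10, 36, 4), "carol": (13, 44, 350)}

def anomalies(baseline, today, threshold=3.0):
    flagged = []
    for feature in range(3):
        history = [day[feature] for days in baseline.values() for day in days]
        mu, sigma = mean(history), pstdev(history) or 1.0
        for user, observation in today.items():
            z = (observation[feature] - mu) / sigma
            if abs(z) > threshold:
                flagged.append((user, feature, round(z, 1)))
    return flagged

print(anomalies(baseline, today))  # carol's upload volume stands out
```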
Just detecting anomalies can still leave you with a lot of data to look at. A large organization could see thousands of anomalies a day, so Splunk uses further analysis to keep that manageable. Maier expects the tool to surface five or 10 threats a day, in enough detail to make it clear what’s happening (avoiding the problem where noisy or overly complex alerting systems are ignored when they find a real breach).
“We have the full picture on the ‘kill chain’ [of the attack]. We provide a security organization with the information, from the compromise point – when did the attacker come in, what was the initial attack vector, when did they expand in this environment, what other files or servers or user accounts did they connect to – and then the exfiltration phase when they were sending data out … From all these anomalies and individual data points, we create a full picture and present it in a way that every security analyst can understand.”
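The kill-chain picture Maier describes amounts to correlating individual anomalies by the entities they share and ordering them in time. Here is a minimal, hypothetical sketch of that grouping step; the events and phase labels are invented for illustration.

```python
# Hypothetical sketch: group anomaly events by account and order them in time
# to approximate the compromise -> lateral movement -> exfiltration chain.
from collections import defaultdict

anomaly_events = [
    {"time": "11:15", "account": "svc_web", "phase": "exfiltration",
     "detail": "outbound transfer 40x above baseline"},
    {"time": "09:02", "account": "svc_web", "phase": "initial compromise",
     "detail": "login from unfamiliar IP"},
    {"time": "09:40", "account": "svc_web", "phase": "lateral movement",
     "detail": "first-ever connection to finance file server"},
]

timelines = defaultdict(list)
for event in sorted(anomaly_events, key=lambda e: e["time"]):
    timelines[event["account"]].append(event)

for account, chain in timelines.items():
    print(f"Possible kill chain for {account}:")
    for step in chain:
        print(f"  {step['time']}  {step['phase']:<18} {step['detail']}")
```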
You can also use the machine learning features in Splunk for more intelligent operations and monitoring – for example, having your website warn you that demand is trending upwards and it’s going to need more bandwidth before the load causes problems – extending the usual analysis options Splunk is known for. But on the security side, Maier says, “We’re concentrating on providing full solutions: detecting insider fraud, or detecting external attacks with valid credentials.”
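On the operations side, that kind of early warning can be as simple as projecting a trend forward; the toy sketch below assumes made-up traffic numbers and a made-up capacity limit purely to show the idea.

```python
# Toy example: fit a linear trend to recent request volumes and warn if the
# projected load will exceed capacity within the forecast window.
def projected_breach(history, capacity, horizon=6):
    slope = (history[-1] - history[0]) / (len(history) - 1)  # change per interval
    return any(history[-1] + slope * step > capacity for step in range(1, horizon + 1))

requests_per_minute = [820, 870, 930, 1010, 1090]  # recent samples (hypothetical)
if projected_breach(requests_per_minute, capacity=1400):
    print("Alert: demand trend will exceed provisioned bandwidth soon")
```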
Microsoft’s Advanced Threat Analytics tool (based on its Aorato acquisition) takes a similar machine learning approach – learning about entities like user accounts and devices from Active Directory, network traffic and your security information and event management (SIEM) systems, then profiling their normal behavior to perform behavioral analysis – but it also detects suspicious activities and presents them in an Attack Timeline, complete with recommendations for dealing with each issue.
“We analyze all the Active Directory data, all the natural traffic going in and out of your domain controllers,” says Microsoft’s Anders Vinberg. “You can fake a lot of things but not natural traffic. We build a graph of which devices you interact with, which resources you access. We start learning normal behavior and once we have learned that, we begin alerting you.” The system also creates traps to mislead attackers.
ATA concentrates on three types of suspicious activities. The first are mistakes and misconfigurations that amount to security risks in your network. “These are security issues that make the life of an attacker much easier, like using plaintext passwords over the wire,” says Vinberg. It can also detect common attacks in real time, including the Pass-the-Ticket and Pass-the-Hash attacks commonly used to move from one system in your network to another.
The third area is where the machine learning comes in. “We detect abnormal behavior. There is always new malware, there are always new attacks … but every one of them would show up as abnormal behavior, because the account would act differently in the network from the regular user behavior,” he explains.
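Microsoft hasn’t documented ATA’s models, but the abnormal-behavior detection Vinberg describes can be pictured as a per-account baseline of the devices and resources seen during a learning period, with an alert for anything outside it. The account names, resources and rule below are illustrative assumptions, not ATA’s implementation.

```python
# Illustrative sketch (not ATA's implementation): learn which resources each
# account normally touches, then alert when an account reaches something new.
from collections import defaultdict

learning_period = [
    ("alice", "FILESRV01"), ("alice", "MAIL01"), ("alice", "FILESRV01"),
    ("bob", "SQL01"), ("bob", "FILESRV01"),
]
live_traffic = [("alice", "MAIL01"), ("alice", "DC01-ADMIN$"), ("bob", "SQL01")]

baseline = defaultdict(set)
for account, resource in learning_period:
    baseline[account].add(resource)

for account, resource in live_traffic:
    if resource not in baseline[account]:
        print(f"Abnormal behavior: {account} accessed {resource} for the first time")
```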
You don’t have to run machine learning on your own network to get protection. In fact, cloud services like Azure AD are able to help you protect identities and user logons in ways you just can’t do within your own organization. And protecting individual users is key to keeping attackers out of your network; nearly every data breach turns out to start with legitimate credentials that have been stolen or phished. The insider threat isn’t necessarily coming from inside your company any more.
“We’re using huge machine learning systems and world class techniques to protect all the identities at Microsoft,” points out Alex Weinert, from Microsoft’s Identity, Security and Protection group. “That includes Azure Active Directory, the Microsoft account system and Skype. Because we have one of the largest mail systems in the world, we are heavily targeted. Every attack that happens will pass our door; they’ll try it against Google but they’ll try it against us as well.”
And because people are bad at remembering passwords, those attacks don’t just expose passwords for Microsoft systems. “If someone gets your credentials, we’ll see it; we’re able to see breached credentials very early. If one of your employees has reused their work credentials to set up an account on a shopping site and that’s been breached, the bad guys tend to test it against us to see if it works,” Weinert explains. “At that point, before those credentials are ever tried against your company, we can say that the credentials have gone bad and you should go protect that person.”
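Microsoft hasn’t said exactly how its leaked-credential matching works; the sketch below just illustrates the general idea of testing credentials from a third-party breach against a directory’s salted password hashes and flagging accounts that match. The account names, passwords and hashing scheme are assumptions for illustration.

```python
# Hypothetical sketch: check credentials found in a third-party breach dump
# against the directory's salted password hashes and flag matching accounts.
import hashlib

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

# The directory stores only a salt and hash per account (values invented here).
salt = b"per-user-random-salt"
directory = {"j.smith@example.com": (salt, hash_password("Spring2016!", salt))}

breach_dump = [("j.smith@example.com", "Spring2016!"),
               ("a.jones@example.com", "hunter2")]

for username, leaked_password in breach_dump:
    if username in directory:
        user_salt, stored_hash = directory[username]
        if hash_password(leaked_password, user_salt) == stored_hash:
            print(f"Compromised credentials: force a reset and MFA for {username}")
```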
Microsoft also gets to see the methods attackers are using. “We see where attacks are coming from at a very nuanced level, and what attacks are shaped like, in both the consumer and enterprise space,” says Weinert. “The adaptability of the bad guys means that the things that mattered yesterday may not matter today. And no-one in the enterprise space has the volume we have [to learn from].”
That volume is tens of terabytes a day and 13 billion login transactions, which are fodder for Microsoft’s machine learning systems to stay up to date on the latest attacks. A deluge of data is only part of what you need to build a system like this. According to Weinert, “a relatively sophisticated and well trained machine learning system takes years, and you also need some expert level human supervision to look and see if there is anything the system isn’t catching.”
That matters because this is about more than spotting patterns and warning you later. As Weinert points out, “the goal is protection, not remediation. A lot of machine learning systems detect what’s happened. Our primary goal is to stop attacks getting through, so we’re training our protection systems. Every day we learn the nuances of the newest attack patterns … and we use the system to generate code on our front end servers that scores everything that comes through.” That score uses around a hundred factors, from browser user agent strings to the time of day.
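Microsoft’s scoring model isn’t public, but the idea of scoring each login from a large set of factors and acting on low scores can be sketched roughly as follows; the handful of factors, the weights and the thresholds are all invented for illustration.

```python
# Illustrative login scoring sketch (not Microsoft's model): start from a neutral
# score, subtract weighted risk signals, and act on the result; a low score means
# blocking the login or requiring multi-factor authentication.
def score_login(signals):
    penalties = {          # a handful of hypothetical factors out of the ~100
        "unfamiliar_user_agent": 0.25,
        "new_country_for_user": 0.35,
        "odd_hour_for_user": 0.15,
        "ip_on_botnet_list": 0.45,
    }
    score = 1.0 - sum(p for name, p in penalties.items() if signals.get(name))
    return max(score, 0.0)

def decide(signals):
    score = score_login(signals)
    if score < 0.4:
        return "block"
    if score < 0.7:
        return "require multi-factor authentication"
    return "allow"

print(decide({"new_country_for_user": True, "odd_hour_for_user": True}))
```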
A low score means the login gets blocked, or multi-factor authentication is turned on for that account. You might see false positives, with legitimate users being challenged, but Weinert believes that’s less likely than with traditional systems built on theories about behavior. A rules-based system might block your account because the desktop PC you’ve left on in the office is still connected (and might be writing files when it does a backup); the machine learning system can instead learn that you’re travelling and logging on from another PC in another location.
It’s not just the scale of data that makes a difference, he suggests. “As humans, we want to believe our hunch is right, we get very attached to our theories, but machine learning doesn’t care. Even if something is a strong signal today, if that fades out of fashion the system is completely willing to throw that away and pick out a new pattern. It adapts to the reality of what actually results in a compromise, not our suspicions or our suppositions. As a result, our precision – which is the number of times when we’re targeting someone that they’re actually a bad guy – is very high.”
The reports in Azure AD Premium (or Microsoft’s Enterprise Mobility Suite) let you know when the machine learning system has detected that your credentials are being exposed. “Very soon we’ll be offering that in a policy-based way,” Weinert says, “so you can ask the system to act on your behalf rather than you having to catch the report in time. Leaked credentials, password hammering, we can detect all these patterns as they drift around because the bad guys are attacking in volume. Machine learning can out-adapt these guys, so we’re bringing to the enterprise real protection, not just detection.”
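In spirit, the policy-based behavior Weinert describes is a mapping from detected risk events to automatic responses. The event names and actions below are hypothetical, not Azure AD’s configuration format.

```python
# Hypothetical policy sketch: act automatically on detected credential risk
# instead of waiting for an administrator to read the report.
risk_policy = {
    "leaked_credentials": ["force_password_reset", "require_mfa"],
    "password_hammering": ["require_mfa"],
    "sign_in_from_anonymous_ip": ["require_mfa", "notify_security_team"],
}

def handle(event_type, account):
    # Fall back to notifying the security team for event types with no policy.
    for action in risk_policy.get(event_type, ["notify_security_team"]):
        print(f"{action} -> {account}")

handle("leaked_credentials", "j.smith@example.com")
```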
At some point, you can expect the machine learning systems in Azure AD and ATA to start working together. “Active Directory on premise is this incredible nexus for data collection and analysis because essentially every use of an app on premise ends up going through the directory somehow,” points out Microsoft’s Alex Simons. “Part of the vision is to take all the data we’re collecting in the cloud and to marry it up with data we’re collecting on premise, to bring those data sources together.”
Whether you look at on premise or cloud systems, it might be time to take machine learning security systems seriously, because as bad as it is today, it’s going to get even harder to stay ahead of the hackers. Weinert warns: “We’re now seeing that the criminals are starting to invest in machine learning systems themselves.”