RSA: Fight attackers by revising software so quickly that exploits can't keep up
Perhaps network defenders need to take advantage of what University of Pennsylvania researchers call the honeymoon effect, where new software goes unmolested for a period after it is issued while adversaries analyze it for flaws, says Dan Geer, CISO of In-Q-Tel. With enough revisions, software is never in place long enough to fall prey to exploits, he says.
Writing secure software is a goal, he says, but even machine-written software has weaknesses, so the solution may be to keep churning out new systems and versions of systems to stay one step ahead.
The downside is that these systems never get the chance to stabilize or mature to the point of becoming accepted standards. "Constant code churn is inconsistent with compliance and certification," he says.
Perhaps code alterations generated on the fly are the answer, creating a moving target for attackers that prevents exploitability. That might be accomplished by randomizing code at runtime, Geer says.
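Geer doesn't describe a specific mechanism, but the general idea behind runtime randomization can be sketched with a toy example. The following Python snippet (entirely hypothetical, not drawn from Geer's talk) keeps several functionally equivalent variants of a routine and picks one at random on each call, so an attacker can't count on any single code path being in place:

```python
import random

# Toy illustration of runtime diversification (hypothetical, for illustration
# only): keep several functionally equivalent variants of a routine and pick
# one at random on every call, so no single code path stays in place long
# enough for an attacker to rely on it.

def _add_v1(a, b):
    return a + b

def _add_v2(a, b):
    return b + a

def _add_v3(a, b):
    # same result, different internal structure
    total = a
    total += b
    return total

_VARIANTS = [_add_v1, _add_v2, _add_v3]

def add(a, b):
    # choose a variant at runtime; observable behavior never changes
    return random.choice(_VARIANTS)(a, b)

if __name__ == "__main__":
    # every variant produces the same answer
    assert all(add(2, 3) == 5 for _ in range(100))
    print("all variants agree")
```

Real moving-target defenses operate at a much lower level, for example by rerandomizing memory layout or instruction sequences, but the principle is the same: behavior stays fixed while the implementation keeps shifting.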
Geer's talk on the future of security speculated about a new framework for cybersecurity that better reflects the changing realities of the infrastructure on which business and social interactions take place. The current framework, he says, seems outdated enough that a new one is needed to address the real problems.
That aspect of his talk echoed the keynote remarks of RSA President Amit Yoran, who urged attendees to stop addressing the wrong security problems.
Geer says the paradigm (he uses the term in a scientific sense) of perimeter defenses and even defense in depth is in crisis because it is no longer effective. A paradigm, he says, is a set of methods and standards universally accepted by scientists working in a given field, and when a paradigm reaches a crisis, it's a sign that it's time to create a new one.
Perimeter defense and defense in depth are becoming harder as systems become more complex. For example, a mobile phone might have two dozen processor cores to support its various features. He asks: Where, then, is the perimeter? Are there too many to defend?
Traditionally, researching defined security problems yields new security answers. But now it's more and more difficult to define the problems because attackers keep changing their methods and seeking out new surfaces to assault. For instance, encryption is very good, so much so that attackers generally try to circumvent it rather than crack it, he says. "Research isn't patching up the mess," he says.
With the Internet of Things growing at 35% per year, the idea of authenticating users and authorizing them to reach resources may be fading. What may be needed is a single unspoofable identity for everyone that is used universally. "The user would have to submit to being identified or withdraw [from using the system]," he says.
That would eliminate online privacy because it would eliminate the option of "selectively revealing oneself to the world," Geer says. "Privacy is the capacity to misrepresent yourself."
Loss of privacy may be a new paradigm, he says, and it might not even be considered wrong if it is looked at as the path chosen, consciously or not, by citizens of the Internet. "It can't be wrong, it's only real," he says. "Confidentiality is quaint and irrelevant."
If that's the case, though, data integrity needs to be absolute so that the data used to decide how a person is treated is accurate, he says.