Hacked Opinions: Vulnerability disclosure -- Jeff Williams
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities such as Heartbleed, and whether bounty programs make sense.
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.
Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?
Jeff Williams, CTO of Contrast Security (JW): It's a false dichotomy.
I love security research -- particularly research on new classes of vulnerability rather than on exploitable instances of well-understood vulnerabilities. I have responsibly disclosed many vulnerabilities in the past, and the process was always painful -- but that's the road I'd take if I happened across a vulnerability.
But the idea that a handful of talented researchers, working at risk for little to no financial reward, is going to change our cybersecurity situation is insane and dangerous. Disclosure hasn't tilted the market towards more secure products in 20 years, and it's not going to in the future.
In fact, disclosure creates the illusion that security research is a substitute for security engineering and analysis, which it is not. The companies that create and run our critical infrastructure have impossibly large amounts of code -- in many cases billions of lines.
Security research and vulnerability disclosure will only ever touch a tiny fraction of the latent vulnerabilities in the mountains of code we've created over the past 20 years.
If a researcher chooses to follow responsible/coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?
JW: Absolutely. I think the goal is to minimize the overall amount of damage that can be done with a vulnerability.
Full disclosure may cause some harm to folks that can't react quickly enough, but it almost always results in a quick fix. In fact, vendors are essentially training security researchers that if they want a problem fixed, then full disclosure is the easiest way to get it.
Vendors might complain about full disclosure, but they created an environment where it's often the only choice available to researchers.
Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug/exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?
JW: In my opinion, the economics can't ever really work at scale. Even if the program is paying a high value for vulnerabilities, you have to factor in the odds of winning. Let's imagine bug bounties really take off and there are lots of people doing it.
There will always be a rush to report "easy to find" vulnerabilities, making it increasingly unlikely that any one researcher wins the bounty. So the expected value to the researcher drops precipitously. The "hard to find" vulnerabilities that remain after the company has done everything it can to secure its product are a much more viable market.
These should command a much higher bounty, and there may be very talented researchers willing to take a chance on getting paid. But this dramatically limits the scalability of the bug bounty model.
Until the relationship between the researcher and the company looks more like a real consulting arrangement -- with access to source code, confidentiality agreements, structured rates, and so on -- I can't imagine bug bounty programs finding anything more than a niche. In the meantime, the danger of bug bounty programs is that they are being advertised as a replacement for structured security verification.
This can lead to a false sense of security among companies that aren't doing enough in their own security testing programs.
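As a rough sketch of the expected-value argument above: assume a program pays the full bounty only to the first valid report, and that every competing researcher is equally likely to get there first. The bounty amounts, hours, and researcher counts below are purely illustrative assumptions, not figures from the interview.

    # Back-of-the-envelope model: a bounty's headline value discounted by
    # the odds of being first to report. All figures are hypothetical.
    def expected_payout(bounty, hours_invested, competing_researchers):
        p_first = 1.0 / competing_researchers   # equal odds of being first
        ev = bounty * p_first                   # expected reward
        return ev, ev / hours_invested          # expected reward and implied hourly rate

    # "Easy to find" bug: modest bounty, many researchers racing for it.
    print(expected_payout(bounty=5000, hours_invested=20, competing_researchers=50))
    # (100.0, 5.0) -- $100 expected, roughly $5 per hour

    # "Hard to find" bug: bigger bounty, few capable competitors, far more hours.
    print(expected_payout(bounty=50000, hours_invested=400, competing_researchers=3))
    # about (16667, 41.7) -- a better hourly rate, but only for a small pool of specialists

Under these assumptions the easy-to-find market collapses as participation grows, while the hard-to-find market only pays off for a handful of specialists -- the scalability limit described above.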
Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?
JW: Yes. Unfortunately. The game is set up so that security researchers are forced to advertise the "worst case scenario" for vulnerabilities because otherwise nobody will pay attention.
Within days, everyone scanned and patched their networks for Heartbleed and even demanded compliance from their subcontractors. That's how we should handle all new dangerous vulnerabilities.
Except that it shouldn't be a fire drill; it should be a standard part of IT development and operations. But when organizations fail to fix serious vulnerabilities, like the Struts and Spring RCEs that have been out for years, they force researchers into ridiculous marketing and stunt hacking.
If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely, preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?
JW: I doubt that it will have very much of an effect. There have been many legal threats to security researchers over the past 20 years, but only a tiny fraction have ever actually been prosecuted.
Essentially, the chilling effect has already been in place for years. Ironically, I think the biggest outcome of Wassenaar will be to continue the fiction that controlling the attackers is possible.
We need to recognize that we have created an environment where it is impossible to identify or control attackers, and therefore the only sane strategy is to build rugged code, create strong defenses, and block attacks.