Hacked Opinions: Vulnerability disclosure - Casey Ellis
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities such as Heartbleed, and whether bounty programs make sense.
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.
Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?
Casey Ellis, CEO, Bugcrowd (CE): I'm a big believer in responsible disclosure, as long as it's clear to all involved that the "responsible" bit applies to the companies running the programs as well as the researchers. Being a responsible program owner is largely about setting clear expectations and sticking to them, especially when it comes to respecting the researcher, their skills and time, and the fact that they've just done some very valuable work for you for free.
The reason that full disclosure even exists as an option is that this process fails regularly. At that point, the researcher has tried to help all the stakeholders in play and focused on communicating effectively. But if the system breaks down, then the next question to ask is, "what leverage do I have to get this done?" So while I'm not an advocate for full disclosure, I understand why it exists, because companies aren't always good at following through with these processes.
If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?
CE: I've sat on both sides of this, as a researcher trying to get it done and as a consultant for companies on the receiving end. I can't say that full disclosure is "proper," as full disclosure is almost never the ideal outcome at the end of the day. But sometimes the researcher will end up in a situation where it's the only path they have to pursue to get heard.
Overall, this process is pretty weak. The very fact that you're asking this question shows that it's still a problem we should be fixing, despite the fact that it's been around for a long time. Bottom line, clear expectation setting and communication between companies and researchers is a must to avoid this in the first place. The researchers are already at the table - it's up to the companies to listen and step up.
Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?
CE: The simple answer is to increase the rewards for the researchers. As this concept of incentivized disclosure grows, you end up with a marketplace where companies will compete for the attention of researchers. Then, as there's more competition for this attention, companies will want to offer more rewards.
I heard someone say, "The best deals are the ones where both parties walk away feeling a little bit screwed, and happy overall." It's a classic business exchange, the kind we participate in every day - the seller wants to sell their bug for as much as they can, but the buyer doesn't want to pay too much. At the end of the day, it comes down to finding that reasonable middle ground where value is being transacted in both directions.
At Bugcrowd, we want to make sure there's value behind what organizations are paying. While a researcher may try to up the value of a certain exploit, we examine what's actually realistic. To keep this cost balanced, I recommend starting a dialog around the value of a vulnerability. If you can create a clear understanding, then everyone will walk away from a transaction feeling like it was a good deal.
Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?
CE: At a security conference, I asked for a show of hands to this question: "How many people know the CVE for Heartbleed?" Not a single hand was raised, and this was at a security conference. Everyone in that room was aware of "Heartbleed" though, and most to the extent that they could explain the bug, where it exists, and how to mitigate it. That's valuable.
Security is fundamentally a marketing problem. If you're outside of the security realm, you need to be made aware that this stuff goes on. If awareness requires a fancy logo and name, then I will support it. There is some concern about the overhyping and distraction from other issues that vulnerability marketing creates, like reactive scenarios where CISOs only hear about something if it pops up in the press. However, there's balance to everything, and my strong belief is that we're net ahead on this one.
If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely, preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?
CE: If Wassenaar causes a net negative effect on America's ability to defend itself from cyber threats, I suspect that will become obvious pretty quickly and will be fixed not long afterward. Everything I've read so far tells me that the BIS is a little out of its depth on this one, but the thing is that they seem to be listening to us. The security community has been very vocal about this.
Suppose Wassenaar rolls out in a very negative way. You can be certain that hackers will find a way around it. It's in our nature to find a way to achieve the outcome we want and get the job done. However, I don't think it will come to that. There may be bumps along the way, but I don't see much really changing, and it won't kill vulnerability disclosure.
Overall, I think it will end up with more people being transparent about vulnerabilities that affect the public at scale, which creates a "this is public-domain research, not the development of a munition" out clause, thereby preventing the activation of Wassenaar. It's a pretty good example of "hackers will always find a way," and we're seeing people do it now.