Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. Now, this second set of discussions will examine security research, security legislation, and the difficult decision of taking researchers to court.
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.
What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?
Jeremiah Grossman (JG): The biggest misconception, from my view, is the idea that cybersecurity laws function as a preventive measure rather than a reactive one. This is, in some ways, what makes cybersecurity legislation unique. Laws are typically enacted as a means to deter activity; however, laws only work within the jurisdiction in which they are in effect.
When it comes to cybersecurity, a huge percentage of the bad guys are not physically located in the United States, so laws within our country will not deter their activity. And foreign law enforcement investigations and extradition efforts can only focus on a select few cases due to resource constraints on both sides.
Cybersecurity laws will only have an effect on attackers who are within our borders, so their power to deter criminal or malicious cyber behavior is limited at best.
What advice would you give to lawmakers considering legislation that would impact security research or development?
JG: If the overall intent is to increase cybersecurity as a whole, one half of legislative focus should be on criminal prosecution and the other on software security liability.
The people and companies in the best position to protect the systems and data that are targeted currently have little incentive to do so. Software end-user license agreements, which almost universally disclaim all liability, are a huge problem. Software, as an industry, functions with an “as is” mentality.
This stands in stark contrast to essentially every other market. For example, if a certain type of car is manufactured with a faulty braking system and that car gets into an accident, then the car manufacturer holds the liability for that faulty braking system.
If a piece of software does not protect against what it was designed to protect against, there is no liability for the vendor that makes it. And since software is now an integral part of not only cars, but just about every aspect of modern life, this cannot continue. Every day, more dollars and more lives are put at risk.
Cybersecurity legislation, to be more effective, really needs to focus not only on the criminal prosecution piece, but also the software liability piece.
If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topics, what would it be?
JG: The intent of the researcher's actions has to be malicious; if it is not, they should not face the risk of criminal prosecution. Researchers need something similar to "Safe Harbor" protection. For example, if a customer of a hosting provider or ISP puts up something illegal on their systems, the hosting provider or ISP is not culpable because it has legal Safe Harbor protections.
Of course, the offending material would have to be removed in a timely fashion, but the provider could not be sued or prosecuted for its customers' actions. Security researchers need something similar.
Now, given what you've said, why is this one line so important to you?
JG: This Safe Harbor protection is important to me because I'm also a security researcher, and I have been disclosing flaws to vendors, for free, for nearly 20 years. I find this work to be a community service, a Good Samaritan act, and a way for me to help protect the Web and the systems that I use.
If this research had put me at likely legal risk, it would definitely have chilled my work, and the Web would have been a much less safe place for everyone. We must avoid this however we can.
Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?
JG: Yes, but only as an absolute last resort, when all other options have been exhausted and vulnerability disclosure is likely to cause real harm to the vendor's customers. If the company is simply trying to avoid embarrassment, or to avoid expending effort to do the right thing, that does not meet the bar at which legal threats or intimidation should be condoned or tolerated.
In cases where the researcher is not believed to be acting in good faith, which can happen, the company should have a legal option available to protect itself and its customers.
What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?
JG: I think that the government should share information about the attackers or attack groups (not just technical details about the malware) with the private sector. And in cases where they do prosecute, they should provide aggregate data on who the attackers are, where they are breaking in, and whom they are targeting.
On the flip side, if an organization is breached and it would like government assistance (from the FBI or another agency), it should make all relevant/aggregate incident data available to the government to assist in the investigation.