First released in 2005, the Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. Currently in version 2 (published in 2007), with version 3 in development, CVSS attempts to establish a measure of how much concern a vulnerability warrants compared to other vulnerabilities, so that remediation efforts can be prioritized. The scores are based on a series of measurements, called metrics, and range from 0 to 10. Vulnerabilities with a base score of 7.0-10.0 are rated high, 4.0-6.9 medium, and 0-3.9 low.
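For readers who want those bands in concrete terms, here is a minimal sketch in Python that maps a base score to the qualitative rating just described (the function is my own illustration, not part of any CVSS tool):

    def severity(base_score: float) -> str:
        # Map a CVSS base score to the severity bands described above.
        if not 0.0 <= base_score <= 10.0:
            raise ValueError("CVSS scores range from 0.0 to 10.0")
        if base_score >= 7.0:
            return "High"
        if base_score >= 4.0:
            return "Medium"
        return "Low"

    print(severity(9.3))  # High
    print(severity(5.0))  # Medium
    print(severity(2.1))  # Low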
Most commercial vulnerability management tools use CVSS as a baseline. In turn, enterprises will often base much of their vulnerability management programs on the CVSS score. CVSS can be a worthwhile way to quickly prioritize and identify vulnerabilities. But that speed comes at the cost of customization.
While CVSS can be a powerful indicator, it is, like any generic measure, a generalization. To be most effective, it needs to be customized to the specific organization using it. The reality, though, is that most organizations don't do that. They simply take the scores from scanners such as Rapid7, Qualys, and Tenable without tailoring them to their specific risks and environment.
For example, security analytics firm Rapid7 is upfront in noting that the base CVSS metrics measure only the potential risk (likelihood plus impact) of a given vulnerability; temporal and environmental metrics are not required to calculate the score. As such, a base CVSS score does not consider the whole context of the identified vulnerability within the organization.
Strictly speaking, CVSS doesn't actually represent the likelihood of an event; it only represents the likelihood of compromise if the vulnerability is attacked.
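To see why, it helps to look at how a v2 base score is actually computed. The sketch below implements the base equation from the CVSS v2 specification for the common vector AV:N/AC:L/Au:N/C:P/I:P/A:P (the Python is my own illustration, not official tooling); note that nothing temporal or environmental appears anywhere in the calculation:

    # CVSS v2 base equation, with the metric weights for AV:N/AC:L/Au:N/C:P/I:P/A:P.
    access_vector     = 1.0    # Network
    access_complexity = 0.71   # Low
    authentication    = 0.704  # None
    conf_impact = integ_impact = avail_impact = 0.275  # Partial

    impact = 10.41 * (1 - (1 - conf_impact) * (1 - integ_impact) * (1 - avail_impact))
    exploitability = 20 * access_vector * access_complexity * authentication
    f_impact = 0 if impact == 0 else 1.176

    base_score = round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)
    print(base_score)  # prints 7.5, a "High" under the bands above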
I attended a talk at the recent Infosec World conference on risks associated with security investments, presented by Jack Jones, President of CXOWare and co-author of Measuring and Managing Information Risk: A FAIR Approach. In the talk, Jones (an outspoken critic of CVSS) mentioned parenthetically that CVSS has potential but is poorly understood and inappropriately used by most organizations that rely on it.
Jones is not the only CVSS critic. Carsten Eiram and Brian Martin wrote an open letter to FIRST on CVSS shortcomings, faults and failures in formulation, while Patrick Toomey, formerly of Neohapsis, writes that CVSS over-complicates the issue of assigning risk to vulnerabilities.
Another issue is that CVSS is a scoring mechanism for vulnerabilities, yet its output is often treated as if it were a measurement of risk. The result is that resources are wasted and organizations can't identify and focus on their most important problems.
Jones' main concern with CVSS involves its use of weighted values. The rationale behind the weighted values in CVSS is not documented, so you are using values that someone else has made up for you without understanding their basis. In Jones' experience, weighted values are rarely developed with significant rigor and are often tightly coupled to a set of assumptions that may only apply under specific conditions. He also believes that they introduce ambiguity, limit the scope of where the analysis can be applied, and can in some cases completely invalidate results. At the very least, if weighted values are going to be used, some well-reasoned rationale should be provided so that users can make an informed choice about whether they agree with them.
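To see what Jones is objecting to, here are the fixed lookup values the v2 base equation assigns to its exploitability metrics, transcribed from the specification into a small Python table for illustration. The standard publishes the numbers, but not the reasoning behind them:

    # CVSS v2 exploitability weights, as published in the specification.
    ACCESS_VECTOR     = {"Local": 0.395, "Adjacent Network": 0.646, "Network": 1.0}
    ACCESS_COMPLEXITY = {"High": 0.35, "Medium": 0.61, "Low": 0.71}
    AUTHENTICATION    = {"Multiple": 0.45, "Single": 0.56, "None": 0.704}

    # Why is a local-only attack worth 0.395 rather than, say, 0.5? The
    # standard asserts the weighting without explaining it, which is
    # precisely Jones' objection.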
CVSS, like all statistical indicators, is only ever as good as its design and implementation. In an interesting new book, Statistics Done Wrong: The Woefully Complete Guide, author Alex Reinhart notes that statistical analysis is tricky to get right, even for the best and brightest, and that a surprising number of scientists are doing it wrong. His words should be heeded by anyone using CVSS.
To address that, the CVSS calculator lets you refine the base score as you see fit for your organization. But my experience is that most organizations use the standard CVSS weights the vendor defined rather than customizing them. The truth is that each organization needs to determine its own weights and values rather than rely on generic best practices. If that's too much to undertake, it should at least start by scoring the temporal and environmental metrics as directed in the CVSS standard, and worry about evaluating the weights later.
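As a rough illustration of what that first step buys you, the sketch below applies the v2 temporal equation to the 7.5 base score computed earlier. The specific metric choices here (proof-of-concept exploit code, an official fix, a confirmed report) are hypothetical, chosen only to show how context moves the number:

    # CVSS v2 temporal equation: the base score times three multipliers.
    base_score = 7.5

    exploitability    = 0.90  # E:POC (only proof-of-concept exploit code exists)
    remediation_level = 0.87  # RL:OF (an official vendor fix is available)
    report_confidence = 1.00  # RC:C  (the vulnerability report is confirmed)

    temporal_score = round(base_score * exploitability * remediation_level * report_confidence, 1)
    print(temporal_score)  # prints 5.9, so the same flaw drops from "High" to "Medium"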
Getting CVSS right
CVSS is a powerful tool that can provide a lot of value, and for those needing a quick, dirty, and generally effective rough scoring mechanism for vulnerabilities, it certainly fits the bill. But quick-and-dirty information security should be the rare exception rather than the rule. Vulnerability management should be customized for each organization; generic best practices may work, but they won't be optimized.
With that, consider the following in order to make CVSS usage more effective:
The more an organization can customize CVSS to its vulnerability management program, the better it will be. With CVSS, mileage may indeed vary. CVSS 'off the shelf' is OK, but with limited mileage. CVSS 'customized' is useful and will allow companies to maximize their mileage as best they can.
Ben Rothke, CISSP, is a Senior eGRC Consultant with Nettitude, Inc. and the author of Computer Security: 20 Things Every Employee Should Know.