Source: CIO USA
You need fire sprinklers. Obvious advice, maybe, but once upon a time fire sprinklers were considered a waste of money. In fact, in 1882, sprinklers were as dubious an investment as information security is today.
That's why George Parmalee, in March of that year, set a Bolton, England, cotton spinning factory on fire. In 90 seconds, flames and billows of thick black smoke engulfed the mill. After two minutes, 32 automatic sprinklers kicked in and extinguished the fire.
It was a sales pitch. Parmalee's brother Henry had recently patented the sprinklers, and George hoped the demonstration would inspire Britain's mill owners, many of whom came to watch, to invest in his brother's new form of security.
But they didn't. "It was slow work getting sprinklers established in this country," wrote Sir John Wormald, a witness to the conflagration. Only a score of factories bought the devices over the next two years.
The reason was simple, and it will sound familiar to CIOs and chief security officers: "[Parmalee] realized that he could never succeed in obtaining contracts from the mill owners...unless he could ensure for them a reasonable return upon their outlay," Wormald wrote.
Today, it's data warehouses, but data is as combustible as cotton. Thousands of George Parmalees - CIOs and CSOs, not to mention security consultants and vendors - are eager to demonstrate inventions that extinguish threats to information before those threats take down the company. But the investment conundrum remains precisely what it was 120 years ago. CEOs and CFOs want quantifiable proof of an ROI before they invest.
The problem, of course, is that until just recently a quantifiable return on security investment (ROSI) didn't exist. The best ROSI argument CIOs had was that spending might prevent a certain amount of losses from security breaches.
But now several research groups have developed surprisingly robust and supportable ROSI numbers. Their research is dense and somewhat raw, but security experts praise the efforts as a solid beginning toward a quantifiable ROSI.
"I was quite surprised, to be honest," says Dorothy Denning, a professor at Georgetown University and a widely respected information security expert. "I have a good sense of what's good research, and all of this seems good. They are applying academic rigor."
IT executives are hungry for this kind of data. "It's very easy to get a budget [for security] after a virus hits. But doing it up front makes more sense; it's always more secure," says Phil Go, CIO at design and construction services company Barton Malow in Southfield, Mich. "Numbers from an objective study would help me. I don't even need to get hung up on the exact numbers as long as I can prove the numbers are there from an unbiased study."
If the new findings about ROSI are proven true, they will fundamentally change how information security vendors sell security to you and how you sell security to your bosses. And the statement "You need information security" will sound as commonsensical as "You need fire sprinklers."
Soft ROSI
Tom Oliver, a security architect for NASA, recently spent tens of thousands of dollars on a comprehensive, seven-week external security audit. At the end, Oliver received a 100-page booklet with the results - which were mostly useless.
"[The auditors] said, 'You were very secure. We were surprised we couldn't access more [sensitive data],'" says Oliver, who is employed by Computer Sciences (under contract to NASA) at the Marshall Space Flight Center in Huntsville, Ala. "But I wanted to know how we compared to other government agencies. If I put another $500,000 into security, will that make me more secure?
"There was no return on investment in there at all," he adds. "I spent $110,000, and I got, 'You're good.' What's that?"
This is the dilemma that faces CIOs and CSOs everywhere. A lack of data on infosecurity makes it difficult to quantify what security gets you. In lieu of numbers, information executives rely on soft ROSIs: explanations of returns that are obvious and important but impossible to verify.
Executives know the threat is real, but CIOs say executives don't feel the threat. No one buys burglar alarms until someone they know is robbed. For that reason, IT relies, more than anything, on fear, uncertainty and doubt to sell security - in other words, FUD. The thinking is, if you scare them, they will spend.
But even FUD has limitations, especially during a recession. The signs of the down economy's impact are everywhere. At Fidelity, the chief information security officer (CISO) position was eliminated. At State Street Global Advisors in Boston, CISO Michael Young needs four more security staffers, but there's a hiring freeze. "If we invest in anything that promotes less downtime, that's a positive ROI," Young says. "But still, there's no quantified value associated with [staffing], and that's a problem. If I could go in there with a return on the bottom line resulting from these hires, bingo! That would be it."
To say there's no good ROSI data is not to say there's no data. Numbers are indeed used to sell security; it's just that they've had zero statistical validity.
The marquee example of that is the Computer Security Institute's (CSI) annual computer crime survey. Each year, CSI and the FBI report security trends in plain, often stark terms. The 2001 report's centerfold is a chart called "The Cost of Computer Crime." It says that losses from computer crime for a five-year period from 1997 to 2001 were an eye-popping $1,004,135,495.
There's just one problem with that number. "It's crap," says Bruce Schneier, security expert, founder and CTO of security services vendor Counterpane Internet Security in Cupertino, Calif.
"There's absolutely no methodology behind it. The numbers are fuzzy," agrees Bill Spernow, CISO of the Georgia Student Finance Commission in Atlanta. "If you try to justify your ROSI this way, you'll spend as much time just trying to justify these numbers first."
Therein lies the appeal of the current crop of studies. They rest on scientific method and a foundation of previously established research.
Hard Numbers, at Last
In 2000 and 2001, a team at the University of Idaho followed George Parmalee's example. The team built an intrusion detection box, a security device that sits at the edge of a network and watches for suspicious activity among users who get past the firewall. Incoming traffic that follows a certain pattern is flagged, and someone is alerted to look into it.
The researchers then hacked the box, code-named Hummer. Their goal was to prove that it's more cost-effective to detect and then deal with attacks using intrusion detection than it is to try to prevent them using other means. The problem was assigning valid costs for this cost-benefit analysis. For instance, what does it cost to detect an incident? What are the day-to-day operational costs of security? What are the cost consequences if you miss an attack?
The Idaho team, led by University of Idaho researcher HuaQiang Wei, began by culling research from all over. Then they combined what they found with some of their own theories, assigning values to everything from tangible assets (measured in dollars with depreciation taken into account) to intangible assets (measured in relative value; for example, software A is three times as valuable as software B). Different types of hacks were assigned costs according to an existing and largely accepted taxonomy developed by the Department of Defense. Annual Loss Expectancy (ALE) was figured. ALE is an attack's damage multiplied by its frequency. In other words, an attack that costs $200,000 and occurs once every two years has an ALE of $100,000.
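The ALE arithmetic is simple enough to sketch in a few lines of Python; the function name below is ours, and the figures just restate the article's example:

```python
def annual_loss_expectancy(damage_per_incident, incidents_per_year):
    """ALE: an attack's damage multiplied by how often it occurs per year."""
    return damage_per_incident * incidents_per_year

# The article's example: a $200,000 attack striking once every two years.
ale = annual_loss_expectancy(200_000, 1 / 2)
print(ale)  # 100000.0
```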
To verify the model, the team attacked its intrusion detection box with commonly attempted hacks to see if the costs the simulation produced matched the theoretical costs. They did.
Determining cost-benefit became the simple task of subtracting the security investment from the damage prevented. If you end up with a positive number, there's a positive ROSI. And there was. An intrusion detection system that cost $40,000 and was 85 percent effective netted an ROI of $45,000 on a network that expected to lose $100,000 per year due to security breaches.
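That subtraction can be made concrete in a small sketch (the function name is ours; the numbers are the Idaho example from above):

```python
def rosi(expected_annual_loss, effectiveness, investment):
    """Return on security investment: damage prevented minus the cost
    of the security measure. A positive result means the spend pays off."""
    damage_prevented = expected_annual_loss * effectiveness
    return damage_prevented - investment

# A $40,000 IDS, 85 percent effective, on a network expecting
# $100,000 per year in breach losses.
print(rosi(100_000, 0.85, 40_000))  # 45000.0
```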
If applied to real-life examples, the Idaho model could produce the data that CIOs need in order to demonstrate not only that their investment pays off, but by how much. Next, the Idaho team wants to put the ROSI analysis inside Hummer. As threats are detected, the box will compare response cost against damage cost. Only if the damage cost is higher will it stop an attack. In other words, the device itself decides if it's cost-effective to launch an emergency response.
Of course, Hummer's data would be logged for review. Putting those features in commercial intrusion detection systems would yield reports that showed how much money CIOs saved using intrusion detection. This would then allow them to compare the costs of one security system against another. And wouldn't that be handy?
The Value of Building Security in Early
While Idaho was toying with Hummer, a group of researchers from MIT, Stanford University and @Stake, a security consultancy located in Cambridge, Mass., was playing with Hoover.
Hoover is a database. Amassed by @Stake, it contains detailed information about software security flaws, from simple oversights to serious weaknesses. Hoover reveals an ugly truth about software design: Securitywise, it's not very good.
Right now, Hoover contains more than 500 data entries from nearly 100 companies. Participants in the study, such as Bedford, Mass.-based RSA and Fairfax, Va.-based WebMethods, wanted to assess how securely they were building their software and how to do it better.
First, the Hoover group focused on the ROSI of secure software engineering. The group wanted to prove a concept that seems somewhat intuitive: The earlier you build security into the software engineering process, the higher your return on that investment. And prove it they did.
It took 18 months of letting Hoover suck up data from @Stake's clients to create a representative sample of the entire software landscape. Data in hand, they looked for previous research to base their work on. There was little, so they made a critical assumption, which unlocked the study's potential. The team decided that a security bug is no different from any other software bug.
Suddenly, security was a quality assurance game, and there was a ton of existing data and research on quality assurance and software. For example, one bit of research they used came from a widely accepted 1981 study that said that spending a dollar to fix a bug (any bug) in the design process saves $99 against fixing it during implementation.
"The idea of software security as quality assurance is extremely new," according to team member and Stanford economics PhD Kevin Soo Hoo. "Security has been an add-on at the last minute, and detecting security problems has been left to users." And, of course, hackers.
With the research in hand, Soo Hoo, MIT Sloan School of Management student Andrew Sudbury and @Stake Director Andrew Jaquith tweaked the general quality assurance models to reflect the security world, based on the Hoover data.
Overall, the average company catches only a quarter of software security holes. On average, enterprise software has seven significant bugs, four of which the software designer might choose to fix. Armed with such data, the researchers concluded that fixing those four defects during the testing phase cost $24,000. Fixing the same defects after deployment cost $160,000, nearly seven times as much.
The ROSI breakdown: Building security into software engineering at the design stage nets a 21 percent ROSI. Waiting until the implementation stage reduces that to 15 percent. At the testing stage, the ROSI falls to 12 percent.
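The cost gap behind those percentages checks out with back-of-the-envelope arithmetic on the figures reported above (a sanity check, not the study's model):

```python
# Reported costs of fixing the same four defects at two stages.
fix_in_testing = 24_000
fix_after_deployment = 160_000

# Post-deployment fixes cost nearly seven times as much.
ratio = fix_after_deployment / fix_in_testing
print(f"{ratio:.1f}x")  # 6.7x

# Per-stage ROSI figures from the study: earlier is better.
rosi_by_stage = {"design": 0.21, "implementation": 0.15, "testing": 0.12}
```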
"Our developers have said they believe they save 30 percent by putting security in earlier, and it's encouraging to see proof," says Mike Hager, vice president of network security and disaster recovery at Oppenheimer Funds in Englewood, Colo. "Executives need answers to questions like, 'What risk am I mitigating?' We haven't had the means to educate them without FUD." From numbers like those, he adds, "We'll be able to sell security from a business perspective."
Hoover keeps growing. The group plans to publish other ROSI numbers. Next up: assigning a statistically valid ROSI to incident readiness. It will (they hope) show how ROSI increases as the effective response time to a security incident decreases.
The Law of Diminishing ROSI
If you want to give CEOs and CFOs a ROSI they can love, show them a curve.
That's what researchers at Carnegie Mellon University (CMU) did in "The Survivability of Network Systems: An Empirical Analysis." The study is as dense and dispassionate as its title. (So are its bureaucratic underpinnings: It was done at the Software Engineering Institute in conjunction with the public-private cooperative effort called CERT, both housed at CMU.)
The study measures how survivability under attack increases as you increase security spending. Economists call it regression analysis. It's basically a curve showing the trade-off between what you spend and how safe you are.
To get the curve, the team relied on data from CERT, established by the government in 1988 after a virulent worm took down 10 percent of the then-very-limited public network (what would become the Internet). CERT logged security breaches and tracked threats, mostly through the volunteer efforts of the private and public organizations directly affected.
CMU researchers took all the CERT data from 1988 to 1995 and modeled it. Among the variables they defined were what attacks happened, how often, the odds any one attack would strike any given company, what damage the attacks produced, what defenses were used and how they held up.
The researchers used the data to build an engine that generated attacks on a simulated enterprise, which reflected the rate and severity of attacks in the real world. The computer program was an attack dog: CMU set it loose on a fictitious network and said, "Sic!"
Then they recorded what happened, how the network survived the attacks. After that, the researchers tweaked the variables. Sometimes they gave the faux enterprise stronger defenses (higher cost). Other times they increased the probability of attack to see how the network would hold up against a more vicious dog.
An inventive aspect of the CMU study was that it didn't treat security as a binary proposition. That is, it didn't assume you were either hacked or not hacked. Rather, it measured how much you were hacked. Survivability was defined as a state between 0 and 1, where 0 is an enterprise completely compromised by attack, and 1 is an enterprise attacked but completely unaffected. This provides a far more realistic model for the state of systems under attack than an either-or proposition.
The data from the simulation was plotted on a curve. The x-axis was cost, in relative units (that is, a cost of 10 is twice as much as a cost of 5, but the units have no direct analog in dollars). The y-axis was survivability, plotted from 0 to 1.
The curve looks like smoke pouring out of a smokestack: it rises in a sharp vertical at first, then trails off in an ever more tapering curve. The ROSI rises as you spend more, but (and this will gladden the hearts of CFOs) it rises at a diminishing rate.
The researchers believe that they could also overlay that curve with something called an indifference curve, which instead of mapping data maps behavior. It plots the points at which the CEO is satisfied with the combination of cost and survivability. The curve always slopes down and to the right, like the bottom half of a C.
Where the indifference curve and the actual ROSI curve intersect would provide the optimal security spending point. In other words, not only could you prove you need fire sprinklers, you could tell the CEO and CFO how much should be spent on them.
Green Data = Skepticism
Most information executives and security experts believe these ROSI studies will be a significant new tool. But a certain caution lingers. Some CIOs point out that the studies are useless as raw documents; they require translation before the data hits their desks. Several executives also worried about applicability: taking the data out of the lab and putting it in the real world. "The worst thing is for people to say security requires a trillion dollars, and then offer no solution in the real world," says Micki Krause, director of information security at PacifiCare Health Systems, an HMO in Santa Ana, Calif.
The data itself was also a concern. The CERT data used in CMU's models only went to 1995, for example. The model for types and frequency of attacks has changed since then. And while Hoover, @Stake's database, provides gritty details about security holes in software, they are gritty details only from companies willing to participate. Is that representative?
In risk management parlance, the actuarial data is quite green, and CIOs bemoan that fact. The rub is, you can't just collect data about security the way you can about auto accidents. More CIOs must agree to disclose detailed data about the state of their own security in order to build a portfolio of numbers that will test the early theories.
CIOs want proof, yet they don't want to be the ones providing the data that will improve the science. Those collecting data have promised privacy in exchange for the knowledge of what the enterprise is spending on security, but it's slow going getting recruits. "At CERT we've protected confidentiality for 12 years. But it's so hard because they keep [data] to themselves," says Jim McCurley, a member of the technical staff at the Software Engineering Institute. Despite all this, security experts such as Georgetown's Denning believe that these studies are the beginning of a golden age in information security, with the potential to change every aspect of security: from how it's built, to how it's perceived in the enterprise, to how it's paid for.
Such research could set off a chain reaction. First, ROSI numbers could be used to convince executives to invest in security, thereby spurring the development of new technologies and the hiring of more knowledgeable security workers.
Then, as the studies are repeated and improved, insurance companies could use the ROSI numbers to create "hacking insurance," with adjustable rates based on what security you employ. Dave O'Neill will be one of the people writing those insurance plans over the next year. Currently, as vice president of e-commerce solutions, he writes plans for general e-commerce insurance for Schaumburg, Ill.-based Zurich North America. Today, he confesses, the rates for such plans are mostly set by guesswork. Zurich bases its premiums largely on a 58-question yes-or-no survey, with questions such as "Are security logs reviewed at least daily for suspicious activities?"
"From our perspective this will change by the end of 2002. It will be a whole different landscape. We'll know much more scientifically how to do this," says O'Neill. "What it boils down to is getting credible data."
The insurance industry in all likelihood will be the engine that drives both the science of ROSI and the technology of security. All other factors being equal, the insurance discounts will eventually make one Web server a better buy than another. Software vendors will be forced to fix the holes in their products in order to benefit from lower premiums.
In fact, that is precisely what happened with fire sprinklers. Shortly after Parmalee's fiery demonstration, British insurance carriers began offering discounts to mill owners who bought sprinklers and deeper discounts to owners with more advanced sprinkler systems. Naturally, insurance rates rose on mills without them.
Ultimately, because it made no business sense not to invest in fire sprinklers, everyone had them. And mill owners could stop thinking about fires and start thinking about their business.