Organizations normally jump into IT measurement because they believe that measurement is inherently good; that is, they have no clear business aim for their measurement. They tend to treat it as a program in itself, whereas measurement should really be part of a larger program. As a result, IT delivers data with the expectation that the value will magically "happen." Those programs almost always fail within a year. The lesson is that, while it is relatively easy to capture and deliver measurement data, it is much harder to deliver value that justifies the expense and resource commitment required to generate that data.
Approximately 80% of IT measurement programs fail within a year of their start. They deliver data, but they fail to deliver value. They are typically started because management adopts the attitude that "you cannot manage what you cannot measure." In response, IT creates a list of metrics drawn from books, presentations on measurement, vendor suggestions, and other sources. IT gathers data for those metrics and produces a report, which generates excitement because no one previously had visibility into IT's processes. Management may even spot some things that should be fixed.
For a short time, such a report might stimulate discussion among management, but since the measures are not linked to specific needs, they very seldom spark change in the way the enterprise functions.
The overall lesson is that measurement programs, by whatever name, have a very poor track record. Therefore, an organization that desires a measurement program needs to design it for success. That means going beyond capturing data, to delivering value that justifies the expenses of measurement efforts (see Figure 1.1).
The first lesson, therefore, is that metrics by themselves generate little value, and value will not happen by magic. The problem with these measurement operations is that they are standalone programs that treat measurement as an end product. To have value, measurement must instead be a tool that helps drive and measure progress in a program designed to sustain or transform business value.
The second lesson is that interesting does not equal valuable. IT often delivers metrics that are interesting, cutting-edge, or creative. Early in the program, they may appear to be working because they generate discussion among management or trigger a revelation. Management may even fix a previously unrecognized technical problem or two revealed by the measurement. But the real test is whether the metrics provide a positive return on investment, and the answer almost always is no.
The third lesson is that, within a typical measurement program, performance data alone may not be enough to generate value. Some, but not all, performance needs may be satisfied by a traditional metrics program. Questions that a metrics program by itself cannot answer include: Is the enterprise improving over time? What are our strengths and weaknesses in this area? How are we performing against our commitments? What value is the technology organization providing?
The fourth lesson is that, whatever it may be and however it is designed, the measurement system will evolve over time. The more valuable it is, the more it will evolve. As problems it identifies are solved and as issues it initially focuses on are resolved, it will need to refocus on new questions that in many cases are prompted by the initial measurements. The program will need to present information in new ways that better fit the changing needs of managers as they become more sophisticated in the use of measures. Thus, the driving force behind this evolution will be the rapid learning curve organizations experience when presented with effective measurement.
The fifth lesson is that measurement costs money and other resources, and the enterprise has only a limited amount of those resources to dedicate. Measurement solution planners may not know what that limit is, but they will find out rapidly when they reach it.
Lesson six is that performance data always involves "guesstimates" -- and making it more accurate usually carries a price tag. This relates directly to the issue of moving beyond Phase 2 in the sophistication of the measurement program. Gathering rigorous data at any level usually requires a fairly large initial investment, and even then, the data quality will only improve gradually over time.
Finally, the seventh lesson is that a measurement system does not run itself. A measurement program comprises many processes, which need continuous oversight, direction, review, and at times, improvement.
Measurement Principles
META Group has developed a set of principles to guide the creation of successful measurement practices. These are based on lessons learned from experience and the observation of good industry practices.
Ensure That Measurement Supports Accomplishment
Performance information delivery practices must be developed to support accomplishment. A value-focused program is based on a different model from a purely measurement-focused program.
Therefore, the program's designers should shift their thinking away from a measurement program and toward an accomplishment program: a program designed to accomplish specific improvements.
Embrace the Evolution
Any measurement system will need to evolve quickly, particularly during the early stages of design, implementation, and operation, to meet changing management needs and increasing sophistication. The system's developers should therefore learn to work with that evolution rather than against it.
In most cases, companies starting their initial performance measurement effort should focus on the CIO level and the management level just below the CIO for two reasons. First, the decisions the managers at that level make will impact everyone in the department. Second, they allocate the resources. Therefore, making them happy will make it easier for the measurement system designers to gain access to more of those resources.
For more information, please contact META Group.