The scaling problem is familiar to people in growing companies. Systems that originally worked well begin to fail as they are scaled up. Likewise, spreadsheets that worked well when starting an evaluation struggle as the number of requirements grows.
Evaluating enterprise software is a lot of work, and managing that work in a spreadsheet seems like the obvious choice. It is a deceptive one: what many people don’t realize is that at the enterprise level, they will end up building a complete evaluation system inside a spreadsheet, which is always far more work than expected. In this article, we look at the limitations of using spreadsheets to evaluate enterprise software and conclude with suggestions.
To select best-fit software you need well-written requirements, and the formatting and text size limitations of spreadsheet cells are real problems here. In particular, Excel and Google Sheets allow only one link per cell; you can’t add inline links to individual words or terms within the cell text, even though a requirement sometimes needs several supporting links. The usual workaround is adding extra columns to hold the additional links, but this gets unmanageable very quickly.
Relational databases have the concept of normalized data, where each data item exists only once in the dataset. Denormalized data, by contrast, keeps multiple copies of the same data items. The problem with denormalized data is that when an item changes, every copy must be found and manually updated to keep everything consistent.
The limited data structure of spreadsheets means that multiple copies of data are inevitable. Keeping everything consistent is difficult if not impossible. This flat and non-relational data structure is a core problem with spreadsheets.
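The consistency problem can be sketched in a few lines. Assuming a hypothetical layout where a requirement's text is copied onto each product tab, editing one copy silently leaves the others stale:

```python
# Denormalized: the same requirement text is copied onto each product tab.
tabs = {
    "master":   {"R1": "Control data deletion"},
    "ProductA": {"R1": "Control data deletion"},
    "ProductB": {"R1": "Control data deletion"},
}

# Edit the requirement on the master tab only...
tabs["master"]["R1"] = "Control and audit data deletion"

# ...and the product tabs are now silently out of date.
stale = [t for t, reqs in tabs.items() if reqs["R1"] != tabs["master"]["R1"]]
print(stale)  # ['ProductA', 'ProductB']

# Normalized: store the text once, and have tabs reference it by ID.
requirements = {"R1": "Control and audit data deletion"}
product_tabs = {"ProductA": ["R1"], "ProductB": ["R1"]}  # references, no copies
```

Nothing in the denormalized version flags the stale copies; in a real spreadsheet you would have to find them by eye.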
Spreadsheets usually start with all products on one tab, but limitations quickly lead to a separate tab for each product being evaluated. When you consider that every requirement should appear on a master list, and again on a tab for each product being evaluated, the problem of manually updating multiple copies of each requirement starts to become apparent.
To help manage large numbers of requirements, requirements are always organized into groups. In our experience, around 15% of requirements belong in two or more groups. For example, the requirement:
Control data deletion
The system should control who can delete data and what data they can delete. Where users can delete data, it should be possible to view the deleted data, or even restore that data if necessary.
could belong to three different requirement groups.
If this requirement were on an evaluation that had 8 products, it would appear on that evaluation 27 times: three times on each of the 8 product tabs, plus three times on the master list of requirements.
What is needed is a system that has normalized requirements: one requirement can be in multiple groups and on multiple products. Edit the requirement anywhere and it updates everywhere.
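A minimal sketch of such a normalized model, using hypothetical group and product names: each requirement is stored exactly once, and groups and products hold only references, so an edit in one place is visible everywhere:

```python
# Each requirement exists exactly once, keyed by ID.
requirements = {
    "R1": {"name": "Control data deletion",
           "text": "The system should control who can delete data..."},
}

# Groups and products reference requirements by ID; there are no copies.
# (The group names here are illustrative, not from the original article.)
groups = {"Security": ["R1"], "Data management": ["R1"], "Compliance": ["R1"]}
products = {f"Product{i}": ["R1"] for i in range(1, 9)}  # 8 products

# In a spreadsheet, the same requirement would be copied once per group
# per tab: 3 groups x (8 product tabs + 1 master list) = 27 copies.
copies = len(groups) * (len(products) + 1)
print(copies)  # 27

# In the normalized model, one edit updates every view that references R1.
requirements["R1"]["name"] = "Control and restore data deletion"
```

The design choice is the same one relational databases make: store each fact once and join it into views, rather than duplicating it into them.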
Groups of requirements need to have relative weights. In practice, those weights are adjusted towards the end of an evaluation when the requirements are better understood.
Group weights tend to be hard-coded into the spreadsheet, so when a group weight must be changed, it must be changed in multiple places. Again, manually keeping denormalized data consistent is difficult.
Google Sheets allows multiple people to edit the same spreadsheet simultaneously, a huge advantage because it prevents multiple versions of the spreadsheet from proliferating, and Microsoft has introduced the same feature into Office 365. However, enterprise evaluations are large, and in our experience, large online spreadsheets can be excruciatingly slow. The alternative, a local Excel spreadsheet, always leads to the problem of reconciling multiple versions of the same file.
The whole idea behind evaluating multiple products is to score them relative to your particular requirements. This is the gap analysis. When doing the gap analysis on a spreadsheet you need to design a scoring system. This should take into account the weights of various requirement groups and the weights of the individual requirements in those groups.
Scoring is made more difficult by the lack of named variables in spreadsheet formulas. The usual workaround is to add an extra column to hold the value, and then hide that column. Manually keeping the scoring formulas consistent across a large evaluation is next to impossible, and even if you do manage it, there is no easy way to verify that the formulas really are consistent.
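The scoring logic itself is not complicated; the difficulty in a spreadsheet is keeping it consistent everywhere. A sketch of one common weighted scheme (group weight × requirement weight × fit score, normalized to the maximum possible) with made-up weights and scores:

```python
# Hypothetical evaluation data: each group has a weight, each requirement
# has a weight within its group, and each product gets a fit score 0-4.
group_weights = {"Security": 3, "Usability": 2}
requirement_weights = {
    "Security":  {"R1": 5, "R2": 3},
    "Usability": {"R3": 4},
}
scores = {  # product -> requirement -> fit score (0 = no fit, 4 = full fit)
    "ProductA": {"R1": 4, "R2": 2, "R3": 3},
    "ProductB": {"R1": 3, "R2": 4, "R3": 4},
}

MAX_FIT = 4

def product_score(product: str) -> float:
    """Weighted score as a percentage of the maximum possible score."""
    earned = total = 0.0
    for group, reqs in requirement_weights.items():
        for req, req_weight in reqs.items():
            weight = group_weights[group] * req_weight
            earned += weight * scores[product][req]
            total += weight * MAX_FIT
    return round(100 * earned / total, 1)

for product in scores:
    print(product, product_score(product))
# ProductA 79.7
# ProductB 88.3
```

Because the weights live in one place, changing a group weight changes every product's score at once; in a spreadsheet the equivalent change must be repeated on every tab.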
Enterprise evaluations are large, and you always need to search requirements, for example, to see if a potential requirement already exists. A simple search is easy in a spreadsheet, but advanced searches need to be built. While this is not particularly difficult, the spreadsheet is starting to turn into a programming project.
With spreadsheets, it is difficult to see who made changes. While you can track versions of a spreadsheet, there is no audit trail of individual changes to show who did what.
Because spreadsheets are so manual, it is very easy for errors to creep in. All error checking must be manually coded, which is where the spreadsheet morphs into a full-blown programming project. Usually the person responsible for the software evaluation ends up doing this work, which makes the evaluation project take even longer.
Enterprise evaluations are a lot of work that sometimes never seems to end. Spreadsheets have no built-in tools to track progress, making these projects even more difficult to manage.
The total cost of ownership of enterprise software runs into many millions. When selecting that software your goal is to maximize the return on the investment. That means finding the software best matched to your organization’s particular needs. While spreadsheets are tremendously useful for some kinds of problems, evaluating enterprise software does not play to their strengths. They are too manual, and simply don’t scale up to large evaluations. Using spreadsheets for enterprise evaluations takes an already large project and adds another layer of work and risk. Your choices are:
If you select the third option, you may consider requirements management systems. While they excel at requirements management, they are optimized for software development and lack the features needed to evaluate and compare software products. There are also cloud-based RFP or procurement systems; while very usable, they tend to be optimized for the RFP process. Since we could not find an application tailored for enterprise software evaluation and selection, we developed our own, which we use in our consulting practice. To help others solve this same software selection problem, we have made our application available at no cost.
Any new enterprise software project is a significant opportunity for a company to improve. To make the most of this opportunity, the real needs of the company must be discovered. Then the software that best meets those needs must be identified. In addition to selecting the software, the evaluation process should also provide significant input to the implementation. Spreadsheets simply don't cut it anymore - it's time to use tools designed for the job.