Recently I’ve been writing down some thoughts about business and IT agility: What they are, how they evidence themselves in the bottom line and in risk (or its proxy, public-company beta), and how to measure them. At the same time, in my study of “data usefulness” (how much potentially useful data actually gets used effectively by the appropriate target), I included a factor called ‘data agility,’ or the ability of the organization to keep up to date with new useful data sources. What I want to do now is consider a larger set of questions: what does agility mean in the context of the organizational process that ideally gathers all potentially useful information in a timely fashion and leverages it effectively, how can we measure it, and what offers the biggest “bang for the agility-investment buck”?
My initial pass at “information-handling agility” is that four sources of change are key: (1) unexpected changes in the environment; (2) planned changes in the non-informational organization/process (which should also cover expected changes in the environment); (3) unplanned changes in the non-informational organization; and (4) new data sources/types. Information-handling agility therefore includes four corresponding abilities: to react rapidly and effectively in supplying information about unexpected changes in the environment; to supply information about expected changes in the environment in a proactively planned but timely way; to react rapidly and effectively by supplying different types of information after an unexpected internal change; and to proactively seek out and effectively use new data sources.
Note that, strictly speaking, this doesn’t cover all cases. For example, it doesn’t cover outside change during the information-handling process – but that’s reasonable if, as seems to be the case in real-world businesses, such a change either doesn’t affect the ultimate information use or is so important that it’s already handled by special “alerts.” Likewise, the definition of data agility doesn’t include all changes in the data, only the new data-source ones; again, in the real world the rest seems to be much less of a problem.
To see how this can be measured and what offers the biggest “bang for the buck,” let’s run a thought experiment. Take Firm A, a typical firm in my “data usefulness” research, and apply the Agility From Investment (AFI) metric, defined as AFI = (1 + % change [revenues] – % change [development and operational costs]) × (1 + % change [upside risk] – % change [downside risk]) – 1. Assume that Firm A invests in cutting the time it takes to deliver data to the average user from 7 days to 3½ days – and ensures that the data can be used as effectively as before. By the way, the different types of agility won’t show up again, but they underlie the analysis.
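The AFI formula can be sketched as a small function. The percentage values below are the thought experiment’s illustrative assumptions, not measured data:

```python
def afi(rev_change, cost_change, upside_change, downside_change):
    """Agility From Investment (AFI).

    All arguments are fractional changes (0.01 == 1%).
    Decreases in costs and in downside risk improve AFI,
    so they enter the formula with their signs flipped.
    """
    return ((1 + rev_change - cost_change)
            * (1 + upside_change - downside_change)) - 1

# The "timeliness" scenario discussed below: +1% revenue, -3% costs,
# +1% upside risk, -2% downside risk (all assumed values).
print(round(afi(0.01, -0.03, 0.01, -0.02), 4))  # 0.0712, i.e. roughly 7%
```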
We can see that in the long term, assuming its competitors don’t match it, the “timeliness” strategy will improve revenues by increasing the agility of new-product development (NPD) – but only if new-product development is agile itself. If we assume that, on average, ½ of firms’ NPD processes are “agile enough,” then we have a 15% improvement in ROI × ½ = 7½% (the maximum change in long-term revenues). Since we have only improved timeliness by ½, the maximum change becomes 3¾%; the typical data usefulness is about 1/3, taking it down to 1¼%; and information’s share of this takes it below 1%.
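The chain of discounts above can be written out step by step; every factor here is one of the thought experiment’s assumptions, not a measured value:

```python
# Step-by-step attenuation of the maximum long-term revenue gain.
max_roi_gain = 0.15   # assumed ROI improvement from fully agile NPD
npd_agile    = 0.5    # assumed fraction of firms' NPD that is "agile enough"
timeliness   = 0.5    # delivery time only halved (7 days -> 3.5 days)
usefulness   = 1 / 3  # typical data usefulness

gain = max_roi_gain * npd_agile   # 7.5%
gain *= timeliness                # 3.75%
gain *= usefulness                # 1.25%
print(round(gain, 4))             # 0.0125, i.e. 1 1/4 %
```

Information’s share of that 1¼% then takes the final figure below 1%, as noted above.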
Costs are seemingly a different story. Reducing the time to deliver information affects not only the per-transaction IT costs of delivery, but also every business process that depends on that information. So it is reasonable to assume a 1% decrease in NPD costs plus a 5% decrease in operational costs, for an average of 3%. Meanwhile, the increase in upside risk goes through a similar computation as for revenues, yielding less than a 1% increase in that type of risk.
That leaves downside risk. Those risks are primarily the failure to get information in time to react effectively to a disaster, and the failure to get the right information to act effectively. Because the effect on risk increases as the response time gets closer to zero, it is reasonable to estimate the effect on downside risk at perhaps a 5% decrease; and since only 1/3 of the data is useful, that takes it down below 2%. Putting it all together (remembering that decreases in costs and in downside risk enter the formula with their signs flipped), AFI = (1 + 1% + 3%) × (1 + 1% + 2%) – 1 ≈ a 7% overall improvement in the corporation’s bottom line and risk – and that’s being optimistic.
Now suppose we invested instead in doubling the percentage of potentially useful data that is effectively used – i.e., not timeliness but accuracy, consistency, scope, fit with the needs of the user/business, and analyzability. Performing the same computations, I come out with AFI = (1 + 1% + 1.5%) × (1 + 7.5% + 1%) – 1 ≈ an 11% long-term agility improvement.
One more experiment: suppose we invested in immediately identifying key new data sources and pouring them into our information-handling process, rather than waiting half a year or more. Again, applying the same computations, but with one more assumption (a high degree of rapid change in the sources of key data), AFI = (1 + 2% + 2%) × (1 + 7.5% + 8%) – 1 ≈ a 20% improvement in long-term contribution to the company’s bottom line.
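The three experiments can be compared side by side with the same formula; all of the input percentages are the illustrative assumptions stated above:

```python
def afi(rev, cost, upside, downside):
    """Agility From Investment; all arguments are fractional changes."""
    return (1 + rev - cost) * (1 + upside - downside) - 1

# name: (rev_change, cost_change, upside_change, downside_change),
# using the assumed values from the three thought experiments.
scenarios = {
    "timeliness":   (0.01, -0.03,  0.01,  -0.02),   # ~7%
    "usefulness":   (0.01, -0.015, 0.075, -0.01),   # ~11%
    "data agility": (0.02, -0.02,  0.075, -0.08),   # ~20%
}
for name, args in scenarios.items():
    print(f"{name:>12}: {afi(*args):.1%}")
```

The spread between the three results is the point of the exercise: under these assumptions, data agility pays roughly three times what timeliness does.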
Now, mind you, I have carefully shaped my assumptions, so please do not assume that this analysis is exactly what any firm will experience over the long term. There are, however, two take-aways that do not seem to be part of the general consensus today.
First, firms are probably typically underestimating the long-term effects of efforts aimed at improving data usefulness (including timeliness, effectiveness, and data agility). Reasonably enough, they are concerned with immediate decision-making and strategies that affect information-handling tangentially and piecemeal. However, the result, as I have noted, is a “whack-a-mole” game in which no sooner is one information-handling problem tackled than another pops up.
Second, firms are also clearly underestimating the long-term benefits of improving data usefulness compared to improving timeliness, and of improving data agility compared to improving both timeliness and data usefulness. The reason appears to be that firms don’t appreciate the value of feeding better and newer data into the new-product development process, compared to more timely delivery of the same old data.
I offer up one action item: IT organizations should seriously consider adding a Data Agility role. The job would be to monitor all organizational sources of data from the environment – especially the Web – and to ensure that they are appropriately added to the information-handling inputs and process as soon as possible.