Blog


Your Categories
Information Infrastructure EII TCO/ROI Hardware Uncategorized Green IT Development

Development

Recently there passed in front of me two sets of thoughts: one, an article by Marc Andreessen (once a Web software developer, now a VC) in the Wall Street Journal about how software is “eating the world”; one by a small development tools/services company called Thoughtworks about a new technique called Continuous Delivery.

 

Guess which one I think is more important.

 

Still, both make important points about changes in computing that have fundamental effects on the global economy. So let’s take them one at a time, and then see why I think Continuous Delivery is more important – indeed, IMHO, far more important. A brief preview: I think that Continuous Delivery is the technique that takes us into the Third Age of Software Development – with effects that could be even more profound than the advent of the Second Age.

 

To Eat the World, First Add Spices

A very brief and cursory summary of Andreessen’s argument is that companies selling software are not only dominant in the computer industry, but have disrupted (Amazon for books, Netflix for movies) or are “poised to disrupt” (retail marketing, telecom, recruiting, financial services, health care, education, national defense) major additional industries. The result will be a large proportion of the world’s industries and economy in which software companies will be dominant – thus, software will “eat the world”.

 

My quarrel with this argument is not that it overstates the case, but rather that it understates it. What Andreessen omits, I assume in the interest of conciseness, is that in far more industries and companies, software now is the competitive differentiator between companies, even though the companies’ typical pedigree is primarily “hardware” or “services”, and the spending on software is a small percentage of overall costs.

 

My favorite examples, of course, are the company providing audio and video for stadiums, which relies on software to synchronize the feeds to the attendees better than the competition, and Boeing, which would not be able to provide the differentiating cost-saving use of composites were it not for the massive amounts of software code coordinating and monitoring the components of the Dreamliner. In health care, handling the medical health record and support for the new concept of the medical home requires differentiating software to glue together existing tools and health care specialists, and allow them to offer new types of services. It isn’t just the solo software company that is eating the world; it’s the spice of software differentiation in the mostly non-software businesses that is eating these companies from within.

 

So Andreessen’s article signals a growing awareness in the global economy of the importance not just of computing, but of software specifically, and not just as a separate element of the economy, but as a pervasive and thoroughly embedded aspect of the economy whose use is critical to global economic success or failure. How far we have come, indeed, from the point 30-odd years ago when I suggested to Prof. Thurow of MIT that software competitive advantage would allow the US to triumph over the Japanese economic onslaught, and he quite reasonably laughed the idea to scorn.

 

I do have one nit to pick with Andreessen. He trots out the assertion, which I have heard many, many times in the past 35 years, that companies just can’t find the programming skills they need – and this time, he throws in marketing. Every time people have said this, even at the height of the Internet boom, I have observed that a large body of programmers trained in a previous generation of the technology is being overlooked. That sounds as if it makes sense; but, as anyone should know by now, it is part of the DNA of any decent programmer to constantly acquire new knowledge within and beyond a skill set – good programmers are typically good programmers in any arena, from Web-site design to cloud “program virtualization”.

 

Meanwhile, I was hearing as late as a year ago about the college pipeline of new programmers being choked off because students saw no future jobs (except in mainframes!), and I can attest that available, technically savvy marketers in the Boston area – a well-known breeding ground – are still thick on the ground, while prospective employers in the Boston area continue to insist that program managers linking marketing and programming have at least five years of experience in that particular job. It just won’t wash, Mr. Andreessen. The fault, as is often the case, is probably not mainly in the lack of labor, but in companies’ ideological blinders.

 

The Idea of Continuous Delivery

Continuous Delivery, as Thoughtworks presents it, aims to develop, upgrade, and evolve software by constant, incremental bug fixes, changes, and addition of features. The example cited is that of Flickr, the photo sharing site, which is using Continuous Delivery to change its production web site at the rate of ten or more changes per day. Continuous Delivery achieves this rate not only by overlapping development of these changes, but also by modularizing them in small chunks that still “add value” to the end user and by shortening the process from idea to deployment to less than a day in many cases.

 

Continuous Delivery, therefore, is a logical end point of the whole idea of agile development – and, indeed, agile development processes are the way that Thoughtworks and Flickr choose to achieve this end point. Close, constant interaction with customers/end users is in there; so is the idea of changing directions rapidly, either within each feature’s development process or by a follow-on short process that modifies the original. Operations and development, as well as testing and development, are far more intertwined. The shortness of the process allows such efficiencies as “trunk-based development”, in which the process specifically forbids multi-person parallel development “branches” and thus avoids their inevitable communication and collaboration time, which in a short process turns out to be greater than the time saved by parallelization.

 

I am scanting the many fascinating details of Continuous Delivery in the real world in order to focus on my main point: it works. In fact, as agile development theory predicts, it appears to work better than even other agile approaches. Specifically, Thoughtworks and Flickr report:

 

  • ·         A much closer fit between user needs and the product over time, because it is being constantly refreshed, with end-user input an integral part of the process;
  • ·         Less risk, because the small changes, introduced into production through a process that automatically minimizes risk and involves operations, decrease both the number of operational bugs and the time to identify and fix each; and
  • ·         Lower development costs per value added, as loosely measured by the percentage of developed features used frequently as opposed to rarely or never.

 

 

 

The Ages of Software Development

I did my programming time during the First Age of software development, although I didn’t recognize it as such at the time. The dominant idea then was that software development was essentially a bottleneck. While hardware and network performance and price-performance moved briskly ahead according to Moore’s Law, software developer productivity on average barely budged.

 

That mattered because in order to get actual solutions, end users had to stand in line and wait. They stood in line in front of the data center, while IT took as long as two years to generate requested new reports. They stood in line in front of the relatively few packaged application and infrastructure software vendors, while new features took 9 months to arrive and user wish lists took  1 ½ years to satisfy. They stood in line in front of those places because there was nowhere else to turn, or so it seemed.

 

I noted the death of the First Age and the arrival of the Second Age in the late ‘90s, when a major Internet vendor (Amazon, iirc) came in and told me that more than half of their software was less than six months old.  Almost overnight, it seemed, the old bottlenecks vanished. It wasn’t that IT and vendor schedules were superseded; it was that companies began to recognize that software could be “disposable.” Lots of smaller-scale software could be developed independent of existing software, and its aggregate, made available across the Internet by communities, as well as mechanisms such as open source, meant that bunches of outside software could be added quickly to fill a user need. It was messy, and unpredictable, and much of the new software was “disposable” wasted effort; but it worked. I haven’t heard a serious end user complaint about 2-year IT bottlenecks for almost a decade.

 

Andreessen’s “software eating the world”, I believe, is a straightforward result of that Second Age. It isn’t just that faster arrival of needed software from the new software development approach allows software to start handling some new complex tasks faster than physical products by themselves – say, tuning fuel mixtures constantly for a car via software rather than waiting for the equivalent set of valves to be created and tested. It is also that the exponential leap in the amount of the resulting software means that for any given product or feature, software to do it is becoming easier to find than people or machines.

 

However, it appears clear that even in the Second Age, software as well as physical products retain their essential characteristics. Software development fits neatly into new product development. New products and services still operate primarily through more or less lengthy periods of development of successive versions. Software may take over greater and greater chunks of the world, and add spice to it; but it’s still the same world.

 

The real-world success of Continuous Delivery, I assert, signals a Third Age, in which software development is not only fast in aggregate, but also fast in unitary terms – so fast as to make the process of upgrade of a unitary application by feature additions and changes seem “continuous”. Because of the Second Age, software is now pervasive in products and services. Add the new capabilities, and all software-infused products/services -- all products/services – start changing constantly, to the point where we start viewing continuous product change as natural. Products and services that are fundamentally dynamic, not successions of static versions, are a fundamental, massive change to the global economy.

 

But it goes even further. These Continuous-Delivery product changes also more closely track changes in end user needs. They also increase the chances of success of introductions of the “new, new thing” in technology that are vital to a thriving, growing global economy, because those introductions are based on an understanding of end user needs at this precise moment in time, not two years ago. According to my definition of agility – rapid, effective reactive and proactive changes – they make products and services truly agile. The new world of Continuous Delivery is not just an almost completely dynamic world. It is an almost Agile World. The only un-agile parts are the rest of the company processes besides software development that continue, behind the scenes of rapidly changing products, to patch up fundamentally un-agile approaches in the same old ways.

And so, I admit, I think Thoughtworks’ news is more important than Andreessen’s. I think that the Third Age of Software Development is more important than the Second Age. I think that changing to an Agile World of products and services is far, far more important than the profound but more superficial changes that the software infused in products and services via the Second Age has caused and will cause.

Final Thoughts and Caveats

Having heralded the advent of a Third Age of Software Development and an Agile World, I must caution that I don’t believe that it will fully arrive any time in the next 2-3 years, and perhaps not for a decade, just as it took the Second Age more than a decade to reach the point of Andreessen’s pronouncements. There is an enormous amount of momentum in the existing system. It took agile development almost a decade, by my count, to reach the critical mass and experience-driven “best practices” it has achieved that made Continuous Delivery even possible to try out. It seems logical that a similar time period will be required to “crowd out” other agile new-product development processes and supersede yet more non-agile ones.

 

I should also add that, as it stands, it seems to me that Continuous Delivery has a flaw that needs to be worked on, although it does not detract from its superiority as a process. Continuous Delivery encourages breaking down features and changes into smaller chunks that reflect shorter-term thinking. This causes two sub-optimal tendencies: features that “look less far ahead”, and features that are less well integrated. To encapsulate this argument: the virtue of a Steve Jobs is that he has been able to see further ahead of where his customers were, and that he has been able to integrate all the features of a new product together exceptionally well, in the service of one, focused vision rather than many, diffused visions.

 

Continuous Delivery, as it stands, pushes the bar more towards more present-focused features that are integrated more as a politician aggregates the present views of his or her constituents. Somehow, Continuous Delivery needs to re-infuse the new-product development process with the ability to be guided at appropriate times by the strong views of an individual “design star” like Steve Jobs – else the Agile World will lose some of its ability to deliver the “new, new thing.” And it would be the ultimate irony if agile development, which aims to release programmers from the stultifying, counter-productive constraints of corporate development, wound up drowning their voices in a sea of other (end-user) views.

 

But who cares? In this latest of bad-news seasons, it’s really nice to look at something that is fundamentally, unreservedly good. And the Agile World brought about by Thoughtworks’ Continuous Delivery’s Third Age of software development, I wholeheartedly believe, is Good News.

OK, I just accessed a recent Forrester report on agile development tools, courtesy of IBM, and boy, am I steamed. What were they thinking?

Let me list the ways in which I appear to be in strong disagreement with their methods and conclusions:

1.       Let’s look at how Forrester decided organizations were doing agile development. Oh, “lean” development is agile, is it? No consideration of the arguments last year that lean NPD can have a counter-effect to agile development? … Interesting, “test-driven development” without anything else is agile. Hmm, no justification … Wow, Six Sigma is agile. Just by concentrating on quality and insisting that bugs get fixed at earlier stages, magically you are agile … Oh, here’s a good one. If your methodology isn’t formal, you aren’t agile. I bet all those folks who have been inventing agile processes over the last few years would agree with that one – not. … Oh, wow again, iterative is not the slightest bit agile, nor is spiral. But Six Sigma is. If I were a spiral methodology user, I would love to be in the same category as waterfall. The writers of the Agile Manifesto would be glad to hear about that one …

2.       Next weirdness: “development and delivery leaders must implement a “just-in-time” planning loop that connects business sponsors to project teams at frequent intervals to pull demand and, if necessary, reprioritize existing project tasks based on the latest information available.” Gee, it seems to me this stands agile on its head. Agile was supposed to use regular input from business users to drive changing the product in the middle of the process. Now we’re in the land of “just in time”, “sponsors”, and “if necessary [!-ed.], reprioritizing existing tasks”. Note how subtly we have shifted from “developer and user collaborate to evolve a product that better meets the user’s needs” to “if you want this produced fast, sign off on everything, because otherwise we’ll have to rejigger our schedule and things may get delayed -- and forget about changing the features.”3.       Oh, here’s a great quote: “Measurement and software development have historically been poor bedfellows; heated debates abound about the value of measuring x or y on development projects. Agile changes this with a clear focus on progress, quality, and status metrics.” On what planet are these people living? I have heard nothing else for the past two years but counter-complaints between IT managers who complain that developers ignore their traditional cost/time-based metrics and agile proponents who point out correctly that “time to value” is a better gauge of project success, and that attempting to fit agile into the straitjacket of rigid “finish the spec” measurements prevents agile development from varying the product to succeed better. Let’s be clear: all the evidence I have seen, including evidence from a survey I did at the beginning of 2009 while at Aberdeen Group, indicates that those organizations that focused on progress, status, or quality metrics (typically, those that did not do agile development) did far worse on average in development speed, development cost, and software quality than those who did not.4.       Here’s an idea that shows a lack of knowledge of history: “historic ALM (application lifecycle management) vendors.” ALM means that you extend application development into the phase after online rollout, delivering bug fixes and upgrades via the same process. By that criterion, there are no historic ALM vendors, because there was no significant market for linking development and application management until recently. Believe me, I spent the late 90s and early 00s trying to suggest to vendors that they link development and application monitoring tools, and getting nowhere.5.       Forrester is for vendors with a strong focus on “agile/lean development.” Here we are again with the “lean.” Tell me, just where in the Agile Manifesto is there any positive mention of “product backlog”?6.       I can’t see any sign anywhere that Forrester, in their vendor assessments, has asked the simple question: is a “practitioner tool” that can be or is integrated with the vendor’s toolset likely to increase or decrease agility? For example, many formal/engineering tools are known to constrain the ability to change the product in the middle of the process. The old Rational design toolsets were famous for it – the approach was reasonable for huge aerospace projects. Their descendants live on in the RUP, and one of the big virtues of Team Concert was that it initially did not attempt to impose those tools on the agile developer. 
So, are these new “task management” tools that IBM is adding built from the ground up for agile developers, or are they old Rational tools bolted on? Or did Forrester just assume that all “task management” tools are pretty much equally agile? If they don’t even acknowledge the question, it sure looks that way.7.       Yup, here’s their set of criteria. The main ones are management, running a project, analytics, and life-cycle support. Any assessment of the agility of the tool in there? How about collaboration support, ability to change schedules in midstream, connection of analytics to market data to drive product changes, and a process for feedback of production-software problems and user needs to the developer to allow incremental adaptation? How about even a mention that any of these are folded into the criteria?8.       There seems to be confusion about the differences between collaborative, open-source, and agile development creeping in here. Collaborative and open-source development can be used in concert with agile development; but both can be used with development processes that are the opposite of agile – or what do you think the original offshoring movement that required rigid, clear spec definition as well as frequent global communication was about? I like CollabNet a lot; but, having followed them since the beginning of the century, I can attest that in the early years of the decade, there was no mention of agile development in their offerings or strategies. Ditto IBM; Eclipse is open-source, and in its early years there was no mention of agile development.You know, I was aware that Forrester was losing historical perspective when it let stellar analysts like Merv Adrian go. But I never imagined this. I really think that IBM, who is now pointing to the Forrester report, should think again about citing this to show how good Team Concert is. To see why I’m still steamed, let me close with one quote from the Agile Manifesto:  We … value: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan.” I could be wrong, but it seems to me that the whole tone of the Forrester Report is exactly the opposite. Heck of a job, Forrie. 


Green IT

At the IBM Systems and Technology Group analyst briefing two days ago, IBM displayed three notable statistics:

 

 

  1.        The global amount of information stored has been growing at 70-100% per year the last 5 years, with the result that the amount of storage has been growing by 20-40% per year;
  2. .       The amount of enterprise expenditures for datacenter power/cooling has grown by more than 10-fold over the last 15 years, with the result that these expenditures are now around 16% of system TCO – equal to the cost of the hardware, although well below the also-rising costs of administration;
  3. .       Datacenter energy usage has doubled over the last five years.

 

 

These statistics almost certainly underestimate the growth in computing’s energy usage, inside and outside IT. They focus on infrastructure in place 5 years ago, ignoring a highly likely shift to new or existing data centers in developing countries that are highly likely to be more energy-inefficient.  Also, they ignore the tendency to shift computing usage outside of the data center and into the small-form-factor devices ranging from the PC to the iPhone that are proliferating in the rest of the enterprise and outside its virtual walls. Even without those increases, it is clear that computing has moved from an estimated 2 % of global energy usage 5 years ago to somewhere between 3 and 4%.  Nor has more energy usage in computing led to a decrease in other energy usage – if anything, it has had minimal or no effect at all. In other words, computing has not been effectively used to increase energy efficiency or decrease energy use by more than marginal amounts – not because the tools are not beginning to arrive, but rather because they are not yet being used by enterprises and governments to monitor and improve energy usage in an effective enough way.

 

And yet, there have been voices – mine among them – pointing out that this was a significant problem, and that there were ways to move much more aggressively, since the very beginning.  I remember giving a speech in 2008 to IT folks, in the teeth of the recession, stressing that the problem would only get worse if ignored, that doing something about it would in fact have a short payback period, and that tools for making a major impact were already there. Here we are, and the reaction of the presenters and audience at the STG conference is that the rise in energy usage is no big deal, that datacenters are handling it just fine with a few tweaks, and that IT should focus almost exclusively on cutting administrative costs.

 

All this reminds me of a Robin Williams comedy routine after the Wall Street implosion.  Noting the number of people blindly investing with Bernard Madoff, pronounced “made off” as in “made off with your money”, Robin simply asked, “Was the name not a clue?” So, I have to ask, “energy usage”:  is the name not a clue?  What does it take to realize that this is a serious and escalating problem?

 

The Real Danger

 

Right now, it is all too easy to play the game of “out of sight, out of time, out of mind.” Datacenter energy usage seems as if it is easily handled over the next few years. Related energy usage is out of the sight of corporate. Costs in a volatile global economy that stubbornly refuses to lift off (except in “developing markets” with lower costs to begin with), not to mention innovations to attract increasingly assertive consumers, seem far more urgent than energy issues. 

 

However, the metrics we use to determine this are out of whack. Not only do they, as noted above, ignore the movement of energy usage to areas of lower efficiency, but they also ignore the impact of the Global 10,000 moving in lockstep to build on instead of replacing existing solutions.

 

Let’s see how it has worked up to now. Corporate demands that IT increase capabilities while not increasing costs. The tightness of the constraints and the existence of less-efficient infrastructure causes IT to increase wasteful scale-out computing almost as much as fast-improving scale-up computing, and also to move some computing outside the data center – e.g., Bring Your Own Device – or overseas – e.g., to an available facility in Manila that is cheaper to provision if it is not comparably energy-optimized at the outset. Next year, the same scenario plays out, only with even greater costs from rebuilding from scratch a larger amount of existing inefficient physical and hardware infrastructure. And on it goes.

 

But all this would mean little – just another little cost passed on to the consumer, since everyone’s doing it – were it not for two things; two Real Dangers. First, the same process impelling too-slow dealing with energy inefficiency is also impelling a decreasing ability of the enterprise to monitor and control energy usage in an effective way, once it gets around to it.  More of the energy usage that should be under the company’s eye is moving to developing countries and to employees/consumers using their own private energy sources inside the walls, so that the barriers to monitoring are greater and the costs of implementing monitoring are higher.

 

Second – and this is more long-term but far more serious – shifts to carbon-neutral economies are taking far too long, so that every government and economy faces an indefinite future of increasing expenditures to cope with natural disasters, decreasing food availability, steadily increasing human and therefore plant/office/market migration, and increasing energy inefficiency as heating/cooling systems designed for one balance of winter and summer are increasingly inappropriate for a new balance. While all estimates are speculative, the ones I think most realistic indicate that over the next ten years, assuming nothing effective is done, the global economy will reach underperformance by up to 1% per year due to these things, and up to double that by 2035.  That, in turn, translates into narrower profit margins due primarily both to consumer demand underperformance and rising energy and infrastructure maintenance costs, hitting the least efficient first, but hitting everyone eventually.

The Blame and the Task

While it’s easy to blame the vendors or corporate blindness for this likely outcome, in this case I believe that IT should take its share of the blame – and of the responsibility for turning things around. IT was told that this was a problem, five years ago.  Even had corporate been unwilling to worry about the future that far ahead, IT should at least have considered the likely effects of five years of inattention and pointed them out to corporate.

 

That, in turn, means that IT bears an outsized responsibility for doing so now. As I noted, I see no signs that the vendors are unwilling to provide solutions for those willing to be proactive.  In the last five years, carbon accounting, monitoring within and outside the data center, and “smart buildings” have taken giant leaps, while solar technologies at whatever cost are far more easily implemented and accessed if one doesn’t double down on the existing utility grid. Even within the datacenter, new technologies were introduced 4 years ago by IBM among others that should have reduced energy usage by around 80% out of the box – more than enough to deliver a decrease instead of a doubling of energy usage. The solutions are there. They should be implemented comprehensively and immediately, as, by and large, has not been done.

 

Alternate IT Futures

 

I am usually very reluctant to criticize IT.  In fact, I can’t remember the last time I laid the weight of the blame on them. In this case, there are many traditional reasons to lay the primary blame elsewhere, and simply suggest that IT look to neat new vendor solutions to handle urgent but misdirected corporate demands. But that begs the question: who will change the dysfunctional process?  Who will change a dynamic in which IT claims cost constraints prevent it from “nice to have” energy tools, while corporate’s efforts to respond to consumer “green” preferences only brush the surface of a sea of energy-usage embedded practices in the organization?

 

Suppose IT does not take the extra time to note the problem, identify solutions, and push for moderate-cost efforts even when strict short-term cost considerations seem to indicate otherwise. The history of the past five years suggests that, fundamentally, nothing will change in the next five years, just as in the past five, and the enterprise will be deeper in the soup than ever.

 

Now suppose IT is indeed proactive. Maybe nothing will happen; or maybe the foundation will be laid for a much quicker response when corporate does indeed see the problem.  In which case, in five years, the enterprise as a whole is likely to be on a “virtuous cycle” of increasing margin advantages over the passive-IT laggards.

 

Energy usage. Is the name not a clue? What will IT do? Get the clue or sing the blues?

A recent blog post by Carol Baroudi heralds a sea change in the responsibilities of IT – or, if you prefer, a complication in IT’s balancing act. She notes that “bring your own device”, the name given to the strategy of letting employees use their own smartphones and laptops at work, rather than insisting on corporate ones, may have major negatives if the enterprise is serious about recycling devices. In effect, Carol is pointing out that allowing employee computing to cross corporate boundaries may have bad effects on corporate efforts to achieve sustainability, and IT needs to consider that.

 

In my experience, these considerations are very similar to those of a previous IT balancing act: IT’s responsibility to provide support to its users balanced against the enterprise’s need to maintain the security of internal computing and data – security whose breaches may threaten the health or even existence of the enterprise. Thus, IT’s past experiences may help guide it in balancing sustainability and the other needs of IT.

 

However, I would assert that adding sustainability to IT’s balancing act should also require a real rethinking of existing balances between all three elements for which IT will be responsible:  support, security, and sustainability. Moreover, I would argue that the result of this rethinking should be a process redesign, not an architectural one, that makes all three elements more equal to each other than they have been before – in balance as an equilateral triangle, not a random intersection of three wildly unequal lines. Finally, I would claim that a best-practices redesign will deliver far more benefits to the enterprise than “business as usual.”

 

Below, I will briefly sketch out how I believe each element should change, and first steps.

 

Redesigning IT Support

 

Support is an often-underestimated part of IT’s job. Many surveys in the past found it useful to distinguish between three IT jobs: keeping the business running, supporting users (internal and external), and helping the corporation achieve competitive advantage. Over the last 10 years, as software has become become critical to competitive advantage across a wider and wider range of industries, IT “innovation for competitive advantage” has begun to put its other two jobs in the shade. However, an enormous piece of IT’s part in achieving “innovation for competitive advantage” is to support the developers, corporate strategists, and managers who are the ones designing and creating the product and business-process software that delivers the actual advantage. In other words, the support that IT provides to end users is key to achieving two out of three of its jobs.

 

On the other hand, experience tells us that support of internal end users without control over the computing they are doing is extremely difficult and also dangerous. The difficulty comes from the fact that the average employee spends little time making sure the organization knows what his or her computing devices (including smartphones), Web usage, and software is – and so support is usually guesswork. Today’s danger now comes from the fact that unexpected computing threatens to cause downtime and security leaks. Sustainability will add “carbon leakage” – the tendency of employees to shift to unregulated devices and software that produce greater emissions when controls that slow them down are placed on the data center.

 

To a certain extent, IT can piggyback on today’s security software in dealing with the new sustainability demands – by adding monitoring of “carbon leakage”, for example, to existing asset management protections against property theft. But IT support processes must also be redesigned to incorporate sustainability considerations. IT Developers must bear their share of “going sustainable” by tilting their development form factors towards devices with lower emissions. Product designers must be encouraged or restricted in the direction of sustainability when designing new products. Corporate strategists should be made to factor IT sustainability into their strategic decisions such as rightsourcing. End users should be encouraged and restricted likewise, both in their use of IT resources and in their uses of personal computing resources for corporate purposes – Carol’s example.

 

Such a process redesign demands as a prerequisite some sort of overall sense of what internal end user carbon emissions are (or whatever other sustainability metrics are appropriate), and how they are changing. My sense is that organizations now understand that they need to draw a line between a particular resource or process and its emissions, and have some handle on all on corporate assets in the data center and corporate headquarters countries (including IT asset management and disposal). The biggest needs right now are to understand IT and employee computing resources outside the data center, and to get IT’s hands around the corporation’s capital across geographical boundaries – how computing and heating relate to emissions in the developing countries, for example.

 

Rethinking IT Security

 

“Scare stories” like theft of a company’s private data are constantly in the news, making the importance of IT security relatively easy for corporate to understand – even if they don’t necessarily want to spend on it. At the same time, when security is implemented, its philosophy of “better safe than sorry” carries its own dangers. My favorite quote in that regard is Princess Leia’s remark in the original Star Wars movie: “The more you tighten your grip, Tarkin, the more star systems will slip through your fingers.” That kind of dynamic plays out in several ways: the inability of companies to see what’s going on outside, because they are not constantly, unconsciously, exchanging information for information; the lowered productivity of employees, as they fail to bring to bear on today’s problems the new technologies that IT could not possibly anticipate supporting, and are therefore excluded by security; and the tendency of employees when too much control is exerted over one form of computing to flow to others that are easier to use but harder to keep track of – such as personal laptops instead of network computing.

 

When it comes to sustainability, security cuts both ways.  On the one hand, as noted above, sustainability needs the kind of visibility into and control of emissions that security provides for corporate data and computing. On the other hand, sustainability badly needs to emphasize the carrot instead of security’s stick, else cultural resistance will make “carbon leakage” endemic. And the converse is also true: “bring your own device”, even if it can be made to incorporating personal recycling reliably, makes security’s job harder.

 

To be fair, IT security has made enormous strides over the last 20 years in its ability to achieve fine-grained availability of apps and data to the outside while protecting proprietary information. Still, I believe that the new equilateral triangle requires not only adjustment of security and sustainability to each other’s needs, but also a shift in the balance between IT’s support tasks and its security efforts. Today’s reactive, controlling approach to security simply hinders too much the organization’s ability to be agile in an environment that is far more uncertain and fast-moving than ever before, as well as the organization’s ability to respond to what are likely to be greater and greater demands for more and more sustainable business practices.

 

The change in the security component, therefore, should be threefold. First, security software should be made much more “virtual.” By that I don’t mean that the applications it monitors should become more “virtual” – that’s happening already. Rather, I mean that the security itself should as far as possible be protecting logical, not physical, objects. In a sense, that’s what already happens, when you talk about security in a service-oriented architecture: you monitor a particular cluster of apps as a whole, no matter what platforms they are split across. So, slowly but surely, organizations have begun to do so – and they should speed it up. However, I also mean that IT should apply the same thing to things like land, buildings, and equipment. IT support needs this, in order to support efficiently across geographies. IT sustainability needs this, in order to efficiently link people, corporate resources, and emissions. Above all, when disaster strikes, the “virtual office” needs instant security as it moves to another location.

 

The second security rethinking should, I would say, take its lead from the Arab Spring. An interesting article in the MIT Technology Review showed how rebels maintained their security in the face of intensive assaults by switching media rapidly – moving from cell phones to Facebook to face-to-face and back. Underlying the concept is the idea of “rolling” or “disposable” security, in which the organization is constantly adding new things to be protected and leaving others behind as less important. Obviously, this can’t be carried too far, as some run-the-business apps can never be unprotected. However, it does give less of a feeling of being controlled to the employee, as some things become less controlled – as long as the shifts are done automatically, as new versions of the security software arriving with Continuous Delivery development processes, and without creating “bloatware.” I am not talking about constant security patches; I am talking about constant changes in what is being protected.

 

The third security rethink is to incorporate the idea that sometimes sustainability may mandate less (controlling) security instead of more. Employees are often ahead of management in their enthusiasm for sustainability – witness IBM incorporating a sustainability strategy as one of the top four only after employees told them they wanted it. Therefore, security to ensure corporate sustainability initiatives are being followed will just have to take second place to IT support for corporate and employee sustainability efforts. In other words, security levels will have to be carefully dialed down, where possible, where sustainability is involved.

 

Reimagining IT Sustainability

 

In many ways, the sustainability component of our equilateral triangle has the least design adjustment to make. Mostly, that’s because so much of IT’s sustainability component has yet to be implemented (and in some cases, defined). Emissions metrics are still in their early stages of incorporation into IT-available software; the proper relationship between the carbon-emissions focus and other anti-pollution efforts is not clear; and sustainability of a “carbon-neutral” organization’s business and IT model is still more a matter of theory than of real-world best practices.

 

Nevertheless, I would still recommend an exercise in reimagining what IT sustainability should be and how it should relate to IT support and IT security, because I believe that the organizations I talk to continue to underestimate the wrenching changes that lie ahead. Certainly, as late as a year ago, few corporations were talking about the effects of massive drought in Texas (anticipated by global warming models) to their data centers there. They do not yet appear to be considering the effects on employee hiring of loss of flood zone home insurance as insurance companies decrease their coverage in those areas in anticipation of further climate effects such as the ones that have driven up their disaster coverage costs sharply over the last 5-10 years. And this is not to mention similar once-in-100-years occurrences that have been taking place all over the rest of the globe in the last year and a half. Enterprises in general and IT in particular are wrapping their heads around what has happened so far; they do not yet appear to have wrapped their heads around the likelihood of a twofold or tenfold increase in these occurrences’ impact on the organization over the next 10 years.

 

IT needs to reimagine sustainability as if these effects are already baked in – as indeed they appear to be – but future effects beyond that are not. To put it in sustainability jargon, IT needs to add adaptation to the mix, but without compromising the movement toward mitigation in the slightest. Effectively, in the middle of a near-recession, IT needs to add additional costs to implement virtual software and the “virtual office”, while maintaining or increasing present plans to spend on decreasing carbon footprint. Decreasing carbon footprint has a clear ROI; adaptation well ahead of time to future disasters does not. Still, as the saying goes, pay me now, or pay me a lot more later.

 

What reimaging sustainability means, concretely, is that IT sustainability itself should incorporate IT efforts to support a more agile software-driven enterprise via more rapid implementation of “virtual software” – and should point that software squarely at physical assets that are difficult to move, like offices, inventory, and tools. Also, IT sustainability software should incorporate security (and vice versa) in terms of roles instead of people, resource types instead of physical plant and equipment. As an old saying put it, “in danger, the poor man looks after his few possessions first; the rich man looks after himself,” knowing that equivalent possessions can be bought later in another place as long as he survives. Likewise, for the corporation with massive resources, IT sustainability wisdom lies in agilely adapting when disaster strikes as well as seeking to prevent further disasters, not betting everything on riding out the storm with possessions intact where you are.

 

The Triangle’s IT Bottom Line

 

The key benefits of setting up an equilateral triangle of IT support, security, and sustainability should be apparent from my discussion above:

 

1.       Improved IT and business agility, with its attendant improvements in competitive advantage and long-term margins;

2.       Improved insurance against disaster and attack risks;

3.       Overall, reduced costs, as energy and efficiency savings more than counterbalance the added costs of adaptation.

 

So my recommendation to IT is that they run, not walk, to the nearest recycling center and Recycle their old IT support-security act; then Reuse it in a new equilateral-triangle strategy that balances support, security, and sustainability more equally; and use the new strategy to Reduce costs, risks, and inflexibility. Reduce, Reuse, Recycle: I bet that strategy will be sustainable. 

I recently read a post by Jon Koomey, Consulting Professor at Stanford, at www.climateprogress.org, called “4 reasons why cloud computing is efficient”. He argues (along with some other folks) that cloud computing – by which he apparently means almost entirely public clouds – is much more beneficial for reducing computing’s carbon emissions than the real-world alternatives. As a computer industry analyst greatly concerned by carbon emissions, I'd like to agree with Jon; I really would.  However, I feel that his analysis omits several factors of great importance that lead to a different conclusion.

 

The study he cites compares the public cloud -- not a private or hybrid cloud -- to "the equivalent". It is clear from context that it is talking about a "scale-out" solution of hundreds and thousands of small servers, each with a few processors. This is, indeed, typical of most public clouds, and other studies have shown that in isolation, these servers do indeed have a utilization rate of perhaps 10-20%. However, the scale-up hundreds-of-processors servers that are a clear alternative, and which are typically not used in public clouds (but are often used in private clouds), have a far better record.  The most recent mainframe implementations, which support up to a thousand "virtual machines", achieve utilization rates of better than 90% -- a three times better carbon efficiency than the public cloud, right up front.

 

The second factor Jon omits is the location of the public cloud. According to Carol Baroudi, author of "Green IT For Dummies", only one public cloud site that she studied is located in an area that has a strong record of electricity that is carbon-emission-light (Oregon). The others are in areas where the energy is "cheaper" because of fossil fuel use. That may change; but you don't move a public cloud data center easily, because the petabytes of data stored there to deliver high performance to nearby customers doesn't move easily, even over short distances. Corporate data centers are more movable, because the data storage sizes are smaller and they have extensive experience with "consolidation". While until recently most organizations were not conscious of the carbon-emission effects of their location, it appears that companies like IBM are indeed more conscious of this concern than most public cloud providers.

 

The third factor that Jon omits is what I call "flight to the dirty". High up-front costs of more efficient scale-up servers leads unconsciously to use of less energy-efficient scale-out servers. Controls over access to public and private clouds and data centers, and visibility of their costs, moves consumer and local computing onto PCs and smartphones. Apparent cheapness of labor and office space in developing nations leads companies to rapidly implement data centers and computing there using existing energy-inefficient and carbon-wasting electrical supplies. All of these "carbon inefficiencies" are not captured in typical analyses.

 

Personally, I come to three different conclusions:

1. The most carbon-efficient computing providers use scale-up computing and integrated energy management, and so far most if not all of those are private clouds.

 

2. The  IT shops that are most effective at improving carbon efficiency in computing monitor energy efficiency and carbon emissions use not only inside but outside the data center, and those inevitably are not public clouds.

 

3. Public clouds, up to now, appear to be "throwing good money after bad" in investing in locations that will be slower to provide carbon-emission-light electricity -- so that public clouds may indeed slow the movement towards more carbon-efficient IT.

 

A better way of moving computing as a whole towards carbon-emission reductions is by embedding carbon monitoring and costing throughout the financials and computers of companies. Already, a few visionary companies are doing just that. Public cloud companies should get on this bandwagon, by making their share of carbon emissions transparent to these companies (and by doing such monitoring and costing themselves). This should lead both parties to the conclusion that they should either relocate their data centers or develop their own solar/wind energy sources, that they should move towards scale-up servers and integrated energy management, and that they should not move to less costly countries without achieving energy efficiency and carbon-emission reduction for their sites up front.

 

This post was originally written last fall, and set aside as being too speculative. I felt that there was too little evidence to back up my idea that “accepting limits” would pay off in business.

 

Since then, however, the Spring 2011 edition of MIT Sloan Management Review has landed on my desk.  In it, a new “sustainability” study shows that “embracers” are delivering exceptional comparative advantage, and that a key characteristic of “embracers” is that they view “sustainability” as a culture to be “wired into the business” – “it’s the mindset”, says Bowman of Duke Energy. According to Wikipedia, the term “sustainability” itself is fundamentally about accepting limits, including environmental “carrying capacity” limits, energy limits, and limits in which use rates don’t exceed regeneration rates.

 

This attitude is in stark contrast to the attitude pervading much of human history. I myself have grown up in a world in which one of the fundamental assumptions, one of the fundamental guides to behavior, is that it is possible to do anything. The motto of the Seabees in World War II, I believe, was “The difficult we do immediately; the impossible takes a little longer.” Over and over, we have believed adjustments in the market, inventions and advances, daring to try something else, an all-out effort, something, anything, can fix any problem.

 

In mathematics, they, too, believed at the turn of the century that any problem was solvable:  that any truth of any consistent, infinite mathematical system could be proved. And then Kurt Godel came along and showed that in every such system, either you could not prove all truths or you could also prove false things, one or the other. And over the next thirty years, mathematics applied to computing showed that some problems were unsolvable, and others had a fundamental lower limit on the time taken to solve the problem that meant that they could not be solved before the universe ended. By accepting these limits, mathematics and programming have flourished.

 

This mindset is fundamentally different from the “anything is possible” mindset. It says to work smarter, not harder, by not wasting your time on the unachievable. It says to identify the highly improbable up front and spend most of your time on solutions that don’t involve that improbability. It says, as agile programming does, that we should focus on changing our solutions as we find out these improbabilities and impossibilities, rather than piling on patch after patch. It also says, as agile programming does, that while by any short-run calculation the results of this mindset might seem worse than the results of the “anything is possible” mindset, over the long run – and frequently over the medium term – it will produce better results.

 

It seems more and more apparent to me that we have finally reached the point where the “anything is possible” approach is costing us dearly. I am speaking specifically about climate change – one key driver for the sustainability movement. The more I become familiar with the overwhelming scientific evidence for massive human-caused climate change and the increasing inevitability of at least some major costs of that change in every locality and country of the globe, the more I realize that an “anything is possible” mentality is a fundamental cause of most people’s failure to respond adequately so far, and a clear predictor of future failure.

 

Let me be more specific: as noted in the UN scientific conferences and recent additional data, “business as usual” is leading us to a carbon dioxide concentration of 1000 ppm in the atmosphere, of which about 450 ppm or 150-200 ppm over the natural amount is already “baked in”. This will result, at minimum, in global increases in temperature of 5-10 degrees Fahrenheit, which will result, among other things, in order-of-magnitude increases in the damage caused by extreme weather events, the extinction of many ecosystems supporting existing urban and rural populations – because many of these ecosystems are blocked from moving north or south by paved human habitations – so that food and shelter production must both change their location and find new ways to deliver to new locations, movement of all populations from locations on seacoasts up to 20 feet above existing sea level, and adjustment of a large proportion of heating and cooling systems to a new mix of the two – not to mention drought, famine, and economic stress. And these are just the effects over the next 60 or so years.

 

Adjusting to this will place additional costs on everyone, very possibly similar to a 10% tax yearly on every individual and business in every country for the next 50 years, no matter how wealthy or adept. Continuing “business as usual” for another 30 years would result in a similar, almost equally costly additional adjustment.

 

Our response to this so far has been in the finest tradition of “anything is possible”. We search for technological fixes under the belief that they will solve the problem, since they appear to have done so before. Most of us – except the embracers – assume that existing business incentives, focused on cutting costs – but these costs have not yet occurred – will somehow respond years before the impact begins to be felt. (Embracers, by the way, actively seek out new metrics to capture things like carbon emissions’ negative effects) We are skeptical and suspicious, since those who have predicted doom before, for whatever reason, have generally seemed to have turned out to be wrong. We hide our heads in the sand, because we have too much else to do and concerns that seem more immediate. We are distracted by possible fixes, and by their flaws.

 

The “embrace limits” mindset for climate changes makes one simple change: accept steady absolute reductions in carbon emissions as a limit. For example, every business, every country, every region, every county accepts that every year, its emissions are to be reduced by 1% in that year. If a business, that business also accepts that its products’ emissions are to be reduced by 1% in that year, no matter how successful the year has been.  If a locality does better one year, it still is expected not to increase emissions the next year. If a country rejects this idea, investments from conforming countries are reduced by 1% each year, and products accepted from that country are expected to comply.

 

But this is a crude, blunt-force suggested application of “embrace limits”. There are all sorts of other applications. Investors will no longer invest in equities that seem to promise 2% long-term returns above historical norms, and will limit the amount of their capital invested in “bets,” because those investments are overwhelmingly likely to be con jobs. Project managers will no longer use metrics like time to deployment, but rather “time to value” and “agility”, because there is a strong possibility that during the project, the team will discover a limit and need to change its objective.

 

Because, fundamentally, climate change is a final, clear signal that Godel has won. Whether we accept limits or not, they are there; and the less we accept them and use them to work smarter, the more it costs us. 

IBM’s launch of its new sustainability initiative on October 1 prompted the following thoughts: This is among the best-targeted, best-thought-out initiatives I have ever seen from IBM. It surprises me by dealing with all the recent reservations I have had about IBM’s green IT strategy. It’s all that I could have reasonably asked IBM to do. And it’s not enough.

Key Details of the Initiative

We can skip IBM’s assertion that the world is more instrumented and interconnected, and systems are more intelligent, so that we can make smarter decisions; it’s the effect of IBM’s specific solutions on carbon emissions that really matters. What is new – at least compared to a couple of years ago – is a focus on end-to-end solutions, and on solutions that are driven by extensive measurement. Also new is a particular focus on building efficiency, although IBM’s applications of sustainability technology extend far beyond that.

The details make it clear that IBM has carefully thought through what it means to instrument an organization and use that information to drive reductions in energy – which is the major initial thrust of any emission-reduction strategy. Without going too much into particular elements of the initiative, we can note that IBM considers the role of asset management, ensures visibility of energy management at the local/department level, includes trend analysis, aims to improve space utilization, seeks to switch to renewable energy where available, and optimizes HVAC for current weather predictions. Moreover, it partners with others in a Green Sigma coalition that delivers building, smart grid, and monitoring solutions across a wide range of industries, as well as in the government sector. And it does consider the political aspects of the effort. As I said, it’s very well targeted and very well thought out.

Finally, we may note that IBM has “walked the walk”, or “eaten its own dog food”, if you prefer, in sustainability. Its citation of “having avoided carbon emissions by an amount equal to 50% of our 1990 emissions” is particularly impressive.

The Effects

Fairly or unfairly, carbon emission reductions focus on reducing carbon emissions within enterprises, and emissions from the products that companies create. Just about everything controllable that generates emissions is typically used, administered, or produced by a company – buildings, factories, offices, energy, heating and cooling, transportation (cars), entertainment, and, of course, computing. Buildings, as IBM notes, are a large part of that emissions generation, and, unlike cars and airplanes, can relatively easily achieve much greater energy efficiency, with a much shorter payback period. That means that a full implementation of building energy improvement across the world would lead to at least a 10% decrease in the rate of human emissions (please note the word “rate”; I will explain later). It’s hard to imagine an IBM strategy with much greater immediate impact.

The IBM emphasis on measurement is, in fact, likely to have far more impact in the long run. The fact is that we are not completely sure how to break down human-caused carbon emissions by business process or by use. Therefore, our attempts to reduce them are blunt instruments, often hitting unintended targets or squashing flies. Full company instrumentation, as well as full product instrumentation, would allow major improvements in carbon-emission-reduction efficiency and effectiveness, not just in buildings or data centers but across the board.

These IBM announcements paint a picture that leads, very optimistically, to 30% improvements in energy efficiency and increases in renewable energy over the next 10 years – beyond the targets of most of today’s nations seeking to achieve a “moderate-cost” ultimate global warming of 2 degrees centigrade, in their best-case scenarios. In effect, initiatives like IBM’s plus global government efforts could reduce the rate of human emissions beyond existing targets. Meanwhile, Lester Brown has noted that from 2008 to 2009, measurable US human carbon emissions from fossil fuels went down 9 percent.

This should be good news. But I find that it isn’t. It’s just slightly less bad news.

Everybody Suffers

Everyone trying to do something about global warming has been operating under a set of conservative scientific projections that, for the most part, correspond to the state of the science in 2007. As far as I can tell, here’s what’s happened since, in a very brief form:

1. Sea rise projections have doubled, to 5 feet of rise in 80 years. In fact, more rapid than expected land ice loss means that 15 feet of rise may be more likely, with even more after that.

2. Scientists have determined that “feedback loops” – such as the loss of ice’s ability to reflect back light and so limit ocean heating, a loss that in turn increases global temperature – are in fact “augmenting feedbacks”, meaning that they will contribute additional global warming even if we decrease emissions to near zero right now.

3. Carbon in the atmosphere is apparently still headed towards the “worst case” scenario of 1100 ppm. That, in turn, apparently means that the “moderate effect” scenario underlying all present global plans for mitigating climate change at moderate cost (450 ppm) will in all likelihood not be achieved[1]. Each doubling of ppm leads to a 3.5 degree centigrade (6 degree Fahrenheit) average rise in temperature (in many cases, more like 10 degrees Fahrenheit in summer), and the starting level was about 280 ppm, so reaching 1100 ppm means roughly a 12 degree Fahrenheit rise[2] (see the arithmetic sketch after this list), with follow-on effects and costs that are linear up to 700-800 ppm and difficult to calculate but almost certainly accelerating beyond that.

4. There is growing consensus that technologies to somehow sequester atmospheric carbon or carbon emissions in the ground, if feasible, will not be operative for 5-10 years, not at full effectiveness until 5-10 years after that, and not able to take us back to 450 ppm for many years after that – and not able to end the continuing effects of global warming for many years after that, if ever[3].
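
As a quick check of the arithmetic in point 3 (a sketch using only the figures cited there: 6 degrees Fahrenheit per doubling and a starting level of about 280 ppm):

from math import log2

# Figures from point 3: ~6 degrees F of average warming per doubling of
# atmospheric CO2, from a starting level of about 280 ppm.
DEG_F_PER_DOUBLING = 6.0
START_PPM = 280.0

def warming_deg_f(ppm):
    return DEG_F_PER_DOUBLING * log2(ppm / START_PPM)

print(f"At 1100 ppm: ~{warming_deg_f(1100):.0f} degrees Fahrenheit of warming")

This prints roughly 12 degrees, matching the figure above.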

Oh, by the way, that 9% reduction in US emissions? Three problems. First, it happened under conditions in which GNP was mostly going down. As we reach conditions of moderate or fast growth, that reduction goes to zero. Second, aside from recession, most of the reductions achieved up to now come from low-cost-to-implement technologies. That means that achieving the next 9%, and the next 9% after that, becomes more costly and politically harder to implement. Third, at least some of the reductions come from outsourcing jobs, and therefore plant and equipment, to faster-growing economies with lower costs. Even where IBM is applying energy efficiencies to these sites, the follow-on jobs outside of IBM are typically less energy-efficient. The result is a decrease in the worldwide effect of US emission cuts. As noted above, the pace of worldwide atmospheric carbon dioxide rise continued unabated through 2008 and 2009. Reducing the rate of human emissions isn’t good enough; you have to reduce the absolute amount of direct human emissions, of human-caused effects (like reduced reflection of sunlight by ice), and of follow-on emissions (like those from melting permafrost, which in the Arctic holds massive amounts of carbon and methane).

That leaves adaptation to what some scientists call climate disruption. What does that mean?

Adaptation may mean adapting to a rise in sea level of 15 feet in the next 60 years and an even larger rise in the 60 years after that. Adaptation means adapting to disasters that are, on average, 3-8 times more damaging and costly than they are now (a very rough calculation, based on the scientific estimate that a 3 degree centigrade temperature rise doubles the frequency of category 4-5 hurricanes; the reason is that the atmosphere involved in disasters such as hurricanes and tornados can store and release more energy and water as temperatures rise). Adaptation means adjusting to the loss of food and water related to ecosystems that cannot move north or south, blocked by paved human cities and towns. Adaptation means moving to lower-cost areas, or constantly revising heating and cooling systems in the same area, as the amount of cooling and heating needed in that area changes drastically. Adaptation means moving food sources in response to changing climates that make some areas better for growing food, others worse. Adaptation may mean moving 1/6 of the world’s population from the one-third of the world’s cultivable land that will become desert[4]. In other words, much of this adaptation will affect all of us, and the costs of carrying it out will fall to some extent on all of us, no matter how rich. And we are talking about the adaptation that, according to recent posts[5], appears to be already baked into the system. Moreover, if we continue to be ineffectual at reducing emissions, each decade will bring additional adaptation costs on top of what we are bound to pay already.

Adaptation will mean significant additional costs to everyone – because climate disruption brings costs to everyone in their personal lives. It is hard to find a place on the globe that will not be further affected by floods, hurricanes, sea-level rise, wildfires, desertification, heat that makes some places effectively unlivable, drought, permafrost collapse, or loss of food supplies. Spending to avoid those things for one’s own personal home will rise sharply – well beyond the costs of “mitigating” further climate disruption by low-cost or even expensive carbon-emission reductions.

What Does IBM Need To Do?

Obviously, IBM can’t do much about this by itself; but I would suggest two further steps.

First, it is time to make physical infrastructure agile. As the climate in each place continually changes, the feasible or optimum places for head offices, data centers, and residences endlessly change. It is time to design workplaces and homes that can be inexpensively transferred from physical location to physical location. Moving continually is not a pleasant existence to contemplate; but virtual infrastructure is probably the least-cost solution.

Second, it is time to accept limits. The effort to pretend that we do not need to reduce emissions in absolute, overall terms – because technology, economics, or sheer willpower will save us – has been practiced since our first warning in the 1970s, and it is failing badly. Instead of talking in terms of improving energy efficiency, IBM needs to start talking in terms of absolute carbon emissions reduction every year, for itself, for its customers, and for the use of its products, no matter what the business’ growth rate is.

One more minor point: because climate will be changing continually, adjusting HVAC for upcoming weather forecasts, which only go five days out, is not enough. When a place that has seen four days of 100 degree weather every summer suddenly sees almost 3 months of it, no short-term HVAC adjustment will handle continual brownouts adequately. IBM needs to add climate forecasts to the mix.

Politics, Alas

I mention this only reluctantly, and in the certain knowledge that for some, this will devalue everything I have said. But there is every indication, unfortunately, that without effective cooperation from governments, the sustainability goal that IBM seeks, and avoidance of harms beyond what I have described here, are not achievable.

Therefore, IBM’s membership in an organization (the US Chamber of Commerce) that actively and preferentially funnels money to candidates and legislators who deny there is a scientific consensus about global warming and its serious effects undercuts IBM’s credibility in its sustainability initiative and causes serious damage to IBM’s brand. Sam Palmisano’s seat on the board of a company (Exxon Mobil) that continues to fund some “climate skeptic” financial supporters (the Heritage Foundation, at the least) and preferentially funnels money to candidates and legislators who deny the scientific consensus does likewise.

Summary

IBM deserves enormous credit for creating, today, comprehensive and effective efforts to tackle the climate disruption crisis as it was understood three years ago. But those efforts are three years out of date. IBM needs to use them as the starting point for creating new solutions within the next year, solutions aimed at a far bigger task: tackling the climate disruption crisis as it is now.



[1] Recent studies suggest that in order to limit warming to 5 degrees centigrade or 9 degrees Fahrenheit (via the 450 ppm atmospheric carbon dioxide long-term limit), carbon emissions must be limited to an average of 11 billion tons per year, perhaps less. The only scenario under which that clearly happens is global implementation of supplying a majority of energy needs from non-fossil fuels, almost immediately. Few if any countries presently have in place a plan that will make that happen within the next ten years. And most models generating scenarios do not take into account positive feedback loops.

[2] I am talking here about the US; the worldwide rise will be slightly lower. In 2009, atmospheric carbon dioxide reached 395 ppm at maximum, up about 41% from 150 years ago, and at its present rate of increase of over 2 ppm per year, should reach 400 ppm in 2011.  Because of feedback effects, scientists predict that this rate of increase will continue to grow. Growth in 2008, at 2.93 ppm, was the highest on record.

[3] For example, one scientist studying one of the most promising types of “geo-engineering” indicates that it will have little if any impact unless emissions are dramatically reduced before the geo-engineering is applied.

[4] I have left out a host of other adaptations with less obvious effects, such as wildfires, floods, destruction of more than 70% of species, and so on.

[5] See, for example, Heidi Cullen, “The Weather of the Future”, although she is overly optimistic, since her data goes only to the beginning of 2009 and her conclusions are scientifically conservative.

Green IT Revisited
04/12/2010

Paul Krugman has just posted an alarming article updating the consensus of reputable scientists and economists on global warming. Along with a book about which I recently did a blog post, it provides a 20,000-foot view of the increasing clarity of global warming’s likely future effects, and of the paths the world will take as it seeks (or fails to seek) to mitigate (not eliminate) those effects.

 

A very broad-brush summary is that spending on “green” carbon-dioxide-emission reduction can take one of three paths:

 

  1. Do nothing. In that case, businesses of all stripes will need to decide what to do about urban areas no longer supported by their physical infrastructure (because the traditional climate will have migrated northward/southward with little ecosystem/farm ability to adapt); about serious droughts and more extreme weather events with much greater damage to the economy, especially in emerging markets; and about much greater political friction, as haves and have-nots quarrel more bitterly over changes that will disproportionately affect the have-nots.
  2. What Krugman calls the gradualist approach, in which government-led cap-and-trade or carbon-tax initiatives are slowly phased in, probably with loopholes, over the next 15-30 years. In that case, businesses should anticipate spending much energy ensuring that the resulting phase-in is as favorable as possible for a given industry – but also, there is a good possibility that they will still face the dilemmas outlined in Path 1, in slightly less dire form, over the next 30 years.
  3. What Krugman has dubbed the “big bang” approach, in which cap-and-trade or carbon taxes go into full force in the next 2-3 years, worldwide, with special attention paid to reducing coal emissions and production sharply. This might result in a global temperature increase of “only” 4 degrees Fahrenheit, which would avoid most of the dire effects cited in case 1. However, businesses will find it less productive to advocate for particular industries in that case, as their time will be fully taken up by figuring out how to handle the new industry pecking order and minimizing the additional “tragedy of the commons” costs.

 

But what does this mean for IT, here and now, in the middle of struggling to get out from under the worst economic crisis since the Great Depression?

 

Let’s start with the way these three approaches will affect what business requires of IT over, say, the next five years. If the “do-nothing” approach is adopted, and carbon emissions continue to increase as in the last 40 years, it is likely that the worst consequences will not take place until after 5 years’ time. Thus, businesses content to maximize profits in the short run will demand little change from IT in the next five years, while proactive businesses will seek to virtualize their IT infrastructure as rapidly and comprehensively as possible, to reduce its dependence on any particular physical location. IT should also consider decreasing outsourcing to particular emerging-country political hot spots, as these will be increasingly risky places to do jobs such as programming.

The “gradualist” approach shares with the “do-nothing” approach a significant likelihood that the worst will happen – but probably not until after 5 years’ time.  Therefore, as before, pure short-run businesses will not demand climate-related changes in their IT, while proactive ones will seek faster virtualization. However, businesses will also have to compete with businesses in other industries to minimize the relative effect of caps or taxes. IT will play a key role in that effort, since it is the most effective place to monitor (and possibly adjust) emissions to minimize these effects. That is, IT will provide the emission sensors; the business will decide what to do, based on the sensor data. Thus, IT should anticipate a greater need for green-specialized data-analysis software, energy-efficient data centers, and grid-based physical-plant monitoring software and hardware.
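
As a minimal sketch of the kind of green-specialized data analysis this implies (the facility names, readings, and cap figure below are all hypothetical, purely for illustration):

# Hypothetical sketch: roll up metered emissions by facility and flag any
# facility trending above an assumed cap. All data and names are made up.
from collections import defaultdict

readings = [  # (facility, month, tonnes of CO2-equivalent)
    ("plant_a", "2010-01", 120.0), ("plant_a", "2010-02", 131.0),
    ("plant_b", "2010-01", 80.0),  ("plant_b", "2010-02", 78.0),
]
ASSUMED_MONTHLY_CAP = 125.0  # illustrative cap, not a real regulatory figure

by_facility = defaultdict(list)
for facility, month, tonnes in readings:
    by_facility[facility].append(tonnes)

for facility, series in sorted(by_facility.items()):
    latest, trend = series[-1], series[-1] - series[0]
    status = "OVER assumed cap" if latest > ASSUMED_MONTHLY_CAP else "within assumed cap"
    print(f"{facility}: latest {latest:.0f} t, trend {trend:+.0f} t, {status}")

The point is not the toy code but the pattern: IT supplies the sensor data and the rollup; the business decides what to change.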

 

In the “big bang” approach, there is much less need to prepare for climate change, but much greater need to prepare for changes in the industry pecking order that cannot be avoided. IT should be prepared for much more rapid transition of the business into less energy-intensive or carbon-dioxide-releasing fields than we have ever seen before. Increased agility, even as a coal business transforms itself into a water-power one or an oil company into a solar/wind-power colossus, is demanded of IT. That means applicability of business-critical software to new accounting methods and decreases in the cost of merging with other companies’ IT.

 

The flip side of what business requires of IT is what IT can supply to the business in order to aid carbon-dioxide-emission reduction. As noted above, there is no immediate need to carry out these “reduction actions,” and the “tragedy of the commons” ensures that the business will bear few of the long-term costs. However, failure to carry the reduction actions out now will mean some additional costs later, as well as a clear handicap in the marketplace compared to smarter rivals.

 

The key fact to keep in mind in this area is that while industries are nominally not the source of increasing carbon emissions – at least not lately (according to some figures, emissions from companies’ internal processes, overall, have been increasing by about 0.6-1% per year, far less than “consumer” emissions) – they are also judged by consumer usage of their services; and that varies by industry. An oil company’s manufacturing processes may be far more energy-efficient than 10 years ago; but lower relative oil costs and economic growth translate into rapidly increasing use of gas in transportation, which in turn leads to a strong political focus on automotive gas mileage and on the role oil companies play in auto emissions.  Internal IT, by contrast, has seen a much faster ramp-up in energy use in the typical company; but, so far, it remains a far less visible symbol of consumer excess.

 

IT’s role in being part of the solution, not part of the problem, is therefore the same no matter what political approach is adopted. The future holds increasing carbon-dioxide concentrations in the atmosphere, whatever the approach; and, no matter what the approach, every business with consumer-emission-affecting products and services needs to do everything possible in mitigating the effects of these products.

 

IT’s approach to helping a business do something about climate change, therefore, is (a) focused on consumer products and (b) industry-specific. If an industry such as aerospace, travel, or utilities has a large impact on consumer emission-affecting behavior, IT needs to help the business help the customer to reduce emissions, as part of the products and services. Thus, IT needs to help set up customer-emission-monitoring software, ways of making the product more energy-efficient (rationalizing and fine-tuning the electrical grid, or figuring out more energy-efficient vacations that are still satisfactory to the consumer), and measurements of energy emissions that enable both internal and external improvements.  Moreover, these software solutions need to be far more global and extra-organizational than ever before, because it is very easy to “flow emissions to the least regulated spot”, which accomplishes nothing while seeming to the business to solve its problem.

 

One final point: even during the recession, total computing-related emissions appear to have continued to climb at a rapid rate – somewhere between 20% and 50% yearly worldwide. While some of this climb can be excused as a byproduct of moves to decrease other/larger emissions, somewhere around 10 years down the line IT itself will see more stringent energy limits that cannot be outsourced to developing countries or traded for advances in other areas such as transportation. A global, cross-organizational and cross-market approach to a business’ emission profile may not pay dividends until then; but it will indeed pay dividends.

 

Overall, then, this economic “perfect storm” that IT and the business are now seeing as a reason to downplay green efforts is, in fact, more like the fable of the frog placed in water that is gradually heated, which fails to notice the increase at any point until it is dead and cooked. Whatever the political approach, climate change pain is already baked in, and quicker IT adjustment is better than failure to notice the increasing heat over the next 5 years. What the IT strategy should be depends on the political approach, industry, and consumer; that there should be an IT green strategy does not.


Hardware

Thinking about Intel’s announcement on Friday that it will acquire Wind River Systems, it occurs to me that this move syncs up nicely with a trend that I feel is beginning to surface: a global network of sensors of various types (call it the Grid) to complement the Web. But the connection isn’t obvious; so let me explain.

The press release from Intel emphasized Wind River’s embedded-software development and testing tools. Those are only a part of its product portfolio – its main claim to fame over the last two decades has been its proprietary real-time operating system/RTOS, VxWorks (it also has a Linux OS with real-time options). So Intel is buying not only software for development of products such as cars and airplanes that have software in them; it is buying software to support applications that must respond to large numbers of inputs (typically from sensors) in a fixed amount of time, or else catastrophe ensues. Example: a system keeps track of temperatures in a greenhouse, with ways to seal off breaches automatically; if the application fails to respond to a breach in seconds, the plants die.

Originally, in the early development of standardized Unix, RTOSs were valued for their robustness; after all, not only do they have to respond in a fixed time, but they also have to make sure that no software becomes unavailable. However, once the Open Software Foundation and the like had added enough robustness to Unix, RTOSs became a side-current in the overall trend of computer technology, of no real use to the preponderance of computing. So why should RTOSs matter now?

What Is the Grid?

Today’s major computing vendors, IBM among the foremost, are publicizing efforts to create the Smart Grid, software added to the electrical-power “grid” in the United States that will allow users to monitor and adapt their electricity usage to minimize power consumption and cost. This is not to be confused with grid computing, which created a “one computer” veneer over disparate, distributed systems, typically to handle one type of processing. The new Smart Grid marries software to sensors and a network, with the primary task being effective response to a varying workload of a large number of sensor inputs.

But this is not the only example of global, immediate sensor-input usage – GPS-based navigation is another. And this is not the only example of massive amounts of sensor data – RFID, despite being slow to arrive, now handles RFID-reader inputs by the millions.

What’s more, it is possible to view many other interactions as following the same global, distributed model. Videos and pictures from cell phones at major news events can, in effect, be used as sensors. Inputs from sensors at auto repair shops can not only be fed into testing machines; they can be fed into global-company databases for repair optimization. The TV show CSI has popularized the notion that casino or hospital video can be archived and mined for insights into crimes and hospital procedures, respectively.

Therefore, it appears that we are trending towards a global internetwork of consumer and company sensor inputs and input usage. That global internetwork is what I am calling the Grid. And RTOSs begin to matter in the Grid, because an RTOS such as VxWorks offers a model for the computing foundations of the Grid.

The Grid, the Web, and the RTOS

The model for the Grid is fundamentally different from that of the Web (which is not to say that the two cannot be merged). It is, in fact, much more like that of an RTOS. The emphasis in the Web is on flexible access to existing information, via searches, URLs, and the like. The emphasis in the Grid is on rapid processing of massive amounts of distributed sensor input, and only when that requirement has been satisfied does the Grid turn its attention to making the resulting information available globally and flexibly.

This difference, in turn, can drive differences in computer architecture and operating software. The typical server, PC, laptop, or smartphone assumes that the user has some predictable control over the initiation and scheduling of processes – with the exception of networking. Sensor-based computing is much more reactive: it is a bit like having one’s word processing continually interrupted by messages that “a new email has arrived”. Sensors must be added; ways must be found to improve the input prioritization and scheduling tasks of operating software; new networking standards may need to be hardwired to allow parallel handling of a wide variety of sensor-type inputs plus the traditional Web feeds.
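
To make the contrast concrete, here is a minimal sketch, in Python and purely as an illustration of the scheduling idea (a real RTOS such as VxWorks enforces this in the operating system against hard deadlines), of reactive, priority-ordered handling of sensor inputs:

# Illustrative sketch of reactive, priority-ordered sensor handling.
# Lower priority numbers are more urgent; a critical input is processed
# ahead of the backlog of routine readings. Event text is made up.
import heapq

queue, seq = [], 0

def sensor_input(priority, event):
    global seq
    heapq.heappush(queue, (priority, seq, event))
    seq += 1

sensor_input(5, "temperature reading: greenhouse zone 3, 21C")
sensor_input(5, "temperature reading: greenhouse zone 4, 22C")
sensor_input(0, "BREACH: zone 2 panel open, seal required")

while queue:
    priority, _, event = heapq.heappop(queue)
    print(f"[priority {priority}] handling: {event}")

The breach is handled first even though it arrived last; only after such requirements are met does a Grid-style system worry about archiving or flexible access.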

In other words, this is not just about improving the embedded-software development of large enterprises; this is about creating new computing approaches that may involve major elaborations of today’s hardware. And of today’s available technologies, the RTOS is among the most experienced and successful in this type of processing.

Where Intel and Wind River Fit

Certainly, software-infused products that use Intel chips and embedded software are a major use case of Intel hardware. And certainly, Wind River has a market beyond sensor-based real-time processing, in development of embedded software that does not involve sensors, such as networking software and cell-phone displays. So it is reasonable for Intel to use Wind River development and testing tools to expand into New Product Development for software-infused products like audio systems; and it is reasonable for commentators to wonder if such a move trespasses on the territory of vendors such as IBM, which has recently been making a big push in software-infused NPD.

What I am suggesting, however, is that in the long run, Wind River’s main usefulness to Intel may be in the reverse direction: providing models for implementing previously software-based sensor-handling in computing hardware. Just as many formerly software-only graphics functions have moved into graphics chips with resulting improvements in the gaming experience and videoconferencing, so it can be anticipated that moving sensor-handling functions into hardware can make significant improvements in users’ experience of the Grid.

Conclusions

If it is indeed true that a greater emphasis on sensor-based computing is arriving, how much effect does this trend have on IT?  In the short run, not much. The likely effect of Intel’s acquisition of Wind River over the next year, for example, will be on users’ embedded software development, and not on providing new avenues to the Grid.

In the long run, I would anticipate that the first Grid effects from better Intel (or other) solutions would show up in an IT task like power monitoring in data centers. Imagine a standardized chip for handling distributed power sensing and local input processing across a data center, wedded to today’s power-monitoring administrative software. Extended globally across the enterprise, supplemented by data-mining tools, used to provide up-to-date data to regulatory agencies, extended to clouds to allow real-time workload shifting, supplemented by event-processing software for feeding corporate dashboards, extended to interactions with the power company for better energy rates, made visible to customers of the computing utility as part of the Smart Grid – there is a natural pathway from sensor hardware in one machine to a full Grid implementation.

And it need not take Intel beyond its processor-chip comfort zone at all.


TCO/ROI

An increasing number of commentators on both lean New Product Development (NPD) and agile software development have noted efforts to combine the two. A key reason for the interest in the combination is that lean has long been entrenched in the manufacturing process (e.g., via Kanban and ERP), while agile development has recently established itself as an effective software development strategy, and a major segment of NPD now involves software-only products, software-infused products, or software-plus-hardware solutions. A recent Aberdeen Group study notes that 66% of today’s NPD uses software to drive innovation, and the number is rising rapidly. Meanwhile, several efforts are being made to apply lean-only to software development (cf. Hibbs, Jewett, Sullivan, “The Art of Lean Software Development”).

At first glance, lean is a very good complement for agile. Users have concerns about agile’s ability to scale; lean makes sure that developer “resources” are available for each iterative step in a flexible and cost-effective way. Lean allows a different process time-line each time depending on what resources are available when; agile “spirals” in towards a solution, ensuring a different process timeline for each iteration of a solution “guesstimate”.  Both aim to increase value to the customer; both emphasize incremental improvements.

However, a reading of the blogs on lean and agile betrays a fundamental misunderstanding of the agile mindset. Here’s a quote from Martin Heller (http://www.infoworld.com/d/developer-world/should-software-development-be-lean-or-agile-284): “They both strive to improve software quality, reduce waste, increase developer productivity, accept changes to requirements, and prize meeting the customer's real needs.” Agile doesn’t strive to improve software quality; it strives to improve software usefulness. It doesn’t aim to reduce waste; it aims to increase “wasteful” experimentation, which turns out actually to reduce the time-to-value, as a side-effect. Its way of increasing developer productivity – ensuring that development is more in sync with customer needs – is the opposite of lean’s attempt to improve developer productivity by eliminating bottlenecks (and customer interference is potentially a bottleneck). Lean accepts changes to requirements as a necessary part of incremental manufacturing process improvement; agile welcomes and emphasizes changes to requirements as part of improving product design. Lean views customer needs as consisting primarily of quality; agile views customer needs as consisting primarily of rapidly-changing functionality.

Moreover, we have already seen how an emphasis on Deming-like quality can retard product development to the detriment of a firm, as when Motorola lost its race to dominate processors to Intel by being late and customer-insensitive with the design of a “quality”-manufactured product. I can’t quote figures from my recent study under Aberdeen auspices of agile software development vs. other processes; but I can certify that a focus on software quality improvement was far less effective in improving product TCO, ROI, customer value, and ongoing agility than an agile process; in fact, there was no clear long-term benefit to ROI of quality-improving processes at all.

I wish I could say that this is a side-issue, and that once agile is properly understood, lean and agile are indeed complements. After all, some of the confusion about agile stems from a lean mindset that views everything from the lens of the manufacturing process. In many companies, integrating NPD and the manufacturing process has indeed reaped rewards, and lean has been a part of that improvement. Agile, by contrast, is effective at NPD and R&D/innovation, and will probably never be applicable to manufacturing – imagine asking the NC machine to “spiral in on” the correct product each time! Surely, a compromise can be worked out in which lean resource allocation is applied to agile software development and lean manufacturing becomes much more nimble at producing prototypes. However, I regretfully conclude that in the real world, mixing lean and agile is likely to produce worse results than applying agile alone.

Consider the idea of applying just-in-time resource allocation to an agile process. In each iteration, the agile process is going to change the specs during the “sprint.” Thus, half the time, allocation of resources will be inadequate, causing more bottlenecks than a “wasteful” strategy.

More fundamentally, lean increases the need for tight control over and visibility into costs and resources. This kind of oversight, as developers well know, slows things down. As per my study, it also yields nothing in improved productivity, TCO, ROI, or customer satisfaction.

Above all, the mindsets of lean and agile are antithetical. Lean is reactive, prescriptive, and cost-oriented, despite its attempts to connect to the customer. It works where requirements change little from iteration to iteration, as in a manufacturing process. Agile is proactive, creative, and customer-value-oriented. It works where customer needs are constantly changing – which is a much better description of the real world of customers, as opposed to the artificial “turn off the spigot until we’ve finished” world of much of today’s NPD.

In the end, however, the conflict of lean and agile is only one instance of the difficulties companies have with processes like agile when the command-and-control approach has become entrenched. As I noted in a recent post, it may well be caused by budgeting processes that constrain business flexibility. But whatever the reason, there is strong cultural resistance to increasing corporate results long-term by adopting processes that increase business agility, such as agile development.

In an old story whose author I unfortunately cannot remember, a manager is called in to turn the efforts of R&D types into profit. The first thing he notices is that they seem to spend a lot of time with their feet on their desks, thinking – a highly wasteful practice. How can he change that? He puts his legs up on his desk and leans back, thinking … In this story, as in the real world, lean is more the enemy than the partner of agile; and trying to constrain agile via lean will cost the company much more in the long run than it seems to gain according to short-term cost measures.

 

Speed vs. Agility
01/17/2010

A recent product announcement by IBM and a series of excellent (or at least interesting) articles in Sloan Management Review have set me to musing on one unexamined assumption in most assessments: that increased process speed equals increased business agility. My initial take: this is true in most cases, but not in all, and can be misleading as a cookie-cutter strategy.

The IBM announcement centered on integration of its business-process management (BPM) capabilities, in order to achieve agility by speeding up business processes. What was notably missing was integration with IBM’s capabilities for New Product Development (NPD) – Rational and the like. However, my initial definition and application of KAIs (key agility indicators) at a business level suggests that speeding up NPD, including development of new business processes, has far more of an impact on long-term business agility than speeding up existing processes. To put it another way, increasing the Titanic’s ability to turn sharply is far more likely to avert disaster than increasing its top speed charging straight ahead – in fact, increasing its speed makes it more likely to crash into an iceberg.

A similar assumption seems to have been made in SMR’s latest issue, in the article entitled “Which Innovation Efforts Will Pay?” The message of this article appears to be that improving innovation efforts is primarily a matter of focusing more on the “healthy innovation” middle region of internally-developed modest “base hits”, with little or no effect from speeding up internal innovation processes or expanding them to include outside innovation. By contrast, the article “Does IP Strategy Have to Cripple Open Innovation?” suggests that collaborative strategies across organizational lines focused on NPD make users far more agile and businesses far better off, despite requiring as much (or more) time to implement as in-house efforts. And finally, we might cite a study in SMR suggesting that users estimating inventory-refill needs were more likely to make sub-optimal decisions when fed data daily than when fed a weekly summary, or the recent book on system dynamics by Donella Meadows that argued that increasing the speed of a process was often accomplished by increasing its rigidity (constraining the process in order to optimize the typical case right now), which made future disasters, as the system inevitably grows, less avoidable and more life-threatening.

All of this suggests that (a) people are assuming that increased process speed automatically translates to increased business agility, and (b) on the contrary, in many cases it translates to insignificant improvements or significant decreases in agility. But how do we tell when speed equals agility, and when not? What rules of thumb are there to tell us when increased speed does not positively impact business agility, and, in those cases, what should we do?

I don’t pretend to have the final answers for these questions. But I do have some initial thoughts on typical situations that lead to increased speed but decreased agility, and on how to assess and improve business investment strategies in those cases.

If it isn’t sustainable it isn’t agile. Let us suppose that we improve a business process by applying technology that speeds the process by decreasing the need for human resources. Further, suppose that the technology involves increased carbon or energy use – a 1/1 replacement of people-hours by computing power, say. Over the long run, this increased energy use will need to be dealt with, adding costs for future business-process redesign and decreasing the money available for future innovation. The obvious rejoinder is that cost savings will fund those future costs; except that today, most organizations are still digging themselves deeper into an energy hole, while operational IT costs, driven by an increased need for storage, continue to exert upward pressure and crowd out new-app development.

As the most recent SMR issue notes, the way to handle this problem is to build sustainability into every business process. If lack of sustainability decreases agility, then the converse is also true: building sustainability into the company, including both ongoing processes and NPD, increases revenues, decreases costs – and increases agility.

If it’s less open it’s less agile. In some ways, this is a tautology: if an organization changes a business process so as to preclude some key inputs in the name of speed, it will be less successful in identifying problems that call for adaptation. However, it does get at one of the subtler causes of organizational rigidity: the need to do something, anything, quickly in order to survive. A new online banking feature may make check processing much more rapid, but if customers are not listened to adequately, it may be rejected in the marketplace, or cost the business market share.

Detecting and correcting this type of problem is hard, because organizational politics points everyone to the lesson only after the process has already been implemented and has gained a toehold – and because businesses may draw the wrong conclusion (e.g., it’s about better design, not about ensuring open collaboration). The best fix is probably a strong, consistent, from-the-top emphasis on collaboration, agile processes, and not shooting the messenger.

Those are the main ways I have seen in which increased speed can actually make things worse. I want to add one more suggestion, which affects not so much situations where speed has negative consequences, but rather cases in which speed and agility can be improved more cost-effectively: upgrading the development process is better. That is, even if you are redesigning an existing business process like disaster recovery for better speed, you get a better long-term bang for your buck by also improving the agility of the process by which you create and implement the speedier solution. Not only does this have an immediate impact in making the solution itself more agile; it also bleeds over into the next project to improve a business process, or a product or service. And the best way I’ve found so far to improve development-process speed, quality, and effectiveness is an agile process.

TCO/ROI Methodology
05/03/2009

I frequently receive questions about the TCO/ROI studies that I conduct, and in particular about the ways in which they differ from the typical studies that I see.  Here’s a brief summary:

 
  • I try to focus on narrower use cases. Frequently, this will involve a “typical” small business and/or a typical medium-sized business – 10-50 users at a single site, or 1000 users in a distributed configuration (50 sites in 50 states, 20 users at each site). I believe that this approach helps to identify situations in which a typical survey averaging over all sizes of company obscures the strengths of a solution for a particular customer need.
  • I try to break down the numbers into categories that are simple and reflect the user’s point of view. I vary these categories slightly according to the type of user (IT, ISV/VAR).  Thus, for example, for IT users I typically break TCO down into license costs, development/installation costs, upgrade costs, administration costs, and support/maintenance contract costs. I think that these tend to be more meaningful to users than, say, “vendor” and “operational” costs.
  • In my ROI computation, I include “opportunity cost savings”, and use a what-if number based on organization size for revenues, rather than attempting to determine revenues ex ante. Opportunity cost savings are estimated as the TCO cost savings of a solution (compared to “doing nothing”) reinvested in a project with a 30% ROI; see the sketch after this list. Considering opportunity cost savings gives a more complete picture of (typically 3-year) ROI. Comparing ROIs when revenues are equal allows the user to zero in on how faster implementation and better TCO translate into better profits.
  • My numbers are more strongly based on qualitative data from in-depth, open-ended user interviews. Open-ended means that the interviewee is asked to “tell a story” rather than answer “choose among” and “on a scale of” questions, thus giving the interviewee every opportunity to point out flaws in initial research assumptions. I have typically found that a few such interviews yield numbers that are as accurate as, if not more accurate than, 100-respondent surveys.
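
As a minimal sketch of the opportunity-cost-savings idea in the third bullet (the formula shape and all dollar figures here are mine, purely what-if illustrations, not results from any study):

# What-if sketch: TCO savings versus "doing nothing", reinvested in a
# project assumed to return 30% over the (typically 3-year) period.
def opportunity_cost_savings(tco_do_nothing, tco_solution, reinvest_roi=0.30):
    return (tco_do_nothing - tco_solution) * reinvest_roi

tco_do_nothing, tco_solution = 1_200_000, 800_000   # hypothetical 3-year TCOs
savings = tco_do_nothing - tco_solution
extra = opportunity_cost_savings(tco_do_nothing, tco_solution)
print(f"TCO savings: ${savings:,.0f}")
print(f"Opportunity cost savings: ${extra:,.0f}")
print(f"Total benefit considered in the ROI comparison: ${savings + extra:,.0f}")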
 

Let me now, at the end of this summary, dwell for a moment on the advantages of open-ended user interviews. They allow me to focus on a narrower set of use cases without worrying as much about smaller survey size. By avoiding constraining the user to a narrow set of answers, they make sure that I am getting accurate data, and all the data that I need. They allow me to fine-tune and correct the survey as I go along. They surface key facts not anticipated in the survey design. They motivate the interviewee, and encourage greater honesty – everyone likes to “tell their story.” They also provide additional advice to other users – advice of high value to readers that gives additional credibility to the study conclusions.


EII

A couple of recent posts noted the advantages and disadvantages of a policy of customer support involving BYOD, or Bring Your Own Device. The discussion revealed that support involved not only allowing the laptop to hook into the corporate network and access corporate apps, but also allowing “non-standard” software inside corporate boundaries: BYOS, or Bring Your Own Software. However, no one asked the obvious question:  what about the data on that laptop, or the data on the Web that the laptop could access but corporate did not?  Should IT support Bring Your Own Data?

This is not just a theoretical question. A recent missive from BI supplier MicroStrategy invited prospective customers to load their own spreadsheets on its cloud offering, and play around. I doubt that MicroStrategy would have suggested this if there were not significant numbers of business users out there with their own business-related data who would like to analyze it with corporate-style BI tools. One can think of other applications: OLAP (online analytical processing) on spreadsheets, beloved of CFOs and CMOs; business performance management, with corporate “what-if” scenario data lugged around by mobile executives; or simple, cheap cloud sales tools for direct sales, syncing with corporate databases.

 

And that may be the tip of the iceberg.  After all, knowledge workers today are typically not encouraged to go out there and collect data to be analyzed. And yet, the amount of data in the Web about competitors and customers not captured by corporate data stores continues to grow. One survey a while ago estimated that 1/3 of relevant new data on the Web is not ingested  by IT data stores for a year or more after it arrives. Proactive encouragement might allow “worker crowdsourcing” of this type of data. For example, communities within a computer vendor for finding out about customer computer performance would allow employees to bring their own customer-report data inside the enterprise.

 

So, IT support of BYOData is worth considering. But that leads to the next point:  BYOData is significantly different from all previous data sources.  It is consumer data; it is of widely differing types; it has typically not been supported to the same level before; and a greater proportion of it comes from consumer apps – which may or may not use the same databases or file managers as businesses do, and which often are not set up to export data in common formats. We’re not talking spreadsheets here; we’re talking GPS data collected by iPhone apps.

 

Data integration tools are the obvious tools for allowing careful merging of existing enterprise data and BYOData. And among those tools, I would argue, Data Virtualization (DV) tools are the best of the best.

 

Data Virtualization vs. Other Alternatives

Data virtualization tools are an obvious candidate to handle unusual data types and allow querying across BYOData and relational or semi-structured (emails, corporate documents) corporate data. After all, DV tools were designed from the beginning to be highly flexible in the types of data they supported, and to provide a user interface that allowed meaningful combination of multiple data types on-the-fly (i.e., for real-time querying). But there are two other obvious candidates for combining BYOData and corporate relational data warehouses: Enterprise Application Integration (EAI) and ETL (Extract, Transform, Load) tools.
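
As a toy illustration of the on-the-fly combination a DV tool performs (the schema, data, and join below are all hypothetical; a real DV product adds source adapters, caching, security, and far richer data-type support):

# Toy sketch: combine a personal spreadsheet export (BYOData, shown here as
# inline CSV) with a corporate relational table at query time, without first
# staging the personal data into the warehouse. All names and data are made up.
import csv, io, sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Acme", "East"), (2, "Globex", "West")])

byod_csv = "1,met CFO - interested in upgrade\n2,site visit - cooling issues\n"
rows = [(int(r[0]), r[1]) for r in csv.reader(io.StringIO(byod_csv))]
conn.execute("CREATE TEMP TABLE byod_visits (customer_id INTEGER, note TEXT)")
conn.executemany("INSERT INTO byod_visits VALUES (?, ?)", rows)

for name, note in conn.execute(
        "SELECT c.name, v.note FROM customers c "
        "JOIN byod_visits v ON v.customer_id = c.id"):
    print(f"{name}: {note}")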

 

EAI tools are best at combining data from multiple enterprise databases, such as Oracle Apps, SAP, Oracle Peoplesoft, and Oracle Siebel. Over the years, their abilities have been extended to other corporate data, such as IBM’s use of Ascential for its Master Data Management (MDM) product. However, they are much less likely to support non-corporate Web data, and they have less experience in delivering user-friendly interfaces to combined relational and semi-structured or unstructured data.

 

ETL tools are highly skilled at taking operational enterprise data that has already been massaged into a corporate-standard format and merging it for use in a data warehouse. This also means careful protection of the data warehouse from poor-quality data – not an insignificant concern when one considers the poor quality of much BYOData. However, ETL tools are also unlikely to provide full support for non-corporate Web data, and are not tasked at all with delivering user-friendly interfaces to non-relational data outside of the data warehouse.

 

Two concepts from programming appear relevant here:  multi-tenancy, and the “sandbox.” A multi-tenant application provides separate corporate “states” over common application code, ensuring not only security but effective use of common code. A “sandbox” is a separated environment for programmers so that they can test programs in the real world without endangering operational software. In the case of BYOData, the data warehouse could be thought of as containing multiple personal “tenant data stores” that appear to each end user as if they are part of the overall data-warehouse data store, but which are actually separated into “data sandboxes” until administrators decide they are safe to incorporate into the corporate warehouse.
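
A minimal sketch of that “tenant data store” / “data sandbox” idea (the class and method names are mine and purely illustrative):

# Illustrative sketch: per-user data sandboxes kept apart from the corporate
# store until an administrator decides they are safe to promote.
class SandboxedWarehouse:
    def __init__(self):
        self.corporate = []    # vetted data everyone sees
        self.sandboxes = {}    # user -> that user's BYOData

    def add_byod(self, user, record):
        self.sandboxes.setdefault(user, []).append(record)

    def query(self, user):
        # Each user sees corporate data plus only their own sandbox.
        return self.corporate + self.sandboxes.get(user, [])

    def promote(self, user):
        # Administrator incorporates a sandbox into the corporate store.
        self.corporate.extend(self.sandboxes.pop(user, []))

wh = SandboxedWarehouse()
wh.corporate.append({"sku": "A100", "units": 500})
wh.add_byod("pat", {"sku": "A100", "field_note": "demand spike in region 3"})
print(len(wh.query("pat")), "records visible to pat;",
      len(wh.query("lee")), "visible to lee")
wh.promote("pat")
print(len(wh.query("lee")), "records visible to lee after promotion")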

 

Assuming that IT takes this approach to handling BYOData, DV tools are the logical tools to bridge each “data sandbox” with corporate data, whether said corporate data resides in operational data stores, line-of-business file collections, or data-mart data stores.  At the same time, ETL tools are the logical place to start when IT starts to plunder these BYOData sets for enterprise insights by staging the personal data into corporate data stores. When run-the-business applications can benefit from BYOData, EAI and MDM tools are the logical place to start in staging this data into application data stores.

 

But whether IT takes this approach or not, DV tools are the logical overall organizer of BYOData – because they are so flexible. BYOData types will be evolving rapidly over the next couple of years, as social media continue to jump from fad to fad. Only tools that are cross-organizational, like DV tools, can hope to keep up.

 

The IT Bottom Line

The idea of BYOData is a bit speculative. Nothing says that either software vendors or IT shops will embrace the concept – although it appears to offer substantial benefits to the enterprise.

 

However, if they do follow the idea of Bring Your Own Device to its logical conclusion … then BYOData via DV tools is an excellent way to go.

When IBM announced that the next rev of DB2 would support Oracle PL/SQL stored procedures natively, it seemed like a good deal. As I understand it, suppose I want to move an Oracle database function onto DB2. Before, I would have had to rewrite the PL/SQL stored procedure for the business rule in DB2’s language; now, I just recompile the SP I had on the Oracle machine. Then I can redirect any apps that use the Oracle SP to DB2, which is less of a modification than also changing the SP call in the app. It’s native, so there’s little degradation in performance. Neat, yes?

 

Well, yes, except that (a) in many cases, I’ve just duplicated much of the functionality of a similar existing DB2 SP, and (b) I don’t care how native it is, it doesn’t run as well as the DB2-language SP – after all, that language was designed for DB2 25 years ago.  If we do this a lot, we create a situation in which we have SP A on Oracle, plus SP A and A’ (the DB2-language one) on DB2, all of which have to be upgraded together, for lots and lots of As. We’ve already got SP proliferation, with no records of which apps are calling the SP and hence a difficult app modification when we change the SP; now you want to make it worse?

 

This kind of thinking is outdated.  It assumes the old 1990s environments of one app per database and one database vendor per enterprise. Those days are gone; now, in most situations, you have multiple databases from multiple vendors, each servicing more than one app (and composite app!).

 

So what’s the answer?  Well, in cases like this, I tend to support the idea of “one interface to choke”.  The idea is that there should be one standard SP language, with adapters to convert to multiple database SP source and compiled code – a “many to one to many” model. I send a PL/SQL SP to the converter, it puts it in standard language, then converts it to all “target” languages.  It then sends it down to all target databases, where each compiles it and has it available.

 

Now suppose I invoke an SP originally written in PL/SQL, but it is to be applied to a DB2 database. The converter intercepts the PL/SQL-type invocation and converts it to a DB2-type format, and then (because I have told it to) redirects the invocation to the DB2 database. The database runs the DB2-type SP version, and returns the result to the converter, which performs a reverse conversion to deliver the result back to the invoking app.  
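
A minimal sketch of that “many to one to many” routing idea follows; the dialect translation itself is reduced to a stub, since that is exactly the hard part a real converter (or EII tool) would supply, and all names here are illustrative:

# Sketch: register a stored procedure once, "translate" it to each target
# dialect (stubbed), and route invocations to whichever database currently
# hosts the SP -- so redirecting an SP requires no application change.
def translate(sp_source, target_dialect):
    # Stub: a real converter would parse the SP and regenerate it in the
    # target dialect; here we just tag the source text.
    return f"-- {target_dialect} version of: {sp_source}"

class SPConverter:
    def __init__(self, databases):
        self.databases = databases   # database name -> callable "executor"
        self.routes = {}             # SP name -> database that should run it
        self.translations = {}       # SP name -> {database: translated source}

    def deploy(self, sp_name, sp_source, home_db):
        # In reality, each target database compiles and stores its own copy.
        self.translations[sp_name] = {db: translate(sp_source, db)
                                      for db in self.databases}
        self.routes[sp_name] = home_db

    def redirect(self, sp_name, db_name):
        self.routes[sp_name] = db_name               # no app change needed

    def invoke(self, sp_name, *args):
        target = self.routes[sp_name]
        return self.databases[target](sp_name, *args)

databases = {"oracle": lambda sp, *a: f"oracle ran {sp}{a}",
             "db2":    lambda sp, *a: f"db2 ran {sp}{a}"}
converter = SPConverter(databases)
converter.deploy("credit_check", "BEGIN ... END;", home_db="oracle")
converter.redirect("credit_check", "db2")   # migrate the SP: apps untouched
print(converter.invoke("credit_check", 42))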

 

The cool thing about this is that when you migrate or copy an SP, no app needs to be changed – remember, you simply tell the converter to redirect the invocation. The usual objection is: yes, but the converted SP doesn’t run as well as one written for the target database. True (actually, in the real world, sometimes not true!), but the difference is marginal, and, in case you hadn’t noticed, there’s less pressure these days to get the absolute maximum out of an SP, with so many other performance knobs to twiddle.

 

Where’s the best place to set up such a converter?  Why, an EII (Enterprise Information Integration) tool! It has the architecture – it takes SQL/XQL and translates/adapts for various databases, then reverses the translation for the results. Yes, we have to add stored procedure conversion to its tasks, but at least the architecture is in place, and we can integrate SP invocation with general SQL-handling.

 

And, by the way, we will have a metadata repository of stored procedures, all served up and ready for use in event-driven architectures.

 

What do you think? 

 


Information Infrastructure

This blog post highlights a software company and technology that I view as potentially useful to organizations investing in business intelligence (BI) and analytics in the next few years. Note that, in my opinion, this company and solution are not yet typically “top of the mind” when we talk about BI today.

The Importance of the DataRush Software Technology to BI

The basic idea of DataRush, as I understand it, is to superimpose a “parallel dataflow” model on top of typical data management code, in order to improve the performance (and therefore scalability) of the data-processing operations used by typical large-scale applications. Right now, your processing in general and your BI querying in particular are typically done either by “query optimization” within a “database engine” that takes one stream of “basic” instructions and parallelizes it by figuring out (more or less) how to run each step in parallel on separate chunks of data, or by programmer code that attempts a wide array of strategies for speeding things up further, ranging from “delayed consistency” (in cases where lots of updates are also happening) to optimization for the special case of unstructured data (e.g., files consisting of videos or pictures). “Parallel dataflow” instead requires that particular types of querying/updates be separated into multiple streams depending on the type of operation.  This is done up front, as a specification by a programmer of a dataflow “model” that applies across all applications with the same types of operation.
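
Here is a toy sketch of the dataflow idea: operators declared up front and wired into a pipeline, with the data-parallel stage fanned out over a pool of worker processes. This is my own illustration of the general technique, not DataRush’s actual API:

# Toy sketch of a programmer-specified dataflow: parse -> enrich -> aggregate,
# with the enrich stage fanned out across a small process pool. Illustrative
# only; the operators and data are made up.
from multiprocessing import Pool

def parse(record):                  # operator 1: parse raw input
    region, amount = record.split(",")
    return region, float(amount)

def enrich(row):                    # operator 2: data-parallel transform
    region, amount = row
    return region, amount * 1.07    # e.g., apply an assumed adjustment factor

def aggregate(rows):                # operator 3: reduce to totals by region
    totals = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount
    return totals

if __name__ == "__main__":
    raw = ["east,100", "west,250", "east,75"]
    parsed = [parse(r) for r in raw]
    with Pool(2) as pool:           # the data-parallel stage
        enriched = pool.map(enrich, parsed)
    print(aggregate(enriched))

The point, as with DataRush, is that the parallelism falls out of the dataflow specification rather than out of after-the-fact query optimization.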

 

There is good reason to believe, as I do, that this approach can yield major, ongoing performance improvements in a wide variety of BI areas. In the first place, the approach should deliver performance improvements over and beyond existing engines and special-case solutions, and not force you into supporting yet another alternate technology path. The idea of dataflow is not new, but for various historical reasons this variant has not been the primary focus of today’s database engines, and so the job of retrofitting to support “parallel dataflow” is nowhere near completion in most database engines. That means that, potentially, using “parallel dataflow” on top of these engines can squeeze out additional parallelism, due to the increased number and sophistication of the streams, especially on massively parallel architectures such as today’s multicore-chip server farms.

 

At the same time, the increasing importance of unstructured and semi-structured data has created something of a “green field” in processing this data, especially in areas such as health care’s handling of CAT scans, vendors streaming video over the Web, and everyone querying social-media Big Data. Where existing data-processing techniques are not set in concrete, “parallel dataflow” is very likely to yield outsized performance gains when applied, because it operates at a greater level of abstraction than most database engines and special-case file handlers like Hadoop/MapReduce, and so can be customized more effectively to new data transaction mixes and data types.

 

There is always a caveat in dealing with “new” software technologies that are really an evolution of techniques whose time has come. In this case, the caveat is that, as noted, it is programmers or system designers, rather than the database engine, who need to specify the dataflows, and this dataflow “model” is not a general case covering all data processing. That, in turn, means that at least some programmers need to understand dataflows on an ongoing basis.

 

It is my guess that this is a task that users of “parallel dataflow” and DataRush should embrace. There is a direct analogy here between agile development and DataRush-based development.  The usefulness of agile development lies not only in the immediate speedup of application development, but also in the way that agile development methodologies embed end-user knowledge in the development organization, with all sorts of positive follow-on effects on the organization as a whole.  In the same way, setting up dataflows for a particular application leads typically to a new way of thinking about applications as dataflows, and that improves the quality and often the performance of every application that the organization handles, whether it is optimizable by “parallel dataflow” or not.

 

In other words, in my opinion, developers’ knowledge of data-driven programming is increasingly inadequate in many cases. Automating this programming in the database engine and user interface can only do so much to make up for the lack.  It is more than worth the pain of additional ongoing dataflow programming to reintroduce the skill of programming based on a data “model” to today’s generation of developers.

The Relevance of Pervasive Software to BI

Let me state my conclusion up front:  I view investment in Pervasive Software’s DataRush technology as every bit as safe as investment in an IBM or Oracle product. Why do I say this?

 

Let’s start with Pervasive Software’s “DNA.” Originally, more than 15 years ago, I ran across Pervasive Software as a spin-off of Novell’s PC database business of the 1980s. Over time, as databases almost always do, the solution that has become Pervasive PSQL has provided a stable source of ongoing revenue. More importantly, it has centered Pervasive Software from the very start in Windows, PC-server, and distributed database technologies servicing the SMB/large-enterprise-department market. In other words, Pervasive has demonstrated over 15 years of ups and downs that it is nowhere near failure, and that it knows the world even of the Windows/PC-server side of the Global 10,000 quite well.

 

At the same time, having followed the SMB/departmental market (and especially the database side) for more than 15 years, I am struck by the degree to which, now, software technologies move “bottom-up” from that market to the large enterprise market. Software as a Service, the cloud, and now some of the latest capabilities in self-service and agile BI are all taking their cue from SMB-style operations and technologies. Thus, in the Big Data market in particular and in data management in general, Pervasive is one leading-edge vendor well in tune with an overall movement of SMB-style open-source and other solutions centered around the cloud and Web data. I therefore see the risks of Pervasive Software DataRush vendor lock-in and technology irrelevance over the next few years as minimal. And, of course, participation in the cloud open-source “movement” means crowd-sourced support as effective as IT’s existing open-source software product support.

 

Aren’t there any risks? Well, yes, in my opinion, there are the product risks of any technology, i.e., that technology will evolve to the point where “parallel dataflow” or its equivalent is better integrated into another company’s product.  However, if that happens, dollars to doughnuts there will be a straightforward path from a DataRush dataflow model to that product’s data-processing engine – because the open-source market, at the very least, will provide it.

Potential Uses of DataRush for IT

The obvious immediate uses of DataRush in IT are, as Pervasive Software has pointed out, in Big Data querying and pharmaceutical-company grid searches. In the case of Big Data, DataRush front-ending Hadoop for both public and hybrid clouds is an interesting way both to reduce the number of instances of “eventual consistency” turning into “never consistent” and to increase the depth of analytics, since it allows a greater amount of Big Data to be processed in a given length of time, either on-site at the social-media sites or in-house as part of handling the “fire hose” of just-arrived Big Data from the public cloud.

However, I don’t view these as the most important use cases for IT to keep an eye on. Ideally, IT could infuse the entire Windows/PC-server part of its enterprise architecture with “parallel dataflow” smarts, for a semi-automatic ongoing data-processing performance boost. Failing that, IT should target the Windows/small-server information handling in which increased depth of analytics of near-real-time data is of most importance – e.g., agile BI in general.

These suggestions come with the usual caveats. This technology is more likely than most to require initial experimentation by internal R&D types, and some programmer training, as well. Finding the initial project with the best immediate value-add is probably not going to be as straightforward as in some other cases, as the exact performance benefit of this technology for any kind of database architecture is apparently not yet fully predictable. Effectively, these caveats say: if you don’t have the IT depth or spare cash to experiment, just point the technology at a nagging BI problem and odds are very good that it’ll pay off – but it may not be a home run the first time out.

The Bottom Line for IT Buyers

Really, Pervasive DataRush is one among several performance-enhancing approaches that offer potential additional analytical power in the next few years, and so if IT passes this one up and opts for another, they may well keep pace with the majority of their peers.  However, in an environment that most CEOs seem to agree is unusually uncertain, out-performing the majority, and the IT smarts needed to do so, are increasingly becoming necessary.  At the least, therefore, IT buyers in medium-sized and large organizations should keep Pervasive DataRush ready to insert in appropriate short lists over the next two years. Preferably, they should also start the due diligence now.

The key to getting the maximum out of DataRush, I think, will be to do some hard thinking about how one’s BI and data-processing applications “group” into dataflow types. Pervasive Software, I am sure, can help, but you also need to customize for the particular characteristics of your industry and business. Doing that near the beginning will make extension of DataRush’s performance benefits to all kinds of existing applications far quicker, and thus will deliver far wider-spread analytical depth to your BI.

How will a solution like DataRush impact the organization’s bottom line? The same as any increase in the depth of real-time analysis – and right now that means that, over time, it will improve the bottom line substantially. For that reason, at the very least, Pervasive Software’s DataRush is an Other BI solution that is worth the IT buyer’s attention.

One of the more interesting features of vendors’ recent marketing push to sell BI and analytics is the emphasis on the notion of Big Data, often associated with NoSQL, Google MapReduce, and Apache Hadoop – without a clear explanation of what these are, and where they are useful. It is as if we were back in the days of “checklist marketing”, where the aim of a vendor like IBM or Oracle was to convince you that if competitors’ products didn’t support a long list of features, those competitors would not provide you with the cradle-to-grave support you needed to survive computing’s fast-moving technology. As it turned out, many of those features were unnecessary in the short run, and a waste of money in the long run; remember rules-based AI? Or so-called standard UNIX? The technology in those features was later used quite effectively in other, more valuable pieces of software, but the value-add of the feature itself turned out to be illusory.

 

As it turns out, we are not back in those days, and Big Data via Hadoop and NoSQL does indeed have a part to play in scaling Web data. However, I find that IT buyer misunderstandings of these concepts may indeed lead to much wasted money, not to mention serious downtime.  These misunderstandings stem from a common source: marketing’s failure to explain how Big Data relates to the relational databases that have fueled almost all data analysis and data-management scaling for the last 25 years. It resembles the scene in The Wizard of Oz where a small man, trying to sell himself as a powerful wizard by manipulating stage machines from behind a curtain, becomes so wrapped up in the production that when someone notes “There’s a man behind the curtain” he shouts “Pay no attention to the man behind the curtain!” In this case, marketers are shouting so loudly about the virtues of Big Data, new data management tools, and “NoSQL” that they fail to note the extent to which relational technology is complementary to, necessary to, or simply the basis of, the new features.

 

So here is my understanding of the present state of the art in Big Data, and the ways in which IT buyers should and should not seek to use it as an extension of their present (relational) BI and information management capabilities. As it turns out, when we understand both the relational technology behind the curtain and the ways it has been extended, we can do a much better job of applying Big Data to long-term IT tasks.

 

NoSQL or NoREL?

The best way to understand the place of Hadoop in the computing universe is to view the history of data processing as a constant battle between parallelism and concurrency.  Think of the database as a data store plus a protective layer of software that is constantly being bombarded by transactions – and often, another transaction on a piece of data arrives before the first is finished. To handle all the transactions, databases have two choices at each stage in computation: parallelism, in which two transactions are literally being processed at the same time, and concurrency, in which a processor switches between the two rapidly in the middle of the transaction. Pure parallelism is obviously faster; but to avoid inconsistencies in the results of the transaction, you often need coordinating software, and that coordinating software is hard to operate in parallel, because it involves frequent communication between the parallel “threads” of the two transactions.
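A toy example makes the trade-off concrete. In the sketch below (illustrative only), the lock is the “coordinating software”: it keeps two simultaneous transactions from corrupting a shared balance, but it also forces the two parallel threads to take turns for part of their work.

```python
# Toy illustration of the parallelism-vs-coordination trade-off described above.
# Two "transactions" update the same account: without the lock they could interleave
# and produce an inconsistent balance; with it, the update is safe but the two
# threads spend part of their time serialized -- the cost of coordination.
import threading

balance = 0
lock = threading.Lock()

def transaction(amount, times):
    global balance
    for _ in range(times):
        with lock:                      # the "coordinating software"
            current = balance           # read
            balance = current + amount  # write, guaranteed not to interleave

t1 = threading.Thread(target=transaction, args=(+1, 100_000))
t2 = threading.Thread(target=transaction, args=(+1, 100_000))
t1.start(); t2.start(); t1.join(); t2.join()
print(balance)  # always 200000 with the lock; without it, the result may fall short
```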

 

At a global level (like that of the Internet) the choice now translates into a choice between “distributed” and “scale-up” single-system processing. As it happens, back in graduate school I did a calculation of the relative performance merits of tree networks of microcomputers versus machines with a fixed number of parallel processors, which provides some general rules. There are two key factors that are relevant here:  “data locality” and “number of connections used” – which means that you can get away with parallelism if, say, you can operate on a small chunk of the overall data store on each node, and if you don’t have to coordinate too many nodes at one time.

 

Enter the problems of cost and scalability. The server farms that grew like Topsy during Web 1.0 had hundreds and thousands of PC-like servers that were set up to handle transactions in parallel. This had obvious cost advantages, since PCs were far cheaper; but data locality was a problem in trying to scale, since even when data was partitioned correctly in the beginning between clusters of PCs, over time data copies and data links proliferated, requiring more and more coordination. Meanwhile, in the High Performance Computing (HPC) area, grids of PC-type small machines operating in parallel found that scaling required all sorts of caching and coordination “tricks”, even when, by choosing the transaction type carefully, the user could minimize the need for coordination.

 

 For certain problems, however, relational databases designed for “scale-up” systems and structured data did even less well. For indexing and serving massive amounts of “rich-text” (text plus graphics, audio, and video) data like Facebook pages, for streaming media, and of course for HPC, a relational database would insist on careful consistency between data copies in a distributed configuration, and so could not squeeze the last ounce of parallelism out of these transaction streams. And so, to squeeze costs to a minimum, and to maximize the parallelism of these types of transactions, Google, the open source movement, and various others turned to MapReduce, Hadoop, and various other non-relational approaches.

 

These efforts combined open-source software, typically related to Apache, large amounts of small or PC-type servers, and a loosening of consistency constraints on the distributed transactions – an approach called eventual consistency. The basic idea was to minimize coordination by identifying types of transactions where it didn’t matter if some users got “old” rather than the latest data, or it didn’t matter if some users got an answer but others didn’t. As a communication from Pervasive Software about an upcoming conference shows, a study of one implementation finds 60 instances of unexpected unavailability “interruptions” in 500 days – certainly not up to the standards of the typical business-critical operational database, but also not an overriding concern to users.
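To see what “eventual consistency” trades away, consider this toy model (illustrative only): the write lands on one replica immediately and reaches the other only after a replication delay, so a read in between returns stale data – acceptable for many Web workloads, not for a bank balance.

```python
# Toy model of "eventual consistency": a write lands on one replica immediately
# and is propagated to the others later, so a read in the meantime can return
# stale data.
import time, threading

replicas = [{"headline": "old"}, {"headline": "old"}]

def write(key, value):
    replicas[0][key] = value                      # primary replica updated at once
    def propagate():
        time.sleep(0.5)                           # replication lag
        replicas[1][key] = value
    threading.Thread(target=propagate).start()

def read(replica_index, key):
    return replicas[replica_index][key]

write("headline", "new")
print(read(1, "headline"))   # likely still "old": the reader sees stale data
time.sleep(1)
print(read(1, "headline"))   # "new": the replicas have converged
```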

 

The eventual consistency part of this overall effort has sometimes been called NoSQL. However, Wikipedia notes that in fact it might correctly be called NoREL, meaning “for situations where relational is not appropriate.” In other words, Hadoop and the like by no means exclude all relational technology, and many of them concede that relational “scale-up” databases are more appropriate in some cases even within the broad overall category of Big Data (i.e., rich-text Web data and HPC data). And, indeed, some implementations provide extended-SQL or SQL-like interfaces to these non-relational databases.

 

Where Are the Boundaries?

The most popular “spearhead” of Big Data, right now, appears to be Hadoop. As noted, it provides a distributed file system “veneer” to MapReduce for data-intensive applications (including a MapReduce engine that divides nodes into a master coordinator and slave task executors, and the Hadoop Distributed File System [HDFS], which spreads file data across clusters of machines), and therefore allows parallel scaling of transactions against rich-text data such as some social-media data. It operates by dividing a “task” into “sub-tasks” that it hands out redundantly to back-end servers, which all operate in parallel (conceptually, at least) on a common data store.
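The classic illustration of this divide-and-recombine pattern is a word count. The sketch below shows the shape of the idea in plain Python – not Hadoop’s actual Java API – with each map call standing in for a sub-task that a slave server could run independently.

```python
# Conceptual map/reduce word count -- the shape of a Hadoop job, not Hadoop's API.
# The "task" (count words) is split into "sub-tasks": map each chunk independently
# (parallelizable across servers), shuffle by key, then reduce each key's values.
from collections import defaultdict
from itertools import chain

def map_phase(chunk):                 # runs independently on each slave node
    return [(word, 1) for word in chunk.split()]

def shuffle(mapped_pairs):            # group intermediate results by key
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):        # combine each key's partial results
    return key, sum(values)

chunks = ["big data big", "data eats the world", "big world"]
mapped = chain.from_iterable(map_phase(c) for c in chunks)
print(dict(reduce_phase(k, v) for k, v in shuffle(mapped).items()))
# {'big': 3, 'data': 2, 'eats': 1, 'the': 1, 'world': 2}
```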

 

As it turns out, there are also limits even on Hadoop’s eventual-consistency type of parallelism. In particular, it now appears that the metadata that supports recombination of the results of “sub-tasks” must itself be “federated” across multiple nodes, for both availability and scalability purposes. And Pervasive Software notes that its own investigations show that using multiple-core “scale-up” nodes for the sub-tasks improves performance compared to proliferating yet more distributed single-processor PC servers. In other words, the most scalable system, even in Big Data territory, is one that combines strict and eventual consistency, parallelism and concurrency, distributed and scale-up single-system architectures, and NoSQL and relational technology.

 

Solutions like Hadoop are effectively out there “in the cloud” and therefore outside the enterprise’s data centers. Thus, there are fixed and probably permanent physical and organizational boundaries between IT’s data stores and those serviced by Hadoop. Moreover, it should be apparent from the above that existing BI and analytics systems will not suddenly convert to Hadoop files and access mechanisms, nor will “mini-Hadoops” suddenly spring up inside the corporate firewall and create havoc with enterprise data governance. The use cases are too different.

 The remaining boundaries – the ones that should matter to IT buyers – are those between existing relational BI and analytics databases and data stores and Hadoop’s file system and files. And here is where “eventual consistency” really matters. The enterprise cannot treat this data as just another BI data source. It differs fundamentally in that the enterprise can be far less sure that the data is up to date – or even available at all times. So scheduled reporting or business-critical computing based on this data is much more difficult to pull off.

On the other hand, this is data that would otherwise be unavailable – and because of the low-cost approach to building the solution, should be exceptionally low-cost to access. However, pointing the raw data at existing BI tools is like pointing a fire hose at your mouth. The savvy IT organization needs to have plans in place to filter the data before it begins to access it.
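What does “filtering before access” look like in practice? Something like the toy sketch below (the field names and the relevance rule are my own assumptions): boil the raw feed down to a small aggregate before any BI tool ever sees it.

```python
# Sketch of "filtering the fire hose": reduce raw Web/Big Data records to the
# small, relevant aggregate a BI tool can digest. Field names are hypothetical.
from collections import Counter

def filter_and_aggregate(raw_records, brand):
    mentions = Counter()
    for rec in raw_records:
        text = rec.get("text", "")
        if brand.lower() in text.lower():            # keep only records that mention the brand
            mentions[rec.get("region", "unknown")] += 1
    return mentions                                  # tiny summary, safe to load into BI

raw = [
    {"text": "Love the new Acme widget!", "region": "NA"},
    {"text": "Totally unrelated chatter", "region": "EU"},
    {"text": "acme widget broke on day two", "region": "NA"},
]
print(filter_and_aggregate(raw, "Acme"))   # Counter({'NA': 2})
```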

 

The Long-Run Bottom Line

The impression given by marketers is that Hadoop and its ilk are required for Big Data, where Big Data is more broadly defined as most Web-based semi-structured and unstructured data. If that is your impression, I believe it to be untrue. Instead, handling Big Data is likely to require a careful mix of relational and non-relational, data-center and extra-enterprise BI, with relational in-enterprise BI taking the lead role. And as the limits to parallel scalability of Hadoop and the like become more evident, the use of SQL-like interfaces and relational databases within Big Data use cases will become more frequent, not less.

 Therefore, I believe that Hadoop and its brand of Big Data will always remain a useful but not business-critical adjunct to an overall BI and information management strategy. Instead, users should anticipate that it will take its place alongside relational access to other types of Big Data, and that the key to IT success in Big Data BI will be in intermixing the two in the proper proportions, and with the proper security mechanisms. Hadoop, MapReduce, NoSQL, and Big Data, they’re all useful – but only if you pay attention to the relational technology behind the curtain.

On Monday, Pentaho, an open source BI vendor, announced Pentaho BI 4.0, the new release of its “agile BI” tool. To understand the power and usefulness of Pentaho, you should understand the fundamental ways in which the markets that we loosely call SMB have changed over the last 10 years.

 

First, a review. Until the early 1990s, it was a truism that computer companies in the long run would need to sell to central IT at large enterprises, eventually – else the urge of CIOs to standardize on one software and hardware vendor would favor larger players with existing toeholds in central IT. This was particularly true in databases, where Oracle sought to recreate the “nobody ever got fired for buying IBM” hardware mentality of the 1970s in software stacks. It was not until the mid-1990s that companies such as Progress Software and Sybase (with its iAnywhere line) showed that databases delivering near-lights-out administration could survive the Oracle onslaught. Moreover, companies like Microsoft showed that software aimed at the SMB could over time accumulate and force its way into central IT – not only Windows, Word, and Excel, but also SQL Server.

 

As companies such as IBM discovered with the bursting of the Internet bubble, this “SMB” market was surprisingly large. Even better, it was counter-cyclical: when large enterprises whose IT was a major part of corporate spend cut IT budgets dramatically, SMBs kept right on paying the yearly license fees for the apps on which they ran, apps that in turn hid the brand of the underlying database or app server. Above all, it was not driven by brand or standards-based spending, nor even solely by economies of scale in cost.

 

In fact, the SMB buyer was and is distinctly and permanently different from the large-enterprise IT buyer.  Concern for costs may be heightened, yes; but also the need for simplified user interfaces and administration that a non-techie can handle. A database like Pervasive could be run by the executive at a car dealership, who would simply press a button to run backup on his or her way out on the weekend, or not even that. The ability to fine-tune for maximum performance is far less important than the avoidance of constant parameter tuning. The ability to cut hardware costs by placing apps in a central location matters much less than having desktop storage to work on when the server goes down.

 

But in the early 2000s, just as larger vendors were beginning to wake up to the potential of this SMB market, a new breed of SMB emerged. This Web-focused SMB was and is tech-savvy, because using the Web more effectively is how it makes its money.  Therefore, the old approach of Microsoft and Sybase when they were wannabes – provide crude APIs and let the customer do the rest – was exactly what this SMB wanted. And, again, this SMB was not just the smaller-sized firm, but also the skunk works and innovation center of the larger enterprise.

 

It is this new type of SMB that is the sweet spot of open source software in general, and open source BI in particular. Open source has created a massive “movement” of external programmers that have moved steadily up the software stack from Linux to BI, and in the process created new kludges that turn out to be surprisingly scalable: MapReduce, Hadoop, NoSQL, and Pentaho being only the latest examples. The new SMB is a heavy user of open source software in general, because the new open source software costs nothing, fits the skills and Web needs of the SMB, and allows immediate implementation of crude solutions plus scalability supplied by the evolution of the software itself. Within a very few years, many users, rightly or wrongly, were swearing that MySQL was outscaling Oracle.

 

Translating Pentaho BI 4.0

 

The new features in Pentaho BI can be simply put, because the details show that they deliver what they promise:

 

 

  • Simple, powerful interactive reporting – which apparently tends to be used more for ad-hoc reporting than traditional enterprise reporting, but can do either;
  • A more “usable” and customizable user interface with the usual Web “sizzle”;
  • Data discovery “exploration” enhancements such as new charts for better data visualization.

 

 

These sit atop a BI tool that distinguishes itself by “data integration” that handles an exceptional number of input data warehouses and data stores for inhaling to a temporary “data mart” for each use case.

 

With these features, Pentaho BI, I believe, is valuable especially to the new type of SMB. For the content-free buzz word “agile BI”, read “it lets your techies attach quickly to your existing databases as well as Big Data out there on the Web, and then makes it easy for you to figure out how to dig deeper as a technically-minded user who is not a data-mining expert.” Above all, Pentaho has the usual open source model, so it’s making its money by services and support – allowing the new SMB to decide exactly how much to spend. Note also Pentaho’s alliance not merely with the usual cloud open source suspects like Red Hat but also with database vendors with strong BI-performance technology such as Vertica.

 

The BI Bottom Line

 

No BI vendor is guaranteed a leadership position in cloud BI these days – the field is moving that fast. However, Pentaho is clearly well suited to the new SMB, and also understands the importance of user interfaces, simplicity for the administrator, ad hoc querying and reporting, and rapid implementation to both new and old SMBs.

 

Pentaho therefore deserves a closer look by new-SMB IT buyers, either as a cloud supplement to existing BI or as the core of low-cost, fast-growing Web-focused BI.  And, remember, these have their counterparts in large enterprises – so those should take a look as well.  Sooner than I expected, open source BI is proving its worth.

And so, another founder of the computing industry as we know it today officially bites the dust. A few days ago, Attachmate announced that it was acquiring Novell – and the biggest of the PC LAN companies will be no more.

I have more fond memories of Novell than I do of Progress, Sun, or any of the other companies that have seen their luster fade over the last decade. Maybe it was the original facility in Provo, with its Star Trek curving corridors and Moby Jack as haute cuisine, just down the mountain from Robert Redford’s Sundance. Maybe it was the way that when they sent us a copy of NetWare, they wrapped it in peanuts instead of bubble wrap, giving us a great wiring party. Or maybe it was Ray Noorda himself, with his nicknames (Pearly Gates and Ballmer the Embalmer) and his insights (I give him credit for the notion of coopetition).

But if Novell were just quirky memories, it wouldn’t be worth the reminiscence. I firmly believe that Novell, more than any other company, ushered out the era of IBM and the Seven Dwarves, and ushered in the world of the PC and the Internet.

Everyone has his or her version of those days. I was at Prime at the time, and there was a hot competition going on between IBM at the high end and DEC, Wang, Data General, and Prime at the “low end”. Even with the advent of the PC, it looked as if IBM or DEC might dominate the new form factor; Compaq was not a real competitor until the late 1980s.

And then along came the PC LAN companies: 3Com, Banyan, Novell. While IBM and the rest focused on high end sales, and Sun and Apollo locked up workstations, the minicomputer makers’ low ends were being stealthily undercut by PC LANs, and especially from the likes of Novell. The logic was simple: the local dentist, realtor, or retailer bought a PC for personal use, brought it to the business, and then realized that it was child’s play – and less than $1K – to buy LAN software to hook the PCs in the office together. It meant incredibly cheap scalability, and when I was at Prime it gutted the low end of our business, squeezing the mini makers from above (IBM) and below (Novell).

There was never a time when Novell could breathe easily. At first, there were Banyan and 3Com; later, the mini makers tried their hand at PC LANs; then came the Microsoft partnership with IBM to push OS/2 LAN Manager; and finally, in the early 1990s, Microsoft took dead aim at Novell, and finally managed to knock them off their perch.  However, until the end, NetWare had two simple ideas to differentiate it, well executed by the “Magic 8” (the programmers doing fundamental NetWare design, including above all Drew Major):  the idea that to every client PC, the NetWare file system should look like just another drive, and the idea that frequently accessed files should be stored in main memory on the server PC, so that, as Novell boasted, you could get a file faster from NetWare than you could from your own PC’s hard drive.

Until the mid 1990s, analysts embarrassed themselves by predicting rapid loss of market share to the latest competitor. Every year, surveys showed that purchasing managers were planning to replace their NetWare with LAN Server, with LAN Manager, with VINES; and at the end of the year, the surveys would show that NetWare had increased its hold, with market share in the high 70s. Why? Because what drove the market was purchases below the purchasing manager’s radar screen (less than the $10K that departments were supposed to report upstairs). One DEC employee told me an illustrative story: while DEC was trying to mandate in-house purchase of its PC LAN software, the techies at DEC were expanding their use of NetWare by leaps and bounds, avoiding official notice by “tunneling” NetWare communications as part of the regular DEC network. The powers that be finally noticed what was going on because the tunneled communications became the bulk of all communications across the DEC network.

In the early 1990s, Microsoft finally figured out what to do about this. Shortly after casting off OS/2 and LAN Manager, Microsoft developed its own, even more basic, PC LAN software that at first simply allowed sharing across a couple of “peer” PCs. Using this as a beachhead, Microsoft steadily developed Windows’ LAN capabilities, entirely wrapped in the Windows PC OS, so that it cost practically nothing to buy both the PC and the LAN. This placed Novell in an untenable position, because what was now driving the market was applications developed on top of the PC and LAN OS, and NetWare had never paid sufficient attention to LAN application development; it was easy for Microsoft to turn Windows apps into Windows plus LAN apps, while it was very hard for Novell to do so.

Nevertheless, Novell’s core market made do with third-party Windows apps that could also run on NetWare, until the final phase of the tragedy: Windows 2000. You see, PC LANs always had the limitation that they were local. The only way that PC LAN OSs could overcome the limitations of geography was to provide real-time updates to resource and user data stored in multiple, geographically separate “directories”: in effect, to carry out scalable multi-copy updates on data. Banyan had a pretty good solution for this, but Microsoft created an even better one in Windows 2000, well before Novell’s solution; and after that, as the world shifted its attention to the Internet, Novell was not even near anyone’s short list for distributed computing.

Over the last decade, Novell has not lacked good solutions; its own directory product, administrative and security software, virtualization software, and most recently what I view as a very nice approach to porting Windows apps to Linux and mainframes. Still, a succession of CEOs failed to turn around the company, and, in the ultimate irony, Attachmate, with strengths and a long history itself in remote PC software, has decided to take on Novell’s assets.

I think that the best summing up of Novell’s ultimate strategic mistake was the remark of one of its CEOs shortly after assuming command: “Everyone thinks about Microsoft as the biggest fish in the ocean. It is the ocean.” In other words, Novell would have done better by treating Microsoft as the vendor of the environment that Novell had to support, and aiming to service that market, rather than trying to out-feature Microsoft. But everyone else made that mistake; why should Novell have been any different?

We are left not only with Novell’s past contributions to computing, but also with the contributions of its alumni. Some fostered the SMB market with products like the Pervasive database; some were drivers of the UNIX standards push and later the TP monitors that led to today’s app servers. One created the Burton Group, a more technically-oriented analyst firm that permanently improved the quality of the analyst industry.

And we are also left with an enthusiasm that could not be contained by traditional firms, and that moved on to UNIX, to the Web, to open source. The one time, in the late 1980s, I went to Novell’s user group meeting, it was clearly a bit different. After one of the presentations, a LAN servicer rose to ask a question. “So-and-so, LANs Are My Life”, he identified himself. That was the name of his firm: LANs Are My Life, Inc. It’s not a bad epitaph for a computer company: we made a product so good that for some people – not just Novell employees – it was our life. Rest in peace, Novell.

There are certain vendors of infrastructure software who deliver long-term value-add to their customers by walking the narrow line between the innovative and the proprietary exceptionally well. Over its long history, InterSystems has parlayed an innovative database that could be fully integrated into existing data centers into an innovative middleware suite that could be fully integrated into existing data architectures, and then into innovative health care applications that could be fully integrated into existing health care systems and networks, delivering value-add at every step. Now, InterSystems has announced a new generation of its database/development platform, Caché 2010, with Caché Database Mirroring and Caché eXtreme for Java. Surprise, surprise: the new features are innovative, integrated out of the box with existing IT strategies and systems, and very useful.

InterSystems has long been known as the vendor of Caché, a “post-relational” object database that has proven its E-business prowess in real-world business-critical situations such as health care applications.  Caché combines object, multidimensional, and SQL technologies to handle content-heavy OLTP, decision support, and “mixed” transaction streams effectively.  More recently, InterSystems has also become known as the supplier of Ensemble, a Caché-based integration and development platform that allows access to a wide array of data types, plus data transmission from application to application, especially as part of business process integration.  InterSystems has a position of strength in the health care market, with widespread Caché use by best-practices hospitals and labs.

Caché Database Mirroring

Due to a recent TCO study, I have become aware of just how expensive maintaining two or three redundant data centers for full global active/active rapid recovery can be. As I understand it, Caché Database Mirroring reduces these costs by increasing the flexibility of replication of Caché data. Specifically, Caché Database Mirroring allows “warm” (not completely up to date) mirroring in certain circumstances, and “logical” (which some might call “virtual”) replication that does not have to be to a physically separate or remote system. The resulting decrease in load on both ends of a mirroring process, as well as the automation of Caché Database Mirroring deployment and operation, lowers contention for shared resources by the replication process and allows use of inexpensive PC servers and the like instead of expensive, dedicated Storage Area Network software and systems.

Caché eXtreme for Java

As complex event processing (CEP) use increases, it has become clear that “contextual” data that can be accessed in “near real time” is needed to scale these solutions. Caché users have found it particularly effective in accessing the object-type and XML data that CEP engines typically process, thanks to its object support and strong performance; but the lingua franca of such engines is often Java, for better or worse. Caché eXtreme for Java provides direct access to Caché operations and data stores from Java, enabling this large class of developers to rapidly develop more scalable CEP applications.

Conclusions

Where similar infrastructure software companies have faltered or been acquired in the recent deep recession, InterSystems appears to be continuing to strike out in new directions. Some of that may come from the relative resilience of the health care market that was once its historical strength. However, it seems clear that much of its success comes from continuing to deliver “innovation with a difference” that fits with customer environments and also adds immediately useful features improving the customer’s cost effectiveness and flexibility.

Also notable is that these improvements involve both new and old products. InterSystems has been smart not to treat Caché like a cash cow as the market’s focus switched to Internet middleware these last few years – other vendors seem to have fallen into that trap, and may be paying the price.

 

The new announcements, as ever, make InterSystems worth the IT buyer’s close attention, especially in such areas as CEP and development.

I was reading a Business Intelligence (BI) white paper feed – I can’t remember which – when I happened across one whose title was, more or less, “Data Visualization: Is This the Way To Attract the Common End User?” And I thought, boy, here we go again.

 

You see, the idea that just a little better user interface will finally get Joe and Jane (no, not you, Mr. Clabby) to use databases dates back at least 26 years. I know, because I had an argument with my boss at CCA, Dan Ries iirc (a very smart fellow), about it. He was sure that with a fill-out-the-form approach, any line employee could do his or her own ad-hoc queries and reporting. Based on my own experiences as a naïve end user, I felt we were very far from being able to give the average end user an interface that he or she would be able or motivated to use. Here we are, 26-plus years later, and all through those years, someone would pipe up and say, in the immortal words of Bullwinkle, “This time for sure!” And every time, it hasn’t happened.

 

I divide the blame for this equally between vendor marketing and IT buying. Database and BI vendors, first and foremost, look to extend the ability of specific targets within the business to gain insights. That requires ever more sophisticated statistical and relationship-identifying tools. The vendor looking to design a “common-person” user interface retrofits the interface to these tools. In other words, the vendor acts like it is selling to a business-expert, not a consumer, market.

 

Meanwhile, IT buyers looking to justify the expense of BI try to extend its use to upper-level executives and business processes, not demand that it extend the interface approach of popular consumer apps to using data, or that it give the line supervisor who uses it at home a leg up at work. And yet, that is precisely how Word, Excel, maybe PowerPoint, and Google search wound up being far more frequently used than SQL or OLAP.

 

I have been saying things like this for the last 26 years, and somehow, the problem never gets solved. At this point, I am convinced that no one is really listening. So, for my own amusement, I give you three ideas – ideas proven in the real world, but never implemented in a vendor product – that if I were a user I would really like, and that I think would come as close as anything can to achieving “BI for the masses.”

 Idea Number 1: Google Exploratory Data Analysis 

I’m reading through someone’s blog when they mention “graphical analysis.” What the hey? There’s a pointer to another blog, where they make a lot of unproven assertions about graphical analysis. Time for Google: a search on graphical analysis results in a lot of extraneous stuff, some of it very interesting, plus Wikipedia and a vendor who is selling this stuff. Wikipedia is off-topic, but carefully reading the article shows that there are a couple of articles that might be on point. One of them gives me some of the social-networking theory behind graphical analysis, but not the products or the market. Back to Google, forward to a couple of analyst market figures. They sound iffy, so I go to a vendor site and get their financials to cross-check. Not much in there, but enough that I can guesstimate. Back to Google, change the search to “graphical BI.” Bingo, another vendor with much more market information and ways to cross-check the first vendor’s claims. Which products have been left out? An analyst report lists the two vendors, but in a different market, and also lists their competitors. Let’s take a sample competitor: what’s their response to “graphical analysis” or graphical BI? Nothing, but they seem to feel that statistical analysis is their best competitive weapon. Does statistical analysis cover graphical analysis? The names SAS and SPSS keep coming up in my Google searches. It doesn’t seem as if their user manuals even mention the word “graph”. What are the potential use cases? Computation of shortest path. Well, only if you’re driving somewhere. Still, if it’s made easy for me … Is this really easier than Mapquest? Let’s try a multi-step trip. Oog. It definitely could be easier than Mapquest. Can I try out this product? All right, I’ve got the free trial version loaded, let’s try the multi-step trip. You know, this could do better for a sales trip than my company’s path optimization stuff, because I can tweak it for my personal needs. Combine with Google Maps, stir … wouldn’t it be nice if there was a Wikimaps, so that people could warn us about all these little construction obstructions and missing signs? Anyway, I’ve just given myself an extra half-hour on the trip to spend on one more call, without having to clear it.

 

Two points about this. First, Google is superb at free-association exploratory analysis of documents. You search for something, you alter the search because of facts you’ve found, you use the results to find other useful facts about it, you change the topic of the search to cross-check, you dig down into specific examples to verify, you even go completely off-topic and then come back. The result is far richer, far more useful to the “common end user” and his or her organization, and far more fun than just doing a query on graphical data in the company data warehouse.

 

Second, Google is lousy at exploratory data analysis, because it is “data dumb”: It can find metadata and individual pieces of data, but it can’t detect patterns in the data, so you have to do it yourself. If you are searching for “graphical analysis” across vendor web sites, Google can’t figure out that it would be nice to know that 9 of 10 vendors in the market don’t mention “graph” on their web sites, or that no vendors offer free trial downloads.

 

The answer to this seems straightforward enough: add “guess-type” data analysis capabilities to Google. And, by the way, if you’re at work, make the first port of call your company’s data-warehouse data store, full of data you can’t get anywhere else. You’re looking for the low-priced product for graphical analysis? Hmm, your company offers three types through a deal with the vendor, but none is the low-cost one. I wonder what effect that has had on sales? Your company did a recent price cut; sure enough, it hasn’t had a big effect. Except in China: does that have to do with the recent exchange rate manipulations, and the fact that you sell via a Chinese firm instead of on your own? It might indeed, since Google tells you the manipulations started 3 weeks ago, just when the price cut happened.

 

You get the idea? Note that the search/analysis engine guessed that you wanted your company’s data called out, and that you wanted sales broken down by geography and in a monthly time series. Moreover, this is exploratory data analysis, which means that you get to see both the summary report/statistics and individual pieces of raw data – to see if your theories about what’s going on make sense.

 

In Google exploratory data analysis, the search engine and your exploration drive the data analysis; the tools available don’t. It’s a fundamental mind shift, and one that explains why Excel became popular and in-house on-demand reporting schemes didn’t, or why Google search was accepted and SQL wasn’t. One’s about the features; the other’s about the consumer’s needs.

 

Oh, by the way, once this takes off, you can start using information about user searches to drive adding really useful data to the data warehouse.

 Idea Number 2: The Do The Right Thing Key 

Back in the late 1980s, I loved the idea behind the Spike Lee movie title so much that I designed an email system around it.  Here’s how it works:

 

You know how when you are doing a “replace all” in Word, you have to specify an exact character string, and then Word mindlessly replaces all occurrences, even if some should be capitalized and some not, or even if you just want whole words to be changed and not character strings within words? Well, think about it. If you type a whole word, 90% of the time you want only words to be replaced, and capitals to be added at the start of sentences. If you type a string that is only part of a word, 90% of the time you want all occurrences of that string replaced, and capitals when and only when that string occurs at the start of a sentence. So take that Word “replace” window, and add a Do the Right Thing key (really, a point and click option) at the end. If it’s not right, the user can just Undo and take the long route.
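If you want to see how little machinery the idea actually requires, here is a rough sketch of that replace heuristic in Python – my own guess at the rule, not anything Word implements:

```python
# Sketch of the "Do The Right Thing" replace heuristic described above:
# if the search string is a whole word, replace whole words only and keep
# sentence-initial capitalization; otherwise replace every occurrence of the string.
import re

def do_the_right_thing_replace(text, find, replace):
    if re.fullmatch(r"\w+", find):                       # a whole word was typed
        def fix(match):
            new = replace
            if match.group(0)[0].isupper():              # preserve capitals, e.g. at sentence starts
                new = new[0].upper() + new[1:]
            return new
        return re.sub(r"\b%s\b" % re.escape(find), fix, text, flags=re.IGNORECASE)
    return text.replace(find, replace)                   # partial string: plain global replace

print(do_the_right_thing_replace("Cat lovers love a cat. Concatenate?", "cat", "dog"))
# 'Dog lovers love a dog. Concatenate?'  -- "Concatenate" untouched, capitals kept
```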

 

The Do The Right Thing key is a macro; but it’s a smart macro. You don’t need to create it, and it makes some reasonable guesses about what you want to do, rather than you having to specify what it should do exactly. I found when I designed my email system that every menu, and every submenu or screen, would benefit from having a Do The Right Thing key. It’s that powerful an idea.

 

How does that apply to BI?  Suppose you are trying to track down a sudden drop in sales one week in North America. You could dive down, layer by layer, until you found that stores in Manitoba all saw a big drop that week. Or, you could press the Break in the Pattern key, which would round up all breaks in patterns of sales, and dig down not only to Manitoba but also to big offsetting changes in sales in Vancouver and Toronto, with appropriate highlighting. 9 times out of ten, that will be the right information, and the other time, you’ll find out some other information that may prove to be just as valuable. Now do the same type of thing for every querying or reporting screen …
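For concreteness, here is one rough guess (mine, not any vendor’s) at what a Break in the Pattern key might compute under the hood: flag every region whose latest week-over-week change falls far outside that region’s own history. The data and the two-standard-deviation threshold are purely illustrative.

```python
# Sketch of what a "Break in the Pattern" key might compute: flag every region
# whose latest week-over-week sales change is far outside that region's norm.
from statistics import mean, stdev

weekly_sales = {                       # region -> weekly sales history (illustrative)
    "Manitoba":  [100, 102, 98, 101, 55],
    "Vancouver": [200, 198, 205, 201, 240],
    "Ontario":   [300, 305, 298, 302, 301],
}

def breaks_in_pattern(history, threshold=2.0):
    flagged = {}
    for region, series in history.items():
        changes = [b - a for a, b in zip(series, series[1:])]
        typical, spread = mean(changes[:-1]), stdev(changes[:-1]) or 1.0
        if abs(changes[-1] - typical) > threshold * spread:
            flagged[region] = changes[-1]
    return flagged

print(breaks_in_pattern(weekly_sales))   # {'Manitoba': -46, 'Vancouver': 39}
```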

 

The idea behind the Do The Right Thing key is actually very similar to that behind Google Exploratory Data Analysis. In both cases, you are really considering what the end user would probably want to do first, and only then finding a BI tool that will do that. The Do The Right Thing key is a bit more buttoned-up: you’re probably carrying out a task that the business wants you to do. Still, it’s way better than “do it this way or else.”

 Idea Number 3: Build Your Own Data Store 

Back in the days before Microsoft Access, there was a funny little database company called FileMaker. It had the odd idea that people who wanted to create their own contact lists, their own lists of the stocks they owned and their values, their own grades or assets and expenses, should be able to do so, in just the format they wanted. As Oracle steadily cut away at other competitors in the top end of the database market, FileMaker kept gaining individual customers who would bring FileMaker into their local offices and use it for little projects. To this day, it is still pretty much unique in its ability to let users quickly whip up small-sized, custom data stores to drive, say, class registrations at a college.

 

To my mind, FileMaker never quite took the idea far enough. You see, FileMaker was competing against folks like Borland in the days when the cutting edge was allowing two-way links between, let’s say, students and teachers (a student has multiple teachers, and teachers have multiple students). But what people really want, often, is “serial hierarchy”. You start out with a list of all your teachers; the student is the top level, the teachers and class location/time/topic the next level. But you next want to see if there’s an alternate class; now the topic is the top level, the time is at the next level, and the students (you, and whether the class is full) are at a third level. If the number of data items is small enough not to require aggregation, statistics, and so on, you can eyeball the raw data to get your answers. And you don’t need to learn a new application (Outlook, Microsoft Money, Excel) for each new personal database need.
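To show how little is really being asked for, here is a toy sketch of the “serial hierarchy” idea: the same flat registration records, regrouped on demand under whichever top level you care about at the moment. The record layout is my own invention.

```python
# Sketch of "serial hierarchy": the same flat class-registration records,
# regrouped on demand under whichever top level the user currently cares about.
from collections import defaultdict

classes = [
    {"student": "Pat", "teacher": "Ms. Lee", "topic": "Biology", "time": "9am"},
    {"student": "Pat", "teacher": "Mr. Cho", "topic": "History", "time": "11am"},
    {"student": "Sam", "teacher": "Ms. Lee", "topic": "Biology", "time": "9am"},
]

def group_by(records, *levels):
    """Nest records by the given keys, in order -- today's top level, tomorrow's detail."""
    if not levels:
        return records
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec[levels[0]]].append(rec)
    return {key: group_by(recs, *levels[1:]) for key, recs in grouped.items()}

# View 1: I'm the student -- who teaches me, and when?
print(group_by(classes, "student", "teacher"))
# View 2: is there an alternate Biology section -- topic first, then time, then students?
print(group_by(classes, "topic", "time"))
```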

 

The reason this fits BI is that, often, the next step after getting your personal answers is to merge them with company data. You’ve figured out your budget, now do “what if”: does this fit with the company budget? You’ve identified your own sales targets, so how do these match up against those supplied by the company? You download company data into your own personal workspace, and use your own simple analysis tools to see how your plans mesh with the company’s. You only get as complex a user interface as you need.

 Conclusions 

I hope you enjoyed these ideas, because, dollars to doughnuts, they’ll never happen. It’s been 25 years, and the crippled desktop/folder metaphor and its slightly less crippled cousin, the document/link browser metaphor, still dominate user interfaces. It’s been fifteen years, and only now is Composite Software’s Robert Eve getting marketing traction by pointing out that trying to put all the company’s data in a data warehouse is a fool’s errand. It’s been almost 35 years, and still no one seems to have noticed that seeing a full page of a document you are composing on a screen makes your writing better. At least, after 20 years, Google Gmail finally showed that it was a good idea to group a message and its replies. What a revelation!

 

No, what users should really be wary of is vendors who claim they do indeed do any of the ideas listed above. This is a bit like vendors claiming that requirements management software is an agile development tool. No; it’s a retrofitted, slightly less sclerotic tool instead of something designed from the ground up to serve the developer, not the process.

 

But if you dig down, and the vendor really does walk the walk, grab the BI tool. And then let me know the millennium has finally arrived. Preferably not after another 26 years.

ABI: WTF Or WTFN?
09/05/2010

Two weeks ago, Merv Adrian’s blog was filled with analysis of the recent TDWI conference, which had as a theme “Agile Business Intelligence.” Merv’s initial reaction was the same as mine: what does BI have to do with the agile development movement?[1]

[1] In the title, ABI is short for Agile Business Intelligence, and WTF, as every fan of the TV show Battlestar Galactica knows, is short for What The Frak, while WTFN stands for Why The Frak Not.

My confusion deepened as I tracked down the BI companies that he cited:  it appeared that only one, Endeca, was marketing its solution as “agile BI” (WhereScape simply notes that its data-warehouse-building solution is increasing its built-in support for agile development practices). Endeca’s definition of agile BI appears from its web site to boil down to: BI is agile if it speeds ad-hoc querying, because that allows changes in pre-decision analysis that lead to better and quicker business decisions. It isn’t intuitively obvious that such a definition corresponds to development agility as defined by the Agile Manifesto or to the various definitions of business agility that have recently surfaced.

Definitions really matter in this case, because, as I have argued in previous articles, improved agility (using the correct definition) has a permanent positive impact on the top line, the bottom line, and business risk. Data from my Aberdeen Group 2009 survey of local and global firms of a range of sizes and verticals suggests that increased agility decreases costs in the long term, on average, by at least 10% below their previous trend line, increases revenues by at least a similar 10% above trend, and decreases the risk of negative surprises by at least 5%. And, according to the same study, the only business/IT process being tried that clearly increased agility and produced such effects was agile development as defined by the Manifesto (“hybrid” open-source development and collaborative development may also improve agility, to a much smaller extent).

On further reflection, I have decided that agile BI is indeed a step forward in overall business agility. However, it is a very small step. It is quite possible for a smart organization to take what’s out there, combine it in new ways, and make some significant gains in business agility. But it’s not easy to do, and right now, they won’t get much help from any single vendor.

Key Points in the Definition of Agility

I define agility as the ability of an organization to handle events or implement strategies that change the functioning of key organizational processes. Agility can be further categorized as proactive and reactive; anticipated and unanticipated; internally or externally caused; new-product, operational, and disaster. That is, improved agility is improvement in one or all of these areas.

Initial data suggest that improvements in new-product development (proactive, unanticipated, externally caused) have the greatest impact, since they have spill-over effects on the other categories (anticipated, internally caused, operational, and disaster). However, improvements in operational and disaster agility can also deliver significant bottom-line long-term increases. Improved agility can be measured and detected from its effects on organizational speed, effectiveness, and “follow-on” metrics (TCO, ROI, customer satisfaction, business risk).

The implications for Agile BI are:

  • Unless improved BI agility improves new-product development, its business impact is smaller.
  • Increased speed (faster reporting of results) without increased effectiveness (i.e., a more agile business decision-making process) has minimal impact on overall agility.
  • Improvements to “reactive” decision-making deliver good immediate results, but have less long-term impact than improvements to “proactive” decision-making that anticipates rather than reacting to key environmental changes.

In summary, agile BI that is part of an overall agile decision-making and new-product-strategy-driving business process, and that emphasizes proactive search for extra-organizational data sources, should produce much better long-term bottom-line results than today’s reactive BI that depends on relatively static and intra-organizational data sources.

The Fundamental Limit to Today’s Agile Decision-Making via BI

Question: Where do the greatest threats to the success of the organization lie, in its internal business processes or in external changes to its environment and markets? Answer: In most cases, external.

Question: Which does better at allowing the business person to react fast to, and even anticipate, external changes – internally gathered data alone, or internal data plus external data that appears ahead of or gives context to internal data? Answer: Typically, internal plus external.

Question: What percentage of BI data is external data imported immediately, directly to the data store? Answer: Usually, less than 0.1%.

Question: What is the average time for the average organization from when a significant new data source shows up on the Web to when it begins to be imported into internal databases, much less BI? Answer: More than half a year[1].

The fundamental limit to the agility and effectiveness of BI therefore lies not in any inability to speed up analysis, but in the fact that today’s BI and the business processes associated with it are designed to focus on internal data. Increasingly, your customers are moving to the Web; your regulatory environment is moving to the Web; mobile devices are streaming data across the Web; new communications media like Facebook and Twitter are popping up; and businesses are capturing a very small fraction of this data, primarily from sources (long-time customers) that are changing the least. As a result, the time lost from deducing a shift in customer behavior from weekly or monthly per-store buying instead of social-network movement from one fad to another dwarfs the time saved when BI detects the per-store shift in a day instead of a weekend; and a correct reaction to the shift is far less likely without external contextual data.

This is an area where agile new product development is far ahead of BI. Where is the BI equivalent of reaching out to external open-source and collaborative communities? Of holding “idea jams” across organizations? Of features/information as a Web collaboration between an external user and a code/query creator? Of “spiraling in on” a solution? Of measuring effectiveness by “time to customer value” instead of “time to complete” or “time to decide”?

A simple but major improvement in handling external data in BI is pretty much doable today. It might involve integrating RSS feeds as pop-ups and Google searches as complements to existing BI querying. But if a major BI vendor features this capability on the front page of its Web site, I have yet to find that vendor.
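
To make the suggestion concrete, here is a minimal sketch of what such a complement might look like, using only Python’s standard library; the feed URL and the keyword filter are hypothetical placeholders, not a reference to any vendor’s feature:

    # Minimal sketch: pull an external RSS feed to complement internal BI results.
    # The feed URL and keyword are hypothetical placeholders.
    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_rss_headlines(feed_url, keyword=None):
        """Return (title, link) pairs from an RSS 2.0 feed, optionally filtered by keyword."""
        with urllib.request.urlopen(feed_url, timeout=10) as resp:
            tree = ET.parse(resp)
        items = []
        for item in tree.iter("item"):
            title = item.findtext("title", default="")
            link = item.findtext("link", default="")
            if keyword is None or keyword.lower() in title.lower():
                items.append((title, link))
        return items

    # Imagine this shown as a pop-up next to an existing per-store sales query:
    if __name__ == "__main__":
        for title, link in fetch_rss_headlines("http://example.com/industry-news.rss",
                                               keyword="recall"):
            print(title, "-", link)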

Action Items

In the long run, therefore, users should expect that agile BI that delivers major bottom-line results will probably involve:

  1. Much greater use of external data to achieve more proactive decision-making.
  2. Major changes to business processes involving BI to make them more agile.
  3. Constant fine-tuning of the querying that BI offers, customized to the needs of the business, rather than feature addition and decision-process change gated by the next BI vendor release.
  4. Integration with New Product Development, so that customer insights based on historical context can supplement agile development’s right-now interaction with its Web communities.

Here are a few suggestions:

  1. Look at a product like the joint Composite Software/Kapow Technologies Composite Application Data Services for Web Content to semi-automatically inhale new Web-based external data.
  2. Look for major BI vendors that “walk the walk” in agile development, such as IBM with its in-house-used Jazz development environment, as a good indicator that the vendor’s BI services arm is up to the job of helping to improve the agility of BI-related business processes; but be sure to check that the BI solution itself is also being developed that way.
  3. Look for BI vendor support for ad-hoc querying (as noted above, kudos to Endeca in this area), as this will likely make it easier to fine-tune querying constantly.
  4. Look for a BI vendor that can offer, in its own product line or via a third party, agile NPD software that includes collaborative tools to pass data between BI and the NPD project. Note that in most if not all cases you will still need to implement the actual BI-to-NPD link for your organization, and that if your organization does not do agile NPD you won’t get the full benefit of this. Also note that agile plus lean NPD, where the emphasis is on lean, does not qualify[2].
  5. Above all, change your metrics for agile BI success, from “increased speed” to “time to value”.

Today’s agile BI as touted by BI vendors is a very small, very delayed piece of a very good idea. Rather than patting them and yourself on the back for being five years behind development-tool vendors and three years behind NPD software vendors, why don’t you get moving on more ambitious stuff with real business impact? If not, WTFN?



[1] Answers based on Aberdeen Group data usefulness study, used by permission of Aberdeen Group.

[2] I am in disagreement with other commentators on this matter. I believe that lean cost-focused just-in-time processes work against agility as much as they work for it, since if product specs change there is less resource “slack” to accommodate the change.

Recently, I received a blurb from a company named 1010data, claiming that its personnel had been doing columnar databases for more than 30 years. As someone who was integrally involved at a technical level in the big era of database theory development (1975-1988), when everything from relational to versioning to distributed to inverted-list technology (the precursor to much of today’s columnar technology) first saw the light, I was initially somewhat skeptical. This wariness was heightened by marketing claims that 1010data’s data-warehousing performance was better not only than that of relational databases but also than that of competitors’ columnar databases, even though 1010data does little in the way of indexing; and this performance improvement applied not only to ad-hoc queries with little discernable pattern, but also to many repetitive queries for which index-style optimization was apparently the logical thing to do.

1010data’s marketing is not clear as to why this should be so; but after talking to them and reading their technical white paper, I have come up with a theory as to why it might be. The theory goes like this: 1010data is not living in the same universe.

That sounds drastic. What I mean by this is, while the great mass of database theory and practice went one way, 1010data went another, back in the 1980s, and by now, in many cases, they really aren’t talking the same language. So what follows is an attempt to recast 1010data’s technology in terms familiar to me. Here’s the way I see it:

Since the 1980s, people have been wrestling with the problem of read and write locks on data. The problem is that if you update a datum while another person is attempting to read it, each of you may see a different value, or the reader can’t predict which value he/she will see. To avoid this, the updater can block all other access via a write lock – which in turn slows the reader down drastically; or the “query from hell” can block updaters via a read lock on all data. In a data warehouse, updates are held and then rushed through at certain times (end of day/week) in order to avoid locking problems. Columnar databases also sometimes provide what is called “versioning”, in which previous values of a datum are kept around, so that the updater can operate on one value while the reader operates on another.
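
To illustrate the versioning idea in the abstract – this is a toy sketch of keeping old datum-level values around so readers and updaters never block each other, not 1010data’s or any vendor’s actual implementation:

    # Toy sketch of datum-level versioning: a reader sees the value as of the time
    # its session started, while an updater writes a newer version -- no locks.
    class VersionedDatum:
        def __init__(self, value, version=0):
            self.versions = {version: value}    # version number -> value
            self.latest = version

        def write(self, value):
            self.latest += 1
            self.versions[self.latest] = value  # old versions are kept, not overwritten

        def read(self, as_of):
            # Return the newest value whose version is <= the reader's snapshot.
            usable = [v for v in self.versions if v <= as_of]
            return self.versions[max(usable)]

    d = VersionedDatum(100)
    snapshot = d.latest             # a reader starts a session at version 0
    d.write(250)                    # an updater writes without blocking the reader
    print(d.read(as_of=snapshot))   # the reader still sees 100
    print(d.read(as_of=d.latest))   # a new session sees 250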

1010data provides a data warehouse/business intelligence solution as a remote service – the “database as a service” variant of SaaS/public cloud. However, 1010data’s solution does not start by worrying about locking. Instead, it worries about how to provide each end user with a consistent “slice of time” database of his/her own. It appears to do this as follows: all data is divided up into what they call “master” tables (as in “master data management” of customer and supplier records), which are smaller, and time-associated/time-series “transactional” tables, which are the really large tables.

Master tables are more rarely changed, and therefore a full copy of the table after each update (really, a “burst” of updates) can be stored on disk, and loaded into main memory if needed by an end user, with little storage and processing overhead. This isn’t feasible for the transactional tables; but 1010data sees old versions of these as integral parts of the time series, not as superseded data; so the actual amount of “excess” data “appended” to a table, if maximum session length for an end user is a day, is actually small in all realistic circumstances. As a result, two versions of a transactional table include a pointer to a common ancestor plus a small “append”. That is, the storage overhead of additional versioning data is actually small compared to some other columnar technologies, and not that much more than row-oriented relational databases.

Now the other shoe drops, because, in my very rough approximation, versioning entire tables instead of particular bits of data allows you to keep those bits of data pretty much sequential on disk – hence the lack of need for indexing. It is as if each burst of updates comes with an online reorganization that restores the sequentiality of the resulting table version, so that reads during queries can almost eliminate seek time. The storage overhead means that more data must be loaded from disk; but that’s more than compensated for by eliminating the need to jerk from one end of the disk to the other in order to inhale all the needed data.
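
Here is a rough sketch, under my approximation above, of a transactional-table version as a pointer to a shared base plus a small append, so that each end user’s slice-of-time view is just the base plus whatever had arrived by the start of his or her session; the class and field names are mine, not 1010data’s:

    # Rough sketch: a transactional table version = shared base + small append.
    # Rows stay in arrival (time-series) order, so scans stay largely sequential.
    class TableVersion:
        def __init__(self, base_rows):
            self.base = base_rows           # shared, never rewritten
            self.appended = []              # the only "excess" data per burst of updates

        def append_burst(self, new_rows):
            self.appended.extend(new_rows)  # old data is part of the time series, not superseded

        def snapshot(self, append_count):
            # A user's slice-of-time view: the base plus the first append_count appended rows.
            return self.base + self.appended[:append_count]

    t = TableVersion(base_rows=[("2009-05-01", 17), ("2009-05-02", 23)])
    user_view = len(t.appended)             # a user session starts here
    t.append_burst([("2009-05-03", 31)])    # a later burst of updates
    print(t.snapshot(user_view))            # the user's consistent view: two rows
    print(t.snapshot(len(t.appended)))      # a new session sees all three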

So here’s my take: 1010data’s claim to better performance, as well as to competitive scalability, is credible. Since we live in a universe in which indexing to minimize disk seek time plus minimizing added storage to minimize disk accesses in the first place allows us to push against the limits of locking constraints, we are properly appreciative of the ability of columnar technology to provide additional storage savings and bit-mapped indexing to store more data in memory. Since 1010data lives in a universe in which locking never happens and data is stored pretty sequentially, it can happily forget indexes and squander a little disk storage and still perform better.

1010data Loves Sushi

At this point, I could say that I have summarized 1010data’s technical value-add, and move on to considering best uses. However, to do that would be to ignore another way that 1010data does not operate in the same universe: it loves raw data. It would prefer to operate on data before any detection of errors and inconsistencies, as it views these problems as important data in their own right.

As a strong proponent of improving the quality of data provided to the end user, I might be expected to disagree strongly. However, as a proponent of “data usefulness”, I feel that the potential drawbacks of 1010data’s approach are counterbalanced by some significant advantages in the real world.

In the first place, 1010data is not doctrinaire about ETL (Extract, Transform, Load) technology. Rather, 1010data allows you to apply ETL at system implementation time or simply start with an existing “sanitized” data warehouse (although it is philosophically opposed to these approaches), or apply transforms online, at the time of a query. It’s nice that skipping the transform step when you start up the data warehouse will speed implementation. It’s also nice that you can have the choice of going raw or staying baked.

In the second place, data quality is not the only place where the usefulness of data can be decreased. Another key consideration is the ability of a wide array of end users to employ the warehoused data to perform more in-depth analysis. 1010data offers a user interface using the Excel spreadsheet metaphor and supporting column/time-oriented analysis (as well as an Excel add-in), thus providing better rolling/ad-hoc time-series analysis to a wider class of business users familiar with Excel. Of course, someone else may come along and develop such a flexible interface, although 1010data would seem to have a lead as of now; but in the meanwhile, the wider scope and additional analytic capabilities of 1010data appear to compensate for any problems with operating on incorrect data – and especially when there are 1010data features to ensure that analyses take into account possible incorrectness.

Caveat

To me, some of the continuing advantages of 1010data’s approach depend fundamentally on the idea that users of large transactional tables require ad-hoc historical analysis. To put it another way, if users really don’t need to keep historical data around for more than an hour in their databases, and require frequent updates/additions for “real-time analysis” (or online transaction processing), then tables will require frequent reorganizing and will include a lot of storage-wasting historical data, so that 1010data’s performance advantages will decrease or vanish.

However, there will always be ad-hoc, in-depth queryers, and these are pretty likely to be interested in historical analysis. So while 1010data may or may not be the be-all, end-all data-warehousing database for all verticals forever, it is very likely to offer distinct advantages for particular end users, and therefore should always be a valuable complement to a data warehouse that handles vanilla querying on a “no such thing as yesterday” basis.

Conclusion

Not being in the mainstream of database technology does not mean irrelevance; not being in the same database universe can mean that you solve the same problems better. It appears that taking the road less travelled has allowed 1010data to come up with a new and very possibly improved solution to data warehousing, just as inverted list resurfaced in the last few years to provide new and better technology in columnar databases. And it is not improbable that 1010data can continue to maintain any performance and ad-hoc analysis advantages in the next few years.

Of course, proof of these assertions in the real world is an ongoing process. I would recommend that BI/data warehousing users in large enterprises in all verticals kick the tires of 1010data – as noted, testbed implementation is pretty swift – and then performance test it and take a crack at the really tough analyst wish lists. To misquote Santayana, those who do not analyze history are condemned to repeat it – and that’s not good for the bottom line.

What, to my mind, was the biggest news out of EMC World? The much-touted Private Cloud? Don’t think so. The message that, as one presenter put it, “Tape Sucks”? Sorry. FAST Cache, VPLEX, performance boosts, cost cuts? Not this time. No, what really caught my attention was a throw-away slide showing that almost a majority of EMC customers have already adopted some form of deduplication technology, and that in the next couple of years, probably a majority of all business storage users will have done so.

Why do I think this is a big deal? Because technology related to deduplication holds the potential of delivering benefits greater than cloud; and user adoption of deduplication indicates that computer markets are ready to implement that technology. Let me explain.

First of all, let’s understand what “dedupe”, as EMC calls it, and a related technology, compression, mean to me. In its initial, technical sense, deduplication means removing duplicates in data. Technically, compression means removing “noise” – in the information-theory sense of removing bits that aren’t necessary to convey the information. Thus, for example, removing all but one occurrence of the word “the” in storing a document would be deduplication; using the soundex algorithm to represent “the” in two bytes would be compression. However, today popular “compression” products often use technical deduplication as well; for example, columnar databases compress the data via such techniques as bit-mapped indexing, and also de-duplicate column values in a table. Likewise, data deduplication products may apply compression techniques to shrink the storage size of data blocks that have already been deduplicated. So when we refer to “dedupe”, it often includes compression, and when we refer to compressed data, it often has been deduplicated as well. To try to avoid confusion, I refer to “dedupe” and “compress” to mean the products, and deduplication and compression to mean the technical terms.

When I state that there is an upcoming “dedupe revolution”, I really mean that deduplication and compression combined can promise a new way to improve not only backup/restore speed, but also transaction processing performance. Because, up to now, “dedupe” tools have been applied across SANs (storage area networks), while “compress” tools are per-database, “dedupe” products simply offer a quicker path than “compress” tools to achieving these benefits globally, across an enterprise.
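
A crude way to see the distinction (and how the two get combined in practice) is the following sketch, which deduplicates fixed-size blocks by hash and then compresses the unique blocks; the block size and sample data are arbitrary, and this is not any vendor’s algorithm:

    # Crude sketch: block-level deduplication followed by compression of unique blocks.
    import hashlib
    import zlib

    def dedupe_and_compress(data, block_size=64):
        blocks = [data[i:i+block_size] for i in range(0, len(data), block_size)]
        store = {}       # hash -> compressed unique block
        recipe = []      # sequence of hashes needed to rebuild the original
        for b in blocks:
            h = hashlib.sha256(b).hexdigest()
            if h not in store:              # deduplication: keep one copy per distinct block
                store[h] = zlib.compress(b) # compression: squeeze the copy we do keep
            recipe.append(h)
        return store, recipe

    def rebuild(store, recipe):
        return b"".join(zlib.decompress(store[h]) for h in recipe)

    data = b"the quick brown fox " * 200    # highly repetitive, like most backup streams
    store, recipe = dedupe_and_compress(data)
    print(len(data), "bytes raw ->", sum(len(v) for v in store.values()), "bytes stored")
    assert rebuild(store, recipe) == data   # lossless: the original comes back exactly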

These upcoming “dedupe” products are perhaps best compared to a sponge compressor that squeezes all excess “water” out of all parts of a data set. That means not only removing duplicate files or data records, but also removing identical elements within the data, such as all frames from a loading-dock video camera that show nothing going on. Moreover, it means compressing the data that remains, such as documents and emails whose verbiage can be encoded in a more compact form. When you consider that semi-structured or unstructured data such as video, audio, graphics, and documents makes up 80-90% of corporate data, and that the most “soggy” data types such as video use up the most space, you can see why some organizations are reporting up to 97% storage-space savings (70-80%, more conservatively) where “dedupe” is applied. And that doesn’t include some of the advances in structured-data storage, such as the columnar databases referred to above that “dedupe” columns  within tables.

So, what good is all this space saving? Note the fact that the storage capacity that users demand has been growing by 50-60% a year, consistently, for at least the last decade. Today’s “dedupe” may not be appropriate for all storage; but where it is, it is equivalent to setting back the clock 4-6 years. Canceling four years of storage acquisition is certainly a cost-saver. Likewise, backing up and restoring “deduped” data involves a lot less to be sent over a network (and the acts of deduplicating and reduplicating during this process add back only a fraction of the time saved), so backup windows and overhead shrink, and recovery is faster. Still, those are not the real reasons that “dedupe” has major long-term potential.

No, the real long-run reason that storage space saving matters is that it speeds retrieval from disk/tape/memory, storage to disk/tape/memory, and even processing of a given piece of data. Here, the recent history of “compress” tools is instructive. Until a few years ago, the effort of compressing and uncompressing tended to mean that compressed data actually took longer to retrieve, process, and re-store; but, as relational and columnar database users have found out, certain types of “compress” tools allow you to improve performance – sometimes by an order of magnitude. For example, recently, vendors such as IBM are reporting that relational databases such as DB2 benefit performance-wise from using “compressed” data. Columnar databases are showing that it is possible to operate on data-warehouse data in “compressed” form, except when it actually must be shown to the user, and thereby get major performance improvements.

So what is my vision of the future of “dedupe”? What sort of architecture are we talking about, 3-5 years from now? One in which the storage tiers below fast disk (and, someday, all the tiers, all the way to main memory) have “dedupe”-type technology added to them. In this context, it was significant that EMC chose at EMC World to trumpet “dedupe” as a replacement for Virtual Tape Libraries (VTL). Remember, VTL typically allows read/query access to older, rarely accessed data within a minute; so, clearly, deduped data on disk can be reduped and accessed at least as fast. Moreover, as databases and applications begin to develop the ability to operate on “deduped” data without the need for “redupe”, the average performance of a “deduped” tier will inevitably catch up with and surpass that of one which has no deduplication or compression technology.

Let’s be clear about the source of this performance speedup. Let us say that all data is deduplicated and compressed, taking up 1/5 as much space, and all operations can be carried out on “deduped” data instead of its “raw” equivalents. Then retrieval from any tier will be 5 times as fast, and 5 times as much data can be stored in the next higher tier for even more performance gains. Processing this smaller data will take 1/2 to 1/5 as much time. Adding all three together, and ignoring the costs of “dedupe”/“redupe”, a 50% speedup of an update and an 80% speedup of a large query seem conservative. Because the system will need “dedupe”/“redupe” only rarely – “dedupe” when the data is first stored and “redupe” whenever the data is displayed to a user in a report or query response – and because the task could be offloaded to specialty “dedupe”/“redupe” processors, on average “dedupe”/“redupe” should add only minimal performance overhead to the system, and should subtract less than 10% from the speedup cited above. So, conservatively, I estimate the performance speedup from this future “dedupe” at 40-70%.
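
For what it’s worth, here is the back-of-the-envelope arithmetic behind those estimates, as a sketch; the fractions of each operation that scale with data volume are my assumptions, not measurements:

    # Back-of-the-envelope sketch of the estimates above. Assumption (mine): a fraction
    # of each operation's time scales with data volume and shrinks 5x under "dedupe";
    # the rest (locking, logging, network round trips) is unchanged.
    def time_saved(data_bound_fraction, shrink=5.0):
        remaining = data_bound_fraction / shrink + (1.0 - data_bound_fraction)
        return 1.0 - remaining

    # If roughly 60% of an update's time scales with data volume (my assumption):
    print("update: %.0f%% faster" % (100 * time_saved(0.6)))   # ~48%
    # If nearly all of a large query's time scales with data volume (my assumption):
    print("query:  %.0f%% faster" % (100 * time_saved(1.0)))   # 80%
    # Subtracting up to 10 points for dedupe/redupe overhead lands in roughly
    # the 40-70% range cited above.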

What effect is this going to have on IT, assuming that “the dedupe revolution” begins to arrive 1-2 years from now? First, it will mean that, 3-5 years out, the majority of storage, rather than a minority replacing some legacy backup, archive, or active-passive disaster recovery storage, will benefit from turning the clock back 4-6 years via “dedupe.” Practically, performance will improve dramatically and storage space per data item will shrink drastically, even as the amount of information stored continues its rapid climb – and not just in data access, but also in networking. Data handled in deduped form everywhere within a system also has interesting effects on security: the compression within “dedupe” is a sort of quick-and-dirty encryption that can make data pilferage by those who are not expert in “redupe” pretty difficult. Storage costs per bit of information stored will take a sharp drop; storage upgrades can be deferred and server and network upgrades slowed. When you add up all of those benefits, from my point of view, “the dedupe revolution” in many cases does potentially more for IT than the incremental benefits often cited for cloud.

Moreover, implementing “dedupe” is simply a matter of a software upgrade to any tier: memory, SSD, disk, or tape. So, getting to “the dedupe revolution” potentially requires much less IT effort than getting to cloud.

One more startling effect of dedupe: you can throw many of your comparative TCO studies right out the window. If I can use “dedupe” to store the same amount of data on 20% as much disk as my competitor, with 2-3 times the performance, the TCO winner will not be the one with the best processing efficiency or the greatest administrative ease of use, but the one with the best-squeezing “dedupe” technology.

What has been holding us back in the last couple of years from starting on the path to dedupe Nirvana, I believe, is customers’ wariness of a new technology. The EMC World slide definitively establishes that this concern is going away, and that there’s a huge market out there. Now, the ball is in the vendors’ court. This means that all vendors, not just EMC, will be challenged to merge storage “dedupe” and database “compress” technology to improve the average data “dry/wet” ratio, and “dedupify” applications and/or I/O to ensure more processing of data in its “deduped” state. (Whether EMC’s Data Domain acquisition complements its Avamar technology completely or not, the acquisition adds the ability to apply storage-style “dedupe” to a wider range of use cases; so EMC is clearly in the hunt). Likewise, IT will be challenged to identify new tiers/functions for “dedupe,” and implement the new “dedupe” technology as it arrives, as quickly as possible. Gentlemen, start your engines, and may the driest data win!

Oracle’s announcement of its “hybrid columnar compression” option for its Exadata product last summer clearly relates to the renewed attention paid to columnar databases over the last year by columnists such as Merv Adrian and Curt Monash. This example of a “hybrid” between columnar and row-oriented technology makes life yet more complicated for the IT buyer of data warehousing and business intelligence (BI) solutions. However, there does seem to be some agreement between the advocates of columnar and row-oriented that sheds some light on the appropriate places for each – and for hybrid.

Daniel Abadi in an excellent post summarizes the spectrum. If I understand his taxonomy correctly, row-oriented excels in “random reads” where a single record with more than 2-3 fields is accessed (or for single inserts and updates); columnar excels for queries across a large number of records whose columns (or “fields”) contain some commonality that makes for effective compression. The hybrids attempt to achieve 80% (my figure, plucked from the air) of the performance advantages of both. 

To follow this thought a bit further, Daniel divides the hybrid technologies into block-oriented or “PAX”, fractured mirror, and fine-grained. The PAX approach stores in column format for particular disk blocks; the fractured mirror operates in a real-world disaster recovery environment and treats one copy of the data store as row-oriented, the other as column-oriented, sending transactions to either as appropriate; the fine-grained hybrid is similar to PAX, but compresses the columns of particular fields of particular tables, not a whole disk block. Oracle appears to be an example of the PAX approach, while Vertica has some features that appear to implement the fine-grained approach.
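
For readers who have not had to think about the physical difference, here is a toy sketch of the two layouts, and of why a low-cardinality column compresses so well once its values are stored together; the table and the run-length encoding are purely illustrative:

    # Toy sketch: the same three records laid out row-wise and column-wise,
    # plus a run-length encoding of one low-cardinality column.
    records = [
        (101, "MA", 19.95),
        (102, "MA", 5.00),
        (103, "NH", 19.95),
    ]

    # Row-oriented: each record's fields are stored together (good for fetching one record).
    row_store = list(records)

    # Column-oriented: each field's values are stored together (good for scanning one column).
    col_store = {
        "id":    [r[0] for r in records],
        "state": [r[1] for r in records],
        "price": [r[2] for r in records],
    }

    def run_length_encode(values):
        runs, prev, count = [], None, 0
        for v in values:
            if v == prev:
                count += 1
            else:
                if prev is not None:
                    runs.append((prev, count))
                prev, count = v, 1
        if prev is not None:
            runs.append((prev, count))
        return runs

    # Sorting or clustering (as columnar stores often do) makes the repetition contiguous,
    # so a column of thousands of state codes shrinks to a handful of runs.
    print(run_length_encode(sorted(col_store["state"] * 1000)))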

I would argue that the future belongs more to pure columnar or fractured mirror than to row-oriented or the other two flavors of hybrid. Here is my reasoning: data stores continue to scale in size by 50% a year; the proportion of storage devoted to main memory and solid-state devices (SSDs) is likewise going up. The disadvantage of columnar in “random reads” is therefore decreasing, as databases are increasingly effective at ensuring that the data accessed is already in fast-response storage. In other words, it’s not just the number of I/Os, it’s what you are inputting from. 

There is another factor: as database size goes up, the disadvantages of row-oriented queries increase. As database size increases, the number of “random reads” does not necessarily increase, but the amount of data that must be accessed in the average query does necessarily increase. Compression applied across all columns and indexes increases its advantage over selective compression and no compression at all in this case, because there is less data to upload. And the “query from hell” that scans all of a petabyte data warehouse is not only the extreme case of this increasing advantage; scaling the system’s ability to handle such queries is often a prime concern of IT. 

I would also argue that the same trends make the pure or fractured-mirror columnar database of increasing interest for operational data stores that combine online transaction processing with lots of updates and continuous querying and reporting. For updates, the competition between columnar and row-oriented is getting closer, as many of these updates involve only 1-3 fields/columns of a row, while updates are most likely to affect the most recent data and therefore the data increasingly likely to be in main-memory cache. For inserts/deletes, updating in-memory indexes immediately along with “write to disk later” means that the need of columnar for multiple I/Os need not be a major disadvantage in many cases. And for online backup, the compressed data of the columnar approach wins hands down. 

My takeaway with regard to Oracle Hybrid Columnar Compression, therefore, is that over time your mileage may vary.  I would not be surprised if, someday, Oracle moved beyond “disk-block hybrid” to a fractured mirror approach, and that such an approach took over many of the high-end tasks for which vanilla row-oriented Oracle Database 11g r2 is now the database of choice.

 

Full disclosure: Dave is a friend, and a long-time colleague. He has just written an excellent book on Data Protection; hence the following musings.

 

As I was reading (a rapid first scan), I tried to pin down why I liked the book so much. It certainly wasn’t the editing, since I helped with that. The topic is reasonably well covered, albeit piecemeal, by vendors, industry associations, and bloggers. And while I have always enjoyed Dave’s stories and jokes, the topic does not lend itself to elaborate stylistic flourishes.

 

After thinking about it some more, I came to the conclusion that it’s Dave’s methodology that I value. IMHO, in each chapter Dave lays out a comprehensive and innovative classification of the topic at hand – data governance, information lifecycle management, data security – and then uses that classification to bring new insight into a well-covered topic. The reason I like this approach is that it allows you to use the classification as a springboard, to come to your own conclusions, to extend the classification and apply it in other areas. In short, I found myself continually translating classifications from the narrow world of storage to the broad world of “information”, and being enlightened thereby.

 

One area in particular that called forth this type of analysis was the topic of cloud computing and storage. If data protection, more or less, involves considerations of compliance, operational/disaster recovery, and security, how do these translate to a cloud external to the enterprise? And what is the role of IT in data protection when both physical and logical information are now outside of IT’s direct control?

 

But this is merely a small part of the overall question of the future of IT, if external clouds take over large chunks of enterprise software/hardware. If the cloud can do it all cheaper, because of economies of scale, what justification is there for IT to exist any longer? Or will IT become “meta-IT”, applying enterprise-specific risk management, data protection, compliance, and security to their own logical part of a remote, multi-tenant physical infrastructure?

 

I would suggest another way of slicing things. It is reasonable to think of a business, and hence underlying IT, as cost centers, which benefit from commodity solutions provided externally, and competitive-advantage or profit centers, for which being like everything else is actually counter-productive. In an ideal world, where the cloud can always underprice commodity hardware and software, IT’s value-add lies where things are not yet commodities. In other words, in the long run, IT should be the “cache”, the leading edge, the driver of the computing side of competitive advantage.

 

What does that mean, practically? It means that the weight of IT should shift much more towards software and product development and initial use. IT’s product-related and innovative-process-related software and the systems to test and deploy them are IT’s purview; the rest should be in the cloud. But this does not make IT less important; on the contrary, it makes IT more important, because not only does IT focus on competitive advantage when things are going well, it also focuses on agile solutions that pay off in cost savings by more rapid adaptation when things are going poorly. JIT inventory management is a competitive advantage when orders are rising; but also a cost saver when orders are falling.

 

I realize that this future is not likely to arrive any time soon. The problem is that in today’s IT, maintenance costs crowd out new-software spending, so that the CEO is convinced that IT is not competent to handle software development. But let’s face it, no one else is, either. Anyone following NPD (new product development) over the last few years realizes that software is an increasing component in an increasing number of industries. Outsourcing competitive-advantage software development is therefore increasingly like outsourcing R&D – it simply doesn’t work unless the key overall direction is in-house. Whether or not IT does infrastructure governance in the long run, it is necessarily the best candidate to do NPD software-development governance.

 

So I do believe that IT has a future; but quite a different one from its present. As you can see, I have wandered far afield from Data Protection, thanks to Dave Hill’s thought-provoking book. The savvy reader of this tome will, I have no doubt, be able to come up with other, equally fascinating thoughts.

There is a wonderful short story by Jorge Luis Borges ("Pierre Menard, Author of the Quixote") that, I believe, captures the open source effort to come to terms with Windows – which in some quarters is viewed as the antithesis of the philosophy of open source. In this short story, a critic analyzes Don Quixote as written by someone four hundred years later – someone who has attempted to live his life so as to be able to write the exact same words as in the original Don Quixote. The critic’s point is that even though the author is using the same words, today they mean something completely different.

 

In much the same way, open source has attempted to mimic Windows on “Unix-like” environments (various flavors of Unix and Linux) without triggering Microsoft’s protection of its prize operating system. To do this, they have set up efforts such as Wine and ReactOS (to provide the APIs of Windows from Win2K onwards) and Mono (to provide the .NET APIs). These efforts attempt to support the same APIs as Microsoft’s, but with no knowledge of how Microsoft created them. This is not really reverse engineering, as the aim of reverse engineering is usually to figure out how functionality was achieved. These efforts don’t care how the functionality was achieved – they just want to provide the same collection of words (the APIs and functionality).

 

But while the APIs are the same, the meaning of the effort has changed in the twenty-odd years since people began asking how to make moving programs from Wintel to another platform (and vice versa) as easy as possible. Then, every platform had difficulties with porting, migration, and source or binary compatibility. Now, Wintel and the mainframe, among the primary installed bases, are the platforms that are most difficult to move to or from. Moreover, the Web, or any network, as a distinct platform did not exist; today, the Web is increasingly a place in which every app and most middleware must find a way to run. So imitating Windows is no longer so much about moving Windows applications to cheaper or better platforms; it is about reducing the main remaining barrier to being able to move any app or software from any platform to any other, and into “clouds” that may hide the underlying hardware, but will still suffer when apps are platform-specific.

 

Now, “moving” apps and “easy” are very vague terms. My own hierarchy of ease of movement from place to place begins with real-time portability. That is, a “virtual machine” on any platform can run the app, without significant effects on app performance, robustness, and usability (i.e., the user interface allows you to do the same things). Real-time portability means the best performance for the app via load balancing and dynamic repartitioning. Java apps are pretty much there today. However, apps in other programming languages are not so lucky, nor are legacy apps.

 

The next step down from real-time portability is binary compatibility. The app may not work very well when moved in real time from one platform to another, but it will work, without needing changes or recompilation. That’s why forward and backward compatibility matter: they allow the same app to work on earlier or later versions of a platform. As time goes on, binary compatibility gets closer and closer to real-time portability, as platforms adapt to be able to handle similar workloads. Windows Server may not scale as well as the mainframe, but they both can handle the large majority of Unix-like workloads. It is surprising how few platforms have full binary compatibility with all the other platforms; it isn’t just Windows to the mainframe but also compatibility between different versions of Unix and Linux. So we are a ways away from binary compatibility, as well.

 

The next step down is source-code compatibility. This means that in order to run on another platform, you can use the same source code, but it must be recompiled. In other words, source-code but not binary compatibility seems to rule out real-time movement of apps between platforms. However, it does allow applications to generate a version for each platform, and then interoperate/load balance between those versions; so we can crudely approximate real-time portability in the real world. Now we are talking about a large proportion of apps on Unix-like environments (although not all), but Windows and mainframe apps are typically not source-code compatible with the other two environments. Still, this explains why users can move Linux apps onto the mainframe with relative ease.

 

There’s yet another step down: partial compatibility. This seems to come in two flavors: higher-level compatibility (that is, source-code compatibility if the app is written to a higher-level middleware interface such as .NET) and “80-20” compatibility (that is, 80% of apps are source-code incompatible in only a few, easily modified places; the other 20% are the nasty problems). Together, these two cases comprise a large proportion of all apps; and it may be comforting to think that legacy apps will sunset themselves so that eventually higher-level compatibility will become de facto source-code compatibility. However, the remaining cases include many important Windows apps and most mission- and business-critical mainframe apps. To most large enterprises, partial compatibility is not an answer. And so we come to the final step down: pure incompatibility, only cured by a massive porting/rewrite effort that has become much easier but is still not feasible for most such legacy apps.

Why does all this matter? Because we are closer to Nirvana than we realize. If we can imitate enough of Windows on Linux, we can move most Windows apps to scale-up servers when needed (Unix/Linux or mainframe). So we will have achieved source-code compatibility from Windows to Linux, Java real-time portability from Linux to Windows, source-code compatibility for most Windows apps from Windows to Linux on the mainframe, and Linux source-code compatibility and Java real-time portability from Linux to the mainframe and back. It would be nice to have portability from z/OS apps to Linux and Windows platforms; but neither large enterprises nor cloud vendors really need this – the mainframe has that strong a TCO/ROI and energy-savings story for large-scale and numerous (say, more than 20 apps) situations.

 

So, in an irony that Borges might appreciate, open-source efforts may indeed allow lower costs and greater openness for Windows apps; but not because open source free software will crowd out Windows. Rather, a decent approximation of cross-platform portability with lower per-app costs will be achieved because these efforts allow users to leverage Windows apps on other platforms, where the old proprietary vendors could never figure out how to do it. The meaning of the effort may be different than it would have been 15 years ago; but the result will be far more valuable. Or, as Borges’ critic might say, the new meaning speaks far more to people today than the old. Sometimes, Don Quixote tilting at windmills is a useful thing.

A recent Techtarget posting by the SearchSOA editor picks up on the musings of Miko Matsumura of Software AG, suggesting that because most new apps in the cloud can use data in main memory, there’s no need for the enterprise-database SQL API; rather, developers should access their data via Java. OK, that’s a short summary of a more nuanced argument. But the conclusion is pretty blunt: “SQL is toast.”

I have no great love for relational databases – as I’ve argued for many years, “relational” technology is actually marketing hype about data management that mostly is not relational at all. That is, the data isn’t stored as relational theory would suggest. The one truly relational thing about relational technology is SQL: the ability to perform operations on data in an elegant, high-level, somewhat English-like mini-language.

What’s this Java alternative that Miko’s talking about? Well, Java is an object-oriented programming (OOP) language. By “object”, OOP means a collection of code and the data on which it operates. Thus, an object-oriented database is effectively chunks of data, each stored with the code to access it.

So this is not really about Larry Ellison/Oracle deciding the future, or the “network or developer [rather] than the underlying technology”, as Miko puts it. It’s a fundamental question: which is better, treating data as a database to be accessed by objects, or as data within objects?

Over the last fifteen years, we have seen the pluses and minuses of “data in the object”. One plus is that there is no object-relational mismatch, in which you have to fire off a SQL statement to some remote, un-Java-like database like Oracle or DB2 whenever you need to get something done. The object-relational mismatch has been estimated to add 50% to development times, mostly because developers who know Java rarely know SQL.

Then there are the minuses, the reasons why people find themselves retrofitting SQL invocations to existing Java code. First of all, object-oriented programs in most cases don’t perform well in data-related transactions. Data stored separately in each object instance uses a lot of extra space, and the operations on it are not optimized. Second, in many cases, operations and the data are not standardized across object classes or applications, wasting lots of developer time. Third, OOP languages such as Java are low-level, and specifically low-level with regard to data manipulation. As a result, programming transactions on vanilla Java takes much longer than programming on one of the older 4GLs (like, say, the language that Blue Phoenix uses for some of its code migration).

So what effect would storing all your data in main memory have on Java data-access operations? Well, the performance hit would still be there – but would be less obvious, because of the overall improvement in access speed. In other words, it might take twice as long as SQL access, but since we might typically be talking about 1000 bytes to operate on, we still see 2 microseconds instead of 1, which is a small part of response time over a network. Of course, for massive queries involving terabytes, the performance hit will still be quite noticeable.
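
To see the development-time argument concretely, here is a sketch contrasting a set-oriented SQL aggregate with the equivalent hand-written loop over in-memory objects. Java is the language at issue above, but the contrast reads the same in any host language; I’ve used Python, with SQLite standing in for the enterprise database:

    # Sketch of the contrast: a set-oriented SQL aggregate vs. a hand-rolled loop
    # over in-memory data. SQLite stands in for the enterprise database.
    import sqlite3

    orders = [("acme", 120.0), ("acme", 80.0), ("zenith", 45.0)]

    # High-level, declarative: the engine picks the access path and can optimize.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    print(db.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer").fetchall())

    # Low-level, "data in the object": every aggregation is hand-written, per application.
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0.0) + amount
    print(sorted(totals.items()))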

What will not go away immediately is the ongoing waste of development time. It’s not an obvious waste of time, because the developer either doesn’t know about 4GL alternatives or is comparing Java-data programming to all the time it takes to figure out relational operations and SQL. But it’s one of the main reasons that adopting Java actually caused a decrease in programmer productivity compared to structured programming, according to some user feedback I once collected 15 years ago.

More fundamentally, I have to ask if the future of programming is going to be purely object-oriented or data-oriented. The rapid increase in networking speed of the Internet doesn’t make data processing speed ignorable; on the contrary, it makes it all the more important as a bottleneck. And putting all the data in main memory doesn’t solve the problem; it just makes the problem kick in at larger amounts of data – i.e., for more important applications. And then there’s all this sensor data beginning to flow across the Web …

So maybe SQL is toast. If what replaces it is something that Java can invoke that is high-level, optimizes transactions and data storage, and allows easy access to existing databases – in other words, something data-oriented, something like SQL – then I’m happy. If it’s something like storing data as objects and providing minimal, low-level APIs to manipulate that data – then we will be back to the same stupid over-application of Java that croaked development time and scalability 15 years ago.

 
In one of my favorite sleazy fantasy novels (The Belgariad, David Eddings) one of the characters is attempting to explain to another why reviving the dead is not a good idea. “You have the ability to simplify, to capture a simple visual picture of something complex [and change it],” the character says.  “But don’t over-simplify. Dead is dead.”
 
In a recent white paper on cloud computing, in an oh-by-the-way manner, Sun mentions the idea of data locality. If I understand it correctly, virtual “environments” in a cloud may have to physically move not only from server to server, but from site to site and/or from private data center to public cloud server farm and back. More exactly, the applications don’t have to move (just their “state”), and the virtual machine software and hardware doesn’t have to move (it can be replicated or emulated in the target machine); but the data may have to be moved or copied in toto (or continue to access the same physical data store, remotely – which would violate the idea of cloud boundaries, among other problems [like security and performance]). To avoid this, it is apparently primarily up to the developer to keep data locality in mind, which seems to mean avoiding moving the data where possible by keeping it on the same physical server-farm site.
 
Data locality will certainly be a quick fix for immediate problems of how to create the illusion of a “virtual data center.” But is it a long-term fix? I think not. The reason, I assert, is that cloud computing is an over-simplification – physically distributed data is not virtually unified data -- and our efforts to patch it to approximate the “ideal cloud” will result in unnecessary complexity, cost, and legacy systems.
 
Consider the most obvious trend in the computing world in the last few years: the inexorable growth in storage of 40-60% per year, continuing despite the recession. The increase in storage reflects, at least partly, an increase in data-store size per application, or, if you wish, per “data center”. It is an increase that appears faster than Moore’s Law, and faster than the rate of increase in communications bandwidth.  If moving a business-critical application’s worth of data right now from secondary to primary site for disaster-recovery purposes takes up to an hour, it is likely that moving it two years from now will take 1 ½-2 hours, and so on. Unless this trend is reversed, the idea of a data center that can move or re-partition in minutes between public and private cloud (or even between Boston and San Francisco in a private cloud) is simply unrealistic.
 
Of course, since the unrealistic doesn’t happen, what will probably happen is that developers will create kludges, one for each application that is “cloud-ized”, to ensure that data is “pre-copied” and periodically “re-synchronized”, or that barriers are put in the way of data movement from site to site within the theoretically virtual public cloud. That’s the real danger – lots of “reinventing the wheel”, with attendant long-term unnecessary costs of administering (and developing new code on top of) non-standardized data movements and the code propelling them, database-architecture complexity, and unexpected barriers to data movement inside the public cloud.
 
What ought to provide a longer-term solution, I would think, is (a) a way of slicing the data so that only the stuff needed to “keep it running” is moved – which sounds like Information Lifecycle Management (ILM), since one way of doing this is to move the most recent data, the data most likely to be accessed and updated – and (b) a standardized abstraction-layer interface to the data that enforces this. In this way, we will at least have staved off data-locality problems for a few more years, and we don’t embed kludge-type solutions in the cloud infrastructure forever.
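
A minimal sketch of what I mean by (a): move only the slice most likely to be touched, behind an abstraction layer that hides where the rest lives. The 30-day threshold, record layout, and class names are arbitrary illustrations, not a proposed standard:

    # Minimal sketch of ILM-style slicing: move only the recent, likely-to-be-touched
    # slice of the data with the application; leave the rest where it is and fetch on miss.
    import datetime

    HOT_WINDOW = datetime.timedelta(days=30)   # arbitrary threshold for illustration

    def split_for_move(records, now):
        """records: (key, last_accessed, payload) tuples -> (move_now, leave_behind)."""
        hot = [r for r in records if now - r[1] <= HOT_WINDOW]
        cold = [r for r in records if now - r[1] > HOT_WINDOW]
        return hot, cold

    class AbstractedStore:
        """The application sees one store; this layer decides local vs. remote."""
        def __init__(self, hot, cold_fetcher):
            self.local = {r[0]: r for r in hot}
            self.cold_fetcher = cold_fetcher   # callback to the site left behind

        def get(self, key):
            return self.local.get(key) or self.cold_fetcher(key)

    if __name__ == "__main__":
        now = datetime.datetime(2009, 8, 1)
        recs = [("a", datetime.datetime(2009, 7, 25), "recent order"),
                ("b", datetime.datetime(2008, 1, 1), "old order")]
        hot, cold = split_for_move(recs, now)
        store = AbstractedStore(hot, cold_fetcher=lambda k: ("(fetched remotely)", k))
        print(store.get("a"))   # served locally
        print(store.get("b"))   # falls through to the remote site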
 
However, I fear that such a solution will not arrive before we have created another complicated administrative nightmare. On the one hand, if data locality rules, haven’t we just created a more complicated version of SaaS (the application can’t move because the data can’t)? On the other hand, if our kludges succeed in preserving the illusion of the dynamic application/service/data-center by achieving some minimal remote data movement, how do we scale cloud server-farm sites steadily growing in data-store size by load-balancing hundreds of undocumented, hard-coded, differing pieces of software accessing data caches that pretend to be exabytes of physically local data and actually go out to remote data on a cache miss?
 
A quick search of Google finds no one raising this particular point.  Instead, the concerns relating to data locality seem to be about vendor lock-in, compliance with data security and privacy regulations, and the difficulty of moving the data for the first time.  Another commentator notes the absence of standardized interfaces for cloud computing.
 
But I say, dead is dead, not alive by another name. If data is always local, that’s SaaS, not cloud by another name. And when you patch to cover up over-simplification, you create unnecessary complexity. Remember when simple-PC server farms were supposed to be an unalloyed joy, before the days of energy concerns and recession-fueled squeezes to high distributed-environment administrative IT costs? Or when avoidance of vendor lock-in was worth the added architectural complexity, before consolidation showed that it wasn’t? I wonder, when this is all over, will IT echo Oliver Hardy, and say to vendors, “Well, Stanley, here’s another fine mess you’ve gotten me into”?  

I was listening in on a discussion of a recent TPC-H benchmark by Sun (hardware) and its ParAccel columnar/in-memory-technology database (cf. recent blog posts by Merv Adrian and Curt Monash), when a benchmarker dropped an interesting comment. It seems that ParAccel used 900-odd TB of storage to store 30 TB of data, not because of inefficient storage or to “game” the benchmark, but because disks are now so large that in order to gain the performance benefits of streaming from multiple spindles into main memory, ParAccel had to use that amount of storage to allow parallel data streaming from disks to main memory. Thus, if I understand what the benchmarker said, in order to maximize performance, ParAccel had to use 900-odd 1-terabyte disks simultaneously.

What I find interesting about that comment is the indication that queuing theory still means something when it comes to database performance. According to what I was taught back in 1979, I/Os pile up in a queue when the number of requests is greater than the number of disks, and so at peak load, 20 500-MB disks can deliver a lot better performance than 10 1-GB disks – although they tend to cost a bit more. The last time I looked, at list price 15 TB of 750-GB SATA drives cost $34,560, or 25% more than 15 TB of 1-TB SATA drives.

The commenter then went on to note that, in his opinion, solid-state disk would soon make this kind of maneuver passé. I think what he’s getting at is that solid-state disk should be able to provide parallel streaming from within the “disk array”, without the need to go to multiple “drives”. This is because solid-state disk is main memory imitating disk: that is, the usual parallel stream of data from memory to processor is constrained to look like a sequential stream of data from disk to main memory. But since this is all a pretence, there is no reason that you can’t have multiple disk-memory “streams” in the same SSD, effectively splitting it into 2, 3, or more “virtual disks” (in the virtual-memory sense). It’s just that SSDs were so small in the old days, there didn’t seem to be any reason to bother.

To me, the fact that someone would consider using 900 TB of storage to achieve better performance for 30 TB of data is an indication that (a) the TPC-H benchmark is too small to reflect some of the user data-processing needs of today, and (b) memory size is reaching the point at which many of these needs can be met just with main memory. A storage study I have been doing recently suggests that even midsized firms now have total storage needs in excess of 30 TB, and in the case of medium-sized hospitals (with video-camera and MRI/CAT scan data) 700 TB or more.

To slice it finer: structured-data database sizes may be growing, but not as fast as memory sizes, so many of these (old-style OLTP) can now be handled via main memory and (as a stopgap for old-style programs) SSD. Unstructured/mixed databases, as in the hospital example, still require regular disk, but now take up so much storage that it is still possible to apply queuing theory to them by streaming I/O in parallel from data striped on 100s of disks. Data warehouses fall somewhere in between: mostly structured, but still potentially too big for memory/SSD. But data warehouses don’t exist in a vacuum: the data warehouse is typically physically in the same location as unstructured/mixed data stores.
By combining data warehouse and unstructured-data storage and striping across disks, you can improve performance and still use up most of your disk storage – so queuing theory still pays off.

How about the next three years? Well, we know storage size is continuing to grow, perhaps at 40-50% a year, despite the recession, as regulations about email and video data retention continue to push the unstructured-data “pig” through the enterprise’s data-processing “python.” We also know that Moore’s Law may be beginning to break down, so that memory size may be on a slower growth curve. And we know that the need for real-time analysis is forcing data warehouses to extend their scope to updatable data and constant incremental OLTP feeds, and to relinquish a bit of their attempt to store all key data (instead, allowing in-situ querying across the data warehouse and OLTP).

So if I had to guess, I would say that queuing theory will continue to matter in data warehousing, and that fact should be reflected in any new or improved benchmark. However, SSDs will indeed begin to impact some high-end data-warehousing databases, and performance-tuning via striping will become less important in those circumstances – that also should be reflected in benchmarks. It is plain, though, that in such a time of transition, benchmarks such as TPC-H cannot fully and immediately reflect each shift in the boundary between SSD and disk. Caveat emptor: users should begin to make finer-grained decisions about which applications belong with what kind of storage tiering.
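
For the record, here is the spindle arithmetic in sketch form; the per-drive throughput figure is an arbitrary illustration, while the price figures come from the list-price comparison above:

    # Sketch of the spindle arithmetic above. Aggregate throughput scales with the
    # number of drives streaming in parallel; the per-drive throughput is illustrative,
    # the prices come from the list-price comparison in the text.
    data_tb = 30
    drive_tb = 1
    drives_used = 900                      # spindles used in the benchmark configuration
    per_drive_mb_s = 100                   # illustrative sequential throughput per spindle

    parallel_mb_s = drives_used * per_drive_mb_s
    minimal_drives = data_tb // drive_tb   # just enough capacity, far fewer spindles
    print("900 spindles: %d GB/s; 30 spindles: %d GB/s"
          % (parallel_mb_s / 1000, minimal_drives * per_drive_mb_s / 1000))

    # The price point quoted above: 15 TB of 750-GB drives at $34,560 is 25% more
    # than 15 TB of 1-TB drives -- i.e., about $27,650 for the larger drives.
    print("implied price of 15 TB of 1-TB drives: $%.0f" % (34560 / 1.25))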

Yesterday, I participated in Microsoft’s grand experiment in a “virtual summit”, by installing Microsoft LiveCam on my PC at home and then doing three briefings by videoconferencing (two user briefings lacked video, and the keynote required audio via phone). The success rate wasn’t high; in two of the three briefings, we never did succeed in getting both sides to view video, and in one briefing, the audio kept fading in and out. From some of the comments on Twitter, many of my fellow analysts were unimpressed by their experiences.

However, in the one briefing that worked, I found there was a different “feel” to the briefing. Trying to isolate the source of that “feel” – after all, I’ve seen jerky 15-fps videos on my PC before, and video presentations with audio interaction – I realized that there was one aspect to it that was unique: not only did I (and the other side) see each other; we also saw ourselves. And that’s one possibility of videoconferencing that I’ve never seen commented on (although see http://www.editlib.org/p/28537).

The vendor-analyst interaction, after all, is an alternation of statements meant to convince: the vendor, about the value of the solution; the analyst, about the value of the analysis. Each of those speaker statements is “set up” immediately previously by the speaker acting as listener. Or, to put it very broadly, in this type of interaction a good listener makes a good convincer.

So the key value of a videoconference of this type is that instant feedback about how one is coming across as both a listener and speaker is of immense value. With peripheral vision the speaker can adjust his or her style so he/she appears more convincing to himself/herself; and the listener can adjust his or her style so as to emphasize interest in the points that he/she will use as a springboard to convince in his/her next turn as speaker. This is something I’ve found to work in violin practice as well: it allows the user to move quickly to playing with the technique and expression that one is aiming to employ.

So, by all means, criticize the way the system works intermittently and isn’t flexible enough to handle all “virtual summit” situations, the difficulties in getting it to work, and the lack of face-to-face richer information-passing. But I have to tell you, if all of the summit had been like that one brief 20 minutes where everything worked and both sides could see the way they came across, I would actually prefer that to face-to-face meetings.

“O wad some Pow’r the giftie gie us,” said my ancestors’ countryman, Scotsman Robbie Burns, “To see ourselves as others see us.” The implication, most have assumed, is that we would be ashamed of our behavior. But with something like Microsoft’s LiveCam, I think the implication is that we would immediately change our behavior so we liked what we saw; and would be the better for our narcissism.

 

It seems as if I’m doing a lot of memorializing these days – first Sun, now Joseph Alsop, CEO of Progress Software since its founding 28 years ago. It’s strange to think that Progress started up shortly before Sun, but took an entirely different direction: SMBs (small-to-medium-sized businesses) instead of large enterprises, software instead of hardware.  So many database software companies since that time that targeted large enterprises have been marginalized, destroyed, crowded out, or acquired by IBM, CA (acting, in Larry Ellison’s pithy phrase, as “the ecosystem’s needed scavenger”), and Oracle.

 

Let’s see, there’s IDMS, DATACOM-DB, Model 204, and ADABAS from the mainframe generation (although Cincom with TOTAL continues to prosper), and Ingres, Informix, and Sybase from the Unix-centered vendors. By contrast, Progress, FileMaker, iAnywhere (within Sybase), and Intersystems (if you view hospital consortiums as typically medium-scale) have lasted and have done reasonably well. Of all of those SMB-focused database and development-tool companies, judged in terms of revenues, Progress (at least until recently) has been the most successful. For that, Joe Alsop certainly deserves credit.

 

But you don’t last that long, even in the SMB “niche”, unless you keep establishing clear and valuable differentiation in customers’ minds. Looking back over my 16 years of covering Progress and Joe, I see three points at which Progress made a key change of strategy that turned out to be right and valuable to customers.

 

First, in the early ‘90s, they focused on high-level database-focused programming tools on top of their database. This was not an easy thing to do; some of the pioneers, like Forte (acquired by Sun) and PowerBuilder (acquired by Sybase), had superb technology that was difficult to adapt to new architectures like the Web and low-level languages like Java.  But SMBs and SMB ISVs continue to testify to me that applications developed on Progress deliver SMB TCO and ROI superior to the Big Guys.

 

Second, they found the SMB ISV market before most if not all of their competitors. I still remember a remarkable series of ads shown at one of their industry analyst days featuring a small shop whose owner, moving as slowly as molasses, managed to sell one product to one customer during the day – by instantly looking up price and inventory and placing the order using a Progress-ISV-supplied customized application. That was an extreme; but it captured Progress’ understanding that the way to SMBs’ hearts was no longer just directly or through VARs, but also through a growing cadre of highly regional and niche-focused SMB ISVs. By the time SaaS arrived and folks realized that SMB ISVs were particularly successful at it, Progress was in a perfect position to profit.

 

Third, they home-grew and took a leadership position in ESBs (Enterprise Service Buses). It has been a truism that SMBs lag in adoption of technology; but Progress’ ESB showed that SMBs and SMB vendors could take the lead when the product was low-maintenance and easily implemented – as opposed to the application servers large-enterprise vendors had been selling.

 

As a result of Joe Alsop and Progress, not to mention the mobile innovations of Terry Stepien and Sybase, the SMB market has become a very different place – one that delivers new technology to large enterprises as much as large-enterprise technology now “trickles down” to SMBs. The reason is that what was sauce for the SMB goose was also sauce for the workgroup and department in the large enterprise – if it could be a small enough investment to fly under the radar of corporate standards-enforcers. Slowly, many SMBs have grown into “small large” enterprises, and many workgroups/departments have persuaded divisions, lines of business, and even data centers in large enterprises to see the low-cost and rapid-implementation benefits of an SMB-focused product. Now, big vendors like IBM understand that they win with small and large customers by catering to the needs of regional ISVs instead of the enterprise-app suppliers like SAP and Oracle. Now, Progress does a lot of business with large enterprises, not just SMBs.

 

Running a company focused on SMB needs is always a high-wire act, with constant pressure on the installed base by large vendors selling “standards” and added features, lack of visibility leading customers to worry about your long-term viability (even after the SMB market did far better in the Internet bust than large-enterprise vendors like Sun!), and constant changes in the technology that bigger folk have greater resources to implement. To win in the long term, you have to be like Isaiah Berlin’s hedgehog – have one big unique idea, and keep coming up with a new one – to counter the large-vendor foxes, who win by amassing lots of smaller ideas. Many entrepreneurs have come up with one big idea in the SMB space; but Joe Alsop is among the few that have managed to identify and foster the next one, and the one after that. And he managed to do it while staying thin.

 

But perhaps the greatest testimony to Joe Alsop is that I do not have to see his exit from CEO-ship as part of the end of an era.  With Sun, with CA as Charles Wang left, with Compuware, the bloom was clearly off the old business-model rose. Progress continues to matter, to innovate, and to be part of an increase in importance of the SMB market. In fact, this is a good opportunity to ask yourself, if you’re an IT shop, whether cloud computing means going to Google, Amazon, IBM, and the like, or the kind of SMB-ISV-focused architecture that Progress is cooking up. Joe Alsop is moving on; the SMB market lives long and prospers!

Yesterday, I had a very interesting conversation with Mike Hoskins of Pervasive about his company’s innovative DataRush product.  But this blog post isn’t about DataRush; it’s about the trends in the computer industry that I think DataRush helps reveal. Specifically, it’s about why, despite the fact that disks remain much slower than main memory, most processes, even those involving terabytes of data, are CPU-bound, not I/O-bound.

 

Mike suggested, iirc, that around 2006 Moore’s Law – under which, roughly every two years, the bit capacity of a computer chip doubled and processor speed correspondingly increased – began to break down. As a result, software written on the assumption that increasing processor speed would cover all programming sins against performance – e.g., data lockup by security programs when you start up your PC – is now beginning to falter, as the inevitable scaling of demands on the program is no longer met by scaling of program performance.

 

However, thinking about the way DataRush or Vertica achieves higher performance – in the first case by extracting greater parallelism within a process, in the second by slicing relational data into columns of same-type data instead of rows of different-sized data – suggests to me that more is going on than just “software doesn’t scale any more.” At the very high end of the database market, which I follow, the software munching on massive amounts of data has been unable to keep up with disk I/O for at least the last 15 years.

 

Thinking about CPU processing versus I/O, in turn, reminded me of Andrew Tanenbaum, the author of great textbooks on Structured Computer Organization and Computer Networks in the late 1970s and 1980s.  Specifically, in one of his later works, he asserted that the speed of networks was growing faster than the speed of processors.  Let me restate that as a Law: the speed of data in motion grows faster than the speed of computing on data at rest.

 

The implications of Tanenbaum’s Law and the death of Moore’s Law are, I believe, that most computing will be, for the foreseeable future, CPU-bound. Think of it in terms of huge query processing that reviews multiple terabytes of data. Data storage grows by something like 60% a year; if the speed of moving data off disk and into main memory grew only as fast as processor speed – that is, more slowly than stored data – then the time to pull a given percentage of that data off the disk would get longer every year. Instead, data in motion keeps up: even commodity SATA interfaces move data at multiple gigabits per second, and striped arrays of disks can feed processors data faster than they can perform non-trivial operations on it. To me, this says that storage is shoving data at processors faster than they can process it – and the death of Moore’s Law just makes things worse.

 

The implications are that the fundamental barriers to scaling computing are not processor geometry, but the ability to parallelize the two key “at rest” tasks of the processor: storing the data in main memory, and operating on it. In order to catch up to storage growth and network speed growth, we have to throw as many processors as we can at a task in parallel. And that, in turn, suggests that the data-flow architecture needs to be looked at again.

 

The concept of today’s architecture is multiple processors running multiple processes in parallel, each process operating on a mass of (sometimes shared) data. The idea of the data-flow architecture is to split processes into unitary tasks, and then flow parallel streams of data under processors that carry out each of those tasks. The distinction is that in one approach the focus is on parallelizing multi-task processes that the computer carries out on a chunk of data at rest; in the other, the focus is on parallelizing the same task carried out on a stream of data.

 

Imagine, for instance, that we were trying to find the best salesperson in the company over the last month, with a huge sales database not already prepared for the query. In today’s approach, one process would load the sales records into main memory in chunks and, for each chunk, maintain a running count of sales for every salesperson in the company. Yes, the running count is to some extent parallelized. But the record processing is often not.

 

Now imagine that multiple processors are assigned the task of looking at each record as it arrives, with each processor keeping a running count for one salesperson.  Not only are we speeding up the access to the data uploaded from disk by parallelizing that; we are also speeding up the computation of running counts beyond that of today’s architecture, by having multiple processors performing the count on multiple records at the same time. So the two key bottlenecks involving data at rest – accessing the data, and performing operations on the data – are lessened.
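Here is a minimal Python sketch of that contrast – my own illustration, not DataRush’s actual design; the record layout, chunk size, and worker pool are all assumptions. In this version the unit of parallelism is a chunk of the record stream rather than one core per salesperson, but the effect is the same: multiple cores perform the counts on multiple records at once, and the partial tallies are merged cheaply at the end.

# Sketch of parallelizing the "best salesperson" tally (illustrative only;
# not DataRush's architecture). Each worker tallies its own chunk of the
# record stream; the partial counts are then merged.

from collections import Counter
from multiprocessing import Pool
import random

def tally_chunk(records):
    """Unitary task: running count of sales per salesperson for one chunk of the stream."""
    counts = Counter()
    for salesperson, amount in records:
        counts[salesperson] += amount
    return counts

if __name__ == "__main__":
    # Hypothetical sales records: (salesperson, sale amount)
    records = [(f"rep{random.randrange(50)}", random.randint(1, 500))
               for _ in range(200_000)]

    chunk_size = 20_000
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

    with Pool() as pool:                 # roughly one worker per core
        partial_counts = pool.map(tally_chunk, chunks)

    totals = Counter()
    for partial in partial_counts:       # cheap merge of the partial tallies
        totals.update(partial)

    print("best salesperson:", totals.most_common(1)[0])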

 

Note also that the immediate response to the death of Moore’s Law is the proliferation of multi-core chips – effectively, 4-8 processors on a chip. So a simple way of imposing a data-flow architecture over today’s approach is to have the job scheduler in a symmetric multiprocessing architecture break down processes into unitary tasks, then fire up multiple cores for each task, operating on shared memory.  If I understand Mike Hoskins, this is the gist of DataRush’s approach.

 

But I would argue that if I am correct, programmers also need to begin to think of their programs as optimizing processing of data flows. One could say that event-driven programming does something similar; but so far, that’s typically a special case, not an all-purpose methodology or tool.

 

Recently, to my frustration, a careless comment got me embroiled again in the question of whether Java or Ruby or whatever is a high-level language – when I strongly feel that these do poorly (if examples on Wikipedia are representative) at abstracting data-management operations and therefore are far from ideal. Not one of today’s popular dynamic, functional, or object-oriented programming languages, as far as I can tell, thinks about optimizing data flow. Is it time to merge them with LabVIEW or VEE?  

So many memories … 

I first became really aware of Sun in the late ‘80s, when I was working for Prime.  At the time, Sun was one of the two new competitors in the area of engineering workstations – itself a new market.  The key area of competition was cross-machine file systems that made multiple workstations look like one system – in other words, you’d invoke a program on one machine, and if it didn’t reside there, the file system would do a remote procedure call (RPC) to the other.  Sun’s system was called NFS.

 

Yes, Sun won that war – but the way it did it was a harbinger of things to come.  With more than a little chutzpah, Sun insisted that Unix was the right way to do networked file systems.  Now, at the time, there was nothing to indicate that Unix was better (or worse) than any other operating system for doing cross-machine operations.  But Sun’s marketing tapped into a powerful Movement in computing. This Movement – geeks, first movers, technology enthusiasts, anti-establishment types – gave Sun a niche where established players like Prime and DEC could not crowd Sun off buyers’ short lists. The Movement was very pro-Unix, and that allowed Sun to establish itself as the engineering-workstation market leader.

 

Sun’s next marketing message appealed to the Movement very much:  it said it was going down-market and attacking Microsoft.  In fact, that became a feature of Sun for the next 15 years:  Scott McNealy would get up at Sun sales, investor, and analyst events and make cracks about Bill Gates and Microsoft.  Of course, when you looked closely at what was happening, that was pretty much hogwash:  Sun wasn’t cutting into the PC market, because it couldn’t cut prices that much.  Instead, Sun’s pricing was cutting into minicomputer low-end markets.  Because PCs and Novell LANs were cutting into those markets far more, the demise of minicomputer vendors is rightly ascribed to PCs.  But Sun’s real market growth came from moving up-market.

 

As everyone remembers, Scott McNealy as the public face of Sun had a real gift for pithy phrases criticizing competitors that really stuck in people’s minds. My favorite is the time in the early 1990s when IBM as Big Blue joined with Apple (corporate color: red) in a consortium to develop a common standard for some key market and crowd others out: Scott derided the result as “purple applesauce.”

 

But what really saved Sun in the early 90s was not the Movement nor Scott’s gift for credulity-straining claims. First among engineering-workstation vendors, Sun decided to move into servers. This took Sun from techie markets (although not consumer markets) to medium-scale to large-scale corporate IT – not the easiest market to crack. But at the time, lines of business were asserting their independence from central IT by creating their own corporate networks, and Sun was able to position itself against IBM, Unisys, NCR/AT&T, and HP in growing medium-scale businesses and lines of business.  While Silicon Graphics (number 2 in workstations) waited too long to move into servers and spent too much time trying to move down-market to compete with Microsoft, Sun grew in servers as the workstation market matured.

 

I remember talking to the trade press at that time and saying that Sun’s days of 90%/year revenue growth were over.  As a large company, you couldn’t grow as fast as a smaller one, and especially not in the server market.  I wasn’t wrong; but I certainly didn’t anticipate Sun’s amazing growth rate in the late 90s.  It was all due to the Internet boom in Silicon Valley.  Every startup wanted “an Oracle on a Sun”. Sun marketing positioned Java as part of the Movement – an object-oriented way of cutting through proprietary barriers to porting applications from one machine to another – and all the anti-Microsoft users flocked to Sun. Never mind the flaws in the language or the dip in programmer productivity as Java encouraged lower-level programming for a highly complex architecture; the image was what mattered.

 

Sun’s marketing chutzpah reached its height in those years.  I remember driving down the Southeast Expressway in Boston one day and seeing a Sun billboard that said “We created the Internet. Let us help you create your Internet.” Well, I was at Computer Corporation of America with full access to the Internet back in the early 1980s when the Internet was being “created”, and I can tell you that BBN was the primary creator of the Internet aside from the government and academia, and Sun was far less visible in Internet newsgroups than most other major computing vendors. Yet when I pointed this out to a Sun marketer, he was honestly surprised. Ah, the Koolaid was sweet in those days.

 

It is fashionable now to say that Sun’s downfall came because it was late to embrace Linux.  It is certainly true that Sun’s refusal to move aggressively to Linux cost it badly, especially because it ran counter to the Movement, and my then colleague Bill Claybrook deserves lots of credit for pushing them early and hard to move to Linux.  But I think the real mistake was in not moving from a company focused on hardware to one focused on software during the Internet-boom years. Oh, there were all sorts of excuses – B-school management theory said you should focus on services, and Sun did beef up its middleware – but it was always reacting, always behind, never really focused on differentiation via software.

 

The mood at analyst meetings during the Internet-bust years was highly defensive:  You guys don’t get us, we’ve shown before that we see things no one else sees and we’ll do it again, all our competitors are suffering too. And yet, the signs were there: talking to an Oracle rep, it became clear that Sun was no longer one of their key partners.

 

I am less critical of Jonathan Schwartz than some other commentators I have read. I think that he was dealt a hand that would lose no matter how he played it. The Internet-boom users had gone forever, leaving a Sun with too high a cost structure to make money from the larger corporations and financial-services markets that remained. In fact, I think that however well he executed, he was right to focus on open source (thereby making peace with the Movement) and software. At my last Sun do in 2007 when I was with Illuminata, the sense of innovation in sync with the Movement was palpable – even if Sun was mostly catching up to what other open-source and Web efforts like Google’s were doing. But underlying the marketers’ bravado at that event was depression at the endless layoffs that were slowly paring back the company. The analysts’ dinner was populated as much by the ghosts of past Sun employees as by those that remained.

 

Even as a shadow of its former self, I am going to miss Sun. I am going to miss the techie enthusiasm that produced some really good if way over-hyped ideas that continue to help move the industry forward. I am going to miss many of the smart, effective marketers and technologists still there that will probably never again get such a bully pulpit. I am going to miss a competitor that still was part of the ongoing debate about the Next Big Thing, a debate which more often than not has produced the Next Big Thing. I’m not going to miss disentangling marketing claims that sound good but aren’t true, or competitor criticisms that are great sound bites but miss the point, while watching others swallow the Sun line, hook and sinker included; but the days when Sun’s misdeeds in those areas mattered are long past.

 

Rest in peace, Sun.   All you employees and Sun alumni, take a bow.

 Never having had the chance to study system dynamics at Sloan School of Management (MIT), I was very happy recently to have the opportunity to read Donella Meadows’ “Thinking in Systems”, an excellent primer on the subject – I recommend it highly. Reading the book sparked some thoughts on how system dynamics and the concept of business and IT agility complement each other – and more importantly, how they challenge each other fundamentally. 

Let’s start with the similarities.  System dynamics says that most systems grow over time; my concept of business agility would argue that growth is a type of change, and agile businesses should do better at handling that type of change.  System dynamics says that people have a lot to say about system functioning, and people resist change; I would argue that business agility includes building organizations in which people expect and know how to handle change, because they know what to do.  System dynamics says that to change system behavior, it is better to change the system than replace components (including people); business agility says that business processes if changed can increase the agility of the company, even if the same people are involved.  

What about the differences? System dynamics really doesn’t have a good analog for the proactive side of agility. It mentions resilience, which is really the ability of a system to react well to a wider range of external changes; it mentions “self-organization” as elaborating the complexity of systems; and it talks about a system having a certain amount of room to grow without reaching constraints or limits; but there is an implicit assumption that unexpected or planned change is the exception, not the norm. Likewise, according to system dynamics, tuning the system to handle changes better is in the long run simply delaying the inevitable; a more effective redesign changes the system itself, as profoundly as possible. Agility says that change is the norm, that redesign should be aimed at improving the ability to “proact” as well as the ability to react, and that increased agility has a value independent of what system is being used. 

 System dynamics poses challenges to the practice of business agility, as well.  It says that how agility is to be improved matters:  have we found the right “leverage point” for the improvement, have we understood well enough how people will “game the system”, have we anticipated future scenarios in which the “agilified” process generates new constraints and reaches new limits?  To my mind, the key question that system dynamics raises about business agility is, are we measuring it without incorporating the importance of the unmeasurable?  Or, to put it in system-dynamics terms, in attempting to capture the business value of increased agility in terms of costs, revenues, and upside and downside risks, are we “emphasizing quantity over quality”? 
 
 

I think, based on the data on agility improvements I’ve seen so far, that one of the most interesting ideas about business agility is that focusing on agility results in doing better in long-term costs, revenues, and upside/downside risks than a strategy focused on costs, revenues, or risks themselves.  If this is true, and if organizations set out to improve agility “for agility’s sake”, I don’t think system dynamics and agility strategies are in disagreement:  both want to create a process, an organization, a business built to do the right thing more often (“quality”), not one to improve a cost or revenue metric (“quantity”).  Or, as Tom Lehrer the comedian once put it, we are “doing well by doing good”. 

So my most important take-away from gaining an admittedly basic understanding of system dynamics is that metrics like AFI (agility from investment, which attempts to measure the long-term effects of a change in agility on costs, revenues, and risks) explain the relative agility effects of various strategies, but should not be used to justify strategies not focused on agility that may improve costs, revenues, and/or risks in the short term, but will actually have a negative effect in the long term.  As Yoda in Star Wars might put it: “Build to change or be not agile; there is no accidental agility.” 

Recently I’ve been writing down some thoughts about business and IT agility:  What they are, how they evidence themselves in the bottom line and in risk (or its proxy, public-company beta), and how to measure them.  At the same time, in my study of “data usefulness” (how much potentially useful data actually gets used effectively by the appropriate target), I included a factor called ‘data agility,’ or the ability of the organization to keep up to date with new useful data sources.  What I want to do now is consider a larger set of questions:  what does agility mean in the context of the organizational process that ideally gathers all potentially useful information in a timely fashion and leverages it effectively, how can we measure it, and what offers the biggest “bang for the agility-investment buck”?

 

My initial pass at “information-handling agility” is that there are four key sources of change: unexpected changes in the environment, planned changes in the non-informational organization/process (which also should cover expected changes in the environment), unplanned changes in the non-informational organization, and new data sources/types. Therefore, information-handling agility includes the ability to react rapidly and effectively in supplying information about unexpected changes in the environment, the proactively planned but timely supply of information about expected changes in the environment, the ability to react rapidly and effectively by supplying different types of information in response to an unexpected internal change, and the ability to proactively seek and effectively use new data sources.

 

Note that, strictly speaking, this doesn’t cover all cases. For example, it doesn’t cover outside change during the information-handling process – but that’s reasonable, if in most cases that change either doesn’t alter the ultimate use of the information or is so important that it’s already handled by special “alerts”, as seems to be the case in real-world businesses. Likewise, the definition of data agility doesn’t cover all changes in the data, just the new-data-source ones; again, in the real world this seems to be much less of a problem.

 

To see how this can be measured and what offers the biggest “bang for the buck,” let’s create a “thought experiment”.  Let’s take Firm A, a typical firm in my “data usefulness” research, and apply the Agility From Investment (AFI) metric, defined as AFI = (1 + % change [revenues] – % change [development and operational change in costs]) * (1 + %change [upside risk] - % change [downside risk]) - 1. Let’s assume that Firm A invests in decreasing the time it takes to deliver data to the average user from 7 days to 3 ½ days – and ensures that the data can be used as effectively as before.  Btw, the different types of agility won’t show up again, but they underlie the analysis.

 

We can see that in the long term, assuming its competitors don’t match it, the “timeliness” strategy will improve revenues by increasing the agility of new-product development – but only if new-product development is agile itself.  If we assume an “average” NPD out there of ½ the firms being “agile enough”, then we have 15% improvement in ROI x ½ = 7 ½ % (the maximum change in long-term revenues).  Since we have only improved timeliness by ½, the maximum change becomes 3 ¾ %; the typical data usefulness is about 1/3, taking it down to 1 ¼ %; and information’s share of this takes it below 1%.

 

Costs are seemingly a different story. Reducing the time to deliver information affects not only the per-transaction IT costs of delivery, but also every business process that depends on that information. So it is reasonable to assume a 1% decrease in NPD costs, but also a 5% decrease in operational costs, for an average of 3%. Meanwhile, the increase in upside risk goes through a similar computation as for revenues, yielding less than a 1% increase in that type of risk.

 

That leaves downside risk.  Those risks appear to be primarily failure to get information in time to react effectively to a disaster, and failure to get the right information to act effectively.  Because the effect on risk increases as the response time gets closer to zero, it is reasonable to estimate the effect on downside risk at perhaps a 5% decrease; and since only 1/3 of the data is useful, that takes it down below 2%.  Putting it all together, AFI = (1 + 1% + 3%) * (1 + 1% + 2%) – 1 = a 7% overall improvement in the corporation’s bottom line and risk – and that’s being optimistic.

 

Now suppose we invested in doubling the percentage of potentially useful data that is effectively used – i.e., not timeliness but accuracy, consistency, scope, fit with the needs of the user/business, and analyzability. Performing the same computations, I come out with AFI = (1 + 1% + 1.5%) * (1 + 7.5% + 1%) – 1 = an 11% long-term agility improvement.

 

One more experiment: suppose we invested in immediately identifying key new data sources and pouring them into our information-handling process, rather than waiting ½ year or more. Again, applying the same computations, but with one more assumption (a high degree of rapid change in the sources of key data), AFI = (1 + 2% + 2%) * (1 + 7.5% + 8%) – 1 = a 20% improvement in long-term contribution to the company’s bottom line.
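For those who want to check my arithmetic, here is a small Python sketch of the AFI computation, with the three thought-experiment inputs hard-coded as decimals; the numbers are the same illustrative assumptions as above, not measured data.

def afi(rev_change, cost_change, upside_change, downside_change):
    """AFI = (1 + %change[revenues] - %change[costs]) * (1 + %change[upside risk] - %change[downside risk]) - 1."""
    return (1 + rev_change - cost_change) * (1 + upside_change - downside_change) - 1

scenarios = {
    # name: (change in revenues, costs, upside risk, downside risk), as decimals
    "halve delivery time":        (0.01, -0.03,  0.01,  -0.02),
    "double data usefulness":     (0.01, -0.015, 0.075, -0.01),
    "immediate new data sources": (0.02, -0.02,  0.075, -0.08),
}

for name, (rev, cost, up, down) in scenarios.items():
    print(f"{name}: AFI = {afi(rev, cost, up, down):.1%}")

# prints roughly 7%, 11%, and 20%, matching the three experiments above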

 

Now, mind you, I have carefully shaped my assumptions, so please do not assume that this analysis is exactly what any firm will experience over the long term.  There are, however, two take-aways that do not seem to be part of the general consensus today.

 

First, firms are probably typically underestimating the long-term effects of efforts aimed at improving data usefulness (including timeliness, effectiveness, and data agility). Reasonably enough, they are concerned with immediate decision-making and strategies that affect information-handling tangentially and piecemeal. However, the result, as I have noted, is a “whack-a-mole” game in which no sooner is one information-handling problem tackled than another pops up.

 

Second, firms are also clearly underestimating the long-term benefits of improving data usefulness compared to improving timeliness, and of improving data agility compared to improving both timeliness and data usefulness.  The reason for that appears to be that firms don’t appreciate the value for new-product development of inserting better and new data in the new-product development process, compared to more timely delivery of the same old data.

 

I offer up one action item:  IT organizations should seriously consider adding a Data Agility role. The job would be monitoring all organizational sources of data from the environment – especially the Web – and ensuring that they are appropriately added to the information-handling inputs and process as soon as possible.

My personal experiences as a programmer have led me to anticipate – apparently correctly – that agile development would deliver consistently better results by cost, profit, and agility metrics. What about the down side? Or, to put it another way, what else could users do that agile development hasn’t done?

After I left Prime, I started as a computer industry analyst at The Yankee Group. I will always be grateful to Yankee for giving me the broadest possible canvas on which to paint my visions of what could be – as I used to put it, I covered “everything below the mainframe”. Of course, in the early 90s that was only ½ of the computing field … Anyway, one of the things I wrote was a comprehensive report called “Software Development: Think Again or Fail”. [yes, I know; I was a bit immodest in those days]

The reason I bring this up is that two things mentioned in that report seem to be missing in agile development:

1. High-level tools, languages, and components; and
2. Tasking programmers with keeping track of particular markets.

As far as I can see, agile theory and practice doesn’t give a hoot whether those programmers are using Java, Perl, Python, or Ruby on Rails. I use those examples because they all have been touted as ways to speed up programming in the Java/open-source world, and because only tunnel vision leads people to conclude that they’re anything but dolled-up 3GLs that do very well at handling function-driven programming and only adequately at rapidly generating data-access and user-interface code. Compare that to M204 UL, drag-and-drop VPEs (visual programming environments), and the like, and I am forced to conclude that in some respects, these tools are still less productive than what was available 13 years ago. The point is that, even if agile succeeds in improving the speed of the individual programmer, the other productivity shoe will never drop as long as the first instinct of managers and programmers is to reach for a 3GL.

The second point is that although agile does well with making sure that programmers talk to individual end users, that is different from following the entire software market. Following a market gives context to what the end user wants, and allows the designer to look at where the market appears to be going, rather than where end users have been.

So my caution about agile development is that my experience tells me that so much more can be done. The results are startling and consistent; but they could be more so. Agile development deserves praise; but the worst thing for software development would be to assume that no more fundamental changes in the paradigm need be done.

Thoughts on Agility
07/09/2007

The more I write about agile software development, Key Agility Indicators, and users seeing an environment of rapid change as their most worrisome business pressure, the more I wonder why agility, or flexibility, is not a standard way of assessing how a business is doing. Here's my argument:

Agility is a different measure and target from costs or revenues or risks. It's really about the ability of the organization to respond to significant changes in its normal functioning, or to new demands from outside, rapidly and effectively. It's not just costs, because a more agile organization will reap added revenues by beating out its competitors for business and creating new markets. It's not just profits or revenues, because a more agile organization can also be more costly, just as an engine tuned for one speed can perform better at that speed than one tuned to minimize the cost of changing speeds; and bad changes, such as a downturn in the economy, may decrease your revenues no matter how agile you are. It's not just risk, because agility should involve responding well to positive risks and changes as well as negative ones, and often can involve generating changes in the organization without or before any pressures or risks.

That said, we should understand how increased or decreased agility impacts other business measures, just as we should understand how increased costs affect cash flow, profits, and business risks, or increased revenues affect costs (e.g., are we past the point where marginal revenue = marginal cost?), or the likelihood that computer failures will croak the business. My conjecture is that increased agility will always decrease downside risk, but should increase upside risk. Likewise, increased agility that exceeds the competition's rate of agility improvement will always have a positive effect on gross margin over time, whether through more rapid implementation of cost-decreasing measures or an effective competitive edge in implementing new products that increase revenues and margins. And, of course, decreased agility will operate in the opposite direction. However, the profit effects will in many cases be hard to detect, both because of stronger trends from the economy and from unavoidable disasters, and because the rate of change in the environment may vary.

How to measure agility? At the product-development level, the answer seems fairly easy: lob a major change at the process and see how it reacts. Major changes happen all the time, so it's not as if we can't come up with some baseline and some way of telling whether our organization is doing better or worse than before.

At the financial-statement level, the answer isn't as obvious. Iirc, IBM suggested a measure like inventory turnover. Yes, if you speed up production, certainly you can react to an increase in sales better; but what I believe we're really talking about is a derivative effect: for example, a change in the level of sales OVER a change in cost of goods sold, or a percent change in product mix over the percent change in cost of goods sold, or change in financial leverage over change in revenues (a proxy for the ability to introduce better new products faster?).
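To make the idea of a derivative measure concrete, here is a toy Python calculation over two periods of made-up financials; the ratios are illustrative guesses at an agility proxy, not a validated metric.

def pct_change(new, old):
    return (new - old) / old

# Hypothetical two periods of financials for one firm (made-up numbers)
prior   = {"sales": 100.0, "cogs": 60.0, "revenues": 100.0, "leverage": 1.00}
current = {"sales": 120.0, "cogs": 66.0, "revenues": 118.0, "leverage": 1.05}

# Derivative "agility proxies": ratios of period-over-period changes
sales_over_cogs = (pct_change(current["sales"], prior["sales"]) /
                   pct_change(current["cogs"], prior["cogs"]))
leverage_over_revenues = (pct_change(current["leverage"], prior["leverage"]) /
                          pct_change(current["revenues"], prior["revenues"]))

print(f"%change in sales / %change in COGS: {sales_over_cogs:.2f}")
print(f"%change in leverage / %change in revenues: {leverage_over_revenues:.2f}")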

So I wonder if financial analysts shouldn't take a crack at the problem of measuring a firm's agility. It would certainly be interesting to figure out whether some earnings surprises could have been partially predicted by a company's relative agility, or lack of it.

At the level of the economy, I guess I don't see an obvious application so far. Measures of frictional unemployment over total employment, I would think, would serve as an interesting take both on how much economic change is going on and on the extent to which comparative advantage is shifting. But I'm not sure that they would also get at how well and how quickly a nation's economy is responding to these changes. I suppose we could look at companies' short-term gross margin changes in a particular industry compared to overall industry gross margin changes to guess at each company's relative agility in responding to changes in the industry. However, that's not a good cross-industry yardstick...

And finally, is this something where unforeseen factors make any measurement iffy? If what drives long-term success is innovation, which is driven by luck, then you can be very agile and still lose out to a competitor who is moderately agile and comes up with a couple of once-in-a-generation, market-defining new products.

Everyone talks about the weather, Mark Twain said, but nobody does anything about it. Well, everyone's talking about agility, and lots of people are doing something about it; but I don't think anybody really knows how effective their efforts are. Ideas, anyone?


Uncategorized

Rethinking Printing
11/03/2011

A recent piece about HP’s plan to assert the key role of its printer division in modern computing, and move it beyond the “cash cow” status it presently seems to have by redefining printers’ use cases, left me underwhelmed. As a printer/copier/scanner/fax machine user, I could not see major benefits to me (or for that matter, to the typical SMB) from the proposed “paper is still relevant, cut printing costs, use the printer as a mailbox” strategy.

 

Still, it made me think.  If I Ruled the World, how would I like to redesign things?  How could printing technology play a key role in the Web 3.0 of the future? What follows is, I realize, an exercise in fantasy – it will probably never happen. Still, I think it might form a useful framework with which to think about what printing will really do in the future – and what it won’t.

 

Printing a Hamburger

 

One of my favorite old ads was one for A-1 steak sauce that began by saying, “My friends, what is a hamburger?” The point being, of course, that it was chopped steak, and therefore people should consider A-1 instead of ketchup. I would suggest that the whole printer area would benefit from asking, “My friends, what is a printer?”

 

My answer would be a “virtual” one: printing/scanning/copying is about creating a computer representation of a physical image – call it a graphic – that can be displayed on a wide variety of form factors. Thus, a scanner can take these physical graphics from the outside world; a “snapshot” can take such a graphic from a video stored inside the computer; software can intercept physical representations being sent to output devices from applications, such as internal representations of photos, internal representations of Web pages, internal representations of reports, check images, signed legal documents. These standardized representations of graphics then can be customized for, and streamed to, a wide variety of form factors: computer screens, cell phone and laptop displays, printers, email messages (attachments), or fax machines (although I tend to think that these are fading away, replaced by email PDFs).

 

Is this Enterprise Content Management? No. The point of such a “gateway” – one that represents many graphic formats in a few common ways and then customizes them for a wide variety of physical displays – is that it is aimed at physical display, not at managing multiple users’ workflow. Its unit is the pixel, and its strength is the ability to utterly simplify the task of, say, taking a smartphone photo and simultaneously printing it, emailing it, faxing it, and displaying it on another worker’s screen.

 

One of those output devices – and probably the most useful – is the printer/scanner/copier. However, the core of the solution is distributed broker software, much like the Web’s email servers, that passes the common representations from physical store to physical store and routes them to “displays” on demand. Today’s printer is simply the best starting point for creating such a solution, because it does the best job of capturing the full richness of a graphic.

 

We are surprisingly close to being able to do such a thing.  Documents or documents plus signatures can be converted into “PDF” graphics; photos into JPGs and the like; emails, instant messages, Twitter, and Facebook into printable form; screen and Web page printing is mature; check image scanning has finally become somewhat functional; and we are not far from a general ability to do a freeze-frame JPG from a video, just by pressing a button.  But, somehow, nobody has put it all together in such a “gateway.”
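To show what I mean by “putting it all together,” here is a minimal Python sketch of such a gateway. The names (Graphic, OutputChannel, GraphicGateway) are purely hypothetical, my own invention for illustration: one common graphic representation, many pluggable output channels.

# Minimal sketch of the "gateway" idea: one common graphic representation,
# many pluggable output channels. All names here are hypothetical, purely
# for illustration of the architecture described above.

from dataclasses import dataclass, field

@dataclass
class Graphic:
    """A device-independent captured image: raw pixels plus basic metadata."""
    pixels: bytes
    width: int
    height: int
    source: str            # e.g. "scanner", "video-frame", "web-page"

class OutputChannel:
    """One physical display target; subclasses adapt the graphic to the device."""
    def render(self, graphic: Graphic) -> None:
        raise NotImplementedError

class PrinterChannel(OutputChannel):
    def render(self, graphic: Graphic) -> None:
        print(f"printing {graphic.width}x{graphic.height} graphic from {graphic.source}")

class EmailChannel(OutputChannel):
    def __init__(self, address: str):
        self.address = address
    def render(self, graphic: Graphic) -> None:
        print(f"emailing graphic from {graphic.source} to {self.address} as an attachment")

@dataclass
class GraphicGateway:
    """Broker that routes one captured graphic to every registered display."""
    channels: list = field(default_factory=list)

    def register(self, channel: OutputChannel) -> None:
        self.channels.append(channel)

    def broadcast(self, graphic: Graphic) -> None:
        for channel in self.channels:
            channel.render(graphic)

if __name__ == "__main__":
    gateway = GraphicGateway()
    gateway.register(PrinterChannel())
    gateway.register(EmailChannel("someone@example.com"))
    snapshot = Graphic(pixels=b"", width=1024, height=768, source="video-frame")
    gateway.broadcast(snapshot)   # print, email, etc. simultaneously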

 

Extreme Fantasy

 

As a side note, I’d like to ask whether the following is at all feasible. Visualize your smartphone in your mind for a second. Suppose you had optional add-ons in back: one slim add-on containing 4-5 pieces of phone-sized “image paper,” and a laser-type “print head” that would place the appropriate color on each pixel of the paper. A button or screen icon on the front would allow printing of whatever was displaying on the screen – including, as I suggested, a “snapshot” of a video.

 

Can it be done? I don’t know. I know that I would love to have such a thing for my mobile devices, even crippled as it probably would be.  Remember the old Polaroid OneShot? The joy of that was the immediacy, the ability to share with someone physically present what was not a really high-quality photo, but was still a great topic of conversation.

 

Why haven’t printer vendors moved more in this direction? Yes, I know that attempts to provide cut-down attachable printers for laptops haven’t sold. But I think that’s because the vendors have failed to distinguish two use cases. One is the highly mobile case, where you’re running around a local office or factory and need to print from whatever form factor you’re carrying: small laptop, tablet, or even cell phone. That’s the case where all you need is the ability to print out 4-5 pages’ worth of info, bad quality and all. For that, you need a very light printer, preferably one that attaches to the form factor so that you can carry both – like strapping two books together.

 

The second is the “mobile office” case, where you go somewhere and then sit and work. In that case, you need full power, including a multi-sheet scanner/copier feeder – why don’t printer vendors realize how useful that is? It should be light enough so it can be carried like a briefcase or laptop, but it doesn’t have to be attachable; and it should be wireless and work with a MiFi laptop. Above all, it should be foldable so that it’s compact and its parts don’t fall off when you carry it.

 

Sad Conclusions

 

I realize that some of the details of the above fantasy may be unrealistic. The real point of the fantasy is that paper and the printer are not dead, no matter what, because, as HP pointed out, people keep using them more. And, I believe, that’s because it is just so useful to have a “static” representation of lots of information to carry around with you. Nothing on the horizon replaces that need: not the cell phone or laptop with its small display, nor the immobile PC or TV, nor even the tablet, with its book support but its inability to “detach” valued excerpts for file-cabinet storage and real-world sharing.

 

But not being dead doesn’t mean wild success. I believe that HP’s printer “vision” is still too parochial, because it fails to detach the core value proposition from a particular physical device. It may be safe to extrapolate present markets; but I believe that such a marketing approach falls far short of print technology’s potential.

 

Still, I don’t believe that printer vendors will do what I fantasize.  The risks of failure are too great, and the present revenues too comforting.  No, it seems likely to me that five years from now, the printer business will still be seen as a cash cow – because nobody asked what a hamburger is.

 

Pity.

I note from various sources that some VSPs (Very Serious People, to borrow an acronym from Paul Krugman) are now raising questions about HP’s financials in the wake of Mark Hurd’s departure for Oracle. To cherrypick some quotes: “They need to .. regain investor confidence”; “HP is in a difficult situation”; “It sounds like … Hurd took too many costs out of [the services] business”;  “HP … now … are known for inconsistency … It could become a value trap.” And, of course, there are comparisons with IBM, Dell, software vendors like Oracle, and so on.

 

I am certainly not an unalloyed HP booster. In fact, I have made many unflattering comparisons of HP with IBM myself over the years. However, I disagree with the apocalyptic tone of these pronouncements. In fact, I will stick out my neck and predict that HP will not implode over the next 3 years, and it will not fall behind IBM in revenues either, barring a truly epochal acquisition by IBM. I believe that these VSPs are placing too much emphasis on upcoming strategies bearing the imprint of personalities like HP’s Leo Apotheker and IBM’s Sam Palmisano, and not enough emphasis on the existing positioning of IBM, Dell, HP, Microsoft, Oracle, and Apple.

 

Let’s Start With the Negative!

 

So what are these problems that I have criticized HP for? Well, let’s start with its solution portfolio. Of the major computer vendors, HP may be the closest to a conglomerate – and that’s not a good thing. Let’s see: it has a printer/all-in-one company, a PC company, one or two server companies (including Tandem), a business/IT services/outsourcing company, and even, if you want to stretch a point, an administrative software utility company (the old OpenView) with some more recent software (Mercury Interactive testing) attached. Moreover, because HP has until very recently not tried very hard to stitch these together, either as solutions or as a software/hardware stack, they are not as integrated as others’ – strikingly, not as integrated as IBM, which was once known for announcing global solutions whose components turned out to be in the early stages of learning to talk to each other. At first glance, HP’s endowments seem impressive; closer up, they seem, as someone once said in another context, like cats fighting in a burlap bag.

 

Moreover, HP, unlike any of the other companies I have mentioned except Dell, simply does not have software in its DNA. Back in the early 1990s, a Harvard Business Review article asserted that hardware companies at the time would suffer unless they became primarily services companies; I asserted then, and I assert now, that they also should become software companies.

 

I believe that this lack of software solutions and development personnel has several bad effects that have decreased HP’s revenues and profits by at least 20% over the last 20 years. Software development connects you with the open source community, the consumer market, and the latest technologies that impinge on computer vendors quite effectively. It allows your services arm to offer more leading-edge services, rather than trying to customize others’ software in a quick and dirty fashion for one particular services engagement. And, in the end, it moves hardware development ahead faster, as it focuses chip development on major real-world workloads that your software supports. Moreover, as IBM itself has proven, even if investment in software doesn’t pay off immediately, eventually you get it right.

 

A third, more recent problem, does relate to Mark Hurd’s cost focus – although the same might be said for IBM. A truism of business strategy proved by the problems created by CFO dominance at US car companies in the 1980s and 1990s is that too long a focus on the financials rather than product innovation costs a company dearly. It is quite possible that HP has eaten its innovation seed corn in the process of turning into a “consistent” money maker.

 

Finally, HP has in the past had a tendency in its hardware products to be “the nice alternative”: not locking you in the way Sun or Microsoft does, willing to provide a platform for Oracle and Microsoft databases, open to anyone’s middleware. Whatever the merits of that approach, it creates a perception among customers that HP is not leading-edge in the sense that Apple, or even Microsoft and Oracle, are. Twenty-one years ago, in my first HP briefing, famous analyst Nina Lytton showed up in a brilliant pink outfit and immediately announced that HP’s strategy reminded her of a “great pink cloud.” That sort of rosy but not clear-cut presentation of one’s strategy and future plans does not create the sort of excitement among customers that Steve Jobs’ iPhone and iPad announcements, or even IBM’s display of Watson, do.

 

 

It Doesn’t Matter

 

And yet, when we look at HP vs. IBM over the longer run – from 1990, when I started as an analyst, to now – the ongoing success of HP is striking. At the start, IBM’s yearly revenues were in the $80s billion, and HP’s were perhaps a quarter as much. Today, HP’s revenues are perhaps 1/3 greater than IBM’s: IBM is at around a $100B run rate and HP perhaps at $135B. Some of that HP growth can be attributed to acquisitions; but a lot of it comes from organic growth of its core business and of the businesses it acquired. To put it another way, IBM has been very successful at growing its profit margin; HP has been very successful at growing.

 

And growth at that scale is not easy. Companies have been trying to knock IBM off its Number 1 perch in revenues since the 1960s, and only HP has succeeded. Nobody else is in striking distance yet – Microsoft is at a $70B run rate, apparently, with seemingly no prospects of exceeding $100B in the next couple of years.

 

The reason, I think, is HP’s acquisition of Compaq back in 2002. Since then, having beaten back Dell’s challenge, HP is in a very strong position in the PC-form-factor scale-out markets. Despite recent apparent gains by System x, IBM focuses on the business market, and all of the other vendors mentioned above do not compete in PC hardware. Moreover, the PC market aligns HP with Intel and Microsoft, and thereby is relatively well protected from IBM’s POWER chips or even Oracle/Sun’s SPARC chipset, whatever life that has left in it (there is still no sign that AMD threatens Intel’s chip dominance significantly).

 

So let HP’s scale-up servers and storage falter in technology (e.g., the Itanium bet) relative to IBM and EMC, if they do; with the steady decrease in market share by Sun, HP is, and will in the short term remain, the IBM alternative in this market. Let Dell and IBM’s System x tout their business Linux scale-out prowess; the prevalence of existing scale-out PCs in public clouds and Microsoft LOBs means that HP is well positioned to handle competition in those areas over the next couple of years.

 

And who else but IBM can attack HP?  Oracle may talk big, but Sun’s market share appears to be shrinking, and 15 years of Larry Ellison talking about the virtual desktop and Oracle Database handling your filesystem have failed to make a dent in Windows, much less Wintel. Microsoft has no need to move into hardware, and apparently no desire. Apple appears to be playing a different sport altogether.

 

In fact, the only serious threat to HP over the short term is any major movement of consumers off PCs and laptops as they move to smartphones and tablets.  Here again, I think, analysts are too apocalyptic. Yes, iPhones can handle an astonishing range of consumer tasks, but not as easily or in as sophisticated a fashion as PCs, and users still continue to want to create and organize personal stores of photos etc. as well as share them – something the smartphone does not yet do. Meanwhile, the tablet offers the small form factor and attractive user interface that today’s laptop does not; but it is more likely that the tablet will acquire PC features, than that it will morph into an iPhone.

 

And Whither IBM?

 

In fact, an interesting question, given IBM’s status as the most direct competitor of HP, is whether IBM can begin to speed up its revenue growth.  IBM has been delivering strong financials for almost 20 years, while talking a good game about innovation.  In fact, I would say that they have indeed been innovative in some areas – but not enough yet to grow their revenues fast.  Will the big innovation be green technology?  The cloud? Analytics? Because, let’s face it, the only two things that recently have delivered big revenue gains are cell phones and Web 2.0/social media – and Apple, Google, and Facebook are the ones reaping the most revenues from these, not IBM.

 

In fact, as I have argued, IBM can do quite well with its present strong position in scale-up, but it cannot dominate the business side of computer markets when HP, Microsoft, and Intel have such a strong position in scale-out, nor can it match HP in consumer markets – and these affect business sales.

 

User Bottom Line: Don’t Panic, Do Buy Both

 

It would be nice, wouldn’t it, to be back in the old days when no one ever got fired for buying IBM systems, or Oracle databases? Well, those days are gone forever, and no blunder or inspired move by Apotheker, Palmisano, Hurd, Ellison, Ballmer, Dell, or Jobs will bring them back.

 

Given that, the smart IT buyer will acquire a little of each, in the areas in which each is best. It is true, for example, that IBM has exceptional services scope that allows effective integration – including integration of scale-out technology from Microsoft, Intel, and HP, or for that matter (System x) from IBM itself. This “mixed” enterprise architecture is the New Normal; vendor lock-in or a tide of Web innovation fueled by an Oracle and a Sun is so 1990s.

 

It is said that when Mary Queen of Scots wed the King of France, she was saluted with: “Let others wage war; let you, happy Scotland, bear children” (it’s better in Latin). Let the VSPs and apocalyptic analysts assert that vendor personalities waging war should affect your buying decision; you, happy CIO, should buy products from any of the vendors mentioned above, without worrying that a vendor is about to go belly-up in two seconds. And the vendors that have the greatest ability to integrate, like IBM and HP, will do quite well if you do.

 

 

I would like to start by apologizing to MEDecision for this piece. Instead of writing only about the Patient-Centered “Medical Home” (PCMH) – a great idea, and one about which MEDecision has shown great insight as they move to provide PCMH implementation support – I have chosen to focus instead on the relationship between this idea and that of “business agility” in health care. In other words, I am writing about what I want to write about, not what they deserve to be heard about.

 

That said, the effort to make businesses of all stripes more agile provides an excellent perspective on PCMH, its likely effects, its evolution, and the pluses and minuses (mostly pluses) of MEDecision’s offerings and plans. If, instead of thinking of PCMH as the goal, process improvements as the effects, and MEDecision’s and other offerings as the means, we think of increased health care organization agility as the goal, overall improved outcomes and vendor ROI as the side-benefits, and the PCMH as the means, I believe that we get a clearer picture of how much PCMH really matters in the long run.

 

So let’s start by drawing the picture, very briefly: What is business agility? What is PCMH? What is MEDecision doing about PCMH? Then, let’s see just how much more agile, and more effective, PCMH will probably make health care – and what might be done even better.

 

Some of my conclusions may well surprise or shock you. Specifically, I suggest that at certain points less of an emphasis on quality will produce better outcomes for the customer/patient. Moreover, at certain points less of an emphasis on cutting costs will produce lower costs. And finally, I assert that the main value of PCMH in the long run is not that it puts more control in the hands of a single primary care physician or nurse practitioner, but rather that it is more capable of frequently interacting with and adapting to the needs of the customer/patient.

 

How could I possibly draw these conclusions? Let’s see.

 

What Is Business Agility?

 

I define business agility as the relative ability of an overall organization to handle change; it includes both reactive and proactive agility. Manifestations of agility include both increased speed and increased effectiveness of change. Side-effects of increased agility are lowered costs, lowered downside risks, increased upside risks (this is good!), increased revenues, increased margins, and increased customer satisfaction. These side-effects can occur over both the long term and the short term.

 

Initial data indicates that the most effective, and perhaps the only truly effective, strategies for increasing business agility are to:

·         Focus primarily on agility itself, and on costs, revenues, and margins only as secondary concerns.
·         Measure agility primarily as time to value, not as an ability to meet deadlines or time to market.
·         Establish processes similar to those of agile software development.
·         Scale agile efforts by making the scaling tools and resources fit the needs of the people driving the agile process, not by constraining those people according to the needs of the tools or the bottom line.

Key counter-intuitive findings about business agility strategies are:

·         New-product-development agility improvements typically have a greater positive effect (on costs, revenues, etc.) than those which enhance operational or disaster/risk management agility.
·         Improvements in handling external changes have a greater positive effect than improvements in handling internal changes.
·         Reductions in downside risk can actually decrease agility and have a negative overall effect. Greater upside risk is almost always a good thing.
·         Improvements in proactive agility produce greater positive effects than improvements in reactive agility. However, agile organizations should focus on improving processes rather than on increasing the number of things processes anticipate.
·         Scaling agile processes is doable. However, scaling them by “compromise” with existing non-agile processes is likely to inhibit, reduce or negate process and business agility improvements.

The PCMH, and MEDecision’s Take on IT

The term PCMH, frankly, is confusing. As it has evolved, it centers not around the patient or consumer (a user of health care services who may or may not be a patient at any one time), but around a central point of patient management, typically a “nurse practitioner” or “health care coordinator” operating from the point of view of the primary care physician for a consumer. Likewise, the “medical home” is not the consumer’s physical home, but a “virtual home” for each consumer’s patient processes, usually located physically and/or logistically within the health care system/infrastructure itself.

 

The key innovative concepts of the PCMH compared to present ways of handling things are:

1.       It’s comprehensive (including all medical organizations that a patient interacts with, and all parts of the patient process)
2.       It’s coordinated (i.e., there is one integrated process, rather than numerous isolated ones)
3.       It’s continuous (not really, but it means to be and does bridge some of the gaps in previous processes)
4.       It’s quality- and efficiency-driven (this is not explicit in current definitions, but is the likely outcome of today’s focus on improved quality and reduced costs)

In attempting to support implementation of the PCMH, MEDecision starts from a position of strength through various solutions. Its Alineo provides extensive support for real-world case (read: patient process) management by hospitals; Nexalign offers “decision support” for PCP-patient interactions; and InFrame provides cross-provider “health information exchanges” (HIEs). All three include collaboration tools that make the integration of separate processes into one coordinated process much more straightforward. All three ensure that insurance providers play their inescapable roles in the process. And today’s widespread implementation of MEDecision ensures that its current systems are collecting a large chunk of the quality and efficiency information that will be needed in, by, and to sustain the PCMH.

 

As you might expect, MEDecision’s immediate plans for the PCMH include extension of Alineo for use by a PCP’s “care coordinator” and development of “mini-HIEs” for the offices of PCPs. Further down the line, we might expect “telemedicine” for remote patient-PCP and patient-process interactions, “centers of excellence” for quality best practices, and better information-sharing with patients (and/or consumers) via the Web.

 Looking Through the Lens of Business Agility: Marketing Myopia

More than forty years ago, an article in Harvard Business Review titled “Marketing Myopia” introduced a fundamental tenet of good marketing: you must know what market you are really in; that is, you must know the biggest fit that you can make with your ultimate consumer’s present and future needs. Over the years, that tenet has taken many forms, from positioning cars as purveyors of feelings in the 1980s to ongoing one-to-one customer relationship management in the 1990s and leveraging social networking in the 2000s. Always, always, it has been a key component of business agility, because its success depends on the ability to continuously adapt to and anticipate consumers’ needs.

 

The “know your market” tenet also allows us to understand many of the key agility-related advantages – and potential flaws – of the PCMH. From this viewpoint, government and insurer are middlemen (if highly important middlemen); the real market is the vast majority of consumers who want the feeling embodied in the statement “I feel healthy [or can feel healthy]”. That does not at all mean that vendors should aim at deceiving consumers; in the long run, that never works. However, it does mean that the aim of vendors should be to constantly use consumer input to fine-tune their services to deliver both the objective reality and the subjective feeling of potential health to consumers, with government and insurers acting as tools for scaling agility, not reasons to shift focus from agility.

 

Looking at the PCMH concept, then, we find many attractive features for increasing agility and providing more agile processes. There is increased interaction with the consumer leading to increased personalized knowledge of the consumer (with privacy protections). There is coordination across organizations, plus the ability of one person to drive an individualized patient process, and adapt it to that patient’s needs. There is movement of process control, from people with rare interactions with a consumer, to people with somewhat more frequent interactions with the consumer. There is at least the beginning of the concept of “patient centric” processes.

 

There are also serious questions about the PCMH concept, centered on the idea of its being quality- and efficiency-driven. Business agility theory suggests that focusing on quality and efficiency rather than agility is self-defeating: it produces less quality and less efficiency than focusing on agility. How so? Consider how focusing on the speed of the patient process rather than its effectiveness – and assuming that effectiveness means an increased probability of the “right” surgery for an ailment, rather than an increased ability to “spiral in on” the right diagnosis and fine-tune it for evolving symptoms – fails to put the patient process in the context of an overall lifetime consumer/health-system interaction. Consider also how inadequate such an approach is in a constantly changing environment, as the consumer and society change, and how it focuses on tool and physician costs rather than on supporting the ability of tools and physicians to better adapt to consumer needs.

 

These are all clear and present dangers of so-called quality-driven, efficiency-driven processes. The results of such approaches are more of what we have been seeing in the last forty years: dissatisfaction of every party in the process; cost squeezes that somehow increase expenses; process controls that eliminate touchy-feely services along with so-called inefficiencies; cookbook medicine that reduces the immediate risk of medical malpractice suits but increases the likelihood of poor outcomes that in turn increase the likelihood of malpractice suits; insurer and government regulations that continually lag medical knowledge and user needs; and usually inadequate, often adversarial problem resolution processes.

 

To correct these shortcomings, I believe it is necessary to go over the PCMH with a fine-tooth comb, aiming to make it agile rather than high-quality or cost-effective. For example, the “medical home” should be virtual, allowing a hand-off of central control to the hospital when the consumer is an in-patient, and to an in-home nurse or the patient him/herself for elderly consumers living in age-adapted homes. Use of outcome data should focus on consumer-driven changes in the service, the process, or process agility, not who did what wrong.  There should be greater focus on the use of up-to-date consumer data such as lifestyle decisions (private) that correlate with ailments, health worries, and “what if you face this situation?” scenarios. Annual checkups, specialist appointments, and hospital treatments should be all part of a strategy for continuous, dynamic interactions with the patient, with both personalized (diagnostic) and process-focused (procedures/surgeries) professionals fully aware of the patient’s historical context and able to adapt immediately via virtual access to other professionals in other parts of the process at any time. Over-provisioning to handle surges should be a necessary part of the cost structure, metrics and incentives on all sides should start with “time to value” rather than “outcome price/performance”, and virtualization (the ability to untether the process from any physical location) should be everywhere.

 Conclusions and Suggestions

Although business agility applied to health care would be nice, it is very unlikely that it will become pervasive in the next few years. After all, cutting-edge agile software development is still not really the norm in business IT, a decade after the introduction of the concept. Still, the PCMH is an excellent place to start, and MEDecision is in an excellent position to foster both PCMH and PCMH-driven agility right now – if some of the rough edges concerning quality and efficiency obsession can be smoothed away. Let me repeat my counter-intuitive conclusions:

·         Less emphasis on quality and more emphasis on adaptability in the PCMH patient process should lead to higher-quality outcomes;
·         Less emphasis on cost-cutting by increased efficiency and more emphasis on more flexible patient processes should lower costs;
·         Less focus on the health care professional and more focus on frequent interactions with patients, even outside of the patient process, should allow PCMH to provide more satisfaction to both professionals and customers.

It must be noted that the biggest barrier to improved agility, oddly enough, is not the government, but the amazing ability of insurers to continually shoot themselves in the foot, business-wise. Here are two recent examples. First, the insurance industry should have known by 2007, from the climate science then available, that certain parts of Florida were or would soon become difficult markets for home insurance. It appears that only this year are insurers considering this possibility, and I doubt that they will do anything effective until 2012. The result: five years of losses, easily predictable and preventable.

 

The second example (altered to protect the innocent) is of a condition identified in the late 1980s that, under certain circumstances, actually grants individuals a longer healthy lifespan. One insurance company, hearing the news of a diagnosis in the early 1990s but using medical knowledge 10 years out of date to assess risk, refused further life insurance for a customer except at exorbitant prices, despite the fact that available data and family history confirmed that the customer had, if anything, fewer risks.

 

They have continued to do so for the past 20 years, as the customer approaches average male life span without serious problems, and in the process have cost themselves $1 million in additional life insurance sales, $300,000 in long-term care insurance premiums, and about $100,000 in wasted sales and customer-service costs – and the customer is extremely dissatisfied. Meanwhile, other insurance companies have typically followed the uninformed lead of the original insurance company, without even bothering to recheck with their medical experts.

Let’s face it: insurers are so unagile and so focused on out-of-date risk assessments and costs that they often wind up with more downside risk, higher costs, and lower profits. Where should insurers look for advice on how to become more agile? Why, to software vendors, who are the leaders in implementing business agility, of course. Software vendors such as – well, such as MEDecision, who are themselves implementing agile new-product software development. Hmm. Didn’t I say I wasn’t going to talk about MEDecision? Oh, well.

The recent IBM announcement of new “virtualization” capabilities had the usual quota of significant improvements in value-add to the customer – but its real significance was a signpost to a further evolution of the meaning of “virtual”, a step forward that, as always before, drives new user efficiencies and effectiveness.

 

The focus of the announcement was on Intel-based hardware, and the IBM briefing pointed out ways in which the IBM solution went beyond so-called “basic” virtualization on PC-server-style Intel platforms, and the resulting 25-300% improvements in costs, TCO (total cost of ownership), utilization, power and time expended, etc. The extensions included the usual (for those following IBM) suspects: Tivoli (especially Systems Director), System x with blades, CloudBurst, SONAS, IBM Service Delivery Manager, and popular third-party software from the likes of VMWare, CA, BMC, and Microsoft. The emphasis was on improvements in consolidation, workload management, process automation, and solution delivery. Nothing there to signal a significant step forward in virtualization.

 

Or is there? Here’s what I see as the key step forward, and the key value-add. Sorry, it’s going to require a little history.

 

Whither Virtual?

The word virtual, used to mean a disconnect from physical computers, actually has changed quite a bit since its beginning in the 1960s – and so virtualization, meaning changing existing architectures in the direction of such a disconnect, is a far bigger job now than then. It started in the 1960s with “virtual memory”, the idea that a small bit of RAM could be made to look like a massive amount of main memory, with 80-90% of the performance of physical RAM, by judicious access to greater-capacity disk. Underlying this notion – now enshrined in just about every computer – was the idea of a “veneer” or “false face” to the user, application, or administrator, under which software desperately labored to make the virtual appear physical.
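To make the “veneer” idea concrete, here is a minimal sketch of demand paging in Python – purely my own illustration under simplified assumptions (a naive eviction policy, dictionaries standing in for RAM and disk), not any real operating system’s design:

```python
# Illustrative sketch only: a toy demand-paging "veneer" that makes a small
# amount of "RAM" look like a much larger memory by spilling pages to a
# slower backing store. The names and eviction policy are invented.

PAGE_SIZE = 4096

class ToyVirtualMemory:
    def __init__(self, ram_pages, total_pages):
        self.ram = {}                   # page number -> bytes currently in "RAM"
        self.disk = {}                  # page number -> bytes spilled to "disk"
        self.ram_pages = ram_pages      # how many pages fit in physical RAM
        self.total_pages = total_pages  # size of the visible (virtual) address space

    def _page_in(self, page):
        """Bring a page into RAM, evicting another page if RAM is full."""
        if page not in self.ram:
            if len(self.ram) >= self.ram_pages:
                victim, data = self.ram.popitem()   # naive eviction choice
                self.disk[victim] = data            # write the victim back out
            self.ram[page] = self.disk.pop(page, bytearray(PAGE_SIZE))
        return self.ram[page]

    def read(self, addr):
        page, offset = divmod(addr, PAGE_SIZE)
        assert page < self.total_pages, "address outside the virtual address space"
        return self._page_in(page)[offset]

    def write(self, addr, value):
        page, offset = divmod(addr, PAGE_SIZE)
        assert page < self.total_pages, "address outside the virtual address space"
        self._page_in(page)[offset] = value

vm = ToyVirtualMemory(ram_pages=4, total_pages=1024)  # 16 KB of "RAM", 4 MB visible
vm.write(3_000_000, 42)     # an address far beyond what the "RAM" alone could hold
print(vm.read(3_000_000))   # -> 42
```

The caller addresses a 4 MB space while only 16 KB of “RAM” ever exists; everything else is the veneer quietly shuffling pages to and from the slower store.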

 

Shortly afterwards, in the late 1960s and early 1970s, the “virtual machine” appeared, in such products as IBM’s CP/CMS and VM/370. Here, the initial meaning of “virtual” was flipped on its head: instead of placing a huge “face” on a small physical amount of memory, a VM put a smaller “face” mimicking a physical computer over a larger physical one. At the time, with efficient performance the byword of vendors, VMs were for the upper end of the market, where large amounts of storage meant that running multiple “machines” on a computer provided transactional concurrency that in certain cases made it worthwhile to use VMs instead of a monolithic operating system to run multiple applications on a single machine. And so it remained, pretty much, until the 1990s.

 

In the 1990s, the Internet and gaming brought to consumers’ attention the idea of “virtual reality” – essentially, a “false face” over an entire computer, set of computers, or even an Internet that created full-fledged fantasy worlds a la Second Life. At almost the same time, Sun’s espousal of Java brought the notion of VMs as portable “single-application computers” across platforms and vendors. Both of these added the notion of virtuality across multiple physical machines.

 

The key addition to the meaning of “virtual” over the last decade has been the notion of “storage virtualization”, and more recently a variant popularized by Composite Software, “data virtualization”. In this case, the disconnect is not so much between one physical part of a machine and another, or even between two machines, but between programs across physical machines and data across physical machines. The “veneer”, here, presents physical storage of data (even across the Internet, in cloud computing) as one gigantic “data store”, to be accessed by one or multiple “services” that themselves are collections of applications disconnected from specific physical computers.
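As a rough illustration of that last “veneer” – emphatically a sketch of the general idea, not Composite Software’s or any vendor’s actual product – here is a tiny data-virtualization facade that answers queries against one logical store while the rows actually live in several physical back ends:

```python
# Illustrative sketch: a "data virtualization" facade that presents several
# physical back ends (in-memory tables here; files, databases, or remote
# services in real life) as one logical data store.

from typing import Dict, Iterable, List


class Backend:
    """Anything that can yield rows as dictionaries."""
    def rows(self) -> Iterable[Dict]:
        raise NotImplementedError


class InMemoryBackend(Backend):
    def __init__(self, rows: List[Dict]):
        self._rows = rows

    def rows(self) -> Iterable[Dict]:
        return iter(self._rows)


class VirtualDataStore:
    """The 'veneer': callers see one store, never the physical back ends."""
    def __init__(self):
        self._backends: List[Backend] = []

    def register(self, backend: Backend):
        self._backends.append(backend)

    def select(self, **criteria) -> List[Dict]:
        # Fan the query out to every physical store and merge the results.
        result = []
        for backend in self._backends:
            for row in backend.rows():
                if all(row.get(k) == v for k, v in criteria.items()):
                    result.append(row)
        return result


store = VirtualDataStore()
store.register(InMemoryBackend([{"id": 1, "site": "on-prem", "status": "ok"}]))
store.register(InMemoryBackend([{"id": 2, "site": "cloud", "status": "ok"}]))
print(store.select(status="ok"))   # one logical answer from two physical stores
```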

 

Note that at each stage, an extension of the meaning of virtual meant major cost, performance, and efficiency benefits for users – not to mention an increasing ability to develop widely-used new applications for competitive advantage. Virtual memory cost much less than physical memory. The virtual machine, as it evolved, allowed consolidation and movement onto cheaper remote platforms and, ultimately, the cloud. Storage virtualization has provided enormous data-management cost savings and performance improvements, especially as it allows better workload and data-access parallelization and data compression. And the latter two have played a key role in the second-generation business success of the Web.

 

So what’s next in the saga of virtualization? And will this, too, bring major benefits to users?

 

Tying It All Together

One can imagine a few significant ways to extend the meaning of “virtual” at this point – e.g., by applying it to sensor-driven data streams, as in virtual event stream processing. However, what is significant to me about IBM’s announcement is that it includes features to tie the existing meanings of virtual together.

 

Specifically, it appears that IBM seeks to create a common “virtual reality” incorporating virtual machines, storage virtualization, and the “virtual reality” of the cloud. It provides a common “veneer” above these (the “virtual reality” part) for administrators, including some common management of VMs, services, and storage. Underneath that, it provides specific links between these for greater integration – links between virtualized storage (SONAS), virtualized applications (VMWare, KVM), and virtual services (IBM’s cloud support), all disconnected from physical machines. These links cover specific activities, including the consolidation, workload management, process automation, and solution delivery tasks cited above. Added to other IBM or third-party software, they can provide a full disconnected virtual infrastructure for any of today’s key IT or consumer needs.
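The “common veneer” is easier to picture with a sketch. The class and method names below are invented for illustration and do not correspond to Tivoli, Systems Director, or any real IBM management API; the point is only that one administrative interface can drive VMs, virtualized storage, and cloud services alike:

```python
# Hypothetical sketch of a "common veneer" over several flavors of virtual
# resource. All names here are invented for illustration; they are not any
# vendor's real management API.

class VirtualResource:
    def __init__(self, name):
        self.name = name

    def provision(self):
        raise NotImplementedError


class VirtualMachine(VirtualResource):
    def provision(self):
        return f"booted VM image '{self.name}'"


class VirtualVolume(VirtualResource):
    def provision(self):
        return f"carved volume '{self.name}' from the storage pool"


class CloudService(VirtualResource):
    def provision(self):
        return f"deployed service '{self.name}' to the cloud"


class UnifiedConsole:
    """One administrative veneer over all the kinds of 'virtual'."""
    def __init__(self):
        self.resources = []

    def add(self, resource: VirtualResource):
        self.resources.append(resource)

    def provision_all(self):
        return [r.provision() for r in self.resources]


console = UnifiedConsole()
console.add(VirtualMachine("web-tier"))
console.add(VirtualVolume("customer-data"))
console.add(CloudService("analytics"))
for line in console.provision_all():
    print(line)
```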

 

So, as I see it, that’s the significant step forward: not coming up with a new meaning of “virtual”, but integrating the use cases of the various existing meanings of virtual – tying it all together. And the benefits? The usual benefits of such integration, as evidenced by IBM’s “success stories”: greater efficiency across a wider range of existing infrastructure, leading to major cost, performance, and ultimately revenue improvements. These will be evidenced especially in the cloud, since that’s what everyone is paying attention to these days; but they go beyond the cloud, to infrastructures that may well delay cloud integration, defer it forever, or move directly to the cloud’s successor architectures.

 

Users’ Bottom Lines

The key message for users, to my mind, is to treat the new manifestation of virtualization as an added reason to upgrade one’s flavor of virtualization, although not necessarily a reason to upgrade in and of itself. Rather, it should make one’s specific IT budget plans for the immediate future related to virtualization more of a slam dunk, and cause IT planners to dust off some “nice to haves”. And for those seeking a short-cut to current cloud technology, well, wrapping all sorts of virtualization in one bundle seems like a good bet.

 

When users make a choice of vendor, I would say that in most cases IBM ought to be at least in the conversation. This is not to say one vendor’s approach is clearly superior to another’s on its face (remember, virtualization is a “false face”!). However, in this announcement IBM has pointed to specific use cases where its technology has been used and has achieved significant benefits for the user; so much of the implementation risk is reduced.

 Above all, IT buyers should be conscious that in buying the new virtualization, they are tapping further into an extremely powerful current in the computer technology river, and one that is currently right in the main stream. There is, therefore, much less risk from riding the current via new-virtualization investment than from being edged slowly into a backwater through inaction. As virtualization enables cloud, and cloud changes virtualization’s face, the smart IT buyer will add that new face to customer-facing apps, using vendors like IBM as makeup artists supreme.

Recently, I read yet another blog post in which a user ranted against an annoying feature of the latest consumer word processing software that wiped out work back to an autosave. The problem, he seemed to think, lay with those annoying developers who kept adding features that weren’t really needed, making the product harder to understand and use, and increasing the chance of an accidental mistake causing a meltdown. Sorry, but I disagree. Having helped develop a word processor back in the late 70s and a file system back in the late 80s, and having followed the field as an analyst since, I’ve seen the world from both sides now, as a computer scientist and as a marketer; so I think I have a good perspective on the problem.

There are two related problems with most software used by consumers: what I call orthogonality (in math, elegance) and metaphor. Orthogonality says that your basic operations on which everything else is built are "on the same level" and together they cover everything -- power plus intuitive sense to the user. Metaphor says that the idea of how the user operates with this software is a comparison to a model -- and that model should be as powerful as possible.

In the case of word processing (and most other consumer software) all products are not as orthogonal as they should be.  One of the reasons the original Word succeeded was that it was more orthogonal than its competitors in its commands:  file, edit, etc. are a pretty good take.  That means that necessary additions and elaborations are also more orthogonal; the rich get richer.

Where everyone (including Apple and Google) really falls down is in metaphor.  To take one example we are still haunted by: the original metaphor for word processors and other desktop software was, indeed, a physical desktop, with a one-level filing system underneath.  It took a while for people to accept a wholly unfamiliar metaphor, the folder within folder within folder -- even though it was far more powerful, easier to program and upgrade, and, on average, made things easier for the user who learned the new metaphor. For the last 25 years, all consumer software vendors have consistently rejected an even better metaphor: what is called in math the directed acyclic graph.  This would allow multiple folders to access the same folder or file: essentially, incredibly easy cross-filing. I know from design experience that using this approach in a word processor or other consumer software would be almost as intuitive as the present "tree" (folder) metaphor. Instead, software vendors have adopted kludges such as "aliases" that only make the product far more complicated. The same is true of supporting both dynamic and static file storage on the desktop (too long a discussion).
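To show how little machinery the DAG metaphor actually requires, here is a minimal sketch of my own (not any shipping product’s design) in which the same file can be filed under several folders at once, with a simple guard against cycles:

```python
# Minimal sketch of the directed-acyclic-graph filing metaphor: a node
# (file or folder) may have several parent folders, so the same document
# can be "cross-filed" without aliases or copies.

class Node:
    def __init__(self, name):
        self.name = name
        self.parents = []      # a node can sit inside many folders

class Folder(Node):
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def _reaches(self, target):
        # Walk downward to check whether 'target' is already below this folder.
        stack = [self]
        while stack:
            node = stack.pop()
            if node is target:
                return True
            stack.extend(getattr(node, "children", []))
        return False

    def add(self, node):
        if isinstance(node, Folder) and node._reaches(self):
            raise ValueError("adding this link would create a cycle")
        self.children.append(node)
        node.parents.append(self)

class File(Node):
    pass

taxes = Folder("Taxes 2011")
house = Folder("House")
receipt = File("furnace-receipt.pdf")
taxes.add(receipt)             # the same file, filed under two folders
house.add(receipt)
print([p.name for p in receipt.parents])   # ['Taxes 2011', 'House']
```

The only real cost over the tree metaphor is the cycle check; the user-visible gain is effortless cross-filing without aliases or duplicate copies.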

The reason orthogonality and good metaphor rarely get done or last is that almost never do a good developer and a good marketer (one who understands not only what consumers say they want but what they could want) connect in software development. Sorry, I have watched Steve Jobs for 30 years now, and while he is superb at the marketing end, he does very badly at understanding metaphor plus orthogonality from the mathematical/technical point of view. And the rest are probably worse.

The net result for Word, and, sorry Mr. User, for all those "better" previous word processors, is that time makes all these problems worse, and it results in either failure to incorporate valuable new metaphors (and I do think that spell- and grammar-checking are overall better than the old days, and worth the frustrations of poor orthogonality and awkward usage in isolated cases) or retrofitting of a more orthogonal approach. Specifically, I suspect (because I think I've seen it) that supporting the old WordPerfect ctrl-a approach for both the old Word command interface and the new toolbar style plus new features added just one too many dangerous key combinations next to the ones traditionally used. You miss, you pay -- and yes, the same thing will happen with touch screen gestures.

Whether this business game is worth the candle I leave to anyone who is a user.  I know, however, from long experience, who to blame -- and it's not primarily the latest developer. Fundamentally, I blame a long series of marketers who at least are told about the problem -- I've told many of them myself -- and when push comes to shove, keep chickening out.  The reasons for not doing orthogonality with a better metaphor always seem better to them at the time, the development time longer, the risks of the new higher, the credibility of the trouble-maker suspect, and they won't be around to deal with the problems of playing it safe. These are all good superficial reasons; but they're wrong. And we all suffer, developers not least – because they have to try to clean up the mess.

So give a little blame, if it makes you feel better, to the latest developer or product upgrade designer, who didn’t understand how the typical consumer would use the latest version; give a little blame, if you can figure out how, to the previous developers and designers, who didn’t anticipate these problems. But the marketer, be he the CEO or a lowly product marketer, who makes the fundamental decision about where to go next is really the only person who can hear both the voice of the consumer and the voice that understands the technical/mathematical usefulness of orthogonality and a good metaphor. The marketer is the one who has a real opportunity to make things better; to him, the user should assign the primary blame.

 

The writer Peter Beagle, commenting favorably on JRR Tolkien’s anti-heroes, once wrote “We worship all the wrong heroes.” I won’t go that far. But I will say that we need to hold our present consumer software marketing heroes to higher standards.  And stop reflexively making the developer the villain.

As Han Solo noted repeatedly in Star Wars – often mistakenly – I’ve got a bad feeling about this.

 

Last year, IBM acquired SPSS. Since then, IBM has touted the excellence of SPSS’ statistical capabilities, and its fit with the Cognos BI software. Intel has just announced that it will acquire McAfee. Intel touts the strength of McAfee’s security offerings, and the fit with Intel’s software strategy. I don’t quarrel with the fit, nor with the strengths that are cited. But it seems to me that both IBM and Intel may – repeat, may – be overlooking problems with their acquisitions that will limit the value-add to the customer of the acquisition.

 

Let’s start with IBM and SPSS. Back in the 1970s, when I was a graduate student, SPSS was the answer for university-based statistical research. Never mind the punched cards; SPSS provided the de-facto standard software for the statistical analysis typical in those days, such as regression and t tests. Since then, it has beefed up its “what if” predictive analysis, among other things, and provided extensive support for business-type statistical research, such as surveys. So what’s not to like?

 

Well, I was surprised to learn, by hearsay, that among psychology grad students SPSS was viewed as not supporting (or not providing an easy way to do) some of the advanced statistical functions that researchers wanted, such as scatter plots, compared to SAS or Stata. This piqued my curiosity, so I tried to get onto SPSS’ web site (www.spss.com) on a Sunday afternoon to do some research on the matter. After several waits of 5 minutes or so for a web page to display, I gave up.[1]

 

Now, this may not seem like a big deal. However, selling is about consumer psychology, and so good psychology research tools really do matter to a savvy large-scale enterprise. If SPSS really does have some deficits in advanced psychology statistical tools, then it ought to at least support the consumer by providing rapid web-site access, and it or IBM ought to at least show some signs of upgrading the in-depth psychology research capabilities that were, at least for a long time, SPSS’ “brand.” But if there were any signs of “new statistical capabilities brought to SPSS by IBM” or “upgrades to SPSS’ non-parametric statistics in version 19”, they were not obvious to me from IBM’s web site.

 

And, following that line of conjecture, I would be quite unconcerned, if I were SAS or Stata, that IBM had chosen to acquire SPSS. On the contrary, I might be pleased that IBM had given them lead time to strengthen and update their own statistical capabilities, so that whatever happened to SPSS sales, researchers would continue to require SAS as well as SPSS. It is not even out of the bounds of possibility that SPSS will make IBM less of a one-stop BI shop than before, because it may open the door to further non-SPSS sales, if SPSS falls further behind in advanced psych-stat tools – or continues to annoy the inquisitive customer with 5-minute web-site wait times.

 

Interestingly, my concern about McAfee also falls under the heading of “annoying the customer.” Most of those who use PCs are familiar with the rivalry between Symantec’s Norton and McAfee in combating PC viruses and the like. For my part, my experience (and that of many of the tests by PC World) was that, despite significant differences, both did their job relatively well, and that one could not lose by staying with either, or by switching from the one to the other.

 

That changed, about 2-3 years ago. Like many others, I chose not to move to Vista, but stayed with XP. At about this time, I began to take a major hit in performance and startup time. Even after I ruthlessly eliminated all startup entries except McAfee (which refused to stay eliminated), startup took in the 3-5 minute range, performance in the first few minutes after the desktop displayed was practically nil, and performance after that (especially over the Web) was about half what it should have been. Meanwhile, when I switched to the free Comcast version of McAfee, stopping their automatic raiding of my credit card for annual renewals was like playing Whack-a-Mole, and newer versions increasingly interrupted processing at all times either to request confirmations of operations or to carry out unsolicited scans that slowed performance to a crawl in the middle of work hours.

 

Well, you say, business as usual. Except that Comcast switched to Norton last year, and, as I have downloaded the new security software to each of five new and old XP/Win7 PCs and laptops, the difference has been dramatic in each case. No more prompts demanding a response; no more major overhead from scans; startup clearly faster, and faster still once I removed stray startup entries via Norton; performance on and off the Web close to performance without security software. And PC World assures me that there is still no major difference in security between Norton and McAfee.

 

Perhaps I am particularly unlucky. Perhaps Intel, as it attempts to incorporate McAfee’s security into firmware and hardware, will fix the performance problems and eliminate the constant nudge-nudge wink-wink of McAfee’s response-demanding reminders. It’s just that, as far as I can see from the press release, Intel does not even mention “We will use Intel technology to speed up your security software.” Is this a good sign? Not by me, it isn’t.

 

So I conjecture, again, that Intel’s acquisition of McAfee may be great news – for Symantec. What happens in consumer tends to bleed over into business, so problems with consumer performance may very well affect business users’ experience of their firewalls, as well; in which case, this would give Symantec the chance to make hay while Intel shines a light on McAfee’s performance, and to cement its market to such a point that Intel will find it difficult to push a one-stop security-plus-hardware shop on its customers.

 

Of course, none of my evidence is definitive, and many other things may affect the outcome. However, if I were a business customer, I would be quite concerned that, along with the value-add of the acquisitions of SPSS by IBM and McAfee by Intel, may come a significant value-subtract.



[1] No other industry web page I referenced at that time, including www.ibm.com, took more than 5 seconds.

In a recent briefing, someone suggested to an executive of one of Microsoft’s rivals that he might want to partner with Microsoft. He looked a bit nonplussed at the suggestion. And that, in turn, triggered memories of wonderful quotes that I have heard over the years, about Microsoft and others.

Here is a very brief list of some of my favorite quotes. My standards are high: the speaker should show pithiness, wit, and insight, not just bile and the gift of gab. And the quote should stand the test of time: it should, fairly or unfairly, relate to the enduring brand or reputation of the companies on which the speaker commented.

1.       I remember reading this one in a trade publication in the early 1990s. The president of Ashton-Tate, faced with declining sales of A-T’s products under competition from Microsoft’s Excel and Access, set out on a joint project with Microsoft. Later, he was asked to comment. “The only thing worse than competing with Microsoft,” he said, “is working with them.” Time has long since hewn some of the sharp edges off Microsoft’s ability to work with other companies. Still, around the industry, long-time competitors and others with long memories are wary of Microsoft, both as a competitor and as a partner. From old-timers, I still get a laugh with that one.

2.       Also in the early 1990s, Apple (Little Red), IBM (Big Blue), and a third partner set out to do an emulation of Unix that was clearly aimed at Sun. The response of Scott McNealy was blunt: “This consortium,” he asserted, “is Purple Apple Sauce.” And so it proved to be. This one has shown itself to be less a comment about the Apple and IBM of the future, and more about the inability of joint projects between vendors to live up to their hype.

3.       In 1990, I was in the crowd when Ken Olsen of Digital Equipment – pretty much number 2 in the computer industry at that time – was asked about UNIX. He sighed, “Why are people trying to make it into an enterprise engine? It’s such a nice toy operating system.” Within five years, that toy operating system had effectively destroyed DEC.  I pick on Ken unfairly, because he had had a pretty good track record up to that point. However, I continue to be wary of companies that disparage new technologies. It’s nice to keep a company focused; but when you speak beyond your technical competence, as Ken did, it’s sometimes a recipe for disaster.

4.       This one dates from the later 1990s. Some magazine had done an interview with Larry Ellison, where he indulged in some of his trademark zingers about competitors. When it came to Computer Associates, however, which at the time was making money by buying legacy software, laying off all possible personnel, and maintaining the software as is, Larry was clearly straining to say something nice. “Well,” he noted, “Every ecosystem needs a scavenger.” He may have meant it as a put-down; but I thought it was, and is, an important point. Yes, every computer ecosystem should have a scavenger; because, let’s face it, most computer technologies never go away. I am sure there are still a few Altairs and drums out there, not to mention hydraulic computing. And I am sure that CA really appreciated the remark – not.

Well, that’s it for now. I have left out a few lesser ones I have enjoyed, such as the time John Sculley got up to speak and my Apple-executive table partner braced himself. What’s the matter, I asked. Well, he said, the last time John spoke at a venue like this, he promised that Apple would be delivering the Newton – and then we had to go build it.

Anyone else remember some good ones?

 

Climate Readings
04/05/2010

This blog is not a forum for climate change discussion, nor do I intend to turn it into one. However, I recently had the opportunity to look at two recent books on the subject, the later of which (“The Climate Crisis,” by Archer and Rahmstorf) was published this year. These added new and, to my mind, disturbing considerations to the climate change topic.

 

Below, I note the new conclusions which, by my own subjective assessment, arise from their analysis. Note that I am not attempting to say that these are definitive; just that they seem to follow from the analysis presented in these books, based on my quick-and-dirty calculations.

 

One: It all depends on Greenland. The two big possible sources of sea-level rise, above and beyond what’s already been verified, are land ice on Greenland and in Antarctica. It appears that over the next few decades, Antarctica will be “neutral”: the sea ice that blocks faster land-ice slide into the sea (and therefore faster land-ice melting) is apparently still stable for the two largest of the three sea-ice extensions there, so until it starts breaking up, matters are likely to continue more or less as they have for the last 30 years or more.

 

Greenland, however, appears to be accelerating its ice’s slide into the sea, and past events have shown that this type of sudden change can happen in 10-100 years, instead of centuries. If all the ice in Greenland were to melt into the sea, it would raise sea levels by about 23 feet.  If we assume that half of Greenland’s ice were to melt (as it did in a past meltdown) in the next 50 years, then we are talking 12 feet of sea level rise all over the world, on top of the foot rise we have seen in the last 50 years and the 2-3 foot rise that is “baked in” from other factors. So if Greenland goes, by 2060 we may see 15 feet of sea level rise; if not, we might see only 2 feet.
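For those who want to check the arithmetic, here is the back-of-the-envelope calculation behind those figures – my own rough tally of the round numbers quoted above, not a model run:

```python
# Rough sea-level arithmetic (in feet) for the figures quoted above.
# Illustrative only: these are the books' round numbers, not a climate model.

greenland_full_melt_ft = 23                           # all Greenland land ice melting
greenland_half_melt_ft = greenland_full_melt_ft / 2   # 11.5, i.e. "about 12 feet"
baked_in_ft = 2.5                                     # the 2-3 feet expected from other factors

rise_if_greenland_goes = greenland_half_melt_ft + baked_in_ft   # ~14-15 feet by 2060
rise_if_greenland_holds = baked_in_ft                           # only ~2-3 feet

print(rise_if_greenland_goes, rise_if_greenland_holds)          # 14.0 2.5
```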

 

Moreover, a Greenland meltdown has a big effect on the amount of global warming. That meltdown dumps fresh water into the ocean right near the major “carbon pump” that sends much of the carbon dioxide in the air deep under water, not to surface for thousands of years. Fresh water doesn’t sink – so the pump may slow or stop. This means that the oceans, which up to now have been soaking up maybe 30-40% of the carbon dioxide from human emissions, would decrease their “carbon uptake” drastically. And that, in turn, might add 2-3 degrees (Fahrenheit) to the “final equilibrium” increase in global temperature. There’s already a consensus around 5.4 degrees F increase from global warming, which may well be on the conservative side – now we’re talking 8 degrees F global warming. And if we use less conservative initial assumptions, we are talking about maybe 7 degrees F global warming by 2060 and 10 degrees F by 2100.

 

Two: It wasn’t just our bad characteristics that resulted in global warming; it was also our good ones. The data show an almost-level temperature between 1940 and 1970. The reason appears to be that our high and increasing levels of pollution put more and more sunlight-reflecting aerosols (particulates) into the air, which counteracted the increase in carbon dioxide (and, to a lesser extent, methane). When we began to clean up the pollution in the 1970s, the warming resumed.

 

Three: nuclear is not the long-term answer.  This one I am not sure I got the gist of, but here’s how it runs: supplies of uranium, if we were to use present technology and build enough new plants to cover most needs, would last for 58 years. If we use breeder reactors, then uranium would last for at least 1000-5000 years; but breeder reactors are incredibly vulnerable to terrorism and state misuse.  In other words, if we use breeder reactors, the chance of misuse of nuclear weapons and of nuclear winter goes up a hundredfold – and nuclear winter would be even worse than global warming.

 

Four: as we have been talking, our chance to avoid the big impacts of global warming may have already passed. The best estimate right now for the upcoming long-term increase in global temperature is 3 degrees Celsius (multiply by 1.8 for degrees Fahrenheit). The UN has identified low-cost changes in our industries and lifestyles (less than $100/ton of carbon dioxide emitted) that would limit that increase to 1.5 degrees Celsius. Any change that is 2 degrees Celsius or greater triggers the big impacts: bigger extreme weather events, large extensions of drought, mass die-offs and species extinctions (because humanity has paved over much of the world, making migration of many species in reaction to change impossible). Now, keep in mind that (without going into the details) there is a good possibility that the 3 degrees estimate and therefore the 1.5 degrees estimate are too optimistic by ½-1 degree. If that is true, then because the low-cost solution depends on all the low-cost technologies in a variety of areas, the cost to achieve 1.5-degree change almost certainly goes up a lot.  On the other hand, if we had started 10 years ago, we would have been able to limit the increase to maybe 1 degree Celsius, and we would still be safely in low-cost territory. Likewise, if we fail to act effectively for another 10 years (everything else being equal), costs will go up perhaps twice as sharply, even when we factor in the arrival of new technologies in some areas.

 

Five: the United States continues to be the biggest problem. According to the data, from 1950-2003 the US was the biggest (net) contributor of carbon dioxide emissions per capita; and today, the US is the biggest contributor of carbon dioxide emissions (overall, not per capita). Since China and Russia are numbers 2 and 3, it’s pretty likely that the US continues to be the biggest contributor per capita. That means that the US is having the biggest negative impact on global warming, and that net decreases on carbon dioxide emissions by the US, individually and collectively, would have the biggest positive impact.

 

My take? Boy, do I hope I’m wrong in my conclusions. But, as someone once noted, hope is not a plan.

 

 

A few weeks ago, I met an old friend – Narain Gehani, once at Cornell with me, then at Bell Labs, and now at Stevens Institute of Technology – and he presented me with a book of reminiscences that he had written: “Bell Labs: Life in the Crown Jewel”. I enjoyed reading about Narain’s life in the 35 years since we last met; Narain tells a great story, and is a keen observer. However, one thing struck me about his experiences that he probably didn’t anticipate – the frequency with which he participated in projects aimed at delivering software products or software-infused products, products that ultimately didn’t achieve merited success in one marketplace or other (Surf n Chat and Maps r Us were the ones that eventually surfaced on the Web). 

This is one aspect of software development that I rarely see commented on. We have all talked about the frequency of software-development project failure due to poor processes. But how about projects that by any reasonable standard should be a success?  They produce high-quality software. The resulting solutions add value for the customer. They are well received by end users (assuming end users see them at all). They are clearly different from everything else in the market. And yet, looking back on my own experience as a software developer, I realize that I had a series of projects that were strikingly similar to Narain’s – they never succeeded in the market as I believe they should have. 

My first project was at Hendrix Corporation, and was a word processing machine that had the wonderful idea of presenting a full page of text, plus room on the side for margins (using the simple trick of rotating the screen by 90 degrees to achieve “portrait” mode). For those who have never tried this, it has a magical effect on writing:  you are seeing the full page as the reader is seeing it, discouraging long paragraphs and encouraging visual excitement. The word processor was fully functional, but after I left Hendrix it was sold to AB Dick, which attempted to fold it into their own less-functional system, and then it essentially vanished. 

Next was a short stint at TMI, developing bank-transfer systems for Bankwire, Fedwire, etc. The product was leading-edge at the time, but the company was set up to compensate by bonuses that shrank as company size increased. As employee dissatisfaction peaked, it sold itself to Logica, key employees left, and the product apparently ceased to evolve. 

At Computer Corporation of America, I worked on a product called CSIN. It was an appliance that would do well-before-its-time cross-database querying about chemical substances “in the field”, for pollution cleanup efforts. It turned out that the main market for the product was librarians; and these did not have the money to buy a whole specialized machine. That product sold one copy in its lifetime, iirc. 

Next up was COMET/204, one of the first email systems, and the only one of its time to be based on a database – in this case, CCA’s wonderful MODEL 204. One person, using MODEL 204 for productivity, could produce a new version of COMET/204 in half the time it took six programmers, testers, and project managers to create a competing product in C or assembler. COMET/204 had far more functionality than today’s email systems, and solved problems like “how do you discourage people from using Reply All too much?” (answer: you make Reply the default, forcing them to do extra effort to do Reply All spam). While I was working on it, COMET/204 sold about 10 copies. One problem was the price: CCA couldn’t figure out how to price COMET/204 competitively when it contained a $60,000 database.  Another was that in those days, many IT departments charged overhead according to usage of system resources – in particular, I/Os (and, of course, using a database meant more I/Os, even if it delivered more scalable performance). One customer kept begging me to hurry up with a new version so he could afford to add 10 new end users to his department’s system. 

After that, for a short stint, there was DEVELOPER/204.  This had the novel idea that there were three kinds of programming: interface-driven, program-driven, and data-driven. The design phase would allow the programmer to generate the user interface, the program outline, or the data schema; the programmer could then generate an initial set of code (plus database and interface) from the design. In fact, you could generate a fully functional, working software solution from the design. And it was reversible: if you made a change in the code/interface/data schema, it was automatically reflected back into the design. The very straightforward menu system for DEVELOPER/204 simply allowed the developer to do these things. After I left CCA, the company bundled DEVELOPER/204 with MODEL/204, and I was later told that it was well received. However, that was after IBM’s DB2 had croaked MODEL 204’s market; so I don’t suppose that DEVELOPER/204 sold copies to many new customers. 

My final stop was Prime, and here, with extraordinary credit to Jim Gish, we designed an email system for the ages. Space is too short to list all of the ideas that went into it; but here are a couple: it not only allowed storage of data statically in user-specified or default folders, but also dynamically, by applying “filters” to incoming messages, filters that could be saved and used as permanent categorizing methods – like Google’s search terms, but with results that could be recalled at will. It scrapped the PC’s crippled “desktop metaphor” and allowed you to store the same message in multiple folders – automatic cross-referencing. As a follow-on, I did an extension of this approach into a full-blown desktop operating system. It was too early for GUIs – but I did add one concept that I still love: the “Do the Right Thing” key, which figured out what sequence of steps you wanted to take 90% of the time, and would do it for you automatically. Because Prime was shocked by our estimate that the email system would take 6 programmers 9 months to build, they farmed it to a couple of programmers in England, where eventually a group of 6 programmers took 9 months to build the first version of the product – releasing the product just when Prime imploded. 
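For readers who never saw the idea in the wild, here is a tiny sketch of the “saved filter” concept – my reconstruction of the general idea in Python, not the actual Prime design – in which a filter behaves like a folder whose contents are recomputed from the message store each time it is opened:

```python
# Sketch of "dynamic folders": a saved filter acts like a folder whose
# contents are derived from the message store on demand, so the same
# message can appear in any number of folders without being copied.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Message:
    sender: str
    subject: str
    body: str


@dataclass
class SavedFilter:
    name: str
    predicate: Callable[[Message], bool]

    def open(self, store: List[Message]) -> List[Message]:
        # Contents are computed, never copied.
        return [m for m in store if self.predicate(m)]


store = [
    Message("boss@corp.example", "Q3 budget", "numbers attached"),
    Message("friend@example.com", "weekend", "hiking?"),
]

budget_mail = SavedFilter("Budget", lambda m: "budget" in m.subject.lower())
from_boss = SavedFilter("From the boss", lambda m: m.sender.startswith("boss@"))

print([m.subject for m in budget_mail.open(store)])  # ['Q3 budget']
print([m.subject for m in from_boss.open(store)])    # ['Q3 budget'] -- same message, two "folders"
```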

Looking back on my programming experiences, I realized that the software I had worked on had sold a grand total (while I was there) of about 14 copies. Thinking today, I realize that my experiences were pretty similar to Narain’s: as he notes, he went through “several projects” that were successful development processes (time, budget, functionality) but for one reason or another never took off as products. It leads me to suspect that this type of thing – project success, product lack of success – happens just as much as software development failure, and very possibly more often.

So why does it happen?  It’s too easy to blame the marketers, who often in small companies have the least ability to match a product to its (very real) market. Likewise, the idea that the product is somehow “before its time” and not ready to be accepted by the market may be true – but in both Narain’s and my experience, many of these ideas never got to that point, never got tested in any market at all. 

If I had to pick two factors that most of these experiences had in common, they were, first, barriers within the company between the product and its ultimate customers, and second, markets that operate on the basis of “good enough”. The barriers in the company might be friction with another product (how do you low-ball a high-priced database in email system pricing?) or suspicion of the source (is Bell Labs competing with my development teams?). The market’s idea of “good enough” may be “hey, gmail lets me send and receive messages; that’s good enough, I don’t need to look further”. 

But the key point of this type of software failure is that, ultimately, it may have more profound negative effects on top and bottom lines, worker productivity, and consumer benefits in today’s software-infused products than development failure. This is because, as I have seen, in many cases the ideas are never adopted. I have seen some of the ideas cited above adopted eventually, with clear benefits for all; but some, like cross-referencing, have never been fully implemented or widely used, 25 or more years after they first surfaced in failed software. Maybe, instead of focusing on software development failure, we should be focusing on recovering our memory of vanished ideas, and reducing the rate of quality-product failure. 

When I was at Sloan School of Management at MIT in 1980, John Rockart presented a case study of a development failure that was notable for one outcome: instead of firing the manager of the project, the company rewarded him with responsibility for more projects. They recognized, in the software development arena, the value of his knowledge of how the project failed. In the same way, perhaps companies with software-infused products should begin to collect ideas, not present competitive ideas, but past ideas that are still applicable but have been overlooked. If not, we are doomed not to repeat history, but rather, and even worse, to miss the chance to make it come out better this time.

This is just my way of recognizing the organizations that I believe have most contributed to increasing the agility of their companies, their industries, or the global economy. I judge things from my own quirky perspective, attempting to distinguish short-term, narrow project changes from longer-term, more impactful, structural changes.  Let’s go to the envelopes:

 

  1. IBM. For the most part, IBM has an exceptionally broad and insightful approach to agility. This includes favorable comments within the agile community about their products (one agile site called IBM Rational’s collaborative tools the equivalent of steroids in baseball); using agile development practices such as mashups and the Innovation Jam in its own strategy creation and innovation as well as product development; attempts to redefine “agile”; and the fostering of the smart grid. Negatives include a too-narrow definition of agile that does not include proactive anticipation of opportunities and system limits, and a legacy software set that is still too complex. Nevertheless, compared to most companies, IBM “gets it.”
  2. The Agile Alliance. While the alliance unfortunately focuses primarily on agile software development, within that sphere it continues to be the pre-eminent meeting place and cheerleader for those “bottom up” individuals who have driven agile from 1-3 person software projects to thousands-of-people designing and developing software-infused products. One concern: their recent consideration of the idea that lean manufacturing and agile development are completely complementary, when lean manufacturing necessarily assumes a pre-determined result for either product development or product manufacture, and focuses on costs rather than customer satisfaction.
  3. The Beyond Budgeting Roundtable.  Never heard of it? It started with Swedish companies claiming that budgets could be replaced, and should be, because they constrained the agility of the resulting organization. Although this is not a mainstream concept, enough companies have adopted this approach to show that it can work. Consider this an “award in progress.”

 

Looking at these, I am saddened that the list isn’t larger. In fairness, a lot of very agile small organizations exist, and Indian companies such as Wipro are in the forefront of experimenting with agile software development. However, even those large companies that give lip service to agility and apply it in their software and product development would admit that the overall structure of the company in which these concepts are applied is still too inflexible and too cost-driven, and the structural changes have not yet been fully cemented into the organization for the long term. Wait ‘til next year …

 

Recently, Paul Krugman has been commenting on what he sees as the vanishing knowledge of key concepts such as Say’s Law in the economics profession, partly because it has been in the interest of a particular political faction that the history of the Depression be rewritten in order to bolster their cause.  The danger of such a rewriting, according to Krugman, is that it saps the will of the US to take the necessary steps to handle another very serious recession. This has caused me to ask myself, are there corresponding dangerous rewritings of history in the computer industry?

 

I think there are.  The outstanding example, for me, is the way my memory of what happened to OS/2 differs from that of others that I have spoken to recently.

 

Here’s my story of what happened to OS/2. In the late 1980s, Microsoft and IBM banded together to create a successor to DOS, then the dominant operating system in the fastest-growing computer-industry market. The main reason was users’ increasing interest in Apple’s GUI-based rival operating system. In time, the details of OS/2 were duly released.

 

Now, there were two interesting things about OS/2, as I found out when researching it as a programmer at Prime Computer. First, there was a large stack of APIs for various purposes, requiring many large manuals of documentation. Second, OS/2 also served as the basis for a network operating system (NOS) called LAN Manager (Microsoft's product). So if you wanted to implement a NOS involving OS/2 PCs, you had to deploy LAN Manager. But, iirc, LAN Manager required 64K of RAM in the client PC – and PCs were still 1-2 years away from supporting that much RAM.

 

The reason this mattered is that, as I learned from talking to Prime sales folk, NOSs were in the process of shattering the low-end offerings of the major computer makers. Novell's boast at that time was that, using a basic PC as the server, it could deliver shared data and applications to any client PC faster than that PC's own disk could. So a LAN of cheap PCs running a NOS was just the thing for a doctor's office, retail store, or other department/workgroup – much cheaper than a mini from Prime, Data General, Wang, or even IBM – and it could be composed of the PCs that members of the workgroup had already acquired for other purposes.
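To see why that boast was not crazy, here is a rough back-of-envelope sketch – mine, not Novell's, and every seek, throughput, and overhead figure in it is an illustrative assumption about hardware of that era, not a measured number. The idea is simply that a server keeping frequently used files in a RAM cache skips the client disk's seek and rotational delays, so even a modest LAN can win on typical small reads.

```python
# Back-of-envelope comparison: local PC hard disk vs. a NOS server over a LAN.
# All figures are illustrative assumptions about circa-1990 hardware,
# not numbers from the original post or from Novell.

def local_disk_read_ms(kb, seek_ms=65.0, rpm=3600, transfer_kb_s=500.0):
    """Rough time to read `kb` kilobytes from a period PC hard disk:
    average seek + half a rotation + sequential transfer."""
    rotational_ms = 0.5 * 60_000 / rpm            # average rotational latency
    transfer_ms = kb / transfer_kb_s * 1000
    return seek_ms + rotational_ms + transfer_ms

def lan_cached_read_ms(kb, net_mbit_s=10.0, server_overhead_ms=5.0):
    """Rough time to fetch `kb` kilobytes from a server that already holds
    the file in its RAM cache: fixed request overhead + wire transfer."""
    transfer_ms = kb * 8 / (net_mbit_s * 1000) * 1000   # KB -> kbit -> ms
    return server_overhead_ms + transfer_ms

if __name__ == "__main__":
    for kb in (4, 16, 64):
        print(f"{kb:3d} KB read:  local disk ~{local_disk_read_ms(kb):6.1f} ms, "
              f"LAN + cached server ~{lan_cached_read_ms(kb):6.1f} ms")
```

The arithmetic only makes the qualitative point: seek and rotational latency dominate small reads from a hard disk of that period, and those are exactly the costs a RAM-caching server avoids, even after the network transfer is added back in.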

 

In turn, this meant that the market for PCs was really a dual consumer/business market involving PC LANs, in which home computers were used interchangeably with office ones. So all those applications that the PC LANs supported would have to run on DOS PCs with something like Novell NetWare, because OS/2 PCs required LAN Manager, which would not be usable for another 2 years … you get the idea. And so did the programmers of new applications, who, when they waded through the OS/2 documentation, found no clear path to a big enough market for OS/2-based apps.

 

So here was Microsoft, watching carefully as the bulk of DOS programmers held off on OS/2, while Apple gave Microsoft room to move by insisting on full control of its GUI's APIs and shutting out app programmers. And in a while, there was Windows 3.0. It was not as powerful as OS/2, nor was it backed by IBM. But it supported DOS, it allowed any NOS rather than only LAN Manager, and the app programmers went for it in droves. And OS/2 was toast.

 

Toast, also, were the minicomputer makers, and, eventually, many of the old mainframe companies in the BUNCH (Burroughs, Univac, NCR, Control Data, Honeywell). Toast was Apple’s hope of dominating the PC market. The sidelining of OS/2 was part of the ascendance of PC client-server networks, not just PCs, as the foundation of server farms and architectures that were applied in businesses of all scales.

 

What I find, talking to folks about that time, is that there seem to be two versions, different from mine, of what really happened. The first I call "evil Microsoft," or "it's all about the PC." A good example of this version is Wikipedia's entry on OS/2. It glosses over the period between 1988, when OS/2 was released, and 1990, when Windows 3.0 was released, in order to say that (a) Windows was cheaper and supported more of what people wanted than OS/2, and (b) Microsoft arranged for it to be bundled on most new PCs, ensuring its success. In this version, Microsoft seduced consumers and businesses by creating a de facto standard, deceiving businesses in particular into thinking that the PC was superior to (the dumb terminal, Unix, Linux, the mainframe, the workstation, network computers, open source, the cell phone, and so on). And all attempts to knock the PC off its perch since OS/2 are recast as noble endeavors thwarted by the evil protectionist moves of monopolist Microsoft, rather than as failures to provide a good alternative that supports users' tasks both at home and at work via a standalone yet networkable platform.

 

The danger of this first version, IMHO, is that we continue to ignore the average user's need to have control over his or her work. Passing pictures via cell phone and social networking via the Internet are not just networking operations; the user also wants to set aside his or her own data and work on it on his or her own machine. Using "diskless" network computers at work, or setting too-stringent security-based limits on what can be brought home, simply means that employees get around those limits, often by using their own laptops. By pretending that "evil Microsoft" has caused "the triumph of the PC," purveyors of the first version can make us ignore the fact that users want both effective networking, to take advantage of what's out there, and full personal computing – one and inseparable.

 

The second version I label "it's the marketing, not the technology." This was put to me in its starkest form by one of my previous bosses: it didn't matter that LAN Manager wouldn't run on the PCs of the day, because what really killed OS/2 – and kills every computer company that fails – was bad marketing of the product (a variant, by the way, is to say that it was all about the personalities: Bill Gates, Steve Ballmer, Steve Jobs, IBM). According to this version, Gates was a smart enough marketer to switch to Windows; IBM was dumb enough at marketing to hang on to OS/2. Likewise, the minicomputer makers died because they went after IBM on the high end (a marketing move), not because PC LANs undercut them on the low end (a technology shift against which any marketing strategy probably would have been ineffective).

 

The reason I find this attitude pernicious is that I believe it has led to a serious dumbing down of computer-industry analysis and marketing in general. Neglect of technology limitations in analysis and marketing has led to a devaluation of technical expertise in both analysts and marketers. For example, I am hard-pressed to find more than a few analysts with graduate degrees in computer science and/or a range of experience in software design that gives them a fundamental understanding of the role of the technology in a wide array of products – I might include Richard Winter and Jonathan Eunice, among others, in the group of well-grounded commentators. It's not that other analysts and marketers don't have important insights to contribute, whether they come from IT, journalism, or generic marketing backgrounds; it is that the additional insights of those who understand the technologies underlying a product are systematically devalued – dismissed as "just like any other analyst's" – when those insights can in fact do a better job of assessing a product and its likelihood of success or usefulness.

 

Example: does anyone remember Parallan? In the early '90s, it was a startup betting on OS/2 LAN Manager. I was working at Yankee Group, which shared a boss and a location with a venture capital firm called Battery Ventures. Battery Ventures invested in Parallan. No one asked me about it; I could have told them about the technical problems with LAN Manager. Instead, the person who made the investment came up to me later and filled my ears with laments about how bad luck in the market had deep-sixed his investment.

 

The latest manifestation of this rewriting of history is the demand that analysts be highly visible, so that there is a clear connection between what they say and customer sales. Visibility is about the cult of personality – many of the folks who presently affect customer sales, from my viewpoint, fail to appreciate the role of technology that lies outside their areas of expertise, or view the product almost exclusively in terms of marketing. Kudos, by the way, to analysts like Charles King, who recognizes the need to bring technical considerations into Pund-IT Review from less-visible analysts like Dave Hill. Anyway, the result of the dumbing-down caused by the cult of visibility is less respect for analysts (and marketers), loss of infrastructure-software "context" when assessing products on both the vendor and the user side, and increased danger of the kind of poor technology choices that led to the demise of OS/2.

 

So, as we all celebrate the advent of cell phones as the successor to the PC, and hail the coming of cloud computing as the best way to save money, please ignore the small voice in the corner that says that the technical limitations of putting apps on cell phones matter, and that cloud computing may cause difficulties as individual employees pass data between home and work. Oh, and be sure to blame the analyst or marketer for any failures, so that the small voice in the corner becomes even fainter, and history can successfully continue to be rewritten.


 
Wayne Kernochan