
Rethinking Printing
11/03/2011

A recent piece about HP’s plan to assert the key role of its printer division in modern computing, and move it beyond the “cash cow” status it presently seems to have by redefining printers’ use cases, left me underwhelmed. As a printer/copier/scanner/fax machine user, I could not see major benefits to me (or for that matter, to the typical SMB) from the proposed “paper is still relevant, cut printing costs, use the printer as a mailbox” strategy.

 

Still, it made me think.  If I Ruled the World, how would I like to redesign things?  How could printing technology play a key role in the Web 3.0 of the future? What follows is, I realize, an exercise in fantasy – it will probably never happen. Still, I think it might form a useful framework with which to think about what printing will really do in the future – and what it won’t.

 

Printing a Hamburger

 

One of my favorite old ads was one for A-1 steak sauce that began by saying, “My friends, what is a hamburger?” The point being, of course, that it was chopped steak, and therefore people should consider A-1 instead of ketchup. I would suggest that the whole printer area would benefit from asking, “My friends, what is a printer?”

 

My answer would be a “virtual” one: printing/scanning/copying is about creating a computer representation of a physical image – call it a graphic – that can be displayed on a wide variety of form factors. Thus, a scanner can take these physical graphics from the outside world; a “snapshot” can take such a graphic from a video stored inside the computer; software can intercept physical representations being sent to output devices from applications, such as internal representations of photos, internal representations of Web pages, internal representations of reports, check images, signed legal documents. These standardized representations of graphics then can be customized for, and streamed to, a wide variety of form factors: computer screens, cell phone and laptop displays, printers, email messages (attachments), or fax machines (although I tend to think that these are fading away, replaced by email PDFs).
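
To make the idea concrete, here is a minimal sketch, in Python, of what such a "gateway" might look like: one normalized representation of a graphic, fanned out to whatever form factors happen to be available. All of the class and function names here are invented for illustration; they do not come from any existing product.

```python
from dataclasses import dataclass

@dataclass
class Graphic:
    """Hypothetical normalized representation: pixels plus basic metadata."""
    pixels: bytes        # raw image data (format left abstract in this sketch)
    width: int
    height: int
    source: str          # e.g. "scanner", "video snapshot", "print intercept"

class RenderTarget:
    """Anything that can display a Graphic: printer, phone screen, email, fax."""
    def render(self, graphic: Graphic) -> None:
        raise NotImplementedError

class PrinterTarget(RenderTarget):
    def render(self, graphic: Graphic) -> None:
        print(f"[printer] spooling {graphic.width}x{graphic.height} image")

class EmailTarget(RenderTarget):
    def __init__(self, address: str):
        self.address = address
    def render(self, graphic: Graphic) -> None:
        print(f"[email] attaching image for {self.address}")

class GraphicsGateway:
    """Accepts graphics from any capture source and fans them out to any set of targets."""
    def dispatch(self, graphic: Graphic, targets: list) -> None:
        for target in targets:
            target.render(graphic)   # each target customizes for its own form factor

# Example: one smartphone photo, simultaneously printed and emailed.
photo = Graphic(pixels=b"...", width=1024, height=768, source="smartphone camera")
GraphicsGateway().dispatch(photo, [PrinterTarget(), EmailTarget("coworker@example.com")])
```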

 

Is this Enterprise Content Management?  No. The point of such a “gateway”, that represents many graphic formats in a few ways and then customizes for a wide variety of physical displays, is that it is aimed at physical display – not at managing multiple users’ workflow. Its unit is the pixel, and its strength is the ability to utterly simplify the task of, say, taking a smartphone photo and simultaneously printing, emailing, faxing, and displaying on another worker’s screen.

 

One of those output devices – and probably the most useful – is the printer/scanner/copier. However, the core of the solution is distributed broker software, much like the Web’s email servers, that passes the common representations from physical store to physical store and routes them to “displays” on demand. Today’s printer is simply the best starting point for creating such a solution, because it does the best job of capturing the full richness of a graphic.
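
And since the core, as I said, is the broker rather than any one device, here is an equally hypothetical sketch of the store-and-forward side: graphics held per destination and handed over when a "display" asks for them, loosely in the manner of a mail server. Again, nothing here reflects any real product’s API.

```python
import queue

class GraphicBroker:
    """Illustrative store-and-forward broker: holds normalized graphics per
    destination and delivers them when that 'display' asks for its pending items."""
    def __init__(self):
        self._mailboxes = {}   # destination name -> queue of pending graphics

    def submit(self, destination: str, graphic: dict) -> None:
        # Store the graphic for the named destination (printer, screen, phone...).
        self._mailboxes.setdefault(destination, queue.Queue()).put(graphic)

    def fetch(self, destination: str) -> list:
        # A display polls for whatever has been routed to it, on demand.
        box = self._mailboxes.get(destination)
        items = []
        while box is not None and not box.empty():
            items.append(box.get())
        return items

broker = GraphicBroker()
broker.submit("office-printer", {"kind": "pdf", "name": "contract.pdf"})
broker.submit("alice-screen", {"kind": "jpg", "name": "whiteboard.jpg"})
print(broker.fetch("office-printer"))   # [{'kind': 'pdf', 'name': 'contract.pdf'}]
```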

 

We are surprisingly close to being able to do such a thing.  Documents or documents plus signatures can be converted into “PDF” graphics; photos into JPGs and the like; emails, instant messages, Twitter, and Facebook into printable form; screen and Web page printing is mature; check image scanning has finally become somewhat functional; and we are not far from a general ability to do a freeze-frame JPG from a video, just by pressing a button.  But, somehow, nobody has put it all together in such a “gateway.”

 

Extreme Fantasy

 

As a side note, I’d like to ask if the following is at all feasible. Visualize your smartphone for a second. Suppose you had optional add-ons in back. They would contain 4-5 pieces of phone-sized “image paper” in one slim add-on, and a laser-type “print head” that would place the appropriate color on each pixel of the paper. A button or screen icon on the front would allow printing of whatever was displayed on the screen – including, as I suggested, a “snapshot” of a video.

 

Can it be done? I don’t know. I know that I would love to have such a thing for my mobile devices, even crippled as it probably would be.  Remember the old Polaroid OneShot? The joy of that was the immediacy, the ability to share with someone physically present what was not a really high-quality photo, but was still a great topic of conversation.

 

Why haven’t printer vendors moved more in this direction? Yes, I know that attempts to provide cut-down attachable printers for laptops haven’t sold.  But I think that’s because the vendors have failed to distinguish two use cases.  One is the highly mobile case, where you’re running around a local office or factory and need to print on whatever form factor:  small laptop, tablet, or even cell phone.  That’s the case where all you need is the ability to print out 4-5 pages worth of info, bad quality and all. For that, you need a very light printer and preferably one that attaches to the form factor so that you can carry both – like strapping two books together.

 

The second is the “mobile office” case, where you go somewhere and then sit and work. In that case, you need full power, including a multi-sheet scanner/copier feeder – why don’t printer vendors realize how useful that is? It should be light enough so it can be carried like a briefcase or laptop, but it doesn’t have to be attachable; and it should be wireless and work with a MiFi laptop. Above all, it should be foldable so that it’s compact and its parts don’t fall off when you carry it.

 

Sad Conclusions

 

I realize that some of the details of the above fantasy may be unrealistic. The real point of the fantasy is that paper and the printer are not dead, no matter what; because, as HP pointed out, people keep using them more. And, I believe, that’s because it is just so useful to have a “static” representation of lots of information to carry around with you. Nothing on the horizon replaces that need: not the cell phone or laptop with its small display, nor the immobile PC or TV, nor even the tablet with its book support but inability to “detach” valued excerpts for file-cabinet storage and real-world sharing.

 

But not being dead doesn’t mean wild success. I believe that HP’s printer “vision” is still too parochial, because it fails to detach the core value proposition from a particular physical device. It may be safe to extrapolate present markets; but I believe that such a marketing approach falls far short of print technology’s potential.

 

Still, I don’t believe that printer vendors will do what I fantasize.  The risks of failure are too great, and the present revenues too comforting.  No, it seems likely to me that five years from now, the printer business will still be seen as a cash cow – because nobody asked what a hamburger is.

 

Pity.

I note from various sources that some VSPs (Very Serious People, to borrow an acronym from Paul Krugman) are now raising questions about HP’s financials in the wake of Mark Hurd’s departure for Oracle. To cherry-pick some quotes: “They need to … regain investor confidence”; “HP is in a difficult situation”; “It sounds like … Hurd took too many costs out of [the services] business”; “HP … now … are known for inconsistency … It could become a value trap.” And, of course, there are comparisons with IBM, Dell, software vendors like Oracle, and so on.

 

I am certainly not an unalloyed HP booster. In fact, I have made many unflattering comparisons of HP with IBM myself over the years. However, I disagree with the apocalyptic tone of these pronouncements. In fact, I will stick out my neck and predict that HP will not implode over the next 3 years, and it will not fall behind IBM in revenues either, barring a truly epochal acquisition by IBM. I believe that these VSPs are placing too much emphasis on upcoming strategies bearing the imprint of personalities like HP’s Léo Apotheker and IBM’s Sam Palmisano, and not enough emphasis on the existing positioning of IBM, Dell, HP, Microsoft, Oracle, and Apple.

 

Let’s Start With the Negative!

 

So what are these problems that I have criticized HP for? Well, let’s start with its solution portfolio. Of the major computer vendors, HP may be the closest to a conglomerate – and that’s not a good thing. Let’s see, it has a printer/all-in-one company, a PC company, one or two server companies (including Tandem), a business/IT services/outsourcing company, and even, if you want to stretch a point, an administrative software utility company (the old OpenView) with some more recent software (Mercury Interactive testing) attached. Moreover, because HP has until very recently not tried very hard to stitch these together either as solutions or as a software/hardware stack, they are not as integrated as others – strikingly, not as integrated as IBM, which was once known for announcing global solutions whose components turned out to be in the early stages of learning to talk to each other. At first glance, HP’s endowments seem impressive; closer up, they seem, as someone once said in another context, like cats fighting in a burlap bag.

 

Moreover, HP, unlike any of the other companies I have mentioned except Dell, simply does not have software in its DNA. Back in the early 1990s, a Harvard Business Review article asserted that hardware companies at the time would suffer unless they became primarily services companies; I asserted then, and I assert now, that they also should become software companies.

 

I believe that this lack of software solutions and development personnel has had several bad effects that have decreased HP’s revenues and profits by at least 20% over the last 20 years. Software development connects you quite effectively with the open source community, the consumer market, and the latest technologies that impinge on computer vendors. It allows your services arm to offer more leading-edge services, rather than trying to customize others’ software in a quick and dirty fashion for one particular services engagement. And, in the end, it moves hardware development ahead faster, as it focuses chip development on the major real-world workloads that your software supports. Moreover, as IBM itself has proven, even if investment in software doesn’t pay off immediately, eventually you get it right.

 

A third, more recent problem does relate to Mark Hurd’s cost focus – although the same might be said of IBM. A truism of business strategy, proved by the problems that CFO dominance created at US car companies in the 1980s and 1990s, is that too long a focus on the financials rather than on product innovation costs a company dearly. It is quite possible that HP has eaten its innovation seed corn in the process of turning itself into a “consistent” money maker.

 

Finally, HP has in the past had a tendency in its hardware products to be “the nice alternative”: not locking you in like Sun or Microsoft, willing to provide a platform for Oracle and Microsoft databases, open to anyone’s middleware. Whatever the merits of that approach, it creates a perception among customers that HP is not leading-edge in the sense that Apple, or even Microsoft and Oracle, are. Twenty-one years ago, in my first HP briefing, famous analyst Nina Lytton showed up in a brilliant pink outfit and immediately announced that HP’s strategy reminded her of a “great pink cloud.” That sort of rosy but not clear-cut presentation of one’s strategy and future plans does not create the sort of excitement among customers that Steve Jobs’ iPhone and iPad announcements, or even IBM’s display of Watson, do.

 

 

It Doesn’t Matter

 

And yet, when we look at HP vs. IBM in the longer run – from 1990, when I started as an analyst, to now – the ongoing success of HP is striking. At the start, IBM’s yearly revenues were in the $80s billion, and HP’s perhaps a quarter as much. Today, HP’s revenues are perhaps 1/3 greater than IBM’s, with IBM at around a $100B run rate and HP perhaps at $135B. Some of that HP growth can be attributed to acquisitions; but a lot of it comes from growth both of its core business and of the businesses it has acquired. To put it another way, IBM has been very successful at growing its profit margin; HP has been very successful at growing.

 

And growth at that scale is not easy. Companies have been trying to knock IBM off its Number 1 perch in revenues since the 1960s, and only HP has succeeded. Nobody else is in striking distance yet – Microsoft is at a $70B run rate, apparently, with seemingly no prospects of exceeding $100B in the next couple of years.

 

The reason, I think, is HP’s acquisition of Compaq back in the early 2000s. Since then, having beaten back Dell’s challenge, HP is in a very strong position in the PC-form-factor scale-out markets. Despite recent apparent gains by System x, IBM focuses on the business market, and all of the other vendors mentioned above do not compete in PC hardware. Moreover, the PC market aligns HP with Intel and Microsoft, and HP is thereby relatively well protected from IBM’s POWER chips or even Oracle/Sun’s SPARC chipset, whatever life that has left in it (there is still no sign that AMD threatens Intel’s chip dominance significantly).

 

So let HP’s scale-up servers and storage falter in technology (e.g., the Itanium bet) relative to IBM and EMC, if they do; with the steady decrease in market share by Sun, HP is, and will in the short term remain, the IBM alternative in this market. Let Dell and IBM’s System x tout their business Linux scale-out prowess; the prevalence of existing scale-out PCs in public clouds and Microsoft LOBs means that HP is well positioned to handle competition in those areas over the next couple of years.

 

And who else but IBM can attack HP?  Oracle may talk big, but Sun’s market share appears to be shrinking, and 15 years of Larry Ellison talking about the virtual desktop and Oracle Database handling your filesystem have failed to make a dent in Windows, much less Wintel. Microsoft has no need to move into hardware, and apparently no desire. Apple appears to be playing a different sport altogether.

 

In fact, the only serious threat to HP over the short term is any major movement of consumers off PCs and laptops as they move to smartphones and tablets.  Here again, I think, analysts are too apocalyptic. Yes, iPhones can handle an astonishing range of consumer tasks, but not as easily or in as sophisticated a fashion as PCs, and users still continue to want to create and organize personal stores of photos etc. as well as share them – something the smartphone does not yet do. Meanwhile, the tablet offers the small form factor and attractive user interface that today’s laptop does not; but it is more likely that the tablet will acquire PC features, than that it will morph into an iPhone.

 

And Whither IBM?

 

In fact, an interesting question, given IBM’s status as the most direct competitor of HP, is whether IBM can begin to speed up its revenue growth.  IBM has been delivering strong financials for almost 20 years, while talking a good game about innovation.  In fact, I would say that they have indeed been innovative in some areas – but not enough yet to grow their revenues fast.  Will the big innovation be green technology?  The cloud? Analytics? Because, let’s face it, the only two things that recently have delivered big revenue gains are cell phones and Web 2.0/social media – and Apple, Google, and Facebook are the ones reaping the most revenues from these, not IBM.

 

In fact, as I have argued, IBM can do quite well with its present strong position in scale-up, but it cannot dominate the business side of computer markets when HP, Microsoft, and Intel have such a strong position in scale-out, nor can it match HP in consumer markets – and these affect business sales.

 

User Bottom Line: Don’t Panic, Do Buy Both

 

It would be nice, wouldn’t it, to be back in the old days when no one ever got fired for buying IBM systems, or Oracle databases? Well, those days are gone forever, and no blunder or inspired move, by Apotheker, Palmisano, Hurd, Ellison, Ballmer, Dell, or Jobs, will bring them back.

 

Given that, the smart IT buyer will acquire a little of each, in the areas in which each is best. It is true, for example, that IBM has exceptional services scope that allows effective integration – including integration of scale-out technology from Microsoft, Intel, and HP, or for that matter (System x) from IBM itself. This “mixed” enterprise architecture is the New Normal; vendor lock-in or a tide of Web innovation fueled by an Oracle and a Sun is so 1990s.

 

It is said that when Mary Queen of Scots wed the King of France, she was saluted with: “Let others wage war; let you, happy Scotland, bear children” (it’s better in Latin). Let the VSPs and apocalyptic analysts assert that vendor personalities waging war should affect your buying decision; you, happy CIO, should buy products from any of the vendors mentioned above, without worrying that a vendor is about to go belly-up in two seconds. And the vendors that have the greatest ability to integrate, like IBM and HP, will do quite well if you do.

 

 

I would like to start by apologizing to MEDecision for this piece. Instead of writing only about the Patient-Centered “Medical Home” (PCMH) – a great idea, and one about which MEDecision has shown great insight as they move to provide PCMH implementation support – I have chosen to focus instead on the relationship between this idea and that of “business agility” in health care. In other words, I am writing about what I want to write about, not what they deserve to be heard about.

 

That said, the effort to make businesses of all stripes more agile provides an excellent perspective on PCMH, its likely effects, its evolution, and the pluses and minuses (mostly pluses) of MEDecision’s offerings and plans. If, instead of thinking of PCMH as the goal, process improvements as the effects, and MEDecision’s and other offerings as the means, we think of increased health care organization agility as the goal, overall improved outcomes and vendor ROI as the side-benefits, and the PCMH as the means, I believe that we get a clearer picture of how much PCMH really matters in the long run.

 

So let’s start by drawing the picture, very briefly: What is business agility? What is PCMH? What is MEDecision doing about PCMH? Then, let’s see just how much more agile, and more effective, PCMH will probably make health care – and what might be done even better.

 

Some of my conclusions may well surprise or shock you. Specifically, I suggest that at certain points less of an emphasis on quality will produce better outcomes for the customer/patient. Moreover, at certain points less of an emphasis on cutting costs will produce lower costs. And finally, I assert that the main value of PCMH in the long run is not that it puts more control in the hands of a single primary care physician or nurse practitioner, but rather that it is more capable of frequently interacting with and adapting to the needs of the customer/patient.

 

How could I possibly draw these conclusions? Let’s see.

 

What Is Business Agility?

 

I define business agility as the relative ability of an overall organization to handle change, including both reactive and proactive agility. Manifestations of agility include both increased speed and increased effectiveness of change. Side-effects of increased agility are lowered costs, lowered downside risks, increased upside risks (this is good!), increased revenues, increased margins, and increased customer satisfaction. These side-effects can occur over both the long term and the short term.

 

Initial data indicates that the most effective, and perhaps the only truly effective strategies allowing organizations to increase business agility are to:

· Focus primarily on agility itself, and on costs, revenues, and margins only as secondary concerns.
· Measure agility primarily as time to value, not as an ability to meet deadlines or time to market.
· Establish processes similar to those of agile software development.
· Scale agile efforts by making the scaling tools and resources fit the needs of the people driving the agile process, not by constraining those people according to the needs of the tools or the bottom line.

Key counter-intuitive findings about business agility strategies are:

· New-product-development agility improvements typically have a greater positive effect (on costs, revenues, etc.) than those which enhance operational or disaster/risk management agility. Improvements in handling external changes have a greater positive effect than improvements in handling internal changes.
· Reductions in downside risk can actually decrease agility and have a negative overall effect. Greater upside risk is almost always a good thing.
· Improvements in proactive agility produce greater positive effects than improvements in reactive agility. However, agile organizations should focus on improving processes rather than on increasing the number of things processes anticipate.
· Scaling agile processes is doable. However, scaling them by “compromise” with existing non-agile processes is likely to inhibit, reduce, or negate process and business agility improvements.

The PCMH, and MEDecision’s Take on IT

The term PCMH, frankly, is confusing. As it has evolved, it centers not around the patient or consumer (a user of health care services who may or may not be a patient at any one time), but around a central point of patient management, typically a “nurse practitioner” or “health care coordinator” operating from the point of view of the primary care physician for a consumer. Likewise, the “medical home” is not the consumer’s physical home, but a “virtual home” for each consumer’s patient processes, usually located physically and/or logistically within the health care system/infrastructure itself.

 

The key innovative concepts of the PCMH compared to present ways of handling things are:

1. It’s comprehensive (including all medical organizations that a patient interacts with, and all parts of the patient process)
2. It’s coordinated (i.e., there is one integrated process, rather than numerous isolated ones)
3. It’s continuous (not really, but it means to be and does bridge some of the gaps in previous processes)
4. It’s quality- and efficiency-driven (this is not explicit in current definitions, but is the likely outcome of today’s focus on improved quality and reduced costs)

In attempting to support implementation of the PCMH, MEDecision starts from a position of strength through various solutions. Its Alineo provides extensive support for real-world case (read: patient process) management by hospitals; Nexalign offers “decision support” for PCP-patient interactions; and InFrame provides cross-provider “health information exchanges” (HIEs). All three include collaboration tools that make the integration of separate processes into one coordinated process much more straightforward. All three ensure that insurance providers play their inescapable roles in the process. And today’s widespread implementation of MEDecision ensures that its current systems are collecting a large chunk of the quality and efficiency information that will be needed in, by, and to sustain the PCMH.

 

As you might expect, MEDecision’s immediate plans for the PCMH include extension of Alineo for use by a PCP’s “care coordinator” and development of “mini-HIEs” for the offices of PCPs. Further down the line, we might expect “telemedicine” for remote patient-PCP and patient-process interactions, “centers of excellence” for quality best practices, and better information-sharing with patients (and/or consumers) via the Web.

 Looking Through the Lens of Business Agility: Marketing Myopia

More than forty years ago, an article in Harvard Business Review titled “Marketing Myopia” introduced a fundamental tenet of good marketing: you must know what market you are really in; that is, you must know the biggest fit that you can make with your ultimate consumer’s present and future needs. Over the years, that tenet has taken many forms, from positioning cars as purveyors of feelings in the 1980s to ongoing one-to-one customer relationship management in the 1990s and leveraging social networking in the 2000s. Always, always, it has been a key component of business agility, because its success depends on the ability to continuously adapt to and anticipate consumers’ needs.

 

The “know your market” tenet also allows us to understand many of the key agility-related advantages – and potential flaws – of the PCMH. From this viewpoint, government and insurer are middlemen (if highly important middlemen); the real market is the vast majority of consumers who want the feeling embodied in the statement “I feel healthy [or can feel healthy]”. That does not at all mean that vendors should aim at deceiving consumers; in the long run, that never works. However, it does mean that the aim of vendors should be to constantly use consumer input to fine-tune their services to deliver both the objective reality and the subjective feeling of potential health to consumers, with government and insurers acting as tools for scaling agility, not reasons to shift focus from agility.

 

Looking at the PCMH concept, then, we find many attractive features for increasing agility and providing more agile processes. There is increased interaction with the consumer leading to increased personalized knowledge of the consumer (with privacy protections). There is coordination across organizations, plus the ability of one person to drive an individualized patient process, and adapt it to that patient’s needs. There is movement of process control, from people with rare interactions with a consumer, to people with somewhat more frequent interactions with the consumer. There is at least the beginning of the concept of “patient centric” processes.

 

There are also serious questions about the PCMH concept, centered on the idea of its being quality- and efficiency-driven. Business agility theory suggests that focusing on quality and efficiency rather than agility is self-defeating: it produces less quality and less efficiency than focusing on agility. How so? Consider how focusing on the speed of the patient process rather than its effectiveness – and assuming that effectiveness means an increased probability of the “right” surgery for an ailment, rather than an increased ability to “spiral in on” the right diagnosis and fine-tune it for evolving symptoms – fails to put the patient process in the context of an overall lifetime consumer/health-system interaction. Consider also how inadequate such an approach is in a constantly changing environment, as the consumer and the society change, and how such an approach focuses on tool and physician costs rather than on supporting the ability of tools and physicians to better adapt to consumer needs.

 

These are all clear and present dangers of so-called quality-driven, efficiency-driven processes. The results of such approaches are more of what we have been seeing for the last forty years: dissatisfaction of every party in the process; cost squeezes that somehow increase expenses; process controls that eliminate touchy-feely services along with so-called inefficiencies; cookbook medicine that reduces the immediate risk of medical malpractice suits but increases the likelihood of poor outcomes, which in turn increase the likelihood of malpractice suits; insurer and government regulations that continually lag medical knowledge and user needs; and usually inadequate, often adversarial problem-resolution processes.

 

To correct these shortcomings, I believe it is necessary to go over the PCMH with a fine-tooth comb, aiming to make it agile rather than high-quality or cost-effective. For example, the “medical home” should be virtual, allowing a hand-off of central control to the hospital when the consumer is an in-patient, and to an in-home nurse or the patient him/herself for elderly consumers living in age-adapted homes. Use of outcome data should focus on consumer-driven changes in the service, the process, or process agility, not who did what wrong.  There should be greater focus on the use of up-to-date consumer data such as lifestyle decisions (private) that correlate with ailments, health worries, and “what if you face this situation?” scenarios. Annual checkups, specialist appointments, and hospital treatments should be all part of a strategy for continuous, dynamic interactions with the patient, with both personalized (diagnostic) and process-focused (procedures/surgeries) professionals fully aware of the patient’s historical context and able to adapt immediately via virtual access to other professionals in other parts of the process at any time. Over-provisioning to handle surges should be a necessary part of the cost structure, metrics and incentives on all sides should start with “time to value” rather than “outcome price/performance”, and virtualization (the ability to untether the process from any physical location) should be everywhere.

 Conclusions and Suggestions

Although business agility applied to health care would be nice, it is very unlikely that it will become pervasive in the next few years. After all, cutting-edge agile software development is still not really the norm in business IT, a decade after the introduction of the concept. Still, the PCMH is an excellent place to start, and MEDecision is in an excellent position to foster both PCMH and PCMH-driven agility right now – if some of the rough edges concerning quality and efficiency obsession can be smoothed away. Let me repeat my counter-intuitive conclusions:

· Less emphasis on quality and more emphasis on adaptability in the PCMH patient process should lead to higher-quality outcomes;
· Less emphasis on cost-cutting by increased efficiency and more emphasis on more flexible patient processes should lower costs;
· Less focus on the health care professional and more focus on frequent interactions with patients, even outside of the patient process, should allow PCMH to provide more satisfaction to both professionals and customers.

It must be noted that the biggest barrier to improved agility, oddly enough, is not the government, but the amazing ability of insurers to continually shoot themselves in the foot, business-wise. Here are two recent examples. First, the insurance industry should have known by 2007, from the examination of climate science, that certain parts of Florida were or would become difficult markets for home insurance in the not too distant future. It appears that only this year are insurers considering this possibility, and I doubt that they will do anything effective until 2012. The result: five years of losses, easily predictable and preventable.

 

The second example (altered to protect the innocent) is of a condition, identified in the late 1980s, that under certain circumstances actually grants individuals a longer healthy lifespan. One insurance company, hearing the news of a diagnosis in the early 1990s, but using medical knowledge 10 years out of date to assess risk, refused further life insurance for a customer except at exorbitant prices, despite the fact that available data and family history confirmed that the customer had, if anything, fewer risks.

 

They have continued to do so for the past 20 years, as the customer approaches average male life span without serious problems, and in the process have cost themselves $1 million in additional life insurance sales, $300,000 in long-term care insurance premiums, and about $100,000 in wasted sales and customer-service costs – and the customer is extremely dissatisfied. Meanwhile, other insurance companies have typically followed the uninformed lead of the original insurance company, without even bothering to recheck with their medical experts.

Let’s face it, insurers are so unagile and so focused on out-of-date risk assessments and costs that they often wind up being more vulnerable to downside risks, higher costs, and lower profits. Where should insurers look for advice on how to become more agile? Why, to software vendors, who are the leaders in implementing business agility, of course. Software vendors such as – well, such as MEDecision, who are themselves implementing agile new-product software development. Hmm. Didn’t I say I wasn’t going to talk about MEDecision? Oh, well.

The recent IBM announcement of new “virtualization” capabilities had the usual quota of significant improvements in value-add to the customer – but its real significance was a signpost to a further evolution of the meaning of “virtual”, a step forward that, as always before, drives new user efficiencies and effectiveness.

 

The focus of the announcement was on Intel-based hardware, and the IBM briefing pointed out ways in which the IBM solution went beyond so-called “basic” virtualization on PC-server-style Intel platforms, and the resulting 25-300% improvements in costs, TCO (total cost of ownership), utilization, power and time expended, etc. The extensions included the usual (for those following IBM) suspects: Tivoli (especially Systems Director), System x with blades, CloudBurst, SONAS, IBM Service Delivery Manager, and popular third-party software from the likes of VMWare, CA, BMC, and Microsoft. The emphasis was on improvements in consolidation, workload management, process automation, and solution delivery. Nothing there to signal a significant step forward in virtualization.

 

Or is there? Here’s what I see as the key step forward, and the key value-add. Sorry, it’s going to require a little history.

 

Whither Virtual?

The word virtual, used to mean a disconnect from physical computers, actually has changed quite a bit since its beginning in the 1960s – and so virtualization, meaning changing existing architectures in the direction of such a disconnect, is a far bigger job now than then. It started in the 1960s with “virtual memory”, the idea that a small bit of RAM could be made to look like a massive amount of main memory, with 80-90% of the performance of physical RAM, by judicious access to greater-capacity disk. Underlying this notion – now enshrined in just about every computer – was the idea of a “veneer” or “false face” to the user, application, or administrator, under which software desperately labored to make the virtual appear physical.

 

Shortly afterwards, in the 1960s and early 1970s, the “virtual machine” appeared, in such products as the IBM 8100. Here, the initial meaning of “virtual” was flipped on its head: instead of placing a huge “face” on a small physical amount of memory, a VM put a smaller “face” mimicking a physical computer over a larger physical one. At the time, with efficient performance the byword of vendors, VMs were for the upper end of the market, where large amounts of storage meant that running multiple “machines” on a computer provided transactional concurrency that in certain cases made it worthwhile to use VMs instead of a monolithic operating system to run multiple applications on a single machine. And so it remained, pretty much, until the 1990s.

 

In the 1990s, the Internet and gaming brought to consumers’ attention the idea of “virtual reality” – essentially, a “false face” over an entire computer, set of computers, or even an Internet that created full-fledged fantasy worlds a la Second Life. At almost the same time, Sun’s espousal of Java brought the notion of VMs as portable “single-application computers” across platforms and vendors. Both of these added the notion of virtuality across multiple physical machines.

 

The key addition to the meaning of “virtual” over the last decade has been the notion of “storage virtualization”, and more recently a variant popularized by Composite Software, “data virtualization”. In this case, the disconnect is not so much between one physical part of a machine and another, or even between two machines, but between programs across physical machines and data across physical machines. The “veneer”, here, presents physical storage of data (even across the Internet, in cloud computing) as one gigantic “data store”, to be accessed by one or multiple “services” that themselves are collections of applications disconnected from specific physical computers.
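
As a toy illustration of that last “veneer” – and emphatically not any vendor’s actual interface – storage/data virtualization amounts to presenting several physical stores to the application as one logical data store:

```python
class PhysicalStore:
    """Stands in for one physical device or remote service holding key/value data."""
    def __init__(self, name: str, data: dict):
        self.name = name
        self.data = data

class VirtualDataStore:
    """The 'veneer': applications see one data store, wherever the bytes actually live."""
    def __init__(self, stores: list):
        self.stores = stores

    def get(self, key: str):
        for store in self.stores:      # physical location is hidden from the caller
            if key in store.data:
                return store.data[key]
        raise KeyError(key)

local_disk = PhysicalStore("local", {"report.doc": b"..."})
cloud_bucket = PhysicalStore("cloud", {"photo.jpg": b"..."})
vds = VirtualDataStore([local_disk, cloud_bucket])
print(vds.get("photo.jpg"))   # the caller neither knows nor cares that this came from the cloud
```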

 

Note that at each stage, an extension of the meaning of virtual meant major cost, performance, and efficiency benefits for users – not to mention an increasing ability to develop widely-used new applications for competitive advantage. Virtual memory cost much less than physical memory. The virtual machine, as it evolved, allowed consolidation and movement onto cheaper remote platforms and, ultimately, the cloud. Storage virtualization has provided enormous data-management cost savings and performance improvements, especially as it allows better workload and data-access parallelization and data compression. And the latter two have played a key role in the second-generation business success of the Web.

 

So what’s next in the saga of virtualization? And will this, too, bring major benefits to users?

 

Tying It All Together

One can imagine a few significant ways to extend the meaning of “virtual” at this point – e.g., by applying it to sensor-driven data streams, as in virtual event stream processing. However, what is significant to me about IBM’s announcement is that it includes features to tie existing meanings of virtual together.

 

Specifically, it appears that IBM seeks to create a common “virtual reality” incorporating virtual machines, storage virtualization, and the “virtual reality” of the cloud. It provides a common “veneer” above these (the “virtual reality” part) for administrators, including some common management of VMs, services, and storage. Underneath that, it provides specific links between these for greater integration – links between virtualized storage (SONAS), virtualized applications (VMWare, KVM), and virtual services (IBM’s cloud support), all disconnected from physical machines. These links cover specific activities, including the consolidation, workload management, process automation, and solution delivery tasks cited above. Added to other IBM or third-party software, they can provide a full disconnected virtual infrastructure for any of today’s key IT or consumer needs.
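
To show what “a common veneer above these” means in the abstract – this is a purely illustrative sketch, not IBM’s software – imagine one administrative facade delegating to separate VM, storage, and service managers, so that a single action touches all three layers at once:

```python
class VMManager:
    def provision(self, name: str) -> str:
        return f"VM '{name}' started"

class StorageManager:
    def allocate(self, gigabytes: int) -> str:
        return f"{gigabytes} GB of virtualized storage allocated"

class ServiceCatalog:
    def deploy(self, service: str) -> str:
        return f"service '{service}' deployed to the cloud pool"

class UnifiedAdminConsole:
    """One facade over VM, storage, and service management: the 'common veneer'."""
    def __init__(self):
        self.vms = VMManager()
        self.storage = StorageManager()
        self.services = ServiceCatalog()

    def deliver_solution(self, name: str, gigabytes: int) -> list:
        # A single administrative request drives all three virtualization layers together.
        return [
            self.vms.provision(name),
            self.storage.allocate(gigabytes),
            self.services.deploy(name),
        ]

print(UnifiedAdminConsole().deliver_solution("order-entry", 200))
```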

 

So, as I see it, that’s the significant step forward: not coming up with a new meaning of “virtual”, but integrating the use cases of the various existing meanings of virtual – tying it all together. And the benefits? The usual benefits of such integration, as evidenced by IBM’s “success stories”: greater efficiency across a wider range of existing infrastructure, leading to major cost, performance, and ultimately revenue improvements. These will be evidenced especially in the cloud, since that’s what everyone is paying attention to these days; but they go beyond the cloud, to infrastructures that may well delay cloud integration, defer it forever, or move directly to the cloud’s successor architectures.

 

Users’ Bottom Lines

The key message for users, to my mind, is to treat the new manifestation of virtualization as an added reason to upgrade one’s flavor of virtualization, although not necessarily a reason to upgrade in and of itself. Rather, it should make one’s specific IT budget plans for the immediate future related to virtualization more of a slam dunk, and cause IT planners to dust off some “nice to haves”. And for those seeking a short-cut to current cloud technology, well, wrapping all sorts of virtualization in one bundle seems like a good bet.

 

When users make a choice of vendor, I would say that in most cases IBM ought to be at least in the conversation. This is not to say one vendor’s approach is clearly superior to another’s on its face (remember, virtualization is a “false face”!). However, in this announcement IBM has pointed to specific use cases where its technology has been used and has achieved significant benefits for the user; so much of the implementation risk is reduced.

 Above all, IT buyers should be conscious that in buying the new virtualization, they are tapping further into an extremely powerful current in the computer technology river, and one that is currently right in the main stream. There is, therefore, much less risk from riding the current via new-virtualization investment than from being edged slowly into a backwater through inaction. As virtualization enables cloud, and cloud changes virtualization’s face, the smart IT buyer will add that new face to customer-facing apps, using vendors like IBM as makeup artists supreme.

Recently, I read yet another blog post in which a user ranted against an annoying feature of the latest word processing consumer software that wiped out work back to an autosave. The problem, he seemed to think, lay with those annoying developers who kept adding features to products that weren’t really needed, making them harder to understand and use, and increasing the chance of accidental mistakes causing a meltdown. Sorry, but I disagree. And having helped develop a word processor back in the late 70s and a file system back in the late 80s, and followed the field as an analyst since, I’ve seen the world from both sides now, as a computer scientist and as a marketer; so I think I have a good perspective on the problem.

There are two related problems with most software used by consumers: what I call orthogonality (in math, elegance) and metaphor. Orthogonality says that your basic operations on which everything else is built are "on the same level" and together they cover everything -- power plus intuitive sense to the user. Metaphor says that the idea of how the user operates with this software is a comparison to a model -- and that model should be as powerful as possible.

In the case of word processing (and most other consumer software) all products are not as orthogonal as they should be.  One of the reasons the original Word succeeded was that it was more orthogonal than its competitors in its commands:  file, edit, etc. are a pretty good take.  That means that necessary additions and elaborations are also more orthogonal; the rich get richer.

Where everyone (including Apple and Google) really falls down is in metaphor.  To take one example we are still haunted by: the original metaphor for word processors and other desktop software was, indeed, a physical desktop, with a one-level filing system underneath.  It took a while for people to accept a wholly unfamiliar metaphor, the folder within folder within folder -- even though it was far more powerful, easier to program and upgrade, and, on average, made things easier for the user who learned the new metaphor. For the last 25 years, all consumer software vendors have consistently rejected an even better metaphor: what is called in math the directed acyclic graph.  This would allow multiple folders to access the same folder or file: essentially, incredibly easy cross-filing. I know from design experience that using this approach in a word processor or other consumer software would be almost as intuitive as the present "tree" (folder) metaphor. Instead, software vendors have adopted kludges such as "aliases" that only make the product far more complicated. The same is true of supporting both dynamic and static file storage on the desktop (too long a discussion).
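
To make the directed-acyclic-graph metaphor concrete, here is a minimal sketch (invented names, not any real file system's API) in which the same file object sits under two folders at once, with no copies and no aliases:

```python
class Node:
    """A file or folder in a DAG file system: folders may share children."""
    def __init__(self, name: str, is_folder: bool = True):
        self.name = name
        self.is_folder = is_folder
        self.children = []   # only meaningful for folders

    def add(self, child: "Node") -> None:
        # Unlike a tree, the same child may be attached under many parents.
        self.children.append(child)

# One tax return, cross-filed under both "2010 Taxes" and "House Purchase".
tax_return = Node("tax_return_2010.doc", is_folder=False)
taxes = Node("2010 Taxes")
house = Node("House Purchase")
taxes.add(tax_return)
house.add(tax_return)

# Both folders reach the identical object: no duplication, no alias indirection.
print(taxes.children[0] is house.children[0])   # True
```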

The reason orthogonality and good metaphor rarely get done or last is that almost never do a good developer and a good marketer (one who understands not only what consumers say they want but what they could want) connect in software development. Sorry, I have watched Steve Jobs for 30 years now, and while he is superb at the marketing end, he does very badly at understanding metaphor plus orthogonality from the mathematical/technical point of view. And the rest are probably worse.

The net result for Word, and, sorry Mr. User, for all those "better" previous word processors, is that time makes all these problems worse, and it results in either failure to incorporate valuable new metaphors (and I do think that spell- and grammar-checking are overall better than the old days, and worth the frustrations of poor orthogonality and awkward usage in isolated cases) or retrofitting of a more orthogonal approach. Specifically, I suspect (because I think I've seen it) that supporting the old WordPerfect ctrl-a approach for both the old Word command interface and the new toolbar style plus new features added just one too many dangerous key combinations next to the ones traditionally used. You miss, you pay -- and yes, the same thing will happen with touch screen gestures.

Whether this business game is worth the candle I leave to anyone who is a user.  I know, however, from long experience, who to blame -- and it's not primarily the latest developer. Fundamentally, I blame a long series of marketers who at least are told about the problem -- I've told many of them myself -- and when push comes to shove, keep chickening out.  The reasons for not doing orthogonality with a better metaphor always seem better to them at the time, the development time longer, the risks of the new higher, the credibility of the trouble-maker suspect, and they won't be around to deal with the problems of playing it safe. These are all good superficial reasons; but they're wrong. And we all suffer, developers not least – because they have to try to clean up the mess.

So give a little blame, if it makes you feel better, to the latest developer or product upgrade designer, who didn’t understand how the typical consumer would use the latest version; give a little blame, if you can figure out how, to the previous developers and designers, who didn’t anticipate these problems. But the marketer, be he the CEO or a lowly product marketer, who makes the fundamental decision about where to go next is really the only person who can hear both the voice of the consumer and the voice that understands the technical/mathematical usefulness of orthogonality and a good metaphor. The marketer is the one who has a real opportunity to make things better; to him, the user should assign the primary blame.

 

The writer Peter Beagle, commenting favorably on JRR Tolkien’s anti-heroes, once wrote “We worship all the wrong heroes.” I won’t go that far. But I will say that we need to hold our present consumer software marketing heroes to higher standards.  And stop reflexively making the developer the villain.

As Han Solo noted repeatedly in Star Wars – often mistakenly – I’ve got a bad feeling about this.

 

Last year, IBM acquired SPSS. Since then, IBM has touted the excellence of SPSS’ statistical capabilities, and its fit with the Cognos BI software. Intel has just announced that it will acquire McAfee. Intel touts the strength of McAfee’s security offerings, and the fit with Intel’s software strategy. I don’t quarrel with the fit, nor with the strengths that are cited. But it seems to me that both IBM and Intel may – repeat, may – be overlooking problems with their acquisitions that will limit the value-add to the customer of the acquisition.

 

Let’s start with IBM and SPSS. Back in the 1970s, when I was a graduate student, SPSS was the answer for university-based statistical research. Never mind the punched cards; SPSS provided the de-facto standard software for the statistical analysis typical in those days, such as regression and t tests. Since then, it has beefed up its “what if” predictive analysis, among other things, and provided extensive support for business-type statistical research, such as surveys. So what’s not to like?

 

Well, I was surprised to learn, by hearsay, that among psychology grad students, SPSS was viewed as not supporting (or not providing an easy way to do) some of the advanced statistical functions that researchers wanted to use, such as scatter plots, compared to SAS or Stata. This piqued my curiosity; so I tried to get onto SPSS’ web site (www.spss.com) on a Sunday afternoon to do some research on the matter. After several waits of 5 minutes or so for a web page to display, I gave up.[1]

 

Now, this may not seem like a big deal. However, selling is about consumer psychology, and so good psychology research tools really do matter to a savvy large-scale enterprise. If SPSS really does have some deficits in advanced psychology statistical tools, then it ought to at least support the consumer by providing rapid web-site access, and it or IBM ought to at least show some signs of upgrading the in-depth psychology research capabilities that were, at least for a long time, SPSS’ “brand.” But if there were any signs of “new statistical capabilities brought to SPSS by IBM” or “upgrades to SPSS’ non-parametric statistics in version 19”, they were not obvious to me from IBM’s web site.

 

And, following that line of conjecture, I would be quite unconcerned, if I were SAS or Stata, that IBM had chosen to acquire SPSS. On the contrary, I might be pleased that IBM had given them lead time to strengthen and update their own statistical capabilities, so that whatever happened to SPSS sales, researchers would continue to require SAS as well as SPSS. It is even not out of the bounds of possibility to conjecture that SPSS will make IBM less of a one-stop BI shop than before, because it may open the door to further non-SPSS sales, if SPSS falls further behind in advanced psych-stat tools – or continues to annoy the inquisitive customer with 5-minute web-site wait times.

 

Interestingly, my concern about McAfee also falls under the heading of “annoying the customer.” Most of those who use PCs are familiar with the rivalry between Symantec’s Norton and McAfee in combating PC viruses and the like. For my part, my experience (and that of many of the tests by PC World) was that, despite significant differences, both did their job relatively well, and that one could not lose by staying with either, or by switching from the one to the other.

 

That changed about 2-3 years ago. Like many others, I chose not to move to Vista, but stayed with XP. At about this time, I began to take a major hit in performance and startup time. Even after I ruthlessly eliminated all startup entries except McAfee (which refused to stay eliminated), startup took in the 3-5 minute range, performance in the first few minutes after the desktop displayed was practically nil, and performance after that (especially over the Web) was about half what it should have been. Meanwhile, when I switched to the free Comcast version of McAfee, stopping their automatic raiding of my credit card for annual renewals was like playing Whack-a-Mole, and newer versions increasingly interrupted processing at all times, either to request confirmations of operations or to carry out unsolicited scans that slowed performance to a crawl in the middle of work hours.

 

Well, you say, business as usual. Except that Comcast switched to Norton last year, and, as I have downloaded the new security software to each of five new/old XP/Win7 PCs/laptops, the difference has been dramatic in each case. No more prompts demanding response; no more major overhead from scans; startup clearly faster, and faster still once I removed stray startup entries via Norton; performance on the Web and off the Web close to performance without security software. And PC World assures me that there is still no major difference in security between Norton and McAfee.

 

Perhaps I am particularly unlucky. Perhaps Intel, as it attempts to incorporate McAfee’s security into firmware and hardware, will fix the performance problems and eliminate the constant nudge-nudge wink-wink of McAfee’s response-demanding reminders. It’s just that, as far as I can see from the press release, Intel does not even mention “We will use Intel technology to speed up your security software.” Is this a good sign? Not by me, it isn’t.

 

So I conjecture, again, that Intel’s acquisition of McAfee may be great news – for Symantec. What happens in consumer tends to bleed over into business, so problems with consumer performance may very well affect business users’ experience of their firewalls, as well; in which case, this would give Symantec the chance to make hay while Intel shines a light on McAfee’s performance, and to cement its market to such a point that Intel will find it difficult to push a one-stop security-plus-hardware shop on its customers.

 

Of course, none of my evidence is definitive, and many other things may affect the outcome. However, if I were a business customer, I would be quite concerned that, along with the value-add of the acquisitions of SPSS by IBM and McAfee by Intel, may come a significant value-subtract.



[1] No other industry web page I referenced at that time, including www.ibm.com, took more than 5 seconds.

In a recent briefing, someone suggested to an executive of one of Microsoft’s rivals that he might want to partner with Microsoft. He looked a bit nonplussed at the suggestion. And that, in turn, triggered memories of wonderful quotes that I have heard over the years, about Microsoft and others.

Here is a very brief list of some of my favorite quotes. My standards are high: the speaker should show pithiness, wit, and insight, not just bile and the gift of gab. And the quote should stand the test of time: it should, fairly or unfairly, relate to the enduring brand or reputation of the companies on which the speaker commented.

1.       I remember reading this one in a trade publication in the early 1990s. The president of Ashton-Tate, faced with declining sales of A-T’s product in the face of competition from Microsoft’s Excel and Access, set out on a joint project with Microsoft. Later, he was asked to comment. “The only thing worse than competing with Microsoft,” he said, “is working with them.” Time has long since hewn some of the sharp edges off Microsoft’s ability to work with other companies. Still, around the industry, long-time competitors and others with long memories are wary of Microsoft, both as a competitor and as a partner. From old-timers, I still get a laugh with that one.

2.       Also in the early 1990s, Apple (Little Red), IBM (Big Blue), and a third partner set out to do an emulation of Unix that was clearly aimed at Sun. The response of Scott McNealy was blunt: “This consortium,” he asserted, “is Purple Apple Sauce.” And so it proved to be. This one has shown itself to be less a comment about the Apple and IBM of the future, and more about the inability of joint projects between vendors to live up to their hype.

3.       In 1990, I was in the crowd when Ken Olsen of Digital Equipment – pretty much number 2 in the computer industry at that time – was asked about UNIX. He sighed, “Why are people trying to make it into an enterprise engine? It’s such a nice toy operating system.” Within five years, that toy operating system had effectively destroyed DEC.  I pick on Ken unfairly, because he had had a pretty good track record up to that point. However, I continue to be wary of companies that disparage new technologies. It’s nice to keep a company focused; but when you speak beyond your technical competence, as Ken did, it’s sometimes a recipe for disaster.

4.       This one dates from the later 1990s. Some magazine had done an interview with Larry Ellison, where he indulged in some of his trademark zingers about competitors. When it came to Computer Associates, however, which at the time was making money by buying legacy software, laying off all possible personnel, and maintaining the software as is, Larry was clearly straining to say something nice. “Well,” he noted, “Every ecosystem needs a scavenger.” He may have meant it as a put-down; but I thought it was, and is, an important point. Yes, every computer ecosystem should have a scavenger; because, let’s face it, most computer technologies never go away. I am sure there are still a few Altairs and drums out there, not to mention hydraulic computing. And I am sure that CA really appreciated the remark – not.

Well, that’s it for now. I have left out a few lesser ones I have enjoyed, such as the time John Sculley got up to speak and my Apple-executive table partner braced himself. What’s the matter, I asked. Well, he said, the last time John spoke at a venue like this, he promised that Apple would be delivering the Newton – and then we had to go build it.

Anyone else remember some good ones?

 

Climate Readings
04/05/2010

This blog is not a forum for climate change discussion; nor do I intend to turn it into one.  However, I recently had the opportunity to look at two books on the subject; the more recent (“The Climate Crisis,” by Archer and Rahmstorf) was published this year. These added new and, to my mind, disturbing considerations to the climate change topic.

 

Below, I note the new conclusions which, by my own subjective assessment, arise from their analysis. Note that I am not attempting to say that these are definitive; just that they seem to follow from the analysis presented in these books, based on my quick-and-dirty calculations.

 

One: It all depends on Greenland. The two big possible sources of sea-level rise, above and beyond what’s already been verified, are the land ice on Greenland and in Antarctica. It appears that over the next few decades Antarctica will be “neutral”: the sea ice that blocks land ice from sliding into the sea (and therefore from melting faster) is apparently still stable for the two largest of the three sea-ice extensions there, so until it starts breaking up, matters are likely to continue more or less as they have for the last 30 years or more.

 

Greenland, however, appears to be accelerating its ice’s slide into the sea, and past events have shown that this type of sudden change can happen in 10-100 years, instead of centuries. If all the ice in Greenland were to melt into the sea, it would raise sea levels by about 23 feet.  If we assume that half of Greenland’s ice were to melt (as it did in a past meltdown) in the next 50 years, then we are talking 12 feet of sea level rise all over the world, on top of the foot rise we have seen in the last 50 years and the 2-3 foot rise that is “baked in” from other factors. So if Greenland goes, by 2060 we may see 15 feet of sea level rise; if not, we might see only 2 feet.
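For concreteness, here is a minimal back-of-the-envelope sketch (in Python) of the arithmetic behind those numbers; the constants are simply the rough figures quoted above, rounded my way, and are illustrative assumptions rather than data taken from the books.

# Back-of-envelope sea-level arithmetic using the rough figures quoted above;
# illustrative assumptions, not data from the books.
greenland_full_melt_ft = 23      # rise if all of Greenland's land ice melted
baked_in_ft = 2.5                # midpoint of the 2-3 feet already "baked in"

half_melt_ft = greenland_full_melt_ft / 2     # ~11.5 feet, rounded to ~12 in the text

# Further rise by roughly 2060 under the two scenarios; the text rounds these
# loosely to "15 feet" and "2 feet", counting the pieces slightly differently.
if_greenland_goes = half_melt_ft + baked_in_ft     # ~14 feet
if_greenland_holds = baked_in_ft                   # ~2-3 feet

print(if_greenland_goes, if_greenland_holds)       # 14.0 2.5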

 

Moreover, a Greenland meltdown has a big effect on the amount of global warming. That meltdown dumps fresh water into the ocean right near the major “carbon pump” that sends much of the carbon dioxide in the air deep under water, not to surface for thousands of years. Fresh water doesn’t sink – so the pump may slow or stop. This means that the oceans, which up to now have been soaking up maybe 30-40% of the carbon dioxide from human emissions, would decrease their “carbon uptake” drastically. And that, in turn, might add 2-3 degrees (Fahrenheit) to the “final equilibrium” increase in global temperature. There’s already a consensus around 5.4 degrees F increase from global warming, which may well be on the conservative side – now we’re talking 8 degrees F global warming. And if we use less conservative initial assumptions, we are talking about maybe 7 degrees F global warming by 2060 and 10 degrees F by 2100.
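Again for concreteness, a minimal sketch of the temperature arithmetic; the inputs are just the approximate figures quoted above (in degrees Fahrenheit), not model output.

# Rough temperature arithmetic using the approximate figures quoted above;
# illustrative assumptions only.
consensus_equilibrium_f = 5.4    # the ~3 degrees C consensus estimate, times 1.8
carbon_pump_extra_f = 2.5        # midpoint of the extra 2-3 degrees F if the
                                 # ocean "carbon pump" slows or stops

print(round(consensus_equilibrium_f + carbon_pump_extra_f, 1))   # 7.9, i.e. roughly 8 degrees F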

 

Two: It wasn’t just our bad characteristics that resulted in global warming; it was also our good ones. The data show an almost-level temperature between 1940 and 1970. The reason appears to be that our high and increasing levels of pollution put more and more sunlight-reflecting aerosols (particulates) into the air, which counteracted the warming from the increase in carbon dioxide (and, to a lesser extent, methane). When we began to clean up that pollution in the 1970s, the rise resumed.

 

Three: nuclear is not the long-term answer.  This one I am not sure I got the gist of, but here’s how it runs: supplies of uranium, if we were to use present technology and build enough new plants to cover most needs, would last for 58 years. If we use breeder reactors, then uranium would last on the order of 1,000-5,000 years; but breeder reactors are incredibly vulnerable to terrorism and state misuse.  In other words, if we use breeder reactors, the chance of nuclear weapons being misused – and of nuclear winter – goes up a hundredfold, and nuclear winter would be even worse than global warming.

 

Four: as we have been talking, our chance to avoid the big impacts of global warming may have already passed. The best estimate right now for the upcoming long-term increase in global temperature is 3 degrees Celsius (multiply by 1.8 for degrees Fahrenheit). The UN has identified low-cost changes in our industries and lifestyles (less than $100/ton of carbon dioxide emitted) that would limit that increase to 1.5 degrees Celsius. Any change that is 2 degrees Celsius or greater triggers the big impacts: bigger extreme weather events, large extensions of drought, mass die-offs and species extinctions (because humanity has paved over much of the world, making migration of many species in reaction to change impossible). Now, keep in mind that (without going into the details) there is a good possibility that the 3 degrees estimate and therefore the 1.5 degrees estimate are too optimistic by ½-1 degree. If that is true, then because the low-cost solution depends on all the low-cost technologies in a variety of areas, the cost to achieve 1.5-degree change almost certainly goes up a lot.  On the other hand, if we had started 10 years ago, we would have been able to limit the increase to maybe 1 degree Celsius, and we would still be safely in low-cost territory. Likewise, if we fail to act effectively for another 10 years (everything else being equal), costs will go up perhaps twice as sharply, even when we factor in the arrival of new technologies in some areas.

 

Five: the United States continues to be the biggest problem. According to the data, from 1950-2003 the US was the biggest (net) contributor of carbon dioxide emissions per capita; and today, the US is the biggest contributor of carbon dioxide emissions (overall, not per capita). Since China and Russia are numbers 2 and 3, it’s pretty likely that the US continues to be the biggest contributor per capita. That means that the US is having the biggest negative impact on global warming, and that net decreases in carbon dioxide emissions by the US, individually and collectively, would have the biggest positive impact.

 

My take? Boy, do I hope I’m wrong in my conclusions. But, as someone once noted, hope is not a plan.

 

 

A few weeks ago, I met an old friend – Narain Gehani, once at Cornell with me, then at Bell Labs, and now at Stevens Institute of Technology – and he presented me with a book of reminiscences that he had written: “Bell Labs: Life in the Crown Jewel”. I enjoyed reading about Narain’s life in the 35 years since we last met; Narain tells a great story, and is a keen observer. However, one thing struck me about his experiences that he probably didn’t anticipate – the frequency with which he participated in projects aimed at delivering software products or software-infused products that ultimately didn’t achieve the success they merited in one marketplace or another (Surf n Chat and Maps r Us were the ones that eventually surfaced on the Web). 

This is one aspect of software development that I rarely see commented on. We have all talked about the frequency of software-development project failure due to poor processes. But how about projects that by any reasonable standard should be a success?  They produce high-quality software. The resulting solutions add value for the customer. They are well received by end users (assuming end users see them at all). They are clearly different from everything else in the market. And yet, looking back on my own experience as a software developer, I realize that I had a series of projects that were strikingly similar to Narain’s – they never succeeded in the market as I believe they should have. 

My first project was at Hendrix Corporation, and was a word processing machine that had the wonderful idea of presenting a full page of text, plus room on the side for margins (using the simple trick of rotating the screen by 90 degrees to achieve “portrait” mode). For those who have never tried this, it has a magical effect on writing:  you are seeing the full page as the reader is seeing it, discouraging long paragraphs and encouraging visual excitement. The word processor was fully functional, but after I left Hendrix it was sold to AB Dick, which attempted to fold it into their own less-functional system, and then it essentially vanished. 

Next was a short stint at TMI, developing bank-transfer systems for Bankwire, Fedwire, etc. The product was leading-edge at the time, but the company’s compensation scheme was built around bonuses that shrank as the company grew. As employee dissatisfaction peaked, it sold itself to Logica, key employees left, and the product apparently ceased to evolve. 

At Computer Corporation of America, I worked on a product called CSIN. It was an appliance that would do well-before-its-time cross-database querying about chemical substances “in the field”, for pollution cleanup efforts. It turned out that the main market for the product was librarians, and they did not have the money to buy a whole specialized machine. That product sold one copy in its lifetime, iirc. 

Next up was COMET/204, one of the first email systems, and the only one of its time to be based on a database – in this case, CCA’s wonderful MODEL 204. One person, using MODEL 204 for productivity, could produce a new version of COMET/204 in half the time it took six programmers, testers, and project managers to create a competing product in C or assembler. COMET/204 had far more functionality than today’s email systems, and solved problems like “how do you discourage people from using Reply All too much?” (answer: you make Reply the default, forcing would-be Reply All spammers to expend extra effort). While I was working on it, COMET/204 sold about 10 copies. One problem was the price: CCA couldn’t figure out how to price COMET/204 competitively when it contained a $60,000 database.  Another was that in those days, many IT departments charged overhead according to usage of system resources – in particular, I/Os (and, of course, using a database meant more I/Os, even if it delivered more scalable performance). One customer kept begging me to hurry up with a new version so he could afford to add 10 new end users to his department’s system. 

After that, for a short stint, there was DEVELOPER/204.  This had the novel idea that there were three kinds of programming: interface-driven, program-driven, and data-driven. The design phase would allow the programmer to generate the user interface, the program outline, or the data schema; the programmer could then generate an initial set of code (plus database and interface) from the design. In fact, you could generate a fully functional, working software solution from the design. And it was reversible: if you made a change in the code/interface/data schema, it was automatically reflected back into the design. The very straightforward menu system for DEVELOPER/204 simply allowed the developer to do these things. After I left CCA, the company bundled DEVELOPER/204 with MODEL 204, and I was later told that it was well received. However, that was after IBM’s DB2 had croaked MODEL 204’s market; so I don’t suppose that DEVELOPER/204 sold copies to many new customers. 

My final stop was Prime, and here, with extraordinary credit to Jim Gish, we designed an email system for the ages. Space is too short to list all of the ideas that went into it; but here are a couple: it not only allowed storage of data statically in user-specified or default folders, but also dynamically, by applying “filters” to incoming messages, filters that could be saved and used as permanent categorizing methods – like Google’s search terms, but with results that could be recalled at will. It scrapped the PC’s crippled “desktop metaphor” and allowed you to store the same message in multiple folders – automatic cross-referencing. As a follow-on, I did an extension of this approach into a full-blown desktop operating system. It was too early for GUIs – but I did add one concept that I still love: the “Do the Right Thing” key, which figured out what sequence of steps you wanted to take 90% of the time, and would do it for you automatically. Because Prime was shocked by our estimate that the email system would take 6 programmers 9 months to build, they farmed it to a couple of programmers in England, where eventually a group of 6 programmers took 9 months to build the first version of the product – releasing the product just when Prime imploded. 
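To make the saved-filter and cross-referencing ideas concrete, here is a minimal, purely hypothetical sketch in Python – not the original Prime design, whose details are long gone – showing saved filters acting as permanent categories whose contents are recomputed on demand, and the same message living in more than one folder.

# Hypothetical sketch of "saved filters as permanent categories" and
# cross-referencing (one message in many folders); illustration only.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    subject: str
    body: str

@dataclass
class Mailbox:
    messages: list = field(default_factory=list)
    folders: dict = field(default_factory=dict)        # static folders: name -> messages
    saved_filters: dict = field(default_factory=dict)  # dynamic "folders": name -> predicate

    def file(self, msg, *folder_names):
        self.messages.append(msg)
        # Cross-referencing: the same message object can be filed in many folders.
        for name in folder_names:
            self.folders.setdefault(name, []).append(msg)

    def save_filter(self, name, predicate):
        # A saved filter is a named, reusable query over all messages.
        self.saved_filters[name] = predicate

    def view(self, name):
        # A "folder" can be either a static container or a saved query;
        # the user doesn't have to care which.
        if name in self.saved_filters:
            return [m for m in self.messages if self.saved_filters[name](m)]
        return self.folders.get(name, [])

box = Mailbox()
box.file(Message("jim", "Q3 budget", "numbers attached"), "Finance", "Jim")
box.save_filter("Budgets", lambda m: "budget" in m.subject.lower())
print(len(box.view("Finance")), len(box.view("Budgets")))   # 1 1

The design point, as in the description above, is that static folders and dynamic filter results present the same face to the user, so cross-referencing comes along for free.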

Looking back on my programming experiences, I realized that the software I had worked on had sold a grand total (while I was there) of about 14 copies. Thinking today, I realize that my experiences were pretty similar to Narain’s: as he notes, he went through “several projects” that were successful development processes (time, budget, functionality) but for one reason or another never took off as products.  It leads me to suspect that this type of thing – project success, product lack of success – happens just as much as software development failure, and very possibly more often. 

So why does it happen?  It’s too easy to blame the marketers, who often in small companies have the least ability to match a product to its (very real) market. Likewise, the idea that the product is somehow “before its time” and not ready to be accepted by the market may be true – but in both Narain’s and my experience, many of these ideas never got to that point, never got tested in any market at all. 

If I had to pick two factors that most of these experiences had in common, they were, first, barriers within the company between the product and its ultimate customers, and second, markets that operate on the basis of “good enough”. The barriers in the company might be friction with another product (how do you low-ball a high-priced database in email system pricing?) or suspicion of the source (is Bell Labs competing with my development teams?). The market’s idea of “good enough” may be “hey, gmail lets me send and receive messages; that’s good enough, I don’t need to look further”. 

But the key point of this type of software failure is that, ultimately, it may have more profound negative effects on top and bottom lines, worker productivity, and consumer benefits in today’s software-infused products than development failure. This is because, as I have seen, in many cases the ideas are never adopted. I have seen some of the ideas cited above adopted eventually, with clear benefits for all; but some, like cross-referencing, have never been fully implemented or widely used, 25 or more years after they first surfaced in failed software. Maybe, instead of focusing on software development failure, we should be focusing on recovering our memory of vanished ideas, and reducing the rate of quality-product failure. 

When I was at Sloan School of Management at MIT in 1980, John Rockart presented a case study of a development failure that was notable for one outcome: instead of firing the manager of the project, the company rewarded him with responsibility for more projects. They recognized, in the software development arena, the value of his knowledge of how the project failed. In the same way, perhaps companies with software-infused products should begin to collect ideas – not their present competitors’ ideas, but past ideas that are still applicable yet have been overlooked. If not, we are doomed not to repeat history, but rather, and even worse, to miss the chance to make it come out better this time.

This is just my way of recognizing the organizations that I believe have most contributed to increasing the agility of their companies, their industries, or the global economy. I judge things from my own quirky perspective, attempting to distinguish short-term, narrow project changes from longer-term, more impactful, structural changes.  Let’s go to the envelopes:

 

  1. IBM. For the most part, IBM has an exceptionally broad and insightful approach to agility. Evidence includes favorable comments within the agile community about its products (one agile site called IBM Rational’s collaborative tools the equivalent of steroids in baseball); its use of agile development practices such as mashups and the Innovation Jam in its own strategy creation and innovation as well as in product development; its attempts to redefine “agile”; and its fostering of the smart grid. Negatives include a too-narrow definition of agile that does not include proactive anticipation of opportunities and system limits, and a legacy software set that is still too complex. Nevertheless, compared to most companies, IBM “gets it.”
  2. The Agile Alliance. While the alliance unfortunately focuses primarily on agile software development, within that sphere it continues to be the pre-eminent meeting place and cheerleader for the “bottom up” individuals who have driven agile from 1-3-person software projects to thousands-of-person efforts designing and developing software-infused products. One concern: its recent consideration of the idea that lean manufacturing and agile development are completely complementary, when lean manufacturing necessarily assumes a pre-determined result for either product development or product manufacture, and focuses on costs rather than customer satisfaction.
  3. The Beyond Budgeting Roundtable.  Never heard of it? It started with Swedish companies claiming that budgets could be replaced – and should be – because they constrain the agility of the organization. Although this is not a mainstream concept, enough companies have adopted this approach to show that it can work. Consider this an “award in progress.”

 

Looking at these, I am saddened that the list isn’t larger. In fairness, a lot of very agile small organizations exist, and Indian companies such as Wipro are in the forefront of experimenting with agile software development. However, even those large companies that give lip service to agility and apply it in their software and product development would admit that the overall structure of the company in which these concepts are applied is still too inflexible and too cost-driven, and the structural changes have not yet been fully cemented into the organization for the long term. Wait ‘til next year …

 

Recently, Paul Krugman has been commenting on what he sees as the vanishing knowledge of key concepts such as Say’s Law in the economics profession, partly because it has been in the interest of a particular political faction that the history of the Depression be rewritten in order to bolster their cause.  The danger of such a rewriting, according to Krugman, is that it saps the will of the US to take the necessary steps to handle another very serious recession. This has caused me to ask myself, are there corresponding dangerous rewritings of history in the computer industry?

 

I think there are.  The outstanding example, for me, is the way my memory of what happened to OS/2 differs from that of others that I have spoken to recently.

 

Here’s my story of what happened to OS/2.  In the late 1980s, Microsoft and IBM banded together to create a successor to DOS, then the dominant operating system in the fastest-growing computer-industry market. The main reason was users’ increasing interest in Apple’s GUI-based rival operating system.  In time, the details of OS/2 were duly released.

 

Now, there were two interesting things about OS/2, as I found out when researching it as a programmer at Prime Computer.  First, there was a large stack of APIs for various purposes, requiring many large manuals of documentation.  Second, OS/2 also served as the basis for a network operating system (NOS) called LAN Manager (Microsoft’s product). So if you wanted to implement a NOS involving OS/2 PCs, you had to implement LAN Manager.  But, iirc, LAN Manager required 64K of RAM in the client PC – and PCs were still 1-2 years from supporting that much memory.

 

The reason this mattered is that, as I learned from talking to Prime sales folk, NOSs were in the process of shattering the low-end offerings of major computer makers.  The boast of Novell at that time was that, using a basic PC as the server, it could deliver shared data and applications to any client PC faster than that PC’s own disk could. So a NOS full of cheap PCs was just the thing for any doctor’s office, retail store, or other department/workgroup – much cheaper than a mini from Prime, Data General, Wang, or even IBM – and it could be composed of the PCs that members of the workgroup had already acquired for other purposes.

 

In turn, this meant that the market for PCs was really a dual consumer/business market involving PC LANs, in which home computers were used interchangeably with office ones. So all those applications that the PC LANs supported would have to run on DOS PCs with something like Novell NetWare, because OS/2 PCs required LAN Manager, which would not be usable for another 2 years … you get the idea. And so did the programmers of new applications, who, when they waded through the OS/2 documentation, found no clear path to a big enough market for OS/2-based apps.

 

So here was Microsoft, watching carefully as the bulk of DOS programmers held off on OS/2, and as Apple gave Microsoft room to move by insisting on full control of its GUI’s APIs, shutting out app programmers.  And in a while, there was Windows 3.0.  It was not as powerful as OS/2, nor was it backed by IBM.  But it supported DOS, it did not tie you to LAN Manager as your NOS, and the app programmers went for it in droves.  And OS/2 was toast.

 

Toast, also, were the minicomputer makers, and, eventually, many of the old mainframe companies in the BUNCH (Burroughs, Univac, NCR, Control Data, Honeywell). Toast was Apple’s hope of dominating the PC market. The sidelining of OS/2 was part of the ascendance of PC client-server networks, not just PCs, as the foundation of server farms and architectures that were applied in businesses of all scales.

 

What I find, talking to folks about that time, is that there seem to be two versions, different from mine, of what really happened.  The first I call “evil Microsoft” or “it’s all about the PC”. A good example of this version is Wikipedia’s entry on OS/2. This glosses over the period between 1988, when OS/2 was released, and 1990, when Windows 3.0 was released, in order to say that (a) Windows was cheaper and supported more of what people wanted than OS/2, and (b) Microsoft arranged that it be bundled on most new PCs, ensuring its success.  In this version, Microsoft seduced consumers and businesses by creating a de-facto standard, deceiving businesses in particular into thinking that the PC was superior to the alternatives (the dumb terminal, Unix, Linux, the mainframe, the workstation, network computers, open source, the cell phone, and so on). And all attempts to knock the PC off its perch since OS/2 are recast as noble endeavors thwarted by evil protectionist moves by monopolist Microsoft, instead of failures to provide a good alternative that supports users’ tasks both at home and at work via a standalone and networkable platform.

 

The danger of this first version, imho, is that we continue to ignore the need of the average user to have control over his or her work. Passing pictures via cell phone and social networking via the Internet are not just networking operations; the user also wants to set aside his or her own data, and work on it on his or her own machine. Using “diskless” network computers at work or setting too stringent security-based limits on what can be brought home simply means that employees get around those limits, often by using their own laptops. By pretending that “evil Microsoft” has caused “the triumph of the PC”, purveyors of the first version can make us ignore that users want both effective networking to take advantage of what’s out there and full personal computing, one and inseparable.

 

The second version I label “it’s the marketing, not the technology.”  This was put to me in its starkest form by one of my previous bosses:  it didn’t matter that LAN Manager wouldn’t run on a PC, because what really killed OS/2, and what kills every computer company that fails, was bad marketing of the product (a variant, by the way, is to say that it was all about the personalities: Bill Gates, Steve Ballmer, Steve Jobs, IBM).  According to this version, Gates was a smart enough marketer to switch to Windows; IBM was dumb enough at marketing that it hung on to OS/2.  Likewise, the minicomputer makers died because they went after IBM on the high end (a marketing move), not because PC LANs undercut them on the low end (a technology against which any marketing strategy probably would have been ineffective).

 

The reason I find this attitude pernicious is that I believe it has led to a serious dumbing down of computer-industry analysis and marketing in general. Neglect of technology limitations in analysis and marketing has led to devaluation of technical expertise in both analysts and marketers. For example, I am hard-pressed to find more than a few analysts with graduate degrees in computer science and/or a range of experience in software design that give them a fundamental understanding of the role of the technology in a wide array of products – I might include Richard Winter and Jonathan Eunice, among others, in the group of well-grounded commentators. It’s not that other analysts and marketers don’t have important insights to contribute, whether they’re from IT, journalism, or generic marketing backgrounds; it is that the additional insights of those who understand the technologies underlying an application are systematically devalued – treated as “just like any other analyst’s” – when those insights can in fact lead to a better assessment of a product and its likelihood of success or usefulness. 

 

Example:  does anyone remember Parallan? In the early ‘90s, they were a startup betting on OS/2 LAN Manager. I was working at Yankee Group, which shared the same boss and location as a venture capital firm called Battery Ventures.  Battery Ventures invested in Parallan.  No one asked me about it; I could have told them about the technical problems with LAN Manager. Instead, the person who made the investment came up to me later and filled my ears with laments about how bad luck in the market had deep-sixed his investment.

 

The latest manifestation of this rewriting of history is the demand that analysts be highly visible, so that there’s a connection between what they say and customer sales.  Visibility is about the cult of personality – many of the folks who presently affect customer sales, from my viewpoint, often fail to appreciate the role of the technology that comes from outside of their areas of expertise, or view the product almost exclusively in terms of marketing. Kudos, by the way, to analysts like Charles King, who recognize the need to bring in technical considerations in Pund-IT Review from less-visible analysts like Dave Hill. Anyway, the result of dumbing-down by the cult of visibility is less respect for analysts (and marketers), loss of infrastructure-software “context” when assessing products on the vendor and user side, and increased danger of the kind of poor technology choices that led to the demise of OS/2.

 

So, as we all celebrate the advent of cell phones as the successor to the PC, and hail the coming of cloud computing as the best way to save money, please ignore the small voice in the corner that says that the limitations of the technology of putting apps on the cell phone matter, and that cloud computing may cause difficulties with individual employees passing data between home and work. Oh, and be sure to blame the analyst or marketer for any failures, so the small voice in the corner will become even fainter, and history can successfully continue to be rewritten.

 
Wayne Kernochan