Capturing Innovation for Transformation

Divide and conquer is a popular theory, but it’s not always smart.  Networks and clouds are complex ecosystems, not just disconnected sets of products, and treating them as products threatens users’ ability to combine and optimize what they deploy.  Sadly, powerful forces are creating exactly the division that works against making the business case for transformation, and we have no strong forces working to combine things as they should be.

The first of these divisive forces is history.  While data networks were at first described as systems of devices (IBM’s Systems Network Architecture, or SNA, is the best example), the advent of open network technology in the form of IP and Ethernet made it convenient to consider networks as device communities loosely linked by an accepted overall architecture.  That model let us expand networks by buying devices, and it let us change vendors gradually by introducing new ones as we expanded the network or aged out old devices.  The “architecture” became implicit, rather than explicit as it was with SNA.

Things got even more divisive with the adoption of the OSI model and the visualization of networks as a series of interdependent layers, each with its own “architecture” and device complement.  It’s common to find three layers of network technology in legacy IP WANs today, from optical up through IP.  Networks were also dividing into “zones”: the “access network”, the “metro network”, and the “core”.  All these divisions fragmented architectures further into products, and created more integration work and confusion for users.

Vendors saw users’ perception of integration risk as an opportunity, and they responded by trying to create a proprietary symbiosis among their own layers, zones, and devices, so that a success anywhere would pull through their entire product line.  Network operators got so concerned about this that they started dividing their procurements into “domains” and setting limits on the number of vendors per domain and the number of domains a vendor could bid in.  AT&T’s Domain 2.0 approach is an example.

Then there’s coverage.  While all this was going on, we saw a major change in how technology was covered.  Early tech publications were subscription-based, and so they played to the interests of the buyers.  As time went on, we shifted to a controlled-circulation, ad-sponsored model for tech media.  On the industry analyst side, we saw a similar shift, from a model where technology buyers paid for reports that served their interests to one where sellers paid for reports developed to serve theirs.

The “magic quadrant” concept is another factor that came along here.  Everyone wanted to be in the top-right “magic” quadrant of a product chart, but of course you can’t have a chart that everyone wins.  So how about many charts?  The mantra of vendor positioning became “create a new product category” so you’d have an uncluttered win in that magic space you just defined.

Which brings us to the next issue, differential competition.  My work on the way buyers make decisions shows that there are two critical things sellers have to consider early on.  One is the enablers, the characteristics of their product or strategy that actually make the business case for the buyer.  The other is the differentiators that separate the solutions capable of making the business case.  If you look at how technologies are covered these days (take 5G or IoT as examples), you find that the publicity tends to accept the enablers as a given, not requiring proof.  That focuses vendors on differentiation, which of course tends to be product-versus-product.  Differential competition creates two problems for the buyer, both serious.

First, the process tends to obscure the enablers that make the business case itself.  My data from the first years of my user surveys (1982) versus the last full year of surveying (2018) shows that the project failure rate (where “failure” means failing to make the business case that justified the project) is nearly triple what it was early on.  If you graph the failure rate, you see that the greatest rate of increase came after 2002, corresponding to the impact of all the issues I’ve cited here.

Second, differential competition almost always loses the ecosystemic vision of infrastructure along the way, looking instead at product-for-product comparisons.  That means integration issues are far more likely to bite during project implementation.  Going again to the data, the number of projects in my surveys reporting overruns due to integration problems before 2000 was a third of the number reported from 2001 onward.

An interesting side-effect here is the overall decline in vendors’ strategic influence.  When I first surveyed this subject in 1989, five vendors met the top level of strategic influence, meaning users said they had “significant impact on technology project planning, objectives, and implementation”.  In that period, there were two distinct vendor groups: those who had strategic influence and those who did not.  By 2013, every single vendor’s strategic influence was declining, and by 2018 no vendor fit that most-influential category.  Is this because vendors don’t know how to influence strategy, because users don’t trust vendors, or because there’s really no strategy to influence?

Slightly over half the users I contacted in late 2018 and nearly all the network operators told me that they did not believe there was any generally accepted architecture for cloud networking, virtual networks, SDN, etc.  Interestingly, about the same number said they were “committed” to these new technologies.

The obvious question is what impact this is having on transformation, sales, and so forth.  That’s harder to measure in surveys and modeling, but as I’ve noted in earlier blogs, there is strong evidence that IT spending and network spending are both cyclical, with spending rising fastest when a new paradigm creates a business case for change, and dropping when that business case has been fully exploited and everyone is in maintenance mode.  Interestingly, the cyclical technology spending model I developed shows that we went through a series of cycles up to 2001 or so, at which point we finished a positive cycle, went negative, and have stayed in maintenance mode ever since.

The tricky question is what generates a positive cycle, other than the simple value proposition that you have a business case you can make.  It seems to come down to three things: an opportunity (to create revenue, lower cost, improve productivity, etc.), an architecture that credibly addresses that opportunity and can be communicated to buyers confidently, and a product set that fits the architecture and is provided by a credible source or sources.  The conclusion of my points above is that we currently have none of these.

For enterprise buyers, the open-source ecosystem surrounding the cloud, cloud-native design, and microservices seems to be providing the product set and the outlines of an architecture.  We’re still lacking a clear articulation of the opportunity, though, and of the architecture itself.  That doesn’t stop enterprises from migrating in the right direction, because technologists can promote the new approaches as they’re defined, given that their scope of adoption and impact is limited.  Operators, who face longer depreciation cycles and a greater chance of stranded assets and massive equipment turnover if they transform, are more reluctant to move without a full understanding of the future model.

I think the surveys and modeling I’ve done over the last 30 years show that we do better when we have a solid set of “enablers” to drive a technology shift.  I think it’s also clear that a solid set of enablers means not only having business case elements that are true and that work, but also ones that are promoted, understood, and accepted enough to be validated by peer consensus.  The most important truth about open-source software isn’t that it’s free, but that it’s been successful in building a model for the future of applications and computing because it establishes a community consensus that eventually gets to the right place.  We can only hope that somehow, open-source in network transformation can do the same.