Transformation is hard. That’s perhaps a simplistic way of summarizing all the things I’ve learned and heard over the last two months, but it certainly reflects operator views. Transformation based on technology changes is so hard, in fact, that a growing number of operators (at the CxO level) aren’t convinced any more that it’s even possible. Vendors, as I suggested in my blog last week, seem to have abandoned the role of “driving” transformation and adopted instead a position of “if you do it, I can sell you the products.” We’re in for some confusing times, for sure.
The word “transformation” means “a thorough or dramatic change.” It doesn’t mean gradual evolution, it means revolution, and a revolution is expensive, particularly when the value of incumbent infrastructure is enormous. If we look at the issue from a financial perspective first, we can get an idea of why things like cloud computing, SDN, and NFV have fallen far short of early expectations. Then perhaps we can understand how the problem could be fixed, or at least alleviated.
Let’s say we have $100 billion in investment in information technology and networking. Presuming a 5-year average expectation of useful life and an even distribution of purchasing over time, we have $20 billion per year of assets that are written down. To achieve a “thorough” change, we could reasonably expect to have to impact at least 51% of our infrastructure, which would mean two and a half times as much replacement as normal write-downs would allow. The extra $31 billion would have to be offset in some way by benefits.
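The arithmetic above can be sketched in a few lines of Python; the figures are the illustrative ones from this paragraph, not data from any specific operator:

```python
# Back-of-envelope transformation math (all figures in $ billions).
install_base = 100.0       # total installed IT/network investment
useful_life_years = 5      # average expectation of useful life

# With even purchasing over time, one-fifth of the base is written down each year.
annual_writedown = install_base / useful_life_years   # $20B/year

# A "thorough" change has to touch at least 51% of infrastructure.
transform_share = 0.51
transform_cost = install_base * transform_share       # $51B of replacement

# Everything beyond the normal refresh cycle must be justified by new benefits.
extra_spend = transform_cost - annual_writedown       # $31B
```

The ratio `transform_cost / annual_writedown` is about 2.55, which is where the "two and a half times as much replacement" figure comes from.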
One CFO told me five years ago that “There’s nothing cheaper than what you’ve already bought.” Savings in capital cost, which is what many “revolutionary” technologies offer, mean nothing if you’re not expecting to incur capital cost, because you already have something installed. So the target market for a “new” technology is, as a baseline, only the 20% of infrastructure value that has passed its useful life. If we penetrated that by, say, 10%, we’d be penetrating the real infrastructure by only 2%.
A two-percent change would take 25 years to reach revolutionary proportions, which hardly qualifies as a revolution or transformation. This is why capex-driven arguments for the cloud, or SDN, or NFV are by themselves almost always doomed to fail.
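The penetration arithmetic works out the same way; again, a sketch using the paragraph’s own illustrative numbers:

```python
# How long does a slow penetration rate take to become a "thorough" change?
addressable = 0.20        # only fully depreciated gear is a realistic target
penetration = 0.10        # share of that addressable base a new technology wins

# Net annual change as a fraction of total infrastructure: 2% per year.
annual_change = addressable * penetration

# Years needed to touch 51% of infrastructure at that pace: roughly 25.
years_to_transform = 0.51 / annual_change
```

At two percent a year, you spend a generation getting to “thorough,” which is the core of the argument against purely capex-driven transformation.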
So now, given this, let’s look at the periods where “transformation” actually worked. We’ve had three past periods where the rate of growth in IT spending exceeded the rate of growth in GDP by a significant margin: from 1950 through 1968, from 1982 through 1989, and from 1992 through 2000. In all these periods, you can tie the happy IT outcome to a significant change in the relationship between computing/networking and workers/consumers. The explosion in computing and the data center was the first wave, distributed personal computing the second, and the Internet the third. During these waves, the rate of IT spending growth relative to GDP exceeded the current rate by an average of 40%.
We’ve not had a wave of this happy kind since 2001, and part of the reason is that in the past we were “underempowered.” Until the PC, we had no personal IT at all. Until the Internet, we had no model for a consumer data service at all. That underempowerment meant underinvestment, which meant that the risk of trying a new (and transformational) idea was lower because you didn’t have to displace non-depreciated infrastructure. I believe we could still promote a transformation on the demand-benefits side, but it will be harder.
Harder, but not impossible. Let’s take a space I’ve studied in detail, and so have decent numbers to work with. A network operator spends between 18 cents and 22 cents of every revenue dollar on capex. They spend about 18 cents on profits returned to shareholders, and the remainder is operations and administration. Within that, operators currently spend about 29 cents on “process opex” meaning the direct costs of service operations, marketing, and support. Suppose we could cut that process opex cost in half. The savings could be 14.5 cents per revenue dollar, which is between 66% and 81% of total capex. The savings could allow operators to increase their capital budgets by at least two-thirds without changing their profits.
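The opex arithmetic above can be checked with a quick sketch, using the cents-per-revenue-dollar figures from this paragraph:

```python
# Process-opex savings expressed against the capital budget
# (all figures in cents per revenue dollar, from the text above).
capex_low, capex_high = 18.0, 22.0   # operator capex range
process_opex = 29.0                  # service operations, marketing, support

# Cutting process opex in half frees 14.5 cents per revenue dollar.
savings = process_opex / 2.0

# As a fraction of the capital budget, that's between ~66% and ~81%.
fraction_low = savings / capex_high   # vs. the high end of capex
fraction_high = savings / capex_low   # vs. the low end of capex
```

Even at the conservative end, the freed-up 66% is where the “increase their capital budgets by at least two-thirds” conclusion comes from.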
What this all demonstrates is that the only way to transform networking and IT is to start by doing something profound to the cost of operations. Application lifecycle management for enterprises, or service lifecycle management for network operators, should be a primary automation target. Not only would that cover a much higher rate of capital spending, which would make infrastructure transformation feasible, it would also reduce or eliminate the risks associated with adopting a new technology, by automating the way it’s operationalized. Amazon and Microsoft both had cloud failures recently that could be directly linked to an operations error. The right lifecycle management could have prevented that.
This is perhaps the most important lifecycle management benefit. The ability to deploy new services and applications quickly, and sustain them through normal and abnormal conditions with far fewer errors, is fundamental to rapid introduction and stable operations. Here is also where the carrier network space is helping advance technology overall. DevOps, the deployment-automation practice developed for enterprises, has not caught on nearly as well as it should have. Perhaps that’s because DevOps tools are more about deployment than full lifecycle management. With NFV, in particular, the carrier space has advanced the concept of orchestration, meaning a model-based handling of everything in a lifecycle. What’s needed now is to make orchestration, in its fullest sense, universal.
Logically, you can start universal orchestration either at the top, with management systems, or at the bottom with the transport infrastructure. Top-down orchestration has the advantage of generating a lot of opex savings with a very low capital investment. In fact, top-down orchestration could achieve ROIs up to 50 times those of SDN- or NFV-driven transformation. That doesn’t mean you should never do infrastructure transformation, only that you could pay for more of it faster. But starting at the optical level has the advantage of fundamentally changing the way networking is done.
Every layer of networking builds on the layers below, which means that if you were to take care of a lot of issues at the bottom, they’d disappear from requirements above. Vendors tell me that about 80% of router code is associated with resiliency under load and failure conditions. Suppose you dramatically reduced those issues by handling them at the optical layer? One thing that hasn’t been considered widely is that SDN’s central control paradigm could be extended a lot further if you could assume that the physical layer of the network didn’t break often, or at all, and that path loading was instantly and invisibly handled by agile transport.
The transport or bottom-up model may have the best chance of success, because there is still a lot of room for vendor support. The notion of transforming networking by diminishing the role of L2/L3 technology is understandably unappealing to vendors who make their money there. For computer and white-box vendors it’s a different story, and for fiber players like Ciena or Infinera, the new model could be a big boon. In contrast, it’s hard to see any network or IT vendor changing course to promote top-down service lifecycle automation at this stage, and the OSS/BSS vendors who might have the interest (and even, in some cases, some products) have to fight the division between operations-driven IT and network equipment that prevails among operators. But so far neither group has done much, and so it’s possible that open-source lifecycle management will spread without vendor support.
You certainly don’t hear about lifecycle automation from SDN vendors, or NFV players, or even optical players. Perhaps that’s why we’re not already in a golden age of network transformation. Perhaps that’s why The Economist has sort-of-postulated the notion of Amazon becoming the telco of the future. They call it “cloudification” but IMHO that’s just editorializing. Most of networking can’t be hosted, or shouldn’t be. What makes Amazon (or, more likely, Google) a potential winner in a future war with telcos is that they’re planning their networks around services, and presuming that services will evolve quickly. They don’t have a trillion dollars’ worth of infrastructure to depreciate to fund their innovations. But the telcos could step out, not by adopting a cloud-hosted everything but by adopting a cloud-centric vision of automation of the service lifecycle.
This could extend to the enterprise, too. There is little difference between deploying a service and deploying an application, or between the lifecycle requirements of the two. The same scheme used for service lifecycle management could automate cloud deployment and application lifecycle management, and do the same even in the data center. DevOps has always been about deployment, with only small recent innovations to address full lifecycle management. Orchestration/automation of the full lifecycle is possible.
I think lifecycle automation could fix most of the tech problems we have and open a new wave of innovation. But it won’t come from selling the same old stuff and chasing clicks on jazzy headlines. We’re going to have to work at this, and transformation is hard, as I said at the opening of this blog. Failure is harder, though.