Do We Need New Infrastructure for New Services?

Does a new set of services for network operators imply a new network infrastructure?  That’s a question some of you asked me after the series of blogs I’ve just done.  I’ve talked about software automation of the service lifecycle, and that has focused primarily on cost management.  Obviously, software automation could also facilitate the introduction of new services, but how effective that would be depends on whether the new services could be delivered from “stock” infrastructure.  Agile ordering of something that will take a year to deploy and test isn’t going to move the ball.

The problem being raised here is one operators have raised too.  Nobody likes to base network evolution on the notion of static services, focusing totally on improving profit by reducing costs.  There may not be an easy alternative, though.  A “new service” is a vague term, as I said in a prior blog.  We have multiple categories of “newness”, ranging from enhanced connection features, to connection-supporting things like security, to hosted or experiential features like cloud computing.  Operators are aware of them all, and interested in them to the extent that they can be validated.

Can they be, and if they can, how?  It depends on the class of new service we’re talking about.

Connection service innovation has focused primarily on elasticity of bandwidth or dynamic connection of endpoints.  People have been talking about the notion of a “turbo button” for some time; you push it to get a speed boost when you need it.  Turbo buttons are a consumer access feature largely killed off by neutrality regulation, but for business the equivalent concept is at least legal.  Whether it’s workable is another matter.

Remember the rule that we have to deploy automatically delivered stuff based on stock resources?  I can only dial capacity up or down to the extent that I’ve got access assets that are able to support both the range and the bandwidth agility.  For most business services, it would mean selling an access pipe fat enough to handle the highest capacity you intend to sell, then throttling it up and back based on a service-order change.  Operators are prepared to pre-position access capacity in support of this or other services, if there’s a revenue upside.
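The mechanics described above can be sketched in code.  This is a purely illustrative model, not any real operator API: the class and method names are my own invention, and the idea is simply that the access pipe is pre-positioned at its maximum sellable capacity, with a service-order change re-throttling the committed rate within that ceiling.

```python
# Hypothetical sketch: bandwidth-on-demand against a pre-positioned access pipe.
# All names here are illustrative, not a real provisioning API.

class AccessPipe:
    """An access connection installed at the highest capacity we intend to sell."""

    def __init__(self, physical_capacity_mbps: int, base_rate_mbps: int):
        self.physical_capacity_mbps = physical_capacity_mbps
        self.committed_rate_mbps = base_rate_mbps  # what the customer pays for now

    def apply_service_order(self, requested_mbps: int) -> int:
        # The "turbo button": a service-order change re-throttles the shaper,
        # but only within capacity that was pre-positioned at install time.
        if requested_mbps > self.physical_capacity_mbps:
            raise ValueError("requested rate exceeds pre-positioned access capacity")
        self.committed_rate_mbps = requested_mbps
        return self.committed_rate_mbps


pipe = AccessPipe(physical_capacity_mbps=1000, base_rate_mbps=100)
pipe.apply_service_order(500)   # end-of-quarter boost
pipe.apply_service_order(100)   # throttle back down when it's no longer needed
```

The point the sketch makes is that the agility lives entirely in the service order; the expensive part, the 1 Gbps physical pipe behind a 100 Mbps commitment, had to be bought up front on the hope of a revenue upside.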

The challenge here has been that enterprises are interested in dynamic bandwidth only to the extent that it lowers their overall costs.  They get frustrated when a salesperson says “Wouldn’t you like to be able to dial in some extra speed?” or “Wouldn’t a little extra capacity be helpful at end-of-quarter?”  Yeah, they’d also like to have their local tax authorities declare a dividend instead of sending a bill, and it would be helpful if some government regulation made buyers purchase their goods or services.  It’s not realistic, though.  Buyers say that business needs drive information exchange, and at the moment they see a case for dynamism only if throttling down for lower performance and up for higher would lower their net costs.

The situation isn’t all that different at the connection-augmenting feature level either.  Yes, it would be nice to be able to get a virtual firewall installed with a click when you need it, but once you figured out you needed one, it’s unlikely you’d then say “Well, let’s just throw the old doors open to hacking!” and pull it out.  Most of the credible features, once installed, would tend to stay that way, which isn’t exactly a dynamic service model.

So, the answer to the opening question is “Yes!”  New network infrastructure is needed, but not to do what we’re already doing.  Yes, the real service opportunities arise in the cloud, and we know that because that’s where those successful OTT competitors we always hear about are living.  I don’t disagree with those who say that the operators would have to build something more OTT-like.  However, that doesn’t necessarily mean that they have to build that instead of what they already have.  Operators could build OTT infrastructure over their own tops.  The question is whether either building that infrastructure or sustaining it would be economically facilitated if some of the current connection services features were supported on it.  SDN and NFV have to prove that could be true, if they are to be useful to operators in the long term.

If we were to envision NFV’s contribution as hosting of virtual CPE features, it should be clear to anyone with a calculator that there’s no way that’s going to be broadly useful.  You can’t host vCPE for consumer services when the amortized cost of the feature as part of a cable modem or other broadband gateway is a couple of bucks a year, which is just what it would be.  Business services might or might not benefit from cloud-hosting of vCPE, but there are only 1.5 million business sites in the US that are satellite sites of multi-site businesses.  Three-quarters of these don’t need business-grade access like Carrier Ethernet.  They’re not going to contribute either.
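The calculator exercise is easy to reproduce.  Using only the figures cited above (a couple of dollars a year for the consumer feature, 1.5 million US satellite sites, three-quarters of which lack business-grade access), the addressable base shakes out like this:

```python
# Back-of-the-envelope vCPE opportunity, using the figures cited in the text.

consumer_feature_cost_per_year = 2.0    # amortized cost of the feature in a broadband gateway
# Cloud-hosting has to beat roughly $2/year/subscriber: consumer vCPE is out.

us_satellite_sites = 1_500_000          # US satellite sites of multi-site businesses
share_without_carrier_ethernet = 0.75   # fraction that don't need business-grade access

addressable_sites = us_satellite_sites * (1 - share_without_carrier_ethernet)
print(f"Addressable business vCPE sites: {addressable_sites:,.0f}")
# → Addressable business vCPE sites: 375,000
```

Roughly 375,000 candidate sites nationwide is not a base on which to justify a transition to a whole new infrastructure model.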

So, what provides the opportunity?  The big truth here is that there is no credible service that a network operator could deploy to justify a transition to NFV.  That doesn’t mean they couldn’t adopt it, only that they couldn’t get there with it.  The transition from not-NFV to NFV has to start with a big infusion of opex savings, and we need to be realistic about that.  Operator costs for “process opex” involving service and network management and related costs (like churn) currently run about 31 cents on each revenue dollar, where OTT costs in the same area run less than six cents.
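Those two figures bound the whole opex argument.  A quick sketch of the arithmetic, using the 31-cent operator figure and treating the OTT figure as six cents (the text says “less than six”), shows the ceiling on what automation could recover:

```python
# Process-opex gap per revenue dollar, from the figures cited in the text.

operator_process_opex = 0.31   # operator process opex, per revenue dollar
ott_process_opex = 0.06        # comparable OTT figure (stated as "less than six cents")

gap = operator_process_opex - ott_process_opex
print(f"Savings ceiling: {gap:.2f} per revenue dollar "
      f"({gap / operator_process_opex:.0%} of current process opex)")
# → Savings ceiling: 0.25 per revenue dollar (81% of current process opex)
```

A quarter on every revenue dollar is the kind of infusion that could actually fund an infrastructure transition, which is why opex efficiency, not new services, has to come first.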

Once you get an efficient and agile service layer, you could start to build out to optimize what it can deliver, but even there you need help.  We cannot simply build mobile services based on NFV, because too much of mobile infrastructure is already deployed and we’d have to displace it.  We have to piggyback on an initiative that would refresh a lot of infrastructure as it rolled out, which means 5G.

Beyond mobile, the obvious opportunity and in fact the brass ring is IoT.  IoT could by itself build enough carrier cloud data centers to jumpstart the whole next wave of services.  However, operators are stuck in a transcendentally stupid vision of IoT based on giving every sensor a 5G radio (and the media is more than happy to play along).  As long as operators don’t have a realistic vision of the future, they’re not going to adopt a realistic strategy to deal with it.

Most of all, though, we don’t have a plan.  You can evolve to a lot of things in networking, but it’s not easy to evolve to a national or global change in network technology.  You need a vision of the end-game and a way of addressing the evolution as a series of technical steps linked to real and provable ROIs.

I firmly believe that I could justify the carrier cloud.  I firmly believe that I know the pieces needed to get there and the steps to take.  I firmly believe that there are six or seven vendors who could provide everything that the operators would need, and do it starting right now.  But I firmly believe that vendors wouldn’t promote the approach because the sale would be too complicated, and without strong vendor backing of a revolution, everyone ends up sitting in coffee shops instead of marching.

Do we want something to happen here?  If so, dear vendors, you need to stop asking buyers to take you on faith.  Prove your worth.