Ciena Gets Some Mojo

Ciena beat its estimates in both EPS and revenue, which raises the question of whether operators are moving more to a “capacity-biased” model of network architecture even without a complete picture of how such a model could be optimized, or how it could contribute to improved profits in the long run.

An interesting jumping-off point for this is a quote from Gary Smith, the CEO: “…the networks today are more ready than before for a step function in capacity to support demand.”  Ciena attributes a lot of its success to the faster-than-usual adoption of its latest product, WaveLogic 5, which is an 800G solution.  Capacity is not only the philosophical basis for networking; the quest for capacity is arguably the fundamental driver of network infrastructure spending.

Transport is essential, but it’s the bottom layer of a stack that converts raw optical capacity into connectivity and services.  Why, then, is Ciena beating on revenue and EPS when vendors higher in that stack, closer to the real buyer of the capacity, are reporting misses?  There are a number of reasons, some pedestrian and some interesting.

One pedestrian reason is that you have to establish transport in order to offer services.  Anything that introduces new geographies, new facilities, is going to have to be a termination point for optical transport.  That gives players like Ciena an early chance to pick a seat at the infrastructure-project table.

Another non-strategic point is that, as Smith says, WaveLogic 5 is in a class by itself in optical transport, so the head-to-head competitive posturing that higher-level vendors have to deal with is much reduced, or absent, for Ciena.  If you can get an 800G product that’s market-proven, and you’re seeing the usual industry trend toward higher and higher bandwidth demand, why not swing for the seats (to stay with our “seats” theme), equipment-wise?

Getting a bit more strategic, Ciena’s ability to supply the top-end transport gear combines with the natural desire of buyers to have a single-vendor transport architecture, to give Ciena a good shot at account control in the transport area.  Remember that priority seating guarantee that WaveLogic 5 could offer?  Ciena can use it to save seats for other Ciena products, and the bigger piece of the deal you can cover, the more indispensable you become.  It also lets you afford to keep dedicated account resources in play, further improving your control of accounts.

More strategic yet is the opportunity to use account control and customer credibility to climb out of pure optical into the packet layer.  Ciena reported “…several new wins in Q3 for our Packet business. We now have more than a dozen customers for our Adaptive IP solutions, including Telefónica UK and SK Telink, which we announced in Q3.”  Network budgets for carrier infrastructure are, to a degree, a single food source in an ecosystem full of predators.  One way to maximize your own position in that situation is to steal food from others, and Adaptive IP can steal lower-level (Ethernet-like) connectivity budget from the router vendors.

Adaptive IP is also a bridge to the Blue Planet operations/orchestration software.  Ciena had a mobile-operator Blue Planet win in Q3, demonstrating that it can use its transport position to bridge upward into network operations and automation.  This, IMHO, is an important step toward delivering on the positioning story of network-as-a-service (NaaS), which is all about creating agile transport to steal functionality from higher layers, and router vendors.

I’ve been critical of Ciena’s ability to deliver on its Adaptive IP and Blue Planet stories, but it seems like they’re doing a bit better at that.  Part of the improvement, according to operators, is the result of operator concerns that old network-building paradigms are hurting profit per bit.  Part, according to some operators I’ve talked with, is due to the fact that Ciena is doing a bit better in positioning their assets.  Their story isn’t perfect, particularly for situations where a prospective buyer gets the story first from website collateral, but it’s improved.

This is important to Ciena, because their revenue and EPS beats can’t disguise the fact that overall optical transport spending is under considerable pressure because of the coronavirus and lockdown.  As Smith says, buyers are finding “they can run their networks a little hotter.”  That defers investment in infrastructure, but of course all bad things, like all good things, “must end someday.”  What Ciena has to be thinking about now is what the rest of the vendors in the network stack are thinking about, which is “What happens when things get back to normal?”

Transport is inevitable.  Are operators and other transport consumers offering a temporary priority to this layer because it is so fundamental, and will they then overspend, relatively, in this network layer and at this point in time?  If so, normalcy won’t necessarily mean a big uptick for the optical layer.  The other higher-layer vendors might then see their numbers go up, and with that see themselves better positioned at the infrastructure-spending table.

Ciena now needs to manage the transition to normalcy.  They have to enhance their packet-optical and Blue Planet stories further, spread the net of marketing wide to open a dialog with those who aren’t currently engaged, and with the higher-layer planners of the buyers they already have.  Smith, on the earnings call, acknowledges that Ciena may have seen the uptick toward normalcy earlier than higher-layer vendors did, simply because those vendors were at a higher layer and networks have to build upward.  They have to expect the other layers will see the uptick eventually, and they have to compete for eyeball share then, as well as now.

What Ciena still lacks is that net of marketing.  It’s not surprising that companies who sell to a relatively small number of enormous customers through gigantic deals would rely more on sales, but in periods of change, lack of a strong marketing/positioning strategy puts a lot of burden on the sales force, and it’s particularly dangerous when you have to engage with new companies, or new people in the same company.

If there’s an agile packet optics function that somehow lives between transport and IP, and if that layer can work with transport to simplify IP, then it has value to the buyer.  If that layer can be definitively and tightly coupled to transport, then vendors like Ciena reap that value in the form of sales.  If the layer floats without an anchor, then the more aggressive vendors are likely to grab it up, and nobody’s likely to think an optical vendor is aggressive, either in technical evolution or marketing effectiveness.

Transformation isn’t confined to optics or transport.  You can see the router vendors contending with the pressure to define a new infrastructure model, and that same pressure will impact the transport layer eventually.  That it’s impacting other layers now means that people Ciena doesn’t ordinarily engage with have already gotten those seats I’ve mentioned time and again in this blog.  Their views will inevitably impact Ciena’s deals, and so Ciena needs to spread its message to them, and clarify their role in that future infrastructure model for all.

The Many Dimensions of and Questions on VMware’s Telco Strategy

VMware came out with an announcement of their 5G Telco Cloud Platform, the latest step in the developing battle for all those carrier cloud applications.  Their press release is a bit difficult to decode, but it’s worth the effort because the company is presenting what’s perhaps the last chance for network operators to take control of their cloud destiny.  The biggest question, though, is whether they want to do that.  As I said in yesterday’s blog, operators now seem determined to avoid carrier cloud.  Does this impact VMware, then?

Almost every network operator (“carrier”) has a transformation vision that includes, and is often centered on, a shift from specialized appliances to some form of hosted-feature framework for their network infrastructure.  This has come to be known as “carrier cloud” because all of the approaches involve some form of cloud computing.  I’ve modeled the opportunity for carrier cloud since 2013, and the “optimum” deployment (the one that returns the best ROI) would create about a hundred thousand new data centers by 2030, the great majority qualifying as “edge” data centers.

In 2019, as I’ve noted in prior blogs, enterprises had an epiphany about their use of the cloud, recognizing that most of their cloud benefits would be derived from a split-application model that hosted the GUI-centric front-end piece in public cloud facilities while the transaction processing back-end piece stayed in the data center.  The carriers had their own epiphany that same year—they realized they didn’t really know how to go about building carrier cloud and the cost implications were frightening.

The result of this realization/fear combination was that operators suddenly started seeing public cloud providers as partners in carrier cloud applications, a way of dipping their toes into the waters of the cloud without risking drowning.  The cloud providers were more than happy to take advantage of the interest, and all of them now have carrier cloud programs, focusing primarily on 5G Core feature hosting as the early application driver.

The problem with this, from the operators’ perspective, is that the cloud provider solutions are complete enough to put operators at risk for vendor lock-in, and also create (in their minds, at least) the risk that they’d get committed to a public cloud transition strategy, only to find they can’t transition because they never developed a path in the self-hosting direction.  That pair of concerns is what VMware seems to be focusing on.

The 5G Telco Cloud Platform is a platform, not a 5G-as-a-service approach.  Its primary virtue is that it can host any cloud-open solution to a carrier feature-hosting mission, meaning that it forms a kind of carrier-cloud middleware.  You can deploy it in your data center and in one or more public clouds, and the carrier cloud solution is then independent of any particular partner and of how far the operator wants to commit to public versus private hosting of carrier cloud.  In short, it removes both of those risks.
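
To make the middleware idea concrete, here’s a minimal sketch (in Python, with invented names, and emphatically not VMware’s actual API) of what a hosting abstraction buys you: the network function sees one deployment interface, and whether the target is an in-house data center or a public cloud region is a detail of the target, not of the function.

```python
# Hypothetical sketch of "carrier-cloud middleware": the same network function
# deploys through one abstract interface, regardless of where it lands.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class NetworkFunction:
    name: str
    image: str          # container or VM image reference
    replicas: int = 1

class HostingTarget(ABC):
    @abstractmethod
    def deploy(self, nf: NetworkFunction) -> str: ...

class PrivateDataCenter(HostingTarget):
    def deploy(self, nf: NetworkFunction) -> str:
        return f"{nf.name}: {nf.replicas} instance(s) on in-house servers"

class PublicCloudRegion(HostingTarget):
    def __init__(self, provider: str, region: str):
        self.provider, self.region = provider, region
    def deploy(self, nf: NetworkFunction) -> str:
        return f"{nf.name}: {nf.replicas} instance(s) in {self.provider}/{self.region}"

def roll_out(nf: NetworkFunction, targets) -> None:
    # Because the function only sees the abstraction, moving it between public
    # and private hosting is a redeployment decision, not a rewrite.
    for target in targets:
        print(target.deploy(nf))

roll_out(NetworkFunction("upf", "registry.example/upf:1.0", replicas=3),
         [PrivateDataCenter(), PublicCloudRegion("cloud-a", "east-1")])
```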

Why, then, is this approach not sweeping the markets and dominating the news?  The press release was dated September 1st, and it was picked up in Light Reading’s news feed, but not on their homepage as a feature story.  I think a part of that is that the VMware carrier-cloud position was covered earlier by some media in connection with the Dish adoption of the platform.  Another reason, I think, is that the positioning in the release doesn’t capture the reality of the situation.

Let me offer this quote as an example: “As CSPs evolve from NFV networks to cloud native and containerized networks, VMware is evolving its VMware vCloud NFV solution to Telco Cloud Infrastructure, providing CSPs a consistent and unified platform delivering consistent operations for both Virtual Network Functions (VNFs) and Cloud Native Network Functions (CNFs) across telco networks. Telco Cloud Infrastructure is designed to optimize the delivery of network services with telco centric enhancements, supporting distributed cloud deployments, and providing scalability and performance for millions of consumer and enterprise users. These telco centric enhancements enable CSPs to gain web-scale speed and agility while maintaining carrier-grade performance, resiliency, and quality.”

This is arguably the key paragraph, but what the heck does it mean?  First, telcos are not evolving from NFV networks to cloud-native.  They have no statistically significant commitment to NFV anywhere except stuck on the premises inside universal (white-box) CPE.  Second, there’s nothing less interesting to the media than a story about how you’re evolving your solution, which is what the release says.  Revolution is news; evolution takes too long.  Finally, all the descriptive terms used are pablum; they’re table stakes for anything.

And here’s the best part; the link in the release for more information is broken (that appears to be a problem with Business Wire, not VMware; you can get to the target by typing in the text, or clicking HERE).  From there, you can eventually get to the detail, which is really about the NFV solution that the new approach is evolving from.  Still, in that detail, you can dig out some things that are important, perhaps not to the media but to the buyer.

Here’s why I think this is important, regardless of positioning.  The buyer matters.  What operators actually need at this point is a way to hedge their bets.  They really want to be in the clouds, but they’re more than wary, they’re frightened.  Right now, the cloud looks like this giant foggy mess that obscures more than it resolves.  Operators have jousted with cloud concepts for a decade and had absolutely no success.  They’re still among those who believe that you make something cloud-native by stuffing it in a container and perhaps using Kubernetes to orchestrate it.  “CNF” stands for “Containerized Network Function”, not “CNNF” for “cloud-native network function”.  If you think the cloud is fog, after all, what do you think of cloud-native?  Properly applied, VMware could give them that.

“Properly applied” is a critical qualification here.  The CNF versus CNNF thing is one of my concerns with the VMware story.  The operator use of “CNF” to mean container-network-function is well-established.  That VMware uses it in the quote I provided above raises concerns that they’re redefining “container” to mean “cloud-native”, and sticking with the old NFV evolution story.  More on that in a minute.

Carrier cloud middleware could be a revolution.  You can deploy it in any public cloud, or even all of them, and so you have that toe-in-the-stream option that seems popular right now.  You can’t be locked into a cloud provider because it runs on them all.  You can’t be held hostage in the public cloud forever, seeing your profits reduced by public cloud profits on the services you consume, because you can move the stuff in-house.

Microsoft, as an example, is providing a 5G solution that includes 5G functions.  How portable that will be remains to be seen.  VMware is partnering with various players (as their Dish deal shows) to create 5G solutions that are portable.  This approach could give VMware a direct path to the hearts and minds of the carriers who are looking at virtual-network technology to transform what they have, and build what they’re now getting into.  They’ve just got to sing better.

NFV’s goals are fine, but there’s no evolving from it in a carrier cloud sense because it’s only broadly used, as I’ve said, in CPE.  Yes, the operators want to shift to cloud-native, but they need more than containers to do that.  VMware needs a true cloud-native vision, and they need to explain it clearly and make it compelling.  Then, they have to be prepared to do a fair amount of executive education in their buyer community.

There are some technical shifts needed too.  VNFs are old: not only obsolete, but never really helpful in carrier cloud.  Containerizing, meaning CNFs versus VNFs, is only a transitional step, a pathway to a true cloud-native (CNNF, in my terms) goal.  The transformation to CNNFs has to be accompanied by a shift from NFV-specific orchestration and management to cloud orchestration (Kubernetes) and management.  VMware has the platform in place to support the strategy, but they need to develop and then advocate the strategy or nobody will care.
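
To illustrate the operational difference I’m describing, here’s a toy sketch (Python, with hypothetical names, not any vendor’s implementation) contrasting a monolithic VNF that scales as a unit with a decomposed, CNNF-style function whose stateless components scale independently under generic cloud orchestration.

```python
# Illustrative only: monolithic VNF/CNF scaling versus decomposed, cloud-native
# (CNNF-style) scaling. Component names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class MonolithicVNF:
    name: str
    instances: int = 1
    def scale_for_load(self, load: float) -> None:
        # The whole appliance image scales as a unit, state and all.
        self.instances = max(1, round(load))

@dataclass
class CloudNativeFunction:
    name: str
    components: dict = field(default_factory=lambda: {"session-mgmt": 1,
                                                      "policy": 1,
                                                      "packet-inspection": 1})
    def scale_for_load(self, load_by_component: dict) -> None:
        # Stateless components scale independently, under generic cloud
        # orchestration rather than NFV-specific MANO logic.
        for component, load in load_by_component.items():
            self.components[component] = max(1, round(load))

vnf = MonolithicVNF("evolved-packet-core")
vnf.scale_for_load(4.0)                      # four copies of everything

cnnf = CloudNativeFunction("5g-core-control")
cnnf.scale_for_load({"session-mgmt": 6, "policy": 2, "packet-inspection": 1})
print(vnf.instances, cnnf.components)
```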

The beauty of VMware’s partner approach to the network functions/features themselves is that if VMware is prepared to advance its platform to support CNNFs, CNFs, and VNFs, it can find partners in each of the areas and promote the cloud-native transformation in a way that acknowledges reality (the CNNF approach is the only cloud-native approach) but also blows the necessary political kisses at NFV proponents who want to justify all the time and effort spent on NFV and its evolution.  Politicians know they have to kiss all babies they’re presented with; vendors marketing to as diverse an interest group as the network operators should do the same.

But what exactly is a CNNF?  That’s perhaps the key question for VMware, because it’s difficult to see how a model of decomposed cloud-native features could be created without conceptualizing, in parallel, how it would be hosted and the middleware needed to support it.  Obviously, because VMware is supporting an “embedded” or edge-ready version of Kubernetes, they see Kubernetes as a piece of the story.  How much real experience do we have with Kubernetes in cloud-native deployments?  Is most of the heavy lifting done in a service mesh layer instead?  Lots of questions.

This is the big barrier for VMware because they have to answer these questions or they have no end-game to present to the operators.  That their positioning doesn’t depend on operators deploying their own cloud is great, but it’s critical that operators have a vision of what they are deploying, as much as where.  Defining the next step isn’t enough.  Evolution as a long battle for local feature supremacy is an unattractive picture to present to a telco CFO.  Better to show progress toward a goal, and so that CNNF end-game play is where VMware needs to focus.

The challenge in focusing there is that while there’s no question that VMware’s platform can support CNNFs, there are a lot of questions regarding what a CNNF is, and how one is architected into a service.  A critical first step in that, as I’ve said many times, is recognizing that the control and data planes of an IP network have to be separated in order to optimize cloud-native deployment.  The data plane is a simple forwarding process, linked pretty tightly to trunk locations.  The control plane is where cloud-native implementation could shine, but is it separated in VMware’s vision?  Because VMware is talking platform and not service implementation, we don’t know.  That could make it hard for VMware to promote a CNNF-based approach to infrastructure, and without that, their challenges with their 5G Telco Cloud Platform could be significant.

Operators Continue to Back Away from their Own Clouds

The service providers themselves may be giving carrier cloud its death blow, not tactically but strategically.  In the last two months, operators worldwide have been shifting their thinking and planning decisively away from large-scale data center deployments.  Carrier cloud deployment, which my model said could have generated a hundred thousand new data centers by 2030, now looks like it won’t happen.  And it’s not just that it will be temporarily outsourced to public cloud providers.  It’s G-O-N-E.

In mid-September, operators will begin (with various levels of formality) their normal fall technology planning cycle, which will take till mid-November and guide spending plans for the years beyond.  Over 85% of them now say that they don’t want to “make any significant investment in data centers”.  That doesn’t mean they won’t have them (they do already), but that they are not looking to create services and features that will require large-scale in-house hosting.

The current market dynamic was spawned by operators deciding that, rather than building clouds of their own to offer cloud computing, they’d partner with the cloud providers.  Then the operators started to show interest in hosting 5G features, and all three major public cloud providers are now in a push (Google being the most recent) to provide not only minimal hosting but also the 5G software itself.  When that pathway opened, operators insisted it was just a transitional approach, a way of scaling costs as 5G deployed.  Now?

Now, they’ve been easing away from their own clouds, obviously.  OSS/BSS systems, their own “enterprise applications,” were the next thing to be ceded by many operators to public cloud hosting.  Hey, enterprises think the public cloud is the next big thing for their applications, so why should service providers be different?  The answer, of course, is that service providers had expected to deploy their own clouds and somehow lost the will…or the justification.

There were two reasons why operators said they weren’t interested in having data centers anymore, and they were roughly equally cited.  The first was that they lacked the skills to build and sustain cloud computing infrastructure, and were doubtful that they’d learn those skills by consuming the infrastructure from a third party.  The other was that they doubted they would ever really have the applications to justify their own carrier cloud infrastructure.  In either case, it boils down to the fact that they don’t want to get into the hosting business.

Part of the problem here is that back in 2013, when I first modeled the carrier cloud space, operators believed that they would be deploying data center resources to host NFV.  Modeling their input on the topic, I came up with an NFV-driven expansion of about a thousand data centers worldwide by this point.  In point of fact, my operator contacts say we have no data centers we can attribute to NFV.  Without the pre-justification of data centers, the next application would have to bear the entire first cost.

5G, the chronologically next of the drivers, started off in planners’ minds as a pure RAN upgrade—the 5G Non-Stand-Alone or NSA version that ran 5G New Radio over 4G LTE infrastructure.  That was a reasonable evolutionary approach, but the operators came to believe that the competitive 5G market would force them to deploy 5G Core almost from the first.  Had the operators started off with carrier cloud using NFV as the driver, they could have hoped for another three or four thousand 5G-justified data centers by this point.  They started late, and didn’t have the pre-deployed data centers, so they’re behind on this too.

The rest of the application drivers for carrier cloud, the largest drivers, are all now seriously compromised.  IoT, video advertising and personalization services, and location/contextualization-based services are all over-the-top services that operators have historically not offered and with which they are culturally uncomfortable.  Does anyone think an operator would build out cloud infrastructure on a large scale to prepare for any of them?  They don’t believe it themselves, not anymore, and that’s the critical point.

If you need some specific evidence of this point, consider that AT&T is, according to the WSJ, looking to sell off its Xandr digital advertising unit.  This unit would have been a logical way to exploit new personalization/contextualization features that might be created or facilitated by virtual network infrastructure.  If you had even the faintest thought of future engagements in personalization/contextualization, would you kill off the easiest way to monetize your efforts?  I think not.  Recall, also, that AT&T is a leader in looking to public cloud providers to outsource its carrier cloud missions.

If you’re a software or equipment vendor, this is a disappointing outcome, but frankly one that those very players brought on themselves.  Selling a new technology to a buyer is more than taking an order on a different order pad.  Vendors in the data center and cloud technology space just couldn’t engage the buyer effectively, largely because they didn’t speak the same language.  The fact that all these data center drivers will either go unrealized or be realized on public cloud infrastructure is a serious hit to the vendors who could have built those hundred thousand data centers.

This is also going to have a major impact on the transformation of the network, the shift from routers and devices to software-centric network-building.  When there was a carrier cloud to host on, it was logical to presume that the network of the future would be built largely on commercial servers.  Now, it’s almost certain that it will be built on white boxes and different elements of disaggregated software.

There’s always been a good chance that the to-me-mandatory control- and data-plane separation requirements of software-based network infrastructure would demand a special data plane “server”, a resource pool dedicated to fast packet handling.  The control plane could, in theory, still have been hosted on carrier cloud, but if there’s no carrier cloud and the only alternative is to host the whole network control plane on a third-party provider, control-plane white-box deployment starts to make a lot of sense.

The question is how this comes about.  You can take a router software load and run it in the cloud, in which case your control and data planes are not separated.  You can also take traditional router software and run it in the cloud for control-plane handling alone, letting it then communicate with a local white-box data plane for the packet-handling.  Or you can build true cloud-native control-plane software, in which case whether you run it on a white box, your own server, your own cloud, or a cloud provider’s cloud wouldn’t matter much.  That could facilitate the evolution of the control plane into the binding element between legacy connection services and new over-the-top or higher-layer services.
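
Here’s a minimal sketch of the second and third options, assuming hypothetical component names: the control plane (hosted wherever makes sense) computes routes and pushes forwarding entries down to a white-box data-plane agent that does nothing but forward.

```python
# A simplified, hypothetical illustration of separated control and data planes.
from dataclasses import dataclass, field

@dataclass
class DataPlaneAgent:
    """Runs on the white box; knows nothing about routing protocols."""
    fib: dict = field(default_factory=dict)      # prefix -> egress port
    def install(self, entries: dict) -> None:
        self.fib.update(entries)
    def forward(self, dest_prefix: str) -> str:
        return self.fib.get(dest_prefix, "drop")

class ControlPlane:
    """Could run in a public cloud, a carrier edge site, or on a local server."""
    def __init__(self, agent: DataPlaneAgent):
        self.agent = agent
    def compute_and_push(self, routes: dict) -> None:
        # Stand-in for BGP/IGP best-path selection.
        fib = {prefix: min(candidates, key=lambda c: c["metric"])["port"]
               for prefix, candidates in routes.items()}
        self.agent.install(fib)

agent = DataPlaneAgent()
ControlPlane(agent).compute_and_push(
    {"203.0.113.0/24": [{"port": "eth1", "metric": 10},
                        {"port": "eth2", "metric": 20}]})
print(agent.forward("203.0.113.0/24"))   # -> eth1
```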

Is the network of the future a data plane of white boxes, joined to a control plane that spans both dedicated white boxes and some sort of cloud, even the public cloud?  Does that cloud-centric piece then expand functionally to envelop traditional control and management functions, new services that grow out of the information drawn from current services, and things we’ve never seen or heard of?  Do operator services and services of over-the-top players somehow intermingle functionally in this control-plane-sandbox of the future?  I think that might very well happen, and I also think it might happen even without a specific will to bring it about.

This might also frame out some of the details of edge computing.  5G already has a near-real-time segment in its control plane, which to me implies that we’re starting to see network/control-plane technology divide into layers based on the latency tolerance of what runs there.  If we’re able to assign things to an appropriate layer dynamically, we can see how something like a mobile-edge node could host 5G features and also higher-layer application and service features that had similar latency requirements.  If we had a fairly well distributed edge, we might even see how failover or scaling could be accomplished, by knowing what facilities exist nearby that could conform to the latency-layer specifications of the component involved.  This might even end up influencing how we build normal applications in the cloud.
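
A toy example of the latency-layer idea, with made-up tiers and numbers rather than anything from the 5G specs: components declare a latency tolerance, and placement picks the deepest facility class that can still honor it.

```python
# Illustrative latency-layer placement; sites, tiers, and budgets are invented.
SITES = [
    {"name": "cell-edge-node", "tier": "edge",  "rtt_ms": 2},
    {"name": "metro-dc",       "tier": "metro", "rtt_ms": 10},
    {"name": "regional-cloud", "tier": "core",  "rtt_ms": 40},
]

def place(component: str, max_latency_ms: float) -> str:
    # Prefer the deepest (presumably cheapest) site that still meets the budget;
    # failover or scaling would re-run this against whatever sites remain nearby.
    candidates = [s for s in SITES if s["rtt_ms"] <= max_latency_ms]
    if not candidates:
        raise ValueError(f"no facility can host {component} within {max_latency_ms} ms")
    return max(candidates, key=lambda s: s["rtt_ms"])["name"]

print(place("near-real-time-ran-control", 10))   # -> metro-dc
print(place("interactive-ar-render", 5))         # -> cell-edge-node
print(place("subscriber-analytics", 100))        # -> regional-cloud
```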

One question this all raises is whether the operators are in any position to supply the right infrastructure and platform architecture for carrier cloud.  A more important question, since I think the answer to the first question is obvious, is whether the operators are in any position to define how network features/functions are hosted in carrier cloud.  Should they let the cloud providers run with that, redefining things like NFV and zero-touch automation?  NFV, at least, was supposed to identify specs, not create new ones.  Might the trend toward public cloud hosting of 5G end up helping carrier cloud, and even helping operators transform, more than operators themselves could have?  I think that’s a distinct possibility.

Another question is whether the operators really think they can host all network features in a public cloud.  NFV hosted virtual devices, so it didn’t present network and latency issues greatly different from current networks.  If you start thinking cloud-native, if you start thinking even about IoT and optimum 5G applications, you have to ask whether some local hosting isn’t going to be needed.  We might well end up without “carrier cloud” but with a real “carrier edge” instead, and that could still generate a boatload of new data center opportunities.  We might also see specialized hosting in the form of white-box implementations of network transport features, things that benefit from their own chipsets.

The cloud is a petri dish, in a real sense.  Stuff lands in it and grows.  The goal of vendors, cloud providers, and the operators themselves must be to fill the dish with the right growth medium, the technical architecture (yes, that word again!) that can do what’s needed now and support what blows in from the outside.  I think that natural market forces just might be enough to align everyone with that mission, and so it’s going to be a matter of defining the relationships in that control-plane-cloud.  Who does that?  Probably, who gets there first.

What’s the Real Role of Virtual Network Infrastructure in New Services?

Does a true virtual network infrastructure promote new services?  To make the question easier, can it promote new services better than traditional infrastructure?  You hear that claim all the time, but the frequency with which a given statement is made doesn’t relate to its truth these days.  Rather than try to synthesize the truth by collecting all possible lies, let’s look at some details and draw some (hopefully) technically meaningful conclusions.

The opening piece of the puzzle this question raises is also the most complicated—what exactly is a new service?  Operators tend to answer this by example—it’s things like improved or elastic QoS, wideband voice, and all the other stuff that’s really nothing more than a feature extension of current services.  All this sort of stuff has been tried, and none of it has changed the downward trajectory of profit per bit.

Analysts and writers have tended to answer the question in a totally different way.  “New services” are services that 1) operators don’t currently offer, and 2) that have some credibility in terms of revenue potential.  These would normally be part of that hazy category called “over-the-top” (OTT) services, because they are not extensions to connection services.  This is, in a sense at least, the right answer, but we have to qualify that “rightness”.

We have a growing industry of companies, the “Internet” companies, that have offered OTT services for decades.  Web hosting, email, videoconference and web conference, and even cloud computing, are examples of this traditional level of OTT.  Operators could get into these spaces, but only by competing against entrenched incumbents.  Given the competitive expertise of the average operator, that’s a non-starter.

What remains of the OTT space after we wash out the Internet establishment are services that for some reason haven’t been considered attractive.  In the past, I’ve identified these services as falling into three categories.  The first is personalization for advertising and video-related services, the second is “contextualization” for location- or activity-centric services, and the last is IoT.  All these services have a few attributes that make them unattractive to Internet startups, but perhaps valuable to operators, and it’s these attributes that would have to link somehow with virtual-infrastructure implementations of traditional service features if the virtual network of the future is really a path to new services.

The first of these attributes is that information obtained or obtainable from legacy services forms the basis for the service.  The movement of a person, a group of people, or an anonymous crowd is one example.  Past and current calling/messaging behavior is a second example.  Location, motion, and pattern of places visited is a third.  All of this is stuff that we can know from today’s networks or their connected devices.

The second attribute is that this critical information is subject to significant regulation for security and privacy.  What you’ve searched for or purchased online is, for many, a potential privacy violation waiting to happen.  Imagine extending it to who you’re talking with, where you’ve stopped in your stroll, and where you are at this moment.  This sort of thing would require explicit permission, and most Internet companies do everything short of fraud (well, most of the time, short) to avoid posing the question “Will you accept sharing this?”

The third attribute is that the service is likely useful only to the extent that it’s available pervasively.  A good example is contextual services that rely on location and behavior.  If they work within one block or one town, they don’t provide enough utility to an average user to encourage them to pay.

Which brings about the final attribute: there is credible reason to believe users would pay directly for the service.  Ad sponsorship has one under-appreciated limit, which is that the global ad spend has for years been largely static in the $600 billion range.  Everything can’t be ad-sponsored; there’s not enough new money on the table, so new stuff would cannibalize the advertising revenue of older things.

All this leads us to now address the opening question, and I think many of you can see a lot of the handwriting on the wall.  There are three pathways for virtual network infrastructure to facilitate the development of new services.  First, the new infrastructure could do a better job of obtaining and publishing the information needed for new services.  Second, the new infrastructure could create a better framework for delivering the services, perhaps by tighter coupling with the infrastructure in a cloud-native way.  Finally, the new infrastructure might be built on cloud “web service” features, a kind of function-PaaS, that could also be used in constructing the new services.

As regulated entities, operators understand privacy and compliance.  They actually hold all the information that anyone would need; it’s just a matter of getting it exposed.  Further, if we had strong APIs to provide a linkage between a higher-level service and the RAN, registration, mobility, and transport usage of cells and customers, that data would be useful even without personalization.
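
To show the kind of API I mean, here’s a hypothetical sketch (names and fields are invented for illustration): a read-only facade over control-plane data that exposes cell status and aggregate crowd density without ever handing out subscriber identities.

```python
# Hypothetical network-information facade; the raw control plane stays behind it.
from dataclasses import dataclass

@dataclass
class CellStatus:
    cell_id: str
    attached_devices: int
    load_pct: float

class NetworkInfoAPI:
    """What the operator exposes to higher-level services."""
    def __init__(self, control_plane_source):
        self._src = control_plane_source
    def cell_status(self, cell_id: str) -> CellStatus:
        raw = self._src.query("cell", cell_id)
        return CellStatus(cell_id, raw["attached"], raw["load"])
    def crowd_density(self, cell_ids) -> int:
        # Aggregate counts only; nothing here identifies an individual subscriber.
        return sum(self.cell_status(c).attached_devices for c in cell_ids)

class FakeControlPlane:          # stand-in for RAN/registration data feeds
    def query(self, kind, key):
        return {"attached": 1250, "load": 0.62}

api = NetworkInfoAPI(FakeControlPlane())
print(api.crowd_density(["cell-17", "cell-18"]))   # -> 2500
```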

Operators also bill for services now, so having to deliver services for bucks would be no paradigm shift for them.  They have to make all manner of regulatory disclosures with respect to information, and they’d have a conduit to the user to obtain permission for data use.  The beauty of having the operators take this data and convert it into something that could then spawn personalization or contextualization is that the raw data wouldn’t have to be available through the operators’ services at all.  Third-party apps couldn’t compromise what they don’t have.

How does virtual network infrastructure contribute to these four points?  “Virtual” network infrastructure means at the minimum that network features are cloud-hosted, and if we want to maximize benefits, it means that the implementation is cloud-native.  As I’ve noted in many blogs, this doesn’t mean that all the elements of a service are running on commercial servers.  I think it’s likely that data-plane features would still be hosted on white boxes that were specialized via silicon to the forwarding mission.  It’s going to come down to the control plane.

Most of what a network “knows” it knows via control-plane exchanges.  It’s possible to access control-plane data even from boxes, via a specialized interface.  In a virtual network implementation, the access would be via API and presumably be supported in the same way that control-plane messaging was supported.  I think most developers would agree that the latter would at least enable a cleaner and more efficient interface, and it would (as I’ve noted before) also enable this control-plane-hosting framework to become a sort of DMZ between the network and what’s over top of, or supplementing in a feature sense, the network.

Let’s look at those four points with this control-plane-unity concept in mind.  First, if the control plane is indeed the place from which most network information would be derived, then certainly having the mechanism to tightly couple to the control plane would maximize information flow.  We can say, I think, that this first point is a vote in favor of virtual-network support for new services.

The second of our four points is the management of critical information.  In our control-plane-unity model, the service provider could introduce microservices that would consolidate and anonymize the information, so that if the information is exposed either to a higher-layer business unit or to another company (wholesale service to an OTT), the information has lost its personalized link, or the degree of personalization can be controlled by a party who is already regulated.  That means our second point is also addressed.
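
A simplified sketch of what such a microservice might look like, with illustrative thresholds and fields: subscriber identifiers are replaced with salted hashes, and only aggregates above a minimum group size ever leave the operator’s domain.

```python
# Toy anonymizing/aggregating microservice; salt, threshold, and fields are illustrative.
import hashlib
from collections import Counter

SALT = "rotate-me-regularly"        # assumption: operator-held, never exposed
MIN_GROUP_SIZE = 20                 # suppress groups small enough to re-identify

def pseudonymize(subscriber_id: str) -> str:
    return hashlib.sha256((SALT + subscriber_id).encode()).hexdigest()[:12]

def crowd_report(events):
    """events: iterable of (subscriber_id, cell_id) sightings from the control plane."""
    per_cell = Counter()
    seen = set()
    for subscriber, cell in events:
        key = (pseudonymize(subscriber), cell)
        if key not in seen:          # count each pseudonym once per cell
            seen.add(key)
            per_cell[cell] += 1
    # Only aggregates above the threshold are released to other business units or partners.
    return {cell: count for cell, count in per_cell.items() if count >= MIN_GROUP_SIZE}

sample = [(f"imsi-{i}", "cell-17") for i in range(25)] + [("imsi-1", "cell-99")]
print(crowd_report(sample))          # -> {'cell-17': 25}; cell-99 is suppressed
```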

Point four (I know you think I’ve missed one, but bear with me!) is the question of willingness to pay.  This one is a bit more complicated, because of course users want everything free.  The reason why free-ness is difficult for these new services is that personalization to the extent of the individual is what makes focused advertising valuable.  It is possible to anonymize people in information services, of course, but unless there’s some great global repository of alias-versus-real mappings, every source of information would necessarily pick their own anonymizing strategy, and no broad knowledge of behavior could be provided.  Some work is obviously needed here.

In the meantime, of course, there’s always the chance that people would pay.  We pay for streaming video today (in most cases), so there’s at least credible reason to believe that a service could be offered for a fee if the service’s perceived value was high enough.  Operators couldn’t make this value guarantee unless they either offered the retail service themselves, or at least devised a retail service model that they could build lower-layer elements into.  More work is needed here too.

Point number three is the hardest to address.  It’s difficult to build a service that has a very narrow geographic scope, particularly if that service is aimed in large part at mobile users.  No new network technology is going to get fork-lifted into place, after the old has been fork-lifted into the trash.  A gradual introduction of virtual-network technology defeats virtual-network-specialized service offerings by excess localization.

The best solution here is to focus more on 5G, not only on 5G infrastructure but on the areas of metro (in particular) networking that 5G would likely impact.  If an entire city is virtual-network-ized, then the credibility of new services driven by virtual-network technology is higher, because a large population of users is within the service area for a long time.

The ideal approach is to play on the basis of virtualization, which is the concept of abstraction.  Some of the control-plane information that could be made available to higher-layer applications/services via APIs could also be extracted from existing networks, either from the devices themselves or from management systems, appliances, or applications (like the IMS/EPC implementations, which could be either software or device-based).  If an abstraction of service information APIs can be mapped to both old (with likely some loss of precision) and new (with greater likely scope and better ability to expand the scope to new information types), then we could build new services that would work everywhere, but work better where virtualization of infrastructure had proceeded significantly.
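
Here’s a rough sketch of that abstraction idea, with hypothetical class names: one service-information API backed by two adapters, a coarse one over legacy management systems and a finer-grained one over hosted control-plane functions, so the same service logic works everywhere but works better where virtualization has progressed.

```python
# Hypothetical adapter pattern mapping one service-information API onto old and new sources.
from abc import ABC, abstractmethod

class LocationSource(ABC):
    @abstractmethod
    def locate(self, device_id: str) -> dict: ...

class LegacyManagementAdapter(LocationSource):
    def locate(self, device_id: str) -> dict:
        # About the best the old network can do: which cell the device registered in.
        return {"device": device_id, "resolution": "cell", "area": "cell-17"}

class VirtualControlPlaneAdapter(LocationSource):
    def locate(self, device_id: str) -> dict:
        # A hosted control plane can expose richer, fresher state.
        return {"device": device_id, "resolution": "sector+timing",
                "area": "cell-17/sector-2", "age_ms": 40}

def contextual_service(source: LocationSource, device_id: str) -> str:
    loc = source.locate(device_id)
    # Same service logic everywhere; precision improves where infrastructure is virtualized.
    return f"{device_id} is near {loc['area']} ({loc['resolution']})"

print(contextual_service(LegacyManagementAdapter(), "dev-42"))
print(contextual_service(VirtualControlPlaneAdapter(), "dev-42"))
```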

The conclusion to my opening question isn’t as satisfying as I’d like it to be, frankly.  New virtual-network architecture implementations could offer a better platform for new services, but there are barriers to getting those architectures in place and realizing the benefits fully.  The biggest problem, though, may be that operators haven’t been comfortable with the kind of new services we’re talking about.  Thus, the irony is that the biggest question we might be facing is whether, without a strong new-services commitment by operators, we can hope to ever fully realize virtual-network infrastructure.