Could Network Transformation Actually Transform the Network?

There are a lot of ways to transform networks, but the most fundamental way would clearly be to build them differently, meaning not only different vendors but different network architectures.  We’ve kicked around a bunch of suggested new approaches but none of them have really gained traction.  The best proof of that is that Cisco still turns in what’s probably the best vendor performance, and they’re legendary for changing positioning a lot and product only a little.

The problem with a transformational architecture, of course, is that it’s hard to convince network buyers to make massive changes, particularly to something that’s largely (at this point, surely) untried.  When you try to ease into transformation, you run into the problem of balancing the benefits and costs/risks.  Each step of a transformation, according to network buyers, has to be justified on its own.  Each step has to present either (at the early stages) an avenue for graceful retreat, or (at later stages) an opportunity to hunker down on the current state and stay there.

All of this is complicated by the fact that we’ve been transforming for decades now, but doing so without really planning out a path.  In the ‘80s, networking was based on public TDM trunks and user-added nodal devices.  By the end of the decade, that single path had divided.  We still had trunk-and-node networking, but we added two streams.  One was what we could call the “packet services” model that included frame relay and ATM, and the other was the “Internet model” based on IP.

There are a lot of reasons why frame relay and ATM failed, but the biggest reason was that the Internet was a new application that brought its own service model along.  The explosion in Internet use created a revenue opportunity and traffic flows far bigger than anything in the old business private-network world, and it created a network that could carry that same business VPN traffic as well.  However, part of the problem was that the transformation of the then-current private-network model took too long and didn’t present risk-free and easily justified steps to buyers.

We can see the same sort of thing today.  Could you “replace” IP networks with SDN?  On the scale of the Internet or even a large VPN, companies and operators agree privately that’s a doubtful proposition.  Could you replace routers with hosted instances?  The same people say it wouldn’t be possible overall, though for smaller-scale nodes it would work.  What can we do then to “transform” networks?

Everyone I talk with believes there are three basic truths.  First, you have to separate service networks from transport networks.  That offers two benefits: traditional technology like fiber and new options like the 5G/FTTN hybrid can be introduced without impacting services themselves, which is critical, and service networks are per-tenant or per-mission and smaller in traffic scale than transport, so you could hope to use more hosting and fewer appliances.  Second, you have to make a significant improvement in opex to help fund your changes.  Finally, you have to link the transformation to new revenue, meaning new services.

Virtualization is a big part of all of these points.  Separating service networks from transport is really saying that we need to elevate our notion of transport to subsume some of the features of service networks.  Virtualization, meaning the creation of tunnels, “pseudowires” (in memory of my good friend Ping Pan, now passed), or virtual routes, is a way of constructing the underlayment of a service network in a way that’s most convenient for the specific service network mission.  If this is extended to include rerouting traffic and resizing paths, then the features of the transport network would improve to the point where simplification of the service network is possible.

Service/transport separation can thus reduce costs by reducing the complexity of the overlay (Level 3 in the current network).  You could imagine that a mature virtualization-of-transport strategy could let us build service networks with something very much like SD-WAN technology linked to agile transport paths.  These SD-WAN-like nodes would be simpler than today’s routers and also fully hostable, which means they could be done with open technology.
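
To make the idea concrete, here’s a minimal sketch (plain Python, with invented class and tunnel names, not any vendor’s product or API) of what such an SD-WAN-like service node might look like: a simple per-tenant flow table that maps traffic onto transport tunnels the underlying network can resize or reroute on its own.

```python
# Conceptual sketch only: a toy "service node" forwarding over abstract
# transport tunnels, in the spirit of the SD-WAN-over-agile-transport model
# described above.  All names are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class TransportTunnel:
    """A virtual path provided by the transport layer (tunnel/pseudowire)."""
    tunnel_id: str
    capacity_mbps: int          # the transport layer can resize this
    endpoints: Tuple[str, str]  # and can reroute it without telling the service layer

@dataclass
class ServiceNode:
    """An SD-WAN-like overlay node: simple flow table, no routing-protocol state."""
    name: str
    flow_table: Dict[str, str] = field(default_factory=dict)   # dest prefix -> tunnel_id
    tunnels: Dict[str, TransportTunnel] = field(default_factory=dict)

    def attach_tunnel(self, tunnel: TransportTunnel) -> None:
        self.tunnels[tunnel.tunnel_id] = tunnel

    def add_flow(self, dest_prefix: str, tunnel_id: str) -> None:
        self.flow_table[dest_prefix] = tunnel_id

    def forward(self, dest_prefix: str, payload: bytes) -> str:
        # The service node only picks a tunnel; path selection, resizing, and
        # rerouting are the transport layer's problem.
        tunnel = self.tunnels[self.flow_table[dest_prefix]]
        return f"{self.name}: {len(payload)}B for {dest_prefix} -> {tunnel.tunnel_id}"

# Usage: one tenant's branch node with two agile transport paths.
node = ServiceNode("branch-12")
node.attach_tunnel(TransportTunnel("t-metro", 500, ("branch-12", "hub-east")))
node.attach_tunnel(TransportTunnel("t-backup", 100, ("branch-12", "hub-west")))
node.add_flow("10.1.0.0/16", "t-metro")
print(node.forward("10.1.0.0/16", b"hello"))
```

The point of the sketch is the division of labor: the service node holds only per-tenant forwarding state, while capacity and path management stay in the transport layer.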

One thing I think would enhance the value of the nodes in this approach is the use of the P4 flow-programming language.  The value of P4 is that you can use it to implement a full suite of router features, but also to implement firewall-like forwarding (SD-WAN) and even SDN OpenFlow forwarding.  P4 can be adapted easily to custom silicon for accelerating the forwarding process.
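
P4 itself is a specialized flow-programming language, so rather than guess at P4 syntax, here’s a rough Python sketch (tables, fields, and actions all invented) of the match-action abstraction a P4 program expresses.  The same pipeline structure covers router-style longest-prefix matching, firewall/SD-WAN-style rules, and OpenFlow-style matches; only the match fields and installed actions differ.

```python
# Illustrative sketch of the match-action abstraction that flow-programming
# languages like P4 express; this is plain Python, not P4 code.
import ipaddress
from typing import Callable, Dict, List, Optional, Tuple

Packet = Dict[str, str]            # e.g. {"dst_ip": "10.1.2.3", "dst_port": "443"}
Action = Callable[[Packet], str]   # what to do on a match

class MatchActionTable:
    def __init__(self, match_fn: Callable[[Packet, Tuple], bool]):
        self.match_fn = match_fn
        self.entries: List[Tuple[Tuple, Action]] = []

    def add(self, key: Tuple, action: Action) -> None:
        self.entries.append((key, action))

    def apply(self, pkt: Packet) -> Optional[str]:
        # Entries are checked in installed order; this toy installs the most
        # specific entries first (real LPM hardware sorts by prefix length).
        for key, action in self.entries:
            if self.match_fn(pkt, key):
                return action(pkt)
        return None  # table miss

# Router-style table: prefix match on destination IP.
def lpm(pkt: Packet, key: Tuple) -> bool:
    (prefix,) = key
    return ipaddress.ip_address(pkt["dst_ip"]) in ipaddress.ip_network(prefix)

routes = MatchActionTable(lpm)
routes.add(("10.1.0.0/16",), lambda p: "forward via port 3")
routes.add(("0.0.0.0/0",),   lambda p: "forward via port 1")   # default route

# Firewall/SD-WAN-style table: exact match on destination port.
def port_match(pkt: Packet, key: Tuple) -> bool:
    return pkt.get("dst_port") == key[0]

acl = MatchActionTable(port_match)
acl.add(("443",), lambda p: "permit")
acl.add(("23",),  lambda p: "drop")

pkt = {"dst_ip": "10.1.2.3", "dst_port": "443"}
print(acl.apply(pkt), "/", routes.apply(pkt))   # permit / forward via port 3
```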

Of course, the opex benefits of this approach could also be significant.  Application of service automation to the virtual transport layer is fairly easy, and you could use cloud orchestration to deploy the service nodes.  That’s because you’re building virtual routes upward from the optical layer, and you have little or no legacy technology in the network to have to co-manage.  Instead, you’d simply let legacy stuff continue to use the optical network directly, running in parallel with the new service network.

From an evolution perspective, that means a bottom-up play by optical vendors would be very smart.  That’s why I’ve been disappointed in the lack of aggression Ciena has shown in positioning this approach.  However, it would also be easy to build to this model starting with the service network at the top, and that may be what’s in store for us if operators like AT&T continue to promote a new cell-node model.  Some sort of virtual network is explicit in 5G overall, and if 5G deployment happens on a fast-enough pace we might see enough 5G-virtual influx to establish a top-down path into our preferred model.

Enterprise use of hybrid cloud computing could also create a service-virtual-network top-down opportunity.  Again we could look at SD-WAN, which is a simple model for supporting enterprise connectivity without MPLS VPNs.  Today, SD-WAN almost always rides on the Internet, but an SD-WAN node could also be connected to agile virtual transport tunnels.  Such a move would make it possible for operators to build VPNs that were more than tunnels over the Internet: more deterministic, and based on cheaper infrastructure than MPLS VPNs.

But with all of this, we’re still missing our last point.  Remember that what made IP the ruler of the network world wasn’t good evolutionary planning; we didn’t have any such thing.  It was the Internet as a new application, the first and only populist data service.  Today, the challenge for “new revenue” or “new services” is that there’s nothing likely to emerge in that category that’s not a service on/over the Internet.  That means it’s very likely to be an OTT service, open to all the OTT giants to compete in.  Thus, it’s not likely to drive a transformation in network technology.

Absent the radical driver of new revenue, all the other benefits of transforming the network itself would have to step up to fill the gap, and that would require an organized approach to a new way of networking.  Vendors are clearly not going to deliver on that, and operators themselves seem (in the end) to be either sitting back and waiting for somebody to propose a new model, or trying to build a “new” model based on old principles.  NFV, for example, virtualizes devices and thus tends to sustain the old way of networking.

And that, my friends, is the challenge of fundamental transformation.  We have a global Internet, global ISPs, global operators, and all of these rely on traditional IP networks.  I firmly believe that a two-tier transport/service model would work better, and that vendors could define a migration path to it that would satisfy operators’ cost/benefit requirements and even provide a fairly safe transition strategy, one that you could fall back from early on if needed.  However, I’m fairly certain that incumbent router vendors aren’t interested in rocking the boat.  That means it will take another class of vendor to promote the transition.  Optical, cloud, virtual-network, 5G…there are plenty of candidates, but is there the will to take a chance, even in one of these groups that has little to lose and a lot to gain?  We may find out as 2019 merges into 2020.

Is the “New Facebook” Really New at All?

Facebook has announced a “privacy-focused” vision for its future, a response to what can only be called an avalanche of problems with privacy, fake news, and abusive posts.  Some people have responded to these issues by dropping out of Facebook, but it’s less likely that this has stimulated Facebook to act than that the company fears government regulation, particularly internationally.  There are a number of questions the new vision raises, including whether Facebook means it, whether it will work as planned, and whether the company can deliver to investors based on the shift.  A bigger question not often asked is whether Facebook’s issues are symptomatic of problems in the Internet overall.

In business terms, the Internet is a two-layer structure.  At the bottom, there’s the access network that provides users with a portal onto the second layer.  This access network is provided by Internet service providers (ISPs) for both home and business users at fixed sites (“wireline”), and to mobile device users (“mobile broadband”) via cellular mobile services.  At the top is a layer of information and experience providers accessed over the bottom layer.  This group, typically called “over-the-top” or OTT players, don’t provide network connectivity, but rather provide the stimulus for being on the Internet to start with.

Among the OTT players, we can find three categories of resources.  First, there are company or organizational websites that promote the products, services, membership, or whatever else of the entity that creates the site.  These websites are cost-justified by what they promote.  Second, there are information sites sponsored by advertising, which include many publications and social networking sites, search engines, and so forth.  These are paid for through the revenues obtained by presenting ads to site visitors.  Finally, there are fee-for-service sites that collect fees for what they provide.  These include the streaming TV services like DirecTV Now and YouTube TV, most VoIP sites, many online news and magazine sites, and so forth.

Many sites, including Facebook, have a foot in a number of these spaces, but Facebook is primarily a player in that second, ad-sponsored group.  This happens to be the group that poses the most regulatory and public policy issues, because of the way that advertising is valued and because of the inherent limits in the total addressable market (TAM) for ad-funded services.

Total global advertising revenue for 2019, according to my modeling, is $612 billion.  This number has seen fairly steady growth of about 4% annually, and it includes print, TV, and digital/online advertising in all their permutations.  There does not appear to be anything much that can be done to increase the TAM growth rate, and obviously OTT players wouldn’t be satisfied with 4% annual revenue growth, which means that the ad-sponsored OTTs need to gain market share at somebody’s expense.  For social media and search players and for OTTs that supply ads to websites that depend on them, the best way to gain market share is to improve targeting and ad click-through success.

Ad targeting means more information about the targets, which is where privacy issues have come in.  Any company that can gather personal information, information about recent interests and activities, purchases, and so forth can leverage that to place ads that are more effective.  That means more advertisers will use them, which means their market share and revenue will rise.

A corollary point is that more eyeballs also mean more ad benefit.  You can have great targeting, but if nobody goes to your page(s) then it does you or your advertisers no good.  That means that people and topics that generate views are valuable to OTTs like Facebook, which many believe is why the company has been slow to respond to issues of fake news, propaganda, and so forth.  It’s even more important to have hot topics when those topics also drive advertising.  You want a bunch of emotionally driven people spending a lot of time online, being targets for ads and at the same time revealing hot-button factors about themselves that can be leveraged to get even more attention.

This point is where we jump off to the “tribal” model of social media.  Suppose you’re a believer in UFOs from the planet Mars.  It’s not exactly mainstream, so it’s likely that your family and friends don’t share that view.  In many cases, this lack of social support will diminish your commitment to those flying Martians.  But go to social media and you have a potential way of engaging people with like interests, no matter how bizarre or even deviant those interests might be.  Reinforced, you become a member of the Flying Martians Conspiracy group, and that group generates activity that the social media player can leverage with ads.

The value of ads, then, is related to the targeting and the number of ad presentations you can promise.  It’s fairly easy to see that both these factors favor larger players, and that consolidation in OTTs is the consequence of companies’ need to mass up to compete effectively.  That means that breaking up “big tech”, at least in OTT ad-sponsored areas, would be very difficult.  It also means that the same forces that got companies big would tend to make them bigger still.  Combine that with the desire to improve targeting by gathering information and you can see how Facebook ends up where it has.

You can also see why Facebook would find it incredibly difficult to abandon its current practices.  If you read the post by Facebook’s CEO Mark Zuckerberg, it seems to me that what it’s promising is “building a privacy-focused messaging and social networking platform” that includes encryption, non-persistent messages, and options for group communications that aren’t “in the open”.  What it doesn’t say, and in my own view can’t say because it’s not true, is that this new model will replace the current Facebook model or cure the issues users have with privacy and fake or hurtful or manipulative posts.

Offering a platform that turns Facebook into a secure chat room may sound good from a privacy perspective, and it might even sound good from the perspective of avoiding the tribalistic abuses we see like trolling.  The problem is that offering that secure chat room isn’t the same as switching to it, and users have chat facilities available already.  Presenting a choice Facebook knows users won’t make not only fails to address the issues, it may submerge the things that could really help.  However, I’m not at all convinced that any real change by Facebook or other ad-sponsored sites, or even TV networks, is possible.

Neither Facebook nor any other ad-sponsored experience source can sustain its share price by trashing its main revenue stream, and so nobody will do that.  This new model is going to be little more than a parallel pathway.  Ironically, instead of pulling regulators off Facebook’s scent, it’s likely to increase their angst.  A messaging platform based on encryption, secure from monitoring by everyone, even Facebook?  Might the new Facebook platform be immune from any oversight, and thus become a place where things that both public policy and user (particularly parent) consensus says should not be on social media can now happen?

The real question here is how a society can respond to ad sponsorship in a practical way.  You will surrender privacy for experiences sponsored by ads, period.  What’s done to protect your privacy will ultimately reduce the value of ads, and thus the quantity and quality of the experiences that ad sponsorship can fund.  Do you want good stuff?  You’d have to pay for it, and nobody wants to do that either.

You also face a less visible but perhaps more significant risk in what we could call “content bias”.  Content bias means that types of content that are likely to generate more ad revenue are more likely to be presented.  We see that literally every day in the tech industry, but we also see it on television networks and even news programs.

Go back five or six years and ask what used to be covered on the news; the answer was all sorts of events.  Look at news TV today and it’s almost 100% politics.  Why?  Because once you hear about a “normal” news event, you don’t need to keep hearing about it.  Politics can offer endless opportunities for reanalysis.  Those immersed in the political debates (on either side) are more likely to tune in than those who just want a quick summary of what’s happening in the world overall.  News networks learned that during the 2016 campaign, and they responded predictably.  Thus, you will get political news, whatever you might like to get personally.

In tech, content bias has literally crippled one of the major engines of the industry.  Back in the 1980s, certain tech publications ranked second only to “experience of a known and trusted peer” as the most important influence on buyers’ strategic thinking.  Today, the influence of the media on strategic thinking and planning is just a bit above statistical significance.  The top reason IT and network professionals give for reviewing an article or analyst report isn’t to support their planning, but to convince non-technical management to make a decision.  Those top publications of the ‘80s aren’t even published anymore.

Lawmakers won’t fix this, not because they couldn’t but because voters won’t let them.  The reduction in ad revenue that enforced privacy rights would bring would starve people of free stuff.  Most people today would happily trade privacy for free videos and music, and so in practice everyone ends up making that trade.  If anyone in Congress were brave enough to take an effective position against ad sponsorship and in favor of effective privacy protection, they’d either find little or no support among other lawmakers, or be swept out of office at the next election.  It would take a major, destructive breach of trust to change people’s minds, which nobody wants to see.

Instead, we’re going to see consolidation among OTTs, as we already are, as people fight for market share in a contained ad revenue space.  We’re going to see more and more invasive advertising, which we’re already seeing, and we’re going to see Facebook promising to do stuff that’s not going to change anything…which just happened and will happen again and again.  I just wish Mark had been frank and said “Look, we’ve got you, and you’ve got us, and we know it’s symbiotic interest that drives us both, so we’ll do a little dance here and then it’s business as usual.”

But this isn’t Facebook’s fault; it’s ours.  At the core of the Facebook problem is the fact that we want something for nothing, but that’s not what’s happening.  Payment is there, but it’s neither direct nor obvious, and from that point it’s only a short romp to “insidious”.  Robert Heinlein said it best in an old sci-fi novel called “The Moon is a Harsh Mistress”: “there’s no such thing as a free lunch.”

Should, or Can, We Add CI/CD to NFV?

ETSI is now looking at how network features, specifically virtual network functions (VNFs), could be updated without disrupting the services they’re used in.  In the cloud and application development, this has been a requirement for some time, and it fits into something called “continuous integration/continuous delivery” or CI/CD.  The idea is to allow changes to be made quickly to something that’s functionally a piece of a whole, without breaking the whole.

This seems like a great step forward, but it’s not as simple as it sounds, in part because networks and applications are actually very different.  Furthermore, there are different views on what the focus of virtualizing network features should be, and very different views on how to go about it.  Finally, to me at least, there’s a question as to whether feature updates in NFV are the real problem, because service disruptions from other sources are more likely.  The remedies for the latter should clearly be considered as at least a part of the strategy for dealing with CI/CD issues.

There are two pieces to CI/CD.  One piece deals with the functional congruence of a new version of a feature or application component with the rest of the application or service.  Is an enhancement implemented in a way that permits its introduction without disruption?  That’s a matter first of design, and second of testing.  The second piece deals with the question of whether the deployment of a new feature would, by disrupting operations in some way, break at least temporarily the thing it’s a part of.

The challenge on the functional congruence side, as it relates to network services versus applications, is that the ETSI NFV ISG has presumed that VNFs are the analog of physical network functions or devices, and thus should be interchangeable at the VNF level.  Testing a VNF in a CI/CD framework is difficult because it’s difficult (or perhaps even impossible) to determine what partner functions its features have to mesh with.  How many variations on VNF combinations might there be in a service?  A lot, particularly if there are many vendors offering the same VNF.

It’s my own view, as a software type, that VNFs really have to be tested against class, meaning that a VNF should “implement” a general class or object that presents as an intent model and has a very specific set of external inputs, outputs, and characteristics.  Any VNF that matches class requirements would necessarily work in concert with other VNFs that could “see” or “depend on” only those external representations.  However, if you wanted to add in CI/CD procedures to enhance this test-against-class paradigm, it would at least be possible.  Would it be worth it?  That’s up to you, or rather the operator deploying the stuff, to decide.
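
As a minimal sketch of that test-against-class idea (the class, ports, and checks here are hypothetical, not drawn from any ETSI specification), a VNF “class” could be expressed as an abstract intent model with fixed external inputs, outputs, and characteristics, plus a conformance check that any implementation has to pass:

```python
# Sketch of "test against class": a VNF class is an abstract intent model
# exposing only external ports, a policy interface, and reported status.
from abc import ABC, abstractmethod
from typing import Dict

class FirewallClass(ABC):
    """Intent-model 'class' for a firewall VNF: only what's visible from outside."""
    EXTERNAL_PORTS = {"wan_in", "lan_out", "management"}

    @abstractmethod
    def ports(self) -> Dict[str, str]:
        """Map of external port name -> binding (address, interface, etc.)."""

    @abstractmethod
    def apply_policy(self, policy: Dict) -> bool:
        """Accept a policy document; return True if it was applied."""

    @abstractmethod
    def status(self) -> Dict[str, str]:
        """Report the characteristics the intent model promises to expose."""

def conforms(vnf: FirewallClass) -> bool:
    """Minimal conformance check a CI pipeline might run against any vendor's VNF."""
    return (set(vnf.ports()) == FirewallClass.EXTERNAL_PORTS
            and vnf.apply_policy({"default": "deny"})
            and "state" in vnf.status())

class VendorAFirewall(FirewallClass):
    def ports(self): return {"wan_in": "eth0", "lan_out": "eth1", "management": "eth2"}
    def apply_policy(self, policy): return True
    def status(self): return {"state": "up"}

print(conforms(VendorAFirewall()))   # True: interchangeable with any conforming VNF
```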

The other side of this is the question of whether you could introduce anything into a running service and have an acceptable level of impact.  When you upgrade the functionality of an application component and want to introduce it, you can either replace the current instance of the component (swap out/in) or you can parallel that instance and phase to it gracefully.  Those two choices map to stuff we do all the time in cloud applications that are resilient (you can replace a broken part by redeployment), scalable (you can add instances to increase processing capacity), or both.  Thus, we can look at the way that cloud redeployment and scaling is handled to see if it would work for network services and their features.
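
A toy illustration of those two choices for a stateless application component, with invented handler names and a simple weighted shift of new requests (roughly the pattern cloud schedulers automate), might look like this:

```python
# Toy sketch of the two replacement patterns: an abrupt swap versus
# paralleling a new instance and phasing traffic to it gracefully.
# Stateless request handling only; all names are invented.
import random

def phase_over(old_handler, new_handler, requests, steps=5):
    """Shift the share of new requests sent to new_handler from 0% to 100%."""
    results = []
    for step in range(1, steps + 1):
        weight_new = step / steps
        for req in requests:
            handler = new_handler if random.random() < weight_new else old_handler
            results.append(handler(req))
    return results   # the old instance is retired only after it has drained

old = lambda req: f"v1 handled {req}"
new = lambda req: f"v2 handled {req}"

# Swap out/in: everything after the cutover goes to v2; work in flight at v1 is at risk.
swapped = [new(r) for r in range(3)]

# Parallel-and-phase: both versions run; the mix shifts gradually toward v2.
phased = phase_over(old, new, range(3))
print(swapped, phased[:3], phased[-3:])
```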

Applications today are almost totally focused on transaction processing, which means the handling of multi-step business activities that often align with what we used to call “commercial paper”, the orders and receipts and invoices and payments that underpin the flow of goods and services.  Because we’ve tended over time to introduce transaction processing applications at the point of transaction, in real time, many of the steps in transaction processing can tolerate delays in handling with minimal impact.  We call this online transaction processing or OLTP.

In networking, the primary goal is to move packets.  This, like OLTP, is real-time, but a flow of packets is often far less tolerant of delay.  Flow control (provided by TCP in traditional IP networks) can handle some variability in net transport rates, but the protocol itself may be impacted if delays are too long.  This “data plane” process is unlike traditional applications, in short.  However, there are often parallel activities related to user access and resource control that resemble transactions.  In today’s 4G wireless networks we have distinct “signaling” flows and distinct “data” flows.

In transactional or signaling applications, it’s often possible to scale or even replace components without impacting users noticeably.  In data-plane applications, it’s very difficult to do either of these things without at least the risk of significant impact because of the problem of in-flow packets.

Suppose I’m sending a sequence of packets, numbered from “1” to “20” for this example.  I get to Number 5 and I recognize that a feature in the packet flow has failed and has to be replaced, so I spin up another copy, connect it in, and I’m set, right?  Not so fast.

Where exactly did I recognize the problem?  Let’s assume that I had six “feature nodes” labeled “A” through “F” in the direction of my flow, and it was “D” that broke.  Probably Node C saw the problem, and it now routes packets to a new Node G that replaces D.  No worries, right?  Wrong.

First, my failed Node D probably contains some in-flight packets in buffers.  Those were lost, of course, and they can only be recovered if we’ve duplicated the packet flow within the network (paralleled our nodes) so we could switch from one flow to the other, and even then, only if we knew exactly where to stop delivering the old-flow packets and start with the new ones.  That requires some content awareness.

It gets worse.  Suppose that when Node D failed, the configuration A-B-C-G-E-F was very inefficient in terms of route, or perhaps not even possible.  Suppose that we needed to go back and replace C as well, giving us A-B-H-G-E-F as our sequence.  We still have packets en route from B to C, which might be lost or might continue in the flow.  They could arrive out of sequence with respect to the originating flow if the new A-B-H path was faster.  That can mess up applications and protocols too.
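
A back-of-the-envelope simulation of the scenario just described (all the latencies are invented; only the outcome matters) shows both effects at once: the packets buffered in Node D are lost, and packets rerouted over the faster repaired path arrive out of order relative to packets still in flight on the old path.

```python
# Packets 1-20 flow A-B-C-D-E-F; Node D fails mid-stream and the repaired
# path A-B-H-G-E-F is faster.  Simplified: anything not yet at D when it
# failed is assumed to take the repaired path immediately.
REACH_D = 15      # ms from sender to Node D's input
LEAVE_D = 25      # ms from sender until a packet has cleared Node D
OLD_LATENCY = 60  # ms end-to-end via A-B-C-D-E-F
NEW_LATENCY = 35  # ms end-to-end via A-B-H-G-E-F (shorter/faster)
SEND_GAP = 5      # ms between successive packets
FAIL_TIME = 40    # ms: Node D dies; its buffered packets are gone

delivered, lost = [], []
for seq in range(1, 21):
    t = (seq - 1) * SEND_GAP
    if t + LEAVE_D <= FAIL_TIME:        # cleared D before the failure: old path
        delivered.append((t + OLD_LATENCY, seq))
    elif t + REACH_D <= FAIL_TIME:      # buffered inside D when it failed: lost
        lost.append(seq)
    else:                               # rerouted onto the faster repaired path
        delivered.append((t + NEW_LATENCY, seq))

delivered.sort()
print("delivery order:", [seq for _, seq in delivered])   # e.g. 1, 2, 7, 3, 8, 4, ...
print("lost in D's buffer:", lost)                        # e.g. [5, 6]
```

The reordering shows up without any exotic assumptions at all; it falls straight out of the new path being faster than the old one.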

The point is that you can’t treat data-plane features like application components.  Non-disruptive replacement of things, either because you have a new version or because you’re just replacing or scaling, isn’t always possible and is likely never easy.  How much effort, then, can you justify in CI/CD?

Then there’s the question of the mission of the application.  Most business applications are intrinsically multi-user.  You don’t have your own banking application, you share an application with other users.  That sharing means that there’s a significant value in keeping the application up and running and accurate and stable.  However, much of the focus of NFV has been on single-tenant services.  vCPE is like “real” CPE in that it’s not shared with others.  When you have single-tenant services, and every feature and every operations process designed to assure the continuity of those features is charged to one service for one user, the economics can be radically different.  In fact, you may be unable to justify the cost at all.

Finally, there’s the question of the CI/CD process overall.  Software development for a cloud-agile environment is an established process at the application level, but if you define a model of management and deployment and scaling and redeployment that’s not aligned with these established practices, how much of CI/CD can you actually harness, even if you have credible benefits to reap?  NFV continues to develop procedures that are not aligned with the cloud, and so it continues to diverge from the cloud-native tools that are evolving to support the alignment of applications with business goals.  Given that, isn’t NFV going to have to make a choice here—to either cease to diverge and in fact converge with the cloud, or to develop the whole CI/CD ecosystem independent of the cloud?  It seems to me that’s not really a choice at all.

Does New Demand Mean New Positioning or New Strategy?

Evolutions are comfortable and revolutions less so, but it’s pretty obvious that in many ways we’re facing a revolution in networking.  And yes, network technologies are among the things that are changing radically, but the foundation of the revolution is fundamental changes in the role of connectivity.  Those changes are impacting both the consumer market for Internet services and the business market for both Internet and private-network connectivity.  What we need to do now, to prepare for the changes to come, is trace the demand-side pressures on networking.  When we’ve done that, though, we have to deal with the basic question of whether we respond to the market with positioning which is largely a new story, or with strategy which implies new technologies and even products.

The core of our demand-side revolution is the smartphone, which (unlike other mechanisms for connecting users to information) is fully mobile, making experience delivery mobile as well.  Even though smartphones are not replacing everything else (use of personal computers and televisions has been impacted only marginally, for example), the need to connect a device that moves around easily creates the need for agile connectivity, and that need impacts both the way we deliver experiences and the way the experiences themselves are framed.

Linear TV is the easiest example of the challenge.  A smartphone doesn’t have a TV receiver, and the bandwidth needed to broadcast every cable TV channel in a lineup to a smartphone would be prohibitive even if we had the ability to process it there.  Phones need streaming video, and if you have to deliver video in streaming IP-friendly form to phones that are roaming all over the place, it makes sense to think of the same format for delivering video to fixed devices, whether they’re personal computers or TV sets.

Another driver of this change is the fact that even personal computers in homes need an Internet connection, and that connection is most conveniently provided using WiFi, a wireless technology that has short range but (in modern versions in particular) high bandwidth.  WiFi isn’t suitable for delivering linear video, and as people become accustomed to having video on a device instead of a TV, they expect to be able to use their computers to watch things.  If we can extend WiFi to TV sets, which of course the current “smart” TV models can do, we can stream to TV.  If we format all video in streaming form, we could eliminate all those in-home cable jacks.

In the business world, we’ve long recognized that having Ethernet jacks in every office and conference room and making people fight for jacks when they assemble in groups is inefficient.  People take their laptops with them everywhere, even often to a cafeteria, and they now have their phones available at all times.  Some of their uses are business, and some are things like catching up on a TV show they fell asleep on the prior evening.  Phones, of course, are also moving in and out of WiFi range, roaming across WiFi cells, and being generally disorderly.

This disorder is mirrored in the experiences we’re trying to connect with.  With the virtualization of hosting and the advent of the public cloud, we have created a situation where the sources of information and experiences are floating around in an increasingly vast pool of resources.  Getting people connected to the stuff they want, keeping the connection through scaling and redeployment, and maintaining connection security and tenant/application isolation is becoming difficult if one presumes that the physical network (IP VPNs and the Internet) provides both access and connectivity.  It’s easier if we assume that another layer, the “virtual network” or “software-defined network”, provides the latter.

We can associate all of this with some of the developments in the market, including financial results and M&A.

The interest both AT&T and Verizon have in millimeter-wave 5G can be linked to the fact that mobile streaming in IP, plus the potential for sharing infrastructure between wireline and wireless, offers operators a way to improve wireline home and business connectivity while consolidating much of the infrastructure needed with that of wireless mobile networks.  With the millimeter-wave 5G/FTTN hybrid, you drop linear TV delivery from the network requirement.  That means data-formatted live TV instead of two different forms, caching of video in a single form, and elimination of the difference in video handling between live TV and deferred-viewing stored or recorded TV.

Our demand-side changes introduce potential competition between 5G wireless and WiFi too.  WiFi is already a kind of micro-cell technology.  If every worker’s phone has 5G, might it be better to provide a 5G radio (built-in for new devices, or added on as WiFi adapters used to be) to laptops?  We might even see our 5G/FTTN hybrid service offer mobile 5G from the same nodes, which would make infrastructure for wireline and wireless converge further and faster.  Homes connected through the hybrid with 5G could also then benefit from computers and even TVs with 5G connectivity.

Even if we did see this sort of thing develop over time, the near-term impact is to create a need to build seamless connectivity between WiFi and 5G.  That goes back to virtual networking, to the notion that somehow connectivity rises (in an OSI layer sense, not metaphysically!) to a higher level and overlays multiple delivery technologies.  However, it also means (particularly for business) that it’s important to deal with the way capacity is allocated where the pool is finite.  That mechanism, whatever it is, has to be applied to all the wireless options you expect to use, meaning in this case both WiFi and 5G.

This might help explain Juniper’s decision to buy Mist Systems, a managed-WiFi player.  Mist uses AI to manage WiFi capacity, but Juniper’s comments on the acquisition include at least the implication of applying AI to things like “software-defined enterprise” and even “multi-cloud”.  These comments leave me with two alternative Juniper motives to consider.

Motive one is that Juniper is thinking that everything that can be connected to their current positioning story, which includes those two things, has to be connected.  They bought Mist because it had AI in the story, and that’s hot.  Those workers are probably accessing the cloud, so multi-cloud is logical, and software-defining an enterprise is surely logical in today’s software-defined-everything age.  This interpretation isn’t implausible given that Juniper has historically messed up virtually all the M&A they’ve done and that they are indeed bad at positioning.

Motive number two is that Juniper is really looking ahead to the WiFi/5G tension I’ve described, is seeing a broader virtual-network connectivity future emerging from both the access side and the cloud side, and sees a linkage between WiFi, 5G, and Contrail (their virtual-network concept) as a strong strategy, not just pretty positioning.  This is a bit less plausible in my view, only because there were so many things Juniper might have said to make the case for the strategy strongly.  Instead they fell into painting graffiti on the wall facing the nearest tech publication.

If virtual networking is really coming, and if part of it is related to 5G, then that might explain why Ciena turned in a good quarter.  No matter how much wireless you might think you have, every 5G node or WiFi cell is going to connect to fiber somewhere.  Not only that, one way to reduce the cost of network operations is to oversupply the network with capacity and route redundancy.  If you have highly reliable, under-filled pipes at Level 1, you can kiss a lot of complexity at the higher layers goodbye, because the complexity relates to capacity and availability management.

This is why I’ve been saying that Ciena needs a strong virtual-network story.  With such a position, they could be the provider of “infrastructure services” in the future.  Without it, somebody else will provide the virtual-network part, and if that party also does fiber or forms effective alliances with those who do, Ciena is then plumbing.  That doesn’t reduce fiber total addressable market, but it does reduce Ciena’s differentiation.

It’s worth noting here that VMware released its new disconnected-from-vSphere version of NSX, NSX-T, at MWC.  NSX has always been a credible virtual-network play, but its value is limited if you have to buy into the whole VMware hosting and software paradigm to get it.  Unleashing it makes it more competitive with offerings like Nokia’s Nuage, which has been an independent network product since Nuage was acquired.  In fact, it might make it more than just competitive with Nuage, because VMware is including SD-WAN, data center networking, and hosting integration.  Nokia doesn’t have a specific data center position to exploit, and they’re another player who’s done a far less than stellar job at positioning their own assets.

Marketing and positioning are different from product planning, but the two eventually have to end up in the same place.  With Juniper, Ciena, and VMware, we have companies who might have a great strategy but who have adopted a marketing/positioning approach that doesn’t leverage it (or even recognize it).  Is that the best approach?  We only have to look at what happens when you do the opposite to answer that.  Cisco is legendary for erecting exciting new positioning billboards that represent little or no changes in actual technology.  They turned in a pretty good quarter, didn’t they?  For other network vendors, at the very least, that adds up to a mandate for positioning effectively for the visible future, no matter how smart you think your “strategy” is.

VMware’s Doing Well, but Needs to Take Care

VMware turned in a good quarter, no question.  There is a question on why that’s true.  According to a piece in CRN, it’s their multi-cloud strategy, but I’m not convinced.  Multi-cloud is a requirement for many enterprises, but is it enough to pull through a whole virtualization story?  Not according to the enterprises I’ve talked with.  So, what is VMware doing right?

The big thing, if you analyze both the company’s earnings call and the enterprise input I get, is “multi-cloud” but perhaps in a different sense than most would think.  The media uses the term to describe an enterprise commitment to multiple cloud providers at the same time.  Enterprises aren’t broadly committed to that, and in fact most enterprises tell me that they’d prefer to have only a single cloud partner.  They’re building hybrids between their own enterprise-hosted applications/components and a public cloud provider, and different enterprises pick different providers.  Even the same enterprise admits that they might be induced to change providers.  Thus, “multi-cloud” is an attribute of a hybrid cloud strategy that can hybridize with all or at least most of the public cloud providers.

The two cloud-provider market leaders in hybrid cloud are Amazon and Microsoft, and VMware has a hook to both of these.  Enterprises tell me that VMware is also interested in hybridizing with Google’s cloud services and even with IBM’s.  A couple claim to have been offered special integration quotes for both, in fact.  Given the importance of hybrid cloud to enterprises, and given the fact that VMware is certainly a credible player in the data center, it’s not a surprise that as hybrid cloud interest grows, VMware would appear to be a credible partner.

Having what might be more accurately termed “any-cloud” support doesn’t hurt VMware a bit, but I think it’s the Amazon partnership that’s really churning up the opportunity.  Microsoft has all the tools for effective hybridization in their own portfolio and they have a strong premises presence too.  Amazon has no real traction in the data center, and so they need somebody like VMware to serve as their outpost on the premises.  That’s a great position to have, of course, and the one I think is profiting VMware the most.

So does VMware now sweep all before it?  I don’t think so, for a couple of reasons I’ll dig into a bit further.  The first is that any-cloud hybridization is increasingly becoming an extension of container orchestration, specifically Kubernetes.  The second is that competitors to both Amazon and VMware are surely looking at ways to derail the momentum VMware has shown.  Finally, at least a part of VMware’s success is the classic low-apple story.

Let’s take the last of these first.  VMware has a good premises base, but it’s not universal.  There’s no way to tell from the earnings calls, but I’d say about three-quarters of the enterprises who tell me they’re in bed with the VMware/Amazon duo were VMware customers before, and of course an even larger percentage were Amazon customers.  That means that all the stars are aligned for an easier sale and faster uptake on the result of the deal.  As the low apples in the opportunity base of combined customers are plucked, it will get harder to sustain the early level of deals.

The competition angle, my second point, comes not only from Microsoft as the leader of the combined-hybrid story, but also from players like IBM/Red Hat and HPE.  The former is the one who probably keeps VMware execs up at night, because IBM needs to exploit Red Hat’s broad industry footprint in the data center, a footprint broader than VMware has.

A Forbes article on the IBM cloud strategy seems hopeful.  It focuses on containers, which is smart.  It recognizes the role of Red Hat, which is smart, and it seems to admit that IBM can’t make this purely about IBM’s own cloud offerings.  In particular, the decision to make Watson available on multiple cloud providers seems to point the way toward a broad application of Red Hat’s all-cloud model.  To be sure, IBM might decide to try to keep Red Hat hybrid cloud as an IBM-cloud on-ramp, but that would seriously limit the benefit of the deal, and the Street already has doubts.

Red Hat’s OpenShift is not only the best-known production-grade container system out there, it’s also the one that enterprises tell me they find most credible.  VMware doesn’t have the kind of container creds that Red Hat has, and so the increased buzz around containers plays directly to Red Hat’s strengths.  If Red Hat were independent now, I’d say they’d probably have a clear edge, but since we still don’t know that IBM won’t snatch defeat from the deal, I have to wait and see.

Containers offer users three important things.  First, containers have lower overhead than virtual machines, meaning you can stuff more application components into a server.  Second, containers can be used to subdivide cloud IaaS services, letting users obtain virtual machines as their “infrastructure” in the cloud and then manage deployment and scaling by using the VMs as container hosts.  Finally, containers have attracted so much interest that they’re the foundation of a whole ecosystem of enhanced middleware and tools.

That, of course, raises the Kubernetes point.  I’ve said many times in my blogs that Kubernetes’ greatest strength is its ability to support a glue-on ecosystem through its open design.  Workflow management, networking, and just about everything else you’d need in a hybrid cloud is already available in some Kubernetes-related open-source tool.  The fact that there are many such tools and that some assembly is required is slowing down the realization that the Kubernetes ecosystem is turning into a massive distributed competitor.

Kubernetes is advancing on many fronts.  There are initiatives designed to deploy across multiple Kubernetes-managed-clouds and on premises, to deploy on public clouds inside VMs as well as on premises, to create a workflow fabric, to integrate with OpenStack, to build virtual networks that are agile enough to keep pace with scaling and redeployment, and of course to support continuous development and integration.  All of this is open-source and available to VMware, but also to everyone else.

The final point is that “multi-cloud” has become one of those marketing slogans that take on their own life.  Even companies that don’t have any hosting platform business (Juniper comes to mind) are singing the song.  There is always a marketing problem associated with riding a hype wave, which is that once the wave curls it can bury you in foam.  VMware can’t rest on multi-cloud positioning.  Nobody can, but that means finding another positioning story that works, and that’s never easy.

Is MWC Showing its Age?

Another MWC has come and gone, replete with the usual spectacular hype and theater.  At the end of the day, what did we get out of it?  One good way to answer that is to explore what the financial industry thought.  While they’re no less biased in their views, the fact that they’re biased in a different way from that of the media and vendor communities may help us glimpse wireless-industry truth.

The classic net-net comment of the Street was that it is unlikely that 5G use cases will create compelling demand for 5G.  A related comment is that real 5G revenue probably won’t show up for operators until 2020-2021.  Another is that over half of all businesses surveyed about 5G said there was very little it would let them do that 4G didn’t cover already.  It’s pretty clear that the hype on a 5G avalanche was nonsense.  In fact, 5G deployment is much more likely to be an evolutionary process, gradual upgrading and replacing, than a wave of change sweeping the industry.  That has implications for network operators, vendors, and of course the media.

Operators have long budgeted for 5G, some more aggressively than others.  As we moved from the heady hype-driven days of 2016 to the verge of actual deployments last year, it became clear that operator attempts to drive new revenue by leveraging the so-called wave of things like cellular-connected IoT or connected cars were going to fall far short.  In fact, what we’re seeing with the 4G/5G transition reminds me of the IPv4/IPv6 transformation.  Everyone said that the explosion of home devices would demand full IPv6 conversion, and clearly it didn’t.  The Street is seeing the same thing for 5G.

A 5G evolution favors evolutionary strategies.  In particular, it means that 5G mobile infrastructure is likely to be deployed based on the so-called “Non-Stand-Alone” or 5G NSA model, which changes out the 4G radio access network (RAN) for the 5G New Radio (NR) model.  This leaves 5G Core, and things like network slicing and NFV, waiting in the wings.  It also favors applications like the 5G millimeter-wave hybrid with fiber-to-the-node (5G/FTTN, aka fixed wireless access or FWA) that focus on a specific and recognized problem, in this case the cost of delivering over-50Mbps Internet to homes.

The question, in fact, that 5G proponents now have to face is whether 5G NSA will lance the driver boil of wireless evolution, doing most of the things that we actually need a new wireless standard to do.  If so, then 5G core evolution will be limited to some capacity-driven enhancements to backhaul and metro rather than the 5G-Core-with-a-Capital-“C” specification that includes slicing and NFV.  We may never see much of either of those, and surely won’t see them in any quantity before perhaps 2023.

From a vendor perspective, the 5G reality the Street is facing is troubling because many vendors have been pointing to 5G as the driver of future revenue gains, and at least some of that assumption is now baked into their share prices.  Nokia was seen by the Street as having backed off some early positive assumptions on revenue growth because of the late development of 5G.  Cisco said it would be creating a fund of $5 billion for 5G financing, which isn’t what you do if you think the market is a slam dunk.

The Street sees Ericsson as the biggest potential 5G winner, based on the new market reality.  Ericsson’s radio-network products use a software-defined radio that’s agile in the 4G/5G range, which means that operators can deploy the stuff now (and in fact could have done that for several years, because the feature isn’t new) and retain 5G evolution capability.  If 5G NSA is the future, then that pretty much covers operators’ investment needs.  Ericsson is also seeing a faster revenue up-ramp in mobile infrastructure than Nokia.

There are two wild cards in the mobile 5G picture from the perspective of the Street.  One is Huawei, who has been a leader in the 5G space but now faces major problems because of the ties between the company and the Chinese government.  The US may take the step of barring Huawei from buying US semiconductor products, and it is pushing allies to bar Huawei from 5G build-outs.  The more extreme the reaction of the market to these issues, the more likely competitors (particularly Ericsson) would be to benefit from Huawei’s loss of market share.

The Street tells me that they believe that many countries will refuse to bar Huawei because they’re the price leader, and the tight profit-per-bit picture that operators face makes it very difficult to accept any increase in price.  However, they admit that countries like the US and perhaps some close allies may in fact impose and enforce restrictions.  I can’t predict this one; perhaps the progress of the US/China trade talks will provide a data point.

The second wild card is the various open-source initiatives for virtual, open, 5G radio networks.  At the moment, none of these are particularly far along or appear highly credible to network operators, but my own operator contacts suggest that they’d really like to have an open and virtual 5G radio option available.  If 5G is delayed even in NSA form, there may be time for a viable open model to be worked out.

Cisco is, of course, a promoter of an open 5G wireless model.  As a backhaul/metro leader, Cisco benefits from 5G adoption but could be hurt if a 5G NR provider pulled through competitive gear.  Huawei (with whom Cisco has historically been at odds) is a provider who has gear that competes with Cisco, and so is Nokia.  Ericsson has an affiliation with Cisco, but also with other players.  In my own view, Cisco would stand to gain more from the 5G/FTTN hybrid because that could shift linear TV to IP traffic.

On that very 5G/FTTN FWA front, there’s more Street focus on millimeter-wave handsets than on FWA.  In the US, Verizon has indicated it has some customers who get more than 300Mbps from 5G/FTTN, and also suggests that handset/mobile 5G linked to the same spectrum may be a good strategy.  It seems that there are already variations in what a “node” means as in “FTTN”.  Some operators think of nodes as a neighborhood thing, while others are talking about very high towers to extend the range significantly.  That would make the technology more broadly suitable for smartphones.

There is interest from at least some Street analysts in the 5G/FTTN opportunity for businesses.  The technology could offer a backup access connection at a much more reasonable cost, at least for companies who have Carrier Ethernet connections today rather than TDM.  It could also provide much better speeds to branch office locations.  Home broadband, the most important application, is important in the US but perhaps less so in the rest of the world, for now.

That may raise the biggest problem with MWC, which is its focus on “mobile” instead of “wireless”.  The most transformational thing in wireless is very possibly the least mobile thing—that 5G/FTTN hybrid.  A shift to that delivery mechanism, on a large scale, means that streaming video will supplant linear video.  That means that ad insertion in video will become a key to monetization, and that ad personalization will be as critical in video as it is in web pages today.  It also means that the access and metro infrastructure will have to be augmented significantly, and that caching and edge computing will expand radically.

For the mobile operators that attended, and promoted, MWC, all these changes will elevate the notion of “service” from connection bits to video experiences.  As that happens, the infrastructure that matters is elevated too, above the radio networks and slices and virtualization, to caching and personalization and contextualization.  None of that is specific to “mobile”, and so the lesson of MWC this year may be that it’s time to rethink the “M”.