Have the Cable Companies Unlocked the Secret of Function Virtualization?

Could the cable industry be trying to do function virtualization the right way?  Light Reading reports some insight on that issue HERE, and references an Altran open-source project in support of the industry’s efforts.  One thing that strikes me immediately on reviewing some of the detail is that CableLabs is apparently starting off with something more cloud-state-of-the-art, and that might mean the effort will end in something that’s actually useful.

The cable providers have many of the same challenges as the telcos, but they also have an important advantage; the CATV plant is well-suited for broadband delivery, even delivery at fairly high speeds.  The plant is shared, to be sure, but over the years cablecos have been segmenting their networks to limit the number of customers on spans, especially in areas where cable can deliver services to businesses.  In contrast to the telcos, whose DSL plants are largely inadequate for commercially competitive broadband, cable guys are sitting pretty.

Not so in the rest of the infrastructure.  In fact, cable companies have complained about all the same kinds of issues with proprietary hardware, vendor lock-in, rising equipment prices and opex, and declining profit per bit.  Unlike the telcos, whose ability to directly collaborate on technology is limited by regulations (Bellcore, in the US, had to tread lightly and as a result didn’t have as much influence, and was eventually bought by Ericsson, a vendor), cablecos have used their own technology group, CableLabs, to set standards and plan technology evolution.

The CableLabs initiative, in which Altran plays a key role, is an open-source project code-named “Adrenaline”, and it’s an interesting mixture of hybrid fiber/coax (HFC) evolution, edge computing, and function virtualization.  According to Belal Hamzeh, CTO and SVP at CableLabs, “the virtualization infrastructure extends from the regional office all the way down to the modem in the household.”

Adrenaline is explicitly about hosting virtual service functions, but unlike the telco NFV initiative that was initially directed at the cloud and then morphed more into universal CPE, Adrenaline is about distributing everywhere, as the quote from Hamzeh suggests.  CableLabs feels that things like FPGAs and GPUs, which weren’t in the initial NFV stuff and are still not really accommodated, are fundamental to cableco needs for function hosting.  So are containers and Kubernetes, which is why I think this is a more cloud-centric shot at function hosting.

It would be difficult to create a homogeneous hardware/hosting model from the home to the cable regional office, but that doesn’t appear to be the goal.  Instead, Adrenaline seems to be providing something like a combination of a PaaS platform and P4-driver-ish features, which would make at least some functions transportable to different locations because the hardware would present a common set of APIs.
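
To make the idea concrete, here’s a minimal Python sketch of what a common hardware-abstraction API might look like; the class and method names are my own invention for illustration, not anything published for Adrenaline.

# Hypothetical sketch of a "common API" layer; none of these names come
# from Adrenaline itself.
from abc import ABC, abstractmethod

class AccelBackend(ABC):
    """Uniform interface a hosted function codes against, wherever it lands."""
    @abstractmethod
    def process(self, packets: bytes) -> bytes: ...

class GpuBackend(AccelBackend):
    def process(self, packets: bytes) -> bytes:
        # In a real system this would hand the work to a GPU or FPGA kernel.
        return packets

class CpuBackend(AccelBackend):
    def process(self, packets: bytes) -> bytes:
        # Plain software path for nodes (like a cable modem) with no accelerator.
        return packets

def select_backend(node_capabilities: set) -> AccelBackend:
    # The function doesn't care where it runs; the platform picks the driver.
    return GpuBackend() if "gpu" in node_capabilities else CpuBackend()

vf = select_backend({"gpu"})   # same code on an edge server...
vf.process(b"\x00\x01")        # ...or, with {"cpu"}, on a cable modem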

This approach adds significantly to the agility of infrastructure.  Any cloud-hosted function is dynamic and horizontally scalable if it’s written properly.  With Adrenaline, functions can also scale in the “vertical” direction, meaning they can be moved out of deeper hosting points and toward the user, even to the cable modem at the end of the connection.  They could also be moved back, of course, if something required hardware features not available at a given point in the connection path.

CableLabs’ contribution to Adrenaline seems to be focused on the SNAPS-Kubernetes support for using Kubernetes to deploy applications for FPGAs and GPUs, something that (as I’ve already noted) CableLabs deems critical for virtualization of the cable infrastructure and its functions.  It is critical, IMHO, because unlike NFV, Adrenaline is explicitly based on a kind of hierarchy of hosting—district to edge to home and everything in between.  That means it really is an edge architecture as well as a cable-cloud and universal-CPE architecture, and since some of these missions are certain to involve specialized semiconductor support, the ability to orchestrate stuff for specialized chips is critical indeed.

The Kubernetes reference here is also critical, because Adrenaline is designed to be built on containers and orchestrated by the near-universal container orchestrator, Kubernetes.  This combination simplifies the function-hosting vision of CableLabs versus the NFV ISG, which was virtual-machine focused and has only recently been trying to accommodate mainstream cloud, which is all about containers and Kubernetes.  I think that it’s clear that Adrenaline is a software architect’s vision of function hosting, where NFV was a hardware-deployer’s vision.
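
As a hedged illustration of what container-and-Kubernetes function hosting with specialized silicon implies, here’s a minimal sketch using the standard Kubernetes Python client to request a GPU as an extended resource.  The image name and namespace are placeholders, “nvidia.com/gpu” is the conventional device-plugin resource key (FPGA keys vary by vendor plugin), and none of this is taken from SNAPS-Kubernetes itself.

# Sketch only: schedule a hosted function onto a node that advertises a GPU.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vfunc-transcode"),    # placeholder name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="transcode",
            image="example.com/vfunc-transcode:latest",       # placeholder image
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}   # scheduler places this only
            ),                                   # where a GPU is advertised
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)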

White boxes and open-model networking are also central to Adrenaline.  In fact, it’s explicitly expected that white box technology will be used for everything that can accommodate the mission, which eventually is likely to be everything.  It may be that the special requirements of cable network elements, and the fact that cable companies have two (coax and fiber) or three (add cellular) delivery frameworks to contend with, created such interest in custom chips, and from there in chip-centric (or at least chip-accommodating) deployment.

Why does this seem to be working better than the telco version?  I think there are three reasons.

First, Adrenaline has the advantage of seeing where NFV went wrong.  There’s nothing like watching the climber ahead of you fall into a crevasse to encourage you to take a slightly different route.  Cable players were in fact interested in NFV at first, and I think they lost interest when progress seemed too slow.

The second reason is that the cable business is rooted in television and the telco business is rooted in voice calls.  A set-top box is a kind of virtual channel storefront, delivering an experience that’s not unlike cloud portals to applications.  Cable’s TV delivery conditioned its coax infrastructure for broadband (remember @Home?) and that also helped cablecos get a preview of every carrier’s future.

The third reason is that CableLabs is the cableco standards organization, whereas the telco world has numerous standards bodies, each with representation from the carriers’ professional standards teams.  You can elevate a single team to software-centricity and cloud-centricity more easily than you can elevate all those telco standards geeks.

There is every reason to wonder whether the telco community would swallow its pride and adopt a cable industry standard for function hosting.  At the fundamentals level, I think there’s no question that Adrenaline is a better approach, but I also think that it would have to address the same questions of practicality (things like management and onboarding) that NFV foundered on.  At the very least, though, it might inject a new variable in the current battle for the telco cloud.

I’ve recounted initiatives from Google and Microsoft to grab a piece of the telco cloud, to host telco functions and services rather than having the telcos build out infrastructure on their own.  VMware and IBM/Red Hat also have designs on the space, hoping to supply software that will run either in the cloud or on telco infrastructure.  Adrenaline could actually be a good framework to boost either plan, and if it were expanded a bit to accommodate telco cloud services beyond VNFs and 5G Core, it could serve as the framework for a generalized service-provider higher-layer strategy.

The Adrenaline package is in at least lightweight use, but I’m not able to find a full set of documents on it at this point, so I can’t assess just how suitable it is to the full function-virtualization or carrier-cloud missions, or how much more might be required.  Still, this is a very promising start, and if it does deliver a generalized higher-layer PaaS, it could be a revolution in carrier cloud, impacting both software vendors and the cloud providers—and of course, the network operators themselves.

Are We Getting “Fast Lanes” Regardless of Regulation?

We’ve stamped out paid prioritization and settlement on the Internet, right?  Well, the Internet is not our only consumeristic IP network.  In fact, more and more traffic we think of as “Internet” traffic is carried on something else, something I’ll call the “Undernet”.  Is this development good or bad, a threat to us or a major benefit?  Who is doing it, and why?  Let’s take a look, starting with a story from Light Reading.

The Light Reading story is about Akamai expanding its own fiber deployment.  Akamai, of course, is a content delivery network (CDN) provider, and while not everyone realizes it, the majority of video traffic likely rides on CDNs.  In fact, what we call “the Internet” is a kind of composed interface to a complex multi-layered structure, and it only looks like a single network.  CDNs have changed the Internet, and they may be only the tip of the iceberg.

The “original” Internet was a single network, and for quite a while (until the ‘90s, in fact) we really did have an “Internet backbone” and access links to it.  That’s still sort-of-in-place, but CDNs changed the structure, most dramatically when video content became the dominant type of traffic.

The problem with backbone structures is that they have to carry the sum of all the traffic, and they introduce latency and reliability/QoS issues.  If all the stuff we consumed on the Internet was random, there might not be much of an option.  Grab a page here and another there, some images, and a video or a song, and you get traffic.  The reality is that most of what we do on the Internet is what others do too, and that means that it’s often more efficient to store or “cache” material in places close to the users of the content.  Video in particular, of course, because a smallish number of videos make up a lot of the content accessed by users.  That’s what gave rise to CDNs.
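
A toy calculation (assumed numbers, not measured data) shows why that works: if popularity is roughly Zipf-distributed, caching even 1% of the catalog near users absorbs about half of the requests.

# Illustration with invented numbers: Zipf-like popularity, tiny cache, big payoff.
import random

TITLES = 10_000                # assumed catalog size
CACHE = set(range(100))        # cache the 100 most popular titles (1% of catalog)
weights = [1 / (rank + 1) for rank in range(TITLES)]   # Zipf(1) popularity

requests = random.choices(range(TITLES), weights=weights, k=100_000)
hits = sum(1 for title in requests if title in CACHE)
print(f"cache hit ratio: {hits / len(requests):.0%}")  # roughly half the requests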

A CDN provider will provide content storage at places where there’s a lot of content demand, like major metro areas, and connect or “peer” with the access ISPs there.  That reduces the handling needed to give someone their content, and that improves the experience.  There are dozens of CDN providers who offer content caching services, but many companies who deliver video are now doing their own CDN caching too.

And so are search companies like Google.  Google’s network is so extensive that it actually duplicates a lot of the Internet itself, which is why Google can often deliver a stored copy of a website if the site itself is offline.  Google has used technologies like SDN more extensively in their own network than network operators and ISPs have, and Google’s network is among the most efficient in traffic handling.  Google’s network caches content, but it also supports delivery of Google services, and that’s where CDNs blur with “parallel Internets”.

The cloud providers are the coming force in creating these parallel Internets.  Most public cloud access is via the Internet, and cloud providers who have a favored on-ramp to their cloud from the major areas that access it have an asset that could differentiate them from competitors.  It’s very likely that eventually every major public cloud provider will have direct parallel-Internet connectivity to all their major market areas, and of course between all their own data centers.

Is this sort of thing good for us, the users of the Internet and cloud services?  It is in the sense that it significantly improves our Internet experience.  Content, particularly video content, would suffer major QoS issues without having all that direct fiber.  But is it bad for “the Internet” as a conceptual network?  That’s harder to say.

It’s easier to say why we don’t have all this parallel fiber being deployed as a part of the Internet.  The reason is 1) that “the Internet” isn’t a single network but a bunch of interconnected providers, and 2) that there’s no real settlement for Internet traffic, meaning people don’t get paid to carry it.  You have to pay to ride on a CDN, or build one yourself.  Therefore, people build CDNs.  Profit begets investment.

There have been concerns about the incentive to build expressways in parallel with the Internet, making the “old” version something like local roads.  The regulatory requirements on these parallel networks are minimal, and of course they’re even less on “private” parallel networks built by cloud companies, search providers, or even content providers.  The existence of parallel networks that are paid priority traffic paths also works against the view that any form of paid prioritization hurts the little guy.  In fact, that whole argument on paid prioritization could be mooted by these parallel networks.

Then there’s privacy and security.  Do parallel-path Internet alternatives have any rights to your information?  Are Internet protections (whatever they are in a given national administration) protecting you on these networks?  You don’t even know when you’re using one, so self-insurance here is a non-starter.

I’ve been a proponent of settlement among ISPs for literally decades, and a proponent of paid prioritization as well.  It’s unrealistic to assume that if better QoS is valuable, someone won’t find a way to make money providing it.  We have turned a blind eye to these parallel networks, and that’s fine except that we’re talking like we’re regulating the Internet to prevent something, when what we’re really doing is driving the “something” off the Internet proper, where it dodges our regulatory attempts.

What kind of Internet would we have if settlement and paid prioritization made it profitable to create high-quality paths that combine to create the modern model of an “Internet backbone?”  Would this have made content QoS more “populist” as regulatory proponents have argued?  Would it have reduced profit-per-bit pressure on access ISPs?  We’ll almost certainly never know.  It’s likely too late to reverse the decisions of the past, even if we were to change the rules on settlement and prioritization.  After all, we’ve changed them before, then changed them back….  Nobody would trust any position taken at this point.

What’s to come after the cloud providers build their backbones?  Perhaps it’s the turn of the access ISPs, including mobile providers.  We may have dozens of new parallel networks taking root as we speak, and I think that eventually we’ll have to reckon with the multiplication, both in terms of technology and in terms of regulations.

Can Ciena Take Optics to the Next Level?

Ciena beat earnings and revenue estimates for the quarter, which certainly earns the company some Street cred.  It also says a lot about network infrastructure directions, network equipment vendors, and even Ciena’s own future.

What’s impressive about Ciena’s quarter is that it actually did better than usual in earnings, and nearly matched past high quarters in revenue.  This, at a time when everyone was worried about avoiding a catastrophic dip in both revenue and EPS.  Part of the reason is that Ciena is in a good place in the network market.

Networks are about capacity.  Everything that isn’t a bit-supplier is really a bit-optimizer.  Higher layers in the network serve addressing and connecting functions, but most of all they provide the ability to aggregate packet traffic onto high-capacity optical trunks.  When bits are expensive, efforts to conserve them are more important.  When bits are cheap, you can trade raw capacity for higher-level complexity, to the point where you can even reduce opex at least somewhat.

I’ve said in the past that fiber players like Ciena have a natural advantage in today’s market because of the very thing that gives vendors at higher layers nightmares, the ever-declining profit per bit.  The primary reason for profit-per-bit declines is that while consumer appetite for bandwidth is growing, their willingness to pay for their connectivity isn’t.  Look at 5G; users expect it to provide them four to ten times the capacity of 4G, but at the same price.  The only thing that can make that work is a major upgrade in optical capacity, an upgrade that improves economy of scale for bit production.
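
Here’s the back-of-the-envelope arithmetic, with numbers assumed purely for illustration: flat revenue per user against four times the traffic means cost per bit has to fall roughly fourfold just to hold per-user margin where it was.

# Assumed numbers, only to show the profit-per-bit squeeze.
arpu = 50.0                      # $/user/month, assumed flat from 4G to 5G
bits_4g, bits_5g = 10.0, 40.0    # GB/user/month, assumed 4x usage growth
cost_per_gb_4g = 2.0             # assumed delivery cost today

margin_4g = arpu - bits_4g * cost_per_gb_4g              # $30/user
margin_5g_same_cost = arpu - bits_5g * cost_per_gb_4g    # -$30/user: underwater
required_cost_per_gb = (arpu - margin_4g) / bits_5g      # $0.50, i.e. 4x cheaper
print(margin_4g, margin_5g_same_cost, required_cost_per_gb)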

This sounds like the brass-or-even-gold ring, but the company didn’t guide earnings to a breakout, and in the past years they’ve not had a string of sterling quarters either.  You don’t get much color from their earnings call either; the majority of the call was about the way the company executed through the pressures of COVID-19, which was fair since they clearly did a good job there.  There was little product- or technology-related detail provided, no indication of new directions or thoughts about why they turned in a good quarter, other than good execution.  Business as usual is good enough in unusual times.

Usual times will return and are likely returning now, though.  That means that the Street is likely to demand more from Ciena and others, and reward those who can now address the real market that will develop out of our current mess.  There’s a lot that could be positive for Ciena and optical players, and there are also a lot of risks.

On the positive side, the consumer has almost certainly shifted decisively toward streaming on-demand video, which means that there’s likely to be a lot more video traffic in the metro area, creating an incentive for fiber buildouts.  5G will also likely require more metro and backhaul fiber, and with operators looking to cut opex, the promise of trading capacity for costly capacity management is attractive.

On the negative side, we have the obvious optical-network piece of the emerging open-network model.  The Telecom Infra Project has an optical side, and operators tell me that they’re very interested in that angle, though they admit they assign open optics a lower priority than open-device models for the higher layers.  I think that optical vendors will have to face their own open pressures in three or four years, but right now it’s the router guys’ turn.

The positive and negative issues combine in my next point, which is the potential for optical advances to redefine the network model.  Ciena is seeing steady growth in its traditional role, but optics could have a larger play in the network of the future, taking over some of the tasks traditionally assigned to the electrical layers above.  That’s a plus.  The thing is, those higher layers could also try to take over some optical functions.  That’s a minus, and not just because it could build interest in an open-model optical network.

The real problem in a competitive sense is the vendors at those higher layers.  An independent optical layer demands an independent optical mission, the kind we used to have in the glory days of SONET/SDH.  If every optical connection is to feed electrical-layer packet devices, it’s hard not to wonder whether the fiber should just run to those boxes, and cut out the middleman.

5G feeds this risk because nobody wants to stick an optical ADM in a cell site.  The operators are pushing hard for open hardware in the cell locations, and it’s not a major stretch to add an open optical interface to that, and feed the backhaul path directly.  Then, at the other end where a real ADM role could exist, it’s a smaller stretch to presume that you have an open-out to match the open-in you already committed to.

I suggested months ago that optical vendors should consider the question of whether electrical-layer players’ inevitable encroachment on optical turf should result in massive retaliation, in the form of packet electrical grooming in optical devices.  The boundary between two competitive pieces of the network is accessible to those on either side, and so I think optical players like Ciena should make a big play there, which so far isn’t happening.

The same risk occurs with cloud computing.  The future of the cloud is obviously tighter and more efficient linkage with the network, which suggests that the data center switches might end up with direct high-speed optical interfaces, creating a distributed data center fabric.  That’s bad enough, but if we increase our reliance on cloud computing, could the same cloud-network model push out to at least the larger enterprises?

We’re homogenizing traffic, creating a packet world for optics to live in, and the question now is whether that will favor packets or optics.  Every layer in a network is a sink for capex and a generator of opex.  Fewer layers equals lower cost, and it’s hard to see that argument not winning out, eventually.  The big question is whether we’ll continue to build networks with an independent optical layer, or try to save both capex and opex by consolidating.  The associated question is where the consolidation takes place—who defines the new boundary between transport and connectivity.

Open-model electrical devices would be simplified if optics took a greater role.  Ciena had decent free cash flow in the last quarter.  Maybe they should consider spending some cash on an acquisition to cement the role of their devices in the new network to come.

What Happens When We Unlock?

It may be time to take a new look at the impact of coronavirus and the lockdown, since it appears (we all hope and pray) that things are starting to open up.  I’ve been gathering information and running my models, and there are some interesting results.  My data is US-centric, so unless I comment otherwise, this is a US view.

First, in a broad economic sense, the Fed and Congress have injected literally trillions of dollars into the economy to offset the loss of spending created by the lockdown.  That injection would be expected to create moderate inflation when things pick up, but the biggest near-term impact will be to create a significant uptick in economic activity.  It’s already starting slowly, and it will build through the third quarter to be obvious around late September.

The liquidity-induced uptick will not only increase commercial activity (sales), it will fuel the stock market to more than recover past highs.  Stocks tend to do better in inflationary periods than bonds, and inflation also tends to devalue debt, which makes it easier for companies with a high debt load to manage it.  However, interest rates are likely to rise in Q3, so whatever borrowing anyone plans to do should be done quickly.

In a technical sense, the changes focus on the impact of the lockdown itself.  It’s been widely predicted that everything will move to robotics because robots don’t get viruses, but that’s not realistic.  Yes, we can expect to see more industrial automation, but remember that if retail businesses are shut down and tens of millions are unemployed, there’s far less reason to be worried about keeping the production lines rolling.  Robots might be able to produce goods, but selling to robot buyers isn’t a plan I’d want to take to my national sales manager.

What we can expect to see is a focus on improving virtual work, a general term I’m using here rather than “WFH”, because the goal is less to accommodate home workers than to accommodate distributed workplaces.  Many companies are looking to shift their focus from massive headquarters or regional sites with huge staffs, to suburban locations.  Not only would this get workers closer to home, it would get them closer to customers, and in any pandemic, a concentration of humans means a concentration of risk.

It’s hard to say how this will end up looking.  I’ve kicked around this process for decades, and my own focus has been on what I called “jobspace”, which is the sum of the information, communications, and control that a worker must exercise to do their job.  There was, in the original ExperiaSphere project, an entire section that related to jobspace management (with my usual love for whimsy, it was called “SocioPATH”, a term I trademarked).  The point of a virtual workspace is to be able to assemble workers’ jobspaces wherever it’s convenient, and to make the result as productive as (or even more productive than) being physically collected.

There will also likely be a permanent shift in entertainment.  The lockdown forced people to seek at-home recreation, and our centerpiece in home recreation has always been TV/video.  If you think about it, TV has always tried to fit into our not-at-work hours; “prime time” is the time when most people are home from work.  When people were forced to either work from home or just stay home, there was less fixed work activity to fit things around, and so they were forced to find stuff to do/watch at odd times.  This has expanded the value of on-demand content considerably, weaning some away from the “what’s on now?” viewing of the past.

Most people I talked with (who, remember, are tech workers and not the average people in the population) say that TV shows had declined in quality and interest even before the pandemic.  Most say it’s gotten worse, largely because shortened production seasons (and limits on concentrating the casts to shoot) make it harder to create content.  The literally vast library of past popular material makes up for this, and many shows (including popular drama, comedy, and crime shows) are almost timeless, in that the fact that they were shot a decade or two ago doesn’t jar us.

In the long run, where we may see humans replaced more is in entertainment video.  We already have very realistic virtual-reality productions and video games, and I expect that we’ll see an expansion in both areas and an attempt to combine the two into a successor concept for “live TV”.  In that live-TV area, in fact, it’s very possible that only sports and news (mostly local news) will survive in as strong a form as they now have.  Even new content is likely to be produced as an episode sequence released all at once (as Amazon and Hulu often do now) rather than as a weekly show.

What about business survival?  It’s likely that retail storefronts are going to take a major hit, as people and sellers combine to redefine their shopping relationship.  Many people have told me that they buy things online now that they’d never have considered for online purchasing before, mostly food and clothing.  Could we see a “retail experience” more like an opportunity to “see” something to validate the purchase, then have it delivered?  I think so.  If this is true, then what does the place where we “see” goods look like?  A kind of clothing-Home-Depot?

Small businesses are going to have a tough time, no matter what space they’re in.  With limited access to capital and minimal cash reserves, those with higher fixed costs will surely have a problem, and Chapter 11 doesn’t work if there’s no feasible way to emerge.  This will likely make retail real estate a problem area in the late summer, but new businesses will spring up to replace the old, and we can expect many to be franchises, which offer new “small business owners” an easier path to starting something up.  Your old favorite dress shop or camera store may disappear, but another of the same kind of shop may spring up in its place.

The tech companies that will do best are the software companies and the public cloud providers.  The lockdown (and the risk of future ones) is going to create massive shifts in how we work and shop.  Functionality is the domain of software.  The problem of stranded capital assets is going to induce many companies to shift more to the cloud, and to an expense-based realization of functionality.  This doesn’t happen at the point of worker empowerment; the laptop, tablet, and smartphone are all safe.  It will tend to shift spending from enterprise computing to cloud data centers, though.

These changes aren’t going to come about overnight.  There will be a period of adjustment through the end of Q3, when everyone holds their breath that COVID-19 won’t surge again.  If it does, then that will surely accelerate the changes I’ve noted, and even if it doesn’t, this isn’t going to be something we forget.

I remember my parents and grandparents talking about the Great Depression.  We may be telling our descendants stories about the pandemic and lockdown for a long time, and its impact may stamp our lives and theirs, just as that earlier event stamped those of our ancestors.  Just remember that our ancestors survived, and so will we.

Could We See Real Competition for Implementing Operators’ 5G Core?

One key application of “carrier cloud” is 5G Core services, and that’s the most telco-ish of all the carrier cloud applications.  Is it realistic for non-telco vendors to hope to grab that opportunity?

There seem to be a lot of candidates to supply 5G Core these days, including many who don’t provide 5G network components. Oracle is perhaps the newest, according to Light Reading.  This raises two important questions.  First, why would any company believe they could win a 5G Core battle against a full-scope 5G supplier?  Second, given the difficulty in making a business case for any incremental 5G spending, why would operators move ahead with 5G Core and not stay with the 5G-over-IMS/EPC (5G NSA) approach?  Here’s a combination of operator views and my own noodling.

To set the stage, only two of 28 operators I’ve heard from on the topic say they are convinced that there are 5G Core service revenue justifications in place today.  Despite that, the number of operators who now say they’ll be deploying 5G Core has grown significantly since the first of the year.  Of those 28 operators, only 10 felt they’d move to deploy 5G Core in 2020, whereas today 24 say they will.  What’s driving these operators, if they don’t have a business case for 5G Core?

A big part of it is competition.  Of the 22 operators who say they are betting a business case will come along, rather than that one is already in place, 17 of them say the competitive risk of an “incomplete” 5G deployment is unacceptable.  If any of their competitors do a full 5G Core rollout, they’ll suffer in the 5G market, even among consumers who, in many cases, wouldn’t know 5G if they had it.

There is also some optimism that business cases will develop.  All 22 operators who don’t yet have a convincing 5G Core business case believe they will have one within a year, with most saying it will come along in early 2021.  About half (multiple answers were acceptable) think that the pandemic and lockdown will increase 5G Core value, a third think that big enterprises are liable to deploy private 5G or even enhanced WiFi if their operators don’t support network slicing, a quarter still hope for IoT, and four are just confident.

While this is progress, at least for 5G proponents, it still adds up to a toe-in-the-pool commitment and not a full-on dive.  That’s the thing, according to the operators, that’s encouraging everyone from cloud providers to software vendors to try to get a piece of the 5G Core side.  According to the operators, there is serious interest in doing 5G on the cheap in many geographies, and in particular for Tier Two and below.

There are different paths to 5G Core, of course, which is what’s creating the broader interest in jumping in.  Where is this coming from?  One fear, even among operators optimistic about 5G Core, is that offering 5G Core across their entire 5G footprint, out of the box, will create a significant carrier-cloud infrastructure cost when the actual Core features needed for most of 2020 will be minimal.  In other words, “first cost” is very high in proportion to the Core-specific revenue growth rate.  Given that, starting off with cloud-hosted 5G software would make a lot of sense, and operators have been testing the public cloud waters on that matter, which gave rise to the sudden surge of 5G hosting strategies we’ve read about, and some of the M&A.

About half of operators aren’t as worried about first cost as they are about total cost of ownership.  Here, the “Huawei problem” is cited.  Huawei has long been the price leader in most of networking, and it’s particularly so in 5G.  For operators who were considering Huawei as their supplier, the current global angst about Huawei in 5G networks means they have to consider other options, and these other options include a la carte 5G, meaning open RAN and at least competitive software for the core.  This group isn’t as big on carrier cloud hosted by public cloud providers (though some say they might do that for a year or so) as they are on inexpensive (preferably open-source) 5G Core software from a variety of sources.  They fear that other traditional 5G providers, with Huawei off the competitive table, will suddenly discount less.

It’s not only the “Huawei problem” that’s putting pressure on 5G, though.  Operators are increasingly aware of the fact that they’re probably at greater risk of predatory pricing and vendor lock-in for 5G than for any other network technology that’s ever come along.  About a year ago, over 80% of operators said that they believed the 5G market would be “competitive”; today, less than half that number believe it will be without open-model 5G to keep vendors honest.  Then, of course, if you assume that open-model 5G is the price-leading competitor, why just use it to try to beat a discount out of a higher-priced vendor?  Why not adopt it?

That, in my view, is the big news in the 5G space.  Operators are being pushed beyond simple 5G NSA into full 5G Core by a variety of factors, and they’re uneasy about the business case for the move.  They see that mobile networking, like all of networking, is going to be a drain on profit per bit.  When I asked if the operators thought they could charge a premium for 5G service, versus 4G, they said that unless a buyer needed a specific 5G feature like network slicing, they did not expect 5G to cost more.  Given that the consumer value proposition for 5G is more data in less time, this seems guaranteed to erode profit per bit even further.  To prevent that, cost per bit has to dip, and it’s not realistic to assume that doing the same old business with the same old vendors is going to accomplish that.  Thus, competition for 5G Core, and interest in open-model networking.

The question is whether “interest” translates into “opportunity”.  There seems to be a near-conviction among Tier Two operators (and below) that they will be implementing 5G Core with someone other than one of the big mobile network vendors (Huawei and Ericsson).  Tier One operators are less committed but no less interested.  Only one Tier One I talked with foreclosed the idea of going with a non-traditional player for 5G Core, and surprisingly (to me) there was actually more interest in having a cloud-hosted option than having a software solution they’d have to host themselves.

One wild card in this is that almost all the larger operators say that cloud hosting, meaning public-cloud 5G Core, is a “short-term solution”, which would mean they’d prefer to get 5G Core from a software provider who could either host the stuff in a public cloud or provide it to the operators in carrier-cloud-data-center form.  I think this is what both VMware and HPE see as their specific opportunity in the 5G space, and IBM likely has the same vision (but their own cloud may not be pervasive enough to make it a compelling hosting option).

Whatever the goal of the players and operators, there’s still a lot of assembly required in this approach, and that worries operators.  EU telcos in particular resist the idea of using a collection of elements to build 5G Core, believing any issues with the process in a market as competitive as the EU could be fatal.  IMHO, any player who wants to be a 5G Core provider is going to need to kiss a lot of carrier babies in their marketing initiatives and collateral, or they’re going to face an uncomfortably long and rocky sales cycle.

More Signs that Microsoft is Looking at the Telco Market?

We just can’t shake stories that relate to public cloud providers and their possible interest in hosting network operator applications that would otherwise justify “carrier cloud”. It would be easy to read Microsoft’s rumored interest in Jio Platforms as a further indication that it covets the carrier cloud space.  Jio is a giant in digital services in India, majority-owned by Reliance Industries Limited and with a lot of recent prominent investors, including Facebook.  It has a lot of the stuff that would make up “higher-layer” services, they own a big mobile operator, and they already have an Azure relationship with Microsoft.  The question, though, is whether Microsoft is taking a stake in Jio for platform reasons, or just to buy into India’s digital services space.  The answer may lie in what we think “higher-layer” services should be.

Jio is a kind of OTT success story.  The parent company (Reliance Industries Ltd.) wanted to own India’s online experience, but realized that there wasn’t enough connectivity to create the experience in the first place.  While Google in the US toyed with becoming a network operator (considering bidding on spectrum and actually moving ahead with limited fiber broadband) they never really committed.  Jio’s parent did, obtaining wireless spectrum and becoming a successful mobile operator, but also promoting those higher-layer digital services.  Some might see them as an example that established operators could follow, and that “some” might include Microsoft, already committed to hosting some carrier-cloud functions.

Pretty much everything we do on IP and the Internet is a “higher-layer” service.  Web hosting and viewing, video and music content, digital storefronts, chat in any form, and even email are part of what most users think of as the Internet, and thus “over” IP.  These are all services we have and use now, and they happen to be the higher-layer stuff that Jio offers too.  However, everyone knows that network operators (the Internet service providers or ISPs) aren’t exactly giant players in these service spaces.  For decades, operators have grouched over having been “disintermediated” by over-the-top (OTT) players, in fact.  Could Microsoft be seeing Jio-like stuff as a conduit for operators to get into the OTT services?

A possible data point supporting the notion that Jio might be seen as a source of “OTT application platform elements” is Apple’s rumored interest in the cloud.  Apple, as I noted in the blog referenced here, seems unlikely to be thinking of getting into the platform business.  They’d be more comfortable with a service business anchored in some new device or device set.  If the stories about Apple and the cloud are true, Microsoft might be wondering if Apple plans on helicoptering to the summit of Everest while Microsoft slogs up the snowy slopes.  You can’t get disintermediated at the top of the food chain, meaning the retail service position.

The flip-side question goes back to carrier cloud and the six justifying drivers that operators have accepted for almost a decade.  If we forget “higher-layer” and look instead at what operators thought/think could justify major data center deployments, the items on the list are less services direct to customers than they are “services” as feature elements or foundation pieces of something else.  Things like IoT and 5G are on that list, for example.  If operators are thinking “platforms”, then they believe that they have to build a retail service story from the bottom.  A cloud provider like Microsoft, going after OTT applications, might actually be taking a risk by looking like they might be trying to leapfrog their prospective customers.

How valuable might Jio be as a platform?  It’s difficult to say how their digital services have been built, and whether they are based on reusable components that could become carrier cloud ground-floor technology of the kind operators seem to think they want.  I don’t have a lot of contact with the internals of Jio, but industry-wide, it’s been rare for people to develop retail OTT services based on a grand architecture to promote reuse.  Competition in the space demands quick action.

There’s also a problem with what a higher-layer service platform would even look like.  Jorge Cardoso, years ago, working in an EU university, did a prototype application framework built on a combination of TOSCA and USDL.  This framework was tied to examples of retail higher-layer services.  Initiatives in operator standards groups have tended to focus on how to deploy higher-level services as a set of cooperative components, but not on defining the services themselves.  NFV described how to do virtual functions, but not what to do with them, and TMF work has a similar deployment-versus-functionality bias.

The key point, though, is that there’s a hole-in-the-middle here.  The Cardoso model and the operator model (or the cloud container/orchestration model) leave a gap, which is the way that retail services relate to platform features.  Service middleware, in other words.

We have “service middleware” in public cloud computing.  All the dozens of web services offered by the cloud providers are tools from which cloud applications (or services) can be composed.  If you look at the set offered by Amazon, Google, and Microsoft, you see that some are generic (database, compute) and some are specialized (content, IoT).  Anyone who wanted to build a generic platform for credible higher-layer services would need to define middleware, and to do that would also need to frame an architecture that ties the pieces together.

“Cloud-native” is not that architecture, nor is it a useful step toward functionally derived middleware.  What good would Amazon’s or the other cloud providers’ web services have been if they didn’t map to functionality cloud buyers would build into their applications?  You have to have functionality in mind, and we’re still missing that piece.

Interestingly, the same TOSCA that Jorge Cardoso used could actually help create functional models.  I’ve been talking with a long-time “virtual friend” and TOSCA expert, and I’m working through the notion of doing a TOSCA guide for service planners.  In the process, I realized that when you attempt to model a service, you’re forced to consider functional blocks.  Model several services, and you start to see the potential for creating common/shared elements.

I think this could be another avenue that operators (at least those serious about “higher-layer” services) could pursue.  Service models are, as noted earlier in this blog, a bit “in-the-middle” with respect to the work on higher-layer services.  You have to see the top layer to do the model, and you have to understand the lower (hosting and connection) layers to realize those goals.  The models let you play in the abstract rather than getting bogged down in details.

Because TOSCA supports the concept of “inheritance” common to programming languages, TOSCA could also encourage network operators to think about defining a hierarchy for device and service types.  For example, “Edge_Device” might be a superclass, subdivided into things like “Firewall”, “Access_Router”, and so forth.  Things that implement a given device type would be expected to fit into a TOSCA-defined deployment without tweaking, which would simplify the onboarding issue.  Subclasses would be expected to implement the properties of the superclass hierarchy above, which simplifies model-building and management.
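
As an illustration only (expressed as Python classes rather than actual TOSCA node-type syntax, with property names I’ve made up), the inheritance idea might look like this:

class EdgeDevice:
    """Superclass: the contract every edge element must expose."""
    required_properties = {"management_ip", "site_id", "uplink_port"}

class Firewall(EdgeDevice):
    # A subclass inherits the superclass contract and adds its own properties.
    required_properties = EdgeDevice.required_properties | {"rule_set"}

class AccessRouter(EdgeDevice):
    required_properties = EdgeDevice.required_properties | {"routing_protocol"}

def validates(candidate_properties: set, device_type: type) -> bool:
    # Onboarding check: anything claiming to be a "Firewall" must satisfy the
    # whole inherited contract, with no per-vendor tweaking of the model.
    return device_type.required_properties <= candidate_properties

print(validates({"management_ip", "site_id", "uplink_port", "rule_set"}, Firewall))  # True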

But to get back to Microsoft, I think the model-and-middleware approach suggests a pathway that Microsoft could follow to leverage Jio to play a role in platform-building.  A role like that would have greater value to Microsoft if they believe that they have a shot at hosting a bunch of operator carrier-cloud missions.  But whether Jio could contribute significantly to a middleware strategy is difficult to say.  We’ll have to await further developments.

What’s Separating Winning Tech from Losing Tech in the Pandemic?

Why did Dell and VMware beat Street estimates, while HPE and IBM fell short?  The companies themselves are clearly interested in answering that question.  Investors obviously have an interest in this question, but anyone in tech should be interested too, because the answer may be a signal of future technology direction.

It’s not difficult to see the major difference between Dell and the HPE/IBM pair.  Dell sells servers and data center equipment as well as personal computing products, while both HPE and IBM have focused primarily on enterprise data centers.  It’s not difficult to see the difference between the VMware/IBM pair and Dell/HPE; the former two are software/service-centric and the latter two hardware-centric.  But these two pairings show that the “major differences” in products and strategy don’t account for the results.  Or do they?  Maybe we just have to think of two different factors in juxtaposition.

One factor, of course, is the difference between the software/service and the hardware/server business model.  Hardware is generally a capital expense, meaning that it has to be depreciated and represents a kind of strategic inertia.  You buy it and you’re stuck for the depreciation period unless you want to take a hit.  Software is often either open-source or subscription, and support in either case is an expense.  Because the major determinant in the last quarter’s results is the onset of the pandemic and lockdown, it’s not surprising that a bend toward software/services might come about.

Another factor is the difference between the data center and the desktop.  I know that many people thought that work-from-home (WFH) would be a boon to virtual desktops as users sought to support workers at home without having everyone get a PC.  That apparently wasn’t the case; sales of PCs were strong, and servers were generally much weaker.

But why did IBM, a software/service business, do worse than VMware, with a similar bend?  It’s almost certain that one reason is total addressable market.  IBM’s footprint tends to be the giants, the largest enterprises.  As I’ve noted in blogs before, the problem with a “Fortune 500” market is that there are only 500 of them.  VMware has a broader market base, something IBM lost when it exited the server business, and which Red Hat’s acquisition is only now helping them regain.

Still, IBM/Red Hat and HPE also sell software, and with Red Hat included for IBM, both have a similar potential market scope.  Why didn’t they perform as well as VMware?  For IBM, the reason is the typical after-acquisition organizational turmoil.  You can’t help feeling threatened when your company gets bought, or makes a buy that suggests your own role might be under threat.  Sales strategies have to change, which always makes sales organizations uncomfortable and always demands new marketing programs to develop leads.  What might Red Hat have done on its own?  We’ll never know.

For HPE, the issues are more complicated.  HPE is primarily a server company who also sells platform software.  Enterprises I talked with over the years have said that while they often consider buying a server-and-platform-software package from the same vendor, they would be less likely to buy software from a server vendor without getting the server piece.  That means that from a marketing perspective, HPE has to fight hard to stay even if capex pressure reduces interest in servers, and that marketing would have to be very effective and very software-targeted.

HPE hasn’t done that, and part of the reason is the usual problem of competing interests.  If you sell software as a server vendor, you may be considered an enemy by other software vendors (like Red Hat and VMware).  They’ll try to keep you out of data center deals to protect themselves, so you tend to take a lower software profile to protect your server business.  It’s hard to gain traction in software and services by hiding your offerings under a bushel.  To make matters worse, IMHO, HPE has never been a strong marketing player in software, or software-centric markets.

We can say that the combination of an expense-versus-capex focus and a PC-versus-data-center focus worked out, but that may sell the winners short, particularly VMware.  Could VMware be setting up for a drive to shift the focus of “hosting” away from the hardware and onto the platform?  Servers, then, could be something like raised flooring in data centers; you need them but they’re really just part of the landscape.

That would be a pretty smart play for VMware, and of course their vSphere stuff gives them significant experience in the platform-centric data center space.  Over the last year, they seem to have jumped on the whole container issue aggressively too, and they’ve also launched initiatives aimed at providing the platform for the carrier cloud.  All of this gives them good collateral to justify engagement, and virtualization can be played either as a cost-savings approach (great during a lockdown) or as a way to prep for the future (always popular).  VMware acknowledged the evolution of a new vSphere vision, and a new container vision (built around their Tanzu framework) on the earnings call, which proves that management knows the potential value of the software-platform-centric vision of hosting.

I think this all helped them in the quarter, but it also illustrates a potential vulnerability.  In fact, VMware has two vulnerable points that I see.  The last quarter shows that they’re in an early position of strength, which is a great place to be when addressing vulnerabilities, but they can’t afford to rest on their laurels.

First, they’re flailing around a bit with respect to containers and Kubernetes, the two things that you absolutely have to get right in enterprise computing these days.  They were a bit slow in centering on Kubernetes, and their recent acquisition of Pivotal (which was at one point spun out of EMC/VMware) has required they tidy up the Pivotal stuff to create a unified positioning.  Pivotal was the prime supporter of the Cloud Foundry Foundation, which was the source of an early platform offering for cloud-native.  Hopefully the Pivotal deal was to create a true cloud-native ecosystem, and hopefully that ecosystem will be revealed shortly.  If not, then VMware has handed IBM/Red Hat a pass to exploit the cloud-native space, which they certainly seem to be working to do.

The second problem relates to carrier cloud.  One of the biggest mistakes HPE made was to presume that the NFV ISG standards process would build a big market for servers, when in fact the direction of activity was promoting white-box universal CPE.  Even before HPE got started in earnest to promote its carrier sales efforts, operators were telling me that capex savings associated with function hosting could not be expected to justify NFV deployment or carrier cloud.  NFV, then, is not a helpful driver for carrier sales of data center servers or platform software.

VMware isn’t as dogmatic about NFV as HPE was, but they’re still more NFV-centric in their carrier positioning than I think is wise.  What operators need isn’t NFV platform software, it’s cloud-native platform software that could perform the mission NFV was defined to perform.  They could blow some positioning kisses at NFV (as Microsoft has, for example, in its own marketing to the telcos) and then close with a strong carrier-cloud-cloud-native offering.  Since they’re already committed to cloud-native (via their Pivotal move), why not go all-in?

Tanzu, VMware’s umbrella architecture, seems to be where everything is going, enterprise-wise.  The acquisition of Octarine and (re)acquisition of Pivotal could enhance that framework considerably, if played optimally.  That framework, as a kind of “Tanzu-for-operators” could then be a carrier cloud platform.  If that positioning was done quickly and effectively, then operators who wanted to use public-cloud hosting for early carrier-cloud applications could simply buy IaaS services and host VMware’s solution in the cloud, with the promise of moving it to the data center when scale builds.  That would be a good story for VMware, and something operators would love, but it can’t be told if operators commit to an Amazon, Google, or Microsoft carrier-cloud platform that can’t be migrated easily to the data center.

On their earnings call, CEO Pat Gelsinger said “We continue to see extraordinary interest in a software-driven approach to the 5G network and VMware’s telco cloud. Carriers deploying our telco cloud solution have seen substantial improvements [in] CapEx and OpEx as well as agility in deploying new services. We now see the opportunity to extend these benefits to the radio access network using virtualized RAN deployed on telco cloud. We’re pleased to be expanding our partnership with Intel to address carrier needs in this area. We’ll provide additional details regarding our partnership at a future date.”  All this is great, as long as it doesn’t fall back into a “we host 5G and telco cloud VNFs” story.  There are too many stronger players already doing that.

Any of the top telco software vendors could make a decisive move in the carrier cloud space.  Given that most of them have been pushing NFV, and given the fact that these vendors already have strong sales to the telcos, mostly in the OSS/BSS area, they’re going to get a hearing when they make a sales call.  To counter that, VMware needs to focus not on what others are saying, but on what they seem reluctant to say, which is that NFV isn’t the right answer and most operators know that.  Sure, they want to “fix” it, but that’s where blowing an NFV kiss comes in.  A fix is a nice front for a new approach.

If Tanzu is the key to success for VMware, then perhaps Kabanero is the key for IBM, and they could be exploiting it, or at least the symbiosis between cloud and carrier cloud.  Dell’s doing OK, obviously (they can ride VMware’s software coat-tails), so that leaves HPE.  They need their own branded and promoted super-approach to containers and carrier cloud, and they need it quickly.  They don’t want to be the only player of the four we’ve been talking about that misses another quarter.

What’s Cisco Seeing with ThousandEyes?

Cisco may have declared itself with regard to its growth and cloud strategy.  Late Thursday, they announced they’re going to acquire ThousandEyes, a company with a big name in “experience monitoring”, meaning quality of experience (QoE).  Monitoring, particularly monitoring related to QoE, has good potential to serve as the centerpiece of an “ecosystem”, something that could combine with Cisco’s previous AppDynamics deal to help differentiate Cisco as it faces increased open-model competition.  It also raises an interesting point, which is that the Internet and public clouds are a growing part of almost every enterprise’s digital storefront, but something out of their direct control.

The Internet and cloud computing have complicated a QoE and monitoring problem that was already complicated enough to be rated among the top four problems in CIO surveys I’ve done over the last five years.  QoE measurement in a company network is a matter of network QoS, application performance, server load, and other factors.  It’s been common to insert monitoring agents at various places to measure QoE (user response time and variability) and to pin down any factors that are creating QoE issues.  If we assume that we have Internet access via web servers or mobile apps, cloud front-end handling with various policies for scaling under load or positioning processing in the same geography as users, and then we hand off to the traditional back-end piece of the application…well, you can see the problem.

Maybe you can’t, because the biggest challenge here is that you can’t stick agents in someone else’s infrastructure.  What ThousandEyes has tried (with good success, by most accounts) to do is to monitor things like the Internet and cloud infrastructure at critical places and create a kind of map of QoE-related parameters, which can then be combined with whatever specific monitoring the user can deploy.  This is the literal “thousand eyes”.  They also offer software-hosted agents that infrastructure owners (like users) can install, a kind of “private eye” extension.  The real and virtual worlds combine in today’s applications, and together they influence QoE.  That has to be reflected in any QoE monitoring.
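
To illustrate the principle only (this is generic Python, not the ThousandEyes API; the target URLs and the threshold are placeholders), a crude multi-vantage-point QoE probe might look like this:

# Much-simplified sketch of the "many eyes" idea: measure fetch time per path,
# then flag paths that look degraded.
import statistics, time
import urllib.request

VANTAGE_TARGETS = ["https://example.com/", "https://example.org/"]  # placeholders

def probe(url: str, samples: int = 5) -> float:
    """Median fetch time from this vantage point, in seconds."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        urllib.request.urlopen(url, timeout=5).read(1024)
        timings.append(time.monotonic() - start)
    return statistics.median(timings)

# Combine per-path measurements into a crude QoE map, flagging slow paths.
qoe_map = {url: probe(url) for url in VANTAGE_TARGETS}
for url, latency in qoe_map.items():
    print(url, f"{latency:.3f}s", "DEGRADED" if latency > 0.5 else "ok")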

Cisco, of course, has the ability to gather a boatload of good QoE-related data from its own network products, and the network, of course, is the best place to get a general idea of what’s going on, QoE-wise, because it connects everything.  You might not be able to measure specific transaction response times easily without deep inspection and correlation, but you could tell whether arriving packets at a server were queued or how long it seemed to take for transit.  But AppDynamics can do a lot more, even to the point of looking inside the code (where it’s accessible) to see where specific bottlenecks are.

OK, this lets you do a really great job of monitoring QoE in all its guises and disguises, but how does that help Cisco avoid commoditization?  That’s the (rumored) billion-dollar question.  Would ThousandEyes’ revenue, at Cisco’s multiplier, be accretive to share price?  Probably not by itself.  Would Cisco’s salesforce be able to sell enough extra “eyes” to make the transaction a positive?  Possibly, even perhaps more-than-possible.  Could other factors make it a killer deal?  That’s also possible, because Cisco can add a lot of AppDynamics and network data to the mix.

Knowing something has gone wrong with QoE, knowing roughly where it went wrong, and being able to call the responsible team to work on it is valuable in itself.  “It’s raining, so bring in the laundry.”  Know that something looks like it will go wrong, and you have a more effective response.  “Got the south 40 plowed yet; looks like rain.”  But how about being able to do the right thing in either situation, without intervention?  Rain takes in the laundry, threat of rain schedules plowing.  Efficiency.

The question is how far along that value chain Cisco might be comfortable going.  Cisco’s Application-Centric Infrastructure model is more than just recognizing (or even anticipating) conditions, it’s about policy-based responses.  How would “Application-Centric-Universe” sound in marketing pitches?  Your infrastructure, your cloud, your Internet, your user’s devices…all of this could be touched by Cisco’s mighty hand.  Doesn’t just the thought of that pitch make you want to run out and sell?
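
A toy sketch of that monitor-predict-act chain, in the spirit of the rain-and-laundry analogy above, might look like the following; the thresholds and actions are invented for illustration.

def predict_congestion(recent_latencies_ms: list) -> bool:
    # Naive "prediction": a rising trend across the last few samples is a warning.
    return len(recent_latencies_ms) >= 3 and \
        recent_latencies_ms[-1] > recent_latencies_ms[-2] > recent_latencies_ms[-3]

def apply_policy(current_ms: float, recent_ms: list) -> str:
    if current_ms > 200:               # it's already "raining"
        return "reroute traffic now"
    if predict_congestion(recent_ms):  # it "looks like rain"
        return "pre-provision extra capacity"
    return "no action"

print(apply_policy(120, [90.0, 110.0, 120.0]))  # -> "pre-provision extra capacity"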

Networks, data centers, and applications are all piece-parts.  Sure, you can try to sell someone a piece of their IT pie, but if someone else is offering the pie, ice cream, and a seat for you to enjoy, it’s going to resonate.  Business buyers want business solutions, not the building-blocks needed to assemble them.  QoE monitoring could give Cisco the core of an experience-guarantee ecosystem, and that’s what CIOs are likely to be very interested in hearing.

The theory that the ThousandEyes deal is “to make a deeper push into software”, as CNBC says, seems a bit simplistic at best, and downright lame at worst.  You can only make money on an acquisition by doing more business after than before, relative to what you paid.  Sticking a “we’re into software” sticker on Cisco’s annual report isn’t going to cut it, and I think Cisco knows that.

Does it know how far it needs to go, though?  I raised that question earlier, and it’s a good point to close with.  Cisco’s ThousandEyes excursion deep into experience management builds value steadily as you move from monitoring current conditions, to anticipating issues, and on to dealing with them.  So do the cost and time needed to move.  Starting small is always a possibility, but it gives more agile opponents an opportunity to step ahead, and the company with the first big move gets to define the competition.  Being bold is also a possibility, but that puts Cisco where it doesn’t want to be—responsible for developing a market.

I’m betting on the middle ground—QoE monitoring plus AI analysis to predict issues before they happen.  I think Cisco would be comfortable with that, and that the market would reward the initiative.

https://www.appdynamics.com/product

https://www.thousandeyes.com/

https://www.cnbc.com/2020/05/28/cisco-acquires-thousandeyes-to-make-deeper-push-into-software.html

https://blogs.cisco.com/news/cisco-corporate-news-announcement-may-2020