Is Cisco’s Software-Centric Strategy Really a Strategy?

Cisco’s Investor Day was all about their growing position in the software space. Software grew from 20% of revenues in 2017 to 30% in 2021, which is certainly a validation of their claim of software growth. What’s far less clear is whether Cisco’s avowed shift to software is offensive or defensive, and whether it can sustain bottom-line revenue growth for Cisco in the longer term.

Cisco’s product revenues, which include software, were $36.014 billion for the year ending July 31, 2021, versus $39.005 billion for the year ending July 31, 2019, a decline of just short of three billion dollars. If software revenues are indeed growing, then hardware sales declined by more than that between 2019 and 2021. The key point, I think, is that Cisco is expecting to have its hardware “persist” longer in accounts, and will rely on software subscriptions for the devices for annual revenues.

This theme is attractive to Wall Street, who believe that hardware is under pressure both from budgets and competition, and who apparently think that the sky’s the limit with respect to software. If that’s true, then Cisco is rushing down the Yellow Brick Road, but is it true? There are three challenges.

Challenge One is feature exhaustion. Most of Cisco’s “software” is really what some (including me) have cynically called “underware”, the software designed to create functionality for a hardware platform like a switch or router, or to manage networks of those platforms. In a real sense, it’s like a form of operating system. Even before Cisco separated its hardware and software, there were plenty of users of its IOS network operating system family who didn’t upgrade. The thing that keeps users upgrading and paying for subscriptions is new capabilities. It’s not easy to add those new things year after year, and maintain a sense of user value.

Many of my readers may not remember Novell, who stepped out in the 1980s as the darling of workgroup networking. NetWare was the network operating system of choice for enterprises, the source of print sharing and file sharing, the start of the notion of resource catalogs, and a lot more. The problem was that Novell made money by selling and upgrading software, and over time they used up more and more of the stuff users valued. Eventually, there wasn’t much left to add.

That leads us to the second challenge, which is exploding competition. Novell was hit hard when Microsoft added basic resource sharing to Windows, which is one example of exploding competition. Cisco can expect other network equipment vendors to counter its own “disaggregation” of software and hardware, but there’s a more serious competitive risk. In order to create value-add to justify continued subscription revenue, Cisco will have to expand beyond basic routing/switching. That leads it upward into hosting, which of course they’ve offered via their UCS servers.

Well, maybe. At Cisco’s recent investor conference, many were surprised to see that Cisco made almost no mention of UCS servers. It’s hard to see how Cisco could really be aiming to be a serious software player if the only “software” they offer is that “underware”. Competition and the need to create update pressure for customers would drive Cisco upward, into areas where the software is more generally linked to computing tasks. How could that be done without servers to run it on? Why, if you had servers in your inventory already, wouldn’t you prepare a place for yourself in the general or at least edge-focused hosting market, by pushing the fact that you’re a server vendor already?

The edge-focused piece is of particular importance because Cisco, like all network vendors, would probably find the easiest path out of pure packet-pushing to be edge computing, which is evolving from 5G hosting missions that are (as I’ve noted) already budgeted. Not only that, server and software vendors like Dell, HPE, IBM/Red Hat, and VMware are all going after the 5G hosting and telecom opportunities, and their efforts threaten network equipment.

That threat is multiplied by the possibility that the same software would be hosted in both servers and white boxes. If major software players offer that sort of dualistic software, then a Cisco retreat from hosting might well result in software players creating a growing customer interest in white-box switches and routers. That could cut into Cisco’s device sales and make them even more dependent on a strong, expanding, software strategy.

The final challenge is internal push-back. Cisco has tried the software game for ages, and it’s never measured up to their hopes. I think that a part of that is due to resistance from the traditional hardware types that have dominated Cisco engineering for decades. Today, as I’ve already noted, Cisco is really not a software company at all, but a hardware company who separated out their previously bundled software. That move didn’t create the same back-pressure that earlier and broader software initiatives created, but Cisco can’t stay on that limited software track and keep revenue flowing.

The further Cisco’s software aspirations diverge from “underware”, the harder it will be for Cisco to rely on the skills it has in house, and the more new people will be needed. As that influx shifts the balance of power, it only magnifies the resistance of the employees who have been with Cisco the longest, and who have likely worked up to senior positions. Will those people embrace the new software dominance? Doubtful.

The net of all of this, for Cisco, is that making software claims is easier than making software the center of a future revenue universe. The most problematic thing about their investor-meeting story, in fact, is the lack of emphasis on UCS. That’s Cisco’s biggest, and most unique, asset among the non-mobile network-equipment vendors, and it would seem logical that it should have been a focus of the discussions, which it decidedly was not. There are three possible reasons why it wasn’t.

First, Cisco might have no intention of broadening its software position beyond “underware”. If that’s the case, then their only justification for their story to investors would be to buy time while they try to figure out where they go next. That’s not a good thing, obviously.

Second, Cisco might actually believe that they only need “underware” to succeed in software. If that’s true, then I think that instead of looking at a rebirth, as Cisco and the Street have suggested, we might be looking at the start of a major slip from Cisco dominance. Think Novell, and that’s a very bad thing.

Third, Cisco might be preparing a true software blitz that will indeed involve UCS, and simply isn’t ready to expose its plans. That would avoid having competitors in the server/software space jump in to build barriers before Cisco really has anything to compete with. That’s semi-OK as a reason for seemingly ignoring UCS, provided that they actually have that “true software blitz” in the wings, and quickly.

A software strategy for Cisco obviously has to meet Cisco’s own revenue/profit goals, but to do that it has to meet the goals of the buyers and deliver the ROI they’ll demand. Right now, Cisco has a software transition strategy that’s not transitioning to a state that will clearly deliver on that, and they need to fix that quickly or they’ll not only fail to deliver on their promises in 2022, they’ll put their whole software-centric vision at long-term risk.

Juniper Dips Another Toe into 5G Metro (But Not the Whole Foot)

Juniper’s decision to harmonize its implementation of the Open RAN RIC (RAN Intelligent Controller) with Intel’s FlexRAN program raises again a question I’ve asked in prior blogs, which is whether a network vendor who isn’t a mobile-network incumbent can play in 5G, and by extension whether they could play at the edge. I believe that network success (for any sort of vendor) is impossible without a 5G strategy, because 5G is what has funding to change the network in 2022 and 2023. Does the deal with Intel move the ball for Juniper’s 5G strategy?

5G is a transitional technology in the sense that it moves networks from being strictly appliance-based to being a combination of devices and hosted features. The impact of this transformation is likely to be felt primarily in the metro portion of the network, deep enough to justify hosting resources but close enough to 5G RAN access to be impacted by O-RAN and 5G Core implementations. Because metro hosting is also where edge computing likely ends up, the ability of 5G to pull through an edge hosting model that could be generalized may be critical to exploiting the edge.

5G tends to call out NFV hosting, but many 5G RAN and open-Core implementations aren’t NFV-specific. That means that there could be a common model adopted for the orchestration and lifecycle management of 5G and edge applications, if such a model could be defined. However, the issue of orchestration and lifecycle management isn’t the only issue, or even the largest issue, IMHO. That honor goes to the relationship between networking and hosting in the metro zone, and it’s in that area that Juniper, as a network vendor, has the biggest stake.

Operators’ fall technology planning cycle gets underway mid-September, and 5G is the most secure budget item on the 2022 list, and the most significant technology topic of this fall’s cycle. Network vendors without a seat at the 5G table face a significant risk of disintermediation. Ciena, for example, announced last week that it was acquiring the Vyatta switch/router software assets that AT&T had picked up long ago, noting that “The acquisition reflects continued investment in our Routing and Switching roadmap to address the growing market opportunity with Metro and Edge use cases, including 5G networks and cloud environments.”

One big problem for network vendors in becoming 5G powerhouses is that they have little opportunity for differentiation, since they don’t have either specific mobile network experience or a particularly large product footprint in the space. Juniper’s decision to roll its own near-real-time RIC into the Intel 5G ecosystem is a way for the company to become a part of a credible, broad, publicized, 5G model. Intel saw 5G as an opportunity to break out of the processor-for-PCs-and-servers market and into the device space. That was important because devices, including white-box switches/routers, could end up as big winners in open-model 5G and O-RAN, and that would risk having non-Intel processors gaining traction in metro hosting. That’s not only a threat to Intel expansion, but also to its core server business.

It’s also a threat to the network equipment business, particularly that related to metro. If you have both servers and networking in the same place, and if 5G standards favor hosting of many 5G elements, you could certainly speculate that servers with the proper software could absorb the mission of network devices. Vyatta is proper software for sure; I know a lot of operators who spent money on Vyatta, going as far back as 2012. Ciena’s move thus makes sense even if you assume servers could take over for routers and switches. Since Vyatta software would also run, or could be made to run, on white boxes, it could have a big role in a 5G-driven metro play, if we consider that 5G is where most of the budget for near-term network change in the metro is coming from.

The problem that vendors like Juniper, Ciena, and Cisco face in 5G goes back to the question I asked above, which is whether 5G in the metro creates a bastion of hosting at the edge, or a bastion of networking in content data centers. Or both. If metro infrastructure is hosting-centric and if 5G white-box thinking dominates there, then open-model devices could own the metro and the current network vendors could see little gain from the metro/edge build-out. It’s that risk that network vendors have to worry about, and despite Ciena’s positioning of their Vyatta deal, they still have to establish a 5G positioning, not just a switch/router software positioning.

The keys to the metro kingdom, so to speak, lie in the RIC, or more properly, in the two RICs. The RIC concept is a product of the O-RAN Alliance, designed primarily to prevent a monolithic lock-in from developing in the 5G RAN, particularly in the “E2 nodes”, the Distributed Unit, Central Unit, and Radio Unit (DU/CU/RU) elements. The near-real-time RIC (nearRT RIC) is responsible for the management of microservice apps (xApps) that are hosted within this “edge” part of the 5G RAN, opening that portion up to broader competition by making sure its pieces are more interchangeable. You could say that the nearRT RIC is a bridge between traditional cloud hosting and management and the more device-centric implementations likely to be found outward toward the 5G towers.

You could also say that the non-real-time RIC (nonRT RIC) is a bridge between the 5G RAN infrastructure and the broader network and service management framework. It’s a part of the Service Management and Orchestration layer of O-RAN, and its A1 interface is the channel through which both operators’ OSS/BSS and NMS frameworks act on RAN elements, with the aid of the nearRT RIC.
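
To make that division of labor concrete, here’s a toy Python sketch of the two RICs. None of these class or method names come from a real O-RAN SDK or specification; the A1 policy push and xApp dispatch are reduced to simple method calls purely for illustration.

```python
# Toy sketch of the O-RAN two-RIC split. All names are illustrative;
# nothing here is drawn from an actual O-RAN SDK.

class XApp:
    """A microservice app hosted by the near-real-time RIC."""
    def __init__(self, name):
        self.name = name
        self.policy = {}

    def apply_policy(self, policy):
        # An xApp adjusts E2-node (DU/CU/RU) behavior per the policy.
        self.policy = policy
        print(f"{self.name}: now enforcing {policy}")

class NearRTRIC:
    """Manages xApps close to the RAN, on fast control loops."""
    def __init__(self):
        self.xapps = []

    def register(self, xapp):
        self.xapps.append(xapp)

    def on_a1_policy(self, policy):
        # Policies arrive from the nonRT RIC over the A1 interface.
        for xapp in self.xapps:
            xapp.apply_policy(policy)

class NonRTRIC:
    """Part of Service Management and Orchestration; bridges OSS/BSS
    intent to the RAN via the A1 interface."""
    def __init__(self, near_rt):
        self.near_rt = near_rt

    def push_intent(self, intent):
        self.near_rt.on_a1_policy(intent)

ric = NearRTRIC()
ric.register(XApp("traffic-steering"))
NonRTRIC(ric).push_intent({"slice": "embb", "max_prb_share": 0.6})
```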

Inside a metro area, both RICs and the implementation of 5G Core would create/manage the hosting of “features and functions” that are components of 5G service. It would be optimal, given the fact that most IP traffic growth comes from metro-hosted content, for the content hosting, 5G feature/function hosting, and edge computing missions of metro infrastructure to harmonize on a single structure for both hosting and connection.

This is what’s behind a lot of maneuvering by vendors like Juniper, Ciena, and even Cisco. If metro is going to evolve through the parallel forces of hosting and connecting, having nothing to say in the hosting area is a decided disadvantage. Similarly, being stuck too low on the stack, down at the fiber transport level as Ciena is, relegates you to a plumbing mission and surely takes you out of meaningful metro planning dialogs.

You actually need to think about moving up the stack in the metro-success game. Hosting of almost anything that’s distributed demands some sort of multi-tenant network model for the data center/cloud, and that’s what actually spawned the whole SDN thing with Nicira ages ago. The ONF’s 5G approach is based on SDN control, demonstrating that you could take a network-centric view of the future of metro infrastructure for 5G hosting, and likely then on to edge computing.

Let’s make an important point here: Juniper, of all the non-mobile network infrastructure players, has the strongest product portfolio to address the New Metro moves. They have their Cloud Metro fabric concept for connectivity at the physical level, they have Apstra for data center automation and virtualization, they have both Contrail and 128 Technology for virtual networking with security and prioritization, and they have Mist/Marvis and AI automation overall for operations efficiency. They also have a RIC from a deal with the Netsia subsidiary of Türk Telekom in January, and it’s this effort that they’re now harmonizing with Intel’s FlexRAN model.

The fusion is a good strategy for Juniper, because Intel has a higher 5G profile and a better platform on which to promote its RIC model to the market. Juniper has been fairly quiet about the details of its RIC so far; it’s even hard to find any information on Juniper’s website. Given the RIC’s strategic position in 5G and the edge, they need a dose of publicity and credibility if they’re to reap the full benefits of their RIC deal, and exploit that to improve their 5G, Edge, and metro position.

Any non-mobile-incumbent network vendor, which Juniper is, has a challenge in the 5G space. They not only face competition from the mobile incumbents, who are so far winning the majority of the deals, but also competition from software, hosting, and white-box players, including giants like Dell, HPE, Red Hat, and VMware. The former group have pride of place, and the latter group have the inside track on the open-model 5G space because they’re hardware-independent. For the non-mobile-incumbents, like Juniper, there has to be a powerful reason for a buyer to give them a seat at the table, and it’s not enough to say “we have O-RAN and RIC too.” So does everyone else.

The Intel move could help Juniper validate its RIC approach, but it doesn’t explain it. That’s something Juniper has to do, and they also need to create a better Cloud Metro positioning that reflects the reality of the metro space. It’s where the money is, and will be. It’s where differentiation matters, and is also possible, and it’s where every vendor in the server, software, and network space is hoping for a win. Juniper has amassed metro assets, but not yet fully developed or exploited them, and they need to do that.

Why Are Security Problems So Hard to Solve?

Why are network, application, and data security problems so difficult to solve? As I’ve noted in previous blogs, many companies say they spend as much on security as on network equipment, and many also tell me that they don’t believe that they, or their vendors, really have a handle on the issue. “We’re adding layers like band-aids, and all we’re doing is pacing a problem space we’ve been behind in from the first,” is how one CSO put it.

Staying behind by the same amount isn’t the same as getting ahead, for sure, but there’s not as much consensus as I’d have thought on the question of what needs to be done. I’d estimate that perhaps a quarter or less of enterprises really think about security in a fundamental way. Most are just sticking new fingers in new holes in the dike, and that’s probably never going to work. I did have a dozen useful conversations with thoughtful security experts, though, and their views are worth talking about.

If you distill the perspective of these dozen experts, security comes down to knowing who is doing what, who’s allowed to do what, and who’s doing something they don’t usually do. The experts say that hacking is the big problem, and that if hacking could be significantly reduced, it would immeasurably improve security and reduce risk. Bad actors are the problem, according to the enterprise experts, and their bad acting leaves, or could/should leave, behavioral traces that we’re typically not seeing or even looking for. Let’s try to understand that by looking at the three “knows” I just cited.

One problem experts always note is that it’s often very difficult to tell just who is initiating a specific request or taking a specific action on a network or with a resource. Many security schemes focus on identifying a person rather than identifying both the person and the client resource being used. We see examples of the latter in many web-based services, where we are asked for special authentication if we try to sign on from a device we don’t usually use. Multi-factor authentication (MFA) is inconvenient, but it can serve to improve our confidence that a given login is really from the person who owns the ID/password, and not an impostor who’s stolen it.
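
To show what that second factor actually computes, here’s a minimal sketch of the time-based one-time password math from RFC 6238, the scheme most authenticator apps use. It needs only the Python standard library; the base32 secret shown is a placeholder test value, not anything real.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time code (RFC 6238): the phone app and the
    server derive the same code independently from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server-side check: the submitted code must match the current time step.
assert totp("JBSWY3DPEHPK3PXP") == totp("JBSWY3DPEHPK3PXP")
```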

The problem of having someone walk away and leave their system accessible to intruders would be largely resolved if multi-factor authentication were applied via a mobile phone as the second “factor”, since few people would leave their phones. However, if an application is left open, or if a browser tab that referenced a secure site/application is open and it’s possible to back up from the current screen into the secure app, there’s a problem. There are technical ways of addressing these issues, and they’re widely understood. They should be universally applied, and my group of experts say that more time is spent on new band-aids than on making sure the earlier ones stick.

The network could improve this situation, too. If a virtual-network layer could identify both user and application connection addresses and associate them with their owners, the network could be told which user/resource relationships were valid, and could prevent connections not on the list—a zero-trust strategy. It could also journal all attempts to connect to something not permitted, and this could be used to “decertify” a network access point that might be compromised.
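
A minimal sketch of that zero-trust pattern might look like the following Python; the users, resources, and denial threshold are all invented for illustration, and a real implementation would enforce this in the virtual-network layer rather than in application code.

```python
# Zero-trust sketch: deny unless the user/resource pair is explicitly
# permitted, and journal every denied attempt for later review.
import time
from collections import Counter

ALLOWED = {("alice", "payroll-db"), ("alice", "email"), ("bob", "email")}
journal = []   # denied attempts, kept for pattern analysis

def connect(user: str, resource: str, access_point: str) -> bool:
    if (user, resource) in ALLOWED:
        return True
    journal.append({"ts": time.time(), "user": user,
                    "resource": resource, "ap": access_point})
    return False

connect("bob", "payroll-db", "ap-17")   # denied and journaled

# An access point piling up denials is a candidate for "decertification".
denials_per_ap = Counter(entry["ap"] for entry in journal)
suspect_aps = {ap for ap, n in denials_per_ap.items() if n >= 3}
```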

Journals are also a way of looking at access patterns and history, something that AI could facilitate. Depending on the risk posed by a particular resource/asset, accesses that break the normal pattern could be a signal for a review of what’s going on. This kind of analysis could even detect the “distributed intruder” style of hack, where multiple compromised systems are used to spread out access scanning to reduce the chance of detection.
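
Here’s a hedged sketch of that journal-based pattern analysis. A real system would weight accesses by the risk of the asset and use far richer baselines (time of day, volume, origin); the names and the threshold below are invented.

```python
# Build a per-user baseline of which resources they normally touch,
# then flag accesses that break the pattern.
from collections import Counter

history = [("alice", "email"), ("alice", "email"), ("alice", "payroll-db"),
           ("bob", "email"), ("bob", "email"), ("bob", "email")]

baseline = Counter(history)

def is_anomalous(user: str, resource: str, min_seen: int = 2) -> bool:
    # Anything a user has rarely or never touched breaks their pattern.
    return baseline[(user, resource)] < min_seen

print(is_anomalous("bob", "payroll-db"))   # True: bob never touches payroll
print(is_anomalous("alice", "email"))      # False: routine access
```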

A special source of identity and malware problems is the systems/devices that are used both in the office and elsewhere, since the use of and traffic associated with those systems/devices aren’t visible when they’re located outside a protected facility. That problem can be reduced if all devices used for access to company assets are on the company VPN, with the kind of zero-trust access visibility I’ve described. If the WFH strategy in play for systems outside the office puts the systems inside the zero-trust boundary, then the risk of misbehavior is reduced because the chances of detecting it are much higher.

The “dualism” of devices, the fact that many are used for both personal and business reasons, is one of the major sources of risk, one that even zero-trust network security can’t totally mitigate. Many of the security experts I’ve talked with believe that work and personal uses of devices absolutely do not mix, and that business devices should not be able to install any applications other than those approved by IT. Those same experts are forced to admit that it’s virtually impossible to cut off Internet access, however, and that creates an ongoing risk of malware and hacks.

One suggestion experts had was to require that all systems used for business, whoever owns them, access all email through a company application. Emailed malware, in the form of either contaminated files or links, represents a major attack vector, and in fact security experts say it may well be the dominant way that malware enters a company. The problem here again is the difficulty in enforcing the rule. Some who have tried, by blocking common email ports, have found that employees learn how to circumvent these rules using web-based email. Others say that social-media access, which is hard to block, means that it may not be worthwhile to try to control email access to avoid malware.

So what’s the answer to the opening question, why security is so hard? Because we’ve made it hard, not only by ignoring things that were obviously going to be problems down the line (like BYOD), but also by fixing symptoms instead of problems when the folly of the “head-in-the-sand” approach became clear. I think that we need to accept that while the network isn’t going to be able to stamp out all risk, it’s the place where the most can be done to actually move the ball. Zero-trust strategies are the answer, and no matter how much pushback they may generate because of the need to configure connectivity policies, there’s no other way we’re going to get a handle on security.

Is It Good that 5G Handsets are Taking Off?

Anyone who reads tech news or watches TV probably realizes that 5G smartphones are taking off. A part of the reason is that most of the major smartphone vendors make 5G a feature of their newest models, which makes 5G less a choice than something that a new phone pulls through. The question, of course, is whether the growth of 5G phones will somehow alleviate the growing cynicism about 5G’s promise, or make it worse.

In my blog on the Street view of 5G earlier this week, I noted that Wall Street’s forecasts show that 5G investment will peak in 2023, which likely means that any impact of new technology on the 5G market, and on the evolution of 5G toward a general edge computing model, will have to happen by then. One interesting question this point raises is whether early 5G adoption might actually hurt edge computing, and even 5G innovation, by accelerating deployment before innovative offerings have matured enough to be credible. Could 5G’s early handset success be a sign that this risk is mounting?

I’ve had a chance to ask several hundred people who got 5G smartphones whether the 5G capability had changed their experience. About ten percent said it had, and the rest said “No!” While it’s difficult to get this data accurately, it appears that only about thirty percent of those with 5G phones actually had 5G service available to them for any significant amount of time. The very few who were tracking whether they were on 5G or not indicated that they could not see any clear difference in how their phones performed when they were on 5G or LTE. What this seems to prove is that the 5G user experience doesn’t justify the hype, but that doesn’t mean that 5G phone success won’t be good for 5G…in the long run.

There is no question that 5G’s benefits to the average user have been seriously overhyped, and frankly no question that it was destined to be overhyped from the first. “News” means “novelty”, not truth, and in an ad-sponsored world, clicks are everything. You don’t get clicks by telling boring truths; exciting lies work way better. People justify the hype by suggesting that in order to get technology advance, we have to make the advances seem consumable, populist. OK, we did that, but now reality is setting in.

How 5G handsets will impact 5G overall, and in particular the development of “real” 5G-specific applications, depends on the tension between the negative force of disillusionment and two forces that act to promote 5G. One of those forces is the force of availability, and the other the force of desperation.

Every mobile technology succeeds primarily because it becomes pervasive. It’s difficult to imagine how 5G applications (whatever they are, and whoever might create them) could develop without extensive 5G availability, and it’s hard to see how that can develop without a large number of 5G smartphones. Were we to see a very slow uptick in 5G handsets, we could justifiably wonder if the whole 5G thing was a hoax. That’s not going to be the case.

We are already seeing a lot of 5G service availability, and that’s going to continue thanks to the growth in 5G handsets. With that, we’re winning the most basic battle for 5G applications, which is having some 5G to run them on. Every 5G tower is a point from which a 5G future could be projected, and every 5G-committed operator is a competitor in a market that, to avoid price commoditization as users figure out that 5G really doesn’t do much (if anything) for them, is sure to try some innovation.

That attempt is going to be delayed in the near term, though, because a big part of the 5G handset success is attributable to smartphone incentives launched by the operators. If the services themselves aren’t differentiating, maybe the phones can be. Of course, there are only a certain number of phones that have star quality, so differentiating on phones isn’t going to last, and desperation will set in.

It might have set in already, in some ways and for some operators, and the relationships between 5G operators and public cloud providers may be an indicator. Some see this as a pure cost-driven alliance, but that’s a bit of an oversimplification. Yes, public cloud relationships offer operators a first-cost benefit, reducing the cost of building out a wide-geography carrier cloud when there are only a limited number of uses for it. In the long run, if hosting features is the best/only path forward, then network operators will bear higher costs because of public cloud profit margins.

The alliances are justified in the long run if operators can harness public cloud providers’ capabilities and knowledge to explore additional applications, services, and features. What I’ve heard from operators this year suggests that there are some in the organizations who see that, who understand that 5G “applications” have to be first new, and second, uniquely linked to some aspect of 5G. It’s very difficult to see how those requirements could both be met through anything other than edge computing applications.

There’s been a lot of operator interest in function hosting, including the NFV initiative, but the fact is that for a variety of reasons, the software side of 5G hosting and edge computing hasn’t even come close to the richness that the cloud has. While some major vendors launched “telecom” initiatives, these have (perhaps not surprisingly) focused on 5G functionality rather than the hosting ecosystem that would likely lead to a generalized edge strategy. Two of every three operators tell me that they can’t get what they need from the software vendors, and in many cases this is true even when the software vendors have what operators need.

This is the reason why the relationships between public cloud providers and network operators for 5G function hosting could loom large. We don’t have edge computing today because we have no significant resource-pool deployments at the edge. Operators own real estate there and could potentially build out facilities to host 5G elements, which could then be used for those “true” 5G applications. If operators don’t build out, then either we don’t get edge pools at all, or they’re owned by the cloud providers.

There is, of course, the option that the cloud providers in some way “lease” the resources from the operators and provide the software needed, either cloud middleware that can run in these operator-built-cloud-leased edges, or a complete 5GaaS. In this case, whether the edge resource pool would be committed to the operator in a longer-term sense would depend on the specifics of the deal.

The faster an open-model 5G that relies extensively on traditional resource pools deploys, the more incentive there is for operators to do cloud provider deals, because their do-it-yourself hosting decisions would likely take longer to realize. Would that hurt edge computing? It might actually help it a little, because there is near-zero chance that operators themselves will be able to navigate the complicated symbiotic evolution of 5G and the edge. It could hurt if cloud-software giants like Red Hat and VMware decide that they’re too late to grab the brass ring, and stop trying to merge telecom and cloud in a software-centric, hosting-independent way. I suspect that from now through mid-2022 will be the decisive period, and I’ll report regularly on what I’m seeing.

Reading Wall Street Tea Leaves on 5G, Metro, and the Edge

Wall Street has credible reasons to believe that O-RAN is going to end the dominance of the big mobile network equipment vendors, and it may reshuffle the vendors’ rankings too. The bigger question is the impact it might have in the networking industry overall. It may be that there’s no success in networking in the future without a credible O-RAN story of some sort. It may also be that O-RAN will need to change to realize the full benefits of all of this. Street data may help us assess these questions, and maybe answer them too.

The telcos have a decades-long history of messing up their own futures by launching and supporting “transformation” projects that were totally disconnected from reality. One could argue that the whole of 5G specifications, from the 3GPP, are an example. These specifications failed to truly define an open framework for 5G even as they mandated hosting principles that almost had to be open to work. They also built on NFV work that was itself out of touch with reality.

When the O-RAN Alliance was founded (by AT&T, China Mobile, Deutsche Telekom, NTT DOCOMO and Orange) in 2018, it faced the same sort of risk that contaminated previous telco standards and transformation projects. Despite that, for reasons I’ll get to, O-RAN actually produced something that was not only useful but potentially transformational. The biggest reason for that wasn’t the technical excellence of the work, but the specific target.

The 3GPP specifications tended to open up 5G infrastructure with respect to control-plane function hosting. Yes, it did that with NFV, but the operators who founded O-RAN weren’t trying to replace NFV but to address something that they believed the 3GPP work didn’t open fully, which was the 5G New Radio (NR), the 5G Radio Access Network (RAN) specification. In the 3GPP work, the X2 interface provided for base-station coordination of resources and resource management, but the interface was interpreted differently by different vendors, and the exact mechanism for the resource orchestration, configuration, and management was opaque. O-RAN’s contribution to all of this was to open up that opaque process through the concept of the RAN Intelligent Controller or RIC.

Opening up the 5G RAN opened up the critical (and, as I’ve said, opaque) part of RAN implementation. In doing so, it broke a potential proprietary RAN lock-in, and that was the thing that really changed the mobile network market forever. The majority of 5G is still proprietary because of the past vendor-specific infrastructure model and past vendor relationships, but it’s very clear to the Street (and to me, and many others) that we’re entering the open age with regard to mobile networks.

Over the last decade, mobile network infrastructure has actually become less “open”, meaning it’s been more dominated by a few large vendors. Street data says that a decade ago, Ericsson, Nokia, and Huawei had about 65% of the market for mobile infrastructure, and today they have nearly 80%. Nokia’s success, and Ericsson’s to an extent, can be traced in part to their at-least-titular support for O-RAN and openness, and the Street is now projecting that vendor lock-in won’t take hold in 5G mobile networks.

An elephant in the 5G infrastructure room is Huawei, who has had the largest market share in mobile infrastructure over the entire last decade. However, its market share has been fairly static since 2016 because of tensions between the US and China that have spilled over to other 5G infrastructure buyers. Huawei was the price leader in mobile infrastructure, and the diminution of its influence put cost pressure on operators, pressure that an open 5G infrastructure market could be expected to relieve. If price-sensitive infrastructure deals favor O-RAN, then that only adds to its importance.

As I’ve noted in past blogs, 5G is the only area of telecom spending that’s actually budgeted, and that alone means that it will have a massive impact on vendor opportunity. A vendor who wants to sell to network operators who have any mobile service commitment will need to have a 5G strategy, period. If the Street is right (and I believe they are) about the open-5G drive among operators, then that means those vendors will need an O-RAN positioning, and a positioning toward where O-RAN is leading us. Thus, it’s critical to know where that is.

5G is made up of two distinct pieces, the RAN/NR piece and 5G Core. The term “5G Core” is perhaps unfortunate, because virtually nothing in it relates to what we’d think of as “the core” of a network, the deep transport router functionality. 5G Core is really a metro strategy because its primary role is subscriber and mobility management, which is functionally focused on metro areas because that’s the typical scope of movement of mobile customers. 5G RAN, and O-RAN, are “edge” or “access” strategies that extend from the metro area outward to the towers. Arguably, it’s the metro that’s the focus of 5G infrastructure because everything touches it.

That’s even more true when you consider that the most visible of the (arguable) 5G drivers is low-latency applications. It makes little sense to spend dollars to control mobile network latency, only to throw away the benefits by hauling application traffic a thousand miles to a cloud host. You’d want the applications to be run proximate to the point of RAN attachment, which is (you guessed it!) the metro.

The big innovation of O-RAN, technically speaking, is the RAN Intelligent Controller or RIC, but a second benefit is that the “hostable” elements of the RAN, which are the functions of the “baseband” portion and the so-called “Central Unit” and “Distributed Unit” or CU/DU, are controlled by the RIC but virtualized onto open devices. That means that specialized chips aren’t mandatory in the structure; as long as you can virtualize the CU/DU software functions onto a hardware device, you can use it.

What hardware device? There are three classes of network device that could be used in 5G RAN and Core. One (obviously) is the server, the commercial-off-the-shelf or COTS hosting point. A second is the white box, the open-hardware-modeled switching device, and the final one is the legacy network switch/router. That last device is responsible for providing IP connectivity, and it’s really a lower layer of the User Plane of 5G. The other two device options differ in the extent to which they’re generalized resources versus specialized communications devices. The former is most useful when there’s more than one thing a device at a specific point in the mobile topology could be expected to do, which will generally mean further inward, to the metro. The latter is the default strategy for something further out, which means the RAN, toward the tower.

What makes this resource independence and virtualization work is the “vRAN” concept, which is related to but not identical with O-RAN. The purpose of vRAN is to allow RAN (and, I believe, eventually 5G Core) functions to be developed in a hardware-independent way, including independence from specialized switching chips. As I’ve suggested above, vRAN virtualization is a key part of the O-RAN value proposition, and so I believe that the two are joined at the hip. However, there are some vendors who are advancing vRAN concepts faster than O-RAN concepts, and some that go in the other direction.

The major mobile vendors have very different positions in all of this. Nokia is already aggressive with respect to vRAN and O-RAN, and the Street expects them to become even more so as the open-model strategy gains traction with operators. Ericsson and Huawei are far less committed; my operator contacts tend to agree that both vendors are just “blowing kisses” rather than making changes.

That suggests that there’s an opening for other vendors, and the Street likes some other non-network-specialized tech giants, Fujitsu, NEC, and Samsung, as well as up-and-comers Mavenir and perhaps Rakuten. The former group has the advantage of credibility of scale, something telcos generally value highly, but their success depends on O-RAN and open-model 5G in general, because otherwise big vendors tend to stir telco fears of vendor lock-in. The latter group represent the open wave in 5G better, but perhaps not as well as others could.

If we consider edge computing part of the mix, we have to look for more IT-centric players, particularly HPE, IBM/Red Hat, and VMware/Dell, but the timing may be critical. None of these players have realized a complete and mature 5G strategy. 5G spending is expected to peak in 2023, and 5G is already the only driver of increased mobile operator spending and the major driver of telco spending gains overall. It may well be that Ericsson and Huawei expect competitive pressure from operators to drive their deployment decisions faster than an O-RAN/vRAN movement can mature, reducing the threat of competitors from any source mentioned above.

Why are these three players, as well as the public cloud providers, interested in all of this? Nobody really knows what a “5G application” would look like, nor how many of them could deploy, or when. The edge computing opportunity isn’t really driven exclusively by 5G, but because 5G is deploying and is defining an open-model, virtualized, edge function set, it could also define a model of edge hosting and even a middleware model. If there is a sea change coming in computing, it would likely come from edge-specific applications. My modeling has always said that the total revenue from this could hit a trillion dollars, but it won’t be easy to realize because it will require assembling a wide range of technologies into a cooperative system. The biggest liability the mobile network’s traditional infrastructure leaders face is that they don’t have a clue on this topic.

The final truth here relates to the “don’t-have-a-clue” group and the fact that the mobile network vendors aren’t the only ones in that category. Remember that legacy switching and routing, which forms a big chunk of the 5G User Plane, is one of the product classes that will make up 5G infrastructure, which means legacy network vendors will see their own opportunities change with 5G deployment.

The future of the network is going to be decided by the fusion of switching/routing and hosting, and that fusion happens in the metro area and nowhere else. The big question is how it happens, at least from the perspective of the switch/router vendors. Is the metro network a network first, or a huge data center? If it’s the latter, there has to be an element of data center thinking in metro strategy. You can see Cisco’s recent application-oriented M&A in that light, but it’s more easily validated (if not yet effectively exploited) in Juniper’s metro-fabric, Apstra, AI, and 128 Technology stuff.

A great metro strategy is a great network strategy, and a great metro strategy has to be based on some credible network/hosting/edge vision and 5G/O-RAN/vRAN positioning. That’s where the build-out money for metro will come from through 2024. Get a piece, and you’re a winner. Fail, and you fail, convincingly and in the long term, no matter how good your product suite is overall. You can’t make money where it’s not being spent, and metro is that place, now and going forward.

The big fly in the ointment here may be Nokia’s announcement on the O-RAN Alliance, saying “we have no choice but to suspend all of our technical work activities” because of the threat of US sanctions applied to some Chinese companies who were members. Other international bodies like the 3GPP secured US licenses to permit Chinese participation without sanction risk, but the O-RAN Alliance has so far not done that. I don’t think this is going to derail O-RAN, and I don’t think that it will impact Nokia’s relationship with O-RAN, but it could delay broader progress in open-model 5G innovation. If the Alliance doesn’t address the sanctions risk, it might delay open-model 5G just enough to give those mobile vendors a shot at locking up the 5G market before it peaks in 2023, and that might impact edge computing by decoupling hosting and networking at the edge. In short, a lot’s at stake in the next couple years.

What’s Behind the Comcast-Masergy Deal

In a move that I found both surprising and unsurprising at the same time, Comcast acquired Masergy Communications, a provider of managed SD-WAN and SASE. It’s surprising because network operators haven’t traditionally purchased either technology or managed service providers. It’s unsurprising both because these aren’t traditional times in networking, and because Comcast knows it has a potentially major upside in the deal.

SD-WAN is a hot technology, one of the hottest in terms of user interest. Managed services are also hot and getting hotter, and I’ve blogged about both these truths in the past. Rather than reprise those stories, I’ll address the broad trends in the context of Comcast’s move, and its motivations.

There’s been a lot of talk about the global aspirations that Comcast supposedly has for the deal, speculation that it would somehow turn Comcast into a player in enterprise networking. I don’t think there’s a chance that’s true. The deal isn’t aimed at global opportunity at all; Comcast has a US footprint and they’d have no special credibility selling to buyers outside the US. As far as enterprise network opportunity, Comcast doesn’t stand any real chance of taking over enterprise network business from the telcos, and they know it well.

What the heck is this about, then? Occam’s Razor, in action. This is about SD-WAN.

Comcast is one of the premier suppliers of broadband Internet connectivity in the US. Since SD-WAN relies on broadband Internet, Comcast would appear to have a great opportunity in providing SD-WAN services to enterprises. Appearances can be deceptive, as Comcast learned with their initial SD-WAN push. The problem is that SD-WAN is something enterprises want to do once and only once. A supplier who can offer SD-WAN within the Comcast footprint is way less attractive than one who can offer it everywhere, which Masergy can do.

Suppose you’re a Comcast sales type calling on a US enterprise in your territory. You have this great SD-WAN strategy for the sites the company has that happen to match the Comcast footprint. You sing your best song, but the best outcome you can hope for is that 1) the buyer is willing to engage with multiple SD-WAN providers to get all their candidate sites covered, and 2) the other SD-WAN provider they contact for what your Comcast stuff can’t cover is stupid enough to leave any of the deal on your table. If either of these isn’t the case, you’re back to beating the bushes.

What Comcast needs is an SD-WAN strategy that covers the globe, so it doesn’t have to push buyers into a multiple-SD-WAN solution. Masergy has that. They’re a global MSP. They already run over any broadband provider in the world, including Comcast. They can provide sales and support globally, too. There are few SD-WAN managed service providers out there who are truly major players, and Masergy is one. Comcast needs an MSP, not SD-WAN technology, for that global support scope, and nobody is a better candidate.

Then there’s the competitive dynamic. Masergy is a truly great partner for a communications service provider like Comcast, and also for all of Comcast’s competitors. One could spend time better wondering why nobody else did this deal first than wondering why Comcast would want to do it now, and that’s for the Masergy SD-WAN MSP story alone.

It’s not the only story, either. The only fly in the Masergy-marriage ointment for Comcast is the price they have to pay for the deal, which hasn’t been disclosed by either party but which my Street sources tell me was “unexpectedly high”. It shouldn’t have been unexpected, of course, but it is a fact that without something to add some luster to the deal in the near term, the result might be a hit on Comcast’s bottom line and share price. How to avoid that? Have other stuff to sweeten the deal.

Comcast has an obvious revenue kicker in that they can promote their own cable broadband within their footprint, increasing their broadband sales to businesses. That not only creates direct revenue, it creates reference accounts and easier sales traction. However, that source is something the Street would expect, so it would have factored into their expectation of the strike price for the deal. There has to be something else.

There is, and it’s the other stuff that Masergy offers. They have eight SD-WAN elements in their order book, six unified communications elements, four contact center elements, and five managed security elements. I’ve pointed out before that the most critical thing an SD-WAN MSP needs is something to upsell into, to differentiate from the horde of competitors and sweeten the revenue and commissions. Masergy can offer that, so much so that the deal can likely get Comcast salespeople an opening to call on enterprises. You can’t sell if the buyer won’t set up an appointment.

Some of those sweeteners are also valuable in their own right. Multi-cloud? It’s in there. AIOps? Also in there. Security services and SASE? In there, too. With the Masergy deal, Comcast steps up to be one of the most credible MSPs in the market…with the obvious “if…” qualifier.

If they can play it right. The big question is synergy. Masergy has probably already tried to engage enterprises for SD-WAN, and they cover the whole waterfront, footprint-wise. Our hypothetical SD-WAN/MSP salesperson, having called on enterprises, now leaves behind two pools: those that already bought and those that rejected Masergy. What can Comcast say to get their own shot? Not “Masergy is now us”, or even “look at what we are when you pull Masergy into our tent”. The whole has to be greater than the sum of the parts, and that has to be extremely clear from the very first.

They’re not doing that so far. Their press release on the Masergy deal is b..o..r..i..n..g. They talk about what Masergy adds to Comcast, but not what Comcast adds to Masergy, and that’s the question on which the whole value of this deal will rest.

If you’re a Comcast competitor, you have to see this deal as the classic shot-across-the-bow threat. Most competitors will be telcos, and rather than buy another MSP (who probably won’t have all the assets of Masergy), you’d likely engage with an SD-WAN technology vendor and launch your own MSP service. Masergy’s technology is hidden in their MSP positioning, but there are many (dozens of) SD-WAN products out there, as well as SASE solutions, security approaches, operations automation, and so forth. Big telcos also have global presence already. If I were a telco, I’d be dusting off my MSP business plans and looking for best-of-breeds. If I were a vendor who had something arresting in one of Masergy’s technology areas, I’d be prospecting telcos for my wares. Both will surely happen, starting now.

Comcast absolutely has to hit the ground running on this, and they’ve not done that with the bulliest of their bully pulpits, the announcement itself. If they’re not careful, this deal could hurt them more than help them, but if they are, it could make them a real player in the network of the future.

Is There a Role for Graph Databases in Lifecycle Automation and Event-Handling?

Everything old is new again, so they say, and that’s even more likely to be true when “new” means little more than “publicized”. Among the 177 enterprises I’ve chatted with this year, I was somewhat surprised to find that well over two-thirds believed that artificial intelligence was “new”, meaning that it emerged in the last decade. In fact, it goes back at least into the 1980s. The same is true for “graph databases”, which actually go back even further, into the ‘70s, but Amazon’s Neptune implementation is bringing both AI and graph databases into the light. It might even light up networking and cloud computing.

We’re used to thinking of databases in terms of the relational model, which uses the familiar “tables” and “joins” concept to reflect a one-to-many relationship set. RDBMSs are surely the most recognized model of database, but there have been other models around for at least forty or fifty years, and one of them is the “semantic” model. I actually worked with a semantic database in my early programming career.

Semantic databases are all about relationships, context. Just like words in a sentence or conversation have meaning depending on context, so data in a semantic database has meaning based on its relationships with other data. The newly discussed “graph databases” are actually more properly called “semantic graph databases” because they extend that principle. Graph databases fall under the NoSQL umbrella, and they’re increasingly popular for IoT applications.

The same notion was the root of a concept called the “Semantic Web”, which many saw as the logical evolutionary result of web usage. In fact, the Resource Definition Framework (RDF) used in many (most) graph databases came about through the Semantic Web and W3C.

Graph databases shine at storing things that are highly correlated, particularly when it may be the correlations rather than the value of a data element that really matters. Amazon’s Neptune and Microsoft Azure’s Cosmos DB, as well as a number of cloud-compatible software graph database products (Neo4j is arguably the leader), will usually perform much better in contextual applications than RDBMS databases would. That makes them an ideal foundation for applications like IoT, and also for things like network event-handling and lifecycle management. While you don’t need graph databases for AI/ML applications, there’s little doubt that most of those applications would work better with graph databases, and my notion of “contextual services” would as well.
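
To show why the relationships, rather than the values, are the point, here’s a toy in-memory triple store in Python, echoing the RDF subject/predicate/object pattern that most graph databases build on. Real products like Neptune, Cosmos DB, or Neo4j add persistence, indexing, and query languages on top of this idea; the element names below are invented.

```python
# Toy triple store: each fact is a (subject, predicate, object) triple.
triples = {
    ("router-1", "connects_to", "switch-a"),
    ("switch-a", "hosts", "vnf-firewall"),
    ("vnf-firewall", "protects", "payroll-db"),
    ("router-1", "in_state", "degraded"),
}

def query(s=None, p=None, o=None):
    """Match triples; None acts as a wildcard, like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# The relationship answers the question: what is degraded, and what
# does the degraded element ultimately touch?
print(query(p="in_state", o="degraded"))
print(query(s="vnf-firewall", p="protects"))
```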

Network service lifecycle automation, a topic dear to my heart and the hearts of anyone with a network, would seem a natural for graph database technology. Network events, since they reflect a state change in a highly interconnected community of cooperative elements, are handled properly when they’re handled in context, so obviously something that can reflect relationships would be a better way of storing and analyzing them. Why then don’t we see all manner of vendor representations on the power of their graph database technology in network management?

We do see an increased awareness of the contextual nature of lifecycle automation, and I’ve illustrated it through my blogs about finite-state machines (FSM) and state/event processing. You can also see it in the monolithic models of network automation, including the ONAP management framework, by the fact that the processing of an event often involves a query into the state of related elements. That begs the question of whether a graph database might serve as an alternative to both FSM and specific status polling.

One barrier to graph database application to network or service lifecycle automation, and one that would apply to application lifecycle automation as well, is the tendency to rely on specific polling for status, rather than on a database-centric analysis of status. Polling for status has major issues in multi-tenant infrastructure because excessive polling of shared resources can almost look like a denial-of-service attack. Back in 2013, I did some work with some Tier Ones on what I called “derived operations”, which was an application of a proposal in the IETF called “i2aex”, which stood (in their whimsical manner of naming) for “infrastructure to application exposure”. The idea was that status information obtained either by a poll or a pushed event would be stored in a database, and applications like lifecycle automation would query the database rather than do their own polling. I2aex never took off, and I didn’t follow through with serious thought about just what kind of database we might want to store these infrastructure events in. I think that graph database storage is an option that should be considered now (and that I should have explored then, candidly).
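
Here’s a rough sketch of that derived-operations idea, assuming a single collector feeds a status store that any number of management applications query. It’s my own illustration of the concept, not anything drawn from the i2aex proposal itself, and the element names and staleness window are invented.

```python
# Derived operations sketch: status flows into a shared store once;
# applications query the store instead of polling shared resources.
import time

status_db = {}   # element name -> (state, timestamp)

def ingest_event(element: str, state: str):
    """Called by the one collector that talks to the infrastructure,
    whether the status was polled once or pushed as an event."""
    status_db[element] = (state, time.time())

def get_status(element: str, max_age: float = 30.0) -> str:
    """Any application can call this without touching a device."""
    state, ts = status_db.get(element, ("unknown", 0.0))
    return state if time.time() - ts <= max_age else "stale"

ingest_event("switch-a", "operational")
print(get_status("switch-a"))    # "operational", with no device poll
```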

Conceptually, the “state” of a community of cooperative elements (of any kind, network or application) can be determined from the sum of the states of the elements themselves. The relationships between the elements and their states can surely be represented in a graph database, and in fact you could use a graph to represent a FSM and “read” from it to determine what process to invoke for a given state/event intersection. Why not create a graph database of the network and the service, and use it for lifecycle automation?
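
As a sketch of that thought, here’s a state/event “graph” in plain Python: each edge maps a (state, event) pair to a next state and a process to invoke. In a graph database the dict below would become nodes and edges, but the read pattern would be the same; the states, events, and process names are invented.

```python
# FSM read from a graph-like structure: (state, event) -> (next, process).
fsm = {
    ("operational", "fault"):    ("degraded",    "isolate_and_reroute"),
    ("degraded",    "repaired"): ("operational", "restore_traffic"),
    ("degraded",    "fault"):    ("failed",      "escalate_to_ops"),
}

def handle(state: str, event: str) -> str:
    # Unknown intersections fall through to a default process.
    next_state, process = fsm.get((state, event), (state, "log_and_ignore"))
    print(f"{state} + {event} -> {next_state}, invoking {process}")
    return next_state

s = "operational"
s = handle(s, "fault")      # degraded, isolate_and_reroute
s = handle(s, "repaired")   # operational, restore_traffic
```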

One potential issue is that the number of relationships among elements grows exponentially with the number of elements, which means that a graph representing a large network, service, or application might be very large indeed, and that a query into it, even given the high performance of graph databases, might be time-consuming. Still, the concept might have real merit if we could tweak things a bit.

One possible tweak would be to use the same techniques I’ve discussed for creating a “service hierarchical state machine” or HSM from individual service-component FSMs. In the approach I discussed in my blogs, the components of a service or service element reported state changes back to their superior element, which then only had to know about the subordinate elements and not their own interior components. The model constrains the complexity.
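
A sketch of that hierarchical rollup, with invented element names: each element reports only a summarized state to its superior, so no element ever has to model the whole network, and the graph (or database) at each level stays small.

```python
# Hierarchical state machine sketch: state changes propagate upward
# one level at a time, in summarized form.
class ServiceElement:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.child_states = {}      # subordinate name -> reported state
        self.state = "operational"

    def report(self, child: str, child_state: str):
        self.child_states[child] = child_state
        rolled = ("operational"
                  if all(s == "operational"
                         for s in self.child_states.values())
                  else "degraded")
        if rolled != self.state:
            self.state = rolled
            if self.parent:          # propagate the summary, not the detail
                self.parent.report(self.name, self.state)

service = ServiceElement("vpn-service")
metro = ServiceElement("metro-segment", parent=service)
metro.report("access-link-3", "failed")
print(service.state)                 # "degraded": the detail stayed below
```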

Another possible tweak would be to use AI principles. A service or an application, in the real world, is really a fusion of two models, one representing the resources themselves and another the way that service functionality is related to or impressed on those resources. I believe a graph database could model that, but it might be easier to use AI to bridge between two graph databases, one representing each model.

I’ve always been a fan of state/event tables, but I’m not wedded to an approach if something better comes along. I’d like to hear about any state/event applications mapped to a graph database rather than the traditional table, and to get comments from some of my LinkedIn friends on the use of graph databases in applications traditionally seen as state/event-driven. Please do not advertise your product in response; if you want to reference a product, post a link to a blog you’ve done that explains the graph-database connection to state/event handling.

Making the Most of “Citizen” Strategies in IT

Remember “shadow IT”? Even the name seems a little sinister, and for many CIOs it was a threat from the get-go. Shadow IT is the dispersal of information technology purchasing and control to line organizations, creating a parallel set of technology centers. We don’t hear as much about it these days, but a recent article in Protocol made me look back at what I’ve heard from enterprises this year, and it’s interesting.

Despite the seemingly recent origin of shadow IT, it’s actually been around for sixty years. A company I used to work for, a giant publishing firm, had a half-dozen very separate lines of business, and they tried out dispersing IT to the business units—several times in fact. Over a period of 20 years, they went back and forth between central and distributed IT, and gradually ended up in a kind of uneasy middle ground. The reason for all the vacillation is relevant to the current situation.

Line organizations often see the IT department as an enemy. Of the 177 firms I’ve talked with this year, over a hundred said that they had “tension” over goals and costs between IT and line organizations. Almost a hundred said that they had some “shadow IT” activity in place, and none of them indicated they were trying to stamp it out, though half said they were trying to “control” it.

The reasons for the tension vary, but the top two items on the list are pretty consistent across enterprises. The number one issue was delay in getting critical line-sponsored projects implemented. Companies said that it took IT “over a year” in most cases to get a project completed, and the line organization targets were less than half of that. Issue number two was that IT costs were substantially higher than expected, often so high that the business case was threatened. Line departments felt that there was too much IT overhead, and that their allocated costs covered resources and activities that didn’t benefit the line organizations at all.

Line departments reacted to these issues in two specific ways. First, they promoted the notion of low-code/no-code development to take more control over project implementations. Second, they looked increasingly to the use of cloud computing, because as an expense, cloud computing bypassed some executive limitations on just what line organizations could do. “I can’t buy computers but I can rent computing” was a typical line comment.

There’s no question that low-code/no-code “citizen developers” have revolutionized the way line organizations handle many of their projects. Interestingly, enterprises often don’t consider this a form of shadow IT; twenty percent more companies say their line organizations use low-code/no-code than say they have shadow IT. IT organizations have gradually accepted the citizen-developer trend as well; the majority of the enterprises that use it said they didn’t put any special restrictions on use, though most did say that IT played a major role in selecting the tools.

The cloud is another matter. When asked whether cloud projects driven from line organizations were more successful or more likely to have issues, enterprises (and even a majority of CFOs) said that “citizen-cloud” projects often failed to deliver on expectations, were more likely to experience significant cost overruns, and “usually” required IT organizations to step in to correct problems. CIOs’ biggest problem with citizen-cloud projects was security/compliance, issues that arose because data was stored in the cloud without proper precautions.

The specific trend that Protocol talked about in the referenced article, the notion that software and even hardware vendors would start selling (or trying to sell) to line organizations rather than IT, wasn’t particularly common in my sampling. Only 27 of the 177 enterprises indicated that this had happened, but of course there’s always the chance that the attempts weren’t all recognized even by CIOs and CFOs. Both the CIO and CFO organizations indicated that they believed that their current policies on software and hardware purchase by line departments were satisfactory. In general, those policies required that the dollar amount be small (usually somewhere between five and ten thousand dollars maximum) and that the software and hardware be used entirely within the purchasing department. Network equipment purchasing by line organizations was rarely allowed (11 out of 177), and the same policy held with software that had to be run on IT-owned or multi-tenant facilities, though that was allowed by 38 of 177 enterprises if the hosting was done in the cloud.

I was interested in how the sellers might view the idea of going around IT, and there was considerably less consensus there than among the buyers. My sampling here is more limited (75 vendors), but over two-thirds of hardware and software vendors said that they would be “reluctant” to prospect line organizations, partly because they were afraid of alienating their major buyer (IT) and partly because they were afraid of creating a very visible failure that could taint their reputation within the company, or even in other firms in the same vertical market.

Cloud providers didn’t share this hesitation, though they were reluctant to talk about it, and their policies might vary significantly across providers, sales regions, and even individual salespeople. The best I could do here was to identify 49 of 177 companies who said their line organizations had been prospected by cloud providers or cloud resellers.

Perhaps the most interesting thing about all these shadow-IT, citizen-IT plays is that they dependably spark IT initiatives to enhance IT responsiveness and manage costs. Every enterprise that supported citizen developers also had a “growing” community of low-code/no-code developers within the IT organization, aimed at providing a rapid response to the line project requests. These resources were primarily directed at projects associated with long-running applications rather than the one-off applications that (slightly) dominate line department projects. About a quarter of enterprises either had, or were exploring, ways of supporting citizen-cloud requirements out of IT.

If you dig into citizen/line/shadow IT, you find the classic irresistible-force-versus-immovable-object dynamic in action. Line organizations see business problems and opportunities first, and because most have direct profit-and-loss responsibility, they also feel considerable pressure to address them. Given that enterprise IT has enough problems acquiring skilled technology workers, it would be surprising if line organizations had a rich pool of IT skills, so there’s rarely a good understanding of how to run an IT project, or how to assess whether one being run by the CIO is being run right. Some bickering here is inevitable, but that isn’t the main issue.

The main issue is that we’re continually trying to move IT closer to the worker. We’ve gone from batched punch-card retrospective reporting to “mainframe interactive” transaction processing, to distributed computing, to hybrid cloud. That evolution frames the capability to host more worker-interactive models, but it’s the applications that really establish the way workers and IT relate. What we have in shadow IT today is an attempt to make applications as dynamic as application hosting. That’s partly a low-code/no-code problem and partly a data architecture problem.

It’s hard to advance applications toward citizen development, even with coding tools, without creating “data middleware” that makes information access easy. The great majority of citizen developer projects are really data analysis problems, so enterprises who are trying to use citizen developers, or support the activity, should take a hard look at their data analysis and modeling strategies and try to create a unified data model to support line department information use. Without that, it may be more difficult to get the full benefit of low-code/no-code and even citizen-cloud empowerment.
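To illustrate what such data middleware might look like, here’s a minimal Python sketch. The entity names and sources are invented, and a real implementation would wrap warehouses, SaaS APIs, and data lakes rather than in-memory lists.

```python
# A hypothetical "data middleware" facade for citizen tools: one consistent
# way to ask for business data, hiding where (and how) it's actually stored.
class DataMiddleware:
    def __init__(self):
        self._sources = {}

    def register(self, entity, fetch):
        """Map a business entity name to a function that fetches its rows."""
        self._sources[entity] = fetch

    def query(self, entity, **filters):
        """Return rows for an entity matching simple equality filters."""
        rows = self._sources[entity]()
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]

mw = DataMiddleware()
# In practice this would wrap a warehouse, a SaaS API, a lake, and so on.
mw.register("orders", lambda: [
    {"region": "east", "status": "open", "value": 1200},
    {"region": "west", "status": "open", "value": 800},
])

# What a low-code tool (or the code it generates) would actually call:
print(mw.query("orders", region="east", status="open"))
```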

Cisco Has Giant Plans, but Faces Giant Hurdles Too

If there’s a vendor in the networking space that we need to watch closely, it’s Cisco. Not only are they a giant, and a master at account control, they’re also the most inventive when it comes to addressing (or creating, or some might say, fabricating) future trends. Their latest earnings report adds some color, perhaps, to some of their recent strategies.

Cisco reported a revenue beat, driven entirely by their infrastructure platforms category, which was up over thirteen percent year-over-year. Applications, security, and other products all came in below estimates. The Street view was that even though Cisco’s revenue and EPS (slightly, in the EPS case) beat estimates overall, and even though the outlook was good, gross-margin guidance and supply chain comments somewhat offset the positives. I think it’s the source of their revenue beat that raises questions.

What’s kind of ironic here is that Cisco has been doing all manner of things, including virtually all its recent M&A, to make itself less dependent on infrastructure platforms, the area that actually did well for them in the quarter. The areas that were judged most problematic by the Street were subscriptions, software, security, and applications, and those of course were the areas that Cisco was trying to boost with announcements and acquisitions. The big year-over-year revenue gain also came from a very low reference point.

It’s hard not to see the picture that Cisco painted as being one of a simple recovery of deferred spending across all types of customers. Despite swings in the stock market overall, there’s a general view that global economies are recovering and that businesses are now looking to execute on capital programs that were deferred because of the virus. That’s the classic rising-tide story, at least where companies have the account control to execute on the changing plans, which Cisco clearly does.

Account control issues may play into the problematic factors in Cisco’s numbers, too. Cisco has great strategic influence in the network, but software and applications typically fall outside its normal zones of influence in an account, and subscription and security are spaces shifting in response to market factors that are best exploited where strong account control extends beyond simple networking. The big push in subscription is Cisco Plus, a subscription hardware service that’s arguably more focused on the data center than on the traditional network, particularly in terms of which organizations would own the decisions within each buyer.

To all appearances, Cisco is shifting in response to a combination of enterprise truths. First, it’s becoming very difficult for enterprises to acquire and retain skilled technical people, in networking and in IT/operations. This has driven increased enterprise and even network operator interest in “managed services” for the former, and cloud provider partnerships for the latter. Second, the lack of strong internal technical skills has made many CFOs antsy about capital projects promoted by the technical teams. Combine that with the fact that enterprises are really consuming more VPN and SD-WAN services than creating network services of their own, and that most network growth comes in the data center network, and you can see why Cisco is looking at a broader footprint in terms of both products and influence.

Another shift that seems clear is Cisco’s “ecosystemic vision”. Decades ago, I read a sci-fi story about a future age when “marketing” consisted of using all manner of tricks to create what the story called a “circular trust”, a set of mutually supporting monopolies in which one product promotes another, which in turn promotes the first again in a feedback loop. Cisco seems to have this in mind, and wants to link their UCS server strategy, their data center strategy, their subscription strategy, their cloud strategy, and their software strategy so that accepting any of them pulls the rest through.

There are obvious benefits to this kind of “herd of camel’s noses” approach, and there’s also a competitive dynamic to consider. Remember Cisco’s strength in account control and strategic influence? If you have better, far better, control than your rivals, then broadening the playing field works in your favor, because you can respond broadly and your competitors can’t. Cisco’s rival Juniper, who arguably has individual elements at least as good and likely better for framing camel-herd ecosystem trusts, doesn’t do nearly as well in ecosystemic positioning, so Cisco is punching into unprotected competitive anatomy here.

Ecosystemic positioning demands an ecosystem, of course, and perhaps your ecosystem has to embrace stuff that’s still not in your own portfolio, so your herd of camel’s noses can start working their way into tents. The recent steps that Cisco has taken in monitoring (exploiting ThousandEyes and acquiring Epsagon) suggest that Cisco wants to spread a management/observability umbrella over both application components and networks. Their recent partnerships with cloud providers and interconnect players to enhance SD-WAN cloud routing suggest that they’re also trying to elevate their networking capabilities, extending from private elements and MPLS VPNs to cloud/Internet routing of application traffic. All of this is aimed at addressing what they clearly believe is a sea change in how enterprises deploy applications and build and use application connectivity.

A software success, a big one, would surely help them in their cloud and application management goals, and Cisco has done a ton of software M&A, even in their most recent quarter. A lot of their software revenue comes from that M&A, which has led some to suggest Cisco just bought its way into software revenue growth without much regard for what all the stuff did in an ecosystemic sense. There’s some validity to that point, but it’s also true that all the network vendors and network operators are starved for software engineering expertise, particularly people who know cloud-native techniques. Cisco may be trying to build a workforce as much as an ecosystem, or more.

It’s tough to call how this is going to play out. Cisco has a plan that, if executed well, could work. They have the account control needed to promote their plan to buyers, and they have the nakedly aggressive marketing needed to make that promotion easier for the salesforce to drive home. Extreme Networks’ recent M&A suggests it’s taking its own run at cloud networking, and an activist investor is rumored to be looking to split Juniper’s AI and cloud networking off the legacy switch/router business. That seems to prove it’s not just Cisco that sees the changes, and the opportunities. Still, Cisco could make a go of this.

“Could” is of course the operative term. Cisco, like many marketing giants, often mistakes saying something for doing something. There is no question that Wall Street likes the prospect of a bright new set of business opportunities, and that those same announcements offer additional credibility to Cisco’s sales pitches across the board. There’s also no question that at some point Cisco has to deliver substantial progress here or let all this ecosystemic stuff gradually fade away, in which case they’ll need something else to replace it.

5G Slicing, MVNOs, and Acquisition and Retention

The value of MVNO (mobile virtual network operator) services for wireline incumbents has always been a question mark, given that many cable operators who tried the notion out didn’t get what they’d hoped for. Still, as this Light Reading piece says, there have also been MVNO success stories among at least some of the larger cable companies. One of the 5G features that always gets touted, network slicing, has the potential to reduce the scale and cost of MVNO relationships with mobile operators. Could this pull through 5G Core, and could it induce cable companies of all sizes to have a go at mobile 5G?

There are quite a few MVNOs out there today, and some analysts on Wall Street have told me that a sizable majority of them are offering prepaid mobile service. The reason is that the big mobile operators all want to own the post-pay customers, whose spending is highest and whose loyalty is most important. The theory behind the prepay MVNO business is that the MVNO really has to cover only marketing costs and the wholesale charges paid to its supplier. An MVNO that can identify and exploit a valuable niche market, or in particular one that has a current customer base it can leverage, can make MVNO service a success.

Well, that’s been the theory, anyway. I’ve had a lot of discussions with mobile service planners, including MVNO planners, and there seems to be another dimension of the whole MVNO thing emerging. It may be that this new dimension will decide whether MVNO opportunities really drive network slicing in 5G, and how many smaller and more specialized non-mobile operators (including cablecos) will get into the MVNO space.

I’ve said in many of my blogs that operators almost universally spend more on operations expenditures (opex) than on capex. Mobile operators are no exception; their expensed costs exceed capex by an average of 45%. If you’re wondering why opex connects to MVNO drivers, a look at how those expenses are divided will answer part of the question. The biggest contributor to opex, meaning operations-related expenses, is customer acquisition and retention, which makes up about 40% of opex overall.
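To put illustrative numbers on those ratios: an operator spending $1 billion on capex would be spending roughly $1.45 billion on opex, and acquisition and retention at 40% of that comes to about $580 million, more than half the entire capex budget.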

Suppose you’re a cable company or other non-mobile incumbent. You can spend that money on advertising and maybe special offers, which is the traditional approach, but suppose that you offered all of your customers a sweetheart mobile service deal through an MVNO relationship. Could it be that such a deal, offered to your wireline customers only, would increase your acquisition and retention rates?

Some cable giants have suggested to me that this is exactly what happens, and that the most valuable thing to them is that such mobile service bundles tend to improve customer retention significantly. One cable strategist puts it this way: “mobile services to our base is like the ad you never run but your customer always sees.” If your customer looks to buy a different wireline broadband service, they have to consider that they’ll also need a new mobile service, and that’s likely to be more costly. Of course, any phone deals and the hassle of number portability can be an incentive to stay the course, too.

The potential for mobile services to increase wireline broadband and TV sales means that mobile service doesn’t have to be especially profitable in itself to be useful. Nobody I’ve talked with is prepared to say they would offer MVNO mobile service as a loss-leader incentive, but they’ve suggested that thin margins would be acceptable, and some said that if they found it improved retention in particular, the margins could be even thinner.

Obviously the profit margin for MVNO mobile service would depend in part on what the relationship with the real mobile operator costs. That’s where some (emphasis on “some”) operators think network slicing and 5G could come in. Some prospective MVNOs tell me that their mobile operator prospects have suggested that 5G network slicing could allow them to reduce their costs to MVNOs. Some mobile operators have said the same, but so far, nobody has been willing to offer exact numbers.

One mobile operator did say that if they found 5G network slicing reduced the cost of offering MVNO services by up to ten percent, they’d be inclined to drop their prices by the same amount to increase their business. If costs were to drop by more than that, they’d pass the “majority” of the reduction on to MVNO customers.

Both the mobile operators and prospective MVNOs said that WiFi hotspot and roaming support was as important to their cost/price relationships as network slicing. Anything that unloads data traffic, and in particular, video content, is attractive as a means of reducing network service costs to the MVNO. For this to work, both parties say, there has to be both WiFi calling and automatic switchover to WiFi when in a suitable hotspot. Smaller MVNO prospects would like to see mobile operators include both solicitation for hotspot locations and support for call/data management in their MVNO offers.

All of this suggests that mobile operators who want to capitalize on MVNO deals should create a kind of MVNO package that includes “slices”, hotspot deals, and WiFi integration into both call/text and data services. Larger MVNOs like Comcast or Charter may be able to do their own deals and also provide the technology support needed for other important features, but smaller ones likely couldn’t manage either task on their own, and certainly wouldn’t be comfortable trying.

It also shows us (yet again) that you can’t just toss 5G features out there and expect everyone to figure out how to exploit them. Supporting MVNO relationships at a reasonable cost for both parties is critical to the expansion of MVNO services, and critical to early adoption of network slicing. It’s also a way to facilitate bundling mobile and wireline services for operators who can’t afford or justify their own mobile network deployment. However, there’s more to it than just slicing 5G.

Another lesson to be learned here is that feature differentiation of connection services is very difficult; bits are bits. When you can’t differentiate on features, price differentiation and commoditization tend to follow, and that’s a path no network operator wants to take. The fact that customer acquisition and retention costs are a larger percentage of opex than they were five years ago, and are likely to be an even larger portion five years hence, illustrates this point. MVNO deals may help reduce those costs, but a better differentiation strategy is the only long-term solution for connection-oriented operators.