A New ONF Spin-Out Could Transform Private 5G

I have to admit I was surprised when I read that the ONF was launching a private venture, Ananki, to bring its Aether 5G open-model network implementation to market. As you can see in the release, Ananki will target the “private 5G space” and will use the ONF’s Aether, SD-RAN, SD-Fabric and SD-Core technologies. The new company will be backed by venture funding, and it will apparently target the “machine to application” or IoT market for private 5G specifically.

I don’t know of another industry body like the ONF that has spun off a company to sell something based on its implementation. It would seem that a move like that could compromise the group’s vendor members and alter its mission, though the ONF says it won’t be doing anything different as a result of the spin-out. It certainly raises some interesting market questions in any case.

I’ve always been skeptical about private 5G, even for IoT applications. It’s not a matter of private-IoT 5G not working, as much as it not being worth the effort when other technologies are suitable and easier to adopt and use. WiFi is obviously the best example, and the ONF says its goal is “making private 5G as easy to consume as Wi-Fi for enterprises.” Ananki proposes to do that by deploying its private 5G on public cloud infrastructure, creating a kind of 5G-as-a-service model, and I think this could be a breakthrough in at least removing barriers to private 5G adoption. There is, of course, still the matter of justification.

Details are still a bit sparse, but the Ananki model is a 5G plug-and-play. You get private white-box radios from either Ananki or one of their certified suppliers, SIM cards for your devices, and they use SaaS to spin up a private 5G cloud-hosted framework on the cloud of your choice. You manage everything through a portal and pay based on usage. All this is admittedly pretty easy.

Easy is good here, because enterprises are almost universally uncomfortable with their ability to install and sustain a private 5G network. In fact, fewer than 10% of enterprises say they would know how to build one. Other research I’ve done suggests that very few enterprises (again, less than 10%, roughly the statistical limit of my surveys) will even explore whether something they don’t know how to do could make a business case, so this lack of confidence reduces the prospect base for private 5G.

Which isn’t the same as saying that making 5G easy makes the business case, of course. The Ananki model is going to create a cost that WiFi or other traditional IoT network technologies might not create. That complicates the business case. The flip side is that private 5G might support some IoT applications more easily than one of the network alternatives. In the net, I think it’s safe to say that there is an opportunity base for the Ananki offering, and I also think it’s safe to say that it would be larger than the opportunity base for other private 5G models, such as those from the mobile network incumbents. How big, and how much bigger, respectively, I cannot say with the data I have.

Before we write this discussion off as an intellectual exercise, consider this truth: a 5GaaS offering based on open source, fielded by a company that’s established as a “Public Benefit Corporation”. This isn’t a common designation, but in short it’s a for-profit company whose board is free to consider the stated public benefit cited in its charter as a basis for decisions, and not just shareholder value. Were the technology foundation of Ananki proprietary, their cost base would be higher and their customer offering more expensive. Were Ananki a traditional corporation, it would have to evolve to maximize shareholder value, and the board might be challenged by 5G- or open-network-promoting decisions that didn’t benefit shareholders first.

Open-source is a cheaper framework for something like this. Open-source ONF Aether technology is what Ananki packages. Could that technology be packaged by others? Sure, or it wouldn’t be open source. Could other open-source network technology be packaged by someone to do the same thing? Sure. Could an open body create a cookbook so enterprises could package the necessary technology on their own? Sure. In other words, this approach could be extended and made competitive, creating market buzz, alternatives in approach, other features, and so forth.

It might also be extended in scope. The difference between public and private 5G comes down to spectrum and licensing. Aether is the basis for a DARPA (Defense Advanced Research Projects Agency) project, Project Pronto, to create a platform for secure, reliable research communications. DARPA’s origin was ARPA (without the “Defense”), and ARPANET is seen by most as the precursor to the Internet, so might Project Pronto launch something bigger and broader, and even become a model for service providers? Sure.

The ONF is, IMHO, an underappreciated player in the 5G space, as I noted in an earlier blog. Their Aether model takes open-model 5G beyond O-RAN and frames a very complete open infrastructure model for 5G. It’s possible that Ananki will drive that model to private 5G commercial reality, but even if it doesn’t, Ananki will validate the notion of open-model 5G from edge to core, and that will surely influence how service providers view 5G infrastructure. That might make it the most significant contribution to open-model 5G since O-RAN.

Mobile operators, and operators in general, are increasingly antsy about proprietary lock-in, as a recent story on Vodafone shows. Nokia’s decision to embrace O-RAN shows that even mobile network vendor incumbents recognize that there’s growing demand for a more open approach to network infrastructure. It could be that the Ananki model the ONF has devised will provide a more effective pathway to that.

Certainly it will be a test case, and one test that’s going to be interesting is the test of the way network equipment vendors respond. All network-operator-oriented standards-like groups tend to be dominated by equipment vendors because there are more vendors than network operators, and because network vendors have strong financial incentives to throw resources into these initiatives. It’s not only about contributing and learning, but also about controlling and obstructing. I’ve been involved in many of these initiatives, and I’ve never seen a single case where at least one vendor didn’t throw monkey wrenches into some works.

The big question with ONF/Ananki is whether the spin-out model that the ONF has now launched would work for other standards bodies. The first time something like this is done, it can sneak through because vendor opposition hasn’t really developed. If Ananki shows signs of failing, then vendors can paint the failure as a failure of open-model advocates to field anything that can actually be deployed. If it shows signs of succeeding, then will vendors try to ensure other bodies don’t follow the ONF’s approach?

Open model networking has a challenge that open-source doesn’t share. You can start something like Linux or even Kubernetes with a single system or cluster, respectively. Networks are communities, and so the first real implementation of a new strategy has to be a community implementation. Ananki is a path toward that, and while it may not be the only way to get to open-model networking in the future, it may be the only way that’s currently being presented for mobile infrastructure. In short, Ananki could revolutionize not just private 5G but open-model 5G overall.

Will the CNF Concept Fix NFV?

Recently, I blogged that the transformation of NFV from VNFs (VM-centric) to CNFs (“cloud-native” or “containerized” network functions, depending on your level of cynicism) was unlikely to be successful. One long-time LinkedIn contact and fellow member of standards groups said “If CNF never makes it… then the whole story is doomed and it’s a bad sign for Telcos’ future.” So…will CNF make it, and if it doesn’t then is the whole NFV story doomed, and if that is true, is it a bad sign for Telcos’ future? Let’s see.

Red Hat is one of the many cloud/software players that embraced NFV as part of their telco software story. They have a web page on the VNF/CNF evolution, and I want to use it as an example in the rest of this discussion. Not only is Red Hat a premier player, but their material also represents a “commercial” view of the situation rather than the view of standards-writers, which is often a bit obscure.

The Red Hat material starts with a true statement that, nevertheless, needs some amplification. “Virtual network functions (VNFs) are software applications that deliver network functions such as directory services, routers, firewalls, load balancers, and more.” That’s true, but it’s a specific example of the unstated general case that VNFs are a hosted/virtual form of a physical network function, or PNF, meaning some sort of device. The original NFV model was all about replacing devices, and routers were actually not on the original list.

The PNF origins meant that “In the initial transition from physical elements to VNFs, vendors often simply lifted embedded software systems entirely from appliances and created one large VM.” There was some debate on whether “decomposition” of existing PNFs into components should be required, but that was contrary to the base PNF-centric mission and had (obviously) little vendor support. Thus, VNFs were virtual devices, monoliths.

It took time, more than it should have frankly, but the NFV community eventually realized they needed something different. “Moving beyond virtualization to a fully cloud-native design helps push to a new level the efficiency and agility needed to rapidly deploy innovative, differentiated offers that markets and customers demand.” Since the cloud was shifting toward containers rather than VMs, “different” morphed into “containerized”. “CNF” could be said to stand for “containerized network function”, and to some it did mean that, but as the cloud became the specific target, CNF turned into “Cloud-native Network Function”.

Containers, of course, are not automatically cloud-native, and in fact my survey of enterprises suggests that most containerized applications aren’t cloud-native at all; they are not made up of microservices and are not fully scalable and resilient. Containers are a step forward for VNFs, but we might be better off thinking of the real goal as a “CNNF”, which obviously means a cloud-native network function. The CNNF concept would admit a service built from functions/lambdas, hosted serverlessly rather than in containers, and it would also focus on harmony with the cloud.

The final thing I want to pull from Red Hat is this interesting point. Referencing the need for an open, consistent foundation for telcos, they say: “Building that foundation on NFV (with VNFs) and especially cloud-native architectures (with CNFs) results in improved flexibility and agility.” This defines what I think is the critical shift in thinking. NFV means VNFs, and cloud-native or CNFs means not NFV, but cloud architectures. Red Hat is preparing a graceful transition out of NFV and into the cloud, retaining the notion of network functions but not the baggage of NFV.

Or maybe not, or maybe only some. If we assume containerized cloud-native elements, then we can assume that services built with CNFs would have all the container elements needed to deploy on an arbitrary cluster of resources (the “telco cloud”); they carry their instructions with them. A service could be visualized either as a set of functions that created a virtual device (what NFV would have treated as a monolith), or as a set of functions, period. That would seem to substitute cloud resource management and orchestration for NFV’s MANO, a cluster or clusters for NFVI, and CNFs for VNFs. One thing left over is the notion of the VNF Manager, or VNFM.

The goal of VNFM was/is to present function management in the same way that device management was presented when a VNF was created from a PNF. We can’t expect the cloud to manage network functions with cloud-specific tools; the CNFs are “applications” to the cloud, and their management would be more specialized. There’s also the question of the extent to which function management has to be aware of function hosting, meaning the underlying resources on which the CNFs were deployed. NFV never really had a satisfactory approach to that, just a name and a loose concept of PNF/VNF management equivalence.

CNFs could, then, fall prey to this issue. Before NFV ever came about, I’d proposed that hosted network features had to have a management interface that was composed rather than expressed, using what I’d called “derived operations”. This was based on the IETF draft (which, sadly, didn’t go anywhere) called “Infrastructure to Application Exposure” or (in the whimsical world of the Internet) i2aex. You used management daemons to poll everything “real” for status and stored the result in a database. When a management interface was required, you did a query, formatted the result according to your needs, and that was your API.
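
To make the “derived operations” idea concrete, here is a minimal sketch, assuming a hypothetical status database and poller; none of these names come from the i2aex draft itself. A daemon polls the “real” resources and stores their status centrally, and a management interface is simply a formatted query over that store.

```python
# A minimal sketch of "derived operations": poll real resources into a status
# database, then compose management views by query. All names here
# (poll_resource, derived_view) are illustrative, not taken from i2aex.
import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (resource TEXT, metric TEXT, value REAL, ts REAL)")

def poll_resource(resource: str) -> dict:
    # Placeholder for an SNMP/REST poll of a "real" resource.
    return {"cpu_load": 0.42, "packet_loss": 0.001}

def poll_daemon(resources):
    # The management daemon: poll everything real, store the results centrally.
    for r in resources:
        for metric, value in poll_resource(r).items():
            db.execute("INSERT INTO status VALUES (?, ?, ?, ?)",
                       (r, metric, value, time.time()))
    db.commit()

def derived_view(resources, metric):
    # A "composed" management interface: a query formatted to the consumer's
    # needs, never a direct poll of the shared resources themselves.
    rows = db.execute(
        "SELECT resource, value, MAX(ts) FROM status WHERE metric = ? "
        "AND resource IN (%s) GROUP BY resource" % ",".join("?" * len(resources)),
        [metric, *resources]).fetchall()
    return {r: v for r, v, _ in rows}

poll_daemon(["vnf-a", "vnf-b"])
print(derived_view(["vnf-a", "vnf-b"], "cpu_load"))
```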

The advantage of this approach is that it lets you grab status information from shared resources without exposing those resources to services, which could at best overload resources with status polls, and at worst lead to a security breach. We don’t have that in CNFs as the NFV ISG sees them, and Red Hat doesn’t appear to assume the approach either.

VNFM seems to be the only piece of NFV that’s 1) needed for cloud-native function virtualization, and 2) not specifically adapted to the cloud by the CNF initiative. Since I would argue that the NFV ISG version of VNFM wasn’t adequate for multi-tenant services in the first place, adapting that wouldn’t be worth the effort. Since the cloud really isn’t addressing the specific issues that VNFM did (inadequately, as I’ve said), we can’t expect the cloud to come up with its own strategy.

When I advocate that we forget NFV, write off the effort, I’m not suggesting that we don’t need something to support hosted virtual functions in networking, only that NFV isn’t and wasn’t it. I’d also suggest that if the NFV ISG and NFV supporters in the vendor community think that CNFs are necessary, then they should accept the fact that just having CNFs doesn’t make NFV a cloud technology. We need the equivalent of VNFM, and I think the i2aex model I mentioned is at least a credible pathway to getting that. It may not be the only one, but it’s at least an exemplar that we could use to undertake a broader search.

Where does this lead us in answering the question I raised at the start of this blog? Vendors are answering it by blowing kisses at NFV while quietly trying to do the right thing, IMHO. That’s better than doing the wrong thing, but it means that the NFV ISG initiatives aren’t working, and won’t work, and the reason is that standards-related bodies worldwide have always been reluctant (well, let’s be frank, unwilling) to admit their past efforts were wasted. We’ve spent a lot of time trying to make cosmetic changes to NFV rather than substantive ones, all the while ignoring the truth that the cloud has passed it by in almost every area, and leaving open the one area where the cloud probably won’t help. CNFs won’t fix NFV, and if that means NFV is in trouble, then we’d better start dealing with that.

Nokia, 5G Disruption, and 5G Realization

Some on Wall Street think Nokia is a disruptor in disguise, reinventing itself quietly to seize control of networking through 5G. I don’t agree with the article’s line that “5G is the Next Industrial Revolution” (hey, this is media, so do we expect hype or what?), but I do think that the article makes some interesting points.

Network transformation requires money to fund the transforming, period. That’s the irreducible requirement, more so than any specific technology shift or service target. In fact, those things are relevant only to the extent that they contribute to the “money”. What makes 5G important isn’t that it’s revolutionary in itself, but that it’s a funded step on the way to a different network vision. The revolution isn’t 5G, but what 5G could do to change how we build network services. Emphasis on the “could”.

I’ve blogged about Nokia and 5G before (HERE and HERE), particularly with regard to its fairly aggressive O-RAN commitment. I believe that O-RAN is the key to getting a new open-model architecture for network infrastructure into play, and that it’s also the key to starting a 5G-driven transformation of network-building. But if 5G is only a stepping-stone, then Nokia needs to support the pedestal it leads to and not just the step. Do they know that, and can they do it?

If we had ubiquitous 5G, would it get used in IoT and other applications? Surely, providing those applications could create a service model that delivered that irreducible “money” element. 5G does not remove all the key barriers to any of these applications, and for most it doesn’t remove any barriers at all. Yes, enormous-scale public-sensor IoT (for example) could demand a different mobile technology to support it, but we don’t have that and we don’t have clear signs that we’re even headed there. That’s the challenge Nokia faces if it’s to exploit whatever the “Next Industrial Revolution” really is.

If that next industrial revolution isn’t driven by 5G, it’s largely because it’s not driven by connectivity alone. Applications are what create value, and delivering that value is the network’s mission. Does Nokia realize that, and have they taken steps to be an application-value player? Indications are there, but not prominent.

Nokia’s website has two “solutions” categories that could validate their effort in being a player in the creation of network-transforming applications, both under the main category of IoT. The first is IoT Analytics and the second IoT Platforms. Unfortunately, IoT Platforms is all about device and connection management and not about hosting IoT applications. IoT Analytics does have useful capabilities in event correlation, analysis, and business responses. Since the article I referenced at the start of this blog is really largely about IoT, you could take Nokia’s IoT analytics as a step toward realizing the “disruptor” claim.

The problem is that every public cloud provider offers the same sort of toolkit for IoT analytics, and there’s a substantial inventory of open-source and proprietary software that does the same thing. If you explore Nokia’s IoT strategy, it seems to me that it’s aimed less at the enterprises and more at service providers who want to serve those enterprises. Those service providers would still need to frame a service offering that included Nokia’s IoT elements, but couldn’t likely be limited to them because of competition from public cloud and open source. They’d also have to overcome their obvious reluctance to step beyond connection services, and that might be the tallest order of all.

There’s also a bit of a Catch-22 with 5G, IoT, and other edge applications. The applications themselves would surely roll out faster if they weren’t 5G-specific, since connectivity can be provided by LTE or even WiFi in many cases. The problem for Nokia is that a decision to accelerate the applications by making them dependent on connectivity in general, rather than on 5G, would mean they wouldn’t pull through Nokia’s 5G story. That could put a lot of Nokia’s potential disruptor status at risk. It’s going to be interesting to see how Nokia balances this over the rest of 2021 and into next year.

Is Verizon’s MEC “Land Grab” Really Grabbing any Land?

Verizon thinks it’s out front in what it calls an enterprise “land grab” at the edge. Of course, everyone likes to say they’re in the lead of some race or another, and Verizon’s position in the edge is really set by a deal with Microsoft and Amazon. Does this mean that they’re just resellers and not in the lead at all, or that maybe there are factors in establishing leadership that we’ve not yet considered?

One thing that jumps out in the article is the reference to Mobile (or Multi-Access) Edge Computing, or MEC, versus the more general concept of edge computing. The article blurs the distinction a bit, quoting an investor transcript where Verizon’s CEO says “We are the pioneer and the only one in the world so far where we launch mobile edge compute, where we bring processing and compute to the edge of the network for new use cases.” That implies a more general edge mission. However, the same transcript quotes the Verizon CEO saying “First of all is the 5G adoption, which is everything from the mobility case, consumers and business and then fixed wireless access on 5G.” This seems to focus everything on 5G and even private 5G.

There’s some hope that other parts of the transcript could bring some clarity to the picture. Verizon also said that “We have Amazon and Microsoft being part of that offering. They are a little bit different. One is for the public mobile edge compute and one is for the private mobile edge compute.” That’s hardly an easy statement to decode, but let’s give it a go, based on Verizon’s website and previous press releases.

“Public” and “private” MEC here refer both to whether public or private 5G is used and to whether the cloud-provider-supplied MEC hosting is linked with Verizon’s actual service edge, or whether it’s hosted on the customer premises. The Amazon Wavelength offering is integrated at the Verizon edge (the public mobile edge), and the Microsoft Azure relationship uses Verizon’s 5G Edge to support a private RAN (LTE or 5G) and host Azure components on the customer’s premises (the private mobile edge).

In both cases, the goal of MEC is to introduce a processing point to latency-sensitive applications that sits close to the point of event origination, rather than deep inside the Internet/cloud. Where there’s a concentration of IoT devices in a single facility, having MEC hosted there makes a lot of sense. Where the IoT elements are distributed or where the user doesn’t have the literacy/staff to maintain MEC on premises, a cloud option might be better.

Supporting both, and with different cloud partnerships, seems aimed more at creating cloud provider relationships than at actually driving user adoption. Most users would likely prefer a single model, and of course either of Verizon’s MEC options could (with some assistance, likely) be made to work either as a cloud service or as a premises extension of the cloud. That’s not really made clear, which seems to cede the responsibility for creating real users and real applications to somebody else.

One interesting point the article and transcript make is that Verizon is saying it doesn’t expect to see meaningful revenue from edge services until next year. Add to that the fact that Verizon’s CEO says they’ve “created” the market and you have to wonder whether there’s a lot of wishful thinking going on here. The biggest wish, obviously, is that somebody actually builds an application that can drive the process, make the business case for MEC and itself. Who would that be? There are three broad options.

Option one is that the enterprises build their own applications and host them on Verizon’s MEC solution. Verizon’s material suggests that this is the preferred path, but since the material is web collateral it’s possible that the preference is just a bias toward the kind of organization who’d likely be looking for MEC offerings rather than IoT applications. Verizon’s role in this is valuable to the extent that it has some account control among these prospective buyers.

Option two is that public cloud providers would build their own applications and offer them as a part of Verizon’s MEC. This option could be more populist; smaller users without their own IoT development capability could easily adopt a third-party service. However, it could be a complicated option to realize because the cloud providers already have edge computing strategies and application tools, and Verizon is unlikely to have great influence in smaller firms to justify their taking a piece of the action. The ability to integrate with Verizon’s network (the Amazon Wavelength variant) could demonstrate a clear Verizon benefit, though.

This is the option that Verizon seems to be pursuing, at least as far as what they’ve told Wall Street. At the Goldman Sachs Communicopia event, they indicated that they were getting traction from enterprises on private 5G and were working with the cloud providers on edge computing applications. I can’t validate either from my own contacts with enterprises, but it does seem that the public cloud deals option would be the one most likely to bear fruit in the near term.

The final choice would be that third-party developers would use the Verizon MEC service. This would empower users of all kinds if it could be made to work, but it’s difficult to see how Verizon would be able to create a good program. Their IoT developer program was focused on pure IoT connectivity, and Verizon doesn’t have any particularly credible account relationship with the software/application side of enterprise CIO organizations.

If we assume that the most certain path to business success is to own your target market, it’s hard to see how Verizon’s “land grab” grabs any useful land under any of these three options. What seems to be on the table is simply a commission on selling cloud services someone else creates. It’s not that the options aren’t viable application pathways, as much as that they’re not particularly centered on Verizon and would be difficult to realign without considerable Verizon effort. Effort, sad to say, that’s not likely to be forthcoming. If we stay with the “land grab” analogy here, what Verizon seems to have grabbed is the cornfield in the Field of Dreams.

Why Not NFV?

I’ve blogged a lot about the relationship between 5G and edge computing. In most of my blogs I’ve focused on the importance of coming up with a common software model, a kind of PaaS, that would allow 5G deployment to pull through infrastructure that would support generalized edge computing. Most of those who have chatted with me on that point feel that “the cloud” offers the path to success, but a few wonder why 5G’s NFV (Network Function Virtualization) reference doesn’t mean that NFV is the solution. Obviously, we need to look at that question.

The fundamental goal of NFV was to convert network appliances (devices) from physical network functions or PNFs to virtual network functions or VNFs. The presumption inherent in the goal is that what is hosted in NFV is the equivalent of a device. There may be chains of VNFs (“service chaining”), but these chains represent the virtual equivalent of a chain of devices. Not only that, service chains were presumably connected by “interfaces” just like real devices, and that means that the concept of a “network” or “service” had to be applied from the outside, where knowledge of the (popularly named) “gozintos” (meaning “this goes into that”) is available.
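
As a purely illustrative example of that device-centric view, the toy sketch below treats each VNF as a virtual device with an input and an output, and keeps the chain definition (the “gozintos”) outside the functions themselves; the firewall and load-balancer stand-ins are hypothetical, not real network features.

```python
# A toy illustration of "service chaining" as NFV saw it: each VNF is a
# virtual device, and the chain itself is knowledge applied from outside.
def firewall(packet):
    # Pass only web traffic; drop everything else, like a firewall appliance.
    return packet if packet.get("port") in (80, 443) else None

def load_balancer(packet, servers=("10.0.0.1", "10.0.0.2")):
    # Pick a target server, like a load-balancer appliance would.
    packet["target"] = servers[hash(packet["src"]) % len(servers)]
    return packet

SERVICE_CHAIN = [firewall, load_balancer]   # defined by the service, not by the VNFs

def forward(packet):
    for vnf in SERVICE_CHAIN:               # traverse the chain like chained devices
        packet = vnf(packet)
        if packet is None:                  # a "device" dropped the packet
            return None
    return packet

print(forward({"src": "192.0.2.10", "port": 443}))  # passes the chain
print(forward({"src": "192.0.2.11", "port": 25}))   # dropped by the firewall
```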

One reason for this was that the NFV ISG wanted to preserve the management/operations framework of the network through the PNF-to-VNF transition. In short, a VNF should appear to a management system that managed PNFs as just another device. The only incremental management/operations requirement that NFV should create is associated with the aspects of a VNF that don’t apply to PNFs. You don’t “deploy” a PNF in a software sense, nor do you have to manage the hosting resources, so stuff like that was consigned to the Management and Orchestration (MANO) or VNF Manager (VNFM), and the Virtual Infrastructure Manager (VIM).

5G specifications from the 3GPP, which have evolved over a period of years, evolved as other 3GPP work did, meaning they assumed that the functional elements were devices with device-like interfaces. 5G used NFV because 5G defined what NFV was virtualizing, in short. If we could say that generalized edge applications were (like 5G) based on virtualizing devices, this model would work, at least to the same extent that NFV works overall.

Well, maybe not totally. One issue with NFV that emerged more from the evolution of the proof-of-concept trials and vendor interests was that NFV turned out to be focused on services deployed one-off to customers. The most popular concept in NFV is universal CPE (uCPE), which is a generalized device host for an inventory of per-customer service features. NFV didn’t really address the question of how you’d deploy shared functionality.

I’ve said many times that I do not believe that NFV created a useful model for virtual function deployment, so I won’t recap my reasons here. Instead, let me posit that if NFV were the right answer, we would first see a bunch of NFV adoptions, and we’d see NFV incorporated in 5G broadly. Neither is the case, but let me focus on the second point here.

O-RAN is the most successful virtual-function initiative in all of telecom. What’s interesting about it from the perspective of edge computing is that O-RAN’s virtualization model (O-Cloud) is explicitly not based on NFV elements. Yes, you could probably map O-Cloud to the NFV Infrastructure (NFVi) of the NFV ISG specs, but the actual connection point is described in current material using terms like “cloud stack”. That means that just as you could map O-Cloud to NFV, you could also map it to VMs, containers, Kubernetes, and so forth. It’s cloud and not PNF in its model.

One obvious consequence of this philosophical shift is that the MANO and VNFM elements of NFV are pushed down to become part of the infrastructure. Whether it says so or not, O-RAN is really defining a PaaS, not the server farm which is the presumptive NFVi framework. The VIM function in O-RAN is part of O-Cloud, and there is no reason why “O-Cloud” is anything other than some generalized cloud computing framework. Thus, at this level at least, O-RAN is a consumer of edge services where NFV defines device virtualization services.

From this so far, you might be inclined to argue that the differences between the cloud and NFV approaches are little more than semantics. Couldn’t you consider any feature/function as a device? Isn’t NFV already pushing to accept containerization and not just virtual machines? Well, that’s the problem with simplification; it can lead you astray. To understand what the issues are, we have to do some further digging.

NFV, strictly speaking, is about deploying virtual devices more than creating services. The service management functions required by operators presumably come from the outside, from the OSS/BSS systems. In the cloud world, an “application” is roughly synonymous with “service”, and orchestration tools like Kubernetes deploy applications in a generalized way, with service meshes like Linkerd handling connectivity among their components.

O-RAN, strictly speaking, deploys 5G RAN elements, so it’s a bit of a one-trick pony. Its service knowledge is embedded in the RAN/Radio Intelligent Controller (RIC) components, both the near-real-time and non-real-time pieces. The responsibility for management and orchestration of the pieces of O-RAN rests with them, and so you could argue that the RICs combine to act almost like an OSS/BSS would act in the NFV world, were we talking about a customer service (what NFV targeted, you’ll recall) and not a multi-tenant service like 5G.

In order to make NFV work for O-RAN, and for 5G overall, you’d need to add service knowledge, a model of the service. Even ONAP, which presumes to be the layer above NFV’s elements in the ETSI approach to virtualized functions, doesn’t have that (which is why I told the ONAP people I wouldn’t take further briefings till they supplied and properly integrated the model concept). That would be possible, but in the end all it would do is allow other deeper issues with NFV to emerge.

The long and short of NFV is that it isn’t a cloud-centric approach to hosting functions, and since hosting functions of any sort is really a cloud function, that’s a crippling problem. The cloud has advanced enormously in the decade since NFV came along, and NFV has struggled to advance at all. Some of that is due to the fact that NFV efforts aren’t staffed by cloud experts, but most is due to the fact that there are simply not very many people working on NFV relative to the number working on the cloud. A whole industry has developed around cloud computing, and you can’t beat an industry with a cottage industry. That’s what NFV is, in the end.

Technically, what should NFV be doing? There is really nothing useful that could be done at this point, other than to admit that the initiative took the wrong path. Whatever progress we make in function hosting in the future, in 5G, edge computing, IoT, or anywhere else, is going to be made in the cloud.

What Can We Learn from O-RAN’s Success?

According to a Light Reading article on Open RAN, “The virtualized, modular RAN will be here sooner rather than later and vendors will be tripping over each other as they try to get on board.” I agree with that statement, and with much of the article too. That raises the question of just what the success of an open-model RAN (O-RAN, in particular) will mean to the marketplace, buyers and sellers.

There is no question that the relationship between hardware and software has changed dramatically, and the changes go back well beyond the dawn of Linux where Light Reading starts its discussion. Back in the 1970s, we had a host of “minicomputer” vendors, names like Data General, DEC, CDC, Perkin-Elmer, and more. You don’t hear much about those players these days, do you? The reason is software. In the early days of computing, companies wrote their own software, but that limited computing growth. Third-party software was essential in making computing pervasive, and nobody was going to write software for a system that hardly anyone had. The result was a shift to an open-model operating system that could make software portable, and it was UNIX at the time, not Linux, but Linux carries the water for open-model hosting today.

What we’re seeing now, with things like O-RAN and even white-box networking, is the application of that same principle to the networking space. 5G is demonstrating that hosted functions can play a major role in mobile networks, and they already play a major role in content delivery. Security software, which is an overlay on basic IP networking, is demonstrating that same point. How long will it be before we see the same kind of shift in networking that we’ve already seen in computing? This is the question that Cisco’s software-centric vision of the future (which I blogged on yesterday) should be asking. Short answer: Not more than a couple years.

The O-RAN model is particularly important here, not because it’s a new thing (as I just noted, it’s just the latest driver toward openness), but because it’s a bit of a poster child for what it takes for something that’s clearly in the buyer’s best interest to overcome seller resistance.

O-RAN as a standards-setter is dominated by operators, something that vendors have always hated and resisted. Past efforts to let network operators dominate their own infrastructure model have been met with resistance in the form of (at the minimum) vendor manipulation and (at worst) threats of regulatory or anti-trust intervention. While the O-RAN Alliance has recently had its share of tension, they seem to have navigated through it.

Why is this important? Well, Linux was the brainchild of Linus Torvalds, a legendary/visionary software architect who did the early work, building on the APIs that UNIX had already popularized. Other open-source projects have been projects, and increasingly projects under the umbrella of an organization like the Linux or Apache foundations. In short, we evolved a model of cooperative design and development, and one of the most important things about O-RAN is that it’s making that model work in the telecom space, where other attempts have failed.

It’s also important because of the unique role that 5G and O-RAN are likely to play in edge computing. Any salesperson will tell you that the first test of whether someone or some organization is a “good prospect” is whether they have money to spend. 5G has a budget and budget momentum, which means that a big chunk of carrier capex for the next three years or so will be focused on 5G infrastructure. What will that infrastructure look like? O-RAN’s goal is to ensure it doesn’t look like a traditional network, a vendor-proprietary collection of boxes designed to lock in users. Open-model 5G, including O-RAN, could deliver us closer to the point where software is what’s important in networking, and devices are just what you run the software on.

What does this have to do with the edge? The answer is that if O-RAN, and 5G in general, delivers a “middleware” or “PaaS” that can not only support 5G elements, but also elements of things like CDNs or general-purpose edge computing, or (dare we suggest!) IoT, then that set of software tools becomes the Linux of networking.

The rub here, of course, is that Linux had the UNIX APIs (actually, the POSIX standards derived from them) to work from, and for networking we’re going to have to build the APIs from the tools, designing the framework for edge hosting based on (at least initially) a very limited application like 5G/O-RAN. Not only is that a challenge in itself, but 5G in its 3GPP form mandates Network Function Virtualization (NFV), which is IMHO not only unsuitable for the edge mission overall, but unsuitable for 5G itself.

O-RAN has at least somewhat dodged the NFV problem by being focused on the RAN and the RAN/Radio Intelligent Controller or RIC, which is outside the 3GPP specs. This happy situation won’t last, though, because much of the RAN functionality (the CU piece of O-RAN) will likely be metro-hosted, and so will 5G Core. The latter is defined in NFV terms by the 3GPP. Will the 3GPP change its direction to work on 5G as an edge application? Doubtful, and even if it did, it would likely take five years to do, and thus be irrelevant from a market perspective.

It also seems unlikely that the O-RAN Alliance will expand its scope (and change its name?) to address either 5G Core or edge computing in general. There’s little sign that the operators, who drive the initiative, are all that interested, likely because they’ve supported NFV and don’t see any need to expand themselves into the edge at a time when they’re trying out cloud provider relationships to avoid that very thing. All these factors would tend to make another operator-driven alliance to address the edge issue unlikely to succeed as well.

So are we to wait for Linus Torvalds to rescue us? Well, maybe sort of, yes. It may be that a vendor or perhaps a few vendors in concert will have to step up on this one. The obvious question is which vendors could be candidates. Software-side players like Red Hat or VMware have 5G credentials and understand cloud computing, but they also seem wedded to NFV, which is useless for generalized edge computing. Network vendors have generally not been insightful in cloud technology. Cloud providers would surely have the skills, but would surely be trying to lock operators into their solution, not create an open model, and that’s not likely to be accepted.

The big lesson of O-RAN may be that we’re only going to get effective progress in new applications of technology when users rather than vendors dominate the efforts. The best of open-source has come from executing on a vision from a visionary. We need to figure out how to turn buyer communities into visionaries, and that’s the challenge that we’ll all confront over the coming years.

Is Cisco’s Software-Centric Strategy Really a Strategy?

Cisco’s Investor Day was all about their growing position in the software space. Software grew from 20% of revenues in 2017 to 30% in 2021, which is certainly a validation of their claim of software growth. What’s far less clear is whether Cisco’s avowed shift to software is offensive or defensive, and whether it can sustain bottom-line revenue growth for Cisco in the longer term.

Cisco’s product revenues, which include software, were $36.014 billion for the year ending July 31st, and were $39.005 billion for the year ending July 31, 2019, a decline of just short of three billion. If software revenues are indeed growing, then hardware sales declined more than that between 2019 and 2021. The key point, I think, is that Cisco is expecting to have its hardware “persist” longer in accounts, and will rely on software subscriptions for the devices for annual revenues.

This theme is attractive to Wall Street, who believe that hardware is under pressure both from budgets and competition, and who apparently think that the sky’s the limit with respect to software. If that were true, then Cisco is rushing down the Yellow Brick Road, but is it? There are three challenges.

Challenge One is feature exhaustion. Most of Cisco’s “software” is really what some (including me) have cynically called “underware”, the software designed to create functionality for a hardware platform like a switch or router, or to manage networks of those platforms. In a real sense, it’s like a form of operating system. Even before Cisco separated its hardware and software, there were plenty of users of its IOS network operating system family who didn’t upgrade. The thing that keeps users upgrading and paying for subscriptions is new capabilities. It’s not easy to add those new things year after year, and maintain a sense of user value.

Many of my readers may not remember Novell, who stepped out in the 1980s as the darling of workgroup networking. NetWare was the network operating system of choice for enterprises, the source of print sharing and file sharing, the start of the notion of resource catalogs, and a lot more. The problem was that Novell made money by selling and upgrading software, and over time they used up more and more of the stuff users valued. Eventually, there wasn’t much left to add.

That leads us to the second challenge, which is exploding competition. Novell was hit hard when Microsoft added basic resource sharing to Windows, which is one example of exploding competition. Cisco can expect other network equipment vendors to counter its own “disaggregation” of software and hardware, but there’s a more serious competitive risk. In order to create value-add to justify continued subscription revenue, Cisco will have to expand beyond basic routing/switching. That leads it upward into hosting, which of course they’ve offered via their UCS servers.

Well, maybe. At Cisco’s recent investor conference, many were surprised to see that Cisco made almost no mention of UCS servers. It’s hard to see how Cisco could really be aiming to be a serious software player if the only “software” they offer is that “underware”. Competition and the need to create update pressure for customers would drive Cisco upward, into areas where the software is more generally linked to computing tasks. How could that be done without servers to run it on? Why, if you had servers in your inventory already, wouldn’t you prepare a place for yourself in the general or at least edge-focused hosting market, by pushing the fact that you’re a server vendor already?

The edge-focused piece is of particular importance because Cisco, like all network vendors, would probably find the easiest path out of pure packet-pushing to be edge computing, which is evolving from 5G hosting missions that are (as I’ve noted) already budgeted. Not only that, server and software vendors like Dell, HPE, IBM/Red Hat, and VMware are all going after the 5G hosting and telecom opportunities, and their efforts threaten network equipment.

That threat is multiplied by the possibility that the same software would be hosted in both servers and white boxes. If major software players offer that sort of dualistic software, then a Cisco retreat from hosting might well result in software players creating a growing customer interest in white-box switches and routers. That could cut into Cisco’s device sales and make them even more dependent on a strong, expanding, software strategy.

The final challenge is internal push-back. Cisco has tried the software game for ages, and it’s never measured up to their hopes. I think that a part of that is due to resistance from the traditional hardware types that have dominated Cisco engineering for decades. Today, as I’ve already noted, Cisco is really not a software company at all, but a hardware company that has separated out its previously bundled software. That move didn’t create the same back-pressure that earlier and broader software initiatives created, but Cisco can’t stay on that limited software track and keep revenue flowing.

The further Cisco’s software aspirations diverge from “underware”, the harder it will be for Cisco to rely on the skills it has in house, and the more new people will be needed. As that influx shifts the balance of power, it only magnifies the resistance of the employees who have been with Cisco the longest, and who have likely worked up to senior positions. Will those people embrace the new software dominance? Doubtful.

The net of all of this, for Cisco, is that making software claims is easier than making software the center of a future revenue universe. The most problematic thing about their investor-meeting story, in fact, is the lack of emphasis on UCS. That’s Cisco’s biggest, and most unique, asset among network equipment vendors that aren’t mobile incumbents, and it would logically seem it should have been a focus of the discussions, which it decidedly was not. There are three possible reasons why it wasn’t.

First, Cisco might have no intention of broadening its software position beyond “underware”. If that’s the case, then their only justification for their story to investors would be to buy time while they try to figure out where they go next. That’s not a good thing, obviously.

Second, Cisco might actually believe that they only need “underware” to succeed in software. If that’s true, then I think that instead of looking at a rebirth, as Cisco and the Street have suggested, we might be looking at the start of a major slip from Cisco dominance. Think Novell, and that’s a very bad thing.

Third, Cisco might be preparing a true software blitz that will indeed involve UCS, and are just not ready to expose their plans. That would avoid having competitors in the server/software space jump in to build barriers to Cisco before Cisco really has anything to compete with the competitive offerings. That’s semi-OK as a reason for their seemingly ignoring UCS, providing that they actually have that “true software blitz” in the wings, and quickly.

A software strategy for Cisco obviously has to meet Cisco’s own revenue/profit goals, but to do that it has to meet the goals of the buyers and deliver the ROI they’ll demand. Right now, Cisco has a software transition strategy that isn’t clearly heading toward a state that delivers on that, and they need to fix that quickly or they’ll not only fail to deliver on their promises in 2022, they’ll put their whole software-centric vision at long-term risk.

Juniper Dips Another Toe into 5G Metro (But Not the Whole Foot)

Juniper’s decision to harmonize its implementation of the Open RAN RIC (Radio/RAN Intelligent Controller) with Intel’s FlexRAN program raises again a question I’ve asked in prior blogs, which is whether a network vendor who isn’t a mobile-network incumbent can play in 5G, and by extension whether they could play at the edge. I believe that network success (for any sort of vendor) is impossible without a 5G strategy, because 5G is what has funding to change the network in 2022 and 2023. Does the Juniper move with Intel move the ball for its 5G strategy?

5G is a kind of transitional technology in that it transitions networks from being strictly appliance-based to being a combination of devices and hosted features. The impact of this transformation is likely to be felt primarily in the metro portion of the network, deep enough to justify hosting resources but close enough to 5G RAN access to be impacted by O-RAN and 5G Core implementations. Because metro hosting is also where edge computing likely ends up, the ability of 5G to pull through an edge hosting model that could be generalized may be critical to exploiting the edge.

5G tends to call out NFV hosting, but many 5G RAN and open-Core implementations aren’t NFV-specific. That means that there could be a common model adopted for the orchestration and lifecycle management of 5G and edge applications, if such a model could be defined. However, the issue of orchestration and lifecycle management isn’t the only issue, or even the largest issue, IMHO. That honor goes to the relationship between networking and hosting in the metro zone, and it’s in that area that Juniper, as a network vendor, has the biggest stake.

Operators’ fall technology planning cycle gets underway mid-September, and 5G is the most secure budget item on the 2022 list, and the most significant technology topic of this fall’s cycle. Network vendors without a seat at the 5G table face a significant risk of disintermediation. Ciena, for example, announced last week that it was acquiring the Vyatta switch/router software assets that AT&T had picked up long ago, noting that “The acquisition reflects continued investment in our Routing and Switching roadmap to address the growing market opportunity with Metro and Edge use cases, including 5G networks and cloud environments.”

One big problem for network vendors in becoming 5G powerhouses is that they have little opportunity for differentiation, since they don’t have either specific mobile network experience or a particularly large product footprint in the space. Juniper’s decision to roll its own near-real-time RIC into the Intel 5G ecosystem is a way for the company to become a part of a credible, broad, publicized, 5G model. Intel saw 5G as an opportunity to break out of the processor-for-PCs-and-servers market and into the device space. That was important because devices, including white-box switches/routers, could end up as big winners in open-model 5G and O-RAN, and that would risk having non-Intel processors gaining traction in metro hosting. That’s not only a threat to Intel expansion, but also to its core server business.

It’s also a threat to the network equipment business, particularly that related to metro. If you have both servers and networking in the same place, and if 5G standards favor hosting of many 5G elements, you could certainly speculate that servers with the proper software could absorb the mission of network devices. Vyatta is proper software for sure. I know a lot of operators who spent money on Vyatta, going as far back as 2012. Ciena’s move thus makes sense even if you assume servers could take over for routers and switches. Since Vyatta software would also run, or could be made to run, on white boxes, it could have a big play in a 5G-driven metro play, if we consider that 5G is where most of the budget for near-term network change in the metro is coming from.

The problem that vendors like Juniper, Ciena, and Cisco face in 5G goes back to the question I asked above, which is whether 5G in the metro creates a bastion of hosting at the edge, or a bastion of networking in content data centers. Or both. If metro infrastructure is hosting-centric and if 5G white-box thinking dominates there, then open-model devices could own the metro and the current network vendors could see little gain from the metro/edge build-out. It’s that risk that network vendors have to worry about, and despite Ciena’s positioning of their Vyatta deal, they still have to establish a 5G positioning, not just a switch/router software positioning.

The keys to the metro kingdom, so to speak, lie in the RIC, or more properly, in the two RICs. The RIC concept is a product of the O-RAN Alliance, designed primarily to prevent a monolithic lock-in from developing in the 5G RAN, particularly in the “E2 nodes”, the Distributed Unit, Central Unit, and Radio Unit (DU/CU/RU) elements. The near-real-time RIC (nearRT RIC) is responsible for the management of microservice apps (xApps) that are hosted within this “edge” part of the 5G RAN, opening that portion up to broader competition by making sure its pieces are more interchangeable. You could say that the nearRT RIC is a bridge between traditional cloud hosting and management and the more device-centric implementations likely to be found outward toward the 5G towers.

You could also say that the non-real-time RIC (nonRT RIC) is a bridge between the 5G RAN infrastructure and the broader network and service management framework. It’s a part of the Service Management and Orchestration layer of O-RAN, and its A1 interface is the channel through which both operators’ OSS/BSS and NMS frameworks act on RAN elements, with the aid of the nearRT RIC.
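
To illustrate the division of labor between the two RICs, here is a conceptual sketch (not an O-RAN SDK, and all class and field names are hypothetical): the nonRT RIC contributes an A1-style policy, the nearRT RIC dispatches E2-style reports to its registered xApps, and an xApp turns a report into a control action.

```python
# A conceptual sketch of the two-RIC split described above; every name here
# is an illustrative stand-in for the roles, not a real O-RAN interface.
from dataclasses import dataclass

@dataclass
class A1Policy:                  # pushed down by the nonRT RIC (SMO layer)
    max_prb_utilization: float   # e.g., a cap on radio resource usage

@dataclass
class E2Report:                  # telemetry from an E2 node (DU/CU/RU)
    cell_id: str
    prb_utilization: float

class LoadBalancerXApp:          # a microservice xApp hosted by the nearRT RIC
    def __init__(self, policy: A1Policy):
        self.policy = policy
    def on_report(self, report: E2Report):
        if report.prb_utilization > self.policy.max_prb_utilization:
            return {"cell": report.cell_id, "action": "offload_traffic"}
        return None

class NearRTRIC:
    def __init__(self):
        self.xapps = []
    def register(self, xapp):    # xApps are interchangeable, pluggable apps
        self.xapps.append(xapp)
    def dispatch(self, report: E2Report):
        # Fan each report out to every registered xApp; collect control actions.
        return [a for x in self.xapps if (a := x.on_report(report))]

ric = NearRTRIC()
ric.register(LoadBalancerXApp(A1Policy(max_prb_utilization=0.8)))
print(ric.dispatch(E2Report(cell_id="cell-7", prb_utilization=0.93)))
```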

Inside a metro area, both RICs and the implementation of 5G Core would create/manage the hosting of “features and functions” that are components of 5G service. It would be optimal, given the fact that most IP traffic growth comes from metro-hosted content, for the content hosting, 5G feature/function hosting, and edge computing missions of metro infrastructure to harmonize on a single structure for both hosting and connection.

This is what’s behind a lot of maneuvering by vendors like Juniper, Ciena, and even Cisco. If metro is going to evolve through the parallel forces of hosting and connecting, having nothing to say in the hosting area is a decided disadvantage. Similarly, being stuck too low on the stack, down at the fiber transport level as Ciena is, relegates you to a plumbing mission and surely takes you out of meaningful metro planning dialogs.

You actually need to think about going up in the metro-success game. Hosting of almost anything that’s distributed demands some sort of multi-tenant network model for the data center/cloud, and that’s what actually spawned the whole SDN thing with Nicira ages ago. The ONF’s 5G approach is based on SDN control, demonstrating that you could take a network-centric view of the future of metro infrastructure for 5G hosting, and likely then on to edge computing.

Let’s make an important point here: Juniper, of all the non-mobile network infrastructure players, has the strongest product portfolio to address the New Metro moves. They have their Cloud Metro fabric concept for connectivity at the physical level, they have Apstra for data center automation and virtualization, they have both Contrail and 128 Technology for virtual networking with security and prioritization, and they have Mist/Marvis and AI automation overall for operations efficiency. They also have a RIC, from a January deal with Netsia, a subsidiary of Türk Telekom, and it’s this effort that they’re now harmonizing with Intel’s FlexRAN model.

The fusion is a good strategy for Juniper, because Intel has a higher 5G profile and a better platform on which to promote its RIC model to the market. Juniper has been fairly quiet about the details of its RIC so far; it’s even hard to find any information on Juniper’s website. Given the RIC’s strategic position in 5G and the edge, they need a dose of publicity and credibility if they’re to reap the full benefits of their RIC deal, and exploit that to improve their 5G, Edge, and metro position.

Any non-mobile-incumbent network vendor, which Juniper is, has a challenge in the 5G space. They not only face competition from the mobile incumbents, who are so far winning the majority of the deals, but also competition from software hosting-or-white-box players, including giants like Dell, HPE, Red Hat, and VMware. The former group have pride of place, and the latter group have the inside track on the open-model 5G space because they’re hardware-independent. For the non-mobile-incumbents, like Juniper, there has to be a powerful reason for a buyer to give them a seat at the table, and it’s not enough to say “we have O-RAN and RIC too.” So does everyone else.

The Intel move could help Juniper validate its RIC approach, but it doesn’t explain it. That’s something Juniper has to do, and they also need to create a better Cloud Metro positioning that reflects the reality of the metro space. It’s where the money is, and will be. It’s where differentiation matters, and is also possible, and it’s where every vendor in the server, software, and network space is hoping for a win. Juniper has magnified its metro assets, but not yet fully developed or exploited them, and they need to do that.

Why Are Security Problems So Hard to Solve?

Why are network, application, and data security problems so difficult to solve? As I’ve noted in previous blogs, many companies say they spend as much on security as on network equipment, and many also tell me that they don’t believe that they, or their vendors, really have a handle on the issue. “We’re adding layers like band-aids, and all we’re doing is pacing a problem space we’ve been behind in from the first,” is how one CSO put it.

Staying behind by the same amount isn’t the same as getting ahead, for sure, but there’s not as much consensus as I’d have thought on the question of what needs to be done. I’d estimate that perhaps a quarter or less of enterprises really think about security in a fundamental way. Most are just sticking new fingers in new holes in the dike, and that’s probably never going to work. I did have a dozen useful conversations with thoughtful security experts, though, and their views are worth talking about.

If you distill the perspective of these dozen experts, security comes down to knowing who is doing what, who’s allowed to do what, and who’s doing something they don’t usually do. The experts say that hacking is the big problem, and that if hacking could be significantly reduced, it would immeasurably improve security and reduce risk. Bad actors are the problem, according to the enterprise experts, and their bad acting leaves, or could/should leave, behavioral traces that we’re typically not seeing or even looking for. Let’s try to understand that by looking at the three “knows” I just cited.

One problem experts always note is that it’s often very difficult to tell just who is initiating a specific request or taking a specific action on a network or with a resource. Many security schemes focus on identifying a person rather than identifying both the person and the client resource being used. We see examples of the latter in many web-based services, where we are asked for special authentication if we try to sign on from a device we don’t usually use. Multi-factor authentication (MFA) is inconvenient, but it can serve to improve our confidence that a given login is really from the person who owns the ID/password, and not an impostor who’s stolen it.
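
As an example of what that second “factor” looks like in practice, here is a minimal sketch of time-based one-time codes (RFC 6238 TOTP), the mechanism most phone authenticator apps use; the secret, window handling, and verification function here are illustrative stand-ins for a real authentication service.

```python
# A minimal sketch of the second "factor" in MFA: a time-based one-time code
# (RFC 6238 TOTP) that a phone app generates and the service verifies.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6):
    # One 30-second window per code, per the TOTP standard.
    counter = int((at if at is not None else time.time()) // 30)
    key = base64.b32decode(secret_b32)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_second_factor(secret_b32, submitted_code):
    # Accept the current window and the previous one to allow for clock skew.
    now = time.time()
    return submitted_code in (totp(secret_b32, now), totp(secret_b32, now - 30))

shared_secret = base64.b32encode(b"example-shared-key!!").decode()  # provisioned to the phone
print(verify_second_factor(shared_secret, totp(shared_secret)))     # True for a fresh code
```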

The problem of having someone walk away and leave their system accessible to intruders would be largely resolved if multi-factor authentication were applied via a mobile phone as the second “factor”, since few people would leave their phones. However, if an application is left open, or if a browser tab that referenced a secure site/application is open and it’s possible to back up from the current screen into the secure app, there’s a problem. There are technical ways of addressing these issues, and they’re widely understood. They should be universally applied, and my group of experts say that more time is spent on new band-aids than on making sure the earlier ones stick.

The network could improve this situation, too. If a virtual-network layer could identify both user and application connection addresses and associate them with their owners, the network could be told which user/resource relationships were valid, and could prevent connections not on the list—a zero-trust strategy. It could also journal all attempts to connect to something not permitted, and this could be used to “decertify” a network access point that might be compromised.
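
A minimal sketch of that zero-trust check might look like the following, assuming a hypothetical allow-list of user/resource relationships, a journal of denied attempts, and an illustrative threshold for flagging a possibly compromised access point.

```python
# A sketch of the zero-trust connection check: permit only listed
# user/resource relationships, journal everything else, and flag access
# points with repeated violations for "decertification". All names and the
# threshold are illustrative.
import time
from collections import defaultdict

ALLOWED = {("alice", "crm-app"), ("alice", "email"), ("bob", "build-server")}
DENIAL_JOURNAL = []                          # every blocked attempt is recorded
denials_by_access_point = defaultdict(int)

def authorize(user, resource, access_point):
    if (user, resource) in ALLOWED:
        return True                          # connection permitted by policy
    DENIAL_JOURNAL.append({"user": user, "resource": resource,
                           "access_point": access_point, "ts": time.time()})
    denials_by_access_point[access_point] += 1
    if denials_by_access_point[access_point] > 5:
        # Repeated violations suggest the access point may be compromised.
        print(f"decertify candidate: {access_point}")
    return False

print(authorize("alice", "crm-app", "branch-7"))     # True
print(authorize("alice", "payroll-db", "branch-7"))  # False, and journaled
```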

Journals are also a way of looking at access patterns and history, something that AI could facilitate. Depending on the risk posed by a particular resource/asset, accesses that break the normal pattern could be a signal for a review of what’s going on. This kind of analysis could even detect the “distributed intruder” style of hack, where multiple compromised systems are used to spread out access scanning to reduce the chance of detection.
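
Building on the journal idea, here is a sketch of a simple pattern check: build a per-user baseline of which resources they normally touch, then flag accesses that break the pattern for review. The threshold is illustrative; a real system would weight it by the risk of the asset involved, and AI/ML could replace the simple counting used here.

```python
# A sketch of pattern analysis over the access journal: compare new accesses
# against each user's historical baseline and flag the unusual ones.
from collections import Counter, defaultdict

def build_baseline(journal):
    # journal entries look like {"user": ..., "resource": ...}
    baseline = defaultdict(Counter)
    for entry in journal:
        baseline[entry["user"]][entry["resource"]] += 1
    return baseline

def flag_anomalies(baseline, new_accesses, min_history=3):
    flags = []
    for entry in new_accesses:
        seen = baseline[entry["user"]][entry["resource"]]
        if seen < min_history:               # rarely or never accessed before
            flags.append(entry)
    return flags

history = [{"user": "alice", "resource": "crm-app"}] * 50
today = [{"user": "alice", "resource": "crm-app"},
         {"user": "alice", "resource": "payroll-db"}]   # unusual for alice
print(flag_anomalies(build_baseline(history), today))   # flags the payroll-db access
```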

A special source of identity and malware problems is the systems/devices that are used both in the office and elsewhere, since the use of, and traffic associated with, those systems/devices isn’t visible when they’re located outside a protected facility. That problem can be reduced if all devices used to access company assets are on the company VPN, with the kind of zero-trust access visibility I’ve described. If the WFH strategy in play for systems outside the office puts those systems inside the zero-trust boundary, then the risk of misbehavior is reduced because the chances of detecting it are much greater.

The “dualism” of devices, the fact that many are used for both personal and business reasons, is one of the major sources of risk, and one that even zero-trust network security can’t totally mitigate. Many of the security experts I’ve talked with believe that work and personal uses of devices absolutely should not mix, and in fact that business devices should not be able to install any applications other than those approved by IT. Those same experts are forced to admit that it’s virtually impossible to cut off Internet access, however, and that creates an ongoing risk of malware and hacks.

One suggestion experts had was to require that all systems used for business, whoever owns them, access all email through a company application. Emailed malware, in the form of either contaminated files or links, represents a major attack vector, and in fact security experts say it may well be the dominant way that malware enters a company. The problem here again is the difficulty of enforcing the rule. Some who have tried, by blocking common email ports, have found that employees learn how to circumvent the rules using web-based email. Others say that social-media access, which is hard to block, means it may not be worthwhile to try to control email access to avoid malware.

So what’s the answer to the opening question, why security is so hard? Because we’ve made it hard, not only by ignoring things that were obviously going to be problems down the line (like BYOD), but also by fixing symptoms instead of problems when the folly of the “head-in-the-sand” approach became clear. I think that we need to accept that while the network isn’t going to be able to stamp out all risk, it’s the place where the most can be done to actually move the ball. Zero-trust strategies are the answer, and no matter how much pushback they may generate because of the need to configure connectivity policies, there’s no other way we’re going to get a handle on security.