Does Juniper’s Good Quarter Mean a Good Strategy?

Juniper is an interesting bellwether of networking trends. They’re a major player in the data center and WAN markets for enterprises, operators, and cloud providers, and because they’re second to Cisco in the space, they are under more pressure to be aggressive and innovative. Juniper’s quarterly call on Wednesday showed a beat on all the relevant financial metrics, but Wall Street is still a bit cautious. Is this just financial angst, or are there some issues Juniper needs to face?

Before I get into analysis, let’s be clear about something here. With the M&A, Juniper has what I believe is the strongest product portfolio of all the significant vendors in the networking space. They have a unique set of tools to address unprecedented changes in networking at all levels. They may, in fact, be the only vendor who could field a product strategy that would lead buyers into the future with optimum technology. They should, by all rational standards, be a big winner in 2021.

Now, let’s return to their numbers, and why the Street didn’t respond as positively as one might expect to Juniper’s quarter. One possible indicator is the fact that while both cloud and enterprise revenues were up strongly (30%), service provider revenues were off slightly. Cloud provider spending has been strong in part because lockdown generated a lot of interest in creating new front ends for core business applications to support work-from-home and virtual access to applications by customers and partners. Enterprise spending has been somewhat constrained by COVID fears, which are now subsiding because of the vaccines. Service providers, with the longest capital cycles of the group, have seen less impact from COVID and were thus neither particularly buoyed nor particularly constrained by it. Juniper’s numbers, in short, almost surely reflect a general recovery in network spending.

Another reason for caution is Juniper’s guidance, which is largely in line with Street expectations. Juniper says their visibility into their channels’ performance is high, which means their expectations are modest not because of uncertainty but because they’re not seeing a strong indication of a beat in their numbers for the rest of the year. If Juniper believed it was on the verge of a big strategic win, wouldn’t you expect them to raise their guidance?

Juniper’s caution, and its positioning overall, is a bit surprising to me, since Juniper has made a number of acquisitions that it could exploit. As CEO Rami Rahim said on the call, “We’re seeing good early interest in Apstra, 128 Technology and Netrounds, which are not only strengthening our position in several attractive end markets, but also enhancing the success of the broader Juniper portfolio.” I agree that all three of these companies should boost Juniper’s technical portfolio considerably. Why then hasn’t “good early interest” turned into “expected increased sales?”

I don’t think that strategic supremacy is the reason Juniper beat Street expectations this quarter. I don’t think their modest forecast for the rest of the year reflects an expectation of potential market dominance. That could mean they don’t realize how good things could be, or that they realize it but don’t know how to execute.

Their latest strategic announcement, the “Cloud Metro” announcement I blogged on in mid-April, would have been a great jumping-off point for Juniper to showcase its full inventory of solid newly acquired technology. Obviously, it came after the close of the quarter just reported, so you could expect it to be a signal of how Juniper would position its assets to maximize revenue for the rest of 2021. They failed to bring out what I think were obvious connections, and in fact I don’t think that they’ve done a great job of talking up the symbiosis between the stuff they’ve acquired and their own current product line, much less the way that symbiosis could play into a big win for Juniper this year.

I’m far from convinced that Wall Street understands the technology issues even at this level, so I don’t think Street uncertainty over Juniper’s prospects is driven by my kind of thinking. I think that what the Street is responding to is uninspiring positioning in general. The majority of the statements made on the call are poster-children for conservative positioning, which doesn’t gain you market share or exploit future opportunities your own M&A has created.

Rahim makes an interesting statement here: “So you can’t sell an AI-driven enterprise solution without actually also selling a meaningful software component along with it. That is what is driving the momentum especially in off-box software offerings like Mist.” You can argue that all of Juniper’s M&A got them off-box software, at least off the traditional box. That is the only revolutionary statement made on the whole call, and it came in response to a question from a Street analyst.

This is the statement that should have been Juniper’s lead, and that should have guided all of Juniper’s 2021 announcements, both to the Street and to the media and analysts in the tech market. Why? Because you can’t sell enterprise equipment without a credible operations automation strategy. In fact, you really can’t sell equipment at all, without operations automation. AI is surely a credible way to get it.

Our notion of services, even for enterprises, has risen above the simple connections of the past. That’s what Juniper has implicitly recognized with its M&A. Scattered through the call transcript are references to things like “Experience-First networking” and “our investments in automation technologies, such as Netrounds…” in the service provider space. It’s almost like a treasure hunt; find the reasons we’ll be great. Telling us why would be more effective. A lot of dry facts that users have to dig out and assemble is more an education than a sell.

Treasure-hunting for value propositions is not how things are supposed to work, especially when the sum of Juniper’s strengths seems to be aimed at providing the technology that drives a critical shift in the vision of what a network is supposed to do. Services aren’t just connections; it’s the experience overall that counts. Netrounds and Mist offer Juniper a great opportunity to get a handle on that higher-level service. 128 Technology is the best branch connection, experience policy management, and experience security strategy on the market. Why settle for buyer education, listing stuff and waiting for people to make the connection? How difficult should it be to make all of that exciting?

The networking market doesn’t need to be educated any longer, it needs to be inspired. Every trend Juniper talks about is real, and every one is also a proof point that grizzled network routing and switching veterans aren’t driving the bus these days. In my surveys, the average age of a network planning decision-maker in the cloud space is five years younger than in enterprises, and in the enterprise it’s eight years younger than in the service provider space. The people weaned on the Internet expect not only fast reactions, but also interesting and powerful positioning.

This is frustrating to me, because I know well how difficult it is to build the right technology to support a revolutionary future. It’s even difficult to acquire it, but once you have it, it’s not difficult at all to sing pretty about your new virtues. Juniper has done all of that, except the singing.

Competitor Cisco can position so as to make a firefly look like the sunrise, Juniper. You need to work harder on your own story.

Making Virtual Function Code Portable

How do we create portable code in hosted-function services for the network of the future? There are a lot of things involved in making virtual-function code portable, and I’m going to use the term “code” in this blog to indicate any software that’s designed to create network features and that isn’t a part of a fixed appliance with dedicated hardware/software. Most operators are looking to ensure that their hosted code can take full advantage of resource economy of scale (meaning a “cloud” or “pool”) and most also are hoping to standardize operations. It’s those two goals we’ll look at here.

Nobody should ever plan for shared-resource code without first looking at what’s going on in the cloud. Right now, there are four basic “cloud architectures” that define hosting at the highest level. They are bare metal, virtual machines, containers, and functions/lambdas. We’ll start by looking at each, and how they relate to the issue of resource efficiency.

Obviously, “bare metal” means that code is hosted either directly on a server or on a white-box appliance. The benefit of this option is that it gives your code exclusive use of the resource, so sharing isn’t an objective of this model. Instead, what you’re looking for is the standardization of the hardware/software relationship, meaning that your software is portable across as many of the bare-metal options you plan to use as possible.

Because bare metal means nothing is there, all the “platform software”, meaning the operating system, systems software, and middleware, has to be loaded and configured, and so does each application.

Virtual machines are shared-resource strategies, and they form the basis for the public cloud IaaS services. VMs are created by hypervisors that partition the hardware into what looks, even to platform software, like a set of independent computers/devices. You have to maintain the hardware and hypervisor as a collective resource, but each VM requires the same platform software as bare metal elements. Unlike bare metal, VMs share the actual resources, and so there may be crosstalk or other impacts between VMs that could affect operations.

Containers are the current rage for the cloud. Unlike VMs, which are created by a hypervisor, containers are a feature of the operating system, so they’re not quite as fully isolated from each other as VMs would be. With containers, you deploy an application (in this case, a service) by deploying all of its components in pre-packaged form. The packages have everything needed to load and run, and so deployment is fairly simple.

Functions (also called “lambdas”; AWS’s function-hosting is called “Lambda”) are snippets of code designed so that the outputs are a function only of the inputs. Since nothing is stored, a function is portable as long as its code can be run on the platform options. However, lambdas operate inside a special environment that links events to the deployment and execution of lambda code, and this environment is less likely to be portable.
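As a concrete illustration, here’s a minimal sketch in Python of a stateless function handler; the (event, context) signature follows the common serverless pattern, and the payload fields are purely hypothetical. The point is that the output depends only on the input, so the code itself is portable, while the event-to-execution wrapper around it is the provider-specific part.

```python
# Minimal sketch of a stateless "function" (lambda-style) handler.
# Nothing is stored between invocations, so the code is portable;
# the environment that maps events to executions is what varies
# from one hosting provider to another.

def handler(event, context=None):
    # 'event' is assumed to carry the request payload, here a dict
    # with a hypothetical "values" list to be summarized.
    values = event.get("values", [])
    return {
        "count": len(values),
        "total": sum(values),
    }

if __name__ == "__main__":
    # Local test invocation; in a deployment, the hosting environment
    # would call handler() in response to an event.
    print(handler({"values": [1, 2, 3]}))
```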

OK, those are the architecture options. There’s also a question of public versus private hosting for each of them, which we’ll now summarize.

Bare metal servers can be obtained as an option from some cloud providers and also from interconnect companies, and white-box appliances are broadly available. This architecture is therefore the most generalized, but because there is no “platform software” included, everything has to be installed for the function code to work. There’s also no resource sharing of a server/device unless it’s provided by the platform software an operator would install. This is therefore a small step away from simply using your own data center, and it’s going to be appropriate mostly at the very edge of the network, or where operators do intend to deploy their own carrier cloud.

Virtual-machine architectures can be hosted on bare metal, including in your data center, in appliances, and on the public cloud. Because the platform software that provides personality to the deployment is provided by the user, and because resource sharing is provided by the VM hypervisor, this is the second-most-generalized hosting model. However, since public cloud providers and appliance vendors may offer extension tools of their own, it’s important to avoid using those tools if you really want code portability.

Containers can be hosted on bare metal (including appliances), on VMs, on IaaS public clouds, or on managed container public cloud services. The first three options will require that the operator provide platform software to do the container hosting, and if those options are used in any combination and the same software is used for each hosting platform, the same container hosting and management features will prevail. I’m telling operators to avoid using managed container services where possible, in favor of deploying their own container software even in the cloud, unless they don’t intend to use containers anywhere except the cloud.

Functions can be considered like containers, but with an additional layer to deploy and run a function triggered by an event. There is often considerable latency associated with function loading and execution, however, and I’m of the view that operators should avoid functions unless they have significant in-house function development skills.

Now for some best practices. First, unless it’s totally impossible to do so, any hosting environment that an operator creates, for any of the architectures, should be based on the same platform software. The same operating system, system software, and middleware should be used for everything, everywhere. If you do this, then the “platform maintenance” tasks will be the same, you can use the same tools, and consistency is the mother of efficiency.

Second, there are three possible layers of software in a virtual-function infrastructure: the hardware layer, the platform layer, and the functional layer. Your management practices have to deal with all these layers. Hardware management applies to all bare metal, VM, and IaaS cloud implementations, and this is where configuration management to run the (hopefully) common platform software comes in. Platform management is the loading and maintenance of the platform software, and functional management is the management of the functions themselves. If you follow the one-platform rule, most of the first two layers of management are straightforward. If not, try to find a management tool kit that works for all your platform options. Look at DevOps tools and “infrastructure-as-code” tools for these lower layers.
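To make the layering concrete, here’s a small sketch, in Python, of how an operator might model the three layers per hosting point and verify the one-platform rule before handing the lower layers off to infrastructure-as-code tooling. The host and platform names are hypothetical; this isn’t any particular tool, just an illustration of the bookkeeping involved.

```python
# Sketch of the three management layers per hosting point, plus a
# simple check that the "one-platform" rule holds: every hosting point
# should run the same platform stack so the same tools and procedures
# apply everywhere.

from dataclasses import dataclass

@dataclass
class HostingPoint:
    name: str              # hardware layer: server, VM, or white box
    platform: str          # platform layer: OS + system software + middleware
    functions: list        # functional layer: the virtual functions deployed

def platform_stacks(hosts):
    """Return the set of distinct platform stacks; one entry means compliant."""
    return {h.platform for h in hosts}

hosts = [
    HostingPoint("edge-box-1", "linux+containers-v1", ["vRAN-DU"]),
    HostingPoint("metro-server-3", "linux+containers-v1", ["vRAN-CU", "UPF"]),
]

stacks = platform_stacks(hosts)
if len(stacks) == 1:
    print("One-platform rule holds:", stacks.pop())
else:
    print("Multiple platform stacks in use:", stacks)
```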

Third, your biggest management challenge will come in the functional layer, because here you’re almost surely going to have to deal not only with multiple sources of functions, but also with multiple mechanisms for stitching functions together to create services and managing the outcome.

Let’s close by looking harder at this last point. Virtual function software (not necessarily just the special case of NFV) needs to obey some standard set of interfaces (APIs) that connect the functions into a cooperative system we’d call a “virtualized service”, and the software may also have to interwork with legacy devices. There has to be both a source of specifications for those interfaces, and a mechanism to do the stitching. Finally, you have to manage the results.
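As a rough sketch of what obeying a standard set of interfaces might mean in practice, the Python fragment below uses my own illustrative names (not NFV’s or ONAP’s) to show one common interface every function could implement, so that functions from different sources can be stitched into a service and managed through the same calls.

```python
# Illustrative sketch: a common interface that every virtual function
# implements, so functions from different sources can be stitched into
# one "virtualized service" and managed uniformly. Names are invented
# for illustration, not drawn from any standard.

from abc import ABC, abstractmethod

class VirtualFunction(ABC):
    @abstractmethod
    def deploy(self, target: str) -> None: ...
    @abstractmethod
    def connect(self, downstream: "VirtualFunction") -> None: ...
    @abstractmethod
    def status(self) -> str: ...

class SimpleFunction(VirtualFunction):
    def __init__(self, name):
        self.name = name
        self.next_hop = None
        self.deployed_on = None

    def deploy(self, target):
        self.deployed_on = target

    def connect(self, downstream):
        self.next_hop = downstream

    def status(self):
        return f"{self.name} on {self.deployed_on}"

# Stitch two functions into a chain and manage them through one interface.
firewall = SimpleFunction("vFirewall")
router = SimpleFunction("vRouter")
firewall.deploy("metro-pod-1")
router.deploy("metro-pod-2")
firewall.connect(router)
print(firewall.status(), "->", router.status())
```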

There are currently three possible approaches to this. First, there are network-specific sources. NFV defines both a specification set and a management strategy, and ONAP has a broader model. Second, cloud practices create their own approach, which is more generalized and has more possible sources of specifications (including write-your-own). Finally, you have international standards like O-RAN or OASIS TOSCA that can be applied.

The best approach to navigate this combination, in my view, is to forget both NFV and ONAP and try to use a combination of cloud practices and international standards. For example, what I’d personally love to see is an OASIS model for O-RAN services, and I think operators who want to deploy open-model 5G should assume they’ll use TOSCA and cloud tools to deploy against O-RAN and the eventual open-model 5G Core specification.

It seems to me that operators should play a greater role in this process. Cloud-native implementations of O-RAN and 5G Core are in their interests, and so is a standard way of defining the service lifecycle, which TOSCA could provide. This would be a very good way, and a very good time, to merge the cloud and the network more systematically.

What the Future Service Plane Will Look Like

In my blog of April 22, 2021, I postulated there would be three “planes” in a 5G network, the top of which is the “service plane”. I mentioned it a bit but didn’t get too far into it, despite the fact that if there’s a broad opportunity for new telco revenue to be had, the service plane is a likely place for it to be built. I’m trying to get operator information on the service plane topic, but with little success so far, so I’m going to outline the hopes and promises here, offering some of my own views. I’ll revisit the topic later when I have more data.

Functionally, the service plane represents the set of features and hosting facilities that are associated with building a value-add to the basic mission of connectivity. We have some very limited examples of service-plane behavior in content delivery networks (CDNs), which provide a combination of content caching and URL redirection to ensure video quality of experience (QoE). It’s also possible to conceptualize modern OTT services as being “service plane”, but I’m suggesting that because those services are really just network destinations, as users are, we avoid doing that.

The technical distinction between a service-plane offering and an OTT offering would be that the former involves coordination between the delivery of something and connectivity. In the case of a CDN, users ask for content based on a generic source, and are directed to the appropriate cache point. A more general difference could be that a service-plane offering offers not a specific connection but a specific experience, and that’s the definition I’ll adopt here.

There are some areas where operators see beyond-connectivity opportunity, but only some of these would fit my service-plane definition. Cloud and edge computing, in the basic form of offering hosting, is an OTT service because today’s cloud model shows that it’s consumed that way now. IoT could be mapped to my three planes in multiple ways, and so it’s probably a good way to dig a bit into what we can know about the service plane today.

Most operators are still fixated on the idea that IoT will bring revenue by having people or organizations pay to network their “things” rather than their personal communications devices. This model of IoT is a connection service, consuming resources in the data/user and control planes only. The obvious problem is that the great majority of IoT applications and devices wouldn’t require a for-pay 5G or any other kind of connection service; in-building WiFi or one of many local IoT protocols would serve (and is already serving).

What would create a service-plane relationship for IoT would be having operators make some aspects of IoT into a CDN-like service. That doesn’t mean that the service would have to relate to content delivery, or even that it would have to be available on the Internet, but it could be. There seem to be three broad options for service presentation available, and we’ll look at each.

The first option would be that the service would in fact be Internet-visible, as a CDN is. There could be, as there is with CDN, a generalized URL associated with it, and that would decode to a specific service instance based on the location of the user, the parameters of the service request, etc. This would be a reasonable way of presenting a service that would be expected to link to a web page for access (as CDNs are) and it would make service availability widespread. There would have to be some mechanism to journal use so the operator who deployed the service could get revenue from it, and the link with the Internet could make monetization problematic in the shifting sands of net neutrality.
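A minimal sketch of that first option might look like the following Python fragment. The service name, locations, and journaling scheme are purely hypothetical; the point is simply that a generic request resolves to a nearby service instance, and each resolution is journaled so the deploying operator can be compensated.

```python
# Sketch of CDN-style service presentation: a generic service URL is
# resolved to a nearby instance based on the requester's location, and
# each use is journaled so the deploying operator can bill for it.
# All names and endpoints here are illustrative.

SERVICE_INSTANCES = {
    # location prefix -> instance endpoint
    "metro-east": "https://iot-east.example.net/api",
    "metro-west": "https://iot-west.example.net/api",
}

usage_journal = []  # (user_location, instance) records kept for settlement

def resolve(generic_url: str, user_location: str) -> str:
    """Map a generic service URL to the closest instance and journal the use."""
    instance = SERVICE_INSTANCES.get(user_location, SERVICE_INSTANCES["metro-east"])
    usage_journal.append((user_location, instance))
    return instance

print(resolve("https://iot.example.net/api", "metro-west"))
print("journal:", usage_journal)
```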

The second option would be to expose the service as a function or component, designing it to be composed by someone into a retail offering rather than being a retail offering in itself. Within this option are two forks, one that still uses a URL and makes the function visible, but with access control, and the other one that uses an API and exposes the service inside a composition sandbox, something like the way that functions should be used in NFV. The challenge here is creating the composition sandbox and the rules for federating the use of the service, so that the owner can be paid for usage.

The third option would be to treat the service as a cloud component, a web service of the kind already offered by Amazon, Google, and Microsoft (among others). In this case, the service lives inside a cloud service, one created by the service owner or by a third party, and would be paid for as any cloud service is paid for. The operator who took this path could use one or more public cloud providers to host the service, but would have to pay them a part of the revenue the service would generate (actually, they’d likely receive something from the public cloud provider in settlement, since users would pay the cloud provider). If the operator self-hosted, they’d be creating their own cloud-like service framework to contain their offerings.

We could debate which of these options would be best, but before we do that we have to consider another factor, which is how we get to service-plane behavior in infrastructure transformation. It seems pretty clear to me that things like “carrier cloud” or “edge computing” are simply abstract descriptions of the outcome of a transformation, not a driver to transform. The real driver in transformation today, the only real driver, is 5G O-RAN. This is an architecture that is popular with operators, and that supports a budgeted mission. How will it impact our three options?

I think it forecloses option one, which I think was foreclosed anyway. Operators aren’t going to frame elements of their transformed infrastructure as being part of the Internet, for fear that Internet settlement rules (which often mean no settlement at all) and sharing rules would end up ceding the fruits of their investment to others. The choice is then between the second and third options, and I think the choice will be made by default, by whether operators do in fact get 5G-as-a-Service from public cloud providers.

The problem with the third, public-cloud, option is that the operators investing in 5G are trying to avoid creating their own public cloud by hosting some of O-RAN on current public cloud providers. That decision would cede the way 5G functions are virtualized to the public cloud giants. Those players will obviously choose option 3 because it perpetuates their current business model.

If the operators see this risk, and respond by insisting that either they don’t use public cloud services at all, or use them only in basic IaaS-hosting form, then the operators will shape function virtualization. However, given the operators’ lackluster performance in framing their own strategies, the real decision may be made by the vendors who would supply that virtual infrastructure framework.

This is a high-stakes game. If the Dell, HPE, IBM/Red Hat, and VMware product teams get their acts together, they could own this space, which you may recall could add a hundred thousand data centers worldwide in a decade. I’m seeing some signs that the upside for a winner is now being recognized, but not many signs that the way vendors are treating 5G and O-RAN, and positioning them for the market, has kept pace with the opportunity. I suspect that’s going to change, perhaps as early as this year.

Is IBM Shape-Shifting Itself to Success?

It looks like IBM has finally settled on a strategy that could lead it to financial success: become Red Hat. The quarter just ended was IBM’s first revenue gain in four quarters, and while there’s little doubt that the marketing theme of “hybrid cloud” helped IBM overall, the only significant sales upside (15%) came from Red Hat. The strategy that seems to be evolving is one based on a combination of “CloudPaks” and Red Hat software, with other IBM business units either supporting that combination or, at the least, trying not to actively drag down revenue growth.

To some, including me, this is a bit of a shock. IBM, after all, is undoubtedly the longest-lived of all the IT vendors, going all the way back to the literal dawn of business computing. They’ve reinvented themselves many times, more than any other company, and it may be that’s what is happening again here. They’ve just chosen a different avenue to channel the new IBM, one that instead of bringing technology pieces in-house to add to IBM’s technology story, brings the story itself in-house.

The most significant strategic insight we can draw from the IBM quarter is that “hybrid cloud” as a concept is vastly under-appreciated. There has never been any realistic chance that enterprise consumption of cloud computing services would be anything other than the hybrid model, and yet somehow everyone seems surprised. I have to attribute that to vendors’ positioning of virtually everything as “hybrid cloud” and the media’s resulting scattershot approach to covering it.

Hybrid cloud, as far as enterprises are concerned, is the use of cloud computing services to create a more agile front-end to legacy applications. There is no need to build a private cloud, and in fact there’s no need (or desire) to make any significant changes to the data center component of the hybrids. However, there is a need to create an effective data conduit from cloud to data center, and also to create a framework that allows the combined (hybrid) application to be managed effectively, including but not limited to deployment and redeployment.

That IBM could gain marketing traction for telling enterprises what they really knew all along is a bit astonishing, but it also illustrates the risk that IBM had with hybrid cloud and the reason why Red Hat is valuable. Hybridization is a software-centric problem because the goal is not to disrupt data center hosting more than is absolutely necessary, and Red Hat has the cloud software framework to do that. Had IBM tried to go hybrid on their own, they’d have been vulnerable to solidifying a strong but under-used positioning strategy, then failing to deliver anything tangible on what they validated. Someone would have stepped in and someone still might, of course.

Whether IBM can hold its successful hybrid cloud positioning against competitors depends on how strongly those competitors could counterpunch. A big part of that relates to IBM’s own account control, and one interesting testimony to the continued strength of that is the unexpected improvement in mainframe sales. A lot of big accounts still rely on mainframes to host their core business applications, and if those accounts are expanding their commitment at this point, it shows IBM is still considered a highly credible partner.

The concept on which that continued credibility seems to rest isn’t the mainframes (the sales increase is an indicator of another trend, not the trend itself), but the CloudPaks. By creating a package that represents a cloud position, horizontal or vertical, IBM has productized its own strategic concept, and given it something to link with its Red Hat acquisition, making the CloudPaks both a product bridge and a strategy bridge in the Red Hat picture. In the most recent quarter, the bridge worked and hybrid cloud and Red Hat were clearly symbiotic for IBM. The big question for IBM (and its competitors) is whether the bridge will continue to work.

On the plus-for-IBM side, there aren’t all that many firms who can field the scope of software technology that IBM can, and even fewer who have significant strategic influence. In fact, there are only two or three (depending on how we arrange them, as we’ll see).

Another IBM plus is IBM’s cloud, which has been jousting with Google for third place among public cloud services. Having a cloud service makes IBM’s hybrid cloud story more credible, and also more profitable if they can convince users to hybridize with the IBM cloud rather than with one of the giants. However, IBM’s cloud credibility is high only with those big IBM accounts, at least so far.

On the minus-for-IBM side, we have the fact that until the Red Hat deal, IBM was king of a declining market sector. Mainframes may be expanding where they’ve already taken root, but they’re not a populist IT strategy. Red Hat gave IBM traction outside its limited big-account base, and that’s important because the majority of the market for hybrid cloud is outside that base. IBM may have reaped a nice harvest in the most recent quarter, but it’s not going to take long before it has to expand its sales efforts outside the sector where it really has strong account influence. It’s there that it will encounter competition.

There are three players who might do something that counters IBM’s positive trends, as soon as the third quarter. One is Dell, one HPE, and one VMware. The is-it-two-or-three point I made earlier is obvious now; Dell and VMware may be splitting, but it won’t be done until later this year, and in the meantime it may overhang both companies by diverting executive attention away from big strategic initiatives.

HPE is a company that has consistently underplayed its assets. It’s a full-service hardware and software company with good technology, representation in all the key verticals, a good partner/distributor program, and the not-uncommon tendency to mellow its positioning to avoid overhanging current-quarter sales. You don’t want to tell a prospect about something that could take half a year for them to work through, when you have to make your numbers at the end of the current quarter.

While it’s difficult to say how this all nets out for IBM, my view is that it puts the company under tremendous pressure. If the hybrid cloud strategy looks like a win, there are competitors with a broader user base who could try to exploit it. If IBM makes a single stumble in execution, competitors will capitalize on it. Even if they don’t, I think IBM has shown that the industry is crying out for a credible hybrid cloud story, and while IBM moved the ball with their positioning, they didn’t score…yet.

5G and the Network Transformation Opportunity

Most operators I talk with agree that some sort of network transformation is essential to both managing costs and creating new revenue-generating services. The challenge for them is figuring out how to go about it, not only in terms of technology options but in terms of making a business case for changes. Promised savings in capex or opex may be helpful, but operators tell me that they’re still a hard sell in justifying a major network upheaval. Wouldn’t it be nice if transformation were budgeted? Well, it sort-of is, with 5G.

5G has more than its share of entertaining ideas that don’t rise much above the entertainment level, but however much many 5G applications may be overhyped (and most in fact are), that doesn’t alter the fundamental truth that 5G is happening. It’s got a budget, backing in the C-suite, and early 5G deployments are showing the technology can sell itself based on the status 5G brings. That makes 5G an important vehicle in transformation of the network, but the role of 5G and where and how it might impact networks and network vendors isn’t well understood.

I’m finding that even a lot of operator planners are confused by how 5G networks would be built. Part of the confusion lies in the fact that the 3GPP standards for 5G New Radio (NR) and 5G Core imply that 5G networks are separate from other networks, which some operators believe isn’t true. I’ve taken some time to chat with 5G operator planners to get an idea of what they’re thinking, and translate that into some kind of explicit model.

What I’ve found is that it’s best to think of 5G in terms of what I’ll call zones. If we do that, it’s possible to map implementation options with a fair degree of consistency. If we add that inside these zones there are planes that divide functionality into layers, we can get a pretty good idea of how 5G might actually be built.

Let’s start with our zones. The first zone, the 5G access zone, is made up of the 5G cells themselves, and the backhaul technology that aggregates traffic, usually at metro concentration points that define the “normal” range where user roaming occurs. Those metro points define the 5G core zone, which is the range of technology that supports in 5G what the Evolved Packet Core (EPC) does in 4G. The 5G Core (5GC) connects to the data network zone, which defines the connection between 5G users and the network services they’re accessing, usually the Internet.

Just as there are three zones, there are three planes in my model, but not all planes are necessarily represented in all zones, particularly in the near term. The lowest plane is the user plane, which corresponds to the data plane or traffic flows, and this is really IP connectivity. The middle plane, the control plane, is the set of behaviors that control the connectivity and traffic flow, and the highest, the service plane, is where service features are hosted and coordinated.

Standards and industry initiatives vary in influence across the zones. In the access zone, it’s the 5G RAN (and emerging O-RAN) stuff that’s the most important. In the 5G Core zone, the 5G RAN and Core are both influential because the two interact there, and we also see the beginning of the influence of the IETF and general IP standards here, and those dominate in the data network zone.

As far as technology is concerned, there’s a similar zonal shift in play. Within the access zone, operators seem to be focusing primarily on white boxes, largely because access-zone deployment doesn’t have a lot of current feature variability, so the devices there are much more like traditional fixed-mission appliances. The 5G Core zone, on the other hand, seems to be a place where both white boxes and general-purpose servers (in the form of edge computing) would be deployed. In the data network zone, my view is that we’re back to appliances, either proprietary routers or white boxes, because traffic has been sufficiently aggregated to justify specialized data-plane performance considerations. The service plane focus is likely to stay in the 5G Core zone, because it’s more difficult to couple services to flows that have been highly aggregated, as they are in the data network zone.
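Pulling the zone-and-plane discussion together, a small sketch of the model might look like the following; the entries reflect my reading of the discussion above, not any operator’s actual deployment plan, and the labels are my own.

```python
# Sketch of the zone/plane model: each zone lists the hosting
# technology that seems most likely there and the planes that are most
# active in it. Purely illustrative, following the discussion above.

ZONES = {
    "access": {
        "hosting": ["white boxes"],
        "planes": ["user", "control"],
    },
    "5g core (metro)": {
        "hosting": ["white boxes", "edge servers"],
        "planes": ["user", "control", "service"],
    },
    "data network": {
        "hosting": ["routers", "white boxes"],
        "planes": ["user", "control"],
    },
}

for zone, detail in ZONES.items():
    print(f"{zone}: hosting={detail['hosting']}, planes={detail['planes']}")
```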

All of this combines to say that the most contested ground in 5G infrastructure is the 5G Core zone, which is the metro area. The widest range of resources is deployed here, all the planes are likely represented here, and the primary feature value-add point in infrastructure is likely here too. Not only does that mean that deployments here will likely command the greatest profit margins for vendors, it means that wins in this area could percolate both outward toward the towers and inward to the data network, broadening the value of success in this critical space. If a vendor wants to play big in transformation, they almost certainly have to play big in the 5G Core zone, or they lose the contribution that 5G budgets can provide.

If you’re a hosted-function player like Dell, HPE, IBM/Red Hat, and VMware, you cannot afford to lose in this space, because this is where your best chance of building data centers would be found. If you’re a white-box player (like DriveNets), you’ve got to try to fend these hosting players off here, because the largest number of devices you can aim at are to be found here. Obviously, incumbent network equipment vendors like Ericsson, Huawei, and Nokia need to fend off everyone in this space. All of this jousting and fending creates some specific issues for us to watch.

The most significant of those issues is the integration of the data/user, control, and service planes into a common functional architecture. In a very real sense, the control and service planes give the data/user plane its personality, and those two planes also provide all the service value add potential. One example is that of network slicing; we have to be able to map slicing in some way to and through the data network zone or slicing benefits are confined to a metro area. Another example is edge computing or IoT services, which will require coordination of the service plane (for service discovery and composition), the control plane, and the data/user plane (the latter two for slice and QoS). There’s a lot of latitude for creative positioning in this planar integration story, because not much has really been said or done there.

Appliance vendors targeting the 5G Core zone are at a serious disadvantage unless they have 5G Core functionality, for the obvious reason that 5G Core defines the zone’s properties. That means white-box players will need to have a 5G strategy or they put their metro position at risk, which then could compromise either access or data network positioning, and they absolutely cannot afford to do that. The network equipment incumbents can’t afford to let their natural advantage in the access zone be stopped at the metro point, which is a particular risk given that 5G Core standards are still dribbling out.

What this all adds up to is fairly simple. The metro area is critical because it’s the place that best balances service personalization and economies of scale. 5G Core, combined with 5G RAN (NR for purists) targets the metro-area architecture, and 5G is funded. Hosting 5G O-RAN represents perhaps the easiest way to introduce edge computing. Everyone who wants to sell network equipment to operators needs to sell something for which there’s a budget, and 5G is it. To me, it’s inescapable that 5G is what will transform metro, and metro is the starting point for any kind of network transformation we can expect to see.

Should Governments Encourage, or Mandate, Vendor Diversity?

How do you prevent vendor lock-in in telecom infrastructure? There have been a lot of ideas in this space, because the problem is one that operators have complained about for decades. The latest notion is for governments to mandate vendor diversity or openness in some way. This isn’t the first time the issue has come up. Is the approach rumored to be a recommendation of a UK task force on the topic a different approach, is the problem space changing, or are we heading down a familiar rat hole?

Many in the industry will recall that AT&T, over a decade ago, broke network procurement into multiple functional areas, and then picked vendors for each area to encourage competitive interplay and reduce lock-in. They also tried to limit the number of functional areas vendors could bid in, and while that created some jockeying for a couple years, it really didn’t amount to much. The problem is that the key vendors in the networking space often have a better strategy than other vendors, and often their products are symbiotic between the “functional areas” that an operator could define.

Forced competition among key players is one way of eliminating lock-in, but it doesn’t impact a second problem operators face in equipment selection and deployment, which is loss of innovation. Everyone knows that incumbent vendors tend to take root on their current position and become trees, immobile except for some waving of product positioning branches. Operators trying to avoid lock-in and promote innovation have tried to use standards to create an interoperability framework that would let them pit new and innovative suppliers against incumbents.

The problem with this approach is that it requires the standards group to frame at least the high level of functionality and implementation, in order to define interfaces. Working to do that requires a good understanding of both current technology and technology trends, and that’s more likely to be found in the vendor community than among operators. Not only that, standards groups in the networking space have been moving slower and slower over time, and as a result operators find the market well ahead of the standards they’ve hoped would control it.

Then there’s vendor interference. Every network standards body I’ve been involved with has seen interference from the big vendors. In some cases, it’s just a matter of the vendors’ promoting a point of view that favors them, which is counterproductive to operator goals but not really bad form. Sometimes it’s active undermining of initiatives through less-than-honorable means. Big vendors can usually afford to deploy a large number of their employees to these groups, and pay them to make contributions. That means that even if their tactics were fully honorable, these vendors would likely dominate the groups’ activities.

The reason I’m going through all of this is that government mandates are not likely to address any of these problems. In fact, if there are specific rules that have to be followed, it’s hard not to see current interminable standards activities getting even more interminable, if that’s possible. We already have situations where people have retired from a standards effort before it finished its job, and we might see several generations of standards-writers passing the torch if we’re not careful. Governments rarely are.

The article I referenced at the opening of this blog objects mostly to the notion of assigning a quota to “competitive” suppliers, including open-model networking adherents. They’re not against the government encouraging broader participation, open-model contributions, or both. I have concerns about both the quotas and the encouragement.

Telcos have arguably screwed up their own standards efforts by failing to staff them with enough people and with people with skills in the emerging technology areas that would make an advance in network technology possible. They’ve also brought their own, historically glacial, pace to the process. I contend that governments are even less likely to staff projects with the right people or enough people, so how exactly could they encourage something? Would they even recognize “innovation” if it were to be presented? And if we think telcos are glacial, they’re supersonic aircraft in comparison to governments.

We keep missing a critical point here, the point that I believe is fundamental to the next advance in networking. That point is that network software is where network functionality will reside in the future. Hardware is just something to run the software on and provide it with adequate performance. The advantage of software-centricity is that nothing is set in stone, as it is with hardware interfaces. Not only can APIs be changed, they can be “adapted” to a related but slightly different form, expanded, and subjected to rapid development processes aimed at making major changes in a matter of weeks or days, not years or decades. Given that, there’s no reason not to presume that advanced network technology could be developed in an open forum, using fast-fail project approaches. Let the market pick what works best and mandate it.

O-RAN is about three years old, and it’s made significant progress in that time, because it’s an open-source software initiative. Even three years is a long time to get something done in the Internet age, but it sure beats the 3GPP decades-long generational evolution. We could make O-RAN better, too.

How? By making open network software work like the IETF works. Not inside the IETF, which has a bit too much of a traditional IP bias and is influenced too much by big router vendors, but separate from it and guided by a principle that says you can submit a proposed standard/specification only if you also submit an open reference implementation. The next-gen telco network would then be based on interfaces that were first open, and only then subject to proprietary enhancement.

Operators could still decide they wanted proprietary stuff, but I think most would say that they wanted major vendors to conform to the open specifications, even if they didn’t open their own implementations. I’m against having governments play a role in this process because I don’t think they’d have the skills to do it right.

Will the Open Grid Alliance “Rebuild” the Internet?

Do we need to “rebuild the Internet”? There have been many suggestions on how to go about that, but so far none have really changed anything fundamental. Does that mean that we’ve had bad, even dumb, notions, or is there something more fundamental in play? Perhaps what we really need is a reason why a rebuilding is necessary. The latest notion, the “Open Grid Alliance”, has motives consistent with recent technology trends, but we’ll have to see whether good motives are good enough, and whether OGA can make them a reality.

On the surface, the whole notion of the OGA seems to be focused on gathering every possible industry buzzword into one place. We have edge computing, IoT, 6G…the list goes on. This may be a reasonable step to gather media attention in a hype-saturated age, but it creates a significant barrier to progress, the classic “boil the ocean” problem of having too many moving parts and too many things to justify and integrate.

While I can’t be certain of this, it appears to me that the primary aim of the group is to figure out a mechanism for “federation” of service elements that are created for those buzzword technologies. That includes the relatively simple matter of interconnect, but also the much more complex matter of how operators create and manage pan-operator relationships for new services. In particular, the group seems to be focusing on edge and cloud computing and their use in creating services.

The implication here is that future services will be composed from a set of hosted features, and that this will often require contributions from multiple sources. Networks today support “federation” through interconnection, and of course we already have that with the Internet and other networks. When you elevate features beyond connection, you need to elevate federation beyond interconnection.

The premise of the group is that the Internet is evolving already, and it will need to transform to accommodate those buzzword technologies, so we need to be thinking about how that will be done. That’s something I can agree with, but I was involved in a lot of the things that were intended to transform the Internet, and most of them came to nothing. The stuff that succeeded (like MPLS) was really an internal technology adaptation to traffic engineering, not something that was intended to support new services and service missions. The barriers to that broader “rebuilding” are pretty easy to identify.

Barrier number one is the business case for the players in a complex federation. The Internet itself is a bill-and-keep model, meaning that all the players bill their customers and keep what they get. There is no settlement among providers, which makes it difficult to create a business case for supporting services that require a significant investment, which is most value-added and non-best-efforts services. Way back in the late 1990s, one of my ISP clients recognized this and I worked with them to write an RFC on “brokered private peering”, which was designed to address the problem of how operators would settle among themselves for services beyond Internet best-efforts. It went nowhere.

The IPsphere Forum (IPSF) tried to do a more thorough job of that a decade later. The initiative was founded by Juniper and was supported by dozens of operators worldwide, and the operators themselves directed the process through an “Operators’ Council”. The initiative first ran afoul of antitrust barriers, and the TMF exploited this to absorb the body, but before that was finalized a single Tier One trashed the process by saying they wouldn’t participate in a community that assembled services from a pool of components created by competing global operators. That component-sharing concept seems key to any successful federation venture, by the way.

The past problems are exacerbated in today’s world because the operators are unlikely to see the same pace of opportunity development in those buzzword areas, because unlike wireline services, mobile services are already so competitive that there’s no such thing as a home territory, and because operators will invest in any edge technology at a different pace to support different mission goals. It’s hard to see how a group could develop consensus approaches under these conditions, even if we got past the usual problem of industry and standards groups—glacial pace of progress. 6G, they say, is ten years out, and dealing with its consequences could hardly be quicker.

Barrier number two is that all the real drivers for change in the Internet are in a fuzzy phase, which means that it’s not clear just what specific technology choices would be useful, much less optimum. How easily could we frame an approach to an “open grid” without more information on the specific way each consuming service would use it? What does edge computing, IoT, or 6G need? A justification, obviously; a higher-layer application that consumes the services the network would create. That starts a chain of dependent concepts that has to be built before we know where it goes and what the collective value is. In order to move forward now, without all the complex value chain in place, the OGA would need to try to generalize a way of representing and sharing service components.

This particular task is very much like what the IPSF was trying to do, and had largely succeeded with, but I doubt that the material produced is still available online (I have copies of some of the stuff, and of course all of my own contributions). It was a complex task that was only converging on final agreement at the last substantive meeting before the process blew up, which was in late 2007, after several years of hard work. The point is that it is possible to overcome this issue and create a general model for pan-provider service composition, but it’s likely to be a challenge even if the original material from the IPSF can be recovered to serve as a starting point.
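To give a feel for what a general model for pan-provider service composition involves, here’s a small Python sketch; it is not an IPSF or OGA specification, and the component names, providers, and pricing are all hypothetical. It simply shows a service assembled from components owned by different providers, with a settlement split so each contributor can be paid.

```python
# Illustrative sketch of pan-provider service composition: a service is
# assembled from components owned by different providers, and usage is
# settled so each owner gets paid. Names and prices are invented.

from dataclasses import dataclass

@dataclass
class ServiceComponent:
    name: str
    provider: str
    unit_price: float      # price per use, set by the owning provider

@dataclass
class ComposedService:
    name: str
    components: list

    def settle(self, uses: int) -> dict:
        """Return what each provider is owed for a given number of uses."""
        owed = {}
        for c in self.components:
            owed[c.provider] = owed.get(c.provider, 0.0) + c.unit_price * uses
        return owed

service = ComposedService("iot-telemetry", [
    ServiceComponent("edge-cache", "OperatorA", 0.002),
    ServiceComponent("event-hub", "OperatorB", 0.001),
])
print(service.settle(uses=1000))
```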

The final barrier I see is vendor competition. I’ve been a part of a number of international standards, specification, and industry groups, and in all of them there was as much vendor jockeying to block progress for competitors as there was to advance the goals of the group. VMware is the biggest dog in the Open Grid Alliance pack, and surely they’re broad enough in product terms to be competitive with a lot of others. Will those other vendors 1) avoid the group because they suspect VMware is getting the most out of it, or 2) join and then do everything in their power to stymie progress? I don’t think a third option of joining in a happy spirit of community is really an option at all, based on past history.

Even if all these barriers were breached, I don’t think we could fairly characterize the effort as having “rebuilt” the Internet. In point of fact, given the bill-and-keep model of the Internet, I don’t think that edge computing and other enhancements would figure into the Internet at all, for lack of a way to incentivize the players. What would likely happen is perhaps more useful anyway, which is that it would provide a way of meshing the cloud and telecom. If that’s the value, then the key player in the OGA is again VMware, simply because they’re the player most engaged beyond the Internet.

The Open Grid Alliance’s future depends on what VMware decides to do with it. What they need to do is to quickly lay out a model of the architecture of an “open grid”, and get work started on fleshing that out. Others who join the group will have to support the architecture as a condition of membership. VMware’s blog on the announcement suggests that they see the group primarily as a way of dealing with edge computing, with a bit of cloud joined in. The foundation issue in the edge space is 5G O-RAN hosting, because little else is likely to drive a lot of near-term deployment. Thus, how VMware makes its Telco Cloud initiative work may be the determinant of OGA’s fate.

And VMware’s fate too. If operators decide to host O-RAN in the public cloud rather than deploying their own stuff, and if they use public cloud provider web services to implement O-RAN rather than just using IaaS or containers to host an arbitrary implementation, then VMware will have a hard time promoting the OGA. Since they’re the biggest player in the founding group, you have to believe they believe it’s essential to their future, so they can’t afford to have it fail.

Thoughts on the VMware/Dell Separation

VMware is finally going to become independent of Dell. The deal is complex and still needs an IRS ruling on the tax impact to be delivered before it’s final, but it would resolve some long-standing questions about whether being largely owned by a computer vendor would compromise VMware’s credibility where Dell isn’t the primary supplier of hardware, or perhaps isn’t a supplier at all.

VMware’s relationship with Dell may be important, but I don’t think it’s the biggest issue VMware faces. That matters because if the shift to independence doesn’t resolve that most critical issue, the turmoil that’s often created by changes in a company’s business framework may hamper VMware’s strategic progress overall.

All you have to do to understand VMware’s roots, and its risks, is to consider the name of the company. “VMware” is “Virtual-Machine-ware”, reflecting the company’s early dominance of hypervisor-based server partitioning into multiple virtual machines. Enterprises used VMware to multiply the capacity of their servers and data centers, and as the enterprise evolved, VMware has been able to sustain its relationship with CIOs and the IT organization, up to now.

Three specific things have emerged to generate a risk to that relationship. First, container technology is rapidly becoming the preferred approach to allow enterprises to multiply the capacity of their servers. Second, the emergence of public cloud computing has shifted enterprise software focus to the public cloud, but not by “moving” applications there. Finally, the emergence of the use of hosted software in creating telecommunications services and features has changed the hardware dynamic to a broader notion than “servers” or “data center”. How VMware responds to these three factors will determine its future, whatever its ownership structure might be.

Virtual machines partition servers below the operating system, which means that every tenant VM on a server looks like an independent server, with its own operating system, middleware, and so forth. That’s great for isolation of tenant applications, but the additional overhead limits the number of VMs that a server can host. Containers split off applications by separating “namespaces” rather than the hardware. A single OS is used, and some middleware may also be shared, so the overhead of a container is less. In addition, container technology was really promoted primarily to facilitate portability of applications and simplicity of deployment, so there are positive container benefits beyond resource efficiency.

VMware didn’t jump out as a big container player, perhaps to protect its VM incumbency. That decision collided with the emergence of public cloud services, which were initially focused on “infrastructure-as-a-service” or IaaS, which could have been called “VMs-as-a-service” because that’s what they were. VMware’s cloud strategy seemed to be linked to the presumption that running the same VMs both in the cloud and on the premises created the ideal “hybrid cloud” model. That in turn tied VMware to a single, simple, hybrid cloud architecture proposition, always a risk in a rapidly evolving technology space.

Then, as the saying goes, came the telcos and “telco cloud”. For a decade, the network operators have known that they had to take advantage of the agility and richness of software in defining future network infrastructure and services. NFV was an attempt to frame how software could be used in virtualizing network functions, and while it didn’t succeed, it did validate the notion of “universal CPE” (uCPE), which really means white-box hardware that’s separated from software. White boxes are disaggregated network devices, and it’s difficult for buyers to accept that you need to run hypervisors on these devices to partition them into VMs (even though there’s a hardware abstraction benefit to that).

5G specifications include using hosted virtual functions for the functional elements rather than fixed appliances, and the open-model 5G initiatives like O-RAN further define a software-centric approach to implementing 5G, which means a hosting-centric shift in infrastructure planning. So far, indications are that operators favor white-box elements toward the edge, which might well mean that the majority of O-RAN hosting could be in white boxes. However, operators are thinking more and more in terms of containers even for data center hosting; recent changes in NFV to support containerized hosting are an example.

The interesting thing is that if you look at the specific assets of VMware versus its major virtualization-and-cloud-software competitors, VMware stands tall. I’ve noted multiple times that I think their technology is at least tied for the best in the space, if not the singular best. Their strategic positioning of those assets seems to be the problem, and of course this is an issue with a lot of (well, yes, maybe most) vendors. While vendors seem happy to over-hype things like IoT and 5G, they under-position their own assets. This is likely because senior management believes that product positioning is really all about supporting the next quarter’s sales, while “future technologies” are fair game for wild predictive extravagance.

This sets up the big question regarding Dell and VMware, which is whether VMware’s issues in our three critical areas were created and sustained by Dell’s goals rather than VMware’s own interests. Whatever the cause, VMware has to address two major points, IMHO. First, VMware has to frame a better position on “hybrid cloud”, by first recognizing what’s really going on in the public cloud, then what’s happening in the data center, and finally how the two should optimally support each other. Second, VMware has to make its VMware Telco Cloud business the absolute leader in what’s becoming a very competitive space.

Right now, VMware at least accepts the broad view that their hybrid cloud strategy is to run VMware virtual machines on premises and in the cloud. They’ve created partnerships with public cloud providers to promote that goal. Obviously, harmony of hosting doesn’t create application integration, and in any event the big challenge is that enterprises want containers on premises and (increasingly) managed container services in the cloud. VMware actually supports what people need, but they’ve been shy, even reluctant, about saying so. This should be an easy problem to fix, because Tanzu, VMware’s container and Kubernetes portfolio, is (as I’ve already said) at least the equal of anything else out there. All they need to do is sing better.
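The “containers on premises and managed container services in the cloud” point is easy to illustrate. The sketch below is a generic Kubernetes example using the standard Python client, not a description of Tanzu’s own tooling, and the cluster context names are hypothetical; the point is simply that the same workload definition lands unchanged on whichever cluster the kubeconfig context points at.

```python
# Minimal sketch: one container workload definition, deployable unchanged to an
# on-prem cluster or a managed cloud Kubernetes service. Context names are
# hypothetical; this uses the standard Kubernetes Python client, nothing
# VMware-specific.
from kubernetes import client, config

def make_deployment(name: str, image: str, replicas: int) -> client.V1Deployment:
    labels = {"app": name}
    container = client.V1Container(name=name, image=image)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels=labels),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels=labels),
        template=template)
    return client.V1Deployment(metadata=client.V1ObjectMeta(name=name), spec=spec)

def deploy(context_name: str, deployment: client.V1Deployment) -> None:
    # The kubeconfig context decides whether this lands on premises or in a
    # managed cloud service; the workload definition itself never changes.
    config.load_kube_config(context=context_name)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)

if __name__ == "__main__":
    app = make_deployment("front-end", "nginx:1.21", replicas=3)
    for ctx in ("onprem-cluster", "managed-cloud-cluster"):  # hypothetical names
        deploy(ctx, app)
```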

VMware also has some very innovative thinking in their Telco Cloud space, in no small part because it can build on their Tanzu framework. Their challenge again may be that VM-centricity. The network of the future is going to consist of three separate things—remnants of legacy technology (large at first, and likely for at least four or five years), white boxes, and the fusion of hosted functions and “carrier cloud”. How to build the second and third pieces of this, and how to keep all the pieces managed and integrated as they evolve under their own pressures and constraints, is a problem VMware needs to solve. I think they’ve actually got it solved at a technical level, but it’s harder to say that with confidence than it is with the container issue, because there’s less specific detail on their site to confirm my judgment.

As a final point, VMware is one of the companies that have united to form the Open Grid Alliance, a group promising to “rearchitect the Internet”. I’ll blog on that later this week, but there are a couple of important points to note about the intersection of that group and the telco positioning issue. Point one is that much of what the alliance is targeting is really telco infrastructure as much as Internet infrastructure. Point two is that another member of the Alliance, DriveNets, is a white-box-based cluster/cloud router vendor. Yet the VMware press release doesn’t mention VMware Telco Cloud or make any commitment to support white boxes.

That’s worrying, frankly, because I think Telco Cloud is one of VMware’s most strategic concepts, and something they just cannot afford to get wrong. It would be very unfortunate if the OGA ended up knocking VMware out of kilter in the space, and yet the industry is replete with examples of alliances gone wrong. VMware needs to work hard to be sure the OGA doesn’t become one of them, but it also has to make sure that it dances to its own telco tune, because expectations for VMware will build now that they stand on their own.

Assessing Juniper’s Cloud Metro Strategy

Fusions of networking and cloud aren’t new, but it’s at least novel for such a fusion to be proposed by a network equipment vendor rather than a cloud software vendor. Similarly, it’s not news that the metro portion of the network is the key to the network overall, but it’s at least novel to combine this point with the cloud. Juniper’s “Cloud Metro” announcement targets exactly what the name suggests, and Juniper has an impressive set of assets in play, perhaps the best in the space. Do they play them in the announcement? We’ll look at the demand side, the justification, first, and then see how well Juniper does in addressing it, both technology-wise and in positioning.

The largest source of traffic on the Internet is video, and the majority of that traffic is delivered via content delivery networks (CDNs). This is true, but not new. Edge computing would be located in the metro area, if one counts access as part of metro. Also true. 5G, and in particular O-RAN and open-model 5G, will rely more than ever before on hosting of features, and the place that’s likely to happen is in the metro. All this means that Juniper’s opening thesis on their Cloud Metro story is true, but that any novelty in it has to stem from edge computing and 5G, and I contend that the two are symbiotic.

CDNs are hosted elements, and of course edge computing and 5G O-RAN require hosting too. Hosting here means “resource pool”, which means “cloud computing”. We can see that association in the cloud providers’ interest in both O-RAN/5G and edge computing. We can see it in the “Telco Cloud” initiatives of HPE, IBM/Red Hat, and VMware/Dell. You don’t get competition from multiple segments of the market for an opportunity area that has no realistic justification. Juniper, representing network equipment vendors, surely sees their position in the metro threatened by wholesale substitution of cloud technology for network equipment. They need a metro story, and they’ve created one.

“Cloud Metro” is the tag line for Juniper’s story, and it’s a good one. The metro sites of networks are where the action will be, where all the traffic and feature injection issues resolve optimally. There may be edge computing, but the big resource commitments will be in the metro, making each metro area into what is in effect a distributed data center.

Juniper believes that the revenue opportunities represented by network services are moving up the value chain, perhaps not as high as the OTT space but certainly higher than bit pipes. The new features that will cement the value proposition for these new services are hosted as functional elements and then assembled into cohesive experiences. The process of assembling them has to include threading workflows among them, and those workflows run throughout the metro area.

Old-time services were linear delivery pipes; new services will be mazes of work exchange among functional components, creating the stuff known as “horizontal” traffic, just as it has emerged inside data centers. The solution is to create a “metro fabric”, something that can provide high-speed exchange for that mesh of function traffic.
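A toy sketch makes the contrast visible. The function names and workflow edges below are invented for illustration only; the point is that a linear pipe has almost no function-to-function hops, while a feature-rich service multiplies the east-west exchanges a metro fabric would have to carry.

```python
# Toy model: count "horizontal" (function-to-function) workflow exchanges in a
# legacy linear service versus a meshed, feature-rich service. Function names
# and edges are invented purely for illustration.

def horizontal_edges(workflow: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Edges whose endpoints are both hosted functions (not the user or the network edge)."""
    terminals = {"user", "network-edge"}
    return [(a, b) for a, b in workflow if a not in terminals and b not in terminals]

# Old-style service: a linear delivery pipe.
linear_service = [("user", "access"), ("access", "core"), ("core", "network-edge")]

# New-style service: hosted features exchanging work with one another.
meshed_service = [
    ("user", "access"),
    ("access", "cdn-cache"), ("cdn-cache", "auth"), ("auth", "policy"),
    ("policy", "cdn-cache"), ("cdn-cache", "analytics"), ("analytics", "policy"),
    ("policy", "network-edge"),
]

print("linear horizontal hops:", len(horizontal_edges(linear_service)))
print("meshed horizontal hops:", len(horizontal_edges(meshed_service)))
```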

The metro fabric is created using high-speed interfaces on Juniper’s aggregation products, and two new high-end aggregation switches are included in the announcement. These are combined with flexible traffic steering for connections, automated operations to ensure that the added workflow complexity doesn’t drive up opex, and a more sophisticated monitoring capability that lets operators understand more about the state of services, rather than focusing on isolated boxes and trunks.

As part of the metro fabric, Juniper is bringing a form of 5G-like slicing to an IP network. While they’re not saying this is totally 5G compliant (the standards aren’t done yet), they are participating in the 5G standards process to ensure their stuff tracks the 3GPP. The slices can be used by any service, and they create transport traffic classes that can be used to separate things based on differences in QoS needs, or simply for security isolation.
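To illustrate what a slice-as-traffic-class looks like in practice, here’s a generic sketch of the concept, not Juniper’s implementation; the slice names, forwarding classes, and DSCP markings are assumptions chosen for the example. Each slice maps onto a transport treatment, so flows get separated by QoS need or simply kept apart for security.

```python
# Generic sketch of slice-to-traffic-class mapping; not Juniper's implementation.
# Slice names, classes, and DSCP values are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class TransportSlice:
    name: str              # service using the slice
    forwarding_class: str  # transport traffic class it maps onto
    dscp: int              # DiffServ code point used to mark its packets
    isolated: bool         # True if the slice exists purely for separation/security

SLICES = [
    TransportSlice("consumer-broadband", "best-effort", dscp=0,  isolated=False),
    TransportSlice("5g-urllc-backhaul",  "low-latency", dscp=46, isolated=False),
    TransportSlice("enterprise-vpn",     "assured",     dscp=26, isolated=True),
]

def classify(slice_name: str) -> TransportSlice:
    """Pick the transport treatment for a flow based on the slice it belongs to."""
    for s in SLICES:
        if s.name == slice_name:
            return s
    return SLICES[0]  # anything unrecognized falls back to best effort

print(classify("5g-urllc-backhaul"))
```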

Perhaps the biggest piece of the announcement is Paragon Automation, which includes some Juniper assets (Netrounds) and third-party elements. The goal is to allow operators to define scripted responses to network events or network tasks, which both simplifies operations and reduces errors. The Netrounds piece is especially important because it introduces active service quality assurance rather than relying on explicit failures to trigger actions.
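To make the “scripted response” idea concrete, here’s a minimal, generic sketch of event-driven automation. This is not Paragon’s actual API, and the event names and actions are invented; the point is that active assurance can raise a “quality degrading” event and trigger a scripted action before anything actually breaks.

```python
# Minimal, generic sketch of scripted responses to network events. This is NOT
# Paragon Automation's API; event names and actions are invented for illustration.
from typing import Callable, Dict, List

HANDLERS: Dict[str, List[Callable[[dict], None]]] = {}

def on(event_type: str):
    """Register a scripted response for a given network event type."""
    def register(handler: Callable[[dict], None]):
        HANDLERS.setdefault(event_type, []).append(handler)
        return handler
    return register

def dispatch(event: dict) -> None:
    """Run every script registered for this event, in order."""
    for handler in HANDLERS.get(event["type"], []):
        handler(event)

@on("sla_degrading")  # raised by active test traffic, not by a hard fault
def reroute_before_failure(event: dict) -> None:
    print(f"rerouting {event['service']} away from {event['segment']}")

@on("link_down")
def open_ticket(event: dict) -> None:
    print(f"opening trouble ticket for {event['segment']}")

# An active-assurance probe notices latency creeping up on a metro segment and
# triggers remediation before users see a failure.
dispatch({"type": "sla_degrading", "service": "vpn-123", "segment": "metro-east-7"})
```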

There is a metro demand. There is a strong indication that 5G and O-RAN will expand it. If IoT is real, that will contribute more, and my drivers for “carrier cloud” would deploy massive cloud-like resources to the edge. In short, there is a real opportunity here for Juniper to realize. Juniper’s Cloud Metro approach does address the opportunity in a practical and sometimes creative way, but the real question is whether Cloud Metro moves the ball in a way that would be compelling to buyers.

Network vendor strategies, in general, tend to be sales-cynical, meaning they’re designed to be positioned effectively rather than to address the actual opportunity at the technical level. Juniper is a network vendor, and their own positioning has IMHO blown market-hype kisses while generally falling far short of the potential of their technology.

There is nothing really new in Cloud Metro, making it more a solution architecture than a product announcement. Since it’s made up of fairly conventional pieces or third-party tools, Juniper’s competitors could launch a counterpunching attack. Could Juniper’s Cloud Metro really hold a place in a “cloud metro” world? To be assured of success, it would have to meet three tests. First, the positioning would have to address a simple truth, which is that server-hosted high-capacity routing is not the right answer. Boxes and hosting will coexist. Second, the positioning would have to include something specific with regard to O-RAN, not just a pleasing smile in O-RAN’s direction. Finally, there would have to be some specific data center hosting element to the story, to offer a reasonable positioning in “the cloud”. If Juniper’s technology and positioning measure up in these three areas, then the strategy behind their announcement is sound. If not….

The inevitable marketing lead-in slides in Juniper’s announcement deck don’t raise any of these points explicitly, which doesn’t necessarily mean that Juniper doesn’t address them, but does mean that it’s harder to pin down the specific ways that could be happening. We’ll have to get interpretive here, on our own.

I think that Juniper defends that first point adequately. The metro zone is the logical place where feature complexity would be introduced. It’s close enough to the edge that the traffic flows and performance requirements make feature customization at metro scale practical, but deep enough to concentrate enough resources to justify optimizing for economy of scale. The concentration of service features, almost surely hosted rather than instantiated in appliances, means there is likely to be a need for data-center-like horizontal traffic support, so “Cloud Metro” is an appropriate term for the collective infrastructure mission.

The second point, which is an explicit acknowledgment of the role of O-RAN in creating metro hosting, and support for that role, is less easily drawn out of Juniper’s position. They do show “vRAN” as one of those service-feature focus points in the Cloud Metro, and they do refer to the virtual CU and DU elements of 5G O-RAN, but they don’t reference 5G/O-RAN beyond identifying its transport elements (the CU/DU) as connection points to be served.

The final point, the need for a specific data center piece to the story, is the one I found the most surprising. Juniper acquired Apstra, a solid and innovative cloud data center automation play, but nothing is said about Apstra in the announcement. Given that arch-rival Cisco has data center products and has even adopted the “intent model” terminology that Apstra has used from the first, it doesn’t make sense to me to leave the Cloud Metro strategy hanging without a strategy to host the cloud. So much depends on buyers accepting this as a complete solution model, an all-Juniper strategy, that the hole in the story is not only hard to understand, it’s a risk.

It’s also hard to understand why Juniper didn’t work to tie in two other acquisitions, 128 Technology and Mist. The virtual networking features of 128 Technology could be highly useful in 5G control-plane traffic, and in the control-plane behavior of other services like CDNs. Mist technology could, at the very least, provide a managed services framework at the edge of Cloud Metro, in combination with 128 Technology. Why not make a big play, one that explains and addresses all the key trends? Why leave O-RAN hosting to a part of the user plane? Why not build the cloud data centers with Apstra and not just connect them?

Juniper has always had good, even great, technology. At the peak of its success just short of two decades ago, it also had great marketing and positioning. The former is still there, but the latter seems to have been lost. I think Cloud Metro could have been a much better tale to tell, and there are even pathways available to make it a great positioning, one that Juniper could hope to exploit and that they could also hope would throw competitors off-balance. It’s still a good concept, but not so much a good story, and Juniper needs one of those to prosper.

Is Open RAN on the Path to Dominance?

We have tiny operators endorsing O-RAN, and some giant operators saying they’ll be deploying it (at least in some areas) in 2022. We have giant vendors dissing it, while another giant vendor opens a lab that seems aimed at encouraging it, and a third seems to be promoting it actively. Two vendors have gotten together to address the biggest technical issue that’s been presented as a reason not to adopt it. Lots going on, but what’s underneath all the talk?

First, operators are telling me that they absolutely want to see an open RAN for 5G. That’s true of almost 100% of operators, in fact. Well over 80% say that they will be deploying it in at least part of their service footprint. Not only that, network vendors are expecting to see open RAN deployments and are looking at how their own products can be woven in to take advantage of the momentum.

Second, if there is a market for private 5G at all, that market is going to be based on an open RAN implementation. Few enterprises have done any rational planning of a private 5G deployment, but those who have don’t hesitate to tell me that they are presuming they would use an open RAN strategy from the first, everywhere.

Third, open RAN is well beyond being “half-baked”, but it’s not iced yet. There are way too many pieces of technology involved, too many vendors in some areas and too few in others, and the nature of the relationship between open RAN solutions, 5G overall, and IP networking, is far from clear even to network operators with savvy staff planners available. For those without the in-house skills, the whole picture is so murky that most don’t even know where to start or who to ask.

Fourth, by 2022 we will have reached a critical point in the open RAN space, the point where we either resolve all the significant issues with the open-model approach to 5G, or we throw in the towel and admit that 5G has to be a vendor-specific deployment. Huawei, the giant but politically troubled price leader in networking, seems confident that the open approach will fail. Or, maybe, they’re confident that their own future depends on its failing. Either way, there’s sure to be a giant throwing shade on whatever happens in the open RAN space.

I’ve heard a dozen different presentations by open RAN suppliers and integrators, and the truth is that none of the ones I’ve heard would address all the concerns that operators are expressing in my dialogs with them. The suppliers seem to fall into one of two categories: either they’re savvy and technically optimized but too small to have any business credibility, or they’re big, credible firms who are trying to shoehorn open RAN into their current technology direction.

In other words, it’s not impossible that this whole wave will in fact collapse, and that Huawei’s dismissal of the concept and Ericsson’s maybe-fingers-crossed endorsement are prudent choices that will be proven correct within a year.

What’s the problem? There’s more than one, and there may even be a cascade of them.

To me, the big problem is that 5G isn’t even finished as a standard. The 3GPP’s 5G core specifications already anticipate a Release 18 that hasn’t even started, and Release 17 won’t be finalized and frozen until mid-2022. Almost all of what we hear about today in the media, and all that’s available in real services, are either implementations of 5G RAN overlaid on 4G networks, the so-called “non-standalone” or NSA version, or pre-standard versions of the standalone 5G model. Most vendors think that implementing anything in the 5G core before Release 17 gets to its Stage 2 freeze point this summer is a risk. All this means that we’ve not really had a model for all of 5G; we’ve been kissing its fingertips. That makes it difficult for a vendor to present the entire story of 5G credibly, because they’d be unable to deliver a standard implementation if somebody liked what they heard.

The second problem is that vendors are about making money first and foremost, and about solving problems or addressing opportunities only if they make money. There will surely be mobile network equipment vendors (like Ericsson, Huawei, and Nokia) who will be providing everything needed, because that’s how they do make money. The problem is that open-model networking in general, and open RAN (O-RAN) in particular, doesn’t even have many vendors who offer everything in the RAN space, much less the rest of 5G. You can see that by the fact that we’ve only now heard that there’s a solution to the massive MIMO problem that’s been a stumbling block for open 5G RAN strategies.

The third problem is that the question of the business case for 5G hasn’t been answered in a way satisfactory to many operators and vendors. Sure, the biggest reason the media is full of 5G hype is that hype is the foundation of click bait, which is the foundation of media revenue, but the pace of 5G adoption depends on whether it’s simply an evolution of 4G or whether it opens a new revenue stream.

How do we solve these problems? “We” almost surely don’t, in the sense that the collective pronoun represents some sort of community effort. The problems are going to get solved because some big player decides to solve them. Who will that player be? A cloud software player or a cloud provider.

In this corner, as the boxing introductions go, we have the cloud software giants, Dell/VMware, HPE, and IBM/Red Hat. These players understand how to build software, and large and complex software ecosystems. They probably have the majority of the pieces of a complete and open 5G, including and especially the RAN, and they certainly have enough of the pieces to be able to make money selling an integrated strategy. That means they could step up and draw a diagram of what open 5G would look like, and they wouldn’t have to get former sideshow pitchpeople to give the presentation.

In the opposite corner, we have the cloud giants, Amazon, Google, and Microsoft. These are the companies who understand hosting as a service, which means they understand both hosted function deployments on which 5G is based, and “as-a-service” in terms of a consumption model. They know that they could make open 5G into a populist revolution, something that could launch itself as a social media craze, and that’s a good thing. They also know that if they do that, and if they tap off the hosting that 5G would create, they could keep a major competitive group out of the market. That market could amount to one hundred thousand data centers, so keeping competitors out is an even better thing.

Every one of the players I’ve mentioned, in both corners, has the technology resources to make open 5G RAN, and open-model 5G overall, succeed. They could make money, a lot of money, on it. But every one of those players also has a fear, perhaps the greatest and most paralyzing fear that any seller ever faces: the fear of the educational sell.

Seller comes in for a meeting with Buyer to present WizzyOpen 5G, the Next Big Thing in Just About Everything. Buyer is dazzled by the PowerPoint, drinks a few lunches, and when the seller whips out the order book to get something signed, says “OK, all I need is for you to help me make the business case for my CFO, get my staff trained, write a contract that unloads every possible risk I might have onto you, and set up the service sale program that I’ll use to sell the services WizzyOpen produces to my prospects.”

No seller could possibly even attempt that, because they know that once they do all that up-front heavy lifting, somebody else will jump in and sell SuperWizzyOpen at a discount (because they didn’t have to bear the cost of all that fluffery). The sellers also know that no salesperson could ever make quota if they sat around waiting for all those objection dominoes to fall, so their whole salesforce would be doing job interviews instead of sales calls. Case closed.

That raises the biggest problem, which is buyer education. And no, don’t say that the buyer will get it online. Many believe my blogs are interminable; they want their insights digested into 500 words or less. I did a quick calculation, and a minimum complete 5G story of a credible level of detail would require 150 slides and 50,000 words of text. Who would develop that? The vendors would all wait for a competitor to do it, saving them the trouble.

We have to figure out how to get 5G presented holistically. We have to get open RAN and open 5G framed out and presented in that context. If that can be done, then we are assured of open RAN success. Do we get an industry group to take it on? It will take three years to get consensus on what should be in it. So how?

The only realistic avenue I see for 5G education in general, and for open RAN education in particular, lies in the vendor certification programs. However, these programs usually focus on individuals who are responsible for maintaining a product set rather than considering it, and they usually develop after the product set has been sold successfully. For open RAN in particular, the risk of depending on vendor certification is clear: how does a vendor justify the cost of such a program without seeding the market with literate buyers for competitors to exploit?

Time, and opportunity, is how. At some point, one of the three sellers in one of the two corners will get so complete a solution put together, and will get so powerful a presentation of that solution built, that they’ll know they can jump in and steal the majority of the opportunity before others can build on their educational success. The question is whether the open RAN market can wait for that. I think it’s going to be close.