Getting Telecom Beyond the Dumb Pipe

Many people have heard the “A rose by any other name…” quote. Let me offer a network technology slant on that, which is “A dumb pipe created by any technology is still a dumb pipe”. Given that we’ve got operator and vendor commentary that takes the opposite stance, we apparently need to look a bit at why technology doesn’t trump mission for either wireline or wireless. We also need to look at why new technologies could create new services, but not because they smarten dumb pipes.

People and companies share a lot of things, and one is a resistance to change. The longer a given practice has been followed, a given strategy accepted, the harder it is to displace it. Network operators have been offering the service of connectivity for what’s surely longer than anyone on the planet has been alive. It’s no wonder that they see every “new” service as some new spin on connectivity.

Wireless, meaning cellular telephony and broadband, is new in the sense that it’s not tethered, and so becomes a personal information conduit that follows the user around (as long as they don’t forget their phone, of course). That proved to be a valuable capability, and so it’s prospered, but at the same time it validated the operator preconception that if you looked hard and long enough, some comfortable extension to connectivity services would ride in to save the day. All the commentary on 5G, IMHO, stems from that preconception.

Fundamentally, 5G is just an upgrade to mobile networking. I’ve had a 5G phone for some time, and it’s hardly been life-changing, or in truth even offered a noticeably different experience. Then, of course, many would argue that making a mobile phone call wasn’t a noticeably different experience either; it was only the context that was different. I think that’s why so much of 5G interest early on focused on IoT. If you had to connect billions of devices in addition to billions of people, you might well need new and revolutionary technology. But it’s clear by now that we don’t have that need today, and we may never have it.

How about things like network slicing? You can get your own virtual private cellular network, you can create separate networks for separate missions. Isn’t that worth something? Hearken back to the 70s when we saw voice services based on a network-hosted PBX, something called “Centrex”. It got some play, but it didn’t change operators’ fortunes, even in modernized IP-PBX form. As far as having mission-specific networks, that’s useful only if there are missions that ordinary broadband Internet won’t support, that users need support for, and that regulatory policies won’t declare to be a form of paid prioritization.

The driver behind this is simple, IMHO: mobile services are no longer the dependable source of high margins they were in the past. Connectivity is a means to an end and not an end in itself, so its value accrues to whatever it enables. That’s the bad news for 5G and dumb pipes. Is there any good news? Maybe.

5G takes a step, perhaps a decisive step, toward unifying computing and network technology as the framework for services. The sad truth is that when operators and vendors try to use 5G as a crutch for making dumb pipes valuable, they’re ignoring its potential to make dumb pipes smart, and that is the future of connectivity if there’s any future beyond commoditization.

The Internet is an important indicator here. It’s a network, but first and foremost it’s an experience host. The value of the Internet lies only partly in its near-universal connectivity. The other part, the important part, is the support for what people want to be connected with. It shifted us from connection as the experience to connection with the experience. There’s a good argument to be made that the network and the experience are always one, but that means that when the experience is beyond connection, the network has to somehow integrate as tightly with it as possible so as to rebuild its own value. That means unifying connectivity and experience hosting, and that is something 5G could advance, though at this point it would likely have to be an indirect sort of support, in three areas.

The first area is identity routing. From the first, there’s been a discussion in IP networks as to how we address things, or what things we’re addressing. In traditional networks, there is a “network service access point” or NSAP, and this is what the network sees as a user address. In mobile networks, mobility means that the relationship between a user and an NSAP has to be more agile, and in the IETF there have been a number of location-independent routing notions floating about. Mobility management a la 5G (and earlier) is helpful for addressing users who move within a relatively contained area. It would be better to have a strategy that would address users who are “mobile” through any geography, and users who are “portable” in that they may operate from a variety of locations.
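To make the identity-routing idea concrete, here’s a minimal, purely illustrative sketch of identifier/locator separation in the spirit of the IETF’s location-independent routing work (LISP being one example). The class and naming conventions here are invented for illustration, not any real protocol API: the user’s identity stays constant while a mapping system tracks the current attachment point.

```python
# Hypothetical sketch: identity routing via an identifier/locator mapping.
# A user's stable identity is resolved to a current network locator (an
# NSAP-like attachment point) that changes as the user moves or relocates.

class MappingSystem:
    """Tracks where each user identity is currently attached."""

    def __init__(self):
        self._map = {}  # identity -> current locator

    def register(self, identity, locator):
        # Called when a user attaches (or re-attaches) to the network.
        self._map[identity] = locator

    def resolve(self, identity):
        # Called by the network to find where to deliver traffic;
        # returns None if the identity has no current attachment.
        return self._map.get(identity)


ms = MappingSystem()
ms.register("user:alice", "locator:metro-nyc-03")  # attached at home
print(ms.resolve("user:alice"))                    # locator:metro-nyc-03

ms.register("user:alice", "locator:metro-lon-11")  # user travels abroad
print(ms.resolve("user:alice"))                    # locator:metro-lon-11
```

The point of the sketch is that the identity never changes; only the mapping does, so “mobile” users who roam any geography and “portable” users who pop up at different fixed locations are handled the same way.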

The second area is layer relationship management. IP has a data plane and a control plane. 5G has its own “control plane” and considers both IP planes to be its “user plane”. We also have the venerable OSI 7-layer model, which has been steadily augmented with sublayers and which doesn’t conform to the layer structure of IP. We really need to go back to basics here and redefine how networks mix control and data, end-to-end and per-hop, and so forth. Maybe we need to accept that there is no standard layer set, and adopt a model that allows for arbitrary layering. This is what I think “intent modeling” might do, for example.
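As a thought experiment on what arbitrary layering might mean, an intent-modeled “layer” could be just a black box that advertises what it delivers and wraps any other box, rather than occupying a fixed slot in a seven-layer stack. This is a sketch under that assumption; the class and property names are all invented.

```python
# Illustrative sketch: arbitrary layering via intent models.
# Each "layer" is an opaque element that advertises intents (what it
# delivers) and can wrap any other element, so stacks can be any depth
# and any order rather than conforming to a fixed layer structure.

class IntentElement:
    def __init__(self, name, intents, inner=None):
        self.name = name
        self.intents = intents  # e.g. {"encryption": True}
        self.inner = inner      # the element this one wraps, if any

    def composite_intents(self):
        # The intents of a stack are the merged intents of its elements,
        # with outer elements overriding inner ones where they overlap.
        merged = self.inner.composite_intents() if self.inner else {}
        merged.update(self.intents)
        return merged


# Compose a three-deep stack in an order no standard model dictates.
transport = IntentElement("transport", {"delivery": "best-effort"})
crypto = IntentElement("crypto", {"encryption": True}, inner=transport)
session = IntentElement("session", {"delivery": "reliable"}, inner=crypto)

print(session.composite_intents())
# {'delivery': 'reliable', 'encryption': True}
```

Here the outermost element promises reliable delivery and inherits encryption from the element it wraps; nothing in the model cares how many elements there are or what order they stack in, which is the “no standard layer set” notion in miniature.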

The third area is session awareness. All networking is justified by experience delivery of some sort, and an experience is a relationship between a consumer and a provider that endures for some reasonable period. The network analog of an experience relationship is a session, something the OSI model defines as living in the fifth layer. How do we dependably understand when a session is being established? There are inspection approaches (Juniper’s Session Smart Routing is among the best, if not the best, examples of this), but could we make session boundaries explicit? If so, we could identify session requirements in terms of service features, which would then permit dynamic mapping to services.

There is a lot of potential in the integration of hosting and networking, for operators and for the industry at large. The problem is that we’ve accepted goals for that integration that are vague, insipid, conventional, unrealistic, and sometimes all of the above. If we really want to make networks more than dumb pipes, we need to use function hosting to attack areas where connectivity and experiences merge, and that starts by identifying where those points are. Some good discussion on this, endorsed/sponsored by operators, would be very helpful. It could even be essential for the industry’s overall health.

Is the Broadcom deal for VMware a smart, even pivotal, move?

OK, Broadcom is buying VMware, and most of the comments I’ve seen from industry analysts or the Street have been, well, doubtful. I guess that makes it fair play that I have doubts about their doubts. There are potential issues raised by the deal, but they’re not spectacularly different from the issues raised by any large M&A. There are also potential signals of a shift in the industry that could be very important indeed. I’ll start with the issues and move on to the signals.

The first issue is the classic “channel competition”. Broadcom is primarily a hardware company, and its products are used in a lot of devices that also run higher-level software. VMware, as a major supplier of higher-level software, will surely compete with some of the stuff current Broadcom customers offer. Some might see that competition as a reason to seek an alternate chip supplier. I don’t see this as a big deal; Broadcom has offered cards and devices as well as chips, so there’s always been a bit of an overlap between levels of their products and offerings of others that are based in part on Broadcom elements. In networking, recall that they bought Brocade earlier, so there’s also been a bit of overlap in the networking space.

One way Broadcom could address this issue is to visibly embrace standard architectures. The P4 standard for switching chips is an example; Broadcom has its own model, but P4 is backed by the ONF and others. An open link between lower-level hardware components and the software, including VMware’s stuff, would help ease any concerns software firms might have in supporting the hardware elements in software that’s competitive with VMware.

The second issue is the potential dilution of management attention, which some industry and financial analysts see as a risk to VMware customers. This is a harder issue to dismiss because it’s based on something very subtle, but it’s hard for me to understand why Broadcom would make its largest-ever acquisition and then poison the customer base of the company they bought.

Moving on to signals now, the most obvious is that the deal suggests a shift toward a single-source play for a complete product, breaking away from a tendency for the industry to separate IT hardware and IT software in terms of suppliers. Broadcom could, in theory, offer a package of technology that would be mutually supporting, and that could work against competitors who offer only piece parts. This issue is particularly important because VMware is strong in the data center and hopes to be strong in networking, and Broadcom has hardware components/chips in both these spaces.

Why, you could rightfully ask, are single-source plays for all the elements of an IT device important? Most enterprise buyers could answer that one for you. It’s been increasingly difficult for enterprises to hire and retain highly qualified technology teams, because most such people think their prospects are brighter working for a vendor. Every year, things like virtualization and white-box networking come along and make it more challenging to stay ahead of critical tech developments. The users find it harder to integrate the pieces of technology needed to deploy a unified IT or network device, so they want to lean on vendors. Vendors are happy to prop things up, as long as it doesn’t hurt their own bottom line. The more skin they have in the game, the more likely they actually have all the pieces needed themselves, the more likely it is that they’ll be willing to do the requisite propping.

Networking, even the network interface to servers, is already heavily dependent on custom silicon and network adapter cards. It’s also obviously dependent on software, and so it’s a place where creating a total solution could save users headaches and at the same time create better margins for the vendor who creates that total solution.

Differentiation, or lack of it, also enters in here. You can justify higher prices and margins if you offer something others don’t have. If there is no “something” that’s a recognized differentiator, then price is what matters. The more technology elements you can provide to build a usable unit of deployed IT or networking, the more likely you can hold off competitors where differentiation is difficult, because your own margins are strong.

All of this is a sort-of-selfish justification for the deal, I admit, but there’s also a broader industry signal that may be in play here. When a network or IT device is assembled from the hardware and software pieces provided by multiple players, there is a real risk that the sum of the business goals of all the players doesn’t lead to an optimum-from-the-user-perspective solution. Software vendors want to sell their software, and hardware vendors have a likewise-self-interested vision of the market. Could a company who sold both, and drove development in both spaces, find it easier to create an optimum hardware/software partnership? I wonder.

Virtualization, the cloud, the edge, white-box networking, 5G, and even things like AI are all dependent on a highly symbiotic hardware/software relationship. The question is whether a vendor like the newly combined Broadcom/VMware could create that symbiosis better and quicker than a competitive market for the pieces of the solution. Competition doesn’t necessarily generate innovation, and that’s especially true when the competitors are bent on protecting their current market incumbencies.

Custom chips are increasingly the foundation of innovation, certainly for hardware and even credibly for devices overall. Can we envision realistic white boxes or AI without them? One of the most successful of all chip companies, Nvidia, has both chip drivers and additional software offerings. I think you can make a case for the same strategy in networking and IT, which means that the Broadcom acquisition of VMware could be good for innovation.

A final point is that vendors like Cisco and Juniper have developed custom silicon, entered into alliances on silicon and silicon photonics (Juniper, in particular), and so forth. That makes the Broadcom/VMware decision look like less of an outlier, and in fact raises the question of whether Broadcom and VMware, separately, could be competitive in the kind of market that this sudden chip interest says is now developing.

The value of the deal may tie back to some of the points I made yesterday on Cisco’s quarter. Differentiation in networking is becoming more difficult, so the space is threatened with price commoditization. When that happens, it’s not uncommon to try to combine a product area with one whose value and pricing power are higher. In other words, the earlier point I made on the ecosystemic value of the deal may be its best justification. If the other potential values are also realized, then the deal could be very good indeed.

Thoughts on Cisco’s Business Trajectory, and on Networking

Let’s face it, Cisco’s quarter was bad, and nothing management says can alter that. Supply chain issues may have been a factor, but it’s hard to justify the miss and the weak guidance Cisco supported with that excuse (one, by the way, that all the vendors who have weak quarters have been using). They did much better just a quarter ago, after all. Cisco, Cisco’s competitors, and everyone in the networking industry need to take stock here, and reflect on new details on some of the points I made in my blog on their prior quarter.

If we want to look beyond the now-classic supply chain excuses, there are two sources of revenue/profit issues that Cisco and others face. The first is differentiation, needed to sustain pricing power and margins. The second is return on investment, which is needed to get any additional budget for network gear, or to sustain current spending levels if they come under pressure. Both seem to be at work here.

The challenge for network vendors is that networking at the device level has been commoditizing for ages. A router is based on many broadly accepted standards, making it difficult to claim any great feature differentiation if you stick to the basic function of pushing packets around. The vendors responded to that by creating “ecosystems” of products that reflected the reality that networking today is a complex assemblage of stuff that buyers find hard to integrate. And, of course, with creative marketing and sales.

To me, the big question raised by Cisco’s quarter is whether even the network-ecosystem approach is running out of gas, and if so why. The answer to the latter may be easier to see than the answer to the former.

I’ve mentioned many times that Cisco really wants to be a “fast follower” in technology. They want to exploit proven opportunities more than evangelize new stuff in the hope it will catch on. That’s understandable in a sales-driven company; you don’t want your sales force pushing something that turns out to be a dud, both because it hurts their credibility and because it overhangs sales of current-generation stuff. To me, a problem with ecosystem credibility is most likely to lie with a shortage of exciting ecosystems. You can’t differentiate with old stuff in the world of ecosystems, any more than you can in the world of devices.

If you ask enterprises and service providers whether they believe that networking is changing, almost 100% say it is. If you ask whether the changes are radical, just short of 90% say that’s also true. I don’t have up-to-the-minute data on the point, but last fall about two-thirds of enterprises and three-quarters of operators said their vendors weren’t offering “new” or “novel” solutions to their network problems and challenges. So let me get this straight; networking is changing radically and vendors aren’t changing their stuff to keep up, right? It sure seems so.

The popular ecosystem strategies for Cisco and other vendors have tended to center on things like network management and operations or network security. These things are important, of course, but they’re ecosystemic product sets long recognized and offered. The changing network issues that buyers are referencing can’t be the same stuff that’s been around for a decade or more. What then are they?

Networking has had its share of transformations, particularly for enterprises, and the enterprise transformations have been tied to shifts in the network services offered them. In the past, we saw a shift from networking built from user-provided nodes and leased lines to IP VPNs. In the present, we’re seeing a series of shifts created by the cloud.

Enterprises use networks to connect users (employees, customers/prospects, partners) with information and application resources. The cloud has transformed where the “front ends” of these information/application resources are hosted, and by doing so has changed the network connection both for the users and for the rest of the applications and databases involved. If we were to assume that the popular view that “everything will move to the cloud” were correct (note that I don’t subscribe to that view), then networking would be nothing more than the Internet for access to cloud apps. Even steps short of that extreme would surely give cloud providers a much greater role in enterprise networks.

Most vendors, including Cisco, have focused on “multi-cloud”, which isn’t the real problem, but which has the advantage of being easy to promote in the media and follow up with sales. There is certainly a shift in networking going on that’s driven by the cloud in general, but nobody is really pushing it.

Edge computing, which is a subset of cloud computing, would magnify the number of things that the cloud would do, and thus magnify the impact on network services. The impact on networking would be greatest if one of the drivers of edge computing were to be an increased use of “the edge” to host network functions, as 5G proposes to do. Since this kind of impact would be most likely confined to metro centers, I’ve tended to call this a “metro” shift. Cisco rival Juniper did an announcement on “Cloud Metro” a year ago.

I think what’s going on here is simple. Cisco, not surprisingly, isn’t anxious to tout a change in networking that would validate cloud providers rather than their traditional network operator customers. Not only that, cloud providers are more willing to embrace white-box technology or SDN for their networking, and neither favors vendors like Cisco.

We can now attack the question of whether ecosystem differentiation is running out of gas, because it’s also a good transition into the second potential challenge: difficulties with network ROI. If new ecosystems could represent business value-add, then the failure to develop them removes the justification for projects, and with it the spending authority those projects would carry. If Cisco and others were able to develop new benefits for networking, that would drive new spending. To the extent that new ecosystems were able to generate new benefits, they could help Cisco boost its numbers, but we’ve already seen that things like the cloud would more likely reduce spending than boost it.

This reflects the real challenge for Cisco and others in the space. What’s really needed is a new set of network benefits, and that’s a problem because every network vendor has focused on the notion that connectivity is the only real goal of the network. We’ve achieved connectivity. To find other network benefits, we’d have to find things to do with networks that step beyond basic connectivity. That almost surely involves going “up the stack” and more into applications.

This circles back to the cloud, too. The cloud is winning the battle of new benefits by advancing computing through distribution, making more of the network a between-cloud-stuff proposition than a separate entity. Cisco has offered servers and software for years without much traction, so it’s unlikely they could make a push for cloud hosting gear that would offset any network revenue challenges.

We can sum up the ROI issue with some data. Back up 20 years, and we find that network budgets were almost equally balanced between “sustaining spending” on current infrastructure and “project spending” designed to add business value. Since then, most of the projects have focused on cutting sustaining costs, not adding new business value. That means that networks have been under constant budget pressure for decades now, and we’re probably seeing this exacerbated today because of economic uncertainties.

You can’t expect users to spend more annually to sustain the same set of benefits, particularly if there’s hope of spending less. That hope materializes in things like white-box competition and discount pressure on vendors like Cisco. The only sure way to fix this is to make networks do more, not just cost less.