Consumer Broadband Technology is Winning

If we were to identify the most significant trend in networking, the thing that has the greatest impact on 2023 and beyond, what would it be? In my view, it would be the consumerization of networking. There was a time when business services were the major driver of network data services, but that time is now passing. In fact, it’s pretty much passed from the perspective of operator and vendor planning, at least for the enlightened players. What matters now is the consumer, and consumer-targeted services will now become the baseline services for businesses as well, not immediately but inevitably.

The biggest reason behind this important trend is a simple matter of numbers. There are, in the US for example, about seven and a half million business sites, of which about a million and a half are associated with businesses that have multiple sites, and about fifty thousand are associated with “central sites”. In contrast, there are about one hundred thirty million households, of which about one hundred and eighteen million have broadband. My modeling says that, today, just about one percent of broadband connections in the US are made to business sites.

Residential broadband is not only pervasive, it’s getting better. The baseline for modern broadband Internet is 50 Mbps and many areas have gigabit service options. Compare that with 20 years ago, when a major company headquarters might have had 45 Mbps T3 service (in point of fact, there were only about eight thousand locations with that capacity in the US). The cost of a broadband Internet connection is also a very small fraction of the cost of business broadband.

Finally, a key driver of residential broadband is increased “Internet tolerance” associated with the explosion in online shopping. Almost all residential Internet users will do product research online, and about 85% seem to do at least some online shopping. This means that companies are probably relying on the Internet to support sales, and that in turn means that they’ve accepted the QoS limitations of residential broadband for their sales overall. That makes them less anxious about shifting what was traditionally internal company traffic to the Internet, perhaps with added isolation and security via SD-WAN/SASE. “Less anxious” doesn’t mean an instant transformation to a consumeristic network model, but it does mean that the transformation is happening now, and will accelerate over the next three years.

This shift has major consequences, the most obvious of which is that residential broadband access technology becomes the only significant wireline infrastructure, and being a broad player in networking depends increasingly on tapping into that somehow. Every player doesn’t need to be a broad player, of course, but there’s going to be increased pressure on most to at least have a role in consumer broadband, and you can see that with Ciena.

Ciena announced on November 22 that it had acquired Benu Networks and entered into an agreement to acquire Tibit Communications, with the goal of enhancing its position in residential broadband. Ciena has other options to increase its market footprint, as I’ll talk about below, but it’s found it necessary to get into the consumer broadband space in a more serious (and closer-to-the-user) way. That’s likely because competitors who did enter the space would be at an advantage if Ciena didn’t counter the move. Access is the biggest consumer of fiber, and an optical player needs to be supporting the dominant technology.

The shift of focus to residential broadband doesn’t necessarily mean the death of business broadband, but it does likely mean that MPLS VPNs will begin to decline. However, it is possible (even likely) that operators will look for a way to use residential broadband infrastructure to deliver VPN technology, likely through a combination of separated business connectivity in the access network and SD-WAN on-ramps to replace MPLS. This facilitates a shift away from the current VPN gateway routers to appliances or even hosted instances. There is, for example, no reason why the SASE-like SD-WAN technology used in the cloud couldn’t be used as a cloud-hosted VPN on-ramp to a specialized business broadband connection. It could also be used as an on-ramp to traditional SD-WAN-over-the-Internet, of course.

That takes us back to the point about Internet QoS and “best-efforts-is-good-enough”. Remember that wave of online influence on sales? Well, we deliver material through the Internet and the cloud, and we’ve adapted both the software involved and the interaction models of our software to the limits of “best efforts”. It works. Now we’re seeing more applications that support workers rather than customers shift to the same model, because of remote work and also because the Internet/cloud approach delivers a rich GUI. This effort is also proving that the Internet can support “mission-critical” interactions. And the Internet is available; there is nothing technical or regulatory that stands in the way of businesses shifting all their traffic to the Internet. Yes, it might mean changing technologies and moving security measures around, but it’s feasible. And the Internet is cheaper, so ultimately it will win.

One could reasonably ask what the industry thinks of this. What I hear from all my contacts is interesting. Among network operators, I find that the majority of the junior-level people see things pretty much as I’ve described, and the senior-level people reluctantly agree. However, the juniors are of the view that this shift will be decisive by 2024 and the seniors think it might be decisive by 2027. Among network vendors, I see a similar divergence of viewpoint, but based perhaps a bit more on role. Strategy players and engineers who are in emerging-technology areas see things like junior operator types, and management and engineers involved in traditional product areas seem to be locked into the operator-senior viewpoint.

Ciena is interesting here, in that they are taking steps now that clearly required senior management approval. Not only that, there’s a whole other set of network evolutions driven by consumerism, one being the potential metro bonanza. If metro centers become the places where edge computing is hosted, then could the core be an optical mesh of those locations? A full mesh of the roughly 250 major metro centers in the US would require about 63 thousand fiber strands, but my model says that a two-tier structure would require fewer than 11,000 and a three-tier structure fewer than a thousand. With packet optics, every edge would then be at most three optical hops from any other. That would validate almost all of Ciena’s current product line, so why not bet there?
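
To make the full-mesh arithmetic concrete, here’s a minimal sketch (purely illustrative; it assumes one bidirectional link per metro pair and a fiber pair, meaning two strands, per link) that gets you to roughly that 63-thousand-strand figure. The two-tier and three-tier numbers come from my model and aren’t derived here.

```python
# Rough full-mesh arithmetic for metro interconnect (illustrative only).
# Assumption: one bidirectional link per metro pair, two fiber strands per link.

def full_mesh_links(n: int) -> int:
    """Number of point-to-point links needed to fully mesh n sites."""
    return n * (n - 1) // 2

metros = 250
links = full_mesh_links(metros)   # 31,125 links
strands = 2 * links               # ~62,250 strands, i.e. "about 63 thousand"

print(f"{metros} metros -> {links:,} links, {strands:,} fiber strands")
```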

Answer: Because Ciena wants a “natural opportunity” and they’d have to drive metro to make a go of that space. While metro positioning by major network vendors is currently sub-optimal IMHO, if Ciena made a sincere effort to promote a new metro model, only two outcomes would be possible. First, their inherent product limitations (primarily optical, little data center exposure) would mean they’d fail. Second, they’d get a good story out, and their packet-product competitors (Cisco and Juniper) would then be motivated to jump in, and Ciena would be out-competed again.

So we’ve had a quiet revolution. If there is no mass market, then whatever market has the most mass gets the most attention. If there is a true mass market, then eventually it eats all the other markets in terms of opportunity, and it becomes difficult to play even in a niche without a position in the mainstream. That’s where Ciena is, and where every network vendor is. The times aren’t changing, they’ve already changed.

A Promising Tech Publication Shuts Down: Why?

Back in 2019, the publishers of Politico announced they were launching a new tech publication, called “Protocol”. It came out early in 2020, and in November 2022 it announced it was ceasing publication. Since Politico is a highly successful and respected publication in the national and international political scene, how come their tech effort failed? I had high hopes for Protocol, and I had an exchange with its first editor in February 2020 when the first issues came out. Take a look at my referenced blog, and then let’s dig in.

My view on tech coverage is that, in a market where we’re surely trying to build an ecosystem as complex as any in human history, we’re “groping the elephant”. Remember that old saw about someone trying to identify an elephant behind a curtain by reaching in and feeling around? Get the trunk and it’s a snake, get the leg and you think “tree”, and the side would lead you to believe you were feeling a cliff. You can’t define an ecosystem if you look only at parts. My view, which I conveyed in email, was that more than anything else, tech coverage lacked context, and that Protocol needed to provide it. The response I got was “I completely agree — this is one of the things we want to do really well, making sure we try to tell the whole story instead of tiny pieces of it.” Well, I don’t think they did that, and I offered examples from the early story to justify my view.

An implicit point in my assessment of Protocol is that the publication had an opportunity, which means there was an unmet need. I won’t bore you with the details of what I think the need is; all my blogs focus on that. What that leaves is the question of why the need is unmet, why tech publications (these days that means online publications) aren’t doing what I believe the market needs, so let’s look at that.

To be holistic, you have to understand the whole, and in tech that’s incredibly complicated. But you also have to understand the relationships that turn “the whole” from a collection of boxes and software to a functioning infrastructure that supports some viable mission set. I know a lot of tech journalists, and I think most of them would agree that actually understanding the specific area they cover is a major challenge. Understanding all the areas and how they relate to each other? Forget it.

So is any attempt to cover tech, to convey developments in context, doomed? I don’t think so. I think the tech journalists I know would also agree that if they had an outline that represented the framework into which their stories fit, one that provided that critical knowledge of elements, relationships, and context, they could do their stories in a way that would meet the needs of the market. They don’t get that, and they surely could, because editors (including those who ran Protocol) could have talked to people and assembled the view. They could still do that today, but they don’t. Why?

Back in 1989 when I first started to do surveys of enterprises in a methodical way (to populate my forecast model), there were about eleven thousand real qualified network decision-makers. The number of subscribers to the best network publications of the time was about the same number. Ten years later, the number of qualified decision-makers had increased to thirteen thousand five hundred, and the circulation of publications had increased to over fifty thousand. The reason for this was that publications shifted from being subscription-based to ad-based. You filled out a reader service card, answering questions, and from those answers, the publication decided if you were qualified to get a free copy. Sound logical?

Maybe not. Here you are, a lowly tech in some vast organization, with about as much influence on the decisions made as the person who operates the coffee shop nearest the headquarters. One question on the card is “What value of technology do you personally approve or influence?” and you get a range. Pick the truth (zero) and you’ll never see that publication unless you steal a copy from someone else. So you pick (on the average, according to my research) whatever level is about two thirds of the way up from the bottom. This strategy gets you the publication, but it also means that the total purchase influence value of subscribers exceeds global GDP, which isn’t exactly plausible. It does explain how we jumped so far in “influencers” and subscribers, though.

OK, so we printed more copies than we really needed; so what? The right people still got the news, the ads were effective. Then along came online. Now we had the same explosion in unqualified people (meaning people who weren’t actually making decisions), but we could also tell what they were interested in, which we couldn’t easily do with a printed publication.

Ah, and remember that advertisers pay for eyeballs. Now, suppose I have fifteen thousand decision-makers and fifty thousand hangers-on. I do a long, well-contexted article that’s rich fodder for the former group, and the latter group tunes out. I have fifteen thousand eyeball-hits. On the other hand, if I do a “man-bites-dog” sensational piece, I get all sixty-five thousand. Why? The hangers-on want digested, exciting stuff, so they’re happy. The real decision-makers have nothing else to read, so we get them too.

This isn’t an easy problem to solve, and I’m not sure I’m qualified to suggest a solution. My blog gets roughly a hundred thousand fairly regular readers, but as you know I don’t accept ads, or compensation for running specific stories there. I’m free to do what I want, which is not the case for “real” online tech publications that have to pay employees, website hosting bills, and so forth. I write everything myself, from my own knowledge and experience, so there’s no outside cost for me to cover. But even with all of this, I understand how things would be for an ad-sponsored blog. You get paid by click, therefore you cater to clicks.

Protocol’s challenge is that they came from a background of news, and news is widely digestible and broadly understood. Tech is not; in fact, tech understanding is probably what a new tech publication should be trying to convey to readers. What does tech understanding mean, though? Does it mean providing enough information to make a truly objective assessment of a technology and the vendor space associated with it? No advertiser wants that; they want something that preferences their own products and services. What Protocol ended up doing was a kind of news slant on tech, and while it was useful to readers, I don’t think it offered advertisers what they wanted out of the stories, which was something that mentioned them, or at least was favorable to their buying proposition. Catering to advertisers that way, though, would first replicate everything that was already out there when Protocol launched, and second miss the critical goal of actually helping tech buyers apply technology to business problems and so justify their purchases. That goal would require addressing a much smaller audience, and that defies ad sponsorship principles that focus on eyeball counting.

Protocol was launched by the people who gave us Politico, but political news touches everyone and doesn’t require special skills to digest. You can sell an ad on a political website and be assured that millions could be reasonable targets for it. Can we make the same assumption about technology sites, technology ads? No, because only those who influence big tech purchases are viable ad targets. So is there no niche for Protocol to have filled? I think there was, and I think that niche was to advance buyer literacy among those real buyers.

Let me offer some insight I dug out of my old survey data. Back in 1998, almost 90% of decision-makers said that they fully understood the technology they were buying and how to apply it to their problem set. Ten years later, only 64% said that, and today only 39% say that. It’s hard for me to believe that, if we had the same level of tech literacy in 2022 that we had in 1998, we wouldn’t be way further along in the tech revolution than we are. We’d be selling more tech products and services, company stocks would be higher, and tech employees and investors would have more money. Seems good to me.

For vendors, this frames a dilemma that I mentioned last week in my blog on Cisco and Juniper, the issue of sales versus marketing. What’s the difference between being sales-driven and being marketing-driven? Salespeople are commissioned to sell, not to educate. They don’t want to spend a lot of time in a sales call, they want to get the order and move on to the next opportunity. Say “consultative sale” or “buyer education” and they blanch. But if a new technology comes along, how do the decision-makers get the literacy they need to pick a product and get the deal approved internally? The best answer would be “marketing”.

Marketing is a mass activity not a personal one. You create marketing collateral and get it to a decision-maker, and you can educate them, indoctrinate them, support them in their mission, in a way you’d have a lot of trouble getting your sales force to support. Marketing is the great under-utilized resource in tech, and it’s the thing that can really drive market change. For vendors who aren’t major market players, marketing is what can make you into one, and every such vendor needs to accept and exploit that truth. Why? So they don’t go the way of Protocol.

Could an Up-and-Coming Vendor Gain Traction in Networking?

In my blog on Monday, I talked about the battle of the two giant IP network equipment vendors, Cisco and Juniper. The two, I said, are battling it out in a sales-driven arena, and neither is pushing all the buttons it could on the marketing side. That raises a question: could a newcomer step up and use marketing techniques the others haven’t fully exploited to gain a lot of traction? Is the “next Cisco” really a possibility?

Obviously, newcomers have succeeded in network equipment in the past. Cisco and Juniper both had to claw their ways into a strong position in the space, battling incumbent vendors. In Juniper’s case, it was Cisco. In Cisco’s case, you might be surprised to learn the incumbent was IBM. In both cases, the wannabe vendor used a specific and easily understood criticism of the incumbent, then leveraged it.

Cisco’s rise came about because IBM’s network technology was simply priced too high. Interestingly, a part of that was due to the fact that IBM’s technology (System Network Architecture or SNA, if you’re interested) was inherently highly secure and reliable, which the market of the time needed but the emerging Internet didn’t. IP routing was a heck of a lot cheaper, and that generated a business shift to IP. That, and the growth of the Internet, then propelled Cisco forward.

IBM SNA was proprietary technology, at least in that it wasn’t based on formal standards. Cisco’s IP stuff was initially based in part on the same sort of thing, and Juniper entered the market by capitalizing on the fact that there was an increased demand for standards-based networking. Juniper was particularly effective in promoting its technology to the service providers, and from there they expanded their reach.

So can there be a newcomer, a “unicorn” vendor that could threaten to at least steal some market share from these giants? A decade or two ago, there was strong interest in the VC space to come up with “the next Cisco”, largely centered in startups in the Boston area, but it didn’t generate anything notable. More recently, competition to our two network giants has come from something more diffuse than unicorn-like, the “white box” or “open-model” approach.

You could say that this is a further step along the “standards” path, but unlike the old standards differentiator, open-model networking is attractive even to some enterprises. There’s been router software available from a variety of sources, some open and some licensed, for well over a decade. When custom chips capable of pushing a lot of packets came along (Broadcom is the leader in this space), and software-defined networking (SDN) took shape, the result was a push for open hardware that could be married to the router software.

The question is whether “open-model” or “white-box” presents a compelling value of the sort that launched Cisco and Juniper. You can argue that history says “No!” because white-box router technology, or white-box technology in general, hasn’t taken off in the enterprise space and has met with limited success in the service provider market (more on that below). Why? Two reasons, one practical and one subjective.

The practical reason is the buyers’ concerns about integration, which operate at two levels. First, a white box is a space heater without software, and if the new router model is really open, the software can come from any source, which means integration is important. As it happens, nobody wants to do that or pay for someone else to do it. Second, networking today is more mature than it was when Cisco and Juniper launched, and a mature market has a large base of devices not yet fully depreciated. A new source means integrating with a mass of stuff already in use, and that’s a headache too.

Maybe a different tagline is needed here, and there are a couple of ideas floating out there. One is from startup/unicorn DriveNets, who offers the “Network Cloud”, and the other is from Juniper, with “Cloud Metro”. It doesn’t take a PR genius to notice that “cloud” is a common theme, and it’s a smart one, because a great majority of both enterprise and service provider network planners say that the cloud is impacting networks and network infrastructure.

DriveNets offers a disaggregated or cluster-router model, where a collection of devices connected in a mesh becomes in effect one high-capacity device, or even a series of virtual devices hosted on a single cluster. DriveNets is the most successful of the Cisco/Juniper IP infrastructure competitors, but the company has focused on the IP core network, in no small part because AT&T played a big role in getting the company launched. The problems with this are 1) that there aren’t as many core devices as devices at other network levels, and 2) that the core is arguably moving toward agile optics. Still, for the service provider space, DriveNets is a real contender.

The question of how many routers a newcomer could sell raises the other tagline, “Cloud Metro” and the topic of metro networks overall. Metro is (as I’ve noted in past blogs) a kind of sweet spot for service provider networking. There are a lot of potential metro concentration points, as many as a hundred thousand worldwide. Each of these serves enough customers to make it a viable point for service feature hosting, edge computing, and other interesting stuff. Juniper grabbed the notion first almost two years ago, but hasn’t developed it as much as they could have. DriveNets’ architecture would also be a great fit for metro, but they’ve not really exploited that capability either. Could another startup or even a smaller vendor take advantage of that lack of metro focus? Perhaps.

The problem is that “metro” is a service provider infrastructure element, and there’s increased market interest in enterprise-compatible products. The service provider sales cycle is between 10 and 19 months at the moment (up from the 9 to 14 months where it was largely stuck for most of the last two decades), whereas the average enterprise network deal is done in 4 to 9 months. Enterprises, though, are focusing network infrastructure-building on the data center (switching rather than routing), security, and VPN edge technology. Security is the easiest of the three to sell.

So who’s the most important competitor to Cisco and Juniper? Given the need for a strong enterprise focus, probably the almost-finalized Broadcom/VMware combination. Broadcom has the chips. VMware has a nice inventory of virtual-network technology that plays into the way enterprise networking is moving. They also have a significant foothold in enterprise data centers, which is critical. Their biggest handicap here is a combination of positioning and organization, and the two are likely related.

You’re probably not surprised that I’d criticize their positioning; whose don’t I criticize, after all? Here, though, the challenge is that there are some significant mindset changes needed to promote VMware’s position, and those can only come about through some really aggressive marketing. VMware doesn’t have that history, even though they do have a big positioning advantage in multi-cloud and cloud portability that they could leverage.

VMware’s enterprise networking position is definitely cloud-centric, which is a good thing. They have a strong virtual-network story for the data center (NSX) and in SD-WAN, and they’re starting to integrate the two. Their security portfolio is good, but they lack the security focus of vendors like Cisco and Juniper, and even their virtual-network cloud stuff gets a bit blurred in positioning relative to vSphere.

VMware isn’t going to steal routers’ thunder, but for the enterprise a router is increasingly just an edge device, and if MPLS VPNs do fall out of favor because of cloud networks and SD-WAN, the majority of those edge routers would disappear. That means that the enterprise network might increasingly be moving to appliances and hosted instances, which is VMware’s strength. It’s hard to say how quickly this all could happen, but if I’m correct in my views, I think we’ll see clear signs by the end of 2023. Meanwhile, Happy Thanksgiving!

Cisco Versus Juniper: How’s that Shaking Out?

Cisco and Juniper are both key players in the network equipment space, for slightly different reasons. Both had good quarters and were rewarded by Wall Street, but there have always been major differences between the style of the two companies. Whether those differences are widening or narrowing is important both to the competitors themselves and to the market at large, so today we’re going to look at those differences and what they might mean.

First, let’s look at the numbers. Cisco defines six product areas, and they were up in three of them. Juniper defines four product areas and they were up in all four. Both companies benefited from a reduction in order backlogs created by easing supply-chain issues. Juniper, based on my input from the Street, was generally rated lower than Cisco and generated a bit more of an upside than expected, but I think their objective financial performance was better. The difference in Street viewpoint and that potentially improved upside on Juniper’s part are the things we have to look at now.

Cisco, as I’ve noted before, is a sales machine. Their approach has been pretty consistent over the last couple decades, in my view. They focus on making the deal, on the current quarter and making sure not to undermine it, and on making sure they do undermine competitor initiatives aiming at rocking the boat. The company doesn’t innovate as much as execute, and its ability to consistently turn in good numbers has made it attractive to the Street.

Juniper is in some senses the same, in that they’ve tended to respond to their market-leading competitor by taking a sales focus. That leads to their being characterized by many on the Street as “playing Cisco’s game”, and given Cisco’s strength in sales, that’s a sub-optimal approach. That likely accounts for Juniper’s lower Street-cred, so to speak. On the other hand, Juniper has made some incredibly smart product-strategy moves, especially in M&A. Of the two, I believe Juniper has the better product portfolio, and by a decent margin.

Who wins in 2023? To decide that, we need a formula to define what a win would require, so I’m going to have to propose a model. In my view, you start with a broad vision of market evolution that frames your value proposition. You add a network model that fits that vision, and close with a product set that fills out the model, a marketing position that evangelizes the vision and product set, and a sales strategy that frames current buyer needs within the market vision and thus ties back to the first two elements. Do our vendors have that? Let’s start with my own view of what’s going on in the markets.

For most of modern times, commerce has been driven by processes initiated and controlled by the seller. You read an ad in a magazine, you went to a store or you ticked a reader service card to get information, but the real process got started when you encountered product information the seller provided explicitly, and the sales process was controlled by a retail outlet and/or a salesperson. A company’s IT process and network had to run the business, but that meant largely supporting non-real-time steps.

Today, if you want information on a product or vendor, you go online. If you want pricing you go online. If you want to buy something, you’re increasingly likely to go online. Commerce now takes place in real time, driven by the buyer’s attention and drawing on information resources the seller presents not explicitly to you, but to the market at large.

This shift has enormous significance, because human participation in the normal flow of online commerce doesn’t exist, and can’t exist if the process is to work efficiently. Information technology is the instrument of commerce now. It’s not just supporting the business, it’s the instrument whereby marketing and sales are realized. This is “mission-critical” at a whole new level, a level where “the normal flow” is what generates revenue, and where sustaining that flow in real time, for all, is the fundamental mission of the company.

Historically, nothing in IT has worked that way, including networking. Historically, something breaks and humans have to cooperate to get it fixed, sometimes replacing things, rerouting, rewiring. Historically, we could not assume that a remote transaction was authoritative until the paper copy validation caught up. Historically, we could protect goods, records, and bank accounts with armed guards. None of those historical assumptions hold up in today’s world. We need a new basis for the fundamental promises that make up successful commerce, successful economies, and a big part of that new basis has to come from the network.

OK, this is my view of a good vision statement. What do our competitors offer? Neither has this kind of high-level view, but what views do they have? It’s somewhat difficult to say, because both companies tend to take sales messages into their marketing/positioning channels rather than articulating strategic messages. That also applies to their websites, which should reflect the issues they believe are driving network technology planning.

Cisco, IMHO, presents no vision of the evolution of the market at all. Their homepage dives into security immediately (one of the areas they indicated they would prioritize for shareholder-value reasons). Digging deeper into their site, you still find no statements of market evolution or buyer need. This is totally consistent with Cisco’s sales-centric approach, but it leaves an opportunity door wide open.

Juniper doesn’t do much better on their website, but they have articulated something that at least can be called a high-level vision, “Experience-Based Networking”. Not only is that a tagline that could be linked to the market vision I opened with, it’s also one that supports the evolution from the old model to the new. All of that is good news, but Juniper doesn’t make the positioning connection strongly (their tagline isn’t immediately visible on their website, for example) so I guess it would have to be considered “potential good news.”

Let’s move down a level now, and construct a network model to support the vision. For reasons that will become clear, I’m going to conflate this with the product-map-to-model step (one more layer down).

You need quite a few things in a network model, including enhanced management to reduce outages and impacts, tight integration between cloud and network, tight integration between data center and network, and a high level of network portability across multiple infrastructures/operators. All of these things are aimed at ensuring that the online experience is highly reliable, presents a consistently high QoE, can be efficiently linked to applications, and can be delivered over the Internet, a VPN, or a private network.

Cisco defines no particular model or vision to achieve these goals, hardly a surprise since they don’t define high-level goals at all. However, Cisco’s product line does have the elements that could address these specific model elements. What both enterprises and service providers tell me is that Cisco tends to focus on product sales rather than on a network model, which again isn’t a surprise. They seem to believe that if a given product is needed, the prospects will decide their model and ask for products to fill it out. This, again, is reasonable on the surface, but it risks strategic intervention by competitors.

Juniper does have a network model, and it does, in my view, a better job of aligning with the reference network model I described above, which means it’s better aligned with real market trends. Juniper also biases its website and positioning more to the model level. For example, their most visible positioning strategy is to promote AI and the cloud integration of network and other telemetry to enhance visibility and management. Mist AI is a strong product, and the notion that AI could enhance operational responses to network issues is congruent with the tagline (Experience-Based Networking) and with my presumptive mission and network models.

Enterprises and service providers who have commented to me about the two companies place Juniper at the top for “innovation” and Cisco at the top on “execution”. They confirm that Cisco is more likely to have account control and influence, given their market-leader status, but that Juniper likely has technologies that better fit conditions and how they’re likely to evolve.

One new factor in the competitive mix is Cisco’s restructuring announcement. While it will include layoffs, the major point that the company raised was a realignment of effort toward profitable segments, to enhance shareholder value. This has been interpreted by some financial news services as a shift more to an enterprise focus, and by some Street analysts as a move to sustain and improve share prices. It’s also possible that Cisco is reacting to Juniper’s success, consistent with its normal goal of being not a leader in tech but a “fast follower” who will exploit (and step on) the success of others.

This whole swirling mix of points suggests to me that 2023 will evolve in two stages. The first, which I think will last into the mid-spring timeframe, will be a gradual “evolution” of the two companies’ current positioning and strategies. The timing of this roughly aligns with what I expect will happen in the global economy, as the inflation and rate-hike shocks dissipate. I don’t expect a major change, particularly with Cisco, but just what they mean in their restructuring story will become clear.

The second stage is what I think we could call the “awareness” stage, where network buyers will respond to the developing conditions in the market, and both Cisco and Juniper will have to respond to changes in attitude. I believe that the evolutionary model for the role of the network that I opened with here is now emerging to the point where at least some planners in both our competitors now see the conditions. Of the two, as I’ve said here, Juniper seems best-positioned to address the future, and they are even now a bit more willing to be strategically innovative than Cisco. That means that they could, in the “awareness” stage next year, jump out and change the dynamic of their space, at least a bit.

What’s behind all my “could” qualifiers here is the fact that “awareness” happens at the pace of marketing. Aggressive positioning leads to more media engagement, which leads to more website visits that can set agenda points for network planning. That leads to sales calls that convey a solution to a problem the vendor itself has defined, and so are very likely to fit. And that leads to a change in dynamic. All this stuff starts with that aggression, and neither of the companies has shown aggression in the last five years or so. The “awareness” phase of 2023, and the advantage in 2024 and beyond, will lie with the vendor who comes out of their shell first, fastest, and best.

Can the New Sylva Project Save Carrier Cloud?

Ah, telco cloud! The initiative that many (myself included) had hoped would revolutionize the cloud, the edge, and telcos all at the same time. Well, it’s been a dud. As a poet once said, “The tusks that clashed in mighty brawls of mastodons…are billiard balls.” Telcos seem to have ceded everything to the public cloud giants, but now there’s an initiative, called “Sylva”, that’s hoping to create an open-source telco cloud stack. Will it work?

If history means anything, Sylva has a big uphill slog. While there have been impactful telco initiatives in areas like Open RAN, they’ve been focused on a very limited target. Even then, the pace of progress has been slow enough to limit the extent to which the initiatives could influence the market. Sylva has two goals according to the project FAQs: release a software framework (in my terms, an architecture model) that would “identify and prioritize telco and edge requirements”, and develop a reference implementation and an integration/validation framework. That is a very long way from a limited target.

The white paper (available in the github link I provide above) aligns Sylva explicitly with two things. First, a need to move computing to the edge, meaning within a few kilometers of the user. Second, a need to map 5G requirements to a cloud model in a way that ensures telcos’ special requirements are met. The white paper also preferences containers and Kubernetes as the technical foundation for the software to be run, which of course means that the platform software that makes up Sylva would have to include both. It also preferences the ever-popular “cloud native” model, which has the combined advantage/disadvantage of having no precise and fully accepted definition.

The good news about Sylva, from the white paper, is that it explicitly aligns the project with the evolution of edge computing and not just to things like 5G. That means that Sylva could form the foundation of a telco move into edge computing in some form, either via direct retail service offerings or through what AT&T has described as facilitating services, to be used by others in framing those service offerings.

Another good-news element from the paper is the explicit recognition of public cloud services as an element of telco cloud, but not the entirety of it. The paper properly identifies the basic model for this: a “convergence layer” that presents APIs upward to applications, then maps those APIs to whatever hosting is available below, either deployed by the telcos or by third parties, including public cloud providers. Something like this was proposed with the Apache Mesos/Marathon and DC/OS approach for cloud computing, since updated to support containers.
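
As a purely illustrative sketch of what a convergence layer like that might look like (the class and method names below are hypothetical, my own invention and not anything Sylva defines), the idea is a single northbound API that hides whether a workload lands on telco-owned hosting or a public cloud:

```python
# Illustrative convergence-layer sketch; names are hypothetical, not from Sylva.
from abc import ABC, abstractmethod

class HostingBackend(ABC):
    """Anything that can actually run a workload: telco edge, public cloud, etc."""
    @abstractmethod
    def deploy(self, workload: str, spec: dict) -> str: ...

class TelcoEdgeBackend(HostingBackend):
    def deploy(self, workload: str, spec: dict) -> str:
        return f"{workload} deployed on telco edge site {spec.get('site', 'unknown')}"

class PublicCloudBackend(HostingBackend):
    def deploy(self, workload: str, spec: dict) -> str:
        return f"{workload} deployed in public cloud region {spec.get('region', 'unknown')}"

class ConvergenceLayer:
    """Presents one API upward; maps requests to whatever hosting is available below."""
    def __init__(self, backends: dict[str, HostingBackend]):
        self.backends = backends

    def deploy(self, workload: str, spec: dict) -> str:
        backend = self.backends[spec.get("placement", "telco-edge")]
        return backend.deploy(workload, spec)

layer = ConvergenceLayer({"telco-edge": TelcoEdgeBackend(),
                          "public-cloud": PublicCloudBackend()})
print(layer.deploy("upf", {"placement": "telco-edge", "site": "metro-12"}))
```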

I’m bringing up Mesos/Marathon here for a reason. Sylva mandates containers and containers-as-a-service (CaaS), which is a concept that’s also found in the current iteration of NFV. Mesos/Marathon is a broader approach to orchestration, one that works with containers but also with other hosting models. One could reason that to the extent that other models might be required, something other than Kubernetes and CaaS might be a more realistic goal for Sylva. Part of the reason for my concern goes back to the ambiguity inherent in the term “cloud native”.

In the NFV community there’s been a tendency to use the terms “cloud native” and “containerized” as equivalent, which obviously they are not. Containers are a hosting model, a platform architecture. “Cloud native”, if it means anything cohesive, is an application model designed to maximize the benefit of the cloud. Conceptually, containers predate the cloud, and there are models of cloud hosting (functional computing, for example) that are not container models.

Does this mean I’m promoting Mesos/Marathon instead of Kubernetes? No it does not. The plus for Kubernetes and containers is that modern software development and deployment practices are decisively heading in that direction. One of my constant criticisms of telco initiatives in software and the cloud has been that they’ve tended to ignore the state-of-the-art cloud-think, which in part means “edge-think”. Remember, the real goal of Sylva is to support edge computing. I’m saying that we need to think about the relationship between edge applications and the development/deployment model to ensure that we don’t push a strategy that doesn’t support the kind of edge applications likely to drive “telco cloud” in the first place.

I think it’s fair to say that edge computing is about latency control. The benefit of hosting close to the user is that latency is reduced, and latency matters primarily in applications that are event-driven. Generally, event-driven applications divide into two pieces: a real-time piece that’s highly latency-sensitive because it has to synchronize with the real world, and an optional and more transactional piece that takes a real-world condition (often made up of multiple events) and generates a “transaction” with more traditional latency sensitivity, decoupled from the first piece. I’ve written a lot of event handlers, and the primary requirement is that the overall processing, what’s usually called the “control loop”, be short in terms of accumulated latency.
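
Here’s a toy sketch of that two-piece structure (illustrative only; the names and the valve example are hypothetical): a latency-sensitive handler that stays inside the control loop, and a decoupled transactional piece fed through a queue.

```python
# Toy split of an event-driven application into a real-time piece and a
# decoupled transactional piece; names and the valve example are illustrative.
import queue, threading, time

transactions = queue.Queue()

def real_time_handler(event: dict) -> None:
    """Latency-sensitive control loop: react to the event immediately."""
    if event["sensor"] > event["limit"]:
        print(f"actuate: close valve {event['valve']}")  # must stay in the control loop
    transactions.put(event)                               # hand off; don't wait

def transactional_worker() -> None:
    """Less latency-sensitive: turn accumulated events into a 'transaction'."""
    while True:
        event = transactions.get()
        time.sleep(0.05)                                  # e.g., write to a back-end system
        print(f"logged event from {event['valve']}")
        transactions.task_done()

threading.Thread(target=transactional_worker, daemon=True).start()
real_time_handler({"sensor": 9.7, "limit": 9.0, "valve": "V-101"})
transactions.join()
```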

Functional computing, which means software whose outputs are based only on the current inputs, is a development model that encourages low latency by eliminating references to stored data. Functional computing also promotes scalability and resiliency because any instance of a software component, if presented with an event message, can run against it and generate the same result. So we could fairly say that functional computing is a reasonable development model for the edge.
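
A trivial illustration of the difference (again, hypothetical code, not anything from Sylva): a pure function of its inputs can be replicated anywhere and give the same answer, while a handler that depends on stored history can’t be scaled or replaced as casually.

```python
# Pure, "functional" event handler: the output depends only on the inputs,
# so any instance, anywhere, produces the same result for the same event.
def classify_reading(value: float, limit: float) -> str:
    return "alarm" if value > limit else "normal"

# Stateful alternative: the result depends on stored history, so instances
# aren't interchangeable and scaling or failover gets harder.
history: list[float] = []
def classify_with_history(value: float, limit: float) -> str:
    history.append(value)
    avg = sum(history) / len(history)
    return "alarm" if avg > limit else "normal"

assert classify_reading(9.7, 9.0) == "alarm"   # same answer from any replica
```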

How about a deployment model? In public cloud computing, functional computing has been conflated with serverless deployment, where a component instance is loaded on demand and run. That approach is fine for events that don’t happen often, but where the same event happens regularly, you reach a point where the time required to load the components and run them is excessive. In this situation you’d be better off to keep the software function resident. That doesn’t mean it couldn’t scale under load, but that it wouldn’t have to be loaded every time it’s used. Kubernetes and containers will support (with an add-on for serverless operation) both models, so we can fairly say that mandating Kubernetes in Sylva doesn’t interfere with functional computing requirements.
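
A back-of-the-envelope comparison makes the tradeoff visible (all the numbers here are invented for illustration, and real serverless platforms reuse warm instances, so this is the worst case): once the same event arrives often enough, per-invocation load time dominates the accumulated latency.

```python
# Back-of-the-envelope latency comparison; all figures are illustrative.
COLD_LOAD_MS = 250.0   # time to load/start the function on demand
EXEC_MS = 5.0          # time to actually process one event

def on_demand_latency_ms(events: int) -> float:
    """Load on every invocation (worst-case cold-start behavior)."""
    return events * (COLD_LOAD_MS + EXEC_MS)

def resident_latency_ms(events: int) -> float:
    """Load once, stay resident, then just execute."""
    return COLD_LOAD_MS + events * EXEC_MS

for events in (1, 100, 10_000):
    print(events, on_demand_latency_ms(events), resident_latency_ms(events))
```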

Kubernetes does allow for deployment in VMs, the cloud, and bare metal (some with extensions), so I think that the Sylva approach to deployment does cover the essential bases. There may be current event-driven applications that resist container/Kubernetes deployments, but few believe these would be candidates for telco cloud edge hosting, and in any event it’s difficult to point to any significant number of examples.

What this means is that the framework that Sylva articulates is suitable for the mission, which is good news because a problem at that high level would be very difficult to resolve. There are still some lower-level questions, though.

The first question is timing. The first release of the framework is expected mid-2023, which is just about six months out. That schedule is incredibly optimistic, IMHO, given past experience with telco-centric initiatives. However, failure to meet it would mean that some of the target missions for Sylva might have to advance without it, and that could reduce the benefit of Sylva. Slip far enough, and the market will either have moved beyond the issue or decided it was irrelevant, in which case Sylva would be too.

The second question is content. When NFV launched in 2013, the goal was to identify the standards and components needed, not define new ones. Yet NFV ended up depending on a number of elements that were entirely new. Can Sylva avoid that fate? If not, then it disconnects itself from cloud evolution, and that almost assures it will not be relevant in the real world.

The third question is sponsorship. There are, so far, a number of EU operators and two mobile infrastructure vendors (Ericsson and Nokia) involved in Sylva. None of these organizations are giants in cloud and edge thinking, or masters of the specifics of the container and Kubernetes world. Edge computing is something that needs players like Red Hat and VMware for platform software and Cisco and Juniper for network hardware, because edge computing is realistically a function of metro deployment. I’ll be talking more about Cisco and Juniper on Monday, in my blog and in the Things Past and Things to Come podcast on TMT Advisor. Overall, edge computing is a melding of data center and network. Will other players with essential expertise step in here? We don’t know yet, and I think that without additional participation, Sylva has little chance of making a difference to operators.

The AI Revolution Meets Adam Smith

When people worry about the risk of AI or robotics, they’re typically seeing us all wiped out by rogue robots. They either kill us off directly, or they conspire to crash our aircraft, trap us in elevators, or maybe drown us by opening spillways on dams. I won’t say that “active extinction” scenarios are impossible, but I think that the real risk of technologies like AI lies in upsetting the balance of Adam Smith. I don’t often comment on the social impact of technology, but I think an exception is justified, and I’ll be interested in how you feel.

An economy is a vast interconnected system. Smith was among the economists who pointed out that production was the union of raw materials, labor, and capital. If these three forces are balanced, then an economic system can be strong and inclusive. If they’re out of balance, then a lack of something and a surplus of something else will reduce efficiency and generate risk, both to the economy and to society overall.

We’ve already seen examples of the raw-materials risk. On the one hand, countries like the US gained economic strength and global prominence by exploiting rich natural resources. On the other hand, exploiting natural resources has consequences; the over-reliance on fossil fuels and cutting of forests are examples. Now, another risk, “social risk”, is gaining visibility in our society, and the worst could be yet to come…in part because of AI.

It used to be that heavy work required skilled labor, that creating complex precision products required a skill set that only a few could offer. The industrial revolution changed that by creating machines that could move mountains and assemble watches. It was a social revolution as much as an industrial one, because it was populizing. At the end of it, more people could do high-value work and the overall state of the economy could improve, along with their lives. We had elite toolmakers, but we had a whole population that were suddenly productive tool users, and that raised the standards of living for many.

Fast forward to the 1960s, when computers came along. Computers, they said at the time, were the second industrial revolution, another step forward in the global economy. But was that true? Yes, a computer programmer could create software that could file records, do accounting, and so forth. That made programmers and software really valuable, but what about the clerks and accountants? In the last sixty years, we’ve seen the computer empower those that actually worked in IT, but also combine with industrial automation to displace workers, not empower them. Instead of a populist revolution, we launched a capital revolution. If you have the capital to afford automation, you need less labor.

Now we bring in AI and robotics. No, I don’t mean the kind of AI/robot that can do everything a human can do. I mean the little incremental pieces of machine intelligence that can empty a truck, move a package along. No single one of these little things can do what even a child could do, but combine them and they can do more and more of what anyone can do. The smarter they are, even at seemingly trivial little tasks, the more human labor they offset. Capital to buy them gets more important, and the labor (being displaced) is less so.

What do we do with all that human effort we’re displacing? The popular press would say this creates leisure time. Work weeks could fall from 40 hours to maybe 4 hours a week, and everyone has the free time to enjoy their hobbies. Do they have the money, though?

Would a company who just invested capital to displace human effort then keep paying their humans as before? Did the steel industry, as automated processes exploded, keep paying their workers to stand around and watch the assembly lines? Visit a former steel town like Pittsburgh PA and you’ll see the monuments to change in the form of abandoned mills, and this was without AI.

Yes, we’ve heard about companies experimenting with reducing the hours in a work week while continuing pay, but it doesn’t seem to be spreading to companies at large. Yes, we’ve heard about technologies that would make our lives better, ostensibly without changing the workforce. But capital demands return on investment, and so replacing human workers really means creating unemployment in the long run. So where will we find the next vast post-industrial wasteland? Maybe everywhere.

Capital rules right now. Wall Street is way more important than Main Street, but is it safe from AI? We already have software that does algorithmic investment. One of my Wall Street friends quips that his software is selling to and buying from his competitor’s software. Right now, my friend makes a lot of money, as is routine on the Street, but could AI make him redundant, too? Sure.

Back in those dawn-of-computing 1960s, one big financial company found out that if you wanted to train people to program before computer science courses were invented, and you put six hundred qualified people through an exhaustive testing and interview process, you could expect around twenty to start the training and twelve to become commercially successful programmers. One in fifty; two percent. Yes, we might create good jobs with the computer revolution, but not ones the average worker could expect to fill. And based on this company’s experience, the average worker isn’t going to be a programmer whatever training you give them.

Suppose that AI, by nibbling away at little tasks here and there, takes over all the tasks that automation and robotics can support. Suppose that one in fifty can be a part of that revolution, creating the software and the devices. What do the other 49 of our 50, the 98%, do? How do they live? We cannot have a society in which the great majority of people are onlookers. There’s not a sociologist on the planet who would say that such a division would be stable, and in fact we’d have major societal issues long before we got to this end game.

You probably won’t find many on Wall Street or executive suites who believe that AI could really depopulate their workforce significantly, but you’ll find a lot who are happy to see automation in any form reduce the cost of production. That sort of shift, if it’s empowered with the right technology, leads to a future where the toolmakers rule and the tools use themselves.

NASA scientists just released a report that tries to explain why we’ve not found other intelligent life. Their conclusion is that other civilizations didn’t last long enough to be noticed by us. Disease, nuclear war? Maybe. Or maybe they died of a social revolution that spun out of widespread “de-populism” created by AI. We have about eight billion people on earth now, and we have to make sure that what we do to make production more efficient doesn’t make labor valueless, because the 98% probably aren’t going to take that lying down.

I don’t have an answer to all of this, but I think part of the answer has to lie in applying things like AI to be as populizing as the industrial revolution was. I also think part of the answer is to rein in capital a bit. What do you think?

Are Mobile Services Running Out of Gas?

Verizon, once the player to beat in mobile services and perhaps the strongest Tier One overall, sure seems to have fallen from grace. The company has lost wireless subscribers every quarter for years and it’s now trading at its lowest level in five years. Given the fact that stocks have been in the toilet for most of this year, and that telcos worldwide are asking regulators for some form of relief, what can we say about the whole mobile services and telecommunications space?

One thing I (at least) can say is that 5G isn’t going to pull operators out of the fire. In fact, there are strong indicators that 5G has made things worse, and that negative impact is not done yet. 5G is an orderly infrastructure upgrade to wireless networks, needed largely because it increases the number of subscribers a cell can support. However, to realize the benefits of 5G, you need the obvious step of deploying it, and the less-obvious step of getting a lot of your customers to switch to it.

5G deployment has been budgeted for years, but that doesn’t mean it’s been free. The need to deploy 5G has generated capex that runs counter to operators’ general desire to lower costs. They’ve spent five years paring down opex, to the point where there’s little left they can do there. Capex is what remains, and most operators are still only a bit more than halfway through their planned 5G upgrade projects. That’s clearly a problem.

A second problem is that willingness to pay for mobile services is diminishing. I don’t claim to do large-scale user surveys, but the discussions I have with tech people have been enlightening. Five years ago, nearly all those I interact with said they got their home wireless services from a “premium” provider. Only a fifth indicated any real concern about competitive pricing. Today, only half tell me they get wireless services from a premium provider, and two-thirds have either price-shopped or are now planning to.

Part of the reason for this shift is the improved reliability and availability of wireless services in general, and the growth of and improvement in MVNO services, which, while they don’t replace the infrastructure of the big providers, do reduce their profits. Some MVNOs target specific population segments, like seniors, and others bundle mobile services with wireline and TV to make their offerings stickier with consumers. Either cuts into the revenues of players like Verizon.

Another issue 5G has exacerbated is the “promotional smartphone” problem. Some operators (notably AT&T in the US) have recognized that users really care more about their phones than their phone services, and have decided to offer regular upgrades and phone discounts as their differentiators. While this raises their costs significantly, it also helps them sustain or even build their customer base. In the past, players like Verizon who believed they had service-level differentiation were slow to get into the promotional-phone game, but with 5G it’s important that you get customers onto the new service, and that means making deals on phones that support 5G.

Those who have watched the telco space over the last couple decades probably recognize the current situation; it’s very much like what happened in wireline services. We had a need for massive capital investment to get broadband speeds up, competition among providers, CLECs and wholesaling, you name it. Wireline has been less and less profitable, and the current trend toward streaming (which smartphone use surely contributes to) has undermined the once-safe live TV bastion for operators. Wireless was the safe haven of profits in those days; it’s not so today.

Nor is it likely to be in the future. I think that we can expect to get through 2022 and even 2023 without any major convulsions for the telco world, but beyond that I think that the warnings about return on infrastructure are going to be justified by conditions, and some action will be needed. Some may question why that is.

We have public utilities that provide electricity, water, gas, and so forth. Telcos, in the past, were surely either public utilities or services of the government. Why are they in trouble when the other utilities are not? Demand and competition are probably the reason.

It’s not that telecom demand has exploded more than the demand for, say, electricity, but that fulfilling the demand has required a different kind of infrastructure. You can make more electricity in a variety of ways, but most of the transmission facilities in a modern economy can still be utilized, and so can the generation facilities. Maybe you need more of something, but it’s not a tech revolution. On the other hand, both the wireless and wireline services of 20 years ago would be useless in supplying today’s needs, and the elements of the infrastructure of those older services would have little or no value. We have to invent what’s almost a new telecom.

The regulations that followed “privatization” of telecom over two decades ago didn’t help, either. The “competitive local exchange carrier” or CLEC concept forced telcos to wholesale copper loop to others, the idea being that this would increase competition. What it did instead was to limit investment in technologies that could have supported better broadband, by both potential competitors and by telcos themselves.

Competition eventually developed, delayed by a decade by poor public policy IMHO. In wireless services where you don’t need to trench physical media to every home and business, the barriers to competition are lower. Other public utility services require extensive distribution investment, which keeps competition levels low. With telco competition we’ve largely eliminated the true public utility model of a guaranteed rate of return, and so telcos unlike public utilities may no longer have a safe dividend yield.

But none of these is the biggest issue. That honor goes to the risk that telcos won’t be able to sustain the growth in network capacity and the improvement in latency needed to fully realize the vision of the Internet.

One of my LinkedIn contacts just sent me a quote of mine from Business Communications Review in 1996: “The Internet is a good way of supplying product information and supporting material to a literate population. If that describes a company/industry’s market, then the Internet will promote success. I do not believe it will bring about the massive changes some have forecast, simply because the availability of knowledge does not imply the effective utilization of same. We’ve had libraries for centuries, yet most people never set foot in one. Why should the Internet be any different? Like books, it will widen the gap between those who dip into it and those who don’t.” The interesting thing is that the defining truth about “the Internet” is that it’s pulled through the underpinning of universal broadband, and universal broadband is in the process of spawning new overlay services that don’t require understanding to use.

This is the base truth of 2026, to step out thirty years from my quote. What most people call “the Internet” is the worldwide web and online, web-based services. These have proved valuable enough to generate a whole new kind of literacy, the thing we call Internet literacy. The biggest step, the one coming, is the harnessing of AI/ML, edge computing, broadband, digital twinning, and more to create what’s essentially an information field that we all live in and exploit implicitly by how we live, not explicitly by what we know how to do.

I didn’t anticipate this back in 1996, and so I can hardly take others to task for not seeing it then. I can call the industry out for not seeing where we’re headed, though, and for not understanding that telecommunications services are the thing that’s needed to get us there. We need to understand how the higher-level “information field” of the future evolves out of its constituent parts, but we also need to ensure that the medium that field will depend upon, which is broadband connectivity, is available in the form we need, when we need it. The industry, meaning both telcos and vendors, and governments should be looking at this right now, while we have time to address the issues properly. If I were a network vendor, I’d be taking a lead on this point for 2023.