Where is Networking Really Heading?

Few would doubt that network spending is under pressure. That’s true even in the mobile space, despite all the hype surrounding 5G. We’ve heard a lot about open-model networking, not only in 5G with O-RAN but with white-box switching and routing. We’ve heard that hosted features and functions are the way out of connectivity commoditization for operators. And, no surprise, we’ve really heard all of this before. What, if anything, is coming out of all the noise now?

We need to make a point, though, before we start. Nearly every major operator and network equipment vendor is a public company that, like all public companies, answers first to its shareholders. That means they answer to Wall Street, to the financial markets worldwide. Despite what many believe and say, they can’t take steps to make customers happier if those steps compromise their bottom line…at least not for long. That sort of move would result in a shareholder revolt and a change in management, followed by a change in direction to put profits first again.

The pressure in the network industry is profit pressure. That pressure has already wrung a lot of operations cost reductions from the operators, but our first network industry challenge is that the emphasis on opex management was less on a durable strategy and technology shift than on a quick response to satisfy financial markets. That plucked all the low-hanging opex apples, which means that the residual opex reductions that might be gained can no longer fund massive technology changes.

Current trends in corporate earnings make it clear that neither operators nor their vendors are finding it easy to satisfy those financial markets. The shape of the industry in the future will depend on what happens as these two groups, whose long-term interests are symbiotic but whose short-term relationships are almost adversarial, wrestle out a compromise they can both live with.

Let’s start with the network operators. It’s obvious that the decades-long decline in profit per bit or return on infrastructure has not really been stemmed. However, it’s becoming clear that this classic measurement isn’t as useful as it was believed to be. The problem for operators is lackluster revenue growth, resulting from the fact that the number of users of network services is growing very slowly, and that current users are unwilling to add things to their service list that would increase ARPU.

Growing the user base is likely the driver behind the operator fascination with 5G-connected IoT devices. Imagine billions of sensor/controller elements, all with their own mobile services plan! What could make a telco CFO’s heart beat faster than that? The fact that these sensor/controllers, lacking any independent income source, would need to have their plans paid for by businesses and consumers was swept aside. Needless to say, the concept has no real credibility.

ARPU growth is the only other option for revenue gain, and the problem there is that neither the business nor the consumer segment of the market is eager to spend more. Businesses are finding it more difficult to justify even current costs of network services, and consumers are starting to realize that broadband speeds over 100 Mbps or so don’t map well to improved quality of their Internet experience. What else do you sell, to whom, and how?

In most industries, the buyer would be expected to transform their own business model. In telecom, that seemingly basic rule hasn’t worked out, largely because the business transformation would likely involve a technology transformation, and operators rely on network equipment vendors to provide technology. Those vendors, with their own Wall Street mouths to feed, are reluctant to upset the apple cart of their current revenue streams, in favor of something new where they might end up with a smaller market share and profit.

The increased operator emphasis on open-model networking is likely due largely to this. While all the old reasons for open networking, notably the vendor lock-in problem, remain valid, the biggest new factor in the picture is the perception of operators that their traditional vendors aren’t doing enough to resolve their ARPU dilemma. Whether open-model networking, even if it’s adopted, would resolve the deadlock on new service technology will likely depend on the nature of the vendors who provide it.

Startups are the normal source of technology innovation, but there are issues for startups in the network infrastructure or service infrastructure space. The first is that VCs are not particularly interested in funding network startups, especially those aimed at selling to network operators. The second is that startups are usually seen by operators as risky partners.

Network equipment vendors are another source of innovation, but as I’ve already noted, operators believe that these vendors tend to promote a stay-the-course strategy for their operator buyers, to preserve their own bottom lines. I think that’s generally accurate, but the Ericsson decision to buy Vonage may represent a shift in network vendor thinking. It’s too early to say exactly what Ericsson has in mind here, or whether a voice-centric platform would be helpful to operators even if Ericsson positioned it optimally.

Transport or IP network vendors like Ciena, Cisco, Juniper, the IP piece of Nokia, etc., are at least somewhat credible to operators as a source of innovative service support, but in the operators’ view these vendors have largely ignored the question of service transformation in their thinking. Adding edge-connection features to connection services (security is an example) isn’t transformational enough for most operators. I think this group of vendors could in fact create a transformational, open-model service ecosystem, but perhaps they have the same inertia problems that operators do.

How about the software players? Operators believe that both IBM/Red Hat and VMware represent credible players in an open-model world, especially those operators who see cloud technology as the key to integrating new ARPU-generating services with their current service set and infrastructure. One interesting truth, though, is that operators cling to software-provided virtual functions (NFV and the NFV applications called out in 5G specs are examples) rather than a more general composed-service model that could accommodate cloud-hosted elements of an experience. They then criticize the vendors for being stuck in virtual functions!

Another possible source of open-model service-enhancing initiatives is the open-source community or open-model groups like O-RAN. The problem is that the two differ in what drives them. The open-source community has the skill to do the job and a history of moving quickly, but it often can’t engage senior decision-makers in the telecom space, and it tends to take a bottom-up approach that limits its ability to completely solve network problems. The open-model groups are often backed by operators, can engage with senior management there, and can take a broader view of the problem. Unfortunately, they don’t always take that broader view, and they usually move at the same glacial pace as the operators themselves. It’s a tossup whether either group can do what’s needed.

The final possibility is the cloud providers, and here is where I think we can actually expect, rather than simply hoping for, progress. The reason is scope of operation versus scope of infrastructure. Virtually every viable network service is really a national or global opportunity, but network infrastructure for ISPs, telcos, cable companies, and the rest of the operator community tends to be regional or even local. That leaves providers with a stark traditional choice—do I offer services in an OTT model that can spread over any network infrastructure but that competes with “real” OTT players, or do I build out somehow to cover a bigger geography with my technology? Neither option is realistic, and the cloud providers are offering a third, which is host my differentiated service technology in the public cloud when my prospects are outside my infrastructure footprint. The problem with this, of course, is that it enriches the cloud providers and reduces the impact of “differentiating services” on an operator’s own bottom line.

One potential path to a better solution, a path that almost any vendor could take and that standards groups or open-source bodies could surely take, is federation among operators. Ironically, this approach has been contemplated for almost two decades and there have been a number of initiatives that addressed a federation model and were fairly broadly supported. None generated a useful result, perhaps because the problems of next-gen networks were not yet acute. As I’ve noted recently, Ericsson could have this goal in mind with its Vonage acquisition, but likely only for voice and collaborative services.

A federation model would allow operators to offer each other wholesale features and other resources to support composition of new services. While it would still mean that a given operator might have to pay for out-of-footprint resources, other operators could also be paying that operator, and overall operator return on infrastructure would certainly be higher than it would be if public cloud resources were used instead of federated ones.

Of course, there’s always the answer that operators seem to love: stay the course. Think that somehow, against all odds, the past happy time when bits were worth something will be restored. That’s the thinking that prevails in most of the world today, but it’s under increased pressure and I don’t think it can be sustained much longer. That means that vendors, of whatever cloth they may be cut from, will have an opportunity to improve their market share if they can get in front of the change. Which ones will do that is still an unanswerable question.

How Human is AI?

A Google employee raised a lot of ire by suggesting that AI could have a soul. That question is way out of my job description, but not so with questions that might lead up to it. Are AI elements “sentient” today? Are they “conscious” or “self-aware?” At least one researcher claims to have created a self-aware AI entity.

This topic is setting itself up to be one of the most successful click-baits of our time, but it’s not entirely an exercise in hype-building or ad-serving. There are surely both ethical and practical consequences associated with whatever answer we give to those questions, and while some discussion is helpful, hype surely isn’t.

One obvious corollary question is how we define whatever property we’re trying to validate for an AI system. What is “sentient” or “self-aware”? We’ve actually been arguing for at least a century over the question of biological sentience or self-awareness. Even religions aren’t taking a single tack on the issue; some confine self-awareness to humans and others admit, at least indirectly, that some animals may qualify. Science seems to accept that view.

Another corollary question is “why do we care?” Again, I propose to comment only on the technical aspects of that one, and the obvious reason we might care is that if smart technology can’t be relied upon to do what we want it to do because it “thinks” there’s something else it should be doing, then we can’t rely on it. Even if it doesn’t go rogue on us like HAL in “2001”, nobody wants to argue with their AI over job satisfaction and benefits. Is there a point in AI evolution where that might be a risk? A chess robot just broke a girl’s finger during a match, after all. Let’s try to be objective.

Technically, “sentient” means “capable of perceiving things” or “responsive to sensory inputs”. That’s not helpful, since you could say that your personal assistant technology is responsive to hearing your voice, and that a video doorbell that can distinguish between people and animals is responsive to sight. Even saying that “sentient” had to mean that perceiving or being responsive meant “capable of reacting to” doesn’t do us much good. Almost everything that interprets a real-world condition that human senses can react to or create could be considered “sentient”. And of course, any biological organism with senses becomes sentient.

“Conscious” means “aware of”, which implies that we need to define what awareness would mean. Is a dog “conscious”? We sort-of-admit it is, because we would say that we could render a dog “unconscious” using the same drug that would render a human unconscious, which implies there’s a common behavioral state of “consciousness” that we can suppress. Many would say that an “animal” is conscious but not a plant, and most would agree that in order to be “conscious” you need to have a brain. But while brains make you aware, do they make you self-aware?

We can do a bit better with defining self-awareness, at least with animals. Classic tests for self-awareness focus on the ability of an animal to associate a mirror image of itself with “itself”. Paint half of a spider monkey’s face white and show it a mirror, and it will think it’s another monkey. Paint some of the great apes the same way, and they’ll touch their face. “That is me” implies a sense of me-ness. But we could program a robot to recognize its own image, and even to test a mirror image to decide if it’s “me” through a series of movements or a search for unique characteristics. Would that robot be self-aware?

One basic truth is that AI/robots don’t have to be self-aware or sentient to do damage. It’s doubtful that anyone believes that chess robot was aware it was breaking a girl’s finger. AI systems have made major errors in the past, errors that have done serious damage. The difference between these and “malicious” or “deliberate” misconduct lies in the ability to show malice and to deliberate, both of which are properties that we usually link with at least sentience and perhaps to self-awareness. From the perspective of that girl, though, how much of this is really relevant? It’s not going to make the finger feel better if we could somehow declare the chess robot’s behavior malicious by running some tests.

This broad set of ambiguities is what’s behind all the stories on AI self-awareness or sentience. We don’t really have hard tests, because we can easily envision ways in which things that clearly shouldn’t meet either definition might appear to meet both. Is my robot alive? It depends on what that means, and up until recently, we’ve never been forced to explore what it does mean. We’ve tried to define tests, but they’re simple tests that can be passed by a smart device system through proper programming. We’re defining tests that can’t work where behavior is programmable, because we can program it in.

So let’s try going in the other direction. Can we propose what AI systems would have to do in order to meet whatever test of sentience or self-awareness we came up with? Let’s agree to put self-awareness aside for the moment, to deal with sentience, something that might be approachable.

One path to sentience could be “self-programming”. The difference between a reflex and a response is that the former is built in and the latter is determined through analysis. But anything that can solve a puzzle can behave like that. I’ve seen ravens figure out how to unzip motorcycle bags to get at food; are they self-aware because they can analyze? Analyzing things, even to the point of optimizing conditions to suit “yourself” isn’t exclusively a human behavior, and in fact can be found even in things that are not self-aware. Scale may be a possibility; a sentient system would be able to self-program to deal with all the sensory stimuli from all possible sources, through a combination of learning and inference. Children are taught sentient behavior, either working it out through trial and error or being instructed. Either is likely within the scope of AI, providing that we have enough power to deal with all those stimuli.

We can’t dismiss the role of instinct though. Sentient beings, meaning humans, still respond to instinct. Loud noises are inherently frightening to babies. Many believe that the fear of the dark is also instinctive. Instincts may be an important guidepost to prevent trial and error from creating fatal errors.

Culture is another factor, and in AI terms it would be a set of policies that lay out general rules to cover situations where specific policies (programs) aren’t provided. Cultural rules might also be imposed on AI systems to prevent them from running amok. Isaac Asimov’s Three Laws of Robotics are the best-known example:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are more useful in our quest for a standard of sentience than you might think. Each of them requires a significant extrapolation, a set of those broad policies, because what might “allow a human being to come to harm,” for example, requires a considerable exercise in judgment, meaning inference in AI terms. “Hitting a melon with a hammer will harm it. Hitting a human with one would therefore likely harm it” is an extrapolation of something an AI system or robot could be expected to apply, since conducting the first test wouldn’t be catastrophic in terms of social policy, and the rule could make explicitly testing the second hypothesis unnecessary.

I think that it would be possible, even with current technology, to create an AI system that would pass external tests for sentience. I think that some existing systems could pass enough tests to be mistaken for a human. Given that, we can then approach the complicated question of “self-aware” AI.

You and I both know we’re self-aware, but do we know that about each other, or any other person? Remember that sentience is the ability to respond to sensory inputs through the application of reasoning, meaning inference and deduction. Our ability to assign self-awareness to another depends on our ability to sense it, to test for it. We have done that with some animals, and have declared some to be self-aware and others not, but with animals we have biological systems that aren’t explicitly trying to game our tests. An AI system is created by self-aware humans who would be aware of the tests and capable of creating a system designed to pass them. Is such a system self-aware? I don’t think many would say it is.

The problem with the step from sentience to self-awareness is that we don’t know what makes us self-aware, so we cannot test that process, only symptoms, which can be mimicked by a simple AI system. We may never know. Should we be worried about self-aware AI going rogue on us? I think we have plenty of more credible, more immediate, things to worry about, but down the line? Maybe you need to ask your robot vacuum.

Is Nokia vs Ericsson an Open-Model vs OTT-Voice Duel?

Nokia surprised a lot of people by beating Q2 forecasts, sending its stock up on July 21st. It’s beating rival Ericsson in stock performance for 3, 6, and 12 months, too. Ericsson, as I noted in a blog on July 18th, has bet billions on a Vonage acquisition, while Nokia seems to have bet on an open 5G model. Why have the two taken different paths, which path seems best tuned to market conditions, and what does this all say about the telecom space? That’s what we’ll look at today, starting with Nokia.

While Nokia has been criticized for being “open in name only” with respect to 5G, it has certainly moved to support the open model. I summed this up in my blog, so rather than repeat the criticisms, I want to look at whether the open approach has been helpful.

Operators have long been concerned about vendor lock-in, particularly in mobile infrastructure. The problems Huawei has been having, relating to the US government’s position on the company and the use of its technology, have raised the perceived risk level. Open-model 5G, including O-RAN, is a logical response, but it wasn’t something Nokia cited explicitly on their earnings call last week. They stressed execution and managing the supply chain issues, which is kind-of-pablum for earnings calls these days. The only mention of 5G was the comment that Nokia got off to a slow start and has caught up.

To a degree, Nokia is a kind of natural winner in the 5G space. 5G is a combination of mobile network upgrade (and Nokia has a division for that) and a general infrastructure capacity upgrade (for which Nokia also has a division; two, if you count optical separately). It’s always an advantage to cover most or all of the bases in a network upgrade like this; if you don’t, the other vendors you admit may fight you for some or all of the pieces. Both Nokia and Ericsson can provide mobile-specific transport elements, but Nokia has a pretty strong general metro infrastructure story.

Did Nokia’s open-model embrace help, then? A quick check of comments operators have made to me about them was interesting. Roughly five of every six said Nokia was “open”, far more than competitor Ericsson garnered. Slightly less than half said that “open” was an essential attribute to them, which suggests that Nokia had an “open” advantage among roughly 38% of the operators I sampled (five-sixths of that just-under-half group). Nokia and Ericsson matched each other in other “essential” attributes, like expertise, cost, and support for the required feature set, so this is a good thing for Nokia.

Ericsson, as I noted, didn’t fare as well with “open” comments. Only two operators in five said Ericsson was “open”, slightly less than half the number who described Nokia that way. Ericsson doesn’t have the product breadth in metro infrastructure Nokia has either, which gives Nokia an edge that’s difficult to measure.

What about Ericsson’s Vonage acquisition? As I noted in my blog on the deal, Ericsson seems to be focusing their attention on UC/UCC services, which are connection services. That’s a comfortable place for operators, but it doesn’t move the ball all that much, particularly when the same sort of services could be offered as OTT services (as Vonage has been doing). However, almost the same percentage of operators who thought Nokia was “open” thought that the Vonage deal “could be positive” for Ericsson’s customers.

Wall Street didn’t like Ericsson’s quarter; their stock took a hit. Nokia’s went up on their own quarterly results. On the basis of all of this, it would appear that Nokia has positioned itself better, but it’s not totally clear what role “open” participation had in their results.

We might even be justified in saying that Nokia may have staved off open-model 5G just a bit, if one takes the broadest sense of the term. Operators, confronted with no major/credible vendor who promised an open-model 5G, might well have been way more active than they have been in seeking open solutions from less familiar vendors.

The problem with those less-familiar vendors goes beyond the obvious point that they’re not the usual suppliers of mobile or telecom infrastructure. Because few vendors provide the actual RF components of a mobile network, and many of the most active supporters of open-model 5G are smaller firms, their solutions are necessarily integrated offerings. Operators have long been wary of this model, making exceptions largely when one member of the open project is a giant in another space. That’s one reason, IMHO, the operators have been promoting a public cloud strategy.

The other reason, of course, is that 5G service requirements spread far beyond the traditional wireline footprint of operators. That means that they don’t have real estate to place metro hosting in all the places where they need, for competitive reasons, to offer service. This is the best reason I can see for the Ericsson/Vonage deal. If Ericsson could use the Vonage platform to create a means of “federating” the elements of a 5G service, or even offer to be the “interexchange player of last resort” to link locations away from where an operator would have facilities for hosting, they might be able to fend off the public cloud people. If that were the case, Ericsson could expect to see its fortunes improve considerably, and literally.

Another possible reason for the deal is the AT&T link I mentioned in yesterday’s blog. If AT&T is seeing a need to transform its business voice services to VoIP and a universal 5G platform, then the use of the Vonage platform might help, and it might be at least a contributing reason for AT&T’s selection of Ericsson in its 5G expansion.

For Nokia, anything that could help Ericsson poses the principal risk to Nokia’s current success. As operators respond to commoditization in the connection services space, they are likely to have to elevate their current services, including voice, to something like OTT. Vonage could help support that transition too, and if Ericsson pushes Vonage’s platform properly they could generate enough operator interest to threaten Nokia’s momentum.

But is that likely? All telcos are conservative, glacier-like in their movement. Telecom vendors tend to be the same, and Ericsson has always been perhaps the Great Marketing Sloth in a group that’s pretty slow overall. Nokia, with so many different corporate DNA contributions over the years, has at least some senior people with a more aggressive background. There are a lot of ways they could fend off this latest Ericsson push, and if they do they may take the top position in the telecom infrastructure space for quite a while.

We can’t declare Nokia’s upside a victory for open-model networking, and it’s too early to say just how well Ericsson will, or even can, play its Vonage acquisition. Any way you look at it, the Nokia/Ericsson and AT&T/Verizon duels suggest that we’re in for some interesting times in telecom.

Has AT&T Done Enough?

I probably watch AT&T more closely than I do any other telco, any other network operator. Not only have I chatted with literally hundreds of AT&T people over the years, I’m convinced that they are a poster child for “telco-tries-to-be-realistic”, and I certainly think they need to do that. I also think this quarter’s earnings prove that point, and maybe offer some insights to operators overall.

The first thing that jumps out from the transcript of AT&T’s call this quarter is the comment “we’re continuing our progress, improving our infrastructure and expanding our customer base across our twin engines of growth, 5G, and fiber.” They also said they had more quarterly postpaid adds than ever before. In fiber, AT&T reported over 300,000 fiber adds, and the tenth straight quarter with over 200,000. While AT&T might not agree, I think that 5G success can be linked to smartphone promotions to facilitate a switch to 5G or even a switch to AT&T from a competitor. I think that fiber wireline Internet emphasis is defensive; they need to fend off players like Comcast who are MVNO partners with a competitor. DSL won’t do the job.

On the business side, AT&T has issues, but it’s hardly alone. They commented that “We’re seeing more pressure on business wireline than expected.” If you read (or listen) further, you find that AT&T is seeing businesses gradually shift away from what they call “legacy voice and data services”. There is no question that businesses have been far more resistant to VoIP and Internet-based connectivity than consumers; consumers of course never really had any data service but the Internet.

The most interesting comment on business services is “On the data front, VPN and legacy transport services are being impacted by technology transitions to software-based solutions. Today, approximately half of our segment revenue comes from these types of services.” What this is saying is that SD-WAN and virtual networking in general, using Internet connectivity as a “dialtone”, are starting to displace IP VPNs and business-specific broadband access.

Let’s stop here and parse this stuff a bit. Businesses, as I’ve pointed out in past blogs, link incremental spending to incremental benefits. Absent some justification, they want to see costs go down, not up. The fact that network connectivity hasn’t been linked to any transformational changes in productivity puts price pressure on even current network spending, and makes any increase hard to justify. That shows that “business connectivity” is commoditizing.

Viewed in this light, the 5G and fiber move has another level of justification. AT&T notes that business customers are replacing legacy voice with mobile services. The flight from legacy business data services combines with this to put business revenues under immediate pressure, so one solution is to try to beef up residential revenues, and another is to deploy 5G assets to capture any flight from legacy voice, and beef up fiber to extend quality Internet access to more places, including places where there are branch offices of their valued (and fleeing) enterprise customers.

Government spending is also under pressure, which AT&T says accounts for 20% of business service declines. Here, where it’s policies and not market conditions that establish the purchase justification framework, the company really can’t offer any affirmative options beyond the same 5G/fiber-broadband focus that they’ve adopted to support consumer and enterprise. I think it is very possible that targeted software-defined data services and mobile/VoIP OTT-like services that would appeal to businesses on cost could also improve AT&T’s ability to win government deals.

The reason I noted “OTT-like services” as appealing is that AT&T also said that out-of-area service extension contributed about 20% to business service revenue declines. They said “This pressure will be managed through opportunities to operate more efficiently, movement of traffic to alternate providers, symmetrical wholesale pricing adjustments and natural product migration trends.” While they didn’t say this explicitly, a migration to a software-defined business VPN strategy for out-of-region branch connectivity would surely help manage these “wholesale” costs.

This sort of service shift also raises the issue of 5G dependence. 5G is seen by many (apparently including AT&T) as a platform for the creation of wireless features that would earn more revenue, which simple 5G migration does not create. The problem is that, like business services, 5G value-add features like network slicing would have to be “federated”, wholesaled from a provider who actually had facilities in a given geography that AT&T didn’t cover directly.

A shift to business services based on mobile and software-defined capabilities is an OTT strategy. AT&T has in the past commented on the idea of building “facilitating services”. According to what AT&T said in March, 2022, “On what I refer to as Act Two, we are doing a lot of work today that is enabling us to open up aspects of the network for others to come in and start at offering value-added services associated with it.” It’s very possible, even likely, that AT&T intends to use these opened-up assets to enhance OTT services, making them differentiable on AT&T’s own network, where on other networks they’d simply be riding on top.

One thing that seems pretty clear is that you can’t admit to commoditization of transport and a growing dependence on OTT without having any revenue strategy to offset the inevitable declines at the lower level. This may be another example of what Ericsson could be hoping to exploit with the Vonage deal, since it was Ericsson that was picked by AT&T for its 5G expansion. That would mean that AT&T and Ericsson/Vonage would have to codify how the Vonage OTT stuff could exploit APIs that AT&T exposes below. Does AT&T use the Vonage platform developer program to do some of the linkage?

Another thing that seems clear is that it’s likely that most, if not all, telcos will have to face these same issues at some point. AT&T has the lowest demand density of the Tier One telcos, which means that its natural return on infrastructure investment is under the most pressure. However, rival Verizon had an objectively bad quarter while AT&T had at least some bright spots. Verizon has the advantage of high demand density, but that may have lulled the company into complacency, particularly with respect to how to deal with the commoditization of connection services and pressure on return on infrastructure investment. And, of course, Verizon’s territory is dense enough to encourage competition.

A final point, perhaps the most critical point, is that business service infrastructure and residential broadband infrastructure have to converge. When you have a problem with return on infrastructure you need to take advantage of common facilities wherever possible. The fact is that there are no branch office “business access” technologies in common use that can measure up to the performance of residential broadband. Branch locations, at the least, need to connect through the same facilities as consumers. I think AT&T’s fiber and 5G plans, which are focusing on areas of high demand density within their territory, are a step toward “common-ization”, and here again I think AT&T is taking a lead over other telcos.

Not as much as they should, though. AT&T has tried harder than any other Tier One to confront the future, but they’re still entombed in telco amber, limited in their ability to address the future by the blinders of the industry. The issues they saw first, and have tried to address aggressively, are being felt by others now. Has AT&T done enough to gain the running room they need to do the rest? I suspect we’ll get the answer to that in 2023.

How Do We Orchestrate Complex Services?

One of the things that 5G and the notion of “virtual network functions” have done is demonstrate that “provisioning” is getting a lot more complicated. A virtual function that mirrors a physical device lives in two worlds, one the world of network operations and the other the world of application/cloud hosting. If we expand our goal to provisioning on-network elements (virtual functions are in-network), or if we create virtual functions that have to be composed into features, it gets more complicated yet. What does “service orchestration” mean, in other words?

In the world of networking, services are created by coercing cooperative behaviors from systems of devices through parameterization. Over time, we’ve been simplifying the concept of network services by adopting an “IP dialtone” model overall, and with our increasing reliance on the universality of the Internet. This model minimizes the provisioning associated with a specific service; Internet connectivity doesn’t require any provisioning other than getting access working, and corporate VPNs require only the use of a specific feature (MPLS) that’s pre-provisioned.

Application networking has also changed over time. The advent of the cloud, containers, and Kubernetes has combined with basic IP principles to create what could be called the subnet model. Applications deploy within a subnet that typically uses a private IP address space. Within this, all components are mutually addressable but from the outside the subnet is opaque. The on/off ramps are then selectively “exposed” to a higher-layer network through address translation.
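To make that subnet model concrete, here’s a minimal, purely illustrative Python sketch of my own; the class, field names, and addresses are assumptions for illustration, not drawn from Kubernetes or any orchestration product. Components are mutually addressable inside a private address space, and only explicitly exposed on/off ramps appear in the translation table visible to the outside.

```python
# Illustrative sketch of the "subnet model": private, mutually addressable
# components, opaque from the outside, with selected on/off ramps exposed
# through an address-translation table. Not a real orchestrator.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Subnet:
    private_cidr: str                                              # e.g. "10.42.0.0/16"
    components: Dict[str, str] = field(default_factory=dict)       # name -> private IP
    nat_table: Dict[Tuple[str, int], Tuple[str, int]] = field(default_factory=dict)

    def add_component(self, name: str, private_ip: str) -> None:
        # Addressable by other components inside the subnet, invisible outside it
        self.components[name] = private_ip

    def expose(self, name: str, private_port: int,
               public_ip: str, public_port: int) -> None:
        # Selectively publish one on/off ramp to the higher-layer network
        self.nat_table[(public_ip, public_port)] = (self.components[name], private_port)

app = Subnet("10.42.0.0/16")
app.add_component("virtual-firewall", "10.42.0.7")
app.expose("virtual-firewall", 8080, "203.0.113.10", 443)
print(app.nat_table)   # only the exposed ramp is visible externally
```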

If we apply this combination to the world of Network Functions Virtualization (NFV) using the ETSI ISG rules, we can see a “service” that is made up of discrete hosted functions that are “service chained” with each other, and which obviously also have to connect with real devices. NFV took a network-centric view in service chaining and internal service connectivity, so the virtual functions were hosted in the “application domain” and connected in the “network domain”. VNFs lived in the network domain as virtual forms of devices.

What happens when we move away from original-ETSI-ISG models to “containerized network functions”? It depends, and that’s the nub of our first challenge in orchestration. Do we create CNFs by containerizing VNFs but otherwise leave things as they were? If so, then we’ve not really moved away from the ETSI approach, only changed the hosting piece. VNFs are still devices in the network domain. Or, do we adopt the cloud/container/Kubernetes model of hosting, and extend the network assumptions of an “application domain” to include devices? Or will neither work?

This isn’t a philosophical issue. The reason why the NFV ISG locked itself into its model of networking was to preserve existing management practices and tools. If a VNF is a virtual form of a physical device, then we could argue that a network service is created by device management as usual, and that the virtual-device piece is created by software hosting. Ships in the night, except of course that if a virtual device fails and can’t be restored at the “hosting” level, it has to be reported as failed to the network service layer.

If ships in the night are still passing, so to speak, then the current management and orchestration practices associated with networks and applications can continue to be applied. There isn’t a compelling need to integrate them, so “service orchestration” may not be a big deal. In the world of NFV, this seems to be a safe bet.

But the world of NFV may not be the end of it. If we are deploying services, can we compose a service where a real device or a virtual device might be employed? If so, then our deployment orchestration process has to have the ability to recognize equivalence at the functional level but divergence at the realization level. We could even envision a situation where somebody wants “SASE-Device” as a part of the service, and that want might be fulfilled by 1) loading SASE functionality as a VNF into a uCPE element already in place, 2) shipping a uCPE or an SASE device and then setting it up, or 3) deploying SASE functionality inside a cloud data center.

That latter point implies that we have to consider “local conditions” when setting up a service. Those conditions could include not only what’s locally available already, but also perhaps constraints on what can be used, such as cost. This in itself suggests that it may be necessary to mingle provisioning steps across the network/hosting boundary. Ships in the night may collide.
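As a sketch of what that kind of decision might look like inside an orchestrator, here’s a deliberately oversimplified Python fragment that resolves the abstract “SASE-Device” requirement against local conditions. The field names, the cost threshold, and the decision order are all my own assumptions for illustration, not anyone’s product logic.

```python
# Illustrative only: resolve an abstract feature ("SASE-Device") into one of
# the three realizations described above, based on local conditions.
from dataclasses import dataclass

@dataclass
class LocalConditions:
    ucpe_in_place: bool      # is a uCPE element already installed at the site?
    cloud_dc_nearby: bool    # is there a usable cloud data center in this geography?
    monthly_budget: float    # commercial constraint on this service leg (assumed units)

def realize_sase(site: LocalConditions) -> str:
    """Pick a realization for the abstract 'SASE-Device' requirement."""
    if site.ucpe_in_place:
        return "load SASE functionality as a VNF into the existing uCPE"         # option 1
    if site.cloud_dc_nearby and site.monthly_budget >= 100.0:                    # assumed threshold
        return "deploy SASE functionality inside the nearby cloud data center"   # option 3
    return "ship a uCPE/SASE device and set it up on site"                       # option 2

print(realize_sase(LocalConditions(ucpe_in_place=False,
                                   cloud_dc_nearby=True,
                                   monthly_budget=250.0)))
```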

The dilemma that faced the NFV ISG, and the ships-in-the-night choice it made, is now being faced by Nephio, for the same reasons. Service orchestration, in a unified sense, is a big challenge. I took the time to lay out a complete model in ExperiaSphere, and it took six sets of tutorial slides to describe it. The late John Reilly of the TMF (one of my few personal heroes) created a model that, if properly extended, could have done much the same thing, but a decade earlier (NGOSS Contract). Implementation difficulties, then, may be the issue in service orchestration.
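For readers unfamiliar with the NGOSS Contract idea, the essence is that the service data model itself steers lifecycle events to processes. The toy Python rendering below is my own drastic simplification of that principle, not the TMF’s or ExperiaSphere’s actual structure; the states, events, and process names are invented for illustration.

```python
# Toy sketch of model-driven lifecycle handling: each modeled service element
# carries a state/event table that routes lifecycle events to processes.
STATE_EVENT_TABLE = {
    ("ordered",   "activate"): "deploy_resources",
    ("deploying", "ready"):    "mark_active",
    ("active",    "fault"):    "attempt_redeploy",
    ("active",    "teardown"): "release_resources",
}

def handle_event(element_state: str, event: str) -> str:
    """Return the process to run for this (state, event) pair."""
    try:
        return STATE_EVENT_TABLE[(element_state, event)]
    except KeyError:
        return "escalate_to_operations"   # no defined handling: manual intervention

print(handle_event("active", "fault"))    # -> attempt_redeploy
```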

Or, it may be that we’ve not faced the issue because we’ve been able to avoid it, which is what I think is the case. From both the network side and from the hosting side, jumping between ships in the night seems unnecessary. The question is whether that’s true.

Currently, it probably is. As long as we stick with the NFV-established vision that a VNF is just a virtual device, then what we’re doing is codifying a vision where separation of orchestration doesn’t change just because we may add different virtual devices in the future. That’s because the NFV vision is not really limited to current physical devices; what we’re really saying is that the network behavior of a virtual function can be separated from the hosting. That’s true with 5G, which is the only truly standards-sanctioned use of NFV.

I’m not so sure that those ships can continue to pass, night or otherwise. There are a number of trends already visible that suggest we may need integrated provisioning.

First, the question of whether physical devices should be required to decompose themselves into microservices before being assembled into virtual devices was raised at the first US meeting of the ISG in 2013. At the time, there was great concern that requiring this sort of decomposition/recomposition would make vendors unwilling to submit their current device logic in VNF form, and I agreed with that. I wish I’d brought up the question of whether future VNFs might be better composed from microservices, but I didn’t. If we are to compose a VNF, then multiple software instances have to be selected and deployed to create the network-layer managed entity, and that may be beyond simple ships-in-the-night orchestration.

Second, operators worldwide are trying to automate service lifecycle management, from order entry through end of life. Automated lifecycle management has to be able to accommodate differences in “local conditions”, which for residential services means in a variety of areas, and for business services means in multiple locations. The more location differences there are, the more important it is to be able to change an abstract requirement (an SASE, to use my earlier example) into a locally optimal specific deployment.

Third, we are already seeing operator interest in “facilitating services” designed to bridge between their connection services and on-the-network or OTT services. It’s hard to imagine how these will be supported without a fairly comprehensive orchestration capability, not only because they’re likely to require feature deployment to accommodate OTT partners’ locations, but also because they’re a form of federation.

Which is the final reason. Even 5G and VNF-based services are already facing the question of how they are supported when an operator’s footprint won’t match the footprint of the customer. Every operator who has this problem (which is pretty much every operator) will conclude that they don’t want the customer to contract for each geographic segment independently, but if that isn’t done then the operator who owns the deal will have to accept federated contributions from other operators. How, without better orchestration, would those be lifecycle-managed?

Of course, logically deducing that we need an orchestration solution doesn’t create one. We could always just stay the current course, and if we do we’re likely heading for a future where “the Internet” subsumes every other notion of networking. If we presumed universal IP like that, we could use virtual-networking tools (which Kubernetes, for example, already supports) to build service connections both to the users and between feature elements. The problems with this are that it could take a long time to evolve, and that if it does it’s difficult to see how network connection features like 5G slicing could be differentiated without mandating paid prioritization on the Internet.

This is a complex issue, one that’s going to take some serious thinking to resolve, and it seems like there’s an almost-universal tendency to just assign the problem to a higher layer. That’s not been working to this point, and I don’t think it will work better in the future. Adding layers adds complexity, in orchestration, security, and pretty much everything. We seem to be complex enough as it is.

Can We Define the Next Big Tech Thing?

I blogged yesterday to ask the tech industry “What Lies Beyond?” What I want to do here, and what I deliberately did not do yesterday, was try to answer that question. That’s because my answer would move from analytical assessment of what I consider to be hard facts, to interpretations that in the end are my personal judgment…an educated guess if you like. I don’t want to mix those things on a topic this important.

If you look at our relationship with tech as it’s evolved over the last 70 years or so, it’s possible to see a trend that’s hardly surprising, but not always considered. Tech is getting more involved with us. In business, tech is moving closer to the how and not just the what. We used to enter transactions into databases by punching cards to record them long after the event. Yesterday, I checked out of a market by waving my phone over a terminal, and everything that those cards carried, and more, is pushed through a whole series of applications.

In the consumer space, many people have built their lives around what is in fact a virtual world. Even I, a social-media Luddite, have many “friends” that I don’t see face to face, and some I’ve never seen that way at all. We are entertained by devices we carry with us, not ones we stick on a console in the living room. We crowdsource more and more, probably more than we should, because we can.

The way that tech has changed businesses and lives is important because it’s the sum of the changes we think are valuable that determines what we’re willing to spend on tech. Thus, if the question “What Lies Beyond” that I asked in yesterday’s blog is important, it’s important to understand how tech gets even closer to us, more involved with us. Features of technology are important only insofar as they support that continued shrinking of tech’s distance from our lives.

To me, there is a single term that reflects what we need to be thinking about, and that term is context. Suppose you had an oracle (a soothsayer, not the company!) who told you the absolute truth about everything. It would be great, right? Now imagine yourself sitting in a restaurant having lunch, and hearing the oracle say “when you mow your lawn, you need to set the mower an inch higher.” Or maybe, as you ponder an investment decision, hearing “you haven’t talked with Joan and Charlie for a while.” Both those oracle comments may be not only true but helpful, but not in the context in which they’re delivered.

What lies beyond our on-demand world is the anticipated world. What allows useful anticipation is context.

Context is made up of several things. First, there’s what we are doing. Second is what we need to do it. Third is where we are doing it. Fourth is the risk-reward balance of interruptions. Even if we’re pondering investments, knowing the building is on fire justifies interrupting our deliberations. Knowing that the lawn needs mowing probably does not.

The problem with addressing context is that it requires too much from us. None of these components are things that our gadgets can readily provide. By nature, context is systemic, and so we need to get the system in which we live, the real world, transported into our tech world. This is best accomplished, like many things, through a movement on both sides. Thus, the what-lies-beyond future depends on three things: the metaverse, IoT, and AI.

What underpins context is what I’ve called the “metaverse of things” (MoT) or a digital twin. In order for tech to be contextual, it has to have a model of our context that it can deal with. The combination of digital twinning technology and an injection of metaverse principles can allow IoT and other information resources to be introduced into a contextual model that can also “contain” the user.

I’ve blogged before about MoT, digital twinning, and contextual services, so I won’t repeat that here. Instead, I want to look at the tech impact of these concepts, by linking them back to the driving benefits.

Any widespread use of context demands at the minimum a broad view of presence; not just the simple “I’m online” stuff we get now, but something more body-specific and motion-and-location-specific. This would mean a combination of “tags” (which could be watches, even smartphones, but could also be RFID-like elements) and “sensors” that could sense the tag positions. We could expect this to provide a mission for IoT, and we could also expect it to be multi-leveled, meaning that people in their homes or workplaces would “allow” greater precision in accessing those tags than those walking the street.

Because this sort of tag/sensor combination is useless if every MoT player decides to create their own system, we can also expect that this will standardize at three levels. First, obviously, the sensors and tags would be standardized. Second, the message formats would be standardized so applications with access rights could read the presence data. Finally, there would be a service-level standardization that would provide access to sensor/tag data with the levels of anonymity regulators require, and that would simplify software development by presenting “presence features” rather than raw data. You can’t have hundreds of applications accessing a simple sensor without blocking it.
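To illustrate what a service-level “presence feature” might look like, here’s a hypothetical Python record plus a privacy filter. The field names, the clearance scheme, and the precision rule are purely my own assumptions, not drawn from any actual or proposed standard.

```python
# Hypothetical "presence feature" record of the kind a standardized
# sensor/tag service layer might expose to applications instead of raw reads.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresenceFeature:
    subject_id: str          # pseudonymous tag identity, not a personal identifier
    zone: str                # coarse location ("home", "store-214", "street")
    precision_m: float       # positional precision permitted for this requester
    moving: bool             # motion state inferred from successive sensor hits
    activity: Optional[str]  # behavioral context if permitted ("driving", "walking")

def redact_for(requester_clearance: str, p: PresenceFeature) -> PresenceFeature:
    """Apply a simple privacy policy: untrusted requesters get only coarse data."""
    if requester_clearance != "trusted":
        return PresenceFeature(p.subject_id, p.zone, max(p.precision_m, 100.0),
                               p.moving, None)
    return p

raw = PresenceFeature("tag-8842", "store-214", 2.0, True, "walking")
print(redact_for("untrusted", raw))   # coarse precision, no behavioral context
```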

The next thing you could expect is that latency becomes important, but it’s not just network latency, it’s the length of the control loop. The first step in keeping latency down is to provide edge computing facilities to shorten transit delay, and the second step is to improve network latency, first to the edge sites and then between edge sites. If we assumed that edge hosting was done in a metro area, major cities would likely be able to satisfy edge requirements from a metro center; for “metro areas” that are spread out (Wyoming comes to mind; the whole state was defined as a Local Access and Transport Area or LATA) it would probably be necessary to spread the metro out, or at least to manage a trade-off between the number of connected edge sites (to reduce latency in reaching one) and the latency of the connections.
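As a back-of-the-envelope illustration of why the whole control loop matters, not just network latency, compare a metro-hosted edge with a distant regional data center. The numbers below are purely hypothetical.

```python
# Control loop = sensor -> network -> edge processing -> network -> actuator.
# Illustrative numbers only; real values depend on the network and workload.
def control_loop_ms(one_way_net_ms: float, processing_ms: float) -> float:
    return 2 * one_way_net_ms + processing_ms

print(control_loop_ms(one_way_net_ms=5.0,  processing_ms=10.0))   # metro edge: ~20 ms
print(control_loop_ms(one_way_net_ms=35.0, processing_ms=10.0))   # distant DC: ~80 ms
```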

The notion of presence as a physical property has to be augmented by presence in a behavioral sense, meaning that applications would need to understand that “what am I doing?” dimension. Some of that knowledge could come from physical presence; I’m driving a car, walking along, riding the subway. Some could come from my interaction context; I’m chatting with Jolene or Charlie, texting with my mom, or perhaps I’m receiving a call that’s been prioritized based on my policies. This information is generally available today, but in order for it to be a real driver of applications, it would have to be deliverable in a standard form and conform to regulatory policies on privacy and personal protection, just as physical presence would.

What I think this exercise demonstrates is that the future of tech depends on the creation of an ecosystem, not on the individual technologies that make it up. If you improve network latency you have removed that barrier to a set of applications for which we have a half-dozen other inhibitors to deal with. Same with edge computing or IoT sensors. Our problem in advancing tech to the next level, in the sense of being able to help workers more directly to enhance their productivity, or to improve quality of life for consumers, is that we’re looking at a strand of spaghetti and calling it a dish.

If we want to advance tech, in computing, networking, personal technology, or whatever, then we need to support this ecosystemic view in some way. If we don’t do that, then eventually we’ll likely stumble on a solution, but it will take decades where doing it right might take months. Which tech evolution do you prefer? I think you know my choice.

Tech: What Lies Beyond?

Why is it that we always seem to miss the obvious? OK, we had a COVID wave, during which some areas were locked down and many people avoided going out, particularly to “social gatherings”. Economic data shows that they accumulated record savings. What happens when the restrictions on their lives are lifted? Answer: They spend some of those savings. Call it “rebound behavior”.

How about this corollary? People stayed home, so what did they do? Educate themselves by reading the classics, or watch TV? You can guess that one. In a desperate attempt to find something to watch after the first week of isolation, they use some of that money they’re saving to subscribe to pay TV. What happens when they can go out again? Answer: They go out, and thus they’re not home to watch that pay TV, so they cancel their subscription.

The pandemic created massive shifts in human behavior, which created massive shifts in economic behavior. Returning to “normal” doesn’t mean that somehow with the advance of the calendar, we erase the past. We recover from it. That’s a big part of why we’re having inflation now, and it’s a big part of why we’re seeing changes in the market sectors that were impacted by the behavioral changes. In tech in general, and networking in particular, it would be nice to know what changes are transitory, part of rebound behavior, and what are likely to persist.

Objectively, the current impacts on the global economy and markets are related to “inflation”, and I put the term in quotes for reasons you’ll soon see.

Inflation is an increase in the price of goods, almost always associated with an excess of money to spend. The most familiar cause of inflation is boosting the money supply—we saw that with Confederate currency during the US Civil War, and with the German mark in the early years of the Weimar Republic. A similar, though usually less dramatic, source is excess economic stimulation, which some have said is a factor in the current situation. But there’s another factor at play here, which is the fact that consumers during the pandemic accumulated record or near-record savings, and denied themselves a bunch of things. Pent-up demand is a powerful force once the factors suppressing demand are eased.

The normal response to increased demand is that happy vendors increase supply. That’s been a problem because there are constraints on supply chains created by both COVID and the post-COVID mindset. We had a lot of people working from home or out of work. Not all were willing to return to business as usual. We had business failures, people who found other positions, and all this created distortion in the labor markets. Governments have also been stressed by conditions, including civil unease, and have responded by trying to gain advantage somewhere. In short, we’re in flux in the global economy, while the economies of the world adjust to the evolving market realities. That’s why global stock markets have been so volatile.

Are we in, or about to be in, a recession? If so, it will be an unusual one. Unemployment traditionally spikes when we’re entering a recession, and that’s not happening. Consumer spending usually dives, and that’s not happening either…yet. The question, from a broad economic perspective, is whether these twin not-yet factors turn negative because of increased worry, before some of the economic shifts and shakes have shaken out. If they do, then we can expect that there will be a recovery in the second half. If not, then when we finally see daylight will depend on how far our twin factors go down the tube before we see stability in the fundamentals of the global economy.

For tech, we can expect a variation on the theme here. Businesses will typically try to plan capital purchases for stable economic conditions. If they see an issue, they will often take action to minimize their exposure if things get worse. Some banks’ decision to increase reserves for bad loans is an example, and so are decisions (so far by a limited number of companies) to slow-roll tech spending plans, particularly new projects and new spending predicated on a general improvement in revenues. This sort of reaction tends to happen when there’s a perception of risk that crosses over into another budget period. Companies (assuming, for convenience, calendar-year budgeting) will often push projects into later quarters if they feel uncomfortable with conditions.

As of today, I’m not hearing too many cases of project deferral. So far, the largest constraining factor on vendor revenues has been delivery challenges created along the slippery slope of the current supply chain. Since the supply chain constraints on the production of goods are also perhaps the largest source of inflationary pressure, fixing the supply chain would fix things overall, fairly quickly. When, then, could we expect some relief there?

That’s hard to say. I’ve attempted to run my model on buyer behavior to guesstimate something, but this is uncharted territory. I told a friend in the investment business that I expected that the quarter just ended (April-June) would still probably generate some unhappy news, but that it would also generate some good news. The next quarter, then, would be when I’d expect the good news to dominate. Given that the stock market is driven by the prospect of something more than by its realization (by the time what’s coming is clear, everyone else has already taken action, so you need to move on rumor), we could expect stocks to be somewhat ahead of overall economic conditions.

That doesn't mean that tech is going to roar back to pre-COVID levels, or better. A lot of buyers have been planning for the worst, and some of the steps they've contemplated wouldn't be inappropriate even in good times. As is the case with the economy overall, there's going to be a race between positive moves on the supply chain, and negative reactions to inflation, central bank interest rate increases, and overall economic concerns.

Two things will make the difference between tech vendors at risk and tech vendors with an opportunity. The first is the breadth of the products on which their revenue depends. This is not a time when the broadest solutions win; breadth magnifies the risk that at least some of those products will be under project-deferral pressure. The second is a strong link to a credible future requirement set. Buyers are still looking forward, but they're potentially looking beyond their normal horizon. If they see a vendor who seems to offer something outstanding in support of a distant shift, that vendor will look better in the near term. This second point is linked to the concept of a "planning horizon".

From the very first, my surveys of enterprises have indicated that companies considering capital purchases will assign an expected useful life to those purchases (three to five years), but in the last two decades they’ve responded to increased interest in “technology substitution” by defining what is in effect an expected useful life of the approach or architecture on which their purchases are based. Over time, enterprises have tried to look further forward than the “useful life” of equipment, thinking that when a piece of equipment reaches end-of-life, it would likely be replaced by a comparable device. However, if the entire approach is now subject to review, an obsolete piece of equipment may not be “replaced” directly, but be part of a broader shift in approach.

What I am seeing from some enterprises is an indication that the planning horizon is being pushed out, as well as the "useful life" period being extended. That means that enterprises are now sensitive to risks to their whole approach, which means they want to see vendor engagement with the kinds of future developments that could create a change in that approach. This is why having a strategy that specifically links to credible shifts in technology and architecture is a benefit.

The obvious question is just how a vendor could create links to credible shifts in technology and architecture. It’s likely the answer to that would be different for enterprises versus service providers, and perhaps even across the various global regions. It’s also likely that the best answer would include things that aren’t particularly to the advantage of the vendors.

Start with a basic truth, which is that pure connectivity is under pricing pressure from both sides. Consumers may well believe "bandwidth hype" more than enterprises do, but neither group is showing buyer behavior that suggests they'd pay more per unit of capacity than they do today. In fact, there are strong signs that they want to pay less. What's needed to boost tech is simple: credible benefits. Simply increasing bandwidth doesn't deliver that, particularly for businesses. We need to go beyond.

What lies beyond? That's a question that could have been addressed two decades ago. In the business space, I noted that we'd seen cyclical changes in tech spending that reflected periods when new tech opened new business benefits for exploitation. The last such cycle ended in roughly 2000, and we've not had one since. Could we have? I think so, and three waves of spending improvement, validated by real government data, seem to back that up. We have an opportunity to answer the "what's beyond" question today, too. The long-term success of tech in avoiding commoditization depends on doing just that.

Ericsson and Vonage: Any Hope?

Ericsson has announced that it's received final regulatory approval for its acquisition of Vonage, a "communications service provider" that's long been an OTT play. For many analysts (both telecom and Wall Street) this was a head-scratcher from the first, and now it's happening. Is there any rationality behind it, or is it a knee-jerk reaction to the ongoing problem operators are having in monetizing 5G? I blogged on this acquisition last year, but let's revisit it now to see if it makes more sense…or less.

From the first, Ericsson has made it clear that what it wants from Vonage isn't a service platform to compete with its telco customers, but rather a technology platform to allow those customers to compete better with OTTs. The Vonage Communications Platform is a set of APIs and a developer program, and it has been used (and can be further promoted) to develop OTT collaborative services. The idea, says Ericsson, is that operators could use it to build service offerings like Teams and Zoom, and so avoid the dreaded "disintermediation" they've been talking about for almost two decades.

Why now? The answer is 5G. In reality, what 5G has been from the first is an evolutionary step in wireless services. It’s not the Great Transformation, or maybe even the little one. It’s an evolution, but like all network technology evolutions, it requires operator investment. Most 5G users don’t even know when they’re on it, or back on 4G LTE, but operators need to pay nevertheless.

It's hard to say whether operators promoted the notion that 5G would be a Great Transformation, whether it was vendors (like Ericsson) who wanted the Street to see rosy profit growth, or the media, which is always looking for something to promote. Everyone got into the act, so we got this 5G fantasy story. The question is whether Ericsson believed it enough to spend a boatload of money buying Vonage to exploit it, for some good reason. If not, this might have been the Great Boondoggle instead of the Great Transformation. In fact, that should probably be the default outcome, unless Ericsson can make things different.

The biggest barrier to that is the network operators themselves, Ericsson's customers, the players whose 5G fortunes need a bit of tuning. It's not that operators oppose the idea of a service platform, or that they object to having something ride on top of 5G. The problem is that collaborative services have already established themselves, thanks in no small part to COVID, and I mean well established. I've talked about the risks Cisco faces by being a "fast follower". How about the risks of being a slow follower?

Then there’s the problem of Vonage and its customer base. Vonage’s platform developer community has built up services that they, or Vonage, or both, make money on. Vonage, then, is offering collaborative services, which you can see simply by visiting their website: “Power your customer experiences across the journey. Connect employees any time, from anywhere, on any device. Vonage does that.™ Now we’re talking.”

This raises the question of how Ericsson monetizes the acquisition, which comes down to how Ericsson's operator customers exploit the Vonage platform. Do they compete with Vonage (and vice versa), or do they encourage developers to build operators' own custom collaborative services? I said at the outset that Ericsson doesn't seem to be trying to compete with its operator customers for 5G services, but collaborative services are what Vonage already does. To avoid competition, Ericsson would need a plan to "deconflict", which we don't yet have.

Then there’s the “business as usual” question. I would like to see operators make a success of some over-5G service, or any other higher-level service, frankly. They seem to be locked into the notion that connectivity is all they can do, and doesn’t the Vonage website quote say “connect employees any time, from anywhere, on any device”? That sure sounds like connectivity, which means it sounds like it’s playing the same tune as the one operators have failed to dance to for over a decade now. How is this going to help anything, anyone?

It is possible that Ericsson has some vision of applying Vonage's platform beyond what Vonage itself has exercised. After all, Vonage was a VoIP company to start with, a connection player. Possible, but if that's the case, why not say something about it? You might argue that Ericsson doesn't want to tip its hand on this point, but one of the things that typically happens with any big M&A attempt is that people in the acquired company who don't want to be part of the acquiring one bail out. Usually these are higher-level people who know that their days would be numbered anyway. What are the chances that none of those people knew what Ericsson and Vonage were discussing? A secret strategy could stay secret only if Ericsson never checked with Vonage people to validate the possibilities, and that doesn't seem very likely either.

It’s also possible that Ericsson is simply trying to give the customer what they want, even if perhaps the customer is wrong. “The customer is always right” is, after all, a time-honored principle. This would mean that if it is essential for operators to build services on top of 5G, and if operators are predisposed to consider only comfortable connectivity services, then buy up a platform that can provide those services. Then all that remains is to figure out how to work Vonage’s business into the picture.

Perhaps as an out-of-area partner or extension? Who are the prospective customers for the Ericsson-Vonage-technology operator collaborative service? Enterprises, who are very unlikely to have workforces conveniently concentrated in any operator’s 5G footprint. Enterprise-wide collaboration means, effectively, global collaboration. Vonage could offer a way to pull in the workers that aren’t within an Ericsson-partnered operator’s footprint.

The pitfall with this is that if we consider collaborative services to be OTT services, and if we believe that Teams and Zoom are existing examples, then the Internet is enough to extend service to the world, isn't it? Not if we assume that 5G-specific service features are to be part of the picture. What Ericsson could do with Vonage is provide what's essentially a 5G-collaborative interexchange carrier capability, one that could be married to local operator 5G features and the 5G features of other operators who also buy into Ericsson's 5G strategy.

One of the points often overlooked in discussions about advanced 5G applications is the question of how an enterprise service extends beyond a single operator's footprint. Do operators "federate" (to use a term they often use) things like slices, or something at a higher level? I'd argue that history suggests operators are happy to interwork services but much less willing to federate lower-level elements, which 5G features like slices would surely be. Vonage could be a means of turning collaborative communications into a federated service.

It could even be more. Might Vonage platform APIs be a means of defining a more general model of service federation? I’ve not been able to review the details of their APIs, but the public information is at least suggestive of support for a broader mission. Operators might see this as a technical benefit, and they might also see it as a way of avoiding a marriage to cloud providers in order to provide wider-geography scope to their services.

And, of course, it could be less than all of this. Vendors in the telecom space have proven they have no better grasp of the evolution of services and the importance of the cloud than their customers do. Ericsson may have just spent almost seven billion dollars on something that had potential, but potential it couldn't realize. We'll see.

Could Super-Apps Save Operator Profits?

Nobody doubts that network operators are looking for new revenue sources. Even the operators themselves doubt that there’s much consensus on what sources might be credible. An idea that’s floated around both operators and OTTs is what a recent Light Reading article refers to as a “super app”. Patterned after the WeChat app in China, a super app would presumably be a kind of universal portal, a one-stop shop for a lot of activities. Since portals are a strategy operators have favored for other reasons, might this be an example of a new opportunity?

Operator portals emerged, at least into my personal visibility, over a decade ago, as operators looked for a way of reducing the need for customers to interact with operator representatives for things like routine service changes and (increasingly) customer support. In terms of service changes, portals have been generally successful, but both operators and customers have been less excited by the customer support dimension. Few operators have really even looked at extending their portal into super-app territory.

What’s the difference? Portals are designed to support user interaction with a company’s own services/products. What’s behind the super-app concept is the theory that by expanding the portal to supply access to other services and sites, the portal becomes more valuable and the customer is encouraged to use it, thus increasing engagement with the portal provider.

The first and most obvious question about the viability of the super-app opportunity for network operators is whether “increasing engagement” is useful. If you’re a Google or a Meta or a Microsoft, it’s easy to see how being a window into your customer’s world would give you a competitive advantage. Knowing what a user is doing online is inherently valuable, after all, providing that you can leverage that knowledge, which all OTTs are surely able to do. But what about operators?

There is considerable regulatory and consumer concern about having an ISP "see" traffic. Many users worry to the extent that they'll use a different DNS service just so the operator can't see their requests for website IP addresses. People use VPNs to hide their traffic. Given that, would they accept an operator's portal? How long do you think it would take for someone to publish a story about the risk to privacy?

This doesn’t mean that a super-app strategy would be bad, though, just that it might be harder for a network operator to make it good. If OTTs have it easier, that would mean that competing with them to create a super-app would be an uphill fight, unless there was something an operator could leverage. Is there? To answer that, look at the cable players like Comcast.

Comcast has a sort-of-super-app already. It has an ecosystem of services built around the Xfinity video property, and it adds security and other optional features too. Clearly an ISP can be a super-app player, but most operators don't have a strong video story to build on or around. In the US, AT&T tried and ended up spinning off its content assets. Verizon never really tried, though it did resell streaming video. There may be a signal here: cable companies, because they were originally built on video and evolved to broadband, have an easier super-app play than telcos, who started with connectivity and nothing more. The implication is that if you aren't a content player out of the box, you have little chance of buying or building yourself into the space.

Could security be a play? Certainly in the business space, but operator attempts to sell consumer security services have met with only limited success. There are some indications that today's ransomware world could introduce an opportunity, but how would the operator really play in ransomware detection? That would almost surely mean getting into the secure email business, which is another space that's seen only limited success.

Home monitoring was another initiative that some operators tried, but here again the service has had only limited traction. The problem with monitoring is that there’s installation involved, and most operators have a higher unit cost of outside support labor than competing security companies. Could operators send the consumer something for self-install? Again, there are already players who do that, including both Amazon and Google.

As interesting as I think super-app revenue might be, and as interested as operators might be in securing some, I think the simple truth is that it’s too late. Operators were at one time touching their customers in a real sense. You had installers, outside plant people. To manage the decline in revenue per bit, operators initiated operations cost management practices, many of which focused on getting rid of those field people. It might have been smarter to have figured out what else could be done with them, but they’re gone now and it’s too late.

Is there no hope? The only thing I can think of at this point is to build on the support-portal concept. A network connection is a user’s window on the world, but also the world’s window into the user’s facility. A lot of diagnostic work could be done by an operator, on the network connection the operator provides (which is a big part of their current portal thinking), on the local network, and even on network devices. However, this sort of thing has minimal chance of generating significant consumer revenue, and consumers are the source of bucks that operators need to tap.

Which likely means that operators have waited too long to make a go of the super-app concept. That's the downside of their reluctance to move beyond connectivity services, even in the face of what should have been a stark reality back in the '90s. When a "service" like Internet access emerges, it creates what's in effect a more generalized dialtone. Other services can then ride on it, and that shifts the access provider into the role of supplying an underlying and largely invisible pipe. People pay for services they recognize.

This is why I think AT&T is smart in thinking about what sort of facilitating features they could add. If you don’t want to climb the value stack all the way to the retail summit, then you need to at least get yourself to base camp.

Operators are Discovering Orchestration

One very interesting thing I’ve started seeing is an increased operator focus on “service orchestration”, a topic that’s dear to me but has been as much tire-kicking as real commitment on the operator side. What’s behind this, and where might it be taking things?

I interact with a lot of operator people, but in most cases their service orchestration or lifecycle automation questions have been more in the nature of fact-finding. In the last month, I've had 11 operators open discussions on specific needs and projects, which is more than happened in the entire year prior. Clearly something new is going on, and (without compromising anyone's confidences) I can get an idea of what that something is.

There are two drivers of interest, at least within my Group of Eleven. First, 5G deployment, which introduces an explicit requirement for function hosting. Second, service outages (often massive, like that of Rogers in Canada) that suggest that traditional operations centers may be too prone to errors. While both of these drivers are directly connected to the sudden interest, the kind of project each is likely to drive (and when it will happen) is quite different.

Let's start with today's challenge, as I think it's reflected in the inquiries. Networks are cooperative systems in which individual elements (devices, in most cases) are expected to collectively behave to support a set of user services. These devices have to be parameterized and managed to ensure service availability. That requirement has been getting more complex because of the growing dominance of consumer broadband services in the service mix. There are a lot more consumers, obviously, and consumers are unsophisticated with regard to network management. Add to that operators' shifting of service management tasks to portals and more automated systems, and the result is complexity, which always generates errors and raises operations costs. Thus, we could expect some interest in at least service lifecycle automation even if nothing were happening in the hosting area.
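
To make "service lifecycle automation" a bit more concrete, here's a minimal sketch in Python (all names invented for illustration, not any vendor's tool) of the reconciliation idea that underlies most lifecycle automation: compare the desired state of each service element with its observed state, and remediate automatically rather than through an operations center.

# Minimal, illustrative reconciliation loop for service lifecycle automation.
# All names here are hypothetical; real systems add event handling, escalation,
# and far richer state models.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    desired_state: str   # e.g. "active"
    observed_state: str  # e.g. "active", "degraded", "failed"

def remediate(element: Element) -> None:
    # Placeholder for automated actions: redeploy, reroute, reconfigure.
    print(f"remediating {element.name}: "
          f"{element.observed_state} -> {element.desired_state}")
    element.observed_state = element.desired_state

def reconcile(service: list[Element]) -> None:
    """One pass of lifecycle automation: fix anything that drifted."""
    for element in service:
        if element.observed_state != element.desired_state:
            remediate(element)

# A tiny "service" of three elements, one of which has degraded.
service = [
    Element("access-link", "active", "active"),
    Element("edge-router", "active", "degraded"),
    Element("vpn-gateway", "active", "active"),
]
reconcile(service)

The point of the sketch is the loop, not the remediation; the more of the remediation you can express this way, the less the operations center has to touch.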

Something is happening, though. 5G, as I’ve noted, introduces a specific need for hosted functions, and at least half of those 11 operators believe that future service evolution will broadly mandate hosted functions. That means that they see function hosting likely expanding beyond 5G, and that sub-group has a slightly different slant on orchestration.

What I'm seeing in the 5G-specific inquiries is a concern about the ETSI orchestration and management process overall. There seems to be some discomfort with the ETSI Management and Orchestration (MANO) implementation, specifically with the fact that it's disconnected from Kubernetes, which more and more operators recognize as the direction cloud software overall has already taken. The obvious solution to this is something like Nephio, which would frame a function-hosting architecture around Kubernetes. I think operator interest in Nephio is emerging from this group, in part because some operator members of the project are among my Group of Eleven.

All this is good, but IMHO there is still a problem. 5G raised questions about MANO's credibility in comparison with something like Kubernetes, a cloud approach, but MANO's problems are potentially deeper, and it's too early to know whether Nephio is going to address them all.

MANO is a device-centric approach, by which I mean that the goal of MANO is to deploy a network function that looks like a device and can be managed using traditional device-management tools, both at the NMS level and at the OSS/BSS level. This approach is fine where no issues are created by presuming that a network function is a virtual form of a device, but that's not the case even for 5G functions, which were never associated with a physical appliance at all. For the sub-group of my Eleven who were already looking at functions beyond 5G, this is a problem already, but most of the 5G-specific sub-group are still getting their arms around the overall MANO approach, and don't see the potential risks, of which two seem to dominate.

One such risk is the "ships-in-the-night" management model. Higher-level operations (NMS/OSS/BSS) don't "know" about virtual functions at all, and lower-level operations (MANO) don't know about services at all. The effective integration of management and orchestration activities in both areas depends on creating a kind of shim that ties things together, which in the NFV model is the VNF Manager or VNFM. The VNFM is what makes a virtual function look like a device, by mirroring the "Physical Network Function" or PNF management model. There are a lot of potential slip-ups in this process, and it surely complicates the onboarding of functions.
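
To picture what that shim has to do, here's a purely conceptual Python sketch; the class and field names are mine, not ETSI's, and a real VNFM obviously does far more than map a status field.

# Conceptual sketch of the VNFM "shim" idea: make a hosted function look like
# a device. Names and fields here are hypothetical, not the ETSI interfaces.

from __future__ import annotations

class HostedFunction:
    """Cloud-side view: what the hosting layer actually knows about."""
    def __init__(self, instance_id: str, pod_status: str, restarts: int):
        self.instance_id = instance_id
        self.pod_status = pod_status      # e.g. "Running", "CrashLoopBackOff"
        self.restarts = restarts

class DeviceView:
    """Device-style view: what NMS/OSS/BSS management expects to see."""
    def __init__(self, device_name: str, oper_status: str, alarm: str | None):
        self.device_name = device_name
        self.oper_status = oper_status    # "up" or "down"
        self.alarm = alarm

def vnfm_shim(fn: HostedFunction) -> DeviceView:
    """Map cloud-native status onto a PNF-like management model."""
    if fn.pod_status == "Running":
        return DeviceView(fn.instance_id, "up", None)
    return DeviceView(fn.instance_id, "down",
                      f"function unavailable ({fn.pod_status}, "
                      f"{fn.restarts} restarts)")

# Every mapping like this is a place where the two "ships in the night" can
# disagree, which is part of why onboarding functions is so error-prone.
print(vars(vnfm_shim(HostedFunction("upf-01", "CrashLoopBackOff", 7))))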

Another risk is the "composed function" problem. Recall that NFV presumes that all VNFs are created by virtualizing a real device. That means that VNFs are always "atomic", and that a set of cooperating VNFs has to be treated as a network of devices. In 5G, this problem is (so far) minimized by the fact that the 3GPP sort-of-conceptualized even the VNF aspects of the 5G architecture as a device model. If we start to think more broadly, we can see that even 5G elements might benefit from being composed from lower-level ones: the "cloud-native" or "microservice" approach. Get beyond 5G and you surely need that capability.
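
Here's a hypothetical sketch of the distinction, not drawn from any standard's data model: an atomic VNF deploys as one unit, while a composed function is assembled from components that can be scaled and replaced independently.

# Hypothetical data model contrasting an "atomic" VNF with a composed,
# cloud-native function. Not an ETSI or 3GPP structure, just an illustration.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    image: str

@dataclass
class AtomicVNF:
    # The NFV assumption: one function = one virtualized device image,
    # deployed and managed as a whole.
    name: str
    image: str

@dataclass
class ComposedFunction:
    # The cloud-native alternative: assembled from microservices that can be
    # scaled, updated, and replaced independently.
    name: str
    components: list[Component] = field(default_factory=list)

# A device-style firewall VNF...
fw = AtomicVNF("firewall", "vendor/fw-vm:1.0")

# ...versus a user-plane function composed from smaller pieces.
upf = ComposedFunction("upf", [
    Component("packet-processing", "upf/pp:2.1"),
    Component("session-state", "upf/state:2.1"),
    Component("charging-export", "upf/charging:2.1"),
])

print(f"{fw.name}: 1 deployable unit")
print(f"{upf.name}: {len(upf.components)} independently scalable components")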

There’s a third potential problem that seems to cross all the potential solutions to function hosting, which is addressing. What you host functions on, the resource pool, has to be addressable. What you host there, the functions themselves, also need to be addressable. You don’t want your functions to be able to address the resource pool directly, nor do you want them to be able to address each other if they’re not part of the same service or at least part of a “service community” that represents the set of functions that are related to each other by ownership and cooperative properties. Finally, you probably don’t want functions or resource pools to be addressable by service users.
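
One way to visualize those rules is as a simple reachability policy. This is a sketch under my own assumptions about roles and communities, not anything defined by NFV or Kubernetes.

# Illustrative addressability policy for function hosting. The roles and the
# rules here are my assumptions for this sketch, not taken from any standard.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    role: str               # "resource", "function", or "user"
    community: str | None   # service community for functions, else None

def may_address(src: Endpoint, dst: Endpoint) -> bool:
    # Service users never reach functions or the resource pool directly.
    if src.role == "user" or dst.role == "user":
        return False
    # Functions never address the resource pool directly.
    if src.role == "function" and dst.role == "resource":
        return False
    # Functions reach each other only within the same service community.
    if src.role == "function" and dst.role == "function":
        return src.community == dst.community
    # Resource-level (hosting) management is otherwise unconstrained here.
    return True

a = Endpoint("upf-1", "function", "svc-A")
b = Endpoint("smf-1", "function", "svc-A")
c = Endpoint("other-fn", "function", "svc-B")
u = Endpoint("subscriber", "user", None)

print(may_address(a, b))  # True: same service community
print(may_address(a, c))  # False: different communities
print(may_address(u, a))  # False: users don't see functions directly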

NFV never really addressed this issue; it never set up a strategy or policy on how addresses for functions would be assigned and controlled. Kubernetes presumes that nodes (what you host on) and pods (the functions) are assigned addresses from specific CIDR blocks, and pod addressing is usually based on some virtual-network framework. Thus, there are two levels of explicit "subnetworking", node and pod. The service (application) level creates a third, and to make service elements visible in this address space, you have to expose them explicitly.
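
A toy example using Python's ipaddress module may make the layering clearer. The prefixes are invented (though they resemble common cluster defaults); the point is that node, pod, and service addresses come from separate ranges, and a function is visible at the service layer only if it's explicitly exposed.

# Toy illustration of the three address layers. The CIDR blocks are invented;
# real clusters set their own node, pod, and service ranges, usually through
# cluster configuration and the CNI plugin.

import ipaddress
from itertools import islice

node_net = ipaddress.ip_network("10.0.0.0/24")      # where functions are hosted
pod_net = ipaddress.ip_network("10.244.0.0/16")     # the functions themselves
service_net = ipaddress.ip_network("10.96.0.0/12")  # explicitly exposed elements

nodes = list(islice(node_net.hosts(), 2))
pods = list(islice(pod_net.hosts(), 3))

# Nothing in pod_net is reachable at the service level unless it's exposed
# through a service address; exposure is an explicit act, and easy to lose
# track of when there are many services.
exposed = {"upf-frontend": next(service_net.hosts())}

print("nodes:   ", [str(n) for n in nodes])
print("pods:    ", [str(p) for p in pods])
print("exposed: ", {k: str(v) for k, v in exposed.items()})
print("pods visible at the service layer?",
      any(p in service_net for p in pods))  # False: different layer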

All this exposing is great, but easy to lose track of. Some cloud providers (Amazon, for example) have tools for managing addresses in public cloud infrastructure, and something like that could be adapted to managing addresses for function hosting. But you can see from my comments that where services are made up of a combination of real devices and virtual functions, there may be situations where device management at the network level and function management at the hosting level have to be coordinated. In a network made up of physical devices and hosted virtual functions, the top layer of my three subnetworks has to be exposed to the same address space the real devices are using, and management of the real-device properties of those functions has to be done within that address space. The hosted piece, the three-layer subnetwork structure, starts with that space and dives down to the pod/function-subnetwork and node-network levels.
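
As a final hedged sketch (invented names and addresses again), the coordination amounts to making the exposed, device-like face of each hosted function live in the same management address space as the real devices, while everything underneath stays private to the hosting side.

# Sketch of the coordination problem: real devices and the *exposed* face of
# hosted functions share one management address space, while pod and node
# addresses stay private to the hosting side. All addresses are invented.

import ipaddress

mgmt_net = ipaddress.ip_network("192.168.100.0/24")  # shared device-management space
pod_net = ipaddress.ip_network("10.244.0.0/16")      # private to the hosting layers

inventory = {
    # A real device, managed directly in the management space.
    "core-router-1": ipaddress.ip_address("192.168.100.10"),
    # A hosted function: an exposed management address plus its hidden pod address.
    "virtual-firewall": {
        "exposed": ipaddress.ip_address("192.168.100.20"),
        "pod": ipaddress.ip_address("10.244.3.17"),
    },
}

def management_address(name):
    """What the NMS/OSS sees: one consistent address space for everything."""
    entry = inventory[name]
    return entry if isinstance(entry, ipaddress.IPv4Address) else entry["exposed"]

for name in inventory:
    addr = management_address(name)
    assert addr in mgmt_net, f"{name} is not visible to device management"
    print(f"{name}: managed at {addr}")

hidden = inventory["virtual-firewall"]["pod"]
print("pod address visible to device management?", hidden in mgmt_net)  # False
assert hidden in pod_net  # it lives in the hosting layer's private space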

There’s an underlying (and usually ignored) lesson here for operators now getting serious about orchestration, and for all of us in networking. You can’t orchestrate something without understanding both the hosting and connection context, and there are differences between those things in the world of applications and the cloud, versus the world of services. The good news is that I’m seeing, for the first time, some serious thought being given to this issue, and that’s the first step toward identifying and solving problems.