5G Winners and Losers: What Differentiates Them?

Every new technology creates winners and losers, and 5G is no exception. Light Reading talks about this, specifically in terms of mobile operators, but I think we need to look a bit harder at this issue. 5G is important to operators and vendors alike, after all, and it’s also important to use 5G as an example of how a hotly promoted technology does in the real world these days.

The LR article talks about Verizon and T-Mobile as the exemplars of losing and winning, respectively. The story seems to link the Verizon problem with the slowdown the mobile industry faced after a good 2021. The implication, necessarily, is that T-Mobile somehow avoided that problem, and to me that raises the question of why that should be. After all, Verizon is a big telco with a great home territory and an opportunity to create symbiosis between wireline and wireless services. There has to be more to it.

One obvious truth is that wireless has been, for decades, more profitable than wireline, and T-Mobile is a wireless operator where Verizon is both. While Verizon’s territory has higher demand density than rival AT&T’s, and is in fact comparable to that of EU telcos, the fact remains that return on infrastructure in wireline is under considerable pressure. And, unlike AT&T, Verizon hasn’t been on the leading edge of technology modernization for their broadband services overall. It’s my view that this combination has limited the value of Verizon’s dual-model broadband market position.

Another point is that Verizon has tried harder than perhaps any other operator in the US to promote the notion that 5G services are differentiable based on speed. Their push for 5G as something that would matter a lot to consumers set them up to exploit the early 5G hype, which peaked in 2021, and they enjoyed a nice pop based on that exploitation. However, in the world of tech media, every technology is first hyped to the sky and then faces a stark choice. You either have to redefine it so it looks like it’s met its hype-set goals, or you have to turn on it. 5G suffered the latter fate in late 2021 into this year, and so Verizon was vulnerable to the shift. T-Mobile never pushed speed that way; it simply said it had 5G, and that kept it out of the artificial 2021 upside and the real 2022 downside of 5G in the media.

Both T-Mobile and Verizon have 5G home Internet options, and it’s hard to say which of the two is doing better based on released financial data, but the stories I get suggest that T-Mobile is ahead in this space, and for sure they have a broader coverage map (estimated 120 million homes) versus Verizon (20 million homes). The broader availability of T-Mobile helps them justify a more aggressive ad campaign nationally, which of course then helps them sustain their coverage lead. However, Verizon’s home Internet is almost twice as fast, based on actual user experiences. T-Mobile also lacks any wireline broadband option that would compete with their 5G home Internet service, something that may also make their ad campaign more aggressive.

One question this raises is whether a strong 5G-to-the-home option could be the best answer for an operator who wants both mobile and fixed broadband. This question is particularly important for smaller countries and for those that depend on tourism. Should they consider true wireline, meaning fiber connectivity, for homes and businesses, or should they go bold and try to do everything with 5G, including millimeter-wave technology for home and business? That move could save them a lot of money.

Another question that may be more pressing to the Tier One operators is what this might mean for business 5G service and 5G Core, including network slicing. Many operators (including Verizon) have hyped up the notion of IoT applications of 5G, meaning sensors connected via 5G. That strategy hasn’t gotten broad traction (or, frankly, much traction) because the great majority of IoT applications use fixed installations for their devices, and WiFi, a custom protocol like Z-Wave or Zigbee, or even wiring will serve at least as well and cost less. Network slicing and private 5G have also been pushed to the business community, with highly publicized but very minimal success. In fact, my contacts tell me that the majority of private 5G really going in is simply modernizing private LTE.

Anyone who looked realistically at 5G from the first (as I’ve tried to do) would conclude that it was going to succeed as an orderly evolution of wireless, which is what it really was. There was never any good chance that it would open new markets in the near term, meaning that new stuff wouldn’t drive 5G adoption and that operators couldn’t expect to earn new 5G revenues. The Verizon/T-Mobile comparison in the Light Reading article, to me, demonstrates that operators who didn’t depend much on the hype did better in the long run.

The interesting thing is that there almost certainly are new applications that would require or at least benefit from 5G, and that these applications could boost 5G operator revenues. Why aren’t we seeing anything about this? Two reasons.

First, the media process is always driven by the insatiable desire for clicks and ad eyeballs. Bulls**t has no inertia, unlike real markets, so there’s a tendency for the media to jump way out in front of any tech trend because it’s a new path to those desired clicks and eyeballs. Often the slant that’s taken early on is a slant that’s easy and sensational, which is rarely the case with real-world stuff. Thus, when the right one comes along, application-wise, it sounds pedestrian compared to the hype, so it’s not news.

Second, networking is still trying to get over the Problem of the Twentieth Century, which was that we had more stuff to deliver than we had effective delivery mechanisms. It’s not that it’s trying to solve that problem—we’ve largely solved it—but that it’s still behaving as though the problem existed. When the network was the limiting factor, network technology unlocked a lot of pent-up stuff and never had to give a single thought to how to develop an application. Now it does, and the industry still clings to that Field of Dreams.

These two factors are why hype is destructive to value, and they are both operating in some form or another in pretty much all of tech. We live in an age of ecosystems but we think and plan like we’re in an age of products. No vendor, no operator, can hope to succeed on a large scale without a technology advance on a fairly broad front, but few if any can get their heads and wallets around such an advance. Tech needs to look forward enough, and broadly enough, to secure its own optimum future.

Where is Meta Taking the Metaverse Now?

Meta’s quarter missed across the board. This is its second quarter of issues, and its stock has been declining steadily, to the point where it’s lost about half its value. Obviously this isn’t a good thing for Meta, but the big question is what it might mean to the OTT space, the metaverse, and the tech markets overall.

One essential truth here is that social media may be social, but society is fickle. The whole social media thing is about being in touch with pop culture, which changes rapidly. Not only are the users gadflies, but any successful social platform also has a community of users who are happy to complain about things they don’t like, which serves as the source for new platforms that fix those issues. We’ve had social-media failures before (remember Second Life?) and we’ll continue to have them, because that’s the nature of social media.

Regulators have no love for social media either. Meta’s efforts to use its market capital to buy up players have met with regulatory scorn; the FTC has just sued to block Meta’s acquisition of the maker of the Supernatural fitness app. So think about it: your own space is doomed to social death, you can’t use your current gains to buy into adjacent areas…not a happy picture.

Meta was smart in that it realized this, which is why it jumped so aggressively on the metaverse concept. The problem for Meta there is that it’s essentially an all-or-nothing bet on a concept that’s going to take considerable time, investment, and luck to bring to maturity. Meanwhile, to avoid Street condemnation, they have to tell the world what they intend, which means that others (like Microsoft) are free to jump out and do their own thing in competition. At the same time, social media is changing as it always does, and not to Meta’s benefit.

How did Meta let things come to this? I think that like most companies, they’ve had their eyes on their feet instead of the horizon. To be fair, the regulatory shift that Sarbanes-Oxley represented shifted companies’ focus from longer-term to this-quarter, which sure makes watching your feet look smart. The problem is that this view not only disguised the risk something like COVID represented, it disguised what a recovery from that risk would mean.

Facebook is a more immersive form of social media, compared to something like TikTok, which Meta admits is a major threat to it in the near term. Meta introduced Reels in Instagram to shift its focus to compete better, but if you think about it, they should have been planning an evolution as soon as COVID hit. People sitting at home under quarantine conditions use social media one way, but those same people use it another way when they’re back out in the world in what’s been called “relief” behavior.

This too shall pass away, as the old saying goes. Facebook succeeded largely because it created a trend, and now it’s in a position where responding to others’ initiatives is critical. By the time Facebook makes Reels a successful TikTok competitor, what will social media look like? Just a quarter ago, we might have said “the metaverse”, and that still might be true, but the problem is that short-term Wall Street pressure is now causing Meta to short-change its metaverse.

The current Meta advertising on the metaverse is focusing not on a social experience but on a personal one. Their vision of the metaverse has always depended on virtual reality, which means that their Reality unit (where the metaverse lives) has necessarily been looking at how to make a metaverse look better. A social metaverse needs to look compelling, but it also has to be realistic in the way that avatars that represent people can interact. That, as I’ve noted in past blogs, demands lower latency in processing a collective vision of reality in which the collected users (via their avatars) can live and move. Otherwise, avatars will “lag” and any interaction will be jarring rather than representative of real personal interactions. The problem, obviously, is that improving latency to make interaction realistic means things like edge computing, meshing of edge sites, low-latency access, and so forth. All of these things are possible, and many might be within Meta’s ability to drive forward, but not when their profits are under near-term pressure.

A metaverse where kids can learn about dinosaurs, the rain forest, or endangered species is surely helpful from an educational perspective. Another one where surgeons can train in virtual reality to hone their skills is helpful in improving surgical outcomes. Is either a big money-maker? That’s the challenge here, and to face it Meta is making the metaverse into a kind of super video game platform with evolutionary capability.

Many, if not most, of the metaverse applications Meta now seems to be promoting could be delivered via a gaming platform, if it were augmented with high-quality virtual-reality capability. Microsoft’s vaunted counter to Meta’s metaverse move was the acquisition of Activision Blizzard, which of course is a gaming platform. That raises the question of whether Meta can meet any short-term metaverse goals without having a gaming franchise of its own to leverage. If not, then the questions are how fast the “evolutionary” capability could be delivered, and what form it could take. The two are clearly related.

To evolve the metaverse to the vision that was first laid out (the “social metaverse” that Meta implied with its announced restrictions on the personal space of avatars) would require edge computing and low-latency meshing of the edge locations, or confining the users to locales that were able to assure low latency within them. This would be not only an evolution of the metaverse as it seems to be constituted today, but an evolution in social media, favoring the creation of virtual communities that mirror the ability to interact regularly.

Among young people, most social-media interactions are with others they know in the real world, and usually see regularly. The obvious question then is whether a “metaverse” for those people is even interesting. Remember, this is the group that’s made TikTok a success, and they tend to use it for short interactions when they’re either physically apart or want to establish a kind of private back-channel to a shared real-world activity. Think people at a party chatting about another attendee.

This raises two critical questions about Meta’s future. First, is its current challenge due to the fact that social media is evolving away from the longer-term immersion that Facebook represented? If it is, then the metaverse is the wrong answer. Second, does Meta already know this, and is it now trying to repurpose its metaverse initiative to fit a different market niche?

It also raises a question for the metaverse community, especially the startups. If the metaverse is just a super-game, then what real opportunities does it open? Startups, because of their VC financing, are notorious for wanting a quick buck, and how many VCs who backed the metaverse model will believe that can be achieved now? Worse yet, how many of those VCs would have backed a startup whose payoff depended on cracking the gaming space? Add to this the fact that some reports are saying that the current situation with tech VCs is similar to a stock market crash, and you have some funding risks to consider.

The biggest risk, of course, is that a big shift from Meta will be broadly interpreted as an indicator that the metaverse is failing as a concept. The truth is that we’ve not moved far enough along in laying out the ecosystem it will necessarily become to understand even what makes it up. We don’t know what technologies will be critical, or what the ROI in various metaverse applications will be. The real danger is that we may delay answering those questions, and thus delay the realization of what I think will prove to be an important, even critical, concept.

A Software-Centric View of Service Orchestration and Automation

What would an intent-modeled service lifecycle automation system look like? I’ve often talked about service modeling as an element in such a system, but what about the actual software? A service model without the related software is the classic “day without sunshine”, and in the early 2000s, an operator group in Europe actually pointed out that one modeling initiative seemed useful but might not be suitable for implementation. I think we’re due to talk about that issue now.

The software framework for service modeling was first described (as far as I have been able to determine) in the TMF work on the “NGOSS Contract” (NGOSS standing for “Next-Generation OSS”). The fundamental notion of that work was that a service contract would act as the steering mechanism for lifecycle events. When an event occurred, the model would tell software where that event had to be steered, based on traditional state/event (or graph) theory. Making this work requires two software elements.

The first element is explicit in the NGOSS Contract vision; it’s the set of destination processes to which events are steered. Implicit in the approach is the notion that these processes are functions or microservices that would receive, via the contract, the combination of event data (from the original event) and contract data (from the contract). Thus, the processes have everything they need to run. Also implicit is the idea that the process would return a “next state” indication and potentially a refresh of some contract data. This all combines to mean that you can spin up a destination process wherever and whenever it’s needed, as many as needed. It’s fully scalable.
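Purely as illustration (the names and structures here are invented for this blog, not drawn from the TMF material or from any real implementation), a destination process in this model could be a stateless function that takes the event plus the contract data and hands back a next-state indication and any contract updates:

```python
# Hypothetical sketch of an NGOSS-Contract-style destination process.
# It is stateless: everything it needs arrives with the call, and everything
# it produces is returned, so any number of instances can run in parallel.

from dataclasses import dataclass, field


@dataclass
class LifecycleEvent:
    event_type: str          # e.g. "ACTIVATE", "SLA_VIOLATION"
    source: str              # the model element that raised the event
    data: dict = field(default_factory=dict)


@dataclass
class ProcessResult:
    next_state: str          # state the model element should move to
    contract_updates: dict   # data to be merged back into the contract


def activate_element(event: LifecycleEvent, contract_data: dict) -> ProcessResult:
    """A destination process: a pure function of (event, contract data)."""
    # Everything needed arrives with the call; nothing is kept locally.
    committed = {"committed_capacity": contract_data.get("requested_capacity", 0)}
    return ProcessResult(next_state="ACTIVE", contract_updates=committed)


if __name__ == "__main__":
    ev = LifecycleEvent("ACTIVATE", source="vpn-core")
    print(activate_element(ev, {"requested_capacity": 100}))
```

Because nothing persists inside the process itself, copies can be spun up and torn down as events arrive, which is the scalability point.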

The second element is implicit in the vision; there has to be software that makes the event-to-process connection via the contract. It’s this second element that creates the complexity in the implementation of a service lifecycle automation software system. To try to cut through the complexity, let’s start with what I’ll call the “ONAP Approach.”

The ONAP model is an orchestration system that is driven by events. The architecture is inherently monolithic, in that while there might be multiple instances of ONAP, the presumption is that the instances are controlling independent domains. Events generated within a domain are queued and processed, and to the extent that ONAP would admit to service modeling (which is a minimal admission at best, in my view), that monolith would then use a model to invoke a process, which you’ll recall was our first software element.

The obvious problem with this is that it doesn’t scale. If there are multiple events, the single process will have to queue them for handling, and thus it’s possible that the central event process would be overwhelmed by an event flood of the type that could occur with a massive failure.

As work accumulates, the state of the system as reflected by the set of generated events evolves (which is why the events were generated) but the state of the system as known by the central process will be whatever it was at the completion of the last event it processed. You could have Event T=10:47 sitting in the queue waiting to inform you that Resource C had failed, just as your processing of Event T=10:46 decides that Resource C must now be used to substitute for the Resource B that event reported had failed. If there is a single event queue and central process per administrative domain, then all services will compete for that single resource set, and the chances of a delay that creates a state disconnect between the real network and the process expands.

One possible solution is to have an instance of a process be associated with each service. I looked at that in my first attempt to do an implementation of NGOSS Contract for that group of EU operators, and while it was useful for high-value business services, it required that you dedicate a contract handler for each service, no matter how valuable it was to the operator. That limits its utility, obviously, so the next step was to see if you could make the central contract process itself into a function/microservice, something you could instantiate when you needed it.

My approach to this was to think of a contract like an order to build a bike. If you have a blueprint for Bike A, and your manufacturing facility has the ability to follow that blueprint, you can create an instance of Bike A from an order, right? So a “Service Factory” (to use my name) has the ability to fill a given set of contract orders. Give an instance of that factory an order, and it could fill it. Put in software process terms, I could spin up an instance of a compatible Service Factory when I received a lifecycle event, give it the event and the contract, and it could steer the event to the process. Remember, the contract is where all the data is stored, so there is no need for persistence of information within a Service Factory instance.
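Here’s a minimal sketch of how such a factory might look, assuming the contract carries its own state and the process map is registered per deployment. The names are hypothetical and this is not my actual implementation, just the shape of the idea:

```python
# Hypothetical Service Factory sketch: instantiated on demand for one event,
# steering that event to a process based on the contract's current state.
# All data lives in the contract, so the factory holds no state of its own.

def handle_activate(event, contract):
    contract["state"] = "ACTIVE"
    return "ACTIVE"

def handle_fault(event, contract):
    contract["state"] = "FAULT"
    return "FAULT"

# A deployment-registered (state, event) -> process table; names illustrative.
PROCESS_MAP = {
    ("ORDERED", "ACTIVATE"): handle_activate,
    ("ACTIVE", "SLA_VIOLATION"): handle_fault,
}


class ServiceFactory:
    """Spun up when an event arrives; steers it using the contract."""

    def handle(self, event: dict, contract: dict) -> str:
        key = (contract["state"], event["type"])
        process = PROCESS_MAP.get(key)
        if process is None:
            return contract["state"]           # event means nothing in this state
        return process(event, contract)         # process returns the next state


if __name__ == "__main__":
    contract = {"service_id": "svc-001", "state": "ORDERED"}
    factory = ServiceFactory()                  # an instance just for this event
    print(factory.handle({"type": "ACTIVATE"}, contract))  # -> ACTIVE
```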

Then, let’s suppose that our bikes are built by a flexible manufacturing system that can follow any blueprint. A single Service Factory model can now be instantiated for anything. Putting this in software process terms, I can create a central “factory” process that can be instantiated any time I create an event and need to map it to a process. All I have to do is associate the event with a contract, but how does that happen?

In my approach, it happens because, with two exceptions, all contract events are generated by contract processes. A service model is a hierarchy of intent model elements. Each element has a “parent” and (potentially) multiple “children”. Model elements can pass events only up or down a level within that hierarchy. Because it’s the event-processing elements that generate everything, including events to their hierarchy partners, and because those processing elements are always given the associated contract’s data, events pass within a single contract. There’s no need to identify which contract an event belongs to.
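To make the hierarchy concrete, here’s a small, purely illustrative sketch (invented names, not a reference design) of model elements that can pass events only one level up or down, with the contract data carried along on every call:

```python
# Hypothetical sketch of a service model hierarchy in which events can only
# move one level up (to the parent) or one level down (to the children).

class ModelElement:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def send_up(self, event, contract):
        if self.parent:
            self.parent.on_event(event, contract, origin=self)

    def send_down(self, event, contract):
        for child in self.children:
            child.on_event(event, contract, origin=self)

    def on_event(self, event, contract, origin):
        # A real element would steer the event via its state/event table;
        # here we just record where the event went.
        print(f"{self.name} received {event} from {origin.name}")


if __name__ == "__main__":
    service = ModelElement("service")
    access = ModelElement("access", parent=service)
    core = ModelElement("core", parent=service)
    contract = {"service_id": "svc-001"}
    access.send_up({"type": "SLA_VIOLATION"}, contract)   # access -> service only
    service.send_down({"type": "TEARDOWN"}, contract)     # service -> access, core
```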

It would be important to deal with collision risk on updating contract data from event-linked processes. You could serialize the event handling per service at the queuing level, you could send a process only the data associated with the model object that the event is associated with, you could provide a locking mechanism…the list goes on. My preferred approach was to say that an event-linked process could only alter the data for the part of the service model it was associated with.
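One way to picture that preferred approach, as a sketch under my own simplifying assumptions: hand each event-linked process only the slice of contract data belonging to its model element, and merge back only that slice, so concurrent events on different elements can’t collide.

```python
# Hypothetical sketch of per-element scoping of contract updates: a process
# sees, and can modify, only the data for the model element it serves.

def run_scoped_process(contract: dict, element: str, process) -> None:
    element_data = dict(contract["elements"][element])   # copy of this slice only
    updates = process(element_data)                       # process works on its slice
    contract["elements"][element].update(updates)         # merge back only that slice


def access_fault_process(data: dict) -> dict:
    # The process can only return updates for its own element's data.
    return {"state": "FAULT", "last_event": "SLA_VIOLATION"}


if __name__ == "__main__":
    contract = {"elements": {"access": {"state": "ACTIVE"}, "core": {"state": "ACTIVE"}}}
    run_scoped_process(contract, "access", access_fault_process)
    print(contract["elements"])   # "core" is untouched
```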

What about those exceptions, though? The first exception is the events generated from the top, meaning service-level events like adding, removing, or changing a service. Obviously these events are associated with a contract by the higher-level customer or customer-service-rep portal, so they have what they need. The second exception is a bit more complicated.

What happens when something fails? This would be an event generated at the bottom, and obviously there’s not necessarily a clear association between a resource and the service(s) that depend on it. My approach is to say that there is a clear separation between what I’ll call a “service” object and what’s a “resource” object. The former are part of the service model and pass events among themselves. The latter represents a resource commitment, so it sits at the bottom of any resource-domain structure, where the rubber meets the road. Each object type has an SLA that it commits to, based on the fact that each is an intent model. Service object states with respect to the SLA would be determined by the events passed by subordinate service objects; if I have three sub-objects, I’m meeting my SLA if they are.
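That rollup rule is easy to picture in code. A hedged sketch, with invented names, in which a service object’s SLA state is simply the conjunction of its subordinates’ states:

```python
# Hypothetical sketch of the "I'm meeting my SLA if my subordinates are" rule.

class ServiceObject:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.local_sla_ok = True    # leaf service objects are fed by resource events

    def sla_ok(self) -> bool:
        if not self.children:
            return self.local_sla_ok
        # A parent's SLA state is determined entirely by its subordinates.
        return all(child.sla_ok() for child in self.children)


if __name__ == "__main__":
    access = ServiceObject("access")
    core = ServiceObject("core")
    cloud = ServiceObject("cloud")
    service = ServiceObject("vpn-service", children=[access, core, cloud])
    print(service.sla_ok())     # True: all three subordinates are fine
    core.local_sla_ok = False   # a subordinate reports an SLA violation
    print(service.sla_ok())     # False: the parent no longer meets its SLA
```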

Passing events within a service object hierarchy means having some central mechanism for posting events. Logically, this could be associated with the service contract itself, in the form of a pointer set to define where to push and to pop the queue. That way you could use a central event handler, redundant handlers, area- or service-type-specific handlers, or even a dedicated per-service handler.

Resource objects can’t assume that mechanism, because resource-to-service bindings are not exclusive and may not even be visible unless we make them so. Thus, a resource object has to be capable of analyzing its represented resources to establish whether they’re meeting their SLA. In my sample implementation, this object had only a timer event that kicked it off (it was a daemon process in UNIX/Linux terms). When it ran, it checked the state of the resources it represented (their MIBs, perhaps) and generated an SLA violation event to its superior object if that was indicated.
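Here’s a purely illustrative sketch of that timer-driven resource object (this is not the sample implementation itself; the names and the stand-in for a MIB read are made up): it wakes on a timer, checks the resources it represents, and raises an SLA-violation event to its superior only when one is out of bounds.

```python
# Hypothetical sketch of a timer-driven resource object: on each tick it
# examines its resources and, if the committed SLA is not being met, emits
# an SLA_VIOLATION event to its superior service object.

import time


def read_utilization(resource_id: str) -> float:
    """Stand-in for reading a device MIB or telemetry counter."""
    return 0.97 if resource_id == "link-7" else 0.40


class ResourceObject:
    def __init__(self, resources, superior, threshold=0.9):
        self.resources = resources        # resource IDs this object represents
        self.superior = superior          # callback into the parent service object
        self.threshold = threshold        # SLA commitment, here a utilization cap

    def on_timer(self):
        for rid in self.resources:
            if read_utilization(rid) > self.threshold:
                self.superior({"type": "SLA_VIOLATION", "resource": rid})


if __name__ == "__main__":
    def superior(event):                  # the parent just logs the event here
        print("event to superior:", event)

    ro = ResourceObject(["link-3", "link-7"], superior)
    for _ in range(2):                    # two timer ticks
        ro.on_timer()
        time.sleep(0.1)
```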

With these approaches, it’s possible to make both of the software elements of a lifecycle automation system into scalable microservices, which means you could build a service lifecycle automation system based on a service model that’s the kind of hierarchy I’ve described. Note that this would also work for application lifecycle automation. I’m not suggesting this is the only way to do the job, but I do think that without the attributes of an approach like this, we’re spinning our wheels in service or application orchestration.

Where is Networking Really Heading?

It’s not likely that many doubt that network spending is under pressure. That’s true even in the mobile space, despite all the hype that’s been surrounding 5G. We’ve heard a lot about open-model networking, not only in 5G with O-RAN but with white-box switching and routing. We’ve heard that hosted features and functions are the way out of connectivity commoditization for operators. And, no surprise, we’ve really heard all of this before. What, if anything, is coming out of all the noise now?

We need to make a point, though, before we start. Nearly every major operator and network equipment vendor is a public company that, like all public companies, answers first to its shareholders. That means that they answer to Wall Street, to the financial markets worldwide. Despite what many believe and say, they can’t take steps to make customers happier if those steps compromise their bottom line…at least not for long. That sort of move would result in a shareholder revolt and a change in management, followed by a change in direction to prioritize profits again.

The pressure in the network industry is profit pressure. That pressure has already wrung a lot of operations cost reductions from the operators, but our first network industry challenge is that the emphasis on opex management was less on a durable strategy and technology shift than on a quick response to satisfy financial markets. That plucked all the low-hanging opex apples, which means that the residual opex reductions that might be gained can no longer fund massive technology changes.

Current trends in corporate earnings make it clear that neither operators nor their vendors are finding it easy to satisfy those financial markets. The shape of the industry in the future will depend on what happens as these two groups, whose long-term interests are symbiotic but whose short-term relationships are almost adversarial, wrestle out a compromise they can both live with.

Let’s start with the network operators. It’s obvious that the decades-long decline in profit per bit or return on infrastructure has not really been stemmed. However, it’s becoming obvious that this classic measurement isn’t as useful as it was believed to be. The problem for operators is lackluster revenue growth, resulting from the fact that the number of users of network services is growing very slowly, and that current users are unwilling to add things to their service list that would increase ARPU.

Growing the user base is likely the driver behind the operator fascination with 5G-connected IoT devices. Imagine billions of sensor/controller elements, all with their own mobile services plan! What could make a telco CFO’s heart beat faster than that? That these sensor/controllers, lacking any independent income source, would have their plans paid for by businesses and consumers was swept aside. Needless to say, that concept has no real credibility.

ARPU growth is the only other option for revenue gain, and the problem there is that neither the business nor the consumer segment of the market is eager to spend more. Businesses are finding it more difficult to justify even current costs of network services, and consumers are starting to realize that broadband speeds over 100 Mbps or so don’t map well to improved quality of their Internet experience. What else do you sell, to whom, and how?

In most industries, the buyer would be expected to transform their own business model. In telecom, that seemingly basic rule hasn’t worked out, largely because the business transformation would likely involve a technology transformation, and operators rely on network equipment vendors to provide technology. Those vendors, with their own Wall Street mouths to feed, are reluctant to upset the apple cart of their current revenue streams, in favor of something new where they might end up with a smaller market share and profit.

The increased operator emphasis on open-model networking is likely due largely to this. While all the old reasons for open networking, notably the vendor lock-in problem, remain valid, the biggest new factor in the picture is the perception of operators that their traditional vendors aren’t doing enough to resolve their ARPU dilemma. Whether open-model networking, even if it’s adopted, would resolve the deadlock on new service technology will likely depend on the nature of the vendors who provide it.

Startups are the normal source of technology innovation, but there are issues for startups in the network infrastructure or service infrastructure space. The first is that VCs are not particularly interested in funding network startups, particularly those who are aimed at selling to network operators. The second is that startups are usually seen by operators as risky partners.

Network equipment vendors are another source of innovation, but as I’ve already noted, operators believe that these vendors tend to promote a stay-the-course strategy for their operator buyers, to preserve their own bottom lines. I think that’s generally accurate, but the Ericsson decision to buy Vonage may represent a shift in network vendor thinking. It’s too early to say exactly what Ericsson has in mind here, or whether a voice-centric platform would be helpful to operators even if Ericsson positioned it optimally.

Transport or IP network vendors like Ciena, Cisco, Juniper, the IP piece of Nokia, etc., are at least somewhat credible to operators as a source of innovative service support, but in the operators’ view these vendors have largely ignored the question of service transformation in their thinking. Adding edge-connection features to connection services (security is an example) isn’t transformational enough for most operators. I think this group of vendors could in fact create a transformational, open-modeled, service ecosystem, but perhaps they have the same inertia problems that operators do.

How about the software players? Operators believe that both IBM/Red Hat and VMware represent credible players in an open-model world, especially those operators who see cloud technology as the key to integrating new ARPU-generating services with their current service set and infrastructure. One interesting truth, though, is that operators cling to software-provided virtual functions (NFV and the NFV applications called out in 5G specs are examples) rather than a more general composed-service model that could admit to cloud-hosted elements of an experience. They then criticize the vendors for being stuck in virtual functions!

Another possible source of open-model service-enhancing initiatives is the open-source community or open-model groups like O-RAN. The problem with these groups is that they tend to fall into two categories with regard to what drives them. The open-source community has the skill to do the job and a history of moving quickly, but they often can’t engage senior decision-makers in the telecom space, and tend to take a bottom-up approach that limits their ability to completely solve network problems. The open-model groups are often backed by operators, can engage with senior management there, and can take a broader view of the problem. Unfortunately, they don’t always take that broader view, and they usually move at the same glacial pace as the operators themselves. It’s a tossup whether this group can do what’s needed.

The final possibility is the cloud providers, and here is where I think we can actually expect, rather than simply hoping for, progress. The reason is scope of operation versus scope of infrastructure. Virtually every viable network service is really a national or global opportunity, but network infrastructure for ISPs, telcos, cable companies, and the rest of the operator community tends to be regional or even local. That leaves providers with a stark traditional choice—do I offer services in an OTT model that can spread over any network infrastructure but that competes with “real” OTT players, or do I build out somehow to cover a bigger geography with my technology? Neither option is realistic, and the cloud providers are offering a third, which is host my differentiated service technology in the public cloud when my prospects are outside my infrastructure footprint. The problem with this, of course, is that it enriches the cloud providers and reduces the impact of “differentiating services” on an operator’s own bottom line.

One potential path to a better solution, a path that almost any vendor could take and that standards groups or open-source bodies could surely take, is federation among operators. Ironically, this approach has been contemplated for almost two decades and there have been a number of initiatives that addressed a federation model and were fairly broadly supported. None generated a useful result, perhaps because the problems of next-gen networks were not yet acute. As I’ve noted recently, Ericsson could have this goal in mind with its Vonage acquisition, but likely only for voice and collaborative services.

A federation model would allow operators to offer each other wholesale features and other resources to support composition of new services. While it would still mean that a given operator might have to pay for out-of-footprint resources, other operators could also be paying that operator, and overall operator return on infrastructure would certainly be higher than it would be if public cloud resources were used instead of federated ones.

Of course, there’s always the answer that operators seem to love; stay the course. Think that somehow, against all odds, the past happy time when bits were worth something will be restored. That’s the thinking that prevails in most of the world today, but it’s under increased pressure and I don’t think it can be sustained much longer. That means that vendors of whatever cloth they may be cut will have an opportunity to improve their market share if they can get in front of the change. Which ones will do that is still an unanswerable question.

How Human is AI?

A Google employee raised a lot of ire by suggesting that AI could have a soul. That question is way out of my job description, but not so with questions that might lead up to it. Are AI elements “sentient” today? Are they “conscious” or “self-aware?” At least one researcher claims to have created a self-aware AI entity.

This topic is setting itself up to be what might be one of the most successful click-baits of our time, but it’s not entirely an exercise in hype-building or ad-serving. There are surely both ethical and practical consequences associated with whatever answer we give to those questions, and while some discussion is helpful, hype surely isn’t.

One obvious corollary question is “how do we define” whatever property we’re trying to validate for an AI system. What is “sentient” or “self-aware”? We’ve actually been arguing for at least a century on the question of biological sentience or self-awareness. Even religions aren’t taking a single tack on the issue; some confine self-awareness to humans and others admit at least indirectly to the notion that at least some animals may qualify. Science seems to accept that view.

Another corollary question is “why do we care?” Again, I propose to comment only on the technical aspects of that one, and the obvious reason we might care is that if smart technology can’t be relied upon to do what we want it to do because it “thinks” there’s something else it should be doing, then we can’t rely on it. Even if it doesn’t go rogue on us like HAL in “2001”, nobody wants to argue with their AI over job satisfaction and benefits. Is there a point in AI evolution where that might be a risk? A chess robot just broke a girl’s finger during a match, after all. Let’s try to be objective.

Technically, “sentient” means “capable of perceiving things” or “responsive to sensory inputs”. That’s not helpful, since you could say that your personal assistant technology is responsive to hearing your voice, and that a video doorbell that can distinguish between people and animals is responsive to sight. Even if we were to say that “sentient” had to mean that perceiving or being responsive implied being “capable of reacting”, that wouldn’t do us much good. Almost everything that interprets a real-world condition that human senses can react to or create could be considered “sentient”. And of course, any biological organism with senses becomes sentient.

“Conscious” means “aware of”, which implies that we need to define what awareness would mean. Is a dog “conscious”? We sort-of-admit it is, because we would say that we could render a dog “unconscious” using the same drug that would render a human unconscious, which implies there’s a common behavioral state of “consciousness” that we can suppress. Many would say that an “animal” is conscious but not a plant, and most would agree that in order to be “conscious” you need to have a brain. But while brains make you aware, do they make you self-aware?

We can do a bit better with defining self-awareness, at least with animals. Classic tests for self-awareness focus on the ability of an animal to associate a mirror image of itself with “itself”. Paint half of a spider monkey’s face white and show it a mirror, and it will think it’s another monkey. Paint some of the great apes the same way, and they’ll touch their face. “That is me” implies a sense of me-ness. But we could program a robot to recognize its own image, and even to test a mirror image to decide if it’s “me” through a series of movements or a search for unique characteristics. Would that robot be self-aware?

One basic truth is that AI/robots don’t have to be self-aware or sentient to do damage. It’s doubtful that anyone believes that chess robot was aware it was breaking a girl’s finger. AI systems have made major errors in the past, errors that have done serious damage. The difference between these and “malicious” or “deliberate” misconduct lies in the ability to show malice and to deliberate, both of which are properties that we usually link with at least sentience and perhaps to self-awareness. From the perspective of that girl, though, how much of this is really relevant? It’s not going to make the finger feel better if we could somehow declare the chess robot’s behavior malicious by running some tests.

This broad set of ambiguities is what’s behind all the stories on AI self-awareness or sentience. We don’t really have hard tests, because we can easily envision ways in which things that clearly shouldn’t meet either definition might appear to meet both. Is my robot alive? It depends on what that means, and up until recently, we’ve never been forced to explore what it does mean. We’ve tried to define tests, but they’re simple tests that can be passed by a smart device system through proper programming. We’re defining tests that can’t work where behavior is programmable, because we can program it in.

So let’s try going in the other direction. Can we propose what AI systems would have to do in order to meet whatever test of sentience or self-awareness we came up with? Let’s agree to put self-awareness aside for the moment, to deal with sentience, something that might be approachable.

One path to sentience could be “self-programming”. The difference between a reflex and a response is that the former is built in and the latter is determined through analysis. But anything that can solve a puzzle can behave like that. I’ve seen ravens figure out how to unzip motorcycle bags to get at food; are they self-aware because they can analyze? Analyzing things, even to the point of optimizing conditions to suit “yourself” isn’t exclusively a human behavior, and in fact can be found even in things that are not self-aware. Scale may be a possibility; a sentient system would be able to self-program to deal with all the sensory stimuli from all possible sources, through a combination of learning and inference. Children are taught sentient behavior, either working it out through trial and error or being instructed. Either is likely within the scope of AI, providing that we have enough power to deal with all those stimuli.

We can’t dismiss the role of instinct though. Sentient beings, meaning humans, still respond to instinct. Loud noises are inherently frightening to babies. Many believe that the fear of the dark is also instinctive. Instincts may be an important guidepost to prevent trial and error from creating fatal errors.

Culture is another factor, and in AI terms it would be a set of policies that lay out general rules to cover situations where specific policies (programs) aren’t provided. Cultural rules might also be imposed on AI systems to prevent them from running amok. Isaac Asimov’s Three Laws of Robotics are the best-known example:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are more useful in our quest for a standard of sentience than you might think. Each of them requires a significant extrapolation, a set of those broad policies, because what might “allow a human being to come to harm,” for example, requires a considerable exercise in judgment, meaning inference in AI terms. “Hitting a melon with a hammer will harm it. Hitting a human with one would therefore likely harm it” is an extrapolation of something an AI system or robot could be expected to apply, since conducting the first test wouldn’t be catastrophic in terms of social policy, and the rule could make explicitly testing the second hypothesis unnecessary.

I think that it would be possible, even with current technology, to create an AI system that would pass external tests for sentience. I think that some existing systems could pass enough tests to be mistaken for a human. Given that, we can then approach the complicated question of “self-aware” AI.

You and I both know we’re self-aware, but do we know that about each other, or any other person? Remember that sentience is the ability to respond to sensory inputs through the application of reasoning, meaning inference and deduction. Our ability to assign self-awareness to another depends on our ability to sense it, to test for it. We have done that with some animals, and have declared some to be self-aware and others not, but with animals we have biological systems that aren’t explicitly trying to game our tests. An AI system is created by self-aware humans who would be aware of the tests and capable of creating a system designed to pass them. Is such a system self-aware? I don’t think many would say it is.

The problem with the step from sentience to self-awareness is that we don’t know what makes us self-aware, so we cannot test that process, only symptoms, which can be mimicked by a simple AI system. We may never know. Should we be worried about self-aware AI going rogue on us? I think we have plenty of more credible, more immediate, things to worry about, but down the line? Maybe you need to ask your robot vacuum.

Is Nokia vs Ericsson an Open-Model vs OTT-Voice Duel?

Nokia surprised a lot of people by beating Q2 forecasts, sending its stock up on July 21st. It’s beating rival Ericsson in stock performance for 3, 6, and 12 months, too. Ericsson, as I noted in a blog on July 18th, has bet billions on a Vonage acquisition, while Nokia seems to have bet on an open 5G model. Why have the two taken different paths, which seems better tuned to market conditions, and what does this all say about the telecom space? That’s what we’ll look at today, starting with Nokia.

While Nokia has been criticized for being “open in name only” with respect to 5G, it has certainly moved to support the open model. I summed this up in my blog, so rather than repeat the criticisms, I want to look at whether the open approach has been helpful.

Operators have long been concerned about vendor lock-in, particularly in mobile infrastructure. The problems Huawei has been having, relating to the US government’s position on the company and the use of its technology, have raised the perceived risk level. Open-model 5G, including O-RAN, is a logical response, but it wasn’t something Nokia cited explicitly on their earnings call last week. They stressed execution and managing the supply chain issues, which is kind-of-pablum for earnings calls these days. The only mention of 5G was the comment that Nokia got off to a slow start and has caught up.

To a degree, Nokia is a kind of natural winner in the 5G space. 5G is a combination of mobile network upgrade (and Nokia has a division for that) and a general infrastructure capacity upgrade (for which Nokia also has a division; two, if you count optical separately). It’s always an advantage to cover most or all of the bases in a network upgrade like this; if you don’t, the other vendors you admit may fight you for some or all of the pieces. Both Nokia and Ericsson can provide mobile-specific transport elements, but Nokia has a pretty strong general metro infrastructure story.

Did Nokia’s open-model embrace help, then? A quick check of comments operators have made to me about them was interesting. Roughly five of every six said Nokia was “open”, far more than competitor Ericsson garnered. Slightly less than half said that “open” was an essential attribute to them, so that suggests that Nokia had an advantage among roughly 38% of operators I sampled. Nokia and Ericsson matched each other in other “essential” attributes, like expertise, cost, and support for the required feature set, so this is a good thing for Nokia.

Ericsson, as I noted, didn’t fare as well with “open” comments. Only two operators in five said they were “open”, slightly less than half the number who described Nokia that way. Ericsson doesn’t have the product breadth in metro infrastructure Nokia has either, which gives Nokia an edge that’s difficult to measure.

What about Ericsson’s Vonage acquisition? As I noted in my blog on the deal, Ericsson seems to be focusing their attention on UC/UCC services, which are connection services. That’s a comfortable place for operators, but it doesn’t move the ball all that much, particularly when the same sort of services could be offered as OTT services (as Vonage has been doing). However, almost the same percentage of operators who thought Nokia was “open” thought that the Vonage deal “could be positive” for Ericsson’s customers.

Wall Street didn’t like Ericsson’s quarter; their stock took a hit. Nokia’s went up on its own quarterly results. On the basis of all of this, it would appear that Nokia has positioned itself better, but it’s not totally clear what role “open” participation had in their results.

We might even be justified in saying that Nokia may have staved off open-model 5G just a bit, if one takes the broadest sense of the term. Operators, confronted with no major/credible vendor who promised an open-model 5G, might well have been way more active than they have been in seeking open solutions from less familiar vendors.

The problem with those less-familiar vendors goes beyond the obvious point that they’re not the usual suppliers of mobile or telecom infrastructure. Because few vendors provide the actual RF components of a mobile network, and many of the most active supporters of open-model 5G are smaller firms, their solutions are necessarily integrated offerings. Operators have long been wary of this model, making exceptions largely when one member of the open project is a giant in another space. That’s one reason, IMHO, the operators have been promoting a public cloud strategy.

The other reason, of course, is that 5G service requirements spread far beyond the traditional wireline footprint of operators. That means that they don’t have real estate to place metro hosting in all the places where they need, for competitive reasons, to offer service. This is the best reason I can see for the Ericsson/Vonage deal. If Ericsson could use the Vonage platform to create a means of “federating” the elements of a 5G service, or even offer to be the “interexchange player of last resort” to link locations away from where an operator would have facilities for hosting, they might be able to fend off the public cloud people. If that were the case, Ericsson could expect to see its fortunes improve considerably, and literally.

Another possible reason for the deal is the AT&T link I mentioned in yesterday’s blog. If AT&T is seeing a need to transform its business voice services to VoIP and a universal 5G platform, then the use of the Vonage platform might help, and it might be at least a contributing reason for AT&T’s selection of Ericsson in its 5G expansion.

For Nokia, anything that could help Ericsson poses the principal risk to Nokia’s current success. As operators respond to commoditization in the connection services space, they are likely to have to elevate their current services, including voice, to something like OTT. Vonage could help support that transition too, and if Ericsson pushes Vonage’s platform properly they could generate enough operator interest to threaten Nokia’s momentum.

But is that likely? All telcos are conservative, glacier-like in their movement. Telecom vendors tend to be the same, and Ericsson has always been perhaps the Great Marketing Sloth in a group that’s pretty slow overall. Nokia, with so many different corporate DNA contributions over the years, has at least some senior people with a more aggressive background. There are a lot of ways they could fend off this latest Ericsson push, and if they do they may take the top position in the telecom infrastructure space for quite a while.

We can’t declare Nokia’s upside a victory for open-model networking, and it’s too early to say just how well Ericsson will, or even can, play its Vonage acquisition. Any way you look at it, the Nokia/Ericsson and AT&T/Verizon duels suggest that we’re in for some interesting times in telecom.

Has AT&T Done Enough?

I probably watch AT&T more closely than I do any other telco, any other network operator. Not only have I chatted with literally hundreds of AT&T people over the years, I’m convinced that they are a poster child for “telco-tries-to-be-realistic”, and I certainly think they need to do that. I also think this quarter’s earnings prove that point, and maybe offer some insights to operators overall.

The first thing that jumps out from the transcript of AT&T’s call this quarter is the comment “we’re continuing our progress, improving our infrastructure and expanding our customer base across our twin engines of growth, 5G, and fiber.” They also said they had more quarterly postpaid adds than ever before. In fiber, AT&T reported over 300,000 fiber adds, making it the tenth straight quarter with over 200,000. While AT&T might not agree, I think that 5G success can be linked to smartphone promotions to facilitate a switch to 5G or even a switch to AT&T from a competitor. I think the fiber wireline Internet emphasis is defensive; they need to fend off players like Comcast who are MVNO partners with a competitor. DSL won’t do the job.

On the business side, AT&T has issues, but it’s hardly alone. They commented that “We’re seeing more pressure on business wireline than expected.” If you read (or listen) further, you find that AT&T is seeing businesses gradually shift away from what they call “legacy voice and data services”. There is no question that businesses have been far more resistant to VoIP and Internet-based connectivity than consumers; consumers of course never really had any data service but the Internet.

The most interesting comment on business services is “On the data front, VPN and legacy transport services are being impacted by technology transitions to software-based solutions. Today, approximately half of our segment revenue comes from these types of services.” What this is saying is that SD-WAN and virtual networking in general, using Internet connectivity as a “dialtone”, are starting to displace IP VPNs and business-specific broadband access.

Let’s stop here and parse this stuff a bit. Businesses, as I’ve pointed out in past blogs, link incremental spending to incremental benefits. Absent some justification, they want to see costs go down, not up. The fact that network connectivity hasn’t been linked to any transformational changes in productivity puts price pressure on even current network spending, and makes any increase hard to justify. That shows that “business connectivity” is commoditizing.

Viewed in this light, the 5G and fiber move has another level of justification. AT&T notes that business customers are replacing legacy voice with mobile services. The flight from legacy business data services combines with this to put business revenues under immediate pressure, so one solution is to try to beef up residential revenues, and another is to deploy 5G assets to capture any flight from legacy voice, and beef up fiber to extend quality Internet access to more places, including places where there are branch offices of their valued (and fleeing) enterprise customers.

Government spending is also under pressure, which AT&T says accounts for 20% of business service declines. Here, where it’s policies and not market conditions that establish the purchase justification framework, the company really can’t offer any affirmative options beyond the same 5G/fiber-broadband focus that they’ve adopted to support consumer and enterprise. I think it is very possible that targeted software-defined data services and mobile/VoIP OTT-like services that would appeal to businesses on cost could also improve AT&T’s ability to win government deals.

The reason I noted “OTT-like services” as appealing is that AT&T also said that out-of-area service extension contributed about 20% to business service revenue declines. They said “This pressure will be managed through opportunities to operate more efficiently, movement of traffic to alternate providers, symmetrical wholesale pricing adjustments and natural product migration trends.” While they didn’t say this explicitly, a migration to a software-defined business VPN strategy for out-of-region branch connectivity would surely help manage these “wholesale” costs.

This sort of service shift also raises the issue of 5G dependence. 5G is seen by many (apparently including AT&T) as a platform for the creation of wireless features that would earn more revenue, which simple 5G migration does not create. The problem is that, like business services, 5G value-add features like network slicing would have to be “federated”, wholesaled from a provider who actually had facilities in a given geography that AT&T didn’t cover directly.

A shift to business services based on mobile and software-defined capabilities is an OTT strategy. AT&T has in the past commented on the idea of building “facilitating services”. According to what AT&T said in March, 2022, “On what I refer to as Act Two, we are doing a lot of work today that is enabling us to open up aspects of the network for others to come in and start at offering value-added services associated with it.” It’s very possible, even likely, that AT&T intends to use these opened-up assets to enhance OTT services, making them differentiable on AT&T’s network where, on other networks, they’d simply be riding over the top.

One thing that seems pretty clear is that you can’t admit to commoditization of transport and a growing dependence on OTT without having a revenue strategy to offset the inevitable declines at the lower level. This may be another example of what Ericsson could be hoping to exploit with the Vonage deal, since it was Ericsson that was picked by AT&T for its 5G expansion. That would mean that AT&T and Ericsson/Vonage would have to codify how the Vonage OTT stuff could exploit APIs that AT&T exposes below. Does AT&T use the Vonage platform developer program to do some of the linkage?

Another thing that seems clear is that it’s likely that most, if not all, telcos will have to face these same issues at some point. AT&T has the lowest demand density of the Tier One telcos, which means that its natural return on infrastructure investment is under the most pressure. However, rival Verizon had an objectively bad quarter while AT&T had at least some bright spots. Verizon has the advantage of high demand density, but that may have lulled the company into complacency, particularly with respect to how to deal with the commoditization of connection services and pressure on return on infrastructure investment. And, of course, Verizon’s territory is dense enough to encourage competition.

A final point, perhaps the most critical point, is that business service infrastructure and residential broadband infrastructure have to converge. When you have a problem with return on infrastructure you need to take advantage of common facilities wherever possible. The fact is that there are no branch office “business access” technologies in common use that can measure up to the performance of residential broadband. Branch locations, at the least, need to connect through the same facilities as consumers. I think AT&T’s fiber and 5G plans, which are focusing on areas of high demand density within their territory, are a step toward “common-ization”, and here again I think AT&T is taking a lead over other telcos.

Not as much as they should, though. AT&T has tried harder than any other Tier One to confront the future, but they’re still entombed in telco amber, limited in their ability to address the future by the blinders of the industry. The issues they saw first, and have tried to address aggressively, are being felt by others now. Has AT&T done enough to gain the running room they need to do the rest? I suspect we’ll get the answer to that in 2023.

How Do We Orchestrate Complex Services?

One of the things that 5G and the notion of “virtual network functions” have done is demonstrate that “provisioning” is getting a lot more complicated. A virtual function that mirrors a physical device lives in two worlds, one the world of network operations and the other the world of application/cloud hosting. If we expand our goal to provisioning on-network elements (virtual functions are in-network), or if we create virtual functions that have to be composed into features, it gets more complicated yet. What does “service orchestration” mean, in other words?

In the world of networking, services are created by coercing cooperative behaviors from systems of devices through parameterization. Over time, we’ve been simplifying the concept of network services by adopting an “IP dialtone” model and relying increasingly on the universality of the Internet. This model minimizes the provisioning associated with a specific service; Internet connectivity doesn’t require any provisioning other than getting access working, and corporate VPNs require only the use of a specific feature (MPLS) that’s pre-provisioned.

Application networking has also changed over time. The advent of the cloud, containers, and Kubernetes has combined with basic IP principles to create what could be called the subnet model. Applications deploy within a subnet that typically uses a private IP address space. Within this, all components are mutually addressable but from the outside the subnet is opaque. The on/off ramps are then selectively “exposed” to a higher-layer network through address translation.
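
To make that opacity-plus-selective-exposure idea concrete, here’s a minimal sketch of the subnet model. Addresses and names are invented, and no particular orchestrator or CNI is assumed; it just shows components that are mutually addressable inside a private space and reachable from outside only through explicit translation.

```python
# Minimal sketch of the "subnet model": components are mutually addressable
# inside a private address space, opaque from outside, and selectively
# exposed through address translation. Names and addresses are illustrative.

class Subnet:
    def __init__(self, cidr: str):
        self.cidr = cidr                # private address space, e.g. 10.0.1.0/24
        self.components = {}            # name -> internal address
        self.exposed = {}               # (external addr, port) -> internal target

    def deploy(self, name: str, internal_addr: str):
        self.components[name] = internal_addr

    def expose(self, name: str, external_addr: str, port: int):
        # Only explicitly exposed components are reachable from outside.
        self.exposed[(external_addr, port)] = self.components[name]

    def resolve_external(self, external_addr: str, port: int):
        # The translation step an on/off ramp would perform.
        return self.exposed.get((external_addr, port))

app = Subnet("10.0.1.0/24")
app.deploy("frontend", "10.0.1.10")
app.deploy("db", "10.0.1.20")          # never exposed; invisible externally
app.expose("frontend", "198.51.100.5", 443)
print(app.resolve_external("198.51.100.5", 443))   # -> 10.0.1.10
print(app.resolve_external("198.51.100.5", 5432))  # -> None (opaque)
```

The database component in the sketch never appears outside the subnet, which is the point: the higher-layer network sees only the ramps that are deliberately exposed.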

If we apply this combination to the world of Network Functions Virtualization (NFV) using the ETSI ISG rules, we can see a “service” that is made up of discrete hosted functions that are “service chained” with each other, and which obviously also have to connect with real devices. NFV took a network-centric view in service chaining and internal service connectivity, so the virtual functions were hosted in the “application domain” and connected in the “network domain”. VNFs lived in the network domain as virtual forms of devices.

What happens when we move away from original-ETSI-ISG models to “containerized network functions” (CNFs)? It depends, and that’s the nub of our first challenge in orchestration. Do we create CNFs by containerizing VNFs but otherwise leave things as they were? If so, then we’ve not really moved away from the ETSI approach, only changed the hosting piece; VNFs are still devices in the network domain. Or do we adopt the cloud/container/Kubernetes model of hosting, and extend the network assumptions of an “application domain” to include devices? Or will neither work?
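
To make the distinction concrete, here’s a purely illustrative sketch, not drawn from any actual ETSI or Nephio data model, of the same function described under both options: containerized but still treated as a device in the network domain, versus pulled into the application domain and managed as a cloud workload.

```python
# Sketch of the two ways a hosted function could be modeled. Purely
# illustrative; the fields and values are assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class HostedFunction:
    name: str
    image: str          # software artifact to deploy
    hosting: str        # "vm" (classic VNF) or "container" (CNF)
    domain: str         # "network" = managed as a virtual device
                        # "application" = managed as a cloud workload

# Option 1: containerize the VNF but keep the ETSI view.
# Hosting changes, but the function is still a device in the network domain.
firewall_as_vnf = HostedFunction("edge-firewall", "fw:3.1", "container", "network")

# Option 2: adopt the cloud model and pull the function into the
# application domain, where Kubernetes-style practices apply.
firewall_as_cnf = HostedFunction("edge-firewall", "fw:3.1", "container", "application")

def managed_by(fn: HostedFunction) -> str:
    # Which toolchain owns the lifecycle depends on the domain, not the hosting.
    return "network management (device view)" if fn.domain == "network" \
        else "cloud orchestration (workload view)"

print(managed_by(firewall_as_vnf))  # network management (device view)
print(managed_by(firewall_as_cnf))  # cloud orchestration (workload view)
```

Note that the hosting field is identical in both cases; it’s the domain assignment that decides which management world owns the function, which is exactly the choice the question above poses.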

This isn’t a philosophical issue. The reason why the NFV ISG locked itself into its model of networking was to preserve existing management practices and tools. If a VNF is a virtual form of a physical device, then we could argue that a network service is created by device management as usual, and that the virtual-device piece is created by software hosting. Ships in the night, except of course that if a virtual device fails and can’t be restored at the “hosting” level, it has to be reported as failed to the network service layer.

If ships in the night are still passing, so to speak, then the current management and orchestration practices associated with networks and applications can continue to be applied. There isn’t a compelling need to integrate them, so “service orchestration” may not be a big deal. In the world of NFV, this seems to be a safe bet.

But the world of NFV may not be the end of it. If we are deploying services, can we compose a service where a real device or a virtual device might be employed? If so, then our deployment orchestration process has to have the ability to recognize equivalence at the functional level but divergence at the realization level. We could even envision a situation where somebody wants “SASE-Device” as a part of the service, and that want might be fulfilled by 1) loading SASE functionality as a VNF into a uCPE element already in place, 2) shipping a uCPE or an SASE device and then setting it up, or 3) deploying SASE functionality inside a cloud data center.

That latter point implies that we have to consider “local conditions” when setting up a service. Those conditions could include not only what’s locally available already, but also perhaps constraints on what can be used, such as cost. This in itself suggests that it may be necessary to mingle provisioning steps across the network/hosting boundary. Ships in the night may collide.
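
As a hypothetical illustration of what “equivalence at the functional level but divergence at the realization level” might look like, here’s a small sketch of a realization chooser for the SASE-Device example, driven by invented local conditions (uCPE availability, latency to a cloud point of presence, budget). None of this reflects any actual product, standard, or operator policy.

```python
# Hypothetical sketch: mapping an abstract "SASE-Device" requirement to a
# locally optimal realization. All fields and thresholds are illustrative.

def realize_sase(site: dict) -> str:
    """Pick a realization for the functional requirement based on local conditions."""
    if site.get("has_ucpe"):
        # Cheapest and fastest: load SASE as a VNF on equipment already in place.
        return "load SASE VNF onto existing uCPE"
    if site.get("cloud_pop_latency_ms", 999) <= site.get("max_latency_ms", 50) \
            and site.get("budget") == "low":
        # Acceptable latency and a tight budget: host the function in a cloud data center.
        return "deploy SASE instance in a nearby cloud data center"
    # Otherwise ship hardware and provision it on arrival.
    return "ship uCPE/SASE appliance and provision on install"

branch_a = {"has_ucpe": True}
branch_b = {"has_ucpe": False, "cloud_pop_latency_ms": 18, "max_latency_ms": 50, "budget": "low"}
branch_c = {"has_ucpe": False, "cloud_pop_latency_ms": 80, "max_latency_ms": 50, "budget": "normal"}

for site in (branch_a, branch_b, branch_c):
    print(realize_sase(site))
```

Even this toy version has to read conditions from both the hosting side (what’s deployable where) and the network side (latency, existing CPE), which is why the ships-in-the-night separation starts to strain.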

The dilemma that faced the NFV ISG, and the choice to let the ships pass in the night, is now being faced by Nephio, for the same reasons it faced the ISG. Service orchestration, in a unified sense, is a big challenge. I took the time to lay out a complete model in ExperiaSphere, and it took six sets of tutorial slides to describe it. The late John Reilly of the TMF (one of my few personal heroes) created a model (NGOSS Contract) that, if properly extended, could have done much the same thing, and a decade earlier. Implementation difficulties, then, may be the issue in service orchestration.

Or, it may be that we’ve not faced the issue because we’ve been able to avoid it, which is what I think is the case. From both the network side and from the hosting side, jumping between ships in the night seems unnecessary. The question is whether that’s true.

Currently, it probably is. As long as we stick with the NFV-established vision that a VNF is just a virtual device, then what we’re doing is codifying a vision where separation of orchestration doesn’t change just because we may add different virtual devices in the future. That’s because the NFV vision is not really limited to current physical devices; what we’re really saying is that the network behavior of a virtual function can be separated from the hosting. That’s true with 5G, which is the only truly standards-sanctioned use of NFV.

I’m not so sure that those ships can continue to pass, night or otherwise. There are a number of trends already visible that suggest we may need integrated provisioning.

First, the question of whether physical devices should be required to decompose themselves into microservices before being assembled into virtual devices was raised at the first US meeting of the ISG in 2013. At the time, there was great concern that requiring this sort of decomposition/recomposition would make vendors unwilling to submit their current device logic in VNF form, and I agreed with that. I wish I’d brought up the question of whether future VNFs might be better composed from microservices, but I didn’t. If we are to compose a VNF, then multiple software instances have to be selected and deployed to create the network-layer managed entity, and that may be beyond simple ships-in-the-night orchestration.

Second, operators worldwide are trying to automate service lifecycle management, from order entry through end of life. Automated lifecycle management has to be able to accommodate differences in “local conditions”, which for residential services means differences across serving areas, and for business services means differences across multiple sites. The more location differences there are, the more important it is to be able to change an abstract requirement (an SASE, to use my earlier example) into a locally optimal specific deployment.

Third, we are already seeing operator interest in “facilitating services” designed to bridge between their connection services and on-the-network or OTT services. It’s hard to imagine how these will be supported without a fairly comprehensive orchestration capability, not only because they’re likely to require feature deployment to accommodate OTT partners’ locations, but also because they’re a form of federation.

Which is the final reason. Even 5G and VNF-based services are already facing the question of how they are supported when an operator’s footprint won’t match the footprint of the customer. Every operator who has this problem (which is pretty much every operator) will conclude that they don’t want the customer to contract for each geographic segment independently, but if that isn’t done then the operator who owns the deal will have to accept federated contributions from other operators. How, without better orchestration, would those be lifecycle-managed?

Of course, logically deducing that we need an orchestration solution doesn’t create one. We could always just stay on the current course, and if we do we’re likely heading for a future where “the Internet” subsumes every other notion of networking. If we presumed universal IP like that, we could use virtual-networking tools (which Kubernetes, for example, already supports) to build service connections both to the users and between feature elements. The problems with this are that it could take a long time to evolve, and that if it does, it’s difficult to see how network connection features like 5G slicing could be differentiated without mandating paid prioritization on the Internet.

This is a complex issue, one that’s going to take some serious thinking to resolve, and it seems like there’s an almost-universal tendency to just assign the problem to a higher layer. That’s not been working to this point, and I don’t think it will work better in the future. Adding layers adds complexity, in orchestration, security, and pretty much everything. We seem to be complex enough as it is.

Can We Define the Next Big Tech Thing?

I blogged yesterday to ask the tech industry “What Lies Beyond?” What I want to do here, and what I deliberately did not do yesterday, was try to answer that question. That’s because my answer would move from analytical assessment of what I consider to be hard facts, to interpretations that in the end are my personal judgment…an educated guess if you like. I don’t want to mix those things on a topic this important.

If you look at our relationship with tech as it’s evolved over the last 70 years or so, it’s possible to see a trend that’s hardly surprising, but not always considered. Tech is getting more involved with us. In business, tech is moving closer to the how and not just the what. We used to enter transactions into databases by punching cards to record them long after the event. Yesterday, I checked out of a market by waving my phone over a terminal, and everything that those cards carried, and more, is pushed through a whole series of applications.

In the consumer space, many people have built their lives around what is in fact a virtual world. Even I, a social-media Luddite, have many “friends” that I don’t see face to face, and some I’ve never seen that way at all. We are entertained by devices we carry with us, not ones we stick on a console in the living room. We crowdsource more and more, probably more than we should, because we can.

The way that tech has changed businesses and lives is important because it’s the sum of the changes we think are valuable that determines what we’re willing to spend on tech. Thus, if the question “What Lies Beyond” that I asked in yesterday’s blog matters, it’s important to understand how tech gets even closer to us, more involved with us. Features of technology are important only insofar as they support that continued shrinking of tech’s distance from our lives.

To me, there is a single term that reflects what we need to be thinking about, and that term is context. Suppose you had an oracle (a soothsayer, not the company!) who told you the absolute truth about everything. It would be great, right? Now imagine yourself sitting in a restaurant having lunch, and hearing the oracle say “when you mow your lawn, you need to set the mower an inch higher.” Or maybe, as you ponder an investment decision, hearing “you haven’t talked with Joan and Charlie for a while.” Both those oracle comments may be true, and even helpful, but not in the context in which they’re delivered.

What lies beyond our on-demand world is the anticipated world. What allows useful anticipation is context.

Context is made up of four things. First, there’s what we are doing. Second is what we need to do it. Third is where we are doing it. Fourth is the risk/reward balance of interruptions. Even if we’re pondering investments, knowing the building is on fire justifies interrupting our deliberations. Knowing that the lawn needs mowing probably does not.
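
As a toy illustration only, with invented fields and thresholds, those four components and the interruption decision they drive might be modeled like this:

```python
# Toy model of the four context components and the interruption decision.
# Fields, topics, and thresholds are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Context:
    activity: str              # what we are doing
    needs: List[str]           # what we need to do it
    location: str              # where we are doing it

@dataclass
class Interruption:
    message: str
    urgency: float             # 0..1: cost of ignoring it (fire ~ 1.0, lawn ~ 0.05)
    topics: List[str] = field(default_factory=list)   # what the message is about

def should_interrupt(ctx: Context, item: Interruption, threshold: float = 0.5) -> bool:
    # Fourth component: the risk/reward balance of interrupting.
    # Relevance to the current activity or its needs raises the reward;
    # sufficient urgency justifies interrupting anything.
    relevant = ctx.activity in item.topics or any(n in item.topics for n in ctx.needs)
    score = max(item.urgency, 0.6 if relevant else 0.0)
    return score >= threshold

lunch = Context("eating lunch", ["menu", "bill"], "restaurant")
print(should_interrupt(lunch, Interruption("Building is on fire", 0.99)))                      # True
print(should_interrupt(lunch, Interruption("Set the mower an inch higher", 0.05, ["lawn"])))   # False
print(should_interrupt(lunch, Interruption("Your bill is ready", 0.2, ["bill"])))              # True
```

The hard part, of course, isn’t the scoring logic; it’s getting trustworthy values for what we’re doing, needing, and risking, which is the problem the next paragraph takes up.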

The problem with addressing context is that it requires too much from us. None of those four components are things that our gadgets can readily provide. By nature, context is systemic, and so we need to get the system in which we live, the real world, transported into our tech world. This is best accomplished, like many things, through a movement on both sides. Thus, the what-lies-beyond future depends on three things: the metaverse, IoT, and AI.

What underpins context is what I’ve called the “metaverse of things” (MoT) or a digital twin. In order for tech to be contextual, it has to have a model of our context that it can deal with. The combination of digital twinning technology and an injection of metaverse principles can allow IoT and other information resources to be introduced into a contextual model that can also “contain” the user.

I’ve blogged before about MoT, digital twinning, and contextual services, so I won’t repeat that here. Instead, I want to look at the tech impact of these concepts, by linking them back to the driving benefits.

Any widespread use of context demands at the minimum a broad view of presence; not just the simple “I’m online” stuff we get now, but something more body-specific, motion- and location-specific. This would mean a combination of “tags” (which could be watches, even smartphones, but could also be RFID-like elements) and “sensors” that could sense the tag positions. We could expect this to provide a mission for IoT, and we could also expect it to be multi-leveled, meaning that people in their homes or workplaces would “allow” greater precision in accessing those tags than they would when walking the street.

Because this sort of tag/sensor combination is useless if every MoT player decides to create their own system, we can also expect that this will standardize at three levels. First, obviously, the sensors and tags themselves would be standardized. Second, the message formats would be standardized so applications with access rights could read the presence data. Finally, there would be service-level standardization that would provide access to sensor/tag data with whatever levels of anonymity regulators require, and that would simplify software development by presenting “presence features” rather than raw data. You can’t have hundreds of applications accessing a simple sensor without blocking it.
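
A hypothetical sketch of that third level, with entirely invented structures, might look like this: raw tag/sensor readings come in, a zone policy sets the permitted precision, and applications see only a “presence feature” rather than the sensor itself.

```python
# Sketch of the third standardization level: a service that turns raw
# tag/sensor readings into "presence features" applications consume, applying
# an anonymity policy per zone. All structures here are hypothetical.

RAW_READINGS = [
    {"tag": "tag-0042", "sensor": "home-livingroom", "x": 2.1, "y": 3.7},
    {"tag": "tag-0042", "sensor": "street-cam-17", "x": 880.0, "y": 122.5},
]

ZONE_POLICY = {
    "home-livingroom": "precise",   # owner allows fine-grained presence at home
    "street-cam-17": "coarse",      # public spaces expose only coarse presence
}

def presence_feature(reading: dict) -> dict:
    """Publish a feature, not raw data, so one sensor isn't polled by hundreds of apps."""
    level = ZONE_POLICY.get(reading["sensor"], "coarse")
    if level == "precise":
        return {"tag": reading["tag"], "zone": reading["sensor"],
                "position": (reading["x"], reading["y"])}
    # Coarse: report only that the tag is present in the zone, no coordinates.
    return {"tag": reading["tag"], "zone": reading["sensor"], "position": None}

for r in RAW_READINGS:
    print(presence_feature(r))
```

The point of the abstraction layer is the same as the point of the standard: applications bind to the feature, not to hundreds of individual sensors.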

The next thing you could expect is that latency becomes important, but it’s not just network latency; it’s the length of the whole control loop. The first step in keeping latency down is to provide edge computing facilities to shorten transit delay, and the second step is to improve network latency, first to the edge sites and then between edge sites. If we assumed that edge hosting was done in a metro area, major cities would likely be able to satisfy edge requirements from a metro center; for “metro areas” that are spread out (Wyoming comes to mind; the whole state was defined as a single Local Access and Transport Area, or LATA), it would probably be necessary to spread the metro out, or at least to manage a trade-off between the number of connected edge sites (to reduce the latency of reaching one) and the latency of the connections between them.
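
As a back-of-the-envelope illustration, with every number invented, the trade-off might be sketched like this: adding edge sites shortens the access leg of the control loop, but each site adds coordination cost with the others, so there’s a sweet spot rather than a “more is always better” answer.

```python
# Back-of-the-envelope sketch of the edge-site trade-off. All numbers are
# invented for illustration; no real propagation or processing data is implied.

def control_loop_ms(sites: int, region_km: float = 500.0,
                    per_km_ms: float = 0.01, processing_ms: float = 5.0,
                    inter_site_overhead_ms: float = 2.0) -> float:
    # Average distance to the nearest of N sites shrinks roughly with sqrt(N).
    avg_access_km = region_km / (2 * sites ** 0.5)
    access_ms = avg_access_km * per_km_ms
    # A spread-out "metro" still pays something to coordinate across its sites.
    coordination_ms = inter_site_overhead_ms * (sites ** 0.5)
    return round(2 * access_ms + processing_ms + coordination_ms, 2)

for n in (1, 4, 16):
    print(n, "edge sites ->", control_loop_ms(n), "ms round trip")
```

With these made-up parameters, four sites beat one, but sixteen are worse than either, which is the shape of the trade-off the paragraph describes.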

The notion of presence as a physical property has to be augmented by presence in a behavioral sense, meaning that applications would need to understand that “what am I doing?” dimension. Some of that knowledge could come from physical presence; I’m driving a car, walking along, riding the subway. Some could come from my interaction context; I’m chatting with Jolene or Charlie, texting with my mom, or perhaps I’m receiving a call that’s been prioritized based on my policies. This information is generally available today, but in order for it to be a real driver of applications, it would have to be deliverable in a standard form and conform to regulatory policies on privacy and personal protection, just as physical presence would.

What I think this exercise demonstrates is that the future of tech depends on the creation of an ecosystem, not on the individual technologies that make it up. If you improve network latency you have removed that barrier to a set of applications for which we have a half-dozen other inhibitors to deal with. Same with edge computing or IoT sensors. Our problem in advancing tech to the next level, in the sense of being able to help workers more directly to enhance their productivity, or to improve quality of life for consumers, is that we’re looking at a strand of spaghetti and calling it a dish.

If we want to advance tech, in computing, networking, personal technology, or whatever, then we need to support this ecosystemic view in some way. If we don’t do that, then eventually we’ll likely stumble on a solution, but it will take decades where doing it right might take months. Which tech evolution do you prefer? I think you know my choice.

Tech: What Lies Beyond?

Why is it that we always seem to miss the obvious? OK, we had a COVID wave, during which some areas were locked down and many people avoided going out, particularly to “social gatherings”. Economic data shows that they accumulated record savings. What happens when the restrictions on their lives are lifted? Answer: They spend some of those savings. Call it “rebound behavior”.

How about this corollary? People stayed home, so what did they do? Educate themselves by reading the classics, or watch TV? You can guess that one. In a desperate attempt to find something to watch after the first week of isolation, they use some of that money they’re saving to subscribe to pay TV. What happens when they can go out again? Answer: They go out, and thus they’re not home to watch that pay TV, so they cancel their subscription.

The pandemic created massive shifts in human behavior, which created massive shifts in economic behavior. Returning to “normal” doesn’t mean that somehow with the advance of the calendar, we erase the past. We recover from it. That’s a big part of why we’re having inflation now, and it’s a big part of why we’re seeing changes in the market sectors that were impacted by the behavioral changes. In tech in general, and networking in particular, it would be nice to know what changes are transitory, part of rebound behavior, and what are likely to persist.

Objectively, the current impacts on the global economy and markets are related to “inflation”, and I put the term in quotes for reasons you’ll soon see.

Inflation is an increase in the price of goods, almost always associated with an excess of money to spend. The most familiar cause of inflation is boosting the money supply—we saw that with Confederate currency during the US Civil War, and with the German mark during the Weimar Republic’s hyperinflation of the early 1920s. A similar, though usually less dramatic, source is excess economic stimulation, which some have said is a factor in the current situation. But there’s another factor at play here, which is the fact that consumers during the pandemic accumulated record or near-record savings and denied themselves a bunch of things. Pent-up demand is a powerful force once the factors suppressing demand are eased.

The normal response to increased demand is that happy vendors increase supply. That’s been a problem because there are constraints on supply chains created by both COVID and the post-COVID mindset. We had a lot of people working from home or out of work, and not all were willing to return to business as usual. We had business failures, people who found other positions, and all of this created distortion in the labor markets. Governments have also been stressed by conditions, including civil unease, and have responded by trying to gain advantage somewhere. In short, the global economy is in flux while the economies of the world adjust to the evolving market realities. That’s why global stock markets have been so volatile.

Are we in, or about to be in, a recession? If so, it will be an unusual one. Unemployment traditionally spikes when we’re entering a recession, and that’s not happening. Consumer spending usually dives, and that’s not happening either…yet. The question, from a broad economic perspective, is whether these twin not-yet factors turn negative because of increased worry before the economic shifts and shakes have shaken out. If they hold up until the shake-out is done, then we can expect a recovery in the second half. If they turn negative first, then when we finally see daylight will depend on how far those twin factors go down the tube before we see stability in the fundamentals of the global economy.

For tech, we can expect a variation on the theme here. Businesses typically try to plan capital purchases for stable economic conditions. If they see an issue, they will often take action to minimize their exposure should things get worse. Some banks’ decision to increase reserves for bad loans is an example, and so are decisions (so far by a limited number of companies) to slow-roll tech spending plans, particularly new projects and new spending predicated on a general improvement in revenues. This sort of reaction tends to happen when there’s a perception of risk that crosses over into another budget period. Companies (assuming, for convenience, calendar-year budgeting) will often push projects into later quarters if they feel uncomfortable with conditions.

As of today, I’m not hearing about many cases of project deferral. So far, the largest constraint on vendor revenues has been delivery challenges created along the slippery slope of the current supply chain. Since the supply chain constraints on the production of goods are also perhaps the largest source of inflationary pressure, fixing the supply chain would fix things overall, and fairly quickly. When, then, could we expect some relief there?

That’s hard to say. I’ve attempted to run my model of buyer behavior to guesstimate something, but this is uncharted territory. I told a friend in the investment business that I expected that the quarter just ended (April-June) would still probably generate some unhappy news, but that it would also generate some good news. The next quarter, then, would be when I’d expect the good news to dominate. Given that the stock market is driven by the prospect of something more than by its realization (by the time what’s coming is clear, everyone else has already taken action, so you need to move on rumor), we could expect stocks to run somewhat ahead of overall economic conditions.

That doesn’t mean that tech is going to roar back to pre-COVID days or better. A lot of buyers have been planning for the worst, and some of the steps they’ve contemplated wouldn’t be inappropriate in good times. As is the case with the economy overall, there’s going to be a race between positive moves on the supply chain and negative reactions to inflation, central bank interest rate increases, and overall economic concerns.

Two things will make the difference between tech vendors at risk and tech vendors with an opportunity. The first is the breadth of the products on which their revenue depends. This is not a time when the broadest solutions win; breadth magnifies the risk that at least some of those products will come under project deferral pressure. The second is a strong link to a credible future requirement set. Buyers are still looking forward, but they’re potentially looking beyond their normal horizon. If they see a vendor who seems to offer something outstanding in support of a distant shift, that vendor will look better in the near term. This second point is linked to the concept of a “planning horizon”.

From the very first, my surveys of enterprises have indicated that companies considering capital purchases will assign an expected useful life to those purchases (three to five years), but in the last two decades they’ve responded to increased interest in “technology substitution” by defining what is in effect an expected useful life of the approach or architecture on which their purchases are based. Over time, enterprises have tried to look further forward than the “useful life” of equipment, thinking that when a piece of equipment reaches end-of-life, it would likely be replaced by a comparable device. However, if the entire approach is now subject to review, an obsolete piece of equipment may not be “replaced” directly, but be part of a broader shift in approach.

What I am seeing from some enterprises is an indicator that the planning horizon is being pushed out, as well as the “useful life” period being extended. That means that enterprises are now sensitive to risks to approach, which means they want to see vendor engagement with the kinds of future developments that could create a change in approach. This is why having a strategy that specifically links to credible shifts in technology and architecture is a benefit.

The obvious question is just how a vendor could create links to credible shifts in technology and architecture. It’s likely the answer to that would be different for enterprises versus service providers, and perhaps even across the various global regions. It’s also likely that the best answer would include things that aren’t particularly to the advantage of the vendors.

Start with a basic truth, which is that pure connectivity is under pricing pressure from both sides. Consumers may well believe “bandwidth hype” more than enterprises do, but neither group is showing buyer behavior that suggests they’d pay more per unit of capacity than they do today. In fact, there are strong signs that they want to pay less. What’s needed to boost tech is simple: credible benefits. Simply increasing bandwidth doesn’t deliver that, particularly for businesses. We need to go beyond.

What lies beyond? That’s a question that could have been addressed two decades ago. In the business space, I noted that we’d seen cyclical changes in tech spending that reflected periods when new tech opened new business benefits for exploitation. We ended the last such cycle in roughly 2000, and we’ve not had one since. Could we have? I think so, and three waves of spending improvement, validated by real government data, seem to back that up. We have an opportunity to answer the “what’s beyond” question today, too. The long-term success of tech in avoiding commoditization depends on doing just that.