Is AI an Extension of Automation? If So, that Could be Bad

I’ve blogged multiple times on AI and ChatGPT, so you may be bored with the topic. Bear with me, because this time I’m going to look at what the real, long-term threat of the technology could be. It may not be as simply scary as it’s popularly portrayed, but in the long term it could perpetuate and even cap a negative shift that information technology and “automation” started.

Think back to the late 1700s. Many of the really good jobs required considerable strength: blacksmiths, stonemasons, and so forth. Others required special skills and dexterity, like cabinetmakers. Then along came machines, and suddenly almost anyone could produce the kinds of goods that specialized, high-strength, high-skilled workers would have produced, and in much greater quantity. The industrial revolution was populizing in two dimensions. It made almost everyone a potential producer, and it made goods in such quantities that almost anyone could afford them.

Now think forward to the 1960s. Computer systems were still fairly rare, but many large companies had them. There was an elite of “programmers” who were suddenly high-value employees because they could tame these new monsters and bend them to the company’s will. It didn’t change everyone’s life, but it did create a new kind of elite. If you really, really, understood how to use computers, you were almost literally in a class by yourself. One big insurance company found that about one person in fifty could be taught to program well enough to be commercially useful.

Since that early computer age, we’ve been making computers faster, cheaper, more effective. A personal computer is now within reach for many, but not necessarily within their grasp. If you really know computers well, a PC empowers you to do things that someone without your skill could do only with considerably more time invested, if at all. Jobs that benefit from computer literacy turn out to be the best-paying because of the productivity multiplier computers offer. Computers also started running machines themselves, the first of what we think of as “robotics”. Automation, meaning the conversion of human tasks into tasks a computer-driven machine can do, started to put the kinds of people that the industrial revolution empowered out of work. And that is the risk of AI.

I do not believe that something like ChatGPT could, even in five years, equal a well-qualified expert in a given field. Remember my two friends, one in financial media and the other in tech media, who said essentially that they believe ChatGPT could do as well as some subordinates they knew? I agree, but not as well as my friends could do. AI is at the low end of a human capability scale in terms of answering questions. One could argue, as my friends seemed to be doing implicitly, that it was “average”. Well, so are most of us, by definition, and what happens if a lot of those “average” people in media or finance are replaced by AI? How do they earn a living?

I grew up in the Pittsburgh area, and when I was back there twenty years ago you could still find both the rusting hulks of big manufacturing and mill operations and some of the workers who’d been employed there. They had great jobs for their time, and I talked with some of the sons of those workers who expected to have the same kind of jobs when they grew up. They didn’t. Manufacturing has been automated and off-shored. A lot of the “blue collar” workers who made the US a powerhouse in the space have been automated away too.

Office workers, “white collar” workers, weren’t threatened…until AI came along. If ChatGPT can write a story on tech, will somebody start a media company that relies on that and doesn’t bother employing reporters? If ChatGPT can write basic financial analysis, do all the low-level positions in the financial hubs of the world get replaced by a browser window? Automation didn’t come for the white-collar types, but AI is coming for many of them.

We talk about job categories being lost as though we were talking about parking spaces. If there’s a limited number in one lot, you move on to another. Consider, though, what happens if AI can replace all those “average” people? Say half the total population, maybe more. Where’s the next parking lot that could handle all those looking for space? How does an economy, a political system, handle a situation where such a big chunk of the worker population can be replaced by AI? The industrial revolution was populizing. It made more people participants in the economy. The tech revolution is depopulizing; it’s making more people redundant.

I’m not qualified to talk about the morality of something like this; I’m sure others will give it a shot. I think I am qualified to say that a society cannot survive the wholesale economic disenfranchisement of a big chunk of its population.

Back when I was a junior programmer, the company that trained and employed me used to give programmers a special “any-time” lunch pass so we never had to wait in line to eat in the cafeteria. I used to see the look in the eyes of those waiting in line as I passed them to eat when I wanted. It wasn’t resentment, it was envy, because at that time most people thought that computers would level the playing field further, create the “second industrial revolution” with the same positives as the first. I wonder what those people would think today, realizing that the people going to the head of the line were working on a technology that could replace them completely.

A more relevant question comes to mind. Suppose that I’d been working on that super-AI concept? A concept that could have likely displaced many, most, or perhaps even all those people in the line for lunch? It would have spared me any hostile glares, but if we assume that other programmers in other companies were working on the same thing, then it raises a very important question. How would those displaced people get the money to buy insurance, or the products or services of those other companies? Without the products and services, could my insurance company employer have paid for the compute power needed for AI, and have paid me and my programming peers? A consumer isn’t just a body; a consumer’s wages and spending form the basic element of the economy…except when they don’t, in which case that body isn’t a consumer any longer.

Most of the stories about the impact of ChatGPT and AI are stupid, click-bait stupid. Many who read them will never live to see the kind of impact that the stories predict, and some of those impacts will never happen. The real risk, the risk of creating mass economic redundancy in a world where class differences are already causing political polarization and warfare, is a long way off. But so are potential answers, resolutions. We need to create solutions at least as fast as we’re creating the problems, and I’m not convinced that’s happening.

Could Juniper be On the Verge of a Seismic Positioning Change?

Over the last two years, I’ve been pretty clear regarding the importance of virtual networking. It’s not only that the use of the cloud, and the expanded use of virtualization and containers in the data center, demands it. SD-WAN is a form of virtual networking that’s exploding as a means of connecting thin sites and cloud components to the corporate VPN. Finally, it’s inevitable that the Internet becomes the dominant means of connecting both consumers and businesses, and the latter application demands virtual networking for traffic segmentation. It also promotes Network-as-a-Service (NaaS) in its broadest sense, which further exploits virtual networking and may well link SD-WAN with the other virtual-networking models.

I’ve also been clear that I believe Juniper has the strongest product line of all the vendors in the virtual networking space. They have their Contrail virtual-network technology, they acquired what I believe to be the premier SD-WAN tool with 128 Technology, and Mist AI gives them a foundation for automation of network operations, something critical in the highly dynamic and essentially fuzzy world of the virtual network. But I’ve also been critical of Juniper’s positioning of their assets in the virtual network space. Innovative products require innovative messaging, and recently there have been some signs that Juniper is getting more serious about singing their virtual-networks song.

Containers have become the go-to application hosting model for most enterprises, both within the data center and in the cloud. Kubernetes is the preferred container orchestration tool, but even large enterprises have concerns about adopting open-source software in its native form, compiling the source into loadable units, and maintaining all the other elements of a container ecosystem. OpenShift from Red Hat is a supported container platform that includes orchestration, and one of the elements of the platform is the Container Network Interface, a plugin that provides containers with virtual-network capability. Juniper had a Kubernetes plugin for Contrail, and now it’s released CN2, which is a virtual-network SDN/CNI strategy for OpenShift.

Juniper has taken steps to integrate CN2 tightly with OpenShift to make it easier to install, and easier to adopt for companies that already have a CNI strategy (which, realistically, is most OpenShift users). This is particularly valuable for companies who are looking to integrate their data center virtual networks with their cloud networks in a hybrid cloud configuration. Given that IBM (Red Hat is an IBM company) is one of the leaders in promoting hybrid cloud strategies for enterprises, this gives Juniper a hook into what might well be the most critical applications of virtual networking, and thus of networking overall.

Juniper also announced its “SD-Branch” strategy, which combines Mist AI management (via the Juniper Mist Cloud) with the Session Smart Routing Juniper acquired with 128 Technology. In this announcement they’ve not only tied in AI, but also the assertion that the explicit connectivity control offered by Session Smart Routing is a key piece of zero-trust security and enhances security overall. Other Juniper security tools can be layered on to this framework. Finally, the SD-Branch bundle includes Marvis Virtual Network Assistant, a tool to provide easy operations support for complex multi-technology networks that include SD-WAN.

These two virtual-network-related announcements were blogged about on the same day, which certainly raises the question of whether Juniper might be looking at creating explicit integration between the two. Such a move would do a lot to unify Juniper’s own story, and also to address the fact that a business network these days is increasingly a composite of the Internet, a VPN, the data center network, and the cloud network. Could Juniper be planning to unite its virtual-network families, and by doing so create a truly universal virtual-network model?

In my earlier blog on NaaS, I pointed out that NaaS couldn’t be simply a usage-priced VPN, in part because the Internet and the cloud foreclosed such a simple vision. CN2 and SD-Branch could be combined to create a true, highly functional, NaaS framework that would embrace current VPN commitments (and even true private networks) and at the same time be suitable for the inevitable shift toward using the Internet broadly for cloud and business connections. Not only that, I think that those capabilities are already in place so it’s just a matter of collecting them into a kind of super-product. Since Juniper is unique in the features it already has, this would give Juniper a major lead in the NaaS space, and I think that “true NaaS” is where networking overall is heading.

The same moves could give some life to Juniper’s Cloud Metro story, which is IMHO the most critical transformation of our time in network infrastructure. As I’ve said in many blogs (in many ways, too!), the metro piece of the WAN is the most critical piece in terms of infrastructure evolution. It’s close enough to the edge to allow for personalization, which makes it a critical place to inject service features. It’s close enough to be reached with low-latency connectivity, which means that all real-time applications that evolve out of IoT, the metaverse, or a combination of the two will likely have to be hosted there. It’s also, as the logical edge of the core network, the place where any optically intensive core transformation will have to take place. In short, if the Internet is really going to become universal, it’s likely the metro deployments that will launch any technical changes designed to facilitate that universality.

You could argue that yet another Juniper blog, this one on Apstra’s importance as a data center automation platform based on intent modeling, is an important metro step. Metro isn’t just networks, it’s also hosting, which means that it’s an edge data center. Could Juniper be looking at Apstra as an element in metro? The Apstra blog was five days before the CN2 and SD-Branch blogs.

Those who have followed my views on Juniper, NaaS, and metro over the last couple years will likely realize that what I’ve said they could do here hasn’t changed much in that time. What may be changing is Juniper’s intent, and intent is the key factor here. We’re on the verge of the biggest service transformation in the history of networking. Therefore, we’re on the verge of the biggest network infrastructure transformation, and the biggest transformation in the stuff that builds the infrastructure. Could Juniper’s announcements be an indicator that it’s really going after this Great Transformation? If so, and if they succeed, then we could be in for a very interesting 2023.

Private 5G: Now Dead or Never Alive?

It’s already getting hard to find positive things being said about 5G, and candidly most of the negativity is well-deserved. Hype doesn’t conquer all; although it does produce more interesting stories, reality eventually impinges. At any rate, Light Reading offered another round in the 5G downside fight, with the story on how the shine is wearing off private 5G. Again being candid, I have to wonder if there was ever any real shine at all.

This particular story seemed to stem from the comments that Verizon made during its January earnings call, where the CFO admitted that “A couple of areas where we are behind versus our expectation [are] the mobile edge compute and 5G private networks”. I don’t think Verizon was alone in not seeing private 5G meet expectations, but (returning to candor) I wonder how many of those who expressed lofty private 5G hopes had their fingers crossed behind their backs.

Back in 2021, which happens to be the year when Verizon executives were claiming a TAM of between $7 billion and $8 billion for private 5G, enterprises were telling me that they didn’t see any more value in that technology than they’d seen in private LTE. The article quotes AT&T as pointing out the fact that private LTE was similarly hyped and similarly disappointed. In fact, more enterprises said that advances in WiFi made private 5G less useful than said it was useful at all. Still, operators, mobile network vendors, and even cloud providers were eager to jump on the technology as a promising revenue source for themselves, and a heady benefit set for enterprise users. Why?

I can’t get on-the-record comments from vendors on things like private 5G, and few comments even off-the-record. However, I did get the sense that a lot of vendors and providers who had been seeing 5G as creating new revenue opportunities were engaged in the common practice of dodging negative proofs. You see a given technology as a way to get positive media coverage, and you dive into it. Maybe some believe it, but probably most in their hearts have doubts. Then, as proof emerges that the positive views were unjustified, you jump to a related area that’s not yet disproved.

OK, you all know how I feel about this sort of hype, but there is perhaps a question to explore in this flood of exaggeration and disappointment. How many is the “some” who actually believe this stuff? Is there self-delusion going on here, or is there simply manipulation? It turns out that both are at work.

Over the last decade, I asked marketing professionals whether they could be more effective pushing something they believed in versus something they doubted. The numbers varied a bit, but generally about three times as many said they were better at marketing things they believed. Most who told me that said that when they believed something they were “more convincing”, and those who could say why that was suggested that people were willing to get more aggressive pushing something they thought was true, because proof to the contrary wouldn’t blow back on them.

The same group of professionals also admitted (by about the same margin) that people who at least appeared to believe strongly in some new technology were more likely to get meaningful roles in marketing it. That suggests that the ability to appear to love something gets you promoted if that something becomes a focus. That combines with the other factors to create organizations who are likely to be cheerleaders for things, whether they really believe them or not. Given that hype gets you editorial mentions and potential sales engagement, that’s probably enough to promote some hype-targeted campaigns, even products.

In the case of 5G, though, the big reason for dodging negative proofs seems to be that everyone really did believe 5G was a revolution. Why not? It had such broad support. Yes, the media led the cheers, but operators, vendors, analyst firms, and pretty much everyone else validated 5G. It was budgeted, after all, at a time when having specific funds available meant a lot. How could something budgeted fail? It couldn’t, and it didn’t, of course. It was just that the model of “success” wasn’t exactly what was expected. 5G turned out to have very little user-visible value beyond LTE. When operators started to pull back on 5G spending, vendors started to look to non-operator buyers to justify what they’d already done, and private 5G got its start.

What, exactly, is private 5G supposed to do? Clearly an enterprise couldn’t deploy a private 5G network as an alternative to public mobile services. Clearly, within a facility, it’s very difficult to see why private 5G would offer much versus WiFi, which is far less costly and more versatile. OK, maybe a campus location could use private 5G where WiFi roaming might be complicated, but wouldn’t that same location do better with public mobile services, including 5G? And after years of standards work, we still don’t have any decisive features to justify private 5G. In fact, it’s hard to find features to differentiate public-network 5G services.

The root problem here wasn’t 5G, or mobile networks, or standards processes, though. It was bottom-up or supply-side thinking. The Field of Dreams mindset leads you to the promised land if “they” actually “come”. If that happy result isn’t a pure accident, if “planning for serendipity” isn’t a great idea, then you have to assume that the question of who will come and why has been asked before the field has been sown. You also have to assume that the value proposition becomes clear before you start expecting a lot of traffic.

This may explain why ETSI just announced the adoption of “Software Development Groups” or SDGs. These, the organization says, will “accelerate the standardization process, providing faster feedback loops and improving the quality of standards.” I think that there’s little question that SDGs could translate standards into something actually available faster than just writing specifications would do, but that doesn’t resolve the private 5G problem or other problems of the same type.

Suppose 5G had advanced at a breathtaking pace instead of a glacial one? Would that have meant that real applications that could benefit from 5G, raise operator revenues, add business benefits, would have emerged faster? Why? Just having an SDG to build code doesn’t mean that the basic mission would somehow change, it could just mean that the failings of the process might be recognized sooner. When we had ample numbers of providers of private 5G technology, did users suddenly jump on it? Nope, so why would they have done so had software been available three or four years earlier?

If you’re asking the wrong question, no amount of efficiency in getting your response is going to overcome the fact that it was the wrong question. You can still plant hopeful corn rather than asking what would motivate people to travel to your field. Software design can start wrong and stay wrong, just as easily as formal standards can.

ETSI’s SDGs can “improve the quality of standards” only by making standards into something that responds to market needs and opportunities. If you subordinate SDGs to standards processes, that’s not going to happen. ETSI has to accept that the problem with standards is the way they’re conceptualized, that needs and opportunities have to be identified first, validated, and only then addressed with standards and software. That this would mean a likely difficult transition from supply-side thinking is obvious, and that the transition could have been made a decade or more ago and was not is also obvious. That demonstrates that issuing a press release isn’t likely to create the transition now. ETSI has to make explicit changes to align initiatives with realistic opportunities. If they do, they might be able to save 6G.

Is NaaS Really Too Expensive or Are We Thinking About it Wrong?

Enterprises are obviously getting concerned about cloud costs, and according to those I’ve talked with, the concerns are greatest because of usage pricing. While it’s possible to get an IaaS instance at a fixed price, things like scalability and even traffic can introduce at least a usage-price component to the picture. It’s this added dimension that tends to create cost overruns and CFO angst.

The same issue may be hitting network-as-a-service, or NaaS, according to research cited in an SDxCentral article. The article also says that research shows that cloud-native enterprises and startups will be the earliest adopters of NaaS. My own contacts agree with the main point about NaaS costs, but with some important qualifications. They do not agree with the second point. In fact, my research suggests that any NaaS use by the groups cited may be more accidental, due more to a tolerance for usage pricing than to any proactive NaaS decision.

It’s always dangerous to generalize, so the old joke goes. Better to say it’s dangerous to make general comments on any hot tech term, because you can bet there’s no consistent definition on which to base comments and analysis. NaaS, to some, is just about expensing networks versus basing them on capital equipment. NaaS, to others, is about being able to define connectivity on a per-user, per-application basis. To another group, it’s about pay-as-you-go, and that’s the only definition under which the thesis that cost is holding NaaS back is valid. For the other models, usage pricing isn’t implicit, and so the argument that cost is holding NaaS back is invalid.

One thing this demonstrates is that NaaS, like pretty much all those other hot tech topics, needs very careful service positioning in order to succeed. Tossing out the term invites misunderstandings given all those possible meanings, and that’s going to create a world of uncertainty among buyers. I’ve found that a buyer, presented with a vague or poorly understood value proposition, will tend to say they fear cost even though cost may not be the root of the problem. If you’re pushing NaaS in a form that doesn’t expose users to usage pricing, you need to say so.

Even if NaaS has some usage pricing component, that doesn’t necessarily expose users to higher costs, or even unpredictable costs. The problem here is that NaaS providers tend to oversell their offerings, perhaps even selling them in situations where NaaS isn’t a good answer, or selling a NaaS model that’s not useful for the specific situation an enterprise presents. A “pure usage” NaaS model without other features tends to be valuable when expected traffic is highly bursty and it would otherwise be necessary to size the network service to the peaks to assure QoE. If there are features to manage per-user/per-application connectivity, then that may add enough security benefit to mean that some usage-price risk can be accepted.

Speaking of that risk, the article states that NaaS can lead to higher costs during “a massive surge in data flows”, and that’s true. It’s also true that NaaS can lead to lower costs when data flows are limited. Enterprises looking at NaaS to save money should (and do, according to those I’ve talked with) expect to see more of the latter than the former, or they have no business being NaaS prospects to start with.
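To make that trade-off concrete, here’s a minimal back-of-the-envelope sketch in Python. Every price and traffic figure is invented purely for illustration; the point is only that a flat service sized to the burst peak costs the same however empty it runs, while usage pricing tracks the average flow.

```python
# Flat-rate service sized to the burst peak vs. usage-priced NaaS.
# All prices and traffic figures are invented for illustration only.

PEAK_GBPS = 10.0                # burst peak the flat service must be sized to
FLAT_PRICE_PER_GBPS = 300.0     # assumed monthly price per Gbps of capacity
USAGE_PRICE_PER_TB = 8.0        # assumed usage price per terabyte carried

def flat_monthly_cost(peak_gbps: float) -> float:
    # The flat service is provisioned to the peak to protect QoE,
    # so its cost doesn't change with actual utilization.
    return peak_gbps * FLAT_PRICE_PER_GBPS

def usage_monthly_cost(avg_gbps: float) -> float:
    # Usage pricing tracks what's actually carried, not the peak.
    seconds = 30 * 24 * 3600
    tb_carried = avg_gbps * seconds / 8 / 1000  # gigabits -> terabytes
    return tb_carried * USAGE_PRICE_PER_TB

for avg in (0.5, 1.0, 5.0, 9.0):  # average-utilization scenarios, in Gbps
    print(f"avg {avg:>3} Gbps: flat ${flat_monthly_cost(PEAK_GBPS):>7,.0f}"
          f" vs usage ${usage_monthly_cost(avg):>7,.0f}")
```

With these made-up numbers, usage pricing wins handily when the average is a small fraction of the peak and loses badly when flows surge, which is exactly the pattern described above.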

What about the contention that startups and cloud-native enterprises are most likely to be early NaaS adopters? The big problem here (as you likely guessed) is that we have our vaguely defined NaaS term coupled with an at-least-as-vaguely-defined cloud-native term. What are the probabilities that two UFOs will collide? How would you know without knowing the probability of even one UFO existing? But let’s press on to look at the technical side.

Realistically, cloud-native applications are ones that exploit the cloud’s unique benefits, which are things like scalability, agility, and resilience. It’s hard to see how you could use cloud features to maximize those benefits without ending up with cloud services that are themselves usage priced. Would a “cloud-native” enterprise who accepted usage-priced cloud services then reject NaaS because it was usage-priced? I don’t think so. I also wonder whether the workflows associated with a cloud-native application would even benefit from a traditional usage-priced NaaS model, since each component of the application would likely have a fairly static traffic flow; it would be the number of components that would generate more traffic.

How about the notion that startups would be more likely to adopt NaaS? I think you have to assume that the startups in question are OTT startups, since it’s hard to imagine a startup business in a traditional retail or manufacturing sector being much more than a single site, and thus not much of a consumer of any network service other than the Internet. OTT startups could be NaaS consumers to avoid having to capitalize network equipment, but OTT services are delivered over the Internet and so it’s far from clear why an OTT would need any “service” other than the Internet, in NaaS or in any other form.

What is the big barrier to NaaS? I think it goes back to the definition, and if we translate that into technical terms it means that NaaS needs some specific baseline feature set. Enterprises can buy VPN services not just because they know what they cost, but also because they know what they are. What is NaaS?

Realistically, NaaS is achieved by using the Internet as the transport network. NaaS is SD-WAN, in other words, at the core technology level. Given that, there’s no reason to think that usage costs would be a barrier to adoption, since the Internet isn’t priced that way. That leaves two questions. First, should we interpret the so-called price resistance to NaaS as a rejection of paying for anything other than Internet connectivity? I think that’s a “Yes!” Second, what NaaS features could then be added to an Internet-based NaaS to create a truly compelling service? That’s a question I think we need to work on answering, and I’ll look at it down the line.

Cloud Native, 5G, and The Real Relationship

What could be better, after talking about the pitfalls of cloud-native development, than to talk about how cloud-native and 5G are locked in a deadly hype embrace? There are a couple of stories to reference; one that says the 5G emperor has no clothes, and a second that asks whether 5G can really offer an alternative to wireline broadband. It sure looks to me like the media side of the networking space is struggling to get a non-hype vision of 5G but doesn’t quite know what to accept and what to reject. So let’s try to clothe the 5G emperor ourselves.

The first of the stories wants to dress up the 5G theme of faster services to users. The claim is that “In order to run cloud-native applications over 5G, they need advanced features like network slicing, microservices and user plane function (UPF) which are only supported by 5G core networks – not 4G LTE.” That might seem logical on the surface, since cloud-native sort-of-begets microservices, but I don’t think that’s true.

An application that uses a network service uses the features that are visible. Cloud-native applications can be, and are, written all the time to run over not only 5G Non-Standalone (NSA) but even 4G, for the simple reason that all most cloud-native applications need is connectivity. In fact, nearly all cloud-native applications run without any “Gs” at all, because they run on wireline broadband.

That’s not all. The UPF feature of 5G is just the on-ramp to the “data network”, which is usually the Internet. Network slicing is invisible to a 5G user except as a means of partitioning a 5G network by class of service or for MVNO operation, neither of which is required for applications to run. Microservices, or the lack of them, in 5G Core would be opaque to service users because they’re internal to the implementation. What is true is that any true 5G-specific applications are likely to be edge computing applications, and edge computing might be promoted by the deployment of 5G Core, provided we assume that both 5G Core and 5G NR are feature-hosted in the metro.

But while I disagree with the justification offered, I agree with the conclusion that the 5G emperor is virtually buck naked. That’s the case for one simple reason that has nothing whatsoever to do with the nitty gritty technology. The reason is that you can’t develop a standard to guide the future of application development without a very clear and specific notion of what application you’re trying to encourage.

Let’s imagine a neanderthal community in a lovely valley somewhere in Europe. The community has been slogging through the woods, up and down the valley and adjacent hills, for generations. Then we have, born into the community, a neanderthal standards-writer. He looks at the river and says “if we had locks, boats could move upstream more easily.” Or he says “if we had paved roads, we could move our goods using carts at a much faster pace.” The problem is that neither boats nor carts have been invented, so before either of these admittedly magnificent innovations can be exploited, we’ll likely need millennia of progress to develop what does the exploiting.

The problem with 5G isn’t that there’s no 5G Core, or even that there is one that’s not being exploited. The great majority of 5G users can’t even tell they’re on 5G, much less whether there’s a 5G Core inside whatever it is that they’re on. It’s that all the features of 5G Core were noodled without any vision of utility, so (surprise, surprise!) there is no utility to be had.

OK, so how about the question of using 5G (as fixed wireless access or FWA) for home/business broadband? According to the second article, this is another naked-emperor 5G fable. “If you don’t have access to cable or DSL, a 5G home internet service will likely prove much better than the 4G LTE you may be depending on now. If you have access to an affordable traditional internet service, you may want to think twice before abandoning it for a 5G alternative.” I have to disagree with that one too.

The article points out that there are really two forms of 5G, the millimeter-wave stuff and the traditional mobile-network-spectrum stuff. Depending on which option is available to you, the speeds could be anywhere from about 30 Mbps download up to near-gig speeds. Some areas can only get 4G LTE speeds. However, people who have tried it tell me it’s always faster than DSL, and most say that it’s competitive with basic cable broadband services, though not as fast as the high-end tier in well-served areas. So 5G FWA, when available, is likely to be quite decent.

Maybe the most important point about 5G FWA’s value is that it’s the fastest-growing home broadband technology. The main reason is that it has the lowest “pass cost” of any broadband delivery option. Pass cost is the cost associated with getting broadband infrastructure to a point where customers can be served if they sign up. Fiber, CATV, or other physical-media technologies require that the operator deploy the media into neighborhoods and then connect to subscribers, and the cost of that initial deployment is significant, and has to be accepted before anyone can sign up and contribute revenue.

Not only is pass cost low for FWA, but connect costs are even lower. Nobody has to wire anything to the home to get service. Most FWA installations are plug-and-play, with the customer simply mounting an antenna somewhere suitable and connecting to it. The combination of low pass cost and low connect cost means lower service costs.
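A minimal sketch of the arithmetic may help; every dollar figure and the take rate here are assumptions invented for illustration, not actual operator economics. The key is that pass cost is spent on every home passed but recovered only from the fraction that subscribe.

```python
# Illustrative pass-cost/connect-cost arithmetic per subscriber.
# All dollar figures and the take rate are invented assumptions.

def cost_per_subscriber(pass_cost: float, connect_cost: float,
                        take_rate: float) -> float:
    # Pass cost is incurred for every home passed; only take_rate of
    # those homes subscribe, so it's amortized across the takers.
    return pass_cost / take_rate + connect_cost

fiber = cost_per_subscriber(pass_cost=800.0, connect_cost=600.0, take_rate=0.35)
fwa   = cost_per_subscriber(pass_cost=50.0,  connect_cost=100.0, take_rate=0.35)

print(f"fiber: ${fiber:,.0f} per subscriber")  # roughly $2,886 here
print(f"FWA:   ${fwa:,.0f} per subscriber")    # roughly $243 here
```

Even at the same take rate, the physical-media option costs an order of magnitude more per subscriber before any revenue arrives, which is why low pass cost translates so directly into lower service prices.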

Another benefit of this lack of physical media is that it can open broadband competition in areas where anti-competitive practices have foreclosed it. Many residential subdivisions have cut deals with wireline providers (cable companies in particular) that restrict access by other providers in return for the selected provider pre-deploying (and perhaps kicking in some developer dollars). FWA jumps over this and makes a competitive option available.

There is an availability barrier for 5G FWA, just like there is for other home broadband technologies, but the barrier is coming down for more and more users as competing operators (Verizon and T-Mobile in the US, for example) open up more areas. The sweet spot for 5G FWA is the suburbs, where tall buildings aren’t likely to interfere with the signals and where other broadband options can be limited by the pass costs. It’s not unreasonable to believe that 5G FWA will be the top option for home broadband in price and performance in many areas by the end of 2023. That doesn’t sound like a technology in trouble.

Technology coverage these days tends to follow a three-phase trajectory. First, everything is the Last Hope of Civilization As We Know It, hyped to the sky. Then it’s a cesspool of lies, and finally it’s just old news, ignored forevermore. 5G is in the second phase, where stories compete to find new reasons why it’s worthless. It isn’t that, but it’s also not the “everything” that the first step in our 5G trajectory had claimed. What it really is may be hard to determine, though, because once it enters that last phase you’ll never hear anything about it again.

The Uneasy Relationship Between ChatGPT and Search

I’ve already blogged about my own experience with the wildly popular ChatGPT, but that analysis opens a question that was also raised recently online, which is whether ChatGPT could be a threat to search in general and Google/Alphabet’s revenue model in particular. So could it? The answer is related to some of the points I made in that earlier blog, but at the same time it depends on other broader points.

The key point of the article I reference is “However, with ChatGPT’s introduction, Google could quickly be pushed into irrelevancy as users throng for more simplistic answers than indexed pages.” A quest for more simplistic answers is the root of the issue. Large-language-model AI engines like ChatGPT rely on creating composite views by analyzing articulations. That means that they can easily digest a vast amount of material and frame a free-text summary of what they’ve found. You can see how that works in the examples I provided in my blog. The result is what you could call “guided consensus”, which means text that’s produced by constraining all the relevant articulations based on the query. Queries like “5G’s potential” and “5G’s potential based on CIMI Corporation views” are likely to yield results that collect articulations under different constraints.

It’s this concept of consensus and articulation that’s important. I characterized ChatGPT’s output as being shallow, lacking insight. I heard from two long-time friends, one a Wall Street type and one a former editor-in-chief of a tech publication, and got surprisingly similar comments. The former noted that the ChatGPT results would have been considered good analysis on Wall Street, and the latter that they’d have been happy had reporters created the story that ChatGPT did. What this says is that “shallowness” isn’t just an attribute of the kind of stuff students might want to generate, it’s perfectly acceptable in many other spaces because mass market consumption doesn’t depend on insight quality.

I could have done a search on the same terms I sent to ChatGPT, and had I done that I’d have gotten (as the article suggests) a list of indexed pages that hit the key terms. From those, I could have produced a similarly shallow piece, but with more work needed to check each link, develop a viewpoint, and then turn that into text. But I could also have done the same search, looked deeper, and created insight. What my two friends demonstrated was that even in fields we might think are technical, insight takes second place to speed of producing pablum. That means that all those who want facile shallowness would in fact be likely to flee from search to ChatGPT. So search and Google are doomed? Not so fast.

First, nobody is going to do a ChatGPT-like tool without any hope of financial gain. They’re even less likely to deploy it at such a scale as to threaten search, because the cost of running all the queries would be daunting. The future of public, open chatbots is the present of search, meaning ad sponsorship. I can see Google, Microsoft, and others tuning their own chatbots to serve ads. But even that wouldn’t be likely to replace search, not because the results would be shallow (that’s fine with most), but because they’d necessarily be tainted.

You can stick ad-sponsored links in search results easily (and everyone does that, despite the fact that most people ignore them). You could also stick them in a chatbot text response through a YouTube-like pre-roll before the user sees the response. How many ads would a user be willing to watch, though? I think one might be a stretch, but surely not more than that. How many are offered in search results? A quick test on my part yielded an average of six on the first page and five on the second. Eleven to one? Unless advertisers paid more for ChatGPT-type ads, I don’t think there’s enough revenue there to build out a mass-market infrastructure.
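Here’s that arithmetic as a tiny Python sketch. The ad counts come from my informal test above; the CPM figure is a pure placeholder, since only the ratio between the two models matters.

```python
# The eleven-to-one arithmetic: how much more a chatbot ad would have to
# earn per impression just to match the ad inventory of a search session.
# The CPM value is an arbitrary placeholder; only the ratio matters.

search_ads_per_query = 6 + 5   # first page + second page, per the informal test
chatbot_ads_per_query = 1      # a single pre-roll is likely the ceiling
assumed_cpm = 2.00             # hypothetical revenue per 1,000 impressions

search_rev = search_ads_per_query * assumed_cpm / 1000
chatbot_rev = chatbot_ads_per_query * assumed_cpm / 1000

print(f"revenue per query: search ${search_rev:.4f} vs chatbot ${chatbot_rev:.4f}")
print(f"a chatbot ad would need to earn {search_rev / chatbot_rev:.0f}x as much")
```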

So the real pressure of ad sponsorship would require that the results be biased in favor of an advertiser. Ask ChatGPT who makes the best routers, and it will reply with a disclaimer that the question is complex, and a list that’s not exactly complete or useful. Might a router vendor buy a mention in that sort of result, maybe even to be featured? It’s hard to see how that wouldn’t be a natural result, because without that capability the value to advertisers would be minimal and the revenue to the chatbot provider wouldn’t cover costs, much less create profits.

Suppose that advertisers could buy their way onto that list. Ask “who makes the best IP router” and it responds with a list whose order depends on the advertisers’ contributions. Suppose somebody buys the top spot and its router turns out to be junk, even a fraud? At the very least, this is going to create a barrage of bad publicity for the chatbot, and it’s not beyond the realm of possibility that it would generate lawsuits. There is a difference between doing a search for “best IP router” and getting a list of mentions of the phrase, and getting a text answer to my chatbot question. The former is research, and the latter looks for all the world like an opinion. That difference is what makes ChatGPT valuable, but also what makes it a potential risk.

A bigger question, though, is whether consumers would trust something like ChatGPT if its generated text was determined in part by payments from advertisers. I asked an enterprise CIO who contacted me on my original ChatGPT blog, and he said “Why would I? I can get the vendors to lie to me face to face if it’s lies I want.” OK, that’s cynical in a sense, but it’s also true. Would you pay for a report whose contents were determined by the highest bidder?

The biggest question, though, is whether anyone would advertise via chatbot given all of this. Students writing papers or taking exams aren’t very good prospects for sales, after all. Would Street analysts publish research knowing that some of the material was determined by the companies involved and not objective? Would a tech publication use a story about a product announcement that was biased because the company involved paid for prime handling? Any doubts in these areas could contaminate the ad sponsorship model, which would then force chatbots to charge for their results. We know how that would go; everything on the Internet (according to user-think) is supposed to be free.

I think that the question of how a chatbot can be profitable is the biggest question in chatbot-based AI. If ad sponsorship doesn’t work, then I think there’s little chance that chatbot use would rise to the point where it threatened search or Google. If it does, then of course Google and others in the search game would simply deploy a kind of chatbot front-end to their current search process and within it, adopt the workable ad sponsorship model. Search already creates the mass of data with web crawls, and the constraints via the search terms. A little add-on and it could generate reports and textual answers, and still be the kind of search we’re used to. A third-party chatbot would have to pay to collect all that data, and then process the user queries and deliver responses at the volume needed to keep people engaged. How often would you use a search engine if you couldn’t get on it because it was overloaded?

So forget chatbots? No, because it may be as interesting to see how providers like OpenAI think they can monetize this sort of thing as it is to see what tests it can pass. Because monetization is the only test that will really matter in the end.

Cloud-Native Models and Cloud Spending Impacts

Enterprises have been telling me that they’re finding the cloud to be a lot more expensive than they’d expected. That has a lot of potential consequences, not the least of which is the risk to cloud provider revenues, or to the careers of a lot of developers. What’s really interesting to me is that the great majority of these enterprises are only now embarking on the fact-finding piece of their story. Why is the cloud more expensive? Just bad expectations, bad application design, poor feature choices, or what? A bit of all of the above.

One of the big problems is the expectation that “moving everything to the cloud” was the stand-tall-at-your-professional-association-meetings champion strategy. Most enterprises admit that they believed that theory, at least to a degree. Since moving everything to the cloud would be stupid if it were more expensive, the theory begat the notion that the cloud was automatically cheaper. That limited the pushback on cloud projects, and a lot (as much as a full third of all projects, according to enterprises) of the stuff that got done should never have been approved.

Obviously, at some point, enterprises started to get smart on this. Moving everything to the cloud gradually evolved into creating cloud-hosted front-ends to the “everythings” that were apparently destined to run in the data center after all. The goal was to gain as much of the agility and resiliency benefits of the cloud as possible, without pushing stuff into the cloud that was simply not economical to run there. This front-end role had the effect of improving cloud economies for those who adopted it, but it also meant that the “front-end” cloud piece of an application was developed independently. Cloud software was new software, and this realization in turn gave rise to one of the cloud’s many fuzzy concepts, that of “cloud-native”.

InfoWorld did a piece on this recently, which includes the traditionally aggressive analyst forecast that “by 2025 more than 95% of application workloads will exist on cloud-native platforms”, up from 30% in 2021. Even leaving aside the question of what a cloud-native platform is, this seems a bit unrealistic to me, but let’s face it, even what an “application workload” might be is a question in itself. The article identifies risks for cloud-native development, one of which is cost overruns.

The value of risk-and-benefit pieces on cloud-native hinges on how you define it. InfoWorld provides a definition of “cloud-native development”, which is “…the process of designing, building, and running applications in a cloud computing environment. It leverages the benefits, architectural patterns, and capabilities of the cloud to optimize performance, scalability, and cost efficiency. In other words, we deploy everything to provide cloud-like capabilities, no matter where it runs, cloud and not.”

This adds up to the declaration that “cloud-native” design is about more than the cloud, that it’s about some agile application model that permits but doesn’t mandate cloud deployment. Can we somehow derive a vision of that model? Can we tell how it would impact cloud spending? We have to somehow link this model to the front-end/back-end approach of today, both technically and financially.

My own modeling of cloud opportunity says that the broad cloud opportunity for front-end missions is not 95% of workloads in the cloud, but a maximum of perhaps 55%, and likely more like 45%. But you still don’t have a clear technical picture of what that front-end model looks like, and it turns out that’s the biggest issue in cost overruns for the cloud.

It’s often helpful to look at the extreme ends of a range of things to see what’s going on overall. The “cloudiest” end of cloud-native is functional computing, where applications are divided into microservices that are loaded and run as needed. Since application features are totally decomposed, you can make changes to one without impacting the others, and since individual microservices are small pieces of code, testing is pretty easy. The other end of our cloud-native spectrum is containerized applications, which are much more monolithic-looking. They may be scalable and resilient, but only because they were authored that way, not because it’s an intrinsic property of the architecture as it is with microservices and functional computing. Let’s look at this scale to understand how it impacts cloud costs.

If applications slide toward the functional end, they are almost always more expensive to run. True functions are loaded when needed, meaning that they’re usage-priced, and the per-usage costs add up quickly. Even more persistent microservices are more expensive because cloud instance costs are applied per component, and they add up there too. Functional computing and microservices are a great strategy for stuff you’re not going to push a million messages through, but not so good when that might happen. The InfoWorld article’s comments on cloud cost overruns focus on this particular model, IMHO.
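A quick sketch of why that is, with prices that are pure placeholders (real cloud pricing is more complicated, with memory-time tiers and request charges): per-invocation pricing is trivially cheap at low volume and overtakes a persistent instance as message counts climb.

```python
# Illustrative cost crossover between the two ends of the cloud-native scale:
# usage-priced function invocations vs. a persistent container instance.
# Both prices are invented assumptions, not any provider's actual rates.

PRICE_PER_MILLION_CALLS = 15.0   # hypothetical functional-compute price
INSTANCE_PER_MONTH = 60.0        # hypothetical always-on container price

def functional_cost(calls_per_month: float) -> float:
    # Functional computing charges per invocation, so cost scales with volume.
    return calls_per_month / 1e6 * PRICE_PER_MILLION_CALLS

for calls in (10_000, 1_000_000, 10_000_000, 100_000_000):
    f = functional_cost(calls)
    winner = "functions" if f < INSTANCE_PER_MONTH else "container"
    print(f"{calls:>11,} calls/month: functions ${f:>9,.2f} "
          f"vs container ${INSTANCE_PER_MONTH:.2f} -> {winner}")
```

With these made-up rates the crossover sits in the low millions of calls per month, which is the “million messages” intuition in the paragraph above.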

As you slide toward the center of our range, you’re creating larger software components, so perhaps we should think of this part as being featurized computing. You model software as a set of features, which might in turn be made up of functions/microservices but are pre-assembled into a load unit. This means that your code is still agile, but it also means that you have to scale and replace entire features rather than little microservices. It also means that it’s going to be cheaper.

Keep sliding, now toward the container end of the scale, and you start to see a model that looks very much like the way you’d build applications in the data center. Containerized applications often look very much like “regular” data center applications, so what we’re seeing is a further collectivizing of software functions into a bigger load unit. However, you can still scale and repair/replace because the software is designed to allow that. Some developers tell me that the bigger “features” can even be made to be as replaceable as microservices if you use some back-end database state management rather than holding data within the load units. Obviously this would be the cheapest of all to run.

The problem I see in all of this is that we’ve conflated two things that are actually very different. One is the modular structure of the application. Anyone who’s ever done modern programming already knows that you don’t write huge monolithic programs, you write classes or functions or subroutines or whatever you want to call small functional elements. You then collect them into what I’ve been calling a “load unit”. The key to true cloud-native is to think and write in terms of microservices. The key to optimum cloud costs is to load in terms of collections of the components you’ve written, selected and designed to be as scalable and resilient as possible and maintaining the ease of change inherent in the notion of a logic hierarchy rather than a monolith.

The real problem with cloud cost management at the software level is that enterprises aren’t being encouraged to think this way. They think development units equal deployment units, that you can’t develop microservices and yet deploy persistent containers. That forces a trade-off between optimal development and optimal execution, and the trade-off isn’t necessary. You can write software one way and deploy all along my scale of options. Depending on where you expect to run the software, you assemble the load units differently.
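Here’s a minimal sketch of the idea. The module names and groupings are hypothetical, and a real build pipeline would be bundling code rather than joining strings, but it shows how one set of development units can yield different deployment units per target.

```python
# One codebase of small development units (modules), packaged into
# different deployment units ("load units") depending on the target.
# Module names and groupings are hypothetical, for illustration only.

PACKAGING = {
    # fully decomposed: one module per load unit (functional computing)
    "functional": [["auth"], ["quote"], ["rate"], ["bill"], ["notify"]],
    # mid-scale: related modules pre-assembled into features
    "featurized": [["auth", "quote", "rate"], ["bill", "notify"]],
    # container end: everything collected into one bigger load unit
    "container":  [["auth", "quote", "rate", "bill", "notify"]],
}

def build_load_units(target: str) -> list[str]:
    # In a real pipeline this step would link/bundle code; here we just
    # name the resulting load units.
    return ["+".join(group) for group in PACKAGING[target]]

for target in PACKAGING:
    print(f"{target:>10}: {build_load_units(target)}")
```

Nothing in the source changes between targets; only the packaging step does, which is the separation of development and deployment units the paragraph argues for.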

So does this allow you to “move everything to the cloud”? No, of course not. At some point in the way application logic naturally flows (input, process, output; remember?) you reach a point where you can’t really scale or replace things easily, where data center techniques work best. Or a point where database costs, or access costs, make cloud hosting way too costly. But with the separation of development and deployment units, you can restructure your application for wherever that point happens to be. In my model, that’s what gets you from 45% in the cloud to 55% in the cloud.

Applications designed this way can be assembled and tested in various groupings, and the statistics associated with the tests can be used to get a pretty good handle on what the structure will cost when it’s deployed in the cloud. In any event, it can be deployed and tested there. You should always go with the most concentrated structure of load units that will meet business goals, because every iota of agility and resiliency you try to add beyond that point will raise costs without raising benefits.

This isn’t how we look at cloud applications today, but it’s how we should be looking at them.

Is the New Mobile Modulation Strategy, OTFS, a Game-Changer?

Spectrum is expensive, and as this Light Reading article points out, anything that promises additional capacity for a given chunk of spectrum sounds good. The current focus is orthogonal time frequency space (OTFS) modulation, which some are promoting as a fundamental technology for (gasp!) 6G. But is sounding good enough? Is this yet another way of pushing 6G hype just as people have finally decided that 5G is over-hyped beyond redemption? You’ve got to start by asking what OTFS-generated extra capacity would actually be used for.

Roughly doubling the capacity of current spectrum could mean doubling the potential per-user connection speed. I don’t have a shred of doubt that will be one of the big PR hooks sunk into this debate, because “faster” always sounds better…even when it isn’t. 5G demonstrated that given the essential limitations of what we can do with mobile devices, users may not see any difference between a “fast” and a “normal” connection. The one place where I think we could argue that OTFS could contribute would be FWA, where it could offer a better Internet connection to more people than the older 5G technologies.

Connecting more devices is another question mark. In most markets, the number of smartphone users has plateaued, so the only way that “more devices” becomes relevant is if we assume IoT connections would explode. They probably will, but not IoT connections using mobile technology. Recall, too, that the notion that 5G would create an IoT land rush was one of the oft-cited justifications for 5G, and it didn’t happen.

So how about the value of OTFS in reducing the cost of spectrum? If you could do twice as much per unit of RF, couldn’t you buy half as much spectrum? That sounds logical too, but it ignores two basic truths. The first is that most of the suitable spectrum has already been bought. Does anyone think that the companies who bought spectrum at auctions could go hat-in-hand to the governments involved and get a rebate because they don’t need as much?

But couldn’t they sell it off? Wouldn’t OTFS allow more competitors to acquire (directly or third-party) spectrum, increasing the number of players? Yes, for sure it could, but you have to wonder why those who already have spectrum would sell it off to facilitate competition.

Finally we have the “high-mobility” justification, and here at least we have a glimmer of value. “High mobility” here really means “moving fast”, faster than an automobile could drive and faster than most trains. The problem with really fast-moving users is that they create the Doppler effect on signals, and the faster they move the more likely it would be that the signal couldn’t be decoded properly. OTFS is much more tolerant of that.
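For a sense of scale, the classic Doppler formula (shift equals speed divided by the speed of light, times the carrier frequency) can be run for a few speeds. The 3.5 GHz carrier here is just an illustrative mid-band choice.

```python
# Doppler shift f_d = (v / c) * f_c: why "high mobility" stresses
# current mobile modulation. Speeds and carrier are illustrative.

C = 3.0e8            # speed of light, m/s
CARRIER_HZ = 3.5e9   # a typical mid-band 5G carrier frequency

def doppler_shift_hz(speed_kmh: float) -> float:
    # Convert km/h to m/s, then apply the Doppler formula.
    return speed_kmh / 3.6 / C * CARRIER_HZ

for label, v in [("car", 120), ("high-speed train", 300), ("jet aircraft", 900)]:
    print(f"{label:>16} at {v} km/h: ~{doppler_shift_hz(v):,.0f} Hz shift")
```

A car produces a shift of a few hundred hertz, a jet roughly three kilohertz, and it’s at those higher shifts that conventional modulation starts to struggle and OTFS’s tolerance becomes interesting.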

But what, besides very fast aircraft, really challenges current mobile technology? Nothing, and unless we believe we’ll all have personal superfast flying cars in the near term, we can assume that we’d be packed into a commercial aircraft. Given that, couldn’t we provide a single fast link, perhaps an OTFS satellite, to the aircraft itself, then WiFi to the passengers? This would have the added benefit of making the new approach compatible with current smartphones.

And so we introduce the question of phones. In order to use OTFS you’d need a specialized smartphone/device capable of receiving and sending it. The phone manufacturers would love that, but the problem is that unless there’s a real benefit to the average smartphone user, OTFS wouldn’t likely push everyone to upgrade their phones in the near term. That would mean that OTFS spectrum might have to be new spectrum, so the older devices could still be supported.

It seems to me that we have two possible OTFS scenarios on tap. In one, we get limited OTFS deployment for the FWA and high-mobility applications, but we don’t push it aggressively for the average smartphone user. The other is that we push OTFS, via the 6G connection, as the next step in mobile service evolution.

We’re already talking about 6G, absent any clear technology decisions relating to it, much less actual deployment. That’s happening because we’ve used up 5G as a way of generating clicks and ad serves to users. So do we really believe that the same thing won’t happen to 6G? There are realistic missions for things like the only relevant 5G Core feature, network slicing, but who wants realism? Everyone wants clicks, so we shouldn’t expect 6G realism of the kind our first option would require.

But even setting my hype cynicism aside (reluctantly, of course) there’s the question of whether those missions would really justify any deployment. We have FWA now using 5G, and we have satellite broadband for thin areas and fiber for higher demand densities. What niche would an OTFS FWA service fill?

So are we left with our second option, making OTFS and 6G the explicit next step in mobile services? It would seem so, but that has its own challenges.

Top of the list is the inevitable delay in getting something out there in the real world. How long did it take for real 5G to evolve? I think that given our 5G experience, we could reasonably say that 6G could be real by about 2028 and pervasive by 2030. Why then would telcos be tossing money at the technology now? Are the niche applications so interesting? I don’t think so. We have here another example of how Field of Dreams leads inevitably to UFO-think.

Operators don’t want to believe that they can’t somehow go back to Alexander Graham Bell and Thomas Augustus Watson. What they want is a technology that can promise that, which is very different from one that can certainly deliver it. For one thing, there is no technology that could do that, but for another, what the telcos really need now is the promise, something they can use to help buoy up their stock for a while. 5G did that, but its feet of clay are becoming clear even to Wall Street. What’s after “5” if not “6”? And 6G, being a current glimmer in a standards-writer’s eye, has the advantage of a UFO. Since it’s not likely to land immediately and present itself for inspection, you can assign any properties to it that you find helpful.

I’m not saying that OTFS isn’t a good idea (it is), but it’s an idea that isn’t likely to create any near-term payoff for operators. That’s bad, because they need things with a near-term payoff. You can’t ask for subsidies from big tech on the one hand and, on the other, focus your attention on technology shifts that cannot possibly alleviate your profit-per-bit problem. The first requirement any telco asking for subsidization in any form must meet is to do everything possible to put its own house in order. I don’t believe that OTFS can help with that, and in fact focusing on it likely perpetuates a service focus that’s destructive to profits in the long run.

Can ChatGPT Replace Us All?

I doubt there’s anyone in tech, and perhaps any parent, who hasn’t heard of ChatGPT. It’s often characterized as a “robot” that can do your homework, run your business, and so forth. I’ve said in the past that I believed its responses weren’t all that sophisticated, but I didn’t offer any specific proof. Now I want to try that.

What happens if ChatGPT writes a blog on 5G? I asked it to “write a 1000 word essay on why 5G technology will not increase telco revenues” because I believe that to be true, and I wanted to see whether it would muster relevant facts. OK, I wanted to see how many of the points it made were points that I might have made, or at least would agree with. Let me start with what it came up with, in italics, and insert my own views in normal text.

5G technology is the fifth generation of mobile networks, and it promises to deliver faster internet speeds, lower latency, and more reliable connections than its predecessor, 4G. While 5G has the potential to revolutionize many industries and bring about new, innovative use cases, it is unlikely that it will significantly increase telco revenues.

One reason for this is that 5G is a capital-intensive technology, requiring significant investments in new infrastructure, such as small cells and base stations. These investments will likely decrease telco profits in the short term, as companies will need to spend large amounts of money to upgrade their networks. Additionally, competition in the telecommunications industry is fierce, and companies may not be able to recoup their investments through increased prices for consumers.

All of this is true, but it ignores the fact that the same competitive forces cited actually compel the telcos to deploy 5G. Imagine, given all the media hype about the consumer benefits of 5G, what would happen to an operator that decided not to deploy. It also ignores the fact that 5G offers telcos the benefit of higher user density per cell, which is important to them in managing cell density and costs.

Another factor that may limit telco revenues from 5G is the limited availability of spectrum. Spectrum is the invisible airwaves that mobile networks use to transmit data, and it is a finite resource. The limited amount of spectrum available for 5G means that telcos will need to share it with other users, such as the military or weather radar systems. This could lead to increased congestion on the network and decreased capacity, which may limit the potential for telcos to increase their revenues through 5G.

I have to disagree with this, given that spectrum is required to offer cellular services in any form. In any event, telcos have already acquired suitable spectrum in most cases, and spectrum limitations are more likely to impact new competitors; see below.

Furthermore, 5G is also expected to bring about increased competition in the telecommunications industry, as new players enter the market. Companies such as Google, Facebook and Amazon have been investing in 5G-related projects, and they may disrupt the traditional telco revenue streams by offering their own 5G services. These new competitors may offer more innovative and affordable 5G services, which could lead to a decrease in revenue for traditional telcos.

Any 5G initiatives by the new competitors are unlikely to actually be competitive with telcos. Most of these companies are really interested in promoting their tools and clouds as hosting points for 5G elements.

Another reason for this is that 5G will enable new technologies like the Internet of Things (IoT), which will make it possible for billions of devices to connect to the internet. While this has the potential to create many new revenue streams, it also means that telcos will need to handle a much larger number of devices and connections. This will likely require significant investments in new infrastructure and capacity, which may offset any potential revenue increases.

Again, I have to disagree. IoT’s multiplicity of devices is one of the reasons why more users per cell, a 5G attribute, could be important. The problem with IoT is that the great majority of IoT opportunity doesn’t require any form of mobile service at all.

Finally, 5G has the potential to increase the adoption of new business models, such as software-defined networking (SDN) and network function virtualization (NFV). These models will allow telcos to offer more flexible, on-demand services, which may lead to increased revenues. However, they also have the potential to decrease revenues by making it easier for customers to switch between providers.

Feature differentiation opportunity makes it easier to switch between providers? I don’t get this at all.

In conclusion, while 5G technology has the potential to revolutionize many industries and bring about new, innovative use cases, it is unlikely that it will significantly increase telco revenues. The capital-intensive nature of 5G, limited availability of spectrum, increased competition, and the emergence of new technologies and business models may offset any potential revenue increases. While 5G is a powerful technology, telcos will likely need to look to other sources of revenue to make the most of it.

The right conclusion, but the wrong reasons.

OK, I can’t speak for all of you, but I think that the ChatGPT response was shallow, incomplete, and sometimes inconsistent.

Suppose I change the question to add “based on CIMI Corporation views”? Here’s what I get:

It is widely believed that the rollout of 5G technology will lead to increased revenues for telecommunications companies. However, this assumption is not necessarily supported by all industry experts. The CIMI Corporation, a leading market research and consulting firm, has published a report arguing that 5G may not significantly increase telco revenues.

One of the main reasons for this is that the cost of deploying 5G networks is significantly higher than that of 4G networks. The deployment of 5G requires a much denser network of small cells, which are significantly more expensive to install and maintain than traditional macrocells. Additionally, the cost of acquiring and maintaining the necessary spectrum for 5G is also much higher than for 4G.

Another factor that may limit the potential for increased revenues from 5G is the intense competition in the telecommunications market. With many companies vying for a share of the 5G market, prices for 5G services are likely to be driven down, which would in turn limit the potential for increased revenues.

Furthermore, CIMI Corporation argues that the increased speeds and lower latency of 5G may not necessarily lead to increased demand for mobile data services. While 5G may enable new use cases such as virtual reality and autonomous vehicles, it is not clear whether these new use cases will drive significant demand for mobile data services. Additionally, many consumers may not see a significant difference in their day-to-day experience with 5G as compared to 4G, which may limit the potential for increased revenues from 5G services.

Another point from CIMI Corporation is that, 5G is not only a technology, but also a business model, and that telcos need to consider the various business models that will be required to monetize 5G services. While traditional business models such as selling mobile data plans may still be relevant, telcos will also need to explore new business models such as offering network slicing, edge computing, and other value-added services. However, it is uncertain how successful telcos will be in monetizing these new business models, which could limit the potential for increased revenues from 5G.

Overall, while 5G technology has the potential to enable new use cases and improve mobile data services, it may not significantly increase telco revenues. The high costs of deploying 5G networks, intense competition in the telecommunications market, and uncertainty surrounding the demand for 5G services are all factors that may limit the potential for increased revenues from 5G. Additionally, CIMI Corporation argues that telcos need to consider the various business models that will be required to monetize 5G services.

OK, I’m flattered by how ChatGPT characterized CIMI Corporation, and somewhat impressed that it apparently was able to actually identify and include things I’ve said. Interestingly, when asked to provide information on CIMI Corporation or me, it responded that it had no information available.

Some references to things I actually said in the past made the report a bit more consistent with my views, and this time it picked up my main point, which was that 5G needed a business model and not just a hope for one. Still, based on my experience here, I wouldn’t be tempted to have ChatGPT write a blog for me. I wonder how many of you would have read, liked, or commented on either of these two ChatGPT blogs had I posted them on LinkedIn?

You might wonder how this squares with the reports that ChatGPT can pass a bar exam or be used by students to write reports that, presumably, would get a decent grade. I did some other fiddling, stuff I won’t bother to repeat here, and this is what I learned.

You can ask ChatGPT a specific question in many, many, fields and get an acceptable answer. Ask it a question from a bar exam, a CPA exam, and so forth, and I think you’d likely get a decent answer. Of course, you could also do a search on the same specific question. What ChatGPT seems to me to be doing is creating a free-form answer to a question where a search would give you sources of the answer from which you’d have to pick. I think it could, properly used, do schoolwork and pass exams, but I think that its ability to actually write essays or reports is much more limited…for now.

Surprisingly, my little exercise demonstrated to me why Microsoft would want to invest in this, and why Google and others are suddenly pushing “AI”. Large Language Models like ChatGPT, fed not only text but also perhaps with things like “source credibility”, could create something that would be at least as insightful as the average human-generated document would be on the same topic. That could revolutionize searches, revolutionize education, and create all manner of opportunities and risks.
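To make the “source credibility” notion concrete, here’s a hedged sketch of what credibility-weighted retrieval might look like. The sources, scores, and blending weight are all hypothetical illustrations of mine, not a description of how ChatGPT or any real system actually works:

```python
# A hypothetical sketch: re-rank retrieved passages by an assumed
# source-credibility score before a language model composes an answer.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    relevance: float    # similarity to the query, 0..1 (assumed)
    credibility: float  # assumed editorial/source score, 0..1

def rank(passages: list[Passage], cred_weight: float = 0.4) -> list[Passage]:
    """Blend relevance with credibility; the weight is a tuning choice."""
    return sorted(passages,
                  key=lambda p: (1 - cred_weight) * p.relevance
                              + cred_weight * p.credibility,
                  reverse=True)

candidates = [
    Passage("vendor press release on 5G revenue", 0.9, 0.3),
    Passage("operator earnings-call transcript", 0.7, 0.9),
]
best = rank(candidates)[0]
print(best.text)  # the transcript wins despite lower raw relevance
```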

Don’t expect ChatGPT to do everything you can, in other words, but keep an eye on it over your shoulder as you advance in your career. It may be creeping up on you from behind.

What’s Happening to Business Network Services?

Earnings reports can show subtle signs of problems, and I think that’s the case with the Verizon and AT&T reports for the fourth calendar quarter of 2022. I covered the reports broadly in our first TechsRay podcast, but one specific point needs a bit more development, and that’s the business services trend both operators report.

Business services are important to operators, not only as a revenue source but also because they represent the most credible place to target any enhancements to basic broadband. Net neutrality issues constrain operators’ ability to create differentiating features in consumer services, and consumer marketing isn’t exactly an operator strong suit either. On the other hand, there are no neutrality/regulatory constraints on business services, and the Network Functions Virtualization (NFV) initiative actually targeted businesses for hosted-function-based advanced service features, including security. It stands to reason that this area should be growing as operator initiatives take hold.

Wrong. Both AT&T and Verizon reported a decline in business services in Q4. Neither company talked about the specific reasons for this in their earnings presentations or on the call, but I’ve heard things from both enterprises and operators that sure seem important.

One important point was that there has been some replacement of MPLS VPNs by SD-WAN or cloud-network technologies. It’s hard to get a precise picture of how large the impact is, but running the comments I’ve gotten through my model yields an estimate that this replacement accounts for less than a quarter of the decline. In the majority of these cases, the operator offered the replacement service, though multi-national MSP competition was also a factor.

The remainder of the decline appears to be linked to discounts negotiated by companies, usually driven by competition for current services, or a switch of providers to get lower prices. I couldn’t identify any statistically significant number of users who switched providers based on features rather than on cost.

The thing that should be scaring both AT&T and Verizon (and other telcos as well) is that the majority of the revenue decline isn’t due to things like SD-WAN or cloud networking, even though those two factors could be expected to cut current revenues by almost half if they were adopted everywhere. Given that there are literally billions of dollars at stake here, you’d think operators would have a specific plan to address this space. Remember, there are no real regulatory barriers to feature-driven service enhancements here. But even the operators’ leading-edge thinkers admit that there’s nothing really being done.
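A toy decomposition shows why the exposure is bigger than the current quarter suggests. Every figure below is a hypothetical placeholder chosen only to be consistent with the proportions I cited, not an input or output of my actual model:

```python
# Toy bookkeeping, not the actual model: all figures are hypothetical.

observed_decline = 0.04      # assume business revenues fell 4%
replacement_share = 0.22     # "less than a quarter" of that decline
repl_loss = observed_decline * replacement_share  # ~0.9 points of revenue

cut_per_account = 0.45       # assume a switching account spends ~45% less
adoption_so_far = repl_loss / cut_per_account     # implies ~2% have switched

print(f"adoption so far: ~{adoption_so_far:.0%}")
print(f"revenue loss if everyone switched: ~{cut_per_account:.0%}")
```

The point of the sketch: a sub-point of observed decline is consistent with only a sliver of accounts having switched so far, which is exactly why near-halving at full adoption is the scary number.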

I think that what we’re seeing is first and foremost a clear example that operators have totally lost pricing power in business services. That view is reinforced by operators’ own reports that the revenue per bit on business services is declining faster than business service revenues overall. That means operators are actually offering more capacity for the same (or lower) price. But why is this? Two main reasons.
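The arithmetic behind that conclusion is worth a quick sketch; the percentages are illustrative assumptions, not reported figures:

```python
# If revenue per bit falls faster than revenue, traffic must be rising;
# the percentages below are illustrative assumptions.

revenue_change = -0.02       # assume revenue down 2% year over year
rev_per_bit_change = -0.10   # assume revenue per bit down 10%

# revenue = (revenue per bit) x (bits), so bits = revenue / (revenue per bit)
bits_change = (1 + revenue_change) / (1 + rev_per_bit_change) - 1
print(f"implied traffic growth: ~{bits_change:.1%}")  # ~+8.9%
```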

One reason is the consumerization of broadband services. Well over half the enterprise network procurement people I’ve chatted with say that it’s hard for them not to take an aggressive discount position for business network services when every executive in the company is seeing their home broadband deals get better and better.

The other reason is that cloud computing is introducing new network alternatives, alternatives that are often totally embedded in application service features. Many IT professionals may not even realize that adopting SASE, for example, unloads traffic from the VPN. In theory, a company that created a cloud front-end for every application, for both out-of-company users and its own workers, could end up with nothing but Internet connections everywhere. Since SASE includes SD-WAN in nearly all cases (technically, current jargon would say that SASE without SD-WAN is SSE), it can also socialize the notion of explicit VPN replacement even for applications or users who aren’t on the cloud at all.
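To see how the VPN drains away, here’s a minimal sketch of the kind of per-application steering policy SD-WAN/SASE products apply. The application names and the policy itself are hypothetical, not any vendor’s implementation:

```python
# A hypothetical per-application steering policy: anything with a cloud
# front-end bypasses the MPLS VPN entirely.

CLOUD_FRONTED = {"crm", "analytics", "email"}   # assumed app inventory
LEGACY_ONLY = {"mainframe-erp"}

def next_hop(app: str) -> str:
    """Pick an underlay per application; the rules are illustrative."""
    if app in CLOUD_FRONTED:
        return "internet-direct"     # to a SASE point of presence
    if app in LEGACY_ONLY:
        return "mpls-vpn"            # the shrinking remnant
    return "internet-vpn"            # SD-WAN tunnel over broadband

for app in ["crm", "mainframe-erp", "file-share"]:
    print(app, "->", next_hop(app))
```

Each application moved into the first or third bucket is traffic, and eventually revenue, that never touches the operator’s VPN service again.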

We appear to be in a situation where forces largely outside the operators’ control are acting to erode their business service revenues significantly, and perhaps, for some, to totally eliminate the VPN that’s the foundation service for enterprises. Neither of these revenue-reducing drivers is going to be reversed naturally, and we may be at a point where both are beyond any possible influence by operators. Think “the Internet is the universal digital dialtone” and “the cloud is a public-hosted business extension of the web server” and you get the idea.

If this is true, then operators’ business service revenues are likely doomed…except perhaps for mobile services. But even there, we have BYOD issues that make it difficult to differentiate business mobile from consumer mobile. A combination of Internet VPNs and cloud SASE could well make even mobile broadband shift decisively toward the consumer service model.

So what? Enterprises would likely be happy to see this kind of convergence, because whatever business services revenue the operators lose is money saved by the businesses. For operators, the loss of billions, and the likelihood that the Internet and the cloud could totally and finally disintermediate them, have to be scary. That may be why the EU-area telcos are looking for big-tech OTT subsidies for the traffic they carry. If “the Internet” is the only network of the future, then all the players in it have to rethink how they’ll cover the cost of infrastructure. The current no-settlement model might not be sustainable.

Even here, there’s a question. People tend to think that the Internet should be free, which of course cannot be the case. The foundation of the Internet, and big tech in particular, is ad sponsorship. Global adspend couldn’t possibly grow enough to cover the cost of operator infrastructure; it’s not really even enough to fund the growth the OTTs expect in their own revenue streams.
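Some rough orders of magnitude make the point; both figures below are ballpark public estimates, not audited data:

```python
# Rough orders of magnitude; both values are ballpark estimates.

global_adspend_b = 700   # global advertising spend, roughly $0.7T/year
telco_capex_b = 300      # global telecom capital spending, roughly $0.3T/year

share = telco_capex_b / global_adspend_b
print(f"network capex as a share of all global adspend: ~{share:.0%}")  # ~43%
```

Covering network capital costs would consume a large fraction of all the advertising money on earth, before the OTTs funded any growth of their own.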

What’s surprising to me about all of this is that Verizon has the better opportunity fundamentals of the two, and yet seems to me, and to Wall Street, to be the one facing the most difficulties. They should have been a leader in facilitating services for businesses, and they’re lagging instead. They should have had a realistic view of business service revenues, and somehow they’ve missed that. Are they in trouble, as some articles are suggesting? I don’t think so…yet. But they will be.