Is the New Mobile Modulation Strategy, OTFS, a Game-Changer?

Spectrum is expensive, and as this Light Reading article points out, anything that promises additional capacity for a given chunk of spectrum sounds good. The current focus is orthogonal time frequency space (OTFS) modulation, which some are promoting as a fundamental technology for (gasp!) 6G. But is sounding good enough? Or is this yet another way of pushing 6G hype, just as people have finally decided that 5G is over-hyped beyond redemption? You’ve got to start by asking what OTFS-generated extra capacity would actually be used for.

Roughly doubling the capacity of current spectrum could mean doubling the potential per-user connection speed. I don’t have a shred of doubt that will be one of the big PR hooks sunk into this debate, because “faster” always sounds better…even when it isn’t. 5G demonstrated that, given the essential limitations of what we can do with mobile devices, users may not see any difference between a “fast” and a “normal” connection. The one place where I think we could argue that OTFS could contribute is fixed wireless access (FWA), where it could offer a better Internet connection to more people than the older 5G technologies could.

Connecting more devices is another question mark. In most markets, the number of smartphone users has plateaued, so the only way that “more devices” becomes relevant is if we assume IoT connections would explode. They probably will, but not IoT connections using mobile technology. Recall, too, that the notion that 5G would create an IoT land rush was one of the oft-cited justifications for 5G, and it didn’t happen.

So how about the value of OTFS in reducing the cost of spectrum? If you could do twice as much per unit of RF, couldn’t you buy half as much spectrum? That sounds logical too, but it ignores two basic truths. The first is that most of the suitable spectrum has already been bought. Does anyone think that the companies who bought spectrum at auctions could go hat-in-hand to the governments involved and get a rebate because they don’t need as much?

But couldn’t they sell it off? Wouldn’t OTFS allow more competitors to acquire spectrum (directly or through third parties), increasing the number of players? Yes, for sure it could, but you have to wonder why those who already have spectrum would sell it off to facilitate competition.

Finally, we have the “high-mobility” justification, and here at least we have a glimmer of value. “High mobility” here really means “moving fast”, faster than an automobile could drive and faster than most trains. The problem with really fast-moving users is that their motion imposes a Doppler shift on signals, and the faster they move, the more likely it is that the signal can’t be decoded properly. OTFS is much more tolerant of Doppler.
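To put rough numbers on the Doppler problem, here’s a back-of-the-envelope sketch in Python. The 3.5 GHz carrier and the speeds are illustrative assumptions on my part, not figures from any OTFS proposal; the standard approximation for the maximum Doppler shift is simply (v/c) times the carrier frequency.

```python
# Rough Doppler-shift estimate: f_d = (v / c) * f_carrier.
# Carrier frequency and speeds are assumed for illustration only.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 3.5e9  # assumed mid-band carrier, Hz

def doppler_shift_hz(speed_kmh: float) -> float:
    """Maximum Doppler shift for a user moving at speed_kmh."""
    speed_ms = speed_kmh / 3.6
    return (speed_ms / C) * F_CARRIER

for label, kmh in [("highway car", 120), ("high-speed rail", 350), ("jet aircraft", 900)]:
    print(f"{label:15s} {kmh:4d} km/h -> ~{doppler_shift_hz(kmh):,.0f} Hz shift")
```

At automobile speeds the shift is a few hundred hertz; at aircraft speeds it approaches several kilohertz, which starts to erode the subcarrier orthogonality that OFDM-based systems depend on. That, in essence, is the impairment OTFS’s delay-Doppler processing is built to tolerate.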

But what, besides very fast aircraft, really challenges current mobile technology? Nothing, and unless we believe we’ll all have personal superfast flying cars in the near term, we can assume that we’d be packed into a commercial aircraft. Given that, couldn’t we provide a single fast link, perhaps an OTFS satellite, to the aircraft itself, then WiFi to the passengers? This would have the added benefit of making the new approach compatible with current smartphones.

And so we introduce the question of phones. In order to use OTFS you’d need a specialized smartphone/device capable of receiving and sending it. The phone manufacturers would love that, but the problem is that unless there’s a real benefit to the average smartphone user, OTFS wouldn’t likely push everyone to upgrade their phones in the near term. That would mean that OTFS spectrum might have to be new spectrum, so the older devices could still be supported.

It seems to me that we have two possible OTFS scenarios on tap. In one, we get limited OTFS deployment for the FWA and high-mobility applications, but we don’t push it aggressively for the average smartphone user. The other is that we push OTFS, via the 6G connection, as the next step in mobile service evolution.

We’re already talking about 6G, absent any clear technology decisions relating to it, much less actual deployment. That’s happening because we’ve used up 5G as a way of generating clicks and ad serves to users. So do we really believe that the same thing won’t happen to 6G? There are realistic missions for things like the only relevant 5G Core feature, network slicing, but who wants realism? Everyone wants clicks, so we shouldn’t expect 6G realism of the kind our first option would require.

But even setting my hype cynicism aside (reluctantly, of course) there’s the question of whether those missions would really justify any deployment. We have FWA now using 5G, and we have satellite broadband for thin areas and fiber for higher demand densities. What niche would an OTFS FWA service fill?

So are we left with our second option, making OTFS and 6G the explicit next step in mobile services? It would seem so, but that has its own challenges.

Top of the list is the inevitable delay in getting something out there in the real world. How long did it take for real 5G to evolve? I think that given our 5G experience, we could reasonably say that 6G could be real by about 2028 and pervasive by 2030. Why then would telcos be tossing money at the technology now? Are the niche applications so interesting? I don’t think so. We have here another example of how Field of Dreams leads inevitably to UFO-think.

Operators don’t want to believe that they can’t somehow go back to Alexander Graham Bell and Thomas Augustus Watson. What they want is a technology that can promise that, which is very different from one that can certainly deliver it. For one thing, there is no technology that could do that, but for another, what the telcos really need now is the promise, something they can use to help buoy up their stock for a while. 5G did that, but its feet of clay are getting clear even to Wall Street. What’s after “5” if not “6”? And 6G, being a current glimmer in a standards-writer’s eye, has the advantage of a UFO. Since it’s not likely to land immediately and present itself for inspection, you can assign any properties to it that you find helpful.

I’m not saying that OTFS isn’t a good idea (it is), but that it’s an idea that isn’t likely to create any near-term payoff for operators. That’s bad because they need things with a near-term payoff. You can’t be asking for subsidies from big tech on one hand, and on the other focusing your attention on technology shifts that cannot possibly alleviate your profit per bit issues. The first requirement any telco asking for subsidization in any form must meet is the requirement to do everything possible to put their own house in order. I don’t believe that OTFS can help with that, and in fact focusing on it likely perpetuates a service focus that’s destructive to profits in the long run.

Can ChatGPT Replace us All?

I doubt there’s anyone in tech, and perhaps any parent, who hasn’t heard of ChatGPT. It’s often characterized as a “robot” that can do your homework, run your business, and so forth. I’ve said in the past that I believed its responses weren’t all that sophisticated, but I didn’t offer any specific proof. Now I want to try that.

What happens if ChatGPT writes a blog on 5G? I asked it to “write a 1000 word essay on why 5G technology will not increase telco revenues” because I believe that to be true, and I wanted to see whether it would muster relevant facts. OK, I wanted to see how many of the points it made were points that I might have made, or at least would agree with. Let me start with what it came up with, quoted in full below, and insert my own views between the quoted passages.

5G technology is the fifth generation of mobile networks, and it promises to deliver faster internet speeds, lower latency, and more reliable connections than its predecessor, 4G. While 5G has the potential to revolutionize many industries and bring about new, innovative use cases, it is unlikely that it will significantly increase telco revenues.

One reason for this is that 5G is a capital-intensive technology, requiring significant investments in new infrastructure, such as small cells and base stations. These investments will likely decrease telco profits in the short term, as companies will need to spend large amounts of money to upgrade their networks. Additionally, competition in the telecommunications industry is fierce, and companies may not be able to recoup their investments through increased prices for consumers.

All of this is true, but it ignores the fact that the same competitive forces cited actually compel the telcos to deploy 5G. Imagine, given all the media hype about the consumer benefits of 5G, what would happen if somebody decided not to deploy. It also ignores the fact that 5G offers telcos the benefit of higher user density per cell, which is important to them in managing cell density and costs.

Another factor that may limit telco revenues from 5G is the limited availability of spectrum. Spectrum is the invisible airwaves that mobile networks use to transmit data, and it is a finite resource. The limited amount of spectrum available for 5G means that telcos will need to share it with other users, such as the military or weather radar systems. This could lead to increased congestion on the network and decreased capacity, which may limit the potential for telcos to increase their revenues through 5G.

I have to disagree with this, given that spectrum is required to offer cellular services in any form. In any event, telcos have already acquired suitable spectrum in most cases, and spectrum limitations are more likely to impact new competitors; see below.

Furthermore, 5G is also expected to bring about increased competition in the telecommunications industry, as new players enter the market. Companies such as Google, Facebook and Amazon have been investing in 5G-related projects, and they may disrupt the traditional telco revenue streams by offering their own 5G services. These new competitors may offer more innovative and affordable 5G services, which could lead to a decrease in revenue for traditional telcos.

Any 5G initiatives by the new competitors are unlikely to actually be competitive with telcos. Most of these companies are really interested in promoting their tools and clouds as hosting points for 5G elements.

Another reason for this is that 5G will enable new technologies like the Internet of Things (IoT), which will make it possible for billions of devices to connect to the internet. While this has the potential to create many new revenue streams, it also means that telcos will need to handle a much larger number of devices and connections. This will likely require significant investments in new infrastructure and capacity, which may offset any potential revenue increases.

Again, I have to disagree. IoT’s multiplicity of devices is one of the reasons why more users per cell, a 5G attribute, could be important. The problem with IoT is that the great majority of IoT opportunity doesn’t require any form of mobile service at all.

Finally, 5G has the potential to increase the adoption of new business models, such as software-defined networking (SDN) and network function virtualization (NFV). These models will allow telcos to offer more flexible, on-demand services, which may lead to increased revenues. However, they also have the potential to decrease revenues by making it easier for customers to switch between providers.

Feature differentiation opportunity makes it easier to switch between providers? I don’t get this at all.

In conclusion, while 5G technology has the potential to revolutionize many industries and bring about new, innovative use cases, it is unlikely that it will significantly increase telco revenues. The capital-intensive nature of 5G, limited availability of spectrum, increased competition, and the emergence of new technologies and business models may offset any potential revenue increases. While 5G is a powerful technology, telcos will likely need to look to other sources of revenue to make the most of it.

The right conclusion, but the wrong reasons.

OK, I can’t speak for all of you, but I think that the ChatGPT response was shallow, incomplete, and sometimes inconsistent.

Suppose I change the question to add “based on CIMI Corporation views”? Here’s what I get:

It is widely believed that the rollout of 5G technology will lead to increased revenues for telecommunications companies. However, this assumption is not necessarily supported by all industry experts. The CIMI Corporation, a leading market research and consulting firm, has published a report arguing that 5G may not significantly increase telco revenues.

One of the main reasons for this is that the cost of deploying 5G networks is significantly higher than that of 4G networks. The deployment of 5G requires a much denser network of small cells, which are significantly more expensive to install and maintain than traditional macrocells. Additionally, the cost of acquiring and maintaining the necessary spectrum for 5G is also much higher than for 4G.

Another factor that may limit the potential for increased revenues from 5G is the intense competition in the telecommunications market. With many companies vying for a share of the 5G market, prices for 5G services are likely to be driven down, which would in turn limit the potential for increased revenues.

Furthermore, CIMI Corporation argues that the increased speeds and lower latency of 5G may not necessarily lead to increased demand for mobile data services. While 5G may enable new use cases such as virtual reality and autonomous vehicles, it is not clear whether these new use cases will drive significant demand for mobile data services. Additionally, many consumers may not see a significant difference in their day-to-day experience with 5G as compared to 4G, which may limit the potential for increased revenues from 5G services.

Another point from CIMI Corporation is that, 5G is not only a technology, but also a business model, and that telcos need to consider the various business models that will be required to monetize 5G services. While traditional business models such as selling mobile data plans may still be relevant, telcos will also need to explore new business models such as offering network slicing, edge computing, and other value-added services. However, it is uncertain how successful telcos will be in monetizing these new business models, which could limit the potential for increased revenues from 5G.

Overall, while 5G technology has the potential to enable new use cases and improve mobile data services, it may not significantly increase telco revenues. The high costs of deploying 5G networks, intense competition in the telecommunications market, and uncertainty surrounding the demand for 5G services are all factors that may limit the potential for increased revenues from 5G. Additionally, CIMI Corporation argues that telcos need to consider the various business models that will be required to monetize 5G services.

OK, I’m flattered by how ChatGPT characterized CIMI Corporation, and somewhat impressed that it apparently was able to actually identify and include things I’ve said. Interestingly, when asked to provide information on CIMI Corporation or me, it responded that it had no information available.

Some references to things I actually said in the past made the report a bit more consistent with my views, and this time it picked up my main point, which was that 5G needed a business model and not just a hope for one. Still, I wouldn’t be tempted to have ChatGPT write a blog for me based on my experience here. I wonder how many of you would have read, liked, or commented on LinkedIn on either of these two ChatGPT blogs?

You might wonder how this squares with the reports that ChatGPT can pass a bar exam or be used by students to write reports that, presumably, would get a decent grade. I did some other fiddling, stuff I won’t bother to repeat here, and this is what I learned.

You can ask ChatGPT a specific question in many, many fields and get an acceptable answer. Ask it a question from a bar exam, a CPA exam, and so forth, and I think you’d likely get a decent answer. Of course, you could also do a search on the same specific question. What ChatGPT seems to me to be doing is creating a free-form answer to a question where a search would give you sources of the answer from which you’d have to pick. I think it could, properly used, do schoolwork and pass exams, but I think that its ability to actually write essays or reports is much more limited…for now.
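If you want to try this kind of experiment yourself, the query is easy to script. Here’s a minimal sketch using the OpenAI Python client; the model name is an assumption, and the interface shown follows the older 0.x openai package, so check the current documentation before relying on it.

```python
# Minimal sketch: ask an OpenAI model the same essay question I posed.
# Assumes the legacy 0.x openai package; model name is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": "Write a 1000 word essay on why 5G technology "
                   "will not increase telco revenues.",
    }],
)

print(response.choices[0].message.content)
```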

Surprisingly, my little exercise demonstrated to me why Microsoft would want to invest in this, and why Google and others are suddenly pushing “AI”. Large Language Models like ChatGPT, fed not only text but also perhaps with things like “source credibility”, could create something that would be at least as insightful as the average human-generated document would be on the same topic. That could revolutionize searches, revolutionize education, and create all manner of opportunities and risks.

Don’t expect ChatGPT to do everything you can, in other words, but keep an eye on it over your shoulder as you advance in your career. It may be creeping up on you from behind.

What’s Happening to Business Network Services?

There may be subtle signs of problems in earnings reports, and I think that’s the case with those of Verizon and AT&T for the 4th calendar quarter of 2022. I covered the reports broadly in our first TechsRay podcast, but one specific point needs a bit more development, and that’s the business services trend both operators report.

Business services are important to operators, not only as a revenue source but also because they represent the most credible place to target any enhancements to basic broadband. Net neutrality issues constrain operators’ ability to create differentiating features in consumer services, and consumer marketing isn’t exactly an operator strong suit either. On the other hand, there are no neutrality/regulatory constraints on business services, and the Network Functions Virtualization (NFV) initiative actually targeted businesses for hosted-function-based advanced service features, including security. It stands to reason that this area should be growing as operator initiatives take hold.

Wrong. Both AT&T and Verizon reported a decline in business services in Q4. Neither company talked about the specific reasons for this in their earnings presentations or on the call, but I’ve heard things from both enterprises and operators that sure seem important.

One important point was that there has been some replacement of MPLS VPNs by SD-WAN or cloud-network technologies. It’s hard to get a precise picture of how large the impact is, but running the comments I’ve gotten through my model yields an estimate that this replacement accounts for less than a quarter of the decline. In the majority of these cases, the operator offered the replacement service, though multi-national MSP competition was also a factor.

The remainder of the decline appears to be linked to discounts negotiated by companies, usually driven by competition for current services, or a switch of providers to get lower prices. I couldn’t identify any statistically significant number of users who switched providers based on features rather than on cost.

The thing that should be scaring both AT&T and Verizon (and other telcos as well) is that the majority of the revenue decline isn’t due to things like SD-WAN or cloud networking. Those two factors could be expected to cut current revenues by almost half if they were to be adopted everywhere. Given that there are literally billions of dollars at stake here, you’d think operators had a specific plan to address this space. Remember, there are no real regulatory barriers to feature-driven service enhancements here. But even the operators’ leading-edge thinkers admit that there’s nothing really being done.

I think that what we’re seeing is first and foremost a clear example that operators have totally lost pricing power in business services. That view is reinforced by operators’ own reports that revenue per bit on business services is declining faster than business service revenues overall. That means operators are actually offering more capacity for the same (or lower) price. But why is this? Two main reasons.

One reason is the consumerization of broadband services. Well over half the enterprise network procurement people I’ve chatted with say that it’s hard for them not to take an aggressive discount position for business network services when every executive in the company is seeing their home broadband deals get better and better.

The other reason is that cloud computing is introducing new network alternatives, but alternatives that are often totally embedded in application service features. Do IT professionals understand that adopting SASE, for example, may unload traffic from the VPN? In theory, a company that created a cloud front-end for every application, for both out-of-company users and its own workers, could end up with nothing but Internet connections everywhere. Since SASE includes SD-WAN in nearly all cases (technically, current jargon would say that SASE without SD-WAN is SSE), it can also socialize the notion of explicit VPN replacement even for applications or users that aren’t on the cloud at all.

We appear to be in a situation where forces largely out of the control of the operators are acting to erode their business service revenues significantly, and perhaps for some totally eliminate the VPN that’s the foundation service for enterprises. Neither of these revenue-reducing drivers are going to be reversed naturally, and we may be at a point where both are beyond any possible influence by operators. Think “the Internet is the universal digital dialtone” and “the cloud is a public-hosted business extension of the web server” and you get the idea.

If this is true, then operators’ business service revenues are likely doomed…except perhaps for mobile services. But even there, we have BYOD issues that make it difficult to differentiate business mobile from consumer mobile. A combination of Internet VPNs and cloud SASE could well make even mobile broadband shift decisively toward the consumer service model.

So what? Enterprises would likely be happy to see this kind of convergence, because whatever business services revenue the operators lose is money saved by the businesses. For operators, the loss of billions, and the likelihood that the Internet and cloud could totally and finally disintermediate them, have to be scary. That may be why the EU-area telcos are looking for big-tech OTT subsidies for the traffic they carry. If “the Internet” is the only network in the future, then all the players in it have to rethink how they’ll cover the cost of infrastructure. The current no-settlement model might not be sustainable.

Even here, there’s a question. People tend to think that the Internet should be free, which of course cannot be the case. The foundation of the Internet, and big tech in particular, is ad sponsorship. Global adspend couldn’t possibly grow enough to cover the cost of operator infrastructure. It’s not really even enough to fund the growth OTTs expect in their own revenue streams.

What’s surprising to me about all of this is that Verizon has the best opportunity fundamentals of the two, and yet seems, to me and to Wall Street, to be the one facing the most difficulties. They should have been a leader in facilitating services to businesses, and they’re lagging instead. They should have had a realistic view of business service revenues, and somehow they’ve missed that. Are they in trouble, as some articles are suggesting? I don’t think so…yet. But they will be.

Can We Build a Metaverse Model as a Collective Community Effort?

Metaverse-wise, I’ve got good news and bad news, and they’re actually the same news. We seem to have gotten some momentum up for an open metaverse concept, and there’s both a Linux Foundation and World Economic Forum framework being put into place. That’s good because we need an open metaverse, but bad because collective processes like these tend to advance glacially, and because I don’t see clear signs that either initiative has really grasped the essential piece of the metaverse concept.

The WEF initiative is part of their “Global Collaboration Village”, which it says is “the first global, purpose-driven metaverse platform to enhance sustained multi-stakeholder cooperation and action at scale”. While the metaverse focus is explicit, the details suggest that what’s really being established is an ad hoc group that will be working to create an open metaverse model. That, to me, suggests that there’s little chance of near-term results being created.

The press release I’ve cited suggests that what’s really happening here is the creation of an “extension of the World Economic Forum’s public private platforms and in-person meetings and will provide a more open, more sustained and more comprehensive process for coming together”. In other words, this is an application of the metaverse concept to the specific mission of collaborative support. That’s a valid application, but it’s not the foundation of what we need, which is a general model of metaverse-as-a-platform.

Not to mention the question of how open the concept will be. Accenture and Microsoft are the partners of the WEF in building this, and both these companies are tech giants who have their own profit motives to consider. One has to wonder in particular whether Microsoft Teams, which is a key collaborative platform already, might either be integrated into the approach, or whether the metaverse collaboration might end up as a Teams enhancement.

To me, this particular initiative is perhaps an early example of “metaverse-washing”. Collaboration isn’t the metaverse, just a small application that in fact is largely related to the “social-metaverse” concept that frankly I doubt has much near-term utility. There are just too many pieces needed for it, and those pieces are somewhat disconnected from the more general metaverse platform requirements. This will likely raise metaverse visibility but not advance it in a meaningful way.

The Linux Foundation initiative, called the Open Metaverse Foundation, addresses at least the openness requirement. The OMF is “home to an open, vendor-neutral community dedicated to creating open standards and software to support the open, global, scalable Metaverse.” There are eight Foundational Interest Groups (FIGs) defined at present, and while all of this is (if anything) more in the embryonic than even the infancy stage, they seem to be more related to requirements-gathering than to architecture creation.

That raises the big question that any metaverse collective is going to have to address, explicitly or implicitly. Have we accepted the notion that the “social metaverse” model, meaning a virtual reality space in which users “live” and “work” is the sole objective for the metaverse? It seems to me that question has to be addressed explicitly, and neither of these initiatives really seem to be doing that, though both seem to come down on the “Yes, that’s the metaverse” position with respect to the virtual reality model. I think that’s a big mistake, or at least a big and risky assumption.

“Virtual reality” is fine as a metaverse foundation, if we relax what we mean by “reality”. To me, a metaverse first and foremost is a digital twin, a virtual, software-managed, model of some real-world system. There is no need for a metaverse to represent humans at all, and surely no need for the virtual reality to be user/human-centric. A virtual world and a virtual reality are different; the former is a subset of the latter despite the more inclusive-sounding name. A factory assembly line is a virtual reality, and so is a warehouse, a building or home, a road or intersection, a conference room or a totally artificial structure, city, or even world.

Architecturally, then, a metaverse is a model that is synchronized in some way with elements of the real world. Those Elements (let’s call them that for now) are “objects” in the traditional sense of software objects. They’re also “intent models” or “black boxes”, which means that they assert a set of properties that allow them to be manipulated and visualized, but their contents are opaque from the outside. Thus, an Element might be a singular something tied to sensor/control technology, or it might be a digital-twin metaverse in itself. The OMF metaverse foundational interest groups include the “Digital Assets” group, but rather than focusing on things like animation as the example on the OMF website does, the group should be working on the metadata and APIs that represent the properties and interfaces that are exposed.
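To make that concrete, here’s a minimal sketch of what an Element’s exposed surface might look like. Everything here, the class names, the metadata fields, the sensor-feed helper, is my own illustrative assumption, not anything drawn from the OMF’s work.

```python
# Sketch of the "Element as intent model" idea: exposed metadata and a
# small control API, with the implementation opaque from the outside.
# All names and fields are illustrative assumptions.
from abc import ABC, abstractmethod
from typing import Any, Dict

class Element(ABC):
    """A digital-twin element: a black box that asserts its properties."""

    @property
    @abstractmethod
    def metadata(self) -> Dict[str, Any]:
        """The exposed properties (type, controls, units, update rate...)."""

    @abstractmethod
    def observe(self) -> Dict[str, Any]:
        """Return the current state, synchronized with the real world."""

    @abstractmethod
    def actuate(self, command: str, **params: Any) -> None:
        """Manipulate the element through its exposed interface."""

class ConveyorTwin(Element):
    """One possible Element: a factory conveyor tied to sensors."""

    def __init__(self, sensor_feed):
        self._feed = sensor_feed  # hypothetical sensor gateway; opaque outside

    @property
    def metadata(self) -> Dict[str, Any]:
        return {"type": "conveyor", "controls": ["set_speed"], "units": "m/s"}

    def observe(self) -> Dict[str, Any]:
        return {"speed": self._feed.read("speed")}

    def actuate(self, command: str, **params: Any) -> None:
        if command == "set_speed":
            self._feed.write("speed_setpoint", params["value"])
```

Because an Element is opaque, nothing stops it from being a whole digital-twin metaverse internally; the composability comes free once the exposed metadata and APIs are the contract.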

The reason I’m raising this point is that it’s fundamental to preventing the problem that seems to plague, and eventually stymie, all these collective efforts at open standards. You have to constrain the problem or it’s unsolvable. The platform architecture of the metaverse, the presumption that it’s a model populated by digital twinning and manipulated through a set of APIs and parameters/metadata, is a constraint. Accept it explicitly and you can then populate those FIGs with meaningful goals that add up to the creation of a generalized model. Constrain architecture properly and you can un-constrain mission and application.

The opposite is also true, of course. If we don’t constrain the architecture properly, meaning base the notion of a metaverse platform on the most general vision of “virtual reality”, which is the digital twin concept, then we’re going to end up constraining the mission set instead, because everyone’s view of what “reality” is will color their contribution and create, in some way, a consensus direction.

If you look at the Discord group dedicated to the OMF, you see what I think is an example of this. There are gamers, programmers, etc. but the dominant interest is VR, which means a social-metaverse slant. I see that in the FIG selection too. Given that, there is a visible risk that what we could classify as “assumption inertia” will lead us to a metaverse model that’s not going to cover the real scope of metaverse applications. That’s not as bad as an explicit turn in the wrong direction, but it’s still bad if our goal is an optimum metaverse future.

Orange is Working Hard to Transform Profits, but Will It Work?

I doubt that there are many who believe things are coming up roses for the network operators globally. The challenges with declining profit per bit have been recounted by just about everyone, including operators themselves. Wall Street is getting antsy regarding the sector, and a growing number of analysts are expecting some major changes.

Like many others, I’ve been saying that the basic problem operators face is that their connection services set is inherently a featureless commodity that users will take for granted unless there’s something to complain about. Operators have no pricing power, no differentiation at the service level, and every initiative aimed at somehow rehabilitating basic connection services with things like “bandwidth on demand” has failed. It seems inescapable that they need to climb the value chain…and yet….

…here’s Orange, who has in fact done that with things like content creation (OCS) and business services (OBS), and is now being criticized for the very moves people like me have promoted. A French publication characterized those units as “burdens” on the business, and past executives who are seen as being instrumental in the creation of these above-the-network units have been defending their decisions. Next month, Orange is expected to announce plans to either stimulate the units they believe are underperforming, or to cut them back or even cut them loose. I noted earlier this month that Orange has already decided to sell off its content business.

The article I cite notes that Orange is expected to take a “back to basics” position, focusing more on connection services rather than on something higher-layer and potentially with greater profit potential. The big question will be less what they say than how they expect to make what’s almost a “recover past glory” theme actually recover it. Like all EU operators, Orange has some fundamental plusses and minuses.

I’ve noted before that what I call “demand density”, the economic potential of a market to produce communications revenue, is a critical factor in wireline ROI. France has a demand density that’s roughly the same as Spain’s or that of the US overall, which is far lower than that of Germany or the UK. That means that unlike more dense markets, infrastructure in France isn’t naturally profitable. Another measure is “access efficiency”, which is a measure of how efficient broadband modernization would be based on density maps, rights of way, and other factors. France is again lagging Germany and the UK there, and roughly matching the US overall (or AT&T in particular). This means that broadband improvements would be harder there than in more dense nearby markets.

Mobile services in the EU are very competitive, and of course operators started shifting their profit-production focus to mobile decades ago. Today, mobile services have no better chance of sustaining pricing power and profit potential than wireline does, in France and in the balance of the EU and UK. Again, the situation is comparable to that of the US. As is the case with wireline, there is no likely service kicker at the connection or basic services level that could change that.

That brings us to the key point, and the key question, that Orange raises. You either have to make basic services more profitable, or you have to make a go of services that extend beyond the basic, because that’s where the profit potential is. Those are all the choices there are. Orange tried the latter choice, and now it may be retreating to the first. But did they really try that second choice, and is there a first choice to retreat to?

We have content production, business services, and banking today. Is it smart for an operator to enter one of those spaces and face incumbents who know the space well? There’s a figure of speech called “synecdoche”, which substitutes a part of something, a member of the class, for the whole. Could Orange have inadvertently employed synecdoche here, thinking that a small (and sub-optimal) higher-level service example was a proxy for the whole space of higher-level services? Did they need to broaden their thinking, to go after spaces that were above connectivity but different from content or business services or banking?

I have to wonder whether there exists, in all the world, a more unlikely combination of service goal and goal-seeker than the goal of content production for a telco. Content is all about imagination, something that few would accuse telcos of possessing in any quantity. I think Orange had the right idea but the wrong service target set.

So can they go back to basics? Here my modeling suggests that they cannot do so easily. Telcos whose service area has a demand density and access efficiency at least three or four times that of the US can make connection services profitable with no special measures. Infrastructure passes enough opportunity dollars to make it work. Where demand densities are at least twice that of the US overall, and access efficiencies are likewise, I think that a rethinking of how networks are built, a shift to focus on creating capacity rather than on managing it, could also make basic services profitable. But France doesn’t have a demand density or access efficiency twice that of the US.

In the US, Verizon has a regional demand density that’s five or more times that of rival AT&T. AT&T’s density isn’t too far off that of France. That’s important because AT&T is working really hard to improve network costs and to define “facilitating services” that they could wholesale to OTTs to become a part of a higher-layer service. Verizon has been fairly quiet in both spaces, which I think is due to the fact that its territory is inherently more profitable.

Orange is one of the four operators who formed the “facilitating services” JV I described in yesterday’s blog. Obviously, they’re not just betting on a return to the past, but as I noted yesterday, their target service set is far from ideal, and any facilitating service option is going to take some time to socialize with the OTTs. In that time, there’s no meaningful revenue from the initiative, and Orange may hope that if improved basic services aren’t the whole, longer-term answer to their profit needs, they’ll at least hold off the wolves until they figure one out.

Orange may not have the option to go back to basic connection services, and if it’s going to attempt that, it will have to do a fairly radical transformation of infrastructure and not just try to shave a few percent off the capital cost of equipment. Opex transformation, once you’ve picked the low apples, necessarily relies on significant automation, and capex transformation has to look at the overall network model and not just at point-for-point substitution of cheaper gear for legacy boxes. It’s going to be interesting to watch Orange navigate all of this, because their situation is fairly typical. Other telcos need to pay particular attention, because most who aren’t in Orange’s shoes already will step into them before long.

Four European Telcos Form a JV, but For What?

Well, here’s a surprise. Four major European telcos (DT, Orange, Telefonica, and Vodafone) plan to create a joint venture that’s aimed at attempting to address their “disintermediation” and “profit-per-bit” problems. EU operators have already asked the EU to approve traffic subsidies for telcos by the OTT giants. Is this latest JV an indication that the telcos believe the EU won’t approve subsidies, that they believe the subsidies aren’t enough, or that they believe that there really are underlying technology issues that telcos need to address to be profitable?

The JV aims at creating a platform for digital advertising, based on the generation of a “pseudonymous” token that would be specific to a given OTT website. The token would identify the user without revealing personal information, and because there would be a unique token per website/OTT, it wouldn’t be possible to profile the user by combining cookie data. All of this seems directly linked to EU data protection and consumer protection requirements that are aimed at preserving the privacy of personal data.
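The JV hasn’t published its mechanism, but the per-site token idea is easy to illustrate. Here’s one way it could plausibly work, sketched in Python with a keyed HMAC; the key handling, naming, and truncation are all my own assumptions, not the JV’s design.

```python
# Illustrative per-site pseudonymous token: HMAC over (subscriber, site),
# keyed by a secret only the operator holds. Not the JV's actual design.
import hmac
import hashlib

TELCO_SECRET = b"replace-with-a-real-operator-key"  # never leaves the telco

def site_token(subscriber_id: str, site_domain: str) -> str:
    """Stable per-(subscriber, site) token. Tokens for different sites
    can't be correlated with each other without the operator's key."""
    msg = f"{subscriber_id}|{site_domain}".encode()
    return hmac.new(TELCO_SECRET, msg, hashlib.sha256).hexdigest()[:32]

# The same subscriber gets unrelated tokens on different sites:
print(site_token("subscriber-123", "news.example"))
print(site_token("subscriber-123", "video.example"))
```

The property that matters is that each OTT sees a stable identifier it can target against, while cross-site correlation requires the operator’s cooperation, which is exactly where regulation could be applied.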

There’s no question that user identification is a viable mission for a telco or telco JV. Knowing the relationship between a user (as a consumer) and the user’s connection to the Internet is surely the most reliable way of creating an online identity. Similarly, telcos are already charged with (and regulated for) responsible protection of consumer data. The concept, in fact, has already been in trial by DT and Vodafone in Germany, in mobile network applications. However, the article I referenced questions whether user concerns about privacy could derail the initiative. I doubt that, but I also think there are both positives and negatives about the JV’s ability to impact operator challenges.

Let’s first be hopeful. I’ve noted in past blogs that AT&T had raised the idea of operators offering “facilitating services”, which would be services that facilitated OTT behavior without actually creating a direct-to-consumer OTT offering. OTTs would then pay for that service set, and those payments would raise profit per bit. This initiative seems to be an example of such a facilitating service, likely the very first to be offered, and so it’s a potential prototype for further development in the space.

To be workable, a facilitating service has to have two primary properties. First, it has to have the potential for adoption by sufficient OTT mass to actually generate a useful revenue stream. Second, it has to be defensible against OTT competition from players who might elect to do the same thing on their own, or band together to do it collectively. The JV target might meet these goals.

Getting a piece of the ad dollars seems, on the surface, a reasonable way to address both these property requirements. Certainly there are enough ad-sponsored sites and services out there, and there seems little chance that the OTTs in the space would somehow be able to mount their own credible counter-strategy. But there are some “howevers” that we need to consider.

The first is the linkage between the telcos’ problem with differentiation and the JV approach. The great majority of traffic that’s threatening telco profit per bit comes from video content. Yes, much of the content includes advertising, but is the potential revenue from that source proportional to the traffic and operator cost associated with content delivery? Is the strategy also aimed at things like social networking, which may generate less traffic? A disconnect here would pose a risk that regulators would push back.

The second, and broader, question is whether something designed to protect user privacy can really facilitate anything at all for an OTT. Online ads are better than broadcast ads because of targeting, which presumes that the ads rely on knowing something about the user. While a token might anonymize the user in a personal sense, might it not have to reveal personal data without revealing identity? If that’s the case, what happens when the user has a relationship with the OTT (like a streaming subscription) that necessarily reveals identity? If personal information is really significantly more protected by the scheme, then ad targeting is hampered, and the value of the JV would come down to the cost of sustaining some other solution to privacy management versus the cost of what the JV would offer.

The JV then may be one of those “not-the-beginning-of-the-end-but-the-end-of-the-beginning” things. It could demonstrate that telcos are willing to look hard at facilitating services, and to cooperate in providing them in a way that’s competitive given market drivers and credible in its sourcing. That’s a good sign IMHO, because I think facilitating services are the key to improving profit per bit in the near term. But I also think this particular service is more a tactical response to regulatory policy evolution than a long-term revenue model.

Tactical responses are dangerous ground for telcos, whose inertial behavior is legendary. Only governments move slower, which is why the JV target could be reasonable in terms of speed of availability. It’s durability I’m concerned with. The biggest opportunities for facilitating services aren’t tactical.

What makes a facilitating service durable is a long-term barrier to competitive entry, and since “competitive entry” would be most likely to come from the OTT side of the game, it’s those OTTs we’d want to keep out. Having OTTs share a third-party (telco, in this case) facilitating service limits their differentiation, so there has to be enough of an offsetting barrier to their rolling their own service instead. It’s hard for me to see such a barrier outside a financial one.

Why could a telco get into a service an OTT could not? Because a telco, as an effective public utility, has a very low internal rate of return requirement. Roughly speaking, that hurdle rate is the ROI level below which investments will actively hurt a company’s financials. Because of their low hurdle rates, telcos can get into stuff that OTTs typically could not. Telcos could also likely fund positioning investment in infrastructure out of cash flow, which most OTTs would be reluctant to do.
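A toy example makes the point. The cash flows and hurdle rates below are invented for illustration, not real telco or OTT figures; the sketch uses the numpy-financial package to compute the project’s internal rate of return.

```python
# Toy hurdle-rate comparison. All figures are invented for illustration.
# Requires: pip install numpy-financial
import numpy_financial as npf

# A facilitating-service project: big upfront build, modest steady returns.
cash_flows = [-1000] + [90] * 9 + [1090]

project_irr = npf.irr(cash_flows)
print(f"Project IRR: {project_irr:.1%}")  # ~9.0%

for player, hurdle in [("telco (utility-like)", 0.07), ("OTT", 0.20)]:
    verdict = "clears" if project_irr > hurdle else "fails"
    print(f"{player}: hurdle {hurdle:.0%} -> project {verdict} the bar")
```

The same project that clears a utility-like bar fails a growth-company bar, which is why a telco can rationally build what an OTT won’t.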

All these plusses have to be balanced against the basic truth that the telcos can’t do facilitating service without a strong notion of what OTT activity they’re proposing to facilitate. If you look at the 5G space, you can find ample proof that rather than having their eyes on the OTT clouds of consumeristic demand and opportunity, telcos have their eyes on their feet. Somehow, they believe, saying just the right thing about stuff like 5G SA or network slicing is going to stimulate a whole market to seek out ways of exploiting that stuff, despite the fact that there’s no global scope for these capabilities and few OTT services that don’t demand global scope.

That raises the final issue, which is “feature federation”. A single telco like AT&T can surely create a facilitating service if they extend themselves, but would that single feature, sitting in the midst of a global feature desert, induce OTTs to rely on it? Even something like the proposed JV does nothing more than make the inadequate footprint of a facilitating service a little less inadequate, which is a long way from being adequate, credible, or compelling. How do you share features across operator boundaries the way we’ve shared connectivity in the past? There is no good answer to this today, and I think that telcos like those promoting the JV need to be thinking about working on something in that space. I know from past experience it won’t be easy and it’s likely to take a while. Meanwhile, OTTs may develop their own “facilitating services” and then it will be too late.

Marketing, Sales, Trajectories, and Success

What is the difference between “sales” and “marketing”? Why do incumbents have an advantage in deals? Are new technologies, particularly revolutionary ones, sold differently than established ones? Should they be? All of these questions relate back to something even more basic, which is “how does something actually get sold?”

That seems a pretty basic question to me, given that virtually every company relies on selling things to sustain their very existence. The problem is that while it may be a basic question, there’s a fairly astonishing number of companies that don’t seem to answer it, or even try to. Today, a lot of what we hope for from networking and IT depends on sales success, and we’re falling short in more cases every day.

Decades ago, I started to augment surveys of companies with modeling, and in order to model how a market is going to work, you need to understand those basic workings. Fortunately my survey base was happy to cooperate in creating what I called a “decision model”, a model that considered the factors that made up the progress from “suspect” to “customer”. I’ve updated it over time as buyer behavior changed, but many of the lessons of that early period are still valid today.

Most selling in tech tends to happen through formal bidding, and most products and services are introduced into a tech framework for which there exists an incumbent vendor who has a degree of account control. In some cases, the account is big enough to justify an on-site sales team. In those cases, traditional sales-centric processes are generally successful in moving goods and services.

Companies that have real account control, and particularly those with on-site sales resources, are usually able to promote technology advances more easily because they interact regularly with management in both IT and operating organizations. They don’t always use this capability, though, because of the risk that new technology will overhang current sales and introduce new competitors in a game the incumbent is already winning.

The problem arises when one or more of those conditions aren’t true, which is often the case for new technologies or radical approaches, and is also true if a non-incumbent wants to break into an account. In all these cases, it’s critical to be able to transition companies efficiently from being suspects to being customers. And it’s not easy.

A company is a “suspect” when they are sensitive to the possibility of a purchase. In this stage of their evolution, they aren’t committed to anything, only perhaps interested. Their attitudes, value propositions, opinions, and sensitivities are all over the place; think Brownian Movement. In this state, with all this variability, it is almost always fruitless to apply sales resources to the company because there’s no way of knowing how to make an effective approach, and because too much “education” would be required to prep for an actual order.

The sales process, I’ve found, can be compared to a funnel. At the wide end, you want it to intersect all those “suspects”, but you want the process to gradually order the chaos so that, at the narrow end, companies emerge in a state where sales resources can be applied to them effectively. You’re controlling their “trajectory”, and so I’ve called this “trajectory management.” A company would progress through the funnel, changing from “suspect” to “prospect” to “customer” with the application of specific pressures, and it’s creating and managing those pressures that form the basis for an effective marketing/sales program.
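As a minimal sketch, the trajectory model reduces to a small state machine. The stage names come from the text; the event names and the single-trigger simplification are my own assumptions, since the real model weighs many pressures at once.

```python
# Minimal state-machine sketch of trajectory management. Stage names
# come from the text; events and triggers are invented simplifications.
from enum import Enum

class Stage(Enum):
    SUSPECT = 1
    PROSPECT = 2
    CUSTOMER = 3

# The pressure that moves a company out of each stage.
NEXT_STAGE_TRIGGER = {
    Stage.SUSPECT: "self_identified_via_collateral",
    Stage.PROSPECT: "confidence_built_in_sales_call",
}

def apply_pressure(stage: Stage, event: str) -> Stage:
    """Advance one funnel stage when the right pressure lands; else hold."""
    if NEXT_STAGE_TRIGGER.get(stage) == event:
        return Stage(stage.value + 1)
    return stage

company = Stage.SUSPECT
company = apply_pressure(company, "self_identified_via_collateral")
company = apply_pressure(company, "confidence_built_in_sales_call")
print(company)  # Stage.CUSTOMER
```

The real work, of course, is in generating those pressure events, and that’s what the rest of this piece walks through.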

In today’s world, the first of these pressures is editorial mentions. A suspect sees a news or opinion piece that mentions a source of goods or services, and because they’re “interested” in the topic, probably only because the headline catches their attention, they read the piece. If we were talking about consumer products, we might see this as the decisive step, followed by visiting an online storefront and making a purchase, but obviously business tech isn’t an impulse buy. So what do we expect to happen? That’s a question rarely asked.

Realistically, in my research into buyer progression, what happens is a website visit. I saw a nice article mentioning Company A’s product/service. I was intrigued, so I visited their website. This establishes the first of what we could call the Trajectory Management Truths: editorial mentions sell website visits. It’s perhaps the most critical truth in all of marketing/sales, and the one that vendors mess up most often.

I get pitches for stories all the time, even though I say on my website that I won’t take any suggestions. Vendors and service providers want me to talk about them and send me what they believe is the compelling reason, and way over 90% of the time, it’s garbage. What they’ve done is violate the second Trajectory Management Truth, which is that you cannot take a sales message into a marketing channel. Marketing is about mass distribution of information to prepare a company for a deal, not the process of taking an order, yet most of these pitches are sales messages. No outlet is going to accept a piece long enough to carry a convincing sales message, and nobody would read it if they did. The pitches need to focus on getting an editorial mention first, meaning getting somebody to write and run something, and then on leveraging that mention into a website visit.

So, OK, let’s assume that your editorial mention has actually gotten the suspect to your website. What now? Look at the websites of any of the big vendors or service providers today, and you’ll find that there is no correlation between the structure of the website in general, or the homepage in particular, and the editorial slant the company is presenting. If you seeded a story about AI, for example, what do you think a reader of that story is looking for on your website? AI, obviously, and my research says that if they can’t quickly navigate to that topic and find something interesting (three or four steps, maximum), they’ll abandon their effort.

Any PR campaign needs to be anchored on the homepage. You pick up the theme of the PR and editorial slant and you run with it. If you do that, then editorial mentions, which sell website visits, can carry you to the next step in the process. Which is…?

Which is that website visits sell preliminary engagement for collateral. So let’s say the suspect followed the AI editorial mention to your website, and found the necessary links to the information they now want. What information is that? Not product specifications, not at this point. It’s marketing collateral, things like white papers or videos. This is another step that vendor websites and programs typically mess up, by providing too much undigested information at this critical point. You do not want a prospective buyer to make a purchase decision without ever having sales contact, so don’t give them the information that encourages them to do that.

What do you expect, if not a sale, at this point? The primary thing you want is for the suspect to identify themselves and indicate that they are considering and not just interested. This is where all that Brownian Movement starts to enter the narrower part of our funnel. If you construct your website so that the choices your suspect makes while navigating it lead them on the path they belong on for deal-making, they will consume information that tags them as a prospect and identifies the value propositions they’re most sensitive to. That enables optimum targeting of your next step, which is based on the principle that engagement collateral sells sales calls. The path out of the collateral step is the realization by the prospect that they want to talk to you. Real sales processes can now start.

Based on what? According to my research, successful sales strategies are based on issue ownership. A potential buyer is sensitive to three classes of issues: enablers, differentiators, and objections.

Virtually all enterprise purchases of technology have to meet a business case, and the issues that can make a business case are the enablers. If you do not own the enablers, you have left the most critical aspect of the deal on the table. Your only hope is that your competition doesn’t do any better, because a source that can demonstrate they can help the buyer make the business case can move forward. Enablers are so important that you don’t want to leave them for the last phase of the trajectory; they should be considered as early as possible in the trajectory, even at the level of editorial mention.

Differentiators are the issues that set you apart from other sources of products or services. Unlike enablers, which you have to jump out and seize, you have to take care with differentiators, because introducing them invites your prospect to wonder who you’re counterpunching against. They won’t make a business case, so they won’t move the prospect along; introduce them when it’s inevitable that the prospect has reached the stage where they’re certain to be looking at other sources.

I recently audited a test of a pitch from a vendor who was promoting “sustainability”. My question was “Do you believe that a company will buy your product simply because it’s sustainable?” They admitted they would not. Then I asked “Do you think they’d buy your sustainable product in preference to one that could deliver a higher ROI?” They took a bit more time, but eventually said that wouldn’t happen unless the ROI difference was minimal. They just proved that sustainability is a differentiator, not an enabler.

The most troublesome of the issue types is the objection. Obviously, the term means an objection raised by a prospect to some visible or alleged element of your value proposition. There are two questions that have to be asked before a response can be devised. The first is whether the objection is a signal of a missing enabler or differentiator; have you failed to raise a point when it was important? The second question is where the objection really originated. It’s one thing if the prospect actually raised it, another if it was raised by a competitor, and yet a third if it originated in editorial commentary.

You can’t afford to lose enablers, and if there’s a valid differentiator a competitor raised, you probably want to have an effective response disseminated to the sales organization. There’s a body of reasoning that suggests you should respond to competitive differentiators raised as objections by including them in material delivered to the prospect early in their trajectory, but my data doesn’t show that’s a good idea.

A final point, at the final point in the process, which is the actual sales dialog. Collateral sells sales calls, sales calls sell the product or the service. Most tech sales organizations work effectively if the early stages of the trajectory are handled, but there is one key point that’s easy to miss. Sales is a process of confidence transfer. The prospect, if they’re confident in the salesperson, will transfer that confidence to the representations and to the product. If they are not, then nothing much can be said or done to facilitate a close. This is important because it may be that the greatest value of trajectory management is that it increases sales confidence. Not only does that happen because sales efforts are more successful, there’s less time wasted on bad prospects, buyer education, and so forth. Shortcomings early in the trajectory fall to the sales force to correct, and that is never easy, never fun.

Is 6G Already on a Trajectory to Doom?

Let’s be honest, 5G has been a bit of a disappointment. There were all manner of wild claims made for it, starting when it was a gleam in the eye of standards-writers and continuing until real deployment started. Then people started to realize that the average mobile user would never know they had it at all. They realized that for operators, full 5G Core implementations with network slicing were going to be yet another technology looking for a justification. Then the media realized that they’d said all the good things they could have about 5G, so it was time to start questioning the whole thing…and looking for a replacement.

6G follows the recent trend in generational advances in cellular networking, and if you look at the definitions of 6G (like the one in Wikipedia), it’s hard not to compare them to early 5G promises. 6G is “the planned successor to 5G and will likely be significantly faster.” Wasn’t that the big deal with 5G versus 4G? Or “6G networks are expected to be even more diverse than their predecessors and are likely to support applications beyond current mobile use scenarios, such as virtual and augmented reality (VR/AR), ubiquitous instant communications, pervasive intelligence and the Internet of Things (IoT).” Again, wasn’t that the 5G story?

The truth is that we’re already on track to create a 6G mythology that will lead to hype, exaggeration, and disappointment. Is it too late to save 6G, and is there really any reason to save it at all? Let’s try to answer those questions.

6G, like 5G, moves cellular to higher frequencies, higher cell capacities, and a larger number of potential user/device connections. It will likely lower network energy consumption, improve scalability, and perhaps allow tighter integration between the network and contextual services, to use my term. A realistic summary of what to expect from 6G would be that it relieves the limitations that would hit advanced applications based on 5G, as those applications became pervasive. That, it turns out, is the critical point about the whole 6G thing. 6G is in a race.

As I’ve pointed out in many of my previous blogs, there exists a “digital-twin-metaverse-contextual-services” model that could evolve to tie information technology more tightly into our lives and our work. In that model, we would all move through a whole series of “information fields” that could supply us with critical information at the very point where we interact with the world (and job) around us. These fields and our ability to use smartphones to tap into them would build that “hyper-connected future” that some vendors are already pushing. The problem with that exciting vision is that such a framework needs a whole lot more than a new generation of wireless. In point of fact, what we could really do in the contextual services space today wouldn’t likely tax even 4G, much less be sitting around and waiting for 6G to explode onto the scene.

As we wait, as 6G waits, it is subject to the same trajectory to doom that 5G followed. You start with unrealistic claims, convert them into wild expectations, stir in a lot of hype, and end with disappointment, disillusionment, and finally the worst thing that can happen to a new technology: boredom. We are already on that path today, pushing the characteristics of something that has absolutely no technical substance yet. Whatever it is that we need in networking, be it excitement, profit, or transformation, we can assign to 6G. Because it has no technical substance, you can’t disprove any claim made about it, and the wildest stories get the most publicity.

How is this a race? Well, 6G is following its trajectory to doom, and at the same time we’re starting to glimpse the elements of the sort of “contextual services” that could eventually exploit, even drive, it. We can’t justify radical technologies except through radical business cases, and you can’t evolve to a revolution, so we must either abandon the revolutionary or abandon the safe, comfortable, and above all slow path to creating justifications. The question is whether the contextual framework can outrace that doom, and sadly, it’s not a fair race at all. 6G hype is likely to win.

The problem with contextual services can be explained by asking just what that “hyper-connected future” is connecting and what value the connectivity brings. Technology advances that don’t frame any real benefits, and that can be deployed only by incurring real costs, tend to stall out. That’s been the problem with all the new network services of the last couple of decades. I have offered, through contextual services, an example of what a hyper-connected application would look like; think of all the pieces that are needed. You need contextual applications, you need sensors, you need “information fields”, you need network agents representing users and other elements, you need edge computing…the list goes on. If you have all that and it’s adopted, then 5G would indeed have to give way to 6G, but if deploying 6G is a major investment, what kind of investment is needed for all that contextual stuff?
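To make the scale of that dependency list concrete, here’s a minimal sketch in Python of a single contextual interaction. Every name in it (SensorReading, InformationField, UserAgent) is hypothetical, invented for illustration; the point is how many cooperating pieces must exist before any of this generates traffic for 5G, let alone 6G.

    # A minimal, hypothetical sketch of one contextual-services interaction.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        sensor_id: str
        value: float

    class InformationField:
        """A geographically scoped pool of contextual data, fed by sensors."""
        def __init__(self) -> None:
            self.readings: list[SensorReading] = []

        def publish(self, reading: SensorReading) -> None:
            self.readings.append(reading)

        def query(self, prefix: str) -> list[SensorReading]:
            return [r for r in self.readings if r.sensor_id.startswith(prefix)]

    class UserAgent:
        """An agent representing a user; in the model it would run on edge hosting."""
        def __init__(self, field: InformationField) -> None:
            self.field = field

        def context_for(self, topic: str) -> list[SensorReading]:
            return self.field.query(topic)

    # One user, one field, one sensor: already four cooperating components.
    field = InformationField()
    field.publish(SensorReading("traffic/main-st", 0.82))
    agent = UserAgent(field)
    print(agent.context_for("traffic"))

If interactions like that were flowing at population scale, the network load would be real; until then, it’s the application side, not the radio side, that’s missing.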

You can’t justify a new technology by citing the driver that other technologies create, if those other technologies also need justification. That’s particularly true when you’re trying to justify something that’s on the tail end of a long requirements chain. 6G, when you cut through the crap, is something that does more of what 5G was supposed to do, and we haven’t managed to get 5G to do it all yet. Or even a little of it.

We have the technology in place to start developing contextual services. A team of maybe three or four good software architects and two dozen developers could be working on a prototype by the end of the summer. We could identify places where the hosting and connectivity resources are already available, and where the value proposition for contextual services could be readily validated (or proved to be nebulous). If we expended on a contextual-services model even a fraction of the resources that 6G will surely suck up, we could actually advance not only 6G but edge computing.

The notion of supply-side market drivers presumes one critical thing: pent-up demand. If you offer something that involves a significant “first cost” deployment investment, you have to assume that what you sell will generate ROI, and quickly. But pent-up demand is something we can usually measure and model, and the most telling symptom is what I’ll call the “verge clustering” phenomenon. If Thing X is needed and there’s real pent-up demand for it, then users and applications will cluster on the verge of Thing X, the place where it’s possible to get as close to the needed element as possible. We should be seeing 5G insufficiency before we presume pent-up demand for 6G, and in fact we’re still trying to justify 5G.
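As a rough illustration of what a verge-clustering signal might look like in practice, here’s a hedged sketch in Python; the function name, the 90 percent threshold, and the per-cell utilization figures are all invented for illustration.

    # A rough, hypothetical "verge clustering" check: what fraction of cells
    # run near their capacity ceiling? The 0.90 threshold and the sample
    # utilization figures are invented for illustration.
    def verge_clustering(utilizations: list[float], verge: float = 0.90) -> float:
        """Return the fraction of cells at or above the verge of capacity."""
        near = [u for u in utilizations if u >= verge]
        return len(near) / len(utilizations)

    # Sample per-cell utilization of existing 5G capacity (invented numbers).
    cells = [0.35, 0.42, 0.28, 0.95, 0.51, 0.33, 0.47]
    print(f"{verge_clustering(cells):.0%} of cells are on the verge")  # 14%

By a measure like this, one hot cell out of seven doesn’t signal pent-up demand; a market truly on the verge would look saturated almost everywhere.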

Sadly, the same thing is true for contextual services, and perhaps something even worse. With contextual services, it’s hard even to define what “the verge” is, because we’re largely ignoring the concept. We talk about IoT, which is just a sensor-and-sensor-network technology, not an application. We need to talk about the utility of IoT, and of 6G, and of contextual services.

Why don’t I fix this, or try to? I’ve written a lot about the cloud, the edge, 5G, and more, and I think that most of my readers will say that I’ve been on the right side of most of the issues that have developed in any of these areas, well before they were widely accepted. I think I’m on the right side of this one too, but I’m a soothsayer; I can’t move mountains, only predict where they’ll go. But if you have a respectable plan for moving a bunch of rock, I’m willing to listen.