Is It Good that 5G Handsets are Taking Off?

Anyone who reads tech news or watches TV probably realizes that 5G smartphones are taking off. A part of the reason is that most of the major smartphone vendors make 5G a feature of their newest models, which makes 5G less a choice than something that a new phone pulls through. The question, of course, is whether the growth of 5G phones will somehow alleviate the growing cynicism about 5G’s promise, or make it worse.

In my blog on the Street view of 5G earlier this week, I noted that Wall Street’s forecasts show that 5G investment will peak in 2023, which likely means that any impact of new technology on the 5G market, and on the evolution of 5G toward a general edge computing model, will have to happen by then. One interesting question this point raises is whether early 5G adoption might actually hurt edge computing, and even 5G innovation, by accelerating deployment when few innovative offerings have matured enough to be credible. Could 5G’s early handset success be a sign that this risk is mounting?

I’ve had a chance to ask several hundred people who got 5G smartphones whether the 5G capability had changed their experience. About ten percent said it had, and the rest said “No!” While it’s difficult to get this data accurately, it appears that only about thirty percent of those with 5G phones actually had 5G service available to them for any significant amount of time. The very few who were tracking whether they were on 5G or not indicated that they could not see any clear difference in how their phones performed when they were on 5G or LTE. What this seems to prove is that the 5G user experience doesn’t justify the hype, but that doesn’t mean that 5G phone success won’t be good for 5G…in the long run.

There is no question that 5G’s benefits to the average user have been seriously overhyped, and frankly no question that it was destined to be overhyped from the first. “News” means “novelty”, not truth, and in an ad-sponsored world, clicks are everything. You don’t get clicks by telling boring truths; exciting lies work way better. People justify the hype by suggesting that in order to get technology to advance, we have to make the advances seem consumable, populist. OK, we did that, but now reality is setting in.

How 5G handsets will impact 5G overall, and in particular the development of “real” 5G-specific applications, depends on the tension between the negative force of disillusionment and two forces that act to promote 5G. One of those forces is the force of availability, and the other the force of desperation.

Every mobile technology succeeds primarily because it becomes pervasive. It’s difficult to imagine how 5G applications (whatever they are, and whoever might create them) could develop without extensive 5G availability, and it’s hard to see how that can develop without a large number of 5G smartphones. Were we to see a very slow uptick in 5G handsets, we could justifiably wonder if the whole 5G thing was a hoax. That’s not going to be the case.

We are already seeing a lot of 5G service availability, and that’s going to continue thanks to the growth in 5G handsets. With that, we’re winning the most basic battle for 5G applications, which is having some 5G to run them on. Every 5G tower is a point from which a 5G future could be projected, and every 5G-committed operator is a competitor in a market that, to avoid price commoditization as users figure out that 5G really doesn’t do much (if anything) for them, is sure to try some innovation.

That attempt is going to be delayed in the near term, though, because a big part of the 5G handset success is attributable to smartphone incentives launched by the operators. If the services themselves aren’t differentiating, maybe the phones can be. Of course, there are only a certain number of phones that have star quality, so differentiating on phones isn’t going to last, and desperation will set in.

It might have set in already, in some ways and for some operators, and the relationships between 5G operators and public cloud providers may be an indicator. Some see this as a pure cost-driven alliance, but that’s a bit of an oversimplification. Yes, public cloud relationships offer operators a first-cost benefit, reducing the cost of building out a wide-geography carrier cloud when there are only a limited number of uses for it. In the long run, if hosting features is the best/only path forward, then network operators will bear higher costs because of public cloud profit margins.

The alliances are justified in the long run if operators can harness public cloud providers’ capabilities and knowledge to explore additional applications, services, and features. What I’ve heard from operators this year suggests that there are some in the organizations who see that, who understand that 5G “applications” have to be first new, and second, uniquely linked to some aspect of 5G. It’s very difficult to see how those requirements could both be met through anything other than edge computing applications.

There’s been a lot of operator interest in function hosting, including the NFV initiative, but the fact is that for a variety of reasons, the software side of 5G hosting and edge computing hasn’t even come close to the richness that the cloud has. While some major vendors launched “telecom” initiatives, these have (perhaps not surprisingly) focused on 5G functionality rather than the hosting ecosystem that would likely lead to a generalized edge strategy. Two of every three operators tell me that they can’t get what they need from the software vendors, and in many cases this is true even when the software vendors have what operators need.

This is the reason why the relationships between public cloud providers and network operators for 5G function hosting could loom large. We don’t have edge computing today because we have no significant resource-pool deployments at the edge. Operators own real estate there and could potentially build out facilities to host 5G elements, which could then be used for those “true” 5G applications. If operators don’t build out, then either we don’t get edge pools at all, or they’re owned by the cloud providers.

There is, of course, the option that the cloud providers in some way “lease” the resources from the operators and provide the software needed, either cloud middleware that can run in these operator-built-cloud-leased edges, or a complete 5GaaS. In this case, whether the edge resource pool would be committed to the operator in a longer-term sense would depend on the specifics of the deal.

The faster an open-model 5G that relies extensively on traditional resource pools deploys, the more incentive there is for operators to do cloud provider deals, because their do-it-yourself hosting decisions would likely take longer to realize. Would that hurt edge computing? It might actually help it a little, because there is near-zero chance that operators themselves will be able to navigate the complicated symbiotic evolution of 5G and the edge. It could hurt if cloud-software giants like Red Hat and VMware decide that they’re too late to grab the brass ring, and stop trying to merge telecom and cloud in a software-centric, hosting-independent way. I suspect that the period from now through mid-2022 will be decisive, and I’ll report regularly on what I’m seeing.

Reading Wall Street Tea Leaves on 5G, Metro, and the Edge

Wall Street has credible reasons to believe that O-RAN is going to end the dominance of the big mobile network equipment vendors, and it may rejigger the vendors’ rankings too. The bigger question is the impact it might have on the networking industry overall. It may be that there’s no success in networking in the future without a credible O-RAN story of some sort. It may also be that O-RAN will need to change to realize the full benefits of all of this. Street data may help us assess these questions, and maybe answer them too.

The telcos have a decades-long history of messing up their own futures by launching and supporting “transformation” projects that were totally disconnected from reality. One could argue that the whole of the 5G specifications, from the 3GPP, is an example. These specifications failed to truly define an open framework for 5G even as they mandated hosting principles that almost had to be open to work. They also built on NFV work that was itself out of touch with reality.

When the O-RAN Alliance was founded (by AT&T, China Mobile, Deutsche Telekom, NTT DOCOMO and Orange) in 2018, it faced the same sort of risk that contaminated previous telco standards and transformation projects. Despite that, for reasons I’ll get to, O-RAN actually produced something that was not only useful but potentially transformational. The biggest reason for that wasn’t the technical excellence of the work, but the specific target.

The 3GPP specifications opened up 5G infrastructure mainly with respect to control-plane function hosting. Yes, they did that with NFV, but the operators who founded O-RAN weren’t trying to replace NFV but to address something that they believed the 3GPP work didn’t open fully, which was the 5G New Radio (NR), the 5G Radio Access Network (RAN) specification. In the 3GPP work, the X2 interface provided for base-station coordination of resources and resource management, but the interface was interpreted differently by different vendors, and the exact mechanism for the resource orchestration, configuration, and management was opaque. O-RAN’s contribution to all of this was to open up that opaque process through the concept of the RAN Intelligent Controller or RIC.

Opening up the 5G RAN opened up the critical (and, as I’ve said, opaque) part of RAN implementation. In doing so, it broke a potential proprietary RAN lock-in, and that was the thing that really changed the mobile network market forever. The majority of 5G is still proprietary because of the past vendor-specific infrastructure model and past vendor relationships, but it’s very clear to the Street (and to me, and many others) that we’re entering the open age with regard to mobile networks.

Over the last decade, mobile network infrastructure has actually become less “open”, meaning it’s been more dominated by a few large vendors. Street data says that a decade ago, Ericsson, Nokia, and Huawei had about 65% of the market for mobile infrastructure, and today they have nearly 80%. Nokia’s success, and Ericsson’s to an extent, can be traced in part to their at-least-titular support for O-RAN and openness, and the Street is now projecting that vendor lock-in for mobile networks will not carry over into 5G.

An elephant in the 5G infrastructure room is Huawei, who has had the largest market share in mobile infrastructure over the entire last decade. However, its market share has been fairly static since 2016 because of tensions between the US and China that have spilled over to other 5G infrastructure buyers. Huawei was the price leader in mobile infrastructure, and the diminution of its influence put cost pressure on operators, pressure that an open 5G infrastructure market could be expected to relieve. If price-sensitive infrastructure deals favor O-RAN, then that only adds to its importance.

As I’ve noted in past blogs, 5G is the only area of telecom spending that’s actually budgeted, and that alone means that it will have a massive impact on vendor opportunity. A vendor who wants to sell to network operators who have any mobile service commitment will need to have a 5G strategy, period. If the Street is right (and I believe they are) about the open-5G drive among operators, then that means those vendors will need an O-RAN positioning, and a positioning toward where O-RAN is leading us. Thus, it’s critical to know where that is.

5G is made up of two distinct pieces, the RAN/NR piece and 5G Core. The term “5G Core” is perhaps unfortunate, because virtually nothing in it relates to what we’d think of as “the core” of a network, the deep transport router functionality. 5G Core is really a metro strategy because its primary role is subscriber and mobility management, which is functionally focused on metro areas because that’s the typical scope of movement of mobile customers. 5G RAN, and O-RAN, are “edge” or “access” strategies that extend from the metro area outward to the towers. Arguably, it’s the metro that’s the focus of 5G infrastructure because everything touches it.

That’s even more true when you consider that the most visible of the (arguable) 5G drivers is low-latency applications. It makes little sense to spend dollars to control mobile network latency, only to throw away the benefits by hauling application traffic a thousand miles to a cloud host. You’d want the applications to be run proximate to the point of RAN attachment, which is (you guessed it!) the metro.

The big innovation of O-RAN, technically speaking, is the RAN Intelligent Controller or RIC, but a second benefit is that the “hostable” elements of the RAN, which are the functions of the “baseband” portion and the so-called “Central Unit” and “Distributed Unit” or CU/DU, are controlled by the RIC but virtualized onto open devices. That means that specialized chips aren’t mandatory in the structure; as long as you can virtualize the CU/DU software functions onto a hardware device, you can use it.

What hardware device? There are three classes of network device that could be used in 5G RAN and Core. One (obviously) is the server, the commercial-off-the-shelf or COTS hosting point. A second is the white box, the open-hardware-modeled switching device, and the final one is the legacy network switch/router. That last device is responsible for providing IP connectivity, and it’s really a lower layer of the User Plane of 5G. The other two device options differ in the extent to which they’re generalized resources versus specialized communications devices. The former is most useful when there’s more than one thing a device at a specific point in the mobile topology could be expected to do, which will generally mean further inward, to the metro. The latter is the default strategy for something further out, which means the RAN, toward the tower.

What makes this resource independence and virtualization work is the “vRAN” concept, which is related to but not identical with O-RAN. The purpose of vRAN is to allow RAN (and, I believe, eventually 5G Core) functions to be developed in a hardware-independent way, including independence of specialized switching chips. As I’ve suggested above, vRAN virtualization is a key part of the O-RAN value proposition, and so I believe that the two are joined at the hip. However, there are some vendors who are advancing vRAN concepts faster than O-RAN concepts, and some that go in the other direction.

The major mobile vendors have very different positions in all of this. Nokia is already aggressive with respect to vRAN and O-RAN, and the Street expects them to become even more so as the open-model strategy gains traction with operators. Ericsson and Huawei are far less committed; my operator contacts tend to agree that both vendors are just “blowing kisses” rather than making changes.

That suggests that there’s an opening for other vendors, and the Street likes some other non-network-specialized tech giants, Fujitsu, NEC, and Samsung, as well as up-and-comers Mavenir and perhaps Rakuten. The former group has the advantage of credibility of scale, something telcos generally value highly, but their success depends on O-RAN and open-model 5G in general, because otherwise big vendors tend to stir telco fears of vendor lock-in. The latter group represents the open wave in 5G better, but perhaps not as well as others could.

If we consider edge computing part of the mix, we have to look for more IT-centric players, particularly HPE, IBM/Red Hat, and VMware/Dell, but the timing may be critical. None of these players have realized a complete and mature 5G strategy. 5G spending is expected to peak in 2023, and 5G is already the only driver of increased mobile operator spending and the major driver of telco spending gains overall. It may well be that Ericsson and Huawei expect competitive pressure from operators to drive their deployment decisions faster than an O-RAN/vRAN movement can mature, reducing the threat of competitors from any source mentioned above.

The edge opportunity is the reason these three players, as well as the public cloud providers, are interested in all of this. Nobody really knows what a “5G application” would look like, nor how many of them could deploy, or when. The edge computing opportunity isn’t really driven exclusively by 5G, but because 5G is deploying and is defining an open-model, virtualized, edge function set, it could also define a model of edge hosting and even a middleware model. If there is a sea change coming in computing, it would likely come from edge-specific applications. My modeling has always said that the total revenue from this could hit a trillion dollars, but it won’t be easy to realize because it will require assembling a wide range of technologies into a cooperative system. The biggest liability the mobile network’s traditional infrastructure leaders face is that they don’t have a clue on this topic.

The final truth here relates to the “don’t-have-a-clue” group and the fact that the mobile network vendors aren’t the only ones in that category. Remember that legacy switching and routing, which forms a big chunk of the 5G User Plane, is one of the product classes that will make up 5G infrastructure, which means legacy network vendors will see their own opportunities change with 5G deployment.

The future of the network is going to be decided by the fusion of switching/routing and hosting, and that fusion happens in the metro area and nowhere else. The big question is how it happens, at least from the perspective of the switch/router vendors. Is the metro network a network first, or a huge data center? If it’s the latter, there has to be an element of data center thinking in metro strategy. You can see Cisco’s recent application-oriented M&A in that light, but it’s more easily validated (if not yet effectively exploited) in Juniper’s metro-fabric, Apstra, AI, and 128 Technology stuff.

A great metro strategy is a great network strategy, and a great metro strategy has to be based on some credible network/hosting/edge vision and 5G/O-RAN/vRAN positioning. That’s where the build-out money for metro will come from through 2024. Get a piece, and you’re a winner. Fail, and you fail, convincingly and in the long term, no matter how good your product suite is overall. You can’t make money where it’s not being spent, and metro is that place, now and going forward.

The big fly in the ointment here may be Nokia’s announcement on the O-RAN Alliance, saying “we have no choice but to suspend all of our technical work activities” because of the threat of US sanctions applied to some Chinese companies who were members. Other international bodies like the 3GPP secured US licenses to permit Chinese participation without sanction risk, but the O-RAN Alliance has so far not done that. I don’t think this is going to derail O-RAN, and I don’t think that it will impact Nokia’s relationship with O-RAN, but it could delay broader progress in open-model 5G innovation. If the Alliance doesn’t address the sanctions risk, it might delay open-model 5G just enough to give the incumbent mobile vendors a shot at locking up the 5G market before it peaks in 2023, and that might impact edge computing by decoupling hosting and networking at the edge. In short, a lot’s at stake in the next couple of years.

What’s Behind the Comcast-Masergy Deal

In a move that I found both surprising and unsurprising at the same time, Comcast acquired Masergy Communications, a provider of managed SD-WAN and SASE. It’s surprising because network operators haven’t traditionally purchased either technology or managed service providers. It’s unsurprising both because these aren’t traditional times in networking, and because Comcast knows it has a potentially major upside in the deal.

SD-WAN is a hot technology, one of the hottest in terms of user interest. Managed services are also hot and getting hotter, and I’ve blogged about both these truths in the past. Rather than reprise those stories, I’ll address the broad trends in the context of Comcast’s move, and its motivations.

There’s been a lot of talk about the global aspirations that Comcast supposedly has for the deal, speculation that it would somehow turn Comcast into a player in enterprise networking. I don’t think there’s a chance that’s true. The deal isn’t aimed at global opportunity at all; Comcast has a US footprint and they’d have no special credibility selling to buyers outside the US. As for the enterprise network opportunity, Comcast doesn’t stand any real chance of taking over enterprise network business from the telcos, and they know it well.

What the heck is this about, then? Occam’s Razor, in action. This is about SD-WAN.

Comcast is one of the premier suppliers of broadband Internet connectivity in the US. Since SD-WAN relies on broadband Internet, Comcast would appear to have a great opportunity in providing SD-WAN services to enterprises. Appearances can be deceptive, as Comcast learned with their initial SD-WAN push. The problem is that SD-WAN is something enterprises want to do once and only once. A supplier who can offer SD-WAN within the Comcast footprint is way less attractive than one who can offer it everywhere, which Masergy can do.

Suppose you’re a Comcast sales type calling on a US enterprise in your territory. You have this great SD-WAN strategy for the sites the company has that happen to match the Comcast footprint. You sing your best song, but the best outcome you can hope for is that 1) the buyer is willing to engage with multiple SD-WAN providers to get all their candidate sites covered, and 2) the other SD-WAN provider they contact for what your Comcast stuff can’t cover is stupid enough to leave any of the deal on your table. If either of these isn’t the case, you’re back to beating the bushes.

What Comcast needs is an SD-WAN strategy that covers the globe, so it doesn’t have to push buyers into a multiple-SD-WAN solution. Masergy has that. They’re a global MSP. They already run over any broadband provider in the world, including Comcast. They can provide sales and support globally, too. There are few SD-WAN managed service providers out there who are truly major players, and Masergy is one. Comcast needs an MSP, not SD-WAN technology, for that global support scope, and nobody is a better candidate.

Then there’s the competitive dynamic. Masergy is a truly great partner for a communications service provider like Comcast, and also for all of Comcast’s competitors. One could spend time better wondering why nobody else did this deal first than wondering why Comcast would want to do it now, and that’s for the Masergy SD-WAN MSP story alone.

It’s not the only story, either. The only fly in the Masergy-marriage ointment for Comcast is the price they have to pay for the deal, which hasn’t been disclosed by either party but which my Street sources tell me was “unexpectedly high”. It shouldn’t have been unexpected, of course, but it is a fact that without something to add some luster to the deal in the near term, the result might be a hit on Comcast’s bottom line and share price. How to avoid that? Have other stuff to sweeten the deal.

Comcast has an obvious revenue kicker in that they can promote their own cable broadband within their footprint, increasing their broadband sales to businesses. That not only creates direct revenue, it creates reference accounts and easier sales traction. However, that source is something the Street would expect, so it would have factored into their expectation of the strike price for the deal. There has to be something else.

There is, and it’s the other stuff that Masergy offers. They have eight SD-WAN elements in their order book, six unified communications elements, four contact center elements, and five managed security elements. I’ve pointed out before that the most critical thing an SD-WAN MSP needs is something to upsell into, to differentiate from the horde of competitors and sweeten the revenue and commissions. Masergy can offer that, so much so that the deal can likely get Comcast salespeople an opening to call on enterprises. You can’t sell if the buyer won’t set up an appointment.

Some of those sweeteners are also valuable in their own right. Multi-cloud? It’s in there. AIOps? Also in there. Security services and SASE? In there, too. With the Masergy deal, Comcast steps up to be one of the most credible MSPs in the market…with the obvious “if…” qualifier.

If they can play it right. The big question is synergy. Masergy has probably already tried to engage enterprises for SD-WAN, and they cover the whole waterfront, footprint-wise. Our hypothetical SD-WAN/MSP salesperson, having called on enterprises, now leaves behind two pools: those that already bought and those that rejected Masergy. What can Comcast say to get their own shot? Not “Masergy is now us”, or even “look at what we are when you pull Masergy into our tent”. The whole has to be greater than the sum of the parts, and that has to be extremely clear from the very first.

They’re not doing that so far. Their press release on the Masergy deal is b..o..r..i..n..g. They talk about what Masergy adds to Comcast, but not what Comcast adds to Masergy, and that’s the question on which the whole value of this deal will rest.

If you’re a Comcast competitor, you have to see this deal as the classic shot-across-the-bow threat. Most competitors will be telcos, and rather than buy another MSP (who probably won’t have all the assets of Masergy), you’d likely engage with an SD-WAN technology vendor and launch your own MSP service. Masergy’s technology is hidden in their MSP positioning, but there are many (dozens of) SD-WAN products out there, as well as SASE solutions, security approaches, operations automation, and so forth. Big telcos also have global presence already. If I were a telco, I’d be dusting off my MSP business plans and looking for best-of-breeds. If I were a vendor who had something arresting in one of Masergy’s technology areas, I’d be prospecting telcos for my wares. Both will surely happen, starting now.

Comcast absolutely has to hit the ground running on this, and they’ve not done that with the bulliest of their bully pulpits, the announcement itself. If they’re not careful, this deal could hurt them more than help them, but if they are, it could make them a real player in the network of the future.

Is There a Role for Graph Databases in Lifecycle Automation and Event-Handling?

Everything old is new again, so they say, and that’s even more likely to be true when “new” means little more than “publicized”. Among the 177 enterprises I’ve chatted with this year, I was somewhat surprised to find that well over two-thirds believed that artificial intelligence was “new”, meaning that it emerged in the last decade. In fact, it goes back at least into the 1980s. The same is true for “graph databases”, which actually go back even further, into the ‘70s, but Amazon’s Neptune implementation is bringing both AI and graph databases into the light. It might even light up networking and cloud computing.

We’re used to thinking of databases in terms of the relational model, which uses the familiar “tables” and “joins” concept to reflect a one-to-many relationship set. RDBMSs are surely the most recognized model of database, but there have been other models around for at least forty or fifty years, and one of them is the “semantic” model. I actually worked with a semantic database in my early programming career.

Semantic databases are all about relationships, context. Just like words in a sentence or conversation have meaning depending on context, so data in a semantic database has meaning based on its relationships with other data. The newly discussed “graph databases” are actually more properly called “semantic graph databases” because they extend that principle. Graph databases are generally classed as NoSQL technology, and they’re increasingly popular for IoT applications.

The same notion was the root of a concept called the “Semantic Web”, which many saw as the logical evolutionary result of web usage. In fact, the Resource Description Framework (RDF) used in many (most) graph databases came about through the Semantic Web and W3C.

Graph databases shine at storing things that are highly correlated, particularly when it may be the correlations rather than the value of a data element that really matters. Amazon’s Neptune and Microsoft Azure’s Cosmos DB, as well as a number of cloud-compatible software graph database products (Neo4j is arguably the leader), will usually perform much better in contextual applications than RDBMS databases would. That makes them an ideal foundation for applications like IoT, and also for things like network event-handling and lifecycle management. While you don’t need graph databases for AI/ML applications, there’s little doubt that most of those applications would work better with graph databases, and my notion of “contextual services” would as well.
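
To make that “relationships matter more than values” point concrete, here’s a minimal sketch in plain Python rather than against a real Neptune, Cosmos DB, or Neo4j API (their client libraries differ, and I don’t want to imply any particular one). Elements and services are stored as RDF-style triples, and a fault is interpreted by walking relationships instead of inspecting any single value. All of the element names and relationship types are hypothetical.

```python
# Plain-Python sketch (not a real graph-database client) of relationship-first
# event interpretation: given a failed element, walk the stored relationships
# to find everything whose state is now suspect. Names are hypothetical.
from collections import deque

# Triple form (subject, relationship, object), as RDF and most graph stores use.
triples = [
    ("vpn-service-1", "depends_on", "tunnel-a"),
    ("vpn-service-1", "depends_on", "tunnel-b"),
    ("tunnel-a", "rides_on", "router-3"),
    ("tunnel-b", "rides_on", "router-4"),
    ("router-3", "hosted_in", "metro-pop-east"),
]

def impacted_by(failed, triples):
    """Return every element that directly or indirectly depends on the failed one."""
    reverse = {}                                   # object -> subjects referencing it
    for subj, _rel, obj in triples:
        reverse.setdefault(obj, []).append(subj)
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for dependent in reverse.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(impacted_by("router-3", triples))            # {'tunnel-a', 'vpn-service-1'}
```

A real graph database would let you express that traversal as a single query and index the relationships for scale; the point here is only that the relationships, not the stored values, carry the meaning.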

Network service lifecycle automation, a topic dear to my heart and the hearts of anyone with a network, would seem a natural for graph database technology. Network events, since they reflect a state change in a highly interconnected community of cooperative elements, are handled properly when they’re handled in context, so obviously something that can reflect relationships would be a better way of storing and analyzing them. Why then don’t we see all manner of vendor representations on the power of their graph database technology in network management?

We do see an increased awareness of the contextual nature of lifecycle automation, and I’ve illustrated it through my blogs about finite-state machines (FSM) and state/event processing. You can also see it in the monolithic models of network automation, including the ONAP management framework, in the fact that the processing of an event often involves a query into the state of related elements. That raises the question of whether a graph database might serve as an alternative to both FSM and specific status polling.

One barrier to graph database application to network or service lifecycle automation, and one that would apply to application lifecycle automation as well, is the tendency to rely on specific polling for status, rather than on a database-centric analysis of status. Polling for status has major issues in multi-tenant infrastructure because excessive polling of shared resources can almost look like a denial-of-service attack. Back in 2013, I did some work with some Tier Ones on what I called “derived operations”, which was an application of a proposal in the IETF called “i2aex”, which stood (in their whimsical manner of naming) for “infrastructure to application exposure”. The idea was that status information obtained either by a poll or a pushed event would be stored in a database, and applications like lifecycle automation would query the database rather than do their own polling. I2aex never took off, and I didn’t follow through with serious thought about just what kind of database we might want to store these infrastructure events in. I think that graph database storage is an option that should be considered now (and that I should have explored then, candidly).
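
For what it’s worth, the “derived operations” idea is simple enough to sketch. The example below is a hedged illustration, not the i2aex proposal itself: pollers and pushed device events write status into a repository once, and lifecycle automation queries the repository rather than polling shared infrastructure. The table layout and element names are hypothetical, and a graph store could stand in for the relational table used here.

```python
# Hedged sketch of "derived operations": status flows into a repository once,
# and consumers query the repository instead of polling devices themselves.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (element TEXT, state TEXT, ts REAL)")

def record_status(element, state):
    """Called by a poller or a pushed device event; the only code that touches devices."""
    db.execute("INSERT INTO status VALUES (?, ?, ?)", (element, state, time.time()))

def current_state(element):
    """Called by lifecycle automation; no device polling involved."""
    row = db.execute(
        "SELECT state FROM status WHERE element = ? ORDER BY ts DESC LIMIT 1",
        (element,),
    ).fetchone()
    return row[0] if row else "unknown"

record_status("router-3", "up")
record_status("router-3", "degraded")
print(current_state("router-3"))   # degraded
```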

Conceptually, the “state” of a community of cooperative elements (of any kind, network or application) can be determined from the sum of the states of the elements themselves. The relationships between the elements and their states can surely be represented in a graph database, and in fact you could use a graph to represent a FSM and “read” from it to determine what process to invoke for a given state/event intersection. Why not create a graph database of the network and the service, and use it for lifecycle automation?
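
Here’s what that “read the graph at a state/event intersection” idea might look like, reduced to a toy. The states, events, and process names are hypothetical; the only point is that the FSM itself is just a set of labeled edges, which is exactly the kind of structure a graph database stores natively.

```python
# Toy FSM expressed as labeled edges: (current_state, event) -> (next_state, process).
fsm_edges = [
    ("active",   "port_down", "degraded", "reroute_traffic"),
    ("degraded", "port_up",   "active",   "restore_primary"),
    ("degraded", "node_fail", "failed",   "escalate_fault"),
]

def handle(state, event):
    """Look up the state/event intersection and return (next_state, process_to_invoke)."""
    for cur, ev, nxt, proc in fsm_edges:
        if cur == state and ev == event:
            return nxt, proc
    return state, "log_and_ignore"        # unexpected events are recorded, not acted on

print(handle("active", "port_down"))       # ('degraded', 'reroute_traffic')
```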

One potential issue is that the number of relationships among elements grows far faster than the number of elements itself, which means that a graph representing a large network, service, or application might be very large indeed, and that a query into it, even given the high performance of graph databases, might be time-consuming. Still, the concept might have real merit if we could tweak things a bit.

One possible tweak would be to use the same techniques I’ve discussed for creating a “service hierarchical state machine” or HSM from individual service-component FSMs. In the approach I discussed in my blogs, the components of a service or service element reported state changes back to their superior element, which then only had to know about the subordinate elements and not their own interior components. The model constrains the complexity.
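
A hedged sketch of that hierarchy idea: each element tracks the state of its direct subordinates only and reports a single summarized state upward, so nothing has to model (or graph) the whole network at once. The class, names, and summarization rule below are all hypothetical.

```python
# Each ServiceElement knows only its direct subordinates and passes a summary upward.
class ServiceElement:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children_state = {}                   # direct subordinates only
        self.state = "active"

    def child_reports(self, child_name, child_state):
        self.children_state[child_name] = child_state
        summary = ("degraded"
                   if any(s != "active" for s in self.children_state.values())
                   else "active")
        if summary != self.state:
            self.state = summary
            if self.parent:                        # propagate the summary, not the detail
                self.parent.child_reports(self.name, summary)

service = ServiceElement("vpn-service")
region = ServiceElement("east-region", parent=service)
region.child_reports("router-3", "failed")
print(region.state, service.state)                 # degraded degraded
```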

Another possible tweak would be to use AI principles. A service or an application, in the real world, is really a fusion of two models, one representing the resources themselves and another the way that service functionality is related to or impressed on those resources. I believe a graph database could model that, but it might be easier to use AI to bridge the two, correlating a separate graph database for each model.

I’ve always been a fan of state/event tables, but I’m not wedded to an approach if something better comes along. I’d like to hear about any state/event applications mapped to a graph database versus the traditional table, and hear comments from some of my LinkedIn friends on their views on the use of graph databases in applications traditionally seen as state/event-driven. Please do not advertise your product in response; if you want to reference a product, post a link to a blog you’ve done that explains the graph-database connection to state/event handling.

Making the Most of “Citizen” Strategies in IT

Remember “shadow IT?” Even the name seems a little sinister, and for many CIOs it was a threat from the get-go. What shadow IT is all about is the dispersal of information technology purchasing and control to line organizations, creating a parallel set of technology centers. We don’t hear as much about it these days, but a recent article in Protocol made me look back at what I’ve heard from enterprises this year, and it’s interesting.

Despite the seemingly recent origin of shadow IT, it’s actually been around for sixty years. A company I used to work for, a giant publishing firm, had a half-dozen very separate lines of business, and they tried out dispersing IT to the business units—several times in fact. Over a period of 20 years, they went back and forth between central and distributed IT, and gradually ended up in a kind of uneasy middle ground. The reason for all the vacillation is relevant to the current situation.

Line organizations often see the IT department as an enemy. Of the 177 firms I’ve talked with this year, over a hundred said that they had “tension” on goals and costs between IT and line organizations. Almost a hundred said that they had some “shadow IT” activity in place, and none of them indicated they were trying to stamp it out, though half said they were trying to “control” it.

The reasons for the tension vary, but the top two items on the list are pretty consistent across enterprises. The number one issue was delay in getting critical line-sponsored projects implemented. Companies said that it took IT “over a year” in most cases to get a project completed, and the line organization targets were less than half of that. Issue number two was that IT costs were substantially higher than expected, often so high that the business case was threatened. Line departments felt that there was too much IT overhead, and that their allocated costs covered resources and activities that didn’t benefit the line organizations at all.

Line departments reacted to these issues in two specific ways. First, they promoted the notion of low-code/no-code development to take more control over project implementations. Second, they looked increasingly to the use of cloud computing, because as an expense, cloud computing bypassed some executive limitations on just what line organizations could do. “I can’t buy computers but I can rent computing” was a typical line comment.

There’s no question that low-code/no-code “citizen developers” have revolutionized the way line organizations handle many of their projects. Interestingly, enterprises often don’t consider this a form of shadow IT; twenty percent more companies say their line organizations use low-code/no-code than say they have shadow IT. IT organizations have gradually accepted the citizen-developer trend as well; the majority of the enterprises that use it said they didn’t put any special restrictions on use, though most did say that IT played a major role in selecting the tools.

The cloud is another matter. When asked whether cloud projects driven from line organizations were more successful or more likely to have issues, enterprises (and even a majority of CFOs) said that “citizen-cloud” projects often failed to deliver on expectations, were more likely to experience significant cost overruns, and “usually” required IT organizations to step in to correct problems. CIOs’ biggest problem with citizen-cloud projects was security/compliance, which came about because of data stored in the cloud without proper precautions.

The specific trend that Protocol talked about in the referenced article, the notion that software and even hardware vendors would start selling (or trying to sell) to line organizations rather than IT, wasn’t particularly common in my sampling. Only 27 of the 177 enterprises indicated that this had happened, but of course there’s always the chance that the attempts weren’t all recognized even by CIOs and CFOs. Both the CIO and CFO organizations indicated that they believed that their current policies on software and hardware purchase by line departments were satisfactory. In general, those policies required that the dollar amount be small (usually somewhere between five and ten thousand dollars maximum) and that the software and hardware be used entirely within the purchasing department. Network equipment purchasing by line organizations was rarely allowed (11 out of 177), and the same policy held with software that had to be run on IT-owned or multi-tenant facilities, though that was allowed by 38 of 177 enterprises if the hosting was done in the cloud.

I was interested in how the sellers might view the idea of going around IT, and there was considerably less consensus there than among the buyers. My sampling here is more limited (75 vendors), but over two-thirds of hardware and software vendors said that they would be “reluctant” to prospect line organizations, partly because they were afraid of alienating their major buyer (IT) and partly because they were afraid of creating a very visible failure that could taint their reputation within the company, or even in other firms in the same vertical market.

Cloud providers didn’t share this reluctance, though they were reluctant to talk about it and their policies might vary significantly across providers, sales regions, and even individual salespeople. The best I could do here was to identify 49 of 177 companies who said their line organizations had been prospected by cloud providers or cloud resellers.

Perhaps the most interesting thing about all these shadow-IT, citizen-IT plays is that they dependably spark IT initiatives to enhance IT responsiveness and manage costs. Every enterprise that supported citizen developers also had a “growing” community of low-code/no-code developers within the IT organization, aimed at providing a rapid response to the line project requests. These resources were primarily directed at projects associated with long-running applications rather than the one-off applications that (slightly) dominate line department projects. About a quarter of enterprises either had, or were exploring, ways of supporting citizen-cloud requirements out of IT.

If you dig into citizen/line/shadow IT, you find the classic irresistible force versus the classic immovable object principle in action. Line organizations see business problems and opportunities first, and because most have direct profit-and-loss responsibility, they also see considerable pressure to address them. Given that enterprise IT has enough problems in acquiring skilled technology workers, it would be surprising if line organizations had a rich pool of IT skills, so there’s rarely a good understanding of how to run an IT project, or how to assess whether one being run by the CIO is being run right. Some bickering here is inevitable, but that isn’t the main issue.

The main issue is that we’re continually trying to move IT closer to the worker. We’ve gone from batched punch-card retrospective reporting to “mainframe interactive” transaction processing, to distributed computing, to hybrid cloud. That evolution frames the capability to host more worker-interactive models, but it’s the applications that really establish the way workers and IT relate. What we have in shadow IT today is an attempt to make applications as dynamic as application hosting. That’s partly a low-code/no-code problem and partly a data architecture problem.

It’s hard to advance applications toward citizen development, even with coding tools, without creating “data middleware” that makes information access easy. The great majority of citizen developer projects are really data analysis problems, and so enterprises who are trying to use citizen developers or support the activity should take a hard look at their data analysis and modeling strategies, and try to create a unified data model to support line department information use. Without that, it may be more difficult to get full benefit of low-code/no-code and even citizen cloud empowerment.

Cisco Has Giant Plans, but Faces Giant Hurdles Too

If there’s a vendor in the networking space that we need to watch closely, it’s Cisco. Not only are they a giant, and a master at account control, they’re also the most inventive when it comes to addressing (or creating, or some might say, fabricating) future trends. Their latest earnings report adds some color, perhaps, to some of their recent strategies.

Cisco reported a revenue beat, driven entirely by their infrastructure platforms category, which was up over thirteen percent year-over-year. Applications, security, and other products all came in below estimates. The Street view was that even though Cisco’s revenue and EPS (slightly, in the EPS case) beat estimates overall, and even though outlook was good, gross margins guidance and supply chain comments somewhat offset the positives. I think it’s the source of their revenue beat that raises questions.

What’s kind of ironic here is that Cisco has been doing all manner of things, including virtually all its recent M&A, to make itself less dependent on infrastructure platforms, the area that actually did well for them in the quarter. The areas that were judged most problematic by the Street were subscriptions, software, security, and applications, and those of course were the areas that Cisco was trying to boost with announcements and acquisitions. The big year-over-year revenue gain also came from a very low reference point.

It’s hard not to see the picture that Cisco painted as being one of a simple recovery of deferred spending across all types of customers. Despite swings in the stock market overall, there’s a general view that global economies are recovering and that businesses are now looking to execute on capital programs that were deferred because of the virus. That’s the classic rising-tide story, at least where companies have the account control to execute on the changing plans, which Cisco clearly does.

Account control issues may play into the problematic factors in Cisco’s numbers, too. Cisco has great strategic influence in the network. Software and applications typically fall outside the normal zones of Cisco influence in an account, and subscription and security are spaces that are shifting in response to market factors that are best exploited where strong account control exists beyond simple networking. The big push in subscription is Cisco Plus, which is a subscription hardware service that’s arguably more focused on the data center than the traditional network, particularly in terms of the organizations who’d own decisions within each buyer.

To all appearances, Cisco is shifting in response to a combination of enterprise truths. First, it’s becoming very difficult for enterprises to acquire and retain skilled technical people, in networking and in IT/operations. This has driven increased enterprise and even network operator interest in “managed services” for the former, and cloud provider partnerships for the latter. Second, lack of strong internal technical skills has made many CFOs antsy about capital projects promoted by the technical teams. Combine that with the fact that enterprises are really consuming more VPN and SD-WAN services than creating network services of their own, and that most network growth comes in the data center network, and you can see why Cisco is looking at a broader footprint in terms of both products and influence.

Another shift that seems clear is Cisco’s “ecosystemic vision”. Decades ago, I read a sci-fi story about a future age when “marketing” consisted of using all manner of tricks to create what they called a “circular trust” (meaning a feedback-based supporting set of monopolies) where a lock-in was created by having one product promote another product, which in turn promoted the first one again in a feedback loop. Cisco seems to have this in mind, and wants to link their server UCS strategy, their data center strategy, their subscription strategy, their cloud strategy, and their software strategy so that accepting any of them pulls the rest through.

There are obvious benefits to this kind of “herd of camel’s noses” approach, and there’s also a competitive dynamic to consider. Remember Cisco’s strength in account control and strategic influence? If you have better, far better, control than your rivals, then broadening the playing field works in your favor because you can respond broadly and your competitors can’t. Cisco’s rival Juniper, who actually has at least as good and likely better individual elements to frame camel-herd ecosystem trusts, doesn’t do nearly as well in ecosystemic positioning, so Cisco is punching into unprotected competitive anatomy here.

Ecosystemic positioning demands an ecosystem, of course, and perhaps your ecosystem has to embrace stuff that’s still not in your own portfolio, so your herd of camel-noses can start working their way into tents. The recent steps that Cisco has taken in monitoring (exploiting ThousandEyes and acquiring Epsagon) suggest that Cisco wants to spread a management/observability umbrella over both application components and networks. Their recent partnerships with cloud providers and interconnect players to enhance SD-WAN cloud routing suggest that they’re also trying to elevate their networking capabilities, extending from private elements and MPLS VPNs to cloud/Internet routing of application traffic. All of this is aimed at addressing what they clearly believe is a sea change in how enterprises deploy applications and build and use application connectivity.

A software success, a big one, would surely help them in their cloud and application management goals, and Cisco has done a ton of software M&A, even in their most recent quarter. A lot of their software revenue comes from that M&A, which has led some to suggest Cisco just bought its way into software revenue growth without much regard for what all the stuff did in an ecosystemic sense. There’s some validity to that point, but it’s also true that all the network vendors and network operators are starved for software engineering expertise, particularly people who know cloud-native techniques. Cisco may be trying to build a workforce as much as an ecosystem, or more.

It’s tough to call how this is going to play out. Cisco has a plan that, if executed well, could work. They have the account control needed to promote their plan to buyers, and they have the nakedly aggressive marketing needed to make that promotion easier for the salesforce to drive home. Extreme Networks’ recent M&A suggests it’s taking its own run at cloud networking, and an activist investor is rumored to be looking to split Juniper’s AI and cloud networking off the legacy switch/router business. That seems to prove it’s not just Cisco that sees the changes, and the opportunities. Still, Cisco could make a go of this.

“Could” is of course the operative term. Cisco, like many marketing giants, often mistakes saying something for doing something. There is no question that Wall Street likes the prospect of a bright new set of business opportunities, and that those same announcements offer additional credibility to Cisco’s sales pitches across the board. There’s also no question that at some point, Cisco has to deliver substantial progress here or let all this ecosystemic stuff gradually fade away. In which case, they’ll need something else to replace it.

5G Slicing, MVNOs, and Acquisition and Retention

The value of MVNO (mobile virtual network operator) services for wireline incumbents has always been a question mark, given that many cable operators who tried the notion out didn’t get what they’d hoped for. Still, as this Light Reading piece says, there have also been MVNO success stories among at least some of the larger cable companies. One of the 5G features that always gets touted, network slicing, has the potential to reduce the scale and cost of MVNO relationships with mobile operators. Could this pull through 5G Core, and could it induce cable companies of all sizes to have a go at mobile 5G?

There are quite a few MVNOs out there today, and some analysts on Wall Street have told me that a sizable majority of them are offering prepaid mobile service. The reason is that the big mobile operators all want to own the post-pay customers whose spending is highest and whose loyalty is most important. The theory behind prepay MVNO business is that the operator really has to pay only marketing costs and the MVNO charges to their supplier. If an operator can identify and exploit a valuable niche market, or in particular if they have some current customer base they can exploit, they can make MVNO service a success.

Well, that’s been the theory, anyway. I’ve had a lot of discussions with mobile service planners, including MVNO planners, and there seems to be another dimension of the whole MVNO thing emerging. It may be that this new dimension will decide whether MVNO opportunities really drive network slicing in 5G, and how many smaller and more specialized non-mobile operators (including cablecos) will get into the MVNO space.

I’ve said in many of my blogs that operators almost universally spend more on operations expenditures (opex) than on capex. Mobile operators are no exception; their expensed costs exceed capex by an average of 45%. If you’re wondering why opex connects to MVNO drivers, a look at how those expenses are divided will answer part of the question. The biggest contributor to opex, meaning operations-related expenses, is customer acquisition and retention, which makes up about 40% of opex overall.
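
Putting those two figures together (they’re the only inputs; the rest is arithmetic) shows why this matters so much: acquisition and retention alone comes to nearly a quarter of an operator’s combined capex and opex.

```python
# Worked arithmetic using only the figures cited above: opex exceeds capex by
# about 45%, and acquisition/retention is about 40% of opex.
capex = 100.0                        # normalize capex to 100 units
opex = capex * 1.45                  # "exceed capex by an average of 45%"
acq_retention = opex * 0.40          # "about 40% of opex overall"

print(round(acq_retention, 1))                          # 58.0 units
print(round(acq_retention / (capex + opex) * 100, 1))   # ~23.7% of capex + opex
```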

Suppose you’re a cable company or other non-mobile incumbent. You can spend that money on advertising and maybe special offers, which is the traditional approach, but suppose that you offered all of your customers a sweetheart mobile service deal through an MVNO relationship. Could it be that such a deal, offered to your wireline customers only, would increase your acquisition and retention rates?

Some cable giants have suggested to me that this is exactly what happens, and that the most valuable thing to them is that such mobile service bundles will tend to improve customer retention significantly. One cable strategist puts it this way: “mobile services to our base is like the ad you never run but your customer always sees.” If your customer looks to buy a different wireline broadband service, they have to consider that they’ll also need a new mobile service, and that’s likely to be more costly. Of course any phone deals and the hassle of number portability can be an incentive to stay the course, too.

The potential for mobile services to increase wireline broadband and TV sales means that it doesn’t have to be as profitable in itself to be useful. Nobody I’ve talked with is prepared to say they would offer MVNO mobile service as an incentive, as a loss leader, but they’ve suggested that thin margins would be acceptable, and some said that if they found it improved retention in particular, the margins could be even thinner.

Obviously the profit margin for MVNO mobile service would depend in part on what the relationship with the real mobile operator costs. That’s where some (emphasis on “some”) operators think network slicing and 5G could come in. Some prospective MVNOs tell me that their mobile operator prospects have suggested that 5G network slicing could allow them to reduce their costs to MVNOs. Some mobile operators have said the same, but so far, nobody has been willing to offer exact numbers.

One mobile operator did say that if they found 5G network slicing reduced the cost of offering MVNO services by up to ten percent, they’d be inclined to drop their prices by the same amount to increase their business. If costs were to drop by more than that, they’d pass the “majority” of the reduction on to MVNO customers.

Both the mobile operators and prospective MVNOs said that WiFi hotspot and roaming support was as important to their cost/price relationships as network slicing. Anything that unloads data traffic, and in particular, video content, is attractive as a means of reducing network service costs to the MVNO. For this to work, both parties say, there has to be both WiFi calling and automatic switchover to WiFi when in a suitable hotspot. Smaller MVNO prospects would like to see mobile operators include both solicitation for hotspot locations and support for call/data management in their MVNO offers.

All of this suggests that mobile operators who wanted to capitalize on MVNO deals should create a kind of MVNO package that includes “slices”, hotspot deals, and WiFi integration into both call/text and data services. Larger MVNOs like Comcast or Charter may be able to do their own deals and also provide the technology support needed for other important features, but smaller ones wouldn’t likely have any chance, and certainly wouldn’t be comfortable with either task.

It also shows us (yet again) that you can’t just toss 5G features out there and expect everyone to figure out how to exploit them. Supporting MVNO relationships at a reasonable cost for both parties is critical to the expansion of MVNO services, and critical to early adoption of network slicing. It’s also a way to facilitate bundling mobile and wireline services for operators who can’t afford or justify their own mobile network deployment. However, there’s more to it than just slicing 5G.

Another lesson to be learned here is that feature differentiation of connection services is very difficult; bits are bits. When you can’t feature-differentiate, price differentiation and commoditization tend to follow, and that’s a path that no network operator wants to take. The fact that customer acquisition and retention costs are a larger percentage of opex than they were five years ago, and are likely to be an even larger portion five years hence, illustrates this point. MVNO deals may help reduce those costs, but a better differentiation strategy for connection-oriented operators is the only long-term solution.

Is Fiber in Our Future After All?

What the heck is going on with fiber to the home? There seem to be a lot of announcements about new fiber deployments. Isn’t there a problem with fiber deployment costs in many areas? Are we headed to universal fiber broadband after all? The answers to these questions relate to the collision between demand and competitive forces on one hand, and hard deployment economics on the other. We’re surely going to see changes, but they’re going to make the broadband picture more complicated, not more homogeneous.

Demand shifts, when they happen, change everything, and streaming video is such a shift. Consumers have long been substituting recorded video for live viewing, and as companies like Amazon, Hulu, and Netflix started offering streaming programming from video libraries, many have discovered that the broader range of material gives them more interesting things to watch. We have more streaming sources than ever today, and companies who had relied on live, linear, TV programming are finding that they’re struggling to maintain customers in that space, while their broadband Internet services are expanding.

It’s really streaming that changes the broadband game, and the fiber game. Streaming 4K video works well for most households at 100Mbps, better at 200Mbps for large families, and those speeds are beyond what can be delivered reasonably over traditional copper-loop technology. Cable companies, whose CATV plant has much higher capacity to deliver broadband, are already pressuring the telcos in the broadband space, and fiber to the home (FTTH) is the traditional answer.
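As a sanity check on those speed thresholds, here's a rough capacity calculation. The roughly 25Mbps per 4K stream and the 20% headroom for other household traffic are my planning assumptions, not operator figures.

```python
# Rough check of how many concurrent 4K streams an access speed can carry,
# assuming ~25 Mbps per stream and 20% headroom for other household traffic.

STREAM_4K_MBPS = 25.0
HEADROOM = 1.2

def concurrent_4k_streams(access_mbps: float) -> int:
    return int(access_mbps / (STREAM_4K_MBPS * HEADROOM))

for speed in (50, 100, 200):
    print(f"{speed} Mbps: about {concurrent_4k_streams(speed)} concurrent 4K streams")
```

At 100Mbps that works out to about three simultaneous 4K streams with room to spare, and roughly six at 200Mbps, which is why the large-family threshold sits there and why sub-50Mbps copper loops fall short.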

FTTH, even in its passive-optical-network form, has a much higher “pass cost” than CATV cable (today, roughly $460 per household for PON versus roughly $185 for CATV), and both these technologies are best suited to urban/suburban deployment. Telcos like Verizon, with a high concentration of urban/suburban users and thus a high “demand density”, have countered the cable companies with fiber effectively. Where demand densities are lower, as with AT&T, going after urban/suburban users alone risks offending a large swath of rural customers, and even creates regulatory risk. AT&T, staying with our example, has lagged Verizon in fiber deployment, though they’ve been catching up recently.

Streaming video demand, and competition between CATV and fiber, has been increasing telco tolerance for higher fiber pass costs, which in any case have fallen by roughly $150 since the early days of Verizon’s FiOS. The big problem with both CATV and fiber is the need to trench and connect the media. You need rights of way, and you need crews with equipment, who have to be very cautious not to cut wires or pipes in crowded suburban rights of way and easements.

Another factor in the fiber game is that suburbs are growing, which is gradually making the suburban areas more opportunity-dense, and improving the return on fiber. It’s a bit too early to take the assertions that COVID will drive a major relocation push seriously, but it is possible that people who have accepted the benefits of city life are more likely to rethink the risks. WFH isn’t going to empty cities, but it may swell suburbs enough to shift the economies of fiber a bit…if only a bit.

The game-changer, potentially, is 5G in general, and millimeter-wave in particular. Feed a node with fiber (yes, FTTN), use 5G to reach out through the air a mile or so to reach users, and you can achieve a low pass cost. Just how low depends on a lot of factors, including topology, but some operator planners tell me that a good average number would be $205 per household. Something like this could deliver high-speed broadband to rural communities not easily served by any broadband alternative. Not only that, the technology could be used by competitors to the wireline incumbent; all you need is a single fiber feed to the town.
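Here's a back-of-the-envelope version of that pass-cost math. Every input other than the roughly $205 average quoted by the operator planners is an illustrative assumption of mine.

```python
# Illustrative 5G/FTTN pass-cost model: fiber feed plus node cost, divided by
# the households within radio reach. The dollar figures are assumptions chosen
# to land near the ~$205 average the operator planners quoted.

def ftn_pass_cost(fiber_feed_cost: float, node_cost: float, households: int) -> float:
    return (fiber_feed_cost + node_cost) / households

# A $40,000 fiber feed plus a $35,000 node reaching about 370 households
# works out to roughly $203 per household passed.
print(round(ftn_pass_cost(40_000, 35_000, 370), 2))
```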

“Predatory millimeter-wave” may be the decisive technology in fiber deployment, ironically. In the first place, it’s a direct consumer of fiber in areas where FTTH is simply never going to happen because of low demand density. Second, it’s the only realistic model for competitive home broadband in areas where a competitor would have to be established from scratch. Finally, it’s good enough to support streaming services, but not as good as fiber, so it’s going to tip the scales on pass-cost tolerance further, encouraging telcos to deploy fiber in suburban areas previously considered marginal.

There is also a chance that the FTTN deployment associated with millimeter-wave 5G will help expand FTTH. Clever planning could encourage symbiosis between the two fiber models. You could create a PON deployment, some sites of which were FTTN nodes rather than homes/businesses, and reach out from the edges of FTTH for a further mile or so. You could also selectively feed PON from a traditional fiber-fed node as well as supporting millimeter-wave.

Where millimeter-wave broadband could really shine is in conjunction with the utilities. Anyone who runs wires or pipes can pull fiber, and many have done that. The question for many who have is how to leverage it effectively, and the 5G/FTTN hybrid could do the trick. Some of my broadband data modeling worked on what I called “access efficiency”, which relates to how dense the rights of way are. In many rural areas, access efficiency is such that a majority of the rural population could be reached from a utility right of way. If we imagined a kind of “linear PON”, with fiber feeding 5G-mm-wave nodes along a right of way, any site within a mile or so of the right of way could be reached. If the cost of one of those nodes were reasonable, it could be a better option than fiber-to-the-curb, which relies on MoCA over coax for the household connection.
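A minimal sketch of that “access efficiency” idea for a linear PON: count the households inside the band a mile either side of the right of way. The route length and household density in the example are hypothetical inputs, not outputs of my modeling.

```python
# Households reachable from a linear PON along a right of way: a corridor
# two reach-widths wide, times the route length, times household density.
# The example inputs are hypothetical.

def households_reachable(route_miles: float, reach_miles: float,
                         households_per_sq_mile: float) -> int:
    corridor_sq_miles = route_miles * (2 * reach_miles)
    return int(corridor_sq_miles * households_per_sq_mile)

# Twenty miles of utility right of way, one-mile reach, and 30 households per
# square mile: about 1,200 households passed from a single fiber run.
print(households_reachable(20, 1.0, 30))
```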

I think we’re going to see some fiber expansion. We’ll also surely see more people predicting universal fiber, but in the US at least I don’t think there’s any realistic chance of that. In fact, too much focus on impractical fiber strategies could end up hurting technologies that will actually boost fiber deployment overall. 5G/FTTN may not satisfy the fiber-to-everyone school, but it would radically improve broadband access, and increase fiber deployment too. That should be our goal.

Will even 5G/FTTN create universal gigabit broadband in the US and other countries with similar demand density variations? Not likely, but that’s more a public policy question than a technology question. While we’re answering it, people who live in sparsely populated areas are likely to find themselves with fewer, and poorer, broadband choices.

What Would an Innovative Telco Opportunity Look Like?

OK, if telecom innovation is dead, as both the Financial Times and my recent blog agreed it was, what would “innovation” have to look like to restore things? Can public cloud providers play a role in helping telcos innovate, or would the reliance on them just create another form of “disintermediation”? Let’s look at these questions, re-making some past points with some fresh insights, and making some new ones.

People have been telling me for decades that the telcos are motivated by competition and not by opportunity. Given the nature of their business, which has enormous capital costs and relatively low return on capital, most competitive threats emerge from other telcos. At least, if one considers “competition” to mean competing with telcos’ current businesses. This simple point may help explain why telcos have been so reluctant to branch out to better and more innovative services; they don’t value the opportunity those services might generate, and they fear a new host of competitors.

After posting the blog I reference above, I got some pushback from people who believed that telcos could innovate by reselling what are effectively OTT forms of communications services. I don’t believe that the revenue opportunities for telcos here are sufficient to create a bump on their profit-per-bit decline, but that’s not the only problem with the notion. Being a reseller exposes telcos to all the new skills required for retail service promotion, and their reluctance to compete in the OTT space is partly due to their desire to avoid proactive selling at the retail level, when they’re used to simply taking orders.

Some OTT position is essential, given that all the credible opportunities that have emerged or seem likely to emerge relate not to connectivity but to how connectivity can be used to deliver new services. In short, the connectivity that’s the product of the telcos is simply a growth medium supporting what could (inelegantly, to be sure) be called OTT bacteria. The telcos are not only reluctant to embrace OTT services; there are positive reasons they can’t. The biggest such reason is that you can’t compete with OTT players when your own competing services have to help cover connectivity profit shortfalls that those OTTs don’t have to worry about.

The solution to this problem seems clear to me. If your original product (connectivity) has commoditized, you need a new product. If you don’t want to field an OTT offering, or don’t believe it would work for the reason I just cited, then you need to field another wholesale offering, something beyond connectivity, where you can establish a viable wholesale business before disintermediation takes hold. What? I think there’s only one answer, and that is information.

OTT services are typically experiences of some sort, and experiences are valuable to the extent that they’re personalized. You can’t just shove video at people, you have to give them something they want to watch. Weather isn’t interesting if it’s for somewhere far from where you are and where you might be heading. These are simple examples, but they prove that people want stuff in the context of their lives, which is why I’ve called the sum of the stuff that personalizes experiences contextual services.

There are a lot of things that create contextual services, the most obvious these days being tracking web activity. Ads are personalized by drawing on what you buy and what you search for or visit. This sort of contextual service isn’t easily available to telcos because it relies on information gathered from OTT relationships, the very area telcos don’t want to get into. Telcos need a kind of personalization that OTTs don’t already have an inside track with, and the best place to find it might be location and location-related information.

Anyone who watches crime TV knows that mobile operators can pin down your location from cell tower access, not perhaps to a specific point in space but to a general area. They know when you move and when you’re largely stationary, and they could know when you are following a regular pattern of movement and when you’re not. Best of all, all the stuff they know about you can be combined with what they know about others, and what they know about the static elements of your environment, like buildings, shops, sports venues, and so forth.

Obviously, using data from other people poses privacy issues, but there are plenty of examples of services based on even individual locations already, and an opt-in approach would be acceptable by prevailing industry standards. When we generalize to communities of people, there’s no issue. For example, mass movement of mobile users associated with things like a venue letting out would signal a likely change in traffic conditions on egress routes. That movement can be tracked by cell changes by the users, something that’s already managed by telcos.

Is a shopping area crowded, or relatively open? Are there a lot of people in the park? Those are questions that analysis of current mobile network data could answer, though obviously you’d need to aggregate information from multiple providers. That’s OK, because what we’re looking for, recall, is a wholesale information offering. Telcos could offer an API through which retail providers could gather their information, and aggregate it as needed with information from other telcos.
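To make the wholesale idea a bit more tangible, here's a minimal sketch of the aggregation that might sit behind such an API. Every name in it (CellReport, the grid-square keying, area_density) is hypothetical; I'm not describing any real telco interface.

```python
# Hypothetical wholesale "crowd density" aggregation: anonymized per-cell
# device counts rolled up into per-area totals a retail provider could query.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CellReport:
    grid_square: str       # coarse geographic bucket, e.g. "park-east/500m"
    attached_devices: int  # anonymized count from one cell

def area_density(reports: list[CellReport]) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    for r in reports:
        totals[r.grid_square] += r.attached_devices
    return dict(totals)

# A retailer could merge responses like this from several operators before
# deciding whether a shopping area or park is crowded.
sample = [CellReport("park-east/500m", 120), CellReport("park-east/500m", 85)]
print(area_density(sample))  # {'park-east/500m': 205}
```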

They could also aggregate it with IoT data. One of the big problems with the classic IoT public-sensor model is that nobody has any real incentive to deploy public sensors; there’s no ROI because their use isn’t confined to the organization that paid for deployment. Suppose that IoT sensors were deployed by telcos, and that rather than being public, they were information resources for the telcos to use in constructing other wholesale contextual services. Whatever we believe IoT could tell us could be converted into a service that digests the sensor data and presents it as information, which is what it is.
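That "digest the sensors into information" step might look something like this; the occupancy fields and the "crowded" threshold are invented purely for illustration.

```python
# Hypothetical digest of raw occupancy-sensor counts into a wholesale
# information record; the fields and the threshold are illustrative only.

from statistics import mean

def digest_occupancy(readings: list[int], crowded_above: int = 50) -> dict:
    avg = mean(readings)
    return {
        "average_occupancy": round(avg, 1),
        "peak_occupancy": max(readings),
        "status": "crowded" if avg > crowded_above else "open",
    }

print(digest_occupancy([42, 58, 61, 49]))  # average 52.5, status "crowded"
```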

There’s more that could be done with all this data. Recall from my previous blogs that many advanced IoT opportunities involve creating a “digital twin” of a real-world system. The “real world” here could be the real world in the local sense of a given user. We already have sophisticated mapping and imaging software that can construct a 3D reality of places, and that could be used to provide augmented-reality views of surroundings, with place labels and even advertising notices appended.
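One way to picture that local digital twin is as static map elements joined to the dynamic crowd data, producing the labels an AR view would render. The classes below are purely illustrative, not any particular mapping product's data model.

```python
# Illustrative local digital twin: static places from mapping data annotated
# with current crowd estimates for an augmented-reality overlay.

from dataclasses import dataclass

@dataclass
class Place:
    name: str
    kind: str         # "shop", "venue", "park", ...
    grid_square: str  # links the static element to mobility/IoT data

def ar_labels(places: list[Place], density: dict[str, int]) -> list[str]:
    return [f"{p.name} ({p.kind}): ~{density.get(p.grid_square, 0)} people nearby"
            for p in places]

places = [Place("Riverside Park", "park", "park-east/500m")]
print(ar_labels(places, {"park-east/500m": 205}))
```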

The notion of a real-world digital twin is the ultimate piece of the contextual-service puzzle. A telco could obtain, or even contract for, the basic software needed, and of course there’s likely no organization more used to large capital expenditures than a telco, so they could also fund a significant IoT deployment to feed sensor-based services. The sale of wholesale services obtained from these systems could then be a revenue kicker, and it would still allow telcos to stay out of the retail application business for as long as it took for them to gain comfort with that space, which might never happen, of course.

I don’t think it’s likely that telcos will do this sort of thing. I think public cloud providers might well consider it, and might form relationships with telcos to obtain information from mobile software to feed the information services and real-world digital twin. It’s hard to imagine this kind of thinking from a telco that hasn’t been able to come to grips with cloud-hosting even elements of critical 5G infrastructure, and is turning to public cloud providers instead.

I think this is an example of an opportunity that telcos will be so slow to recognize that they’ll end up losing it to another player. Likely, to their own public cloud “partners”. Instead, they’ll try to convince themselves that some different form of connection services is innovation. They’ll succeed in the convincing part, but what they’ll get won’t be innovative at all, it will be another form of telco business as usual.

Telecom’s Innovation Failure: How Did We Come to This?

Why has a Financial Times article suggested that “Telecoms innovation talk may be nothing but hot air?” It’s probably more important to wonder if it’s true, of course. Many pundits (myself included) have talked about this issue before, and even some telecom executives have offered their own views. I recently recovered a hoard of my past email exchanges, including those relating to some very significant telecom initiatives that turned out to be nothing. Some real-world experiences may help focus this long-standing topic on something useful, or at least uncover why nothing seems to be useful at all.

My first experience with the issue of telecom innovation came way back in the 1980s. The Modified Final Judgment had broken up the old Bell System, separating telephony in the US into long-distance and “Regional Bell Operating Companies” or RBOCs. The RBOCs established their own collective research arm, Bell Communications Research or Bellcore, and I was at a meeting there, doing some consulting on a project. It was a large and diverse group, and in an introduction, the sponsor of the project made a comment on development of what today would likely have been called an “advanced” or “OTT” service.

At this point, one of the attendees whispered something, and to my surprise, three-quarters of those present got up and walked out. The whisperer, a lawyer for one of the RBOCs, paused and said “I’m sorry, but this discussion is collusion under the new regulations.” That was the end of the meeting, the end of the project, and the end of free and open discussion of advanced service topics.

About three years later, another Bellcore project came up, relating to the creation of specialized communications services for the financial industry. The project had two phases, one relating to fact-finding research into how the industry used communications, and one that would then identify service targets of opportunity. In the meetings relating to the first phase, the project manager made it clear that “communications services” couldn’t touch on service features above the network. I’d been involved with Electronic Data Interchange (EDI) projects at this point, and I wondered if the use of a structured “data language” like EDI could serve. No, I was told, that’s the wrong level. It has to be about connection and not data or it’s an “advanced service” and has to be done via a fully separate subsidiary.

Fast forward to the decade of the 2000s, and I was a sort-of-de-facto CTO of the IPsphere Forum (IPSF). This body was created to build what would be the first model of a composed, multi-element, multi-stakeholder service architecture, with “services” made up of “elements”. The group was making great progress, with full support from Tier One operators globally, but one day representatives from the TMF showed up. Some EU Tier One operators had been told by their lawyers that they couldn’t be in the IPSF any longer, because it wasn’t “open” enough to be acceptable under EU telecom regulations. The IPSF had to be absorbed by the TMF, a body that was sufficiently open. It was absorbed, but there was no real IPSF progress after that.

When the Network Functions Virtualization (NFV) initiative came along, it was created as an “Industry Specification Group” or ISG under ETSI. It had significant industry support, much as the IPSF had, but the telecom members seemed content to advance the body’s work through a series of PoCs (Proof of Concepts), and these were put together by vendors.

I’d fought to have the NFV model be based from the first on prevailing cloud standards, with no new “standards” at all. The telecom people believed their application was unique, demanding extraordinary (“five-nines”) reliability, for example, so they rejected that approach. The NFV ISG is now trying to retrofit cloud-native to NFV, of course.

Cutting through all of this is a truth I came upon by surveying telecom players for four decades. There’s a very stylized way of selling to the telcos. You engage their “Science and Technology” group, headed by the CTO, and they launch a long process of lab trials, field trials, etc. Of all the lab trials launched, only 15% ever result in any meaningful purchase of technology.

So what do my own experiences seem to say about telecom innovation? Let’s review.

First, regulations have hamstrung innovation from the start. At the dawn of “deregulation” and “privatization” in telecom, when innovation was the most critical, the regulatory bodies threw every obstacle they could into the game, with the goal of ensuring that the former regulated monopolies couldn’t benefit from their past status. They succeeded, and the industry didn’t benefit either.

Second, people who run companies with their hands tied can only learn to punt. Regulations prohibited innovation, so regulations created a culture that innovators would flee, and what was left was a management cadre who accepted and thrived on legacy. The companies who gave us some of the most significant inventions of modern times became just “buyers”, and buyers don’t design products; vendors they buy from do that. Telecom innovation was ceded to vendors, and those vendors represented legacy services and not innovation.

Third, lack of understanding of cloud technology, created by ceding innovation to vendors, hamstrung any telco attempts to define innovative services or exploit innovative technologies. The first two points above have made telcos a less-than-promising place for innovative cloud-native technologists to seek a career. There are plenty of vendors who will pay more, so it’s those vendors (new to the telecom space) who employ the innovative cloud-native technologists, and it’s they who would have to promote new technology ideas to the telcos.

Finally, the innovative vendors don’t see the telecom space as their primary market, and some don’t see that space as a market opportunity at all. A telecom sales cycle is, on average, four times as long as the sales cycle to a cloud provider, and two to three times as long as to an enterprise. Telecom sales, 85% of the time, don’t result in any significant revenue for the seller, only some pilot spending. Add to this the fact that there are few (if any) people inside the telco who would understand the innovative, cloud-native thinking required, and you have little or no incentive to push new ideas into the telcos from the vendor side.

Increasingly, telecom is mired in legacy-think, because the operators are staffed with legacy thinkers in order to maximize profit per bit from services whose long-term potential is nothing other than slow decline. The hope that they’ll be redeemed by the vendors who support and profit from that myopia is way less than “speculative”. The fact is that neither group is going to innovate in the space. That means that the future of network innovation is the cloud, and that’s going to establish a whole new world order of vendors and providers.