Is “Cloud Dominance” the Same as Cloud Monopoly?

Is “dominance” the same as “monopoly”? That’s a question many regulators are wrestling with, and sometimes they’re also wrestling with lobbying and nationalism. In the UK, Ofcom (the UK communications regulator) has been looking at public cloud services, and recently referred Amazon and Microsoft to the UK’s competition authority (roughly the UK equivalent of the US FTC). One question this raises is whether the public cloud market, which depends on economies of scale, can be efficient with anything more than a small number of competitors. Another question is how a competitive market in the space could be promoted.

Cloud services are very much like network services in that they depend on earning revenue from an “infrastructure” that has to be deployed and operating before selling services is even possible. Telcos fifty years ago used to draw a graph to illustrate this challenge. On Day One, the operator starts deploying infrastructure, and that process continues for a period of time before credible scope is reached. During that period, the curve runs negative because cash flow is negative. At some point, service sales kick in and investment falls off, and the curve flattens and then ascends, finally crossing the zero axis that represents neutral cash flow. It then continues into the profit phase. The negative part of the chart is called “first cost”, the outflow that has to be carried until the service begins to pay back on a net basis.
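To make the shape of that curve concrete, here’s a minimal sketch with purely illustrative numbers (the build period, monthly spend, and revenue ramp are all assumptions). The deepest point of the trough is the “first cost,” and the zero crossing is cash-flow breakeven.

```python
# Illustrative "first cost" curve: cumulative cash flow for a new infrastructure build.
# All numbers are hypothetical, chosen only to show the shape of the curve.

def cumulative_cash_flow(months=60, monthly_capex=10.0, build_months=18,
                         revenue_ramp=0.75, max_revenue=15.0):
    """Return a list of cumulative cash flow values, one per month."""
    cumulative = 0.0
    curve = []
    for m in range(months):
        # Heavy build spending early, sustaining capex afterward.
        capex = monthly_capex if m < build_months else monthly_capex * 0.2
        # Revenue starts ramping partway through the build and saturates.
        revenue = min(max_revenue, revenue_ramp * max(0, m - build_months // 2))
        cumulative += revenue - capex
        curve.append(cumulative)
    return curve

curve = cumulative_cash_flow()
first_cost = -min(curve)                      # deepest point of the trough
breakeven = next((m for m, v in enumerate(curve) if v >= 0), None)
print(f"First cost (maximum negative cash flow): {first_cost:.1f}")
print(f"Cash-flow breakeven month: {breakeven}")
```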

In the public cloud space, the “first cost” problem is exacerbated when there are established competitors in place while a new one is trying to gain traction. These competitors, having established customers and having paid their first costs already, are in a stronger position. They also have more infrastructure in most cases, which means at least somewhat better overall economy of scale. They have real estate and facilities established where markets demand they be located for efficient operation, too, and they understand both the operation of their gear and the marketing and sale of their services.

The “secondary” cloud providers that the referenced article cites are all players who had a viable non-cloud business and hoped to exploit their customer base with cloud services, which eased their entry into the market. However, none of these secondary players has really gained much market share. Amazon, Microsoft, and Google dominate the cloud.

Many market theorists would say that’s OK. The “optimum” number of competitors in a space has been calculated in many ways by many researchers, and the result usually turns out to be “Three!” Well, that’s how many major cloud players we have. So what are Ofcom and the UK griping about? I think there is a touch of nationalism involved, which isn’t anything heinous, just perhaps unrealistic. The US tends to be a more risk-tolerant market for startup investments and also for the expansion of established players into new businesses. All three of our cloud giants exploited their own need for distributed hosting resources to get started in the public cloud space. It’s inevitable that where taking risks is better tolerated, more risks pay off.

The UK can’t turn back the clock. Could regulators there decide to somehow move against Amazon, Google, and Microsoft, or the biggest two of the three, which is what the current initiative seems to contemplate? Do they threaten to stop the big cloud providers from operating in their country? Do they force them to wholesale their services, or constrain the number of customers they sell to, or the number of new customers they accept? The challenge here is that any of these measures would almost surely fail, and would also almost surely hurt cloud users. Would they then encourage new players to step in? What happens when those players become “dominant”?

It seems to me that the smart move here would hearken back to the telco days of old. We had a unified set of telco services worldwide, with many different “competitors” in play. How did that come about, and seemingly work? I think the answer lies in three words, “Interconnect”, “Federation” and “Settlement.”

Regulators required that telcos “interconnect” their networks to provide services across them all. In many jurisdictions, they also required telcos to “federate”, meaning to offer their services to competitors who wanted a single point of contact but needed service coverage where no single provider could offer it. Finally, they required “settlement” among providers for services where one operator collected for a pan-provider service, so the other operator was compensated for the resources they contributed.

Could cloud providers have these same three requirements imposed? In theory, yes, though the legal basis for it might vary from country to country and in some cases might be linked to a consent decree imposed by the authority assigned to regulate monopolistic practices. In practice, the process could be a game-changer worldwide because it might eliminate a lot of problems users face today, like the challenges of supporting multi-cloud.

Could this help “competition”? Almost certainly, if one were to define “competition” as the entry of new giant competitors (like telcos and cable companies) who’ve stayed out of the public cloud services market up to now. All the second-tier cloud providers, as I’ve noted above, jumped into the cloud space by exploiting incumbent relationships and products. That means it’s possible to start a specialized, targeted cloud business. The problem is that you can’t achieve full geographic scale. Suppose the big guys had to offer you services at a wholesale rate, defined as retail less avoidable sales and marketing costs. That would allow smaller providers to leverage the resources of the major players to build credibility. They could still eventually offer their own resources in the areas where they’d initially wholesaled, but only when the opportunities moved them far enough along that first-cost “S” curve.
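As a purely hypothetical illustration of that wholesale arithmetic (the retail price, the avoidable sales/marketing share, and the reseller markup below are all assumed numbers):

```python
# Hypothetical wholesale pricing: retail price less avoidable sales/marketing cost.
retail_price = 100.00          # retail price of a cloud service unit (illustrative)
sales_marketing_share = 0.18   # assumed avoidable sales/marketing share of retail
wholesale_price = retail_price * (1 - sales_marketing_share)

reseller_margin = 0.12         # markup the smaller provider needs to cover its own costs
reseller_retail = wholesale_price * (1 + reseller_margin)
print(f"Wholesale: {wholesale_price:.2f}, reseller retail: {reseller_retail:.2f}")
```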

To make this work, though, you’d need to define a set of services that were subject to federated wholesale and standardize interfaces for them. Do regulators have the skill to do that, or even the authority? Perhaps most important, do they have the time? We can’t enter into another of those years-long “Where does your lap go when you stand up?” study adventures all too common in the standards world. If we could get through the process quickly, meaning if the UK and perhaps the EU are really prepared to push, we could in fact add competitors to public clouds. Would that help anything other than nationalistic pride? Not according to the data that says that three competitors is optimum, and we have more than that already. And for all the nationalistic pride at stake here, there’s no indication of major cloud investments by new players to enter the competitive fray.

That still leaves our first question, though. If enterprise cloud users have the option of using a set of “federated cloud” services brokered by a smaller player, versus a single unified service of a major player, would they pick the former? Not according to what enterprises tell me. These days in particular, with global economic stresses impacting almost every industry, enterprises want cloud providers that have the financial mass and technical credibility needed to stay the course. And would the smaller players, even if they then tried to selectively build out their own resources where wholesaling indicated opportunities existed, ever achieve reasonable economies in real estate, capital equipment, and operations?

Remember the CLEC craze of the 1990s, when regulators mandated that telcos share access assets with others? I firmly believe that the requirement reduced and delayed competition in the access space, creating in its place a kind of retail arbitrage of wholesale relationships. It wasted a decade. We could do that again in the cloud, and waste another decade.

How Many of Those Metaverse Things Do We Have, Anyway?

OK, I guess it’s time to ask (and of course, try to answer) the question “How many metaverses are there, anyway?” It’s clear when you read about the metaverse concept, and also watch a growing number of metaverse-related commercials, that the answer is greater than one. How much greater, and what’s creating both the diversity of metaverses and the confusion over how many there are? Let’s get to the answer part and see what emerges.

The “original” metaverse was the invention of Meta, but only in the sense that Meta created an application for virtual reality technology that had been around for quite a while, particularly in gaming. What Meta envisioned was a virtual-reality successor to Facebook, a virtual world that was even more immersive, even more entertaining, and even more profitable. This metaverse, which I called the “social metaverse,” is still IMHO what Meta sees, but the technology it would depend on isn’t being revealed in detail.

We know from gaming that it’s possible to create a virtual world that can be represented in VR glasses. We know that in this virtual world, a player has a “character” or avatar, and that the behavior of the avatar is at least in part responsible for what the player sees. The player is represented, in short, by the avatar. We also know that multi-player games allow for multiple players with their own avatars, and that means that what each player sees is dependent not only on the behavior of their own avatar, but that of other players. Meta’s social metaverse, then, is an extension and expansion of this basic model, and that has both a business and technical challenge.

The business challenge is getting massive-scale buy-in by the same people that made Facebook a success. Early experiences have been disappointing to many, probably most, because they lack the kind of realism that any virtual world has to offer to be comfortable. Gaming, you may realize, is enormously resource-intensive at the GPU level, to the point where some advanced games can’t be played on anything but the most modern and powerful systems. You cannot have that sort of power as a condition of metaverse adoption or you’ll disqualify most users and all mobile users, yet without it you face the lack-of-realism problem.

That’s part of the technical problem, but not all of it. The other part is that a broad community like a social metaverse will likely have to present users with a virtual world synchronized to the behavior of users who are literally scattered to the ends of the earth. How is the composite view of a specific area of the metaverse, what I’ve been calling a “locale,” constructed, given the differences in latency between that place and the location of each of the users? This challenge means that the “first cost” of metaverse deployment would likely have to be quite high even when the number of users was low.
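To make the locale problem a bit more concrete, here is a minimal sketch of one way a locale server might reconcile updates arriving with different latencies. The class names, the fixed sync window, and the ordering rule are assumptions for illustration, not anyone’s actual implementation.

```python
# A locale server collects avatar updates from users with varying network latency,
# then commits a frame using event time (when it happened) rather than arrival time.
from dataclasses import dataclass, field

@dataclass
class AvatarUpdate:
    user_id: str
    position: tuple          # (x, y, z) in the locale's coordinate space
    event_time: float        # timestamp applied at the user's client
    arrival_time: float      # timestamp when the locale server received it

@dataclass
class Locale:
    sync_window: float = 0.2          # seconds the server waits before committing a frame
    pending: list = field(default_factory=list)
    state: dict = field(default_factory=dict)

    def receive(self, update: AvatarUpdate):
        self.pending.append(update)

    def commit_frame(self, now: float):
        """Apply all updates old enough to be 'settled', ordered by when they
        happened, not when they arrived at the server."""
        cutoff = now - self.sync_window
        ready = sorted((u for u in self.pending if u.event_time <= cutoff),
                       key=lambda u: u.event_time)
        for u in ready:
            self.state[u.user_id] = u.position
        self.pending = [u for u in self.pending if u.event_time > cutoff]
        return self.state

locale = Locale()
locale.receive(AvatarUpdate("alice", (1.0, 0.0, 2.0), event_time=10.00, arrival_time=10.05))
locale.receive(AvatarUpdate("bob",   (4.0, 0.0, 1.0), event_time=10.02, arrival_time=10.30))
print(locale.commit_frame(now=10.40))   # both updates settle, applied in event order
```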

Meta seems to have caught on to this, and their recent commercials have been emphasizing what’s almost a metaverse of one, or at least one of “not-very-many”. That takes us out of the realm of the social metaverse into what could be called an “educational” or “limited” metaverse. School children interact with woolly mammoths, doctors visualize a patient’s innards, and so forth. These applications are much easier to coordinate and implement, one reason being that you could assume that users were co-located and even that there might be some central “locale” processor that would do a lot of the heavy lifting on visualization, allowing client devices to be simpler. This is our “second metaverse”.

In parallel with this, we have our third metaverse, emerging from a totally different mission and set of backers. The “industrial metaverse” is something that’s intended not to create a virtual world but an accurate model of a part of the real world. In the industrial metaverse, the elements are not a mixture of a computer-generated place or places in which some user-controlled avatars interact, but rather a “digital twin”, a representation of real things. That elevates the question of how those things are synchronized to what they represent in the real world. I’ve had a number of conversations with vendors on the industrial metaverse, and a few with enterprises who realize that their “IoT applications” are necessarily creeping into the industrial metaverse space.

All of these metaverses have two common elements, the synchronization piece and the visualization piece. Sometimes visualization means exactly what the term usually connotes, as it would in the first two metaverses, and sometimes it means “exploiting” or “realizing” the model of the virtual world in some other way, like controlling a process or moving something rather than knowing it moved. Sometimes synchronization means modeling relatively simple behavioral movements of many users or a few users, and sometimes it means taking a complex manufacturing process and making a computer model of it. It’s been my view that this commonality means that we could consider all metaverses to be products of a common architecture, perhaps even of a common toolkit.
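Here’s a rough sketch of what that common-architecture argument implies: a single abstraction with a synchronization method and a “realization” method, specialized differently for a social locale and a digital twin. The class and method names are hypothetical, not any vendor’s toolkit.

```python
# Every metaverse pairs a synchronization element (keeping the model current)
# with a "realization" element (visualizing the model, or acting on it).
from abc import ABC, abstractmethod

class MetaverseModel(ABC):
    @abstractmethod
    def synchronize(self, events):
        """Fold real-world or user events into the model."""

    @abstractmethod
    def realize(self):
        """Expose the model: render it, or drive a process from it."""

class SocialLocale(MetaverseModel):
    def __init__(self):
        self.avatars = {}
    def synchronize(self, events):
        for user_id, position in events:
            self.avatars[user_id] = position
    def realize(self):
        return {"render": self.avatars}   # hand off to a VR visualization pipeline

class DigitalTwin(MetaverseModel):
    def __init__(self):
        self.sensors = {}
    def synchronize(self, events):
        for sensor_id, reading in events:
            self.sensors[sensor_id] = reading
    def realize(self):
        # "Realization" here might be a control action rather than a picture.
        return {"actuate": [s for s, v in self.sensors.items() if v > 100.0]}
```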

This is the thing I’m afraid we’re missing, the thing I think hurts us most if we do miss it. Three models, how many vendors and approaches, and where do we have a real, provable, opportunity? Talk about silo problems; metaverse could generate them in spades. And that doesn’t even consider what a unified metaverse might be essential for.

Do you like the idea of smart cities? Without a metaverse concept behind them, what we have is a bunch of autonomous cells and not a brain. Do you believe in robotics on a large scale? How will they behave in a real world that can’t be represented in a way AI can interpret? Speech to text, text to speech, chatbots, image processing, and a lot of other leading-edge stuff can only progress so far without having a means of using each technology within the context of a real-world model. I think we’re focusing too much on the virtual-reality part of the metaverse concept. Even Meta, trying to bring its own concept forward, is still obsessed with the metaverse as a virtual world for humans to inhabit. It is a virtual world, but the best and most valuable metaverse applications may not have any humans in them, and may not even require visualization.

Are Vendors Facing a Subscription Pricing Crisis?

The relationship between buyers and sellers isn’t usually hostile or adversarial, but it’s naturally wary, particularly on the buyer side. Sellers make money by making buyers spend money, and while that spending can often be offset by benefits, there are times when it isn’t. One such time is when changes in pricing policies set by sellers increase the cost of the “budget” side of network procurement. That part of network spending focuses on sustaining what’s already been justified, and because it doesn’t draw on new benefits, there’s constant downward price pressure on it. Pricing changes that push costs the other way aren’t welcome.

One of those pricing policy changes, the one that buyers express the most concerns over, is the shift to subscription pricing. Subscription pricing has also taken hold in the software space, and the same forces that encouraged vendors to adopt it there influence networking. The same resentments are building, too.

Whether you call it “subscription” or “as-a-service”, the concept is the same. A buyer of a product, instead of purchasing it for a given amount, acquires what’s essentially a lease. Remember when Microsoft Office was purchased outright? The problem for Microsoft was that the purchase model put it under constant pressure to create meaningful feature enhancements to drive buyers to purchase an “upgrade” version. When buyers didn’t see an improvement, they hunkered down on their old version and produced no incremental revenue for Microsoft. So we got Microsoft 365.

For over a decade, we’ve had a kind of hybrid subscription model in the network equipment space. A buyer purchases a router, for example, and a maintenance contract that provides updated software features and fixes. More recently we’ve seen the hardware and software “unbundled” and the latter sold on the same sort of subscription deal that we had with Microsoft Office. Network vendors, like software vendors overall, have touted this shift in pricing policy to Wall Street because it offers them more revenue. Which, of course, is what gives buyers angst, because it’s a factor that’s raising the cost of sustaining infrastructure.

There are arguments in favor of the subscription model, of course, and even of the more radical “network-as-a-service” approach. Many companies that have to watch cash flow prefer to incur their costs as expenses rather than capitalizing them, because expenses are recognized when the money is spent, while capital costs have to be depreciated over time, so early cash outflows aren’t fully reflected when they do their books and file their taxes. The subscription model also ensures that network software doesn’t get outdated, and that can be a big factor in keeping the network secure. Up until 2020, three-quarters of network buyers thought the subscription/as-a-service trend was positive, and over half of buyers feel that way today. But since 2020, I’ve been getting more and more reports from buyers that the CFO doesn’t like subscriptions any longer. And that they want something done.
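A toy example of that expense-versus-capital timing difference, using entirely hypothetical figures:

```python
# An outright purchase is paid up front but depreciated over several years on the books,
# while a subscription is expensed in the same year the cash goes out.
purchase_price = 300_000
depreciation_years = 5
annual_subscription = 75_000

for year in range(1, 4):
    purchase_cash = purchase_price if year == 1 else 0        # cash actually spent
    purchase_expense = purchase_price / depreciation_years    # what hits the books
    sub_cash = sub_expense = annual_subscription              # cash and expense coincide
    print(f"Year {year}: purchase cash {purchase_cash:>7}, booked expense {purchase_expense:>7.0f} | "
          f"subscription cash {sub_cash}, booked expense {sub_expense}")
```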

Subscription pricing is the second-most-cited reason to look at “open-model” networking, and the number one reason is overall cost. Even two years ago, almost no enterprise buyer would have said that subscription pricing was a reason to look at open networks. In the service provider space, though, the interest in open-model networking was fueled in part by subscription pricing changes as far back as 2015. Two years ago, enterprises said they would look at alternatives to a vendor who raised subscription prices significantly (by 25% or more), and today a third of enterprise buyers are looking at open networks as a way of reducing costs, even if no changes in subscription pricing are in the offing.

Despite the views they expressed on subscription pricing, only about 8% of enterprises say they changed vendors or adopted open-model networks to get away from them. Just under half said that they’d pressured their vendors to give them a break on subscription pricing, and in most cases the pressure paid off with a cost reduction somewhere, but not always in the subscription costs themselves. In fact, enterprises said that vendors seemed more willing to discount equipment, perhaps fearing that they’d get caught in a never-ending erosion of their subscription revenues.

Enterprises also say that they “understand” the shift to subscription pricing, with the majority linking it to ongoing changes to software to enhance security or to fix problems. Less than a quarter of enterprises said that they believed that maintenance of embedded network software should be free, and the number who believed that software not installed on devices should be maintained for free indefinitely was even smaller.

Given this, how much of the push-back against subscription pricing of network software is really aimed at the practice, versus a simple reaction to the increased pricing pressure buyers feel? Well over 90% of enterprises say that over the last three years, they’ve felt more pressure to reduce spending on sustaining their networks. Most admit that this pressure has led them to take a harder look at subscription pricing, and this is particularly true of enterprises who are stretching out the useful life period of their equipment. Once gear is fully depreciated it contributes no further costs except for any ongoing maintenance and software subscription costs. It makes sense to look at those next, then.

Another interesting fact is that just over 80% of the enterprises I’ve heard from say that they use open-source software in their data center or the cloud, and that they pay annual subscription charges for that software. Candidly, I’m not sure I believe all those who say they don’t, but I was unable to find any network buyer who said they’d started compiling and maintaining their own open-source tools to avoid subscription payments. That suggests that either network software subscription practices are resisted in part because they’re relatively recent, or that ongoing feature progress in data center software is sufficient to justify the payments.

And that may be the critical point. Software subscriptions are, for the vendor, a way of ensuring ongoing revenue when buyers might otherwise resist upgrades. If buyers are resisting upgrades, they’re likely doing that because they don’t see much value in the new features the upgrade offers. In a way, then, imposing subscription costs could be a vendor admission that they fear they’re not providing enough incremental value with new software versions.

With desktop software, there’s always an influx of new buyers to create revenue. With network software, the number of “new” buyers is limited because companies that need large-scale network purchases don’t spring up out of nowhere. Desktop software vendors may need to play to the new buyer more than network vendors do, but network vendors may need to rethink their subscription practices unless they want to see the current nascent movement toward open-model networking take hold, aided by resentment over those practices.

How Do We Get To, and Optimize For, AIOps?

What’s the right model for AI-based operations, or AIOps? There are obviously a lot of different “operations” AI could be used in, so does the optimum model depend on the specific mission? Or, just maybe, is there at least a set of guidelines that would cut across all the missions? That would enable us not only to frame a general toolkit for AIOps, but also at least to hope for one that could cross over operations areas. Could we have One AI to Rule Them All? Maybe.

One thing I believe to be absolutely true is that the “Rule Them All” piece of that paraphrase is critical with AIOps. We are not, at least not in the near future, going to have a single AI element that can fully absorb the operation of a complex network and take it over in all kinds of conditions. AIOps is a hierarchy, by necessity. If you think of how humans work, that shouldn’t be surprising. When you’re walking along a corridor talking with a team member, you’re not plotting out each step and how to maintain your balance as you move your arms and change your center of gravity. You have an autonomous system that does that, one you can override but that’s happy to do what it’s supposed to do, on autopilot. So it must be with AIOps.

To me, then, the whole AIOps thing is about intent models or black boxes. A cohesive collection of devices and software in some mix can be visualized as a single black box. Its properties are those it exposes, and its contents are up to it to manage. The goal of that management is to fulfill the SLA that’s at least implicit if not explicit in those exposed properties. You’re working (or, in my example, walking) if you’re meeting the appropriate external conditions and properties of working/walking.
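A minimal sketch of what such a black box might look like in code, with illustrative names and a placeholder for whatever internal management actually does the work:

```python
# An intent-modeled "black box" exposes only its properties and an SLA; the management
# of its contents is its own business. Names and fields here are illustrative only.
class IntentModel:
    def __init__(self, name, sla):
        self.name = name
        self.sla = sla                    # e.g. {"latency_ms": 20, "availability": 0.9999}
        self._internal_state = {}         # hidden: devices, software, whatever is inside

    def exposed_properties(self):
        """What the outside world is allowed to see."""
        return {"name": self.name, "sla": self.sla, "status": self.status()}

    def status(self):
        """'Working' means meeting the SLA, however that's achieved inside."""
        return "in-SLA" if self._meets_sla() else "sla-violation"

    def _meets_sla(self):
        # Placeholder for whatever internal management (AI-driven or not) does the work.
        return self._internal_state.get("healthy", True)

box = IntentModel("edge-router-cluster", {"latency_ms": 20, "availability": 0.9999})
print(box.exposed_properties())
```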

So does each of these black boxes have its own AIOps? Not necessarily. That will depend on the value of AI at that point, which depends on the nature of the contents of the black box, the complexity of managing it to its SLA, and the available tools to do the managing. It also depends on whether a given black box has a close relationship with others, and a bunch of what I could describe as “cooperative factors” that we’ll get into.

We’ll get into them by supposing that the manager of our black box, having expended its best efforts, cannot meet its SLA. I broke, and I’m sorry. Well, sorry probably won’t cut it, so the question then is who has to deal with the broke-ness. The answer is “whoever has an interface with this black box.” An SLA is a guarantee, and obviously it’s a guarantee to the element that’s connected at the specific interface(s) the SLA is provided for. Our black box has a cooperating element, or elements, and we can presume that those connecting elements also have SLAs and cooperating companions of their own. The question is whether there’s some clear circle of cooperation here, a group of black boxes that are themselves perhaps an outer-level black box.

Networks, and combinations of network-and-hosting, are often systems of systems, and so the idea of a black-box hierarchy here isn’t outlandish. But it’s hard not to see an entire network, or a big conglomerate of data center and network, as the top element of every hierarchy. In other words, does “cooperation” extend as far as connection does? Not necessarily.

I’ve done a lot of work describing service models aimed at automating operations. What I determined is that there’s a difference between connection and cooperation in the way two black boxes interact. In the former case, a black box might have to be informed about an SLA fault in order to “know” that it might not be able to meet its own SLAs. In the latter case, the black box might be informed in order to mediate or participate in a remedy.

Let’s suppose our black box is an element in a service, and that its interface to a core network service fails. If it has a second interface to the same core network, and if that interface can be used to satisfy its SLA “upstream”, then the black box could remedy its own failure. If it doesn’t have another interface, then it would need to report a failure “upstream”. The question is then whether the upstream box has a connection to the core that doesn’t go back through the same failed box. If it does, then it has a remedy option, and we might well consider that “parent” box and whatever “child” boxes there might be with parallel interfaces to the core to be a cooperative system, and to provide operations automation, meaning AIOps, for that system of boxes. In that case, we might see a complex hierarchy of black boxes.
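Here’s a small sketch of that remedy-or-escalate logic. The structure, the names, and the simple “reroute locally or escalate to the parent” rule are assumptions meant only to illustrate the flow.

```python
# A black box with a failed interface first tries an alternate interface of its own;
# only if it has none does it report the SLA fault upstream to its parent box.
class BlackBox:
    def __init__(self, name, interfaces, parent=None):
        self.name = name
        self.interfaces = {i: "up" for i in interfaces}   # e.g. {"core-a": "up", "core-b": "up"}
        self.parent = parent

    def interface_fault(self, failed):
        self.interfaces[failed] = "down"
        alternates = [i for i, s in self.interfaces.items() if s == "up"]
        if alternates:
            # Local remedy: reroute over a surviving interface, SLA still met.
            return f"{self.name}: rerouted to {alternates[0]}"
        if self.parent:
            # No local remedy: escalate so the parent's AIOps can look for a path
            # that doesn't transit this box at all.
            return self.parent.child_fault(self.name)
        return f"{self.name}: SLA violation, no remedy available"

    def child_fault(self, child_name):
        return f"{self.name}: seeking alternate path around {child_name}"

core_parent = BlackBox("region-aggregate", ["core-x", "core-y"])
edge = BlackBox("edge-site-12", ["core-a"], parent=core_parent)
print(edge.interface_fault("core-a"))
```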

It’s also possible that there might be a black box system that’s created by management alone. An SLA violation might be reported not to the data-plane interface (or not just to it) but to a management system, one that’s an AIOps entity. In that case, the span of control of that management system would determine the boundary of the cooperative system, because it’s the thing that’s inducing the cooperation.

The purpose of this structure is to string AIOps elements in the mesh of things that form the infrastructure being managed, an infrastructure almost surely bound together with a network. These elements would be charged with ensuring that the subordinate black boxes under their control met their overall SLA. And, of course, there would be an AIOps element sitting at the top of the hierarchy. However, rather than being expected to understand all the pieces of that mesh of things, it only has to understand the properties of the black boxes just below it in the hierarchy. These boxes need to understand what’s below them, and so forth until we reach the bottom.

Effective AIOps depends on intent modeling, and intent modeling isn’t just defining black boxes that represent modeled elements, it’s also defining the way these black boxes are organized into a cooperative hierarchy. The “model” is as important as the “intent”. This reduces the burden on each AIOps element, and it also permits the elements to be specialized to the specific infrastructure components they’re expected to control. As you rise in the hierarchy, the elements could become more generalized, since their mission would usually be one of finding alternatives to a primary element that had failed.

The approach I’ve described also provides for the use of AIOps in testing and rehabilitation of assets, and for broader-level diagnosis of the health of the system overall. A specialized tool could be “connected” to a separate management channel to do this, and that channel could (like the data interfaces) represent either a pathway to specific real assets or to an AIOps element there, prepared to act on behalf of the generalized higher-level tool. Testing and rehab, then, would be managed through the same sort of AIOps hierarchy as overall operations, though not necessarily one with the same exact structure.

We may be heading toward the notion of having AI deployed in a network or compute structure that’s hierarchical, but that doesn’t mean that we have a hierarchy of intent models or even that we have intent models at all. There’s not enough formalism with respect to the relationship between clusters of technology elements, their management tools, and the management of the entire infrastructure…at least not yet. Some vendors are making progress toward a holistic management vision, and such a vision is just as critical for AIOps as AI is itself.

Why We Need to Look Deeper into Market Opportunity Differences in Tech

Are there states in the US that present a special opportunity for network and cloud services? That’s a question I’m getting more often as vendors and service providers look for attractive targets for new revenue. In fact, to me at least, the big question is why this particular question hasn’t been asked and answered long ago.

The most fundamental measure of network potential I’ve discovered is “demand density”, which measures the economic activity per unit of addressable geography. Where demand density is high, a mile of infrastructure passes a lot of opportunity dollars and is likely to generate a good ROI. If we look at state-wide demand density, we find that there are four states with sterling demand density and seven where demand density is low enough to make profitable deployment of wireline broadband challenging. Roughly 20 of the 50 states have demand densities high enough to indicate that business broadband would be practical there.
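For clarity, demand density in this sense is simply economic activity divided by addressable geography. A trivial calculation with placeholder numbers shows how sharply the metric can differ between a dense and a sparse state; the figures below are not actual state data.

```python
# Demand density: economic activity per unit of addressable geography.
def demand_density(gdp_billions, addressable_sq_miles):
    """Return millions of dollars of economic activity per addressable square mile."""
    return gdp_billions * 1_000 / addressable_sq_miles

dense_state = demand_density(gdp_billions=700, addressable_sq_miles=8_000)
sparse_state = demand_density(gdp_billions=40, addressable_sq_miles=75_000)
print(f"Dense state: {dense_state:.1f} $M/sq-mi, sparse state: {sparse_state:.2f} $M/sq-mi")
```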

One factor that strongly influences demand density (or perhaps correlates with it) is the prevalence of single-family housing. On one hand, it would seem that single-family households would be higher-income, but on the other hand single-family housing results in lower population densities than apartment living. None of the states with the largest percentage of households living in single-family homes places at the top in terms of demand density, and only three of the top ten in single-family housing are in the top 20 states in demand density. Urban/suburban concentrations make for more efficient networking, and that same concentration is linked to a higher population of businesses.

That doesn’t mean that selling broadband and cloud services would be practical everywhere demand density is high. Generally speaking you can’t sell broadband or any form of business technology except to the headquarters of a multi-site business. Major enterprises aren’t headquartered everywhere, and while every one of those 20 states has at least one enterprise headquarters, the largest number of enterprise headquarters are found in a group of ten states: New York (58), Texas (50), California (49), Illinois (37), Ohio (25), New Jersey (22), Virginia (21), Pennsylvania (20), North Carolina (19), Massachusetts (18). All of these states have demand densities in the top 20; the top four in demand density are also on this list. It’s also true that these same states have the largest number of headquarters sites for SMBs.

While you can’t generally sell tech except to headquarters locations, you also can’t sell tech if you ignore where the other sites are located. Only multi-site businesses buy business networks, meaning more than Internet access, and generally business networks are found only in businesses with at least 20 sites, of which there are roughly 600 thousand in the US. On the average, network salespeople say that only the top 50 thousand companies in terms of number of sites are really profitable sales targets, and these represent a bit over a million and a half locations. In order to sell network services, a provider would have to cover (somehow) all the satellite site locations.

The top states for carrier Ethernet, meaning specialized business broadband services (according to TeleGeography) are almost the same as the states that are top in enterprise headquarters. Only Florida and Georgia are on the Ethernet list and not on the top headquarters list, and only Virginia and North Carolina are at the top of the headquarters list and not on the carrier Ethernet list. That illustrates that some secondary sites are large enough to justify carrier Ethernet, and some states seem to harbor more of those high-volume secondary sites. Interestingly, these same states have the largest number of network integrators.

Suppose you’re trying to empower workers with new applications. How do you find the right targets? It turns out that there are 25 occupations where jobs have a high information content, and can therefore be empowered readily. The occupations with the highest unit value of labor represent the prime targets, and the same states that have the highest number of headquarters sites have the highest unit value of labor overall, and the largest percentage of high-value occupations. There’s only one outlier here; Washington state has a high unit value of labor and isn’t in the top ten on either of our other categories. Guess what’s responsible? Tech and Microsoft.

A good measure of a state’s growth in tech dependence is the number of computer schools found there, and here again the states with the most carrier Ethernet have the most computer schools. This demonstrates that carrier Ethernet requirements correlate with the employment of computer-savvy workers, ones that are most easily empowered through IT and network investment.

The states with the lowest unit value of labor on average, and the smallest number of high-value occupation targets, are the same states with low demand densities. These states also have the lowest number of personal computers per capita, which is a good measure of technology use in a state. We see here the contrast between an “empowerable” state and one that’s far less likely to benefit from technology projects aimed at worker productivity improvements.

The point of all of this is that it’s a serious mistake to think of a country as a homogeneous market for computing, cloud, and network services. Business headquarters are concentrated in a small number of states, and for most vertical markets (healthcare being the exception) high-value workers who present attractive empowerment targets are concentrated in roughly the same places. The best indicators of an attractive area for tech product and service sales are the number of carrier Ethernet sites and the number of computer schools, but demand density is a close third.

Of course, it doesn’t take a national intelligence organization to figure out where carrier Ethernet can be found, and furthermore the location and size of those particular business sites hasn’t changed much over the last decade. Real estate needed to house a large workforce and justify carrier Ethernet connection is high-inertia. The greater opportunity right now lies in the secondary sites that are too small to justify carrier Ethernet, and thus too small to support on MPLS VPN technology.

In order to be a “networked” business you need multiple sites to connect. There are roughly 2 million business sites that are part of a multi-site business, and a million and a half of them are associated with the top 50 thousand businesses ranked by number of sites. The other half-million sites typically belong to businesses with fewer than 10 sites, and only about a fifth of those are networked beyond simple Internet access.

The average major multi-site business has 53 sites, and only 25 are connected via MPLS VPNs and carrier Ethernet. One of the most important trends of the last five years has been the pressure to add small sites to VPNs, and it’s this pressure that created the SD-WAN opportunity. Gartner says that there are roughly 20 thousand US SD-WAN sites and my model says 32 thousand. Either way, there are about 400 thousand satellite sites not connected via either MPLS or SD-WAN, and if you count all multi-site businesses there are over a million sites that could be on a VPN and are not.

Whether you want to consider it a cause, an effect, or an indicator, the most significant event in the transformation of a “business” is the connection of its sites to a VPN. It implies that there is a central set of resources, applications and data, to which the workers in the remote offices need access. It provides a central mechanism for the management of those resources and access to them, and even to local information resources. It’s an instrument of policy, of control, and in almost all cases a tool in security and compliance. That means that the time at which the connection is made is the ideal time to assess the quality of all these things, and to provide augmentation as needed. In particular, site and connection security policies are usually set at this point, and that means whoever sells the VPN connection has an inside track into the security of the network and the sites on it. Security tools are one of the most enduring sales opportunities of our current times, so that’s important.

The key takeaway here is simple: you have to know your market target, not just in the usual “know about your target” sense, but in the more basic and more often ignored sense of knowing what your target actually is. The business market for network products and services is, in particular, made up of highly differentiated segments. Every state in the US has its own mixture of segments. Blasting out material and sending out salespeople with no notion of which segments you can hope to address wastes time, effort, money, and opportunity.

Is AI Coming for Us?

Discussions about the threat of AI have been popular for decades. “Hal” in the movie “2001: A Space Odyssey” is a classic example of AI run amok, and the recent explosion of interest in large language models and chatbots has reignited the whole discussion. Generally, the threats can be classified into two groups. First, there’s the one I’ve already blogged about, the threat AI might pose in competing with humans in some tasks. Second, there’s the threat that AI could actually develop hostile or at least actively anti-human behavior. This is the threat that seems to be getting all the attention, and even prompting some calls to hold back on AI development.

The big question in assessing the threat of actively hostile or anti-human AI is whether that requires that AI be essentially human itself. That’s the “Hal” model, the one that gets all the attention. But how could we know that AI had developed to that point? It’s a pretty hard question to answer because it dives into not only what constitutes “human” behavior, but also the extent to which human behavior can be established by inspection.

How do we know that a being we encounter is human or humanoid? The answer is pretty trivial; we know because we know what a human looks like. The intelligence, behavior, and intent of that being doesn’t really play into our identification at all. But suppose we have this room-sized black box, and inside that box is a something-that-might-be-human. Other than breaking into the box and looking, how would we know whether the contents really were human? That’s the real AI question, because if an AI entity that’s populating our black box can pass all the tests we can give it from the outside, is it “human/humanoid?” If it is, might it pose an active threat?

For decades there’s been software that seemed to emulate human behavior. Some may remember the old therapy emulators that pretended to be an analyst having a patient conversation with us. More recently, we’ve had software tools that can recognize objects, speak, and so forth. Chatbots are an example of one of those recent developments. If we were to tie a chatbot with a really good speech recognition and generation tool, we could ask our black box questions and make an assessment, based on the responses, as to whether we had a real person inside or some software collection, meaning AI.

Of course, we’re not going to create a biological human with AI, and saying “humanoid” raises the prospect of aliens on UFOs. Is there a property of brain activity we could cite to “dehumanize” our discussion? The definition of “sentient” is “responsive to senses”. We could clearly create an AI entity that would qualify as sentient in at least a limited sense; it could “hear”, “smell”, “see”, “feel” and so forth. I put those words in quotes because the AI’s ability to do any of those things would almost surely be different from our own, sometimes better and sometimes worse. Despite those differences, we could put “sentient” AI in a black box and expect it to be able to fool at least some people some of the time, fool them into believing another human was inside.

We could also program our sentient AI collection to be funny, to laugh and cry and show “emotions”. We could even tell it to try to manipulate our views, to lie to us. It could present bias, fear, hope, and love in a way that would at least be credible to a degree, from our outside-the-box perspective. Long exposure might expose it, particularly if we were trying to make an identification of our black-box contents, but it would survive casual interaction.

This might seem an excursion into irrelevance, but it isn’t. We can say that AI could pass an inspection test and be declared sentient, but could it pass a test to be “human?” I submit that, from the black-box test perspective, the two are the same thing. If we can’t see the contents of the box and apply our “it looks like a human so it is” test, then we have to rely on testing senses to test sentience, and we would equate our result to a test for humanity. That’s what we’ve done with chatbots today.

A lot of animals pass the “sentient” test, as easily as our black box could. We don’t declare them human/humanoid. The thing that’s missing in most of them, and what’s inside that black box, is the sense of self. We can focus software development on mimicking human behavior, but does that make it human? How do we test for self-awareness? The fact is that we’ve been trying to do that, debating our results, for centuries. Rene Descartes opened a work with the statement, “Cogito, ergo sum” or “I think, therefore I am.” It’s a great test if you apply it from the inside, as a thinking being. From the outside, not so much. We know that most animals “think”, but “I am” is a statement of self. Do they have that? We can’t know because we can’t look inside. We’ve tried to test self-awareness by putting a mirror in front of an animal whose face/body has been painted in some way, to see if it touches the body part we manipulated or treats its reflection as another animal. Some think that’s shown some results, and others disagree. Could we make a robot “recognize” itself by programming, though? Certainly we could make it pass the outside-the-box tests we’d give it, so for AI this best-of-our-tests isn’t valuable.

Self is important in our debate on AI, because to form deliberate actions you probably need to deliberate, which means to project yourself into a variety of possible future positions. To be deceptive or threatening in a symptomatic way can be programmed into an AI entity, but to intend to be that is a deliberative decision. Can AI make such a decision?

If we go back to our black box, the answer is “we probably can’t tell”. I think that we could make AI appear to intend, even with today’s technology. The question is whether we could expect it to actually exhibit intentions, and that gets back to that sense of self and consciousness. Can we expect AI to be conscious? I think many believe we’re almost there. Not me. I do not think that AI is anywhere near to being conscious, and I’m not sure that we can say it ever will be. The reason is that we really don’t know what creates consciousness, even in ourselves.

I’ve talked with AI experts who believe that consciousness is purely a matter of memory and CPU power. I’ve talked with others who believe that it’s really about the way that our brains are organized, the way that things like memory and thought and forethought work. The former group is the most optimistic about the prospects for truly conscious AI entities, and the latter group is IMHO the most realistic, because it admits we don’t really know why “consciousness” even exists.

Chimp brains aren’t as complex as human brains, but chimps exhibit a lot of human-like properties. They pass self-awareness tests, for example. They can use tools, and there is recent scientific evidence that some proto-apes (or proto-humans, depending on where you fit them in the anthropological tree) may have actually made tools. Many would say that chimps can form intent, too, but few really believe the Planet of the Apes movies represent something that’s actually possible. We don’t know why that is, what separates them from us.

Even so, our black box raises a practical question, which is simply “Does it matter?” If we could create an AI entity that was indistinguishable from a human in every way we could test, would that entity be as much a threat if it was totally dormant in terms of consciousness as if it were fully conscious? Remember, the contents of a black box can only be determined by examining what it exposes to the outside world. Could AI entities, set to the task of designing better ones by themselves, eventually create one of those human-enough-for-all-practical-purposes entities? Perhaps, but even if it did, I don’t think it would be the kind of threat many believe AI could present, because I don’t believe that we’d have even as much problem disabling it as the astronaut did in “2001”.

We are biological entities, self-powering in a realistic sense. AI based on “technology” will need power, and while it’s certainly possible to imagine that an AI entity could design a successor that would be almost-human, could it design one based on biology rather than on chips? Maybe that’s possible too, but we’re not seeing any signs of it now, it would surely be enormously complex given that we don’t really even know how our own brains work, and a lot of research ethicists would scream at even the early steps.

AI may help us cheat in school, at work, even perhaps help us invent alibis for things we do at home. In that sense, it can change our lives, maybe even threaten aspects of our lives and how we live them. That’s enough of an impact to suggest we need to take care with how we use AI, but I do not believe it poses even the kind of threat that Hal did, and I don’t think we’ll need the Three Laws of Robotics to protect us any time soon.

Every technology advance creates winners and losers; all “progress” is subjective in that the positive connotation holds for only some people, not all. AI is progress, and like self-driving cars, smartphones, personal computers, and smart homes, there are some who are hurt by it, some who fear it. That doesn’t mean we need to call a halt to it, which in any case is totally impractical. It just means we need to manage the impact.

Relax and enjoy yet another tech-driven revolution. When you look around for rampant rogue robots, you’ll see instead a lot of humans who pose a greater threat. Read the news; we’re our own worst enemies, and AI can only be expected to cause a small perturbation in our overall risk level. Another old saying applies: “We have met the enemy and he is us.”

We’re Going to See Big Tech Subsidizing Telcos, So We Need to Look Beyond

It should be clear to everyone by now that the EU is going to provide some mechanism for telcos to be subsidized for carrying OTT traffic, at least the traffic of the major OTT players. It is less clear (but should be) that this is going to have a seismic impact on the industry. Perhaps the lack of clarity can be attributed to the hype on all sides of the issue, so the most important question we should be trying to answer is “how do we dig through the chaff to find the wheat?” Let’s give it a go.

All this talk about subsidization stems from discussions literally decades old, emerging from some of the basic tensions that were created by the dawn of the worldwide web. Consumer data services were in one sense a boon for operators, because the improvements in network technology were making long-distance calling increasingly competitive and leaving operators with an “access” business that could never replace lost long-distance revenue. In another sense they were a major threat, because early Internet access didn’t impose any data charges. The absence of usage pricing encourages usage.

Up to the time of the Internet, operator interconnection for all services was done based on settlement. Even voice calls involved payment by the calling operator to the called operator, a “termination charge”. Data services (packet-switched data based on the CCITT X.25 standard and interconnected via X.75) were settled. There was no “bill and keep”, the model that emerged with the Internet.

Even in the early days of the web, the risk of an imbalance in revenue versus costs arose around the subject of peering. “Retail” ISPs serving users were net sinks of traffic, and they were also the most costly piece of the Internet. A “wholesale ISP” who focused on connecting net sources of traffic could peer with the retail players and essentially consume their capacity while keeping revenues from the content providers. What we have now with the over-the-top (OTT) players is the successor concept. You pay a CDN provider to deliver stuff to the retail ISP edge to get to the customer. You don’t pay the customer’s ISP.

Early on this distortion in cost and revenue was recognized. I’ve noted in the past that I’d co-authored an RFC (Savvis’ CTO was the prime author) on “brokered private peering”, which would have established a paid peering mechanism that was designed to help balance revenue and cost, and also encourage features and QoS. The problem was that “the Internet” was perceived as being somehow independent of infrastructure, separate from other networks. Consumers wanted good, cheap, Internet and what they saw of the Internet was really only the OTTs. No wonder the OTTs won the lobbying wars with regulators, and what emerged was net neutrality.

The real question here is just what the thing that’s popularly called “net neutrality” really is, and means. The goal, leaving aside the politics and lobbying, was to create a fertile field in which Internet companies could grow, and support a whole new set of rich consumer and business services on the Internet. Which, of course, meant “over the Internet”. At the same time, regulatory provisions tended to bar the telcos from participating in these new service opportunities for fear they would use their incumbent power to stall other more innovative players.

Where we’ve ended up with this approach is that we’ve created a traffic-generating industry (OTTs) and a traffic-carrying industry, with the former offered all the new opportunities for service revenues and the latter increasingly pressured by declining profit per bit. Competition among “access providers” targeting the consumers of OTT services has run into the problem that you can’t really differentiate access services by any means other than price, and consumers are unwilling to pay proportionally for capacity. If you get “basic” 100 or 200 Mbps Internet for $40 to $60 per month, you might willingly pay $100 to $120 for gigabit service, but not five or ten times the price for five or ten times the capacity. Streaming video is the big consumer of bandwidth, and obviously we have more and more providers of that, consuming more and more bits. Since streaming doesn’t even require gig bandwidth to deliver, many users are quite happy to stream over basic Internet, so there’s not even much of a chance of selling higher access speeds, just a chance to carry more traffic for the same money, which is what trashes profit per bit in the first place.

My friends in the EU tell me that we will see either subsidies paid by the OTTs to the telcos, direct government subsidies, or some impact on consumers. The impact could come in the form of data caps on broadband access, price increases, performance degradation, or some combination of these measures. Any of these things could have a chilling impact on the Internet.

The basic truth here is that no business is going to operate at a loss, or even to operate in such a way that they have no chance of improving revenues and enhancing their stock price. Most telcos, having been regulated monopolies, have tended to be considered “value stocks”, paying a dividend rather than promising a lot of share appreciation. They need to be able to sustain a dividend rate that’s attractive and also sustain their share price. Telcos have done a pretty good job of managing costs over the last two decades, but the problem with cost management is that you can only reduce so much. Operators have already taken most of the accessible opex reductions, and the maximum capex reductions they could hope for would drop costs only by about 4.7 cents of every revenue dollar, and that would take five years to fully realize.

There is no question that at some point they’ll need either to raise more revenue or be subsidized in some way. To claim that they can sustain operation without either of these two things is simply wishful thinking. Those who, like me, believe that subsidies aren’t the best answer are betting, then, that revenue is. But operators have proved unwilling or unable to think beyond the basic connection-service models, and even those who can rise above (literally) those services somehow seem to mess up their efforts. However, they are making some progress, particularly those in the EU where competition is more acute. The consortium of operators who launched a personalization and identity initiative had the right idea even if they likely still picked the wrong target.

Telco intransigence doesn’t excuse regulatory myopia, though. Regulators focused so much on preventing telcos from monopolizing new OTT opportunities by taking advantage of their financial and market power that they discouraged any form of innovation. Neutrality rules often blow with the political winds, and any major new service revenues would require significant capital commitments that would be at risk if the winds changed. Whatever the EU decides here, and however regulators worldwide respond to their moves, it makes no sense to offer a market sector a helping hand while nailing their shoes to the ground. The regulatory structure of the industry has to conform to whatever policies on subsidies any jurisdiction comes up with.

Decoding Amazon’s Telco Plans

I’ve noted in past blogs on MWC that the cloud providers were front and center at the show, and also that public cloud providers had their sights on the telco space in general, and 5G in particular, as a conduit for future revenues. Amazon sent a copy of their MWC presentation, and it’s a good way to analyze what cloud providers are saying and thinking with regard to their future relationship with telecom. The key slide in their presentation shows three layers of AWS support for telcos, so we’ll look at them starting at the bottom.

The bottom layer supports telco OSS/BSS applications, and this is the layer that most telcos tell me is interesting to them. OSS/BSS is the “core business” application piece for the telecom vertical, and much of it is the same as, or easily related to, core business applications for the broader market. However, Amazon announced the AWS Telco Network Builder (TNB) at MWC, so let’s take a look at that.

The tagline for TNB is “Automate the deployment and management of your telecom networks”, and it’s an orchestration and management framework that deploys network applications using industry-standard TOSCA service definitions aligned with the ETSI NFV ISG specifications. Deployment options include AWS CloudFormation, EKS, and VPC. I think TNB is essentially a TOSCA decoder that links to AWS services and features, combined with virtual network function (VNF) deployment and management. As such, it targets network services rather than OSS/BSS, and it extends the “basic” telco support that vanilla AWS would offer.
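To illustrate the “TOSCA decoder” idea conceptually (this is not the TNB API; the descriptor fields and hosting targets below are invented purely for illustration):

```python
# A purely conceptual sketch: take a simplified service descriptor and map each
# function to a hosting target based on its type and placement requirements.
service_descriptor = {
    "name": "sample-5g-upf-service",
    "functions": [
        {"name": "upf-data-plane", "type": "vnf", "placement": "edge"},
        {"name": "smf-control",    "type": "cnf", "placement": "region"},
    ],
}

def plan_deployment(descriptor):
    """Map each function to an illustrative hosting choice by type and placement."""
    plan = []
    for fn in descriptor["functions"]:
        if fn["type"] == "cnf":
            target = "managed-kubernetes-cluster"      # containerized control functions
        elif fn["placement"] == "edge":
            target = "on-prem-or-edge-compute"         # latency-sensitive data plane
        else:
            target = "regional-cloud-instances"
        plan.append((fn["name"], target))
    return plan

for name, target in plan_deployment(service_descriptor):
    print(f"{name} -> {target}")
```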

Support for VNF deployment is only as good as VNFs themselves are, of course, and that’s the big issue for TNB. While VNFs are indeed linked to standards (the ETSI NFV ISG specifications), the NFV concept hasn’t exactly exploded onto the scene. As I’ve noted in past blogs, operators have had very mixed views on the value of VNFs as elements in the service data plane because of performance and latency issues. If VNFs are actually to be used by telcos, those telcos could look at hosting them on AWS using TNB, but public cloud VNF hosting is also a stretch. Given that VNFs themselves are limited, adding public cloud hosting compounds the issues considerably.

There are places where TNB could make sense, primarily in control-plane elements of things like 5G. However, the “telco network” is now, and will likely always be, dominated by network devices rather than VNFs. Ironically, I think TNB’s success may hinge on the success of Nephio, a Google-initiated project that would enable Kubernetes management of network devices as well as virtual functions and other containerized applications and components. Will Amazon join that initiative, or expect telcos to do the Nephio integration themselves? If they do neither, I think TNB will have limited impact.
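To illustrate what “Kubernetes management of network devices” means in practice, here’s a minimal sketch using the Python Kubernetes client; the custom resource group, kind, and spec fields shown are hypothetical placeholders, not Nephio’s actual API.

```python
# Illustrative only: applying a hypothetical Nephio-style custom resource
# so that a physical router is managed declaratively through the
# Kubernetes API, the same way a containerized VNF would be.
# The API group, kind, and spec fields below are placeholders, not
# Nephio's actual schema.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster
api = client.CustomObjectsApi()

device_intent = {
    "apiVersion": "example.nephio.org/v1alpha1",   # hypothetical group/version
    "kind": "NetworkDevice",                        # hypothetical kind
    "metadata": {"name": "edge-router-42", "namespace": "telco"},
    "spec": {
        "vendor": "generic",
        "interfaces": [{"name": "eth0", "vlan": 100}],
    },
}

# Submit the intent; a Nephio-style controller would then reconcile the
# declared state against the actual device configuration.
api.create_namespaced_custom_object(
    group="example.nephio.org",
    version="v1alpha1",
    namespace="telco",
    plural="networkdevices",
    body=device_intent,
)
```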

Amazon’s “Telco Ecosystem” middle layer focuses on the partnerships they’ve established with vendors, integrators, and others. The next thing Amazon talked about at MWC was the top layer, where they introduced the Smart Home Platform, Integrated Private Wireless, and Network APIs.

The smart-home piece seems aimed at giving telcos a role in the smart home revolution. Amazon points out that consumers love the concept of a smart home but are often defeated by the technology requirements. Telcos could play a part here, offering an ISV platform that’s device-agnostic and centrally managed. I think this could be a smart play, but more for smart buildings than for smart homes, because I’m doubtful that consumers would pay for the integration given that they can get apps and hosting from the device vendors, including Amazon with its Ring series.

There is an extension of the concept for businesses, though, and that seems aimed at supporting IoT, AI/ML, and other emerging technologies through the cloud. Here there’s already cloud acceptance, and Amazon is working to frame their cloud services to facilitate development by ISVs in the key vertical areas. In some cases (connected vehicles, for example) they’re also addressing a related consumer-market opportunity.

Amazon, like others, is infatuated with private wireless, and there is a private wireless opportunity, but it’s one that takes real effort to develop and is difficult to address at all except in a few verticals. My data says that Amazon’s private-wireless customer base is only about half the size of Nokia’s, and that its customers tend to be smaller companies as well. It’s interesting that Amazon cites many of the same industries Nokia does, including automotive/fleet/transportation and agriculture. Where Amazon is differentiated is in its ability to offer “managed private 5G” to SMBs, as well as integrated operator/private 5G with partners in the CSP and MSP spaces; it works with integrators and radio vendors for the on-site deployments.

Integrated private wireless (IPW) seems to be Amazon’s main focus here, which I think is smart because it can support a broader set of missions, and even a broader set of verticals, than a pure private strategy. Partnering with CSPs and MSPs also offers Amazon an entrée into the telco space, since most of the 5G elements and the deployment and management tools are shared with Amazon’s telco service portfolio.

Not surprisingly, a lot of Amazon’s telecom story is dependent on third-party developers and hosted applications they build for verticals. Amazon has the largest developer community of the public cloud providers, and they also have a fairly well-articulated edge strategy that links with their core AWS services and APIs. That means that AWS can be bound tightly to on-prem edge elements (AWS Outposts).

Overall, Amazon is playing its cloud cards in the telco world, and all of its efforts rest on a fundamental truth: in today’s market, “services” are international while telcos are constrained geographically. There is zero chance that telcos would deploy their own assets over their entire proposed service footprint, because the cost of simply acquiring facilities would be crippling. The cloud is the obvious alternative, and it’s clear that Amazon intends to enhance the services AWS can offer (AWS Wavelength is an example) as well as exploit its base of developers and integrators.

Will that be enough, though? That’s the big question with Amazon’s telco strategy, and it’s a question Amazon probably can’t answer and may not even be able to influence. Amazon wins if operators, or at least MSPs, decide to build real higher-layer services on top of connectivity, services that may not require tight coupling with what’s below but could benefit from it. Amazon loses if operators stay with connection services, because truth be told there’s not much to be gained by the limited VNF hosting opportunities that don’t involve the data plane. For the data plane, white boxes are the cost-cutting (and feature-enhancing) answer.

That doesn’t mean the Amazon telco initiatives aren’t important. Startups have been AWS’s bread and butter, and they’re in decline, at least for now. Microsoft is doing better in the enterprise space and will likely continue to do so. Google is still groping for a differentiating strategy, and for literally decades operators have been more comfortable with Google as a partner than with Amazon. If Google finds its feet, it’s more likely to take share from AWS than from Microsoft. But…and it’s a big “but”…if Amazon can be a bit less pedestrian in its thinking about the telcos, and work a bit harder to create a real opportunity for a joint venture, it could marginalize Google, and it would then likely gain more from that than rival Microsoft would. Think about it, Amazon.


A First Look at Google’s Bard

Last week I was provided access to Google’s Bard chatbot, the ChatGPT competitor. I’ve had a chance to run it through its paces as I’ve done with ChatGPT, and I want to offer my view of Bard and what I think Bard could do. Let me say up front that I was impressed with Bard, more so than I was with ChatGPT.

Let me make a point that I’ll repeat later, because it’s important. I am not interested in whether Bard can make me believe it’s human, tell me jokes, entertain me, or whatever. I don’t propose to have it take tests for me, write articles, or send emails or text messages. I’m interested in whether it could play a role in my online research. I juggle a lot of numbers in my work, and finding them is often a chore. Can Bard make the job easier? If it can, then it presents a real value to me, and likely to other serious users of AI.

Here are three questions I asked Bard, and the responses it provided.

Question: Compare growth in US spending on information technology with US GDP growth.

Response: The US economy grew at an annual rate of 2.7% in the fourth quarter of 2022, while spending on information technology grew at an annual rate of 12.1%. This means that IT spending grew at more than four times the rate of GDP growth.

This trend has been continuing for several years. In 2021, IT spending grew at an annual rate of 8.3%, while GDP growth was 2.3%. And in 2020, IT spending grew at an annual rate of 10.7%, while GDP growth was -2.8%.

There are several reasons for this trend. First, businesses are increasingly using technology to improve their efficiency and productivity. Second, the COVID-19 pandemic has accelerated the adoption of remote work and virtual learning, which has led to increased IT spending. Third, the rise of artificial intelligence and machine learning is creating new opportunities for businesses to use technology to automate tasks and improve decision-making.

The continued growth in IT spending is a positive sign for the US economy. It indicates that businesses are investing in technology to grow their businesses, which could lead to increased hiring and economic growth. However, it is important to note that not all IT spending is created equal. Some IT spending is simply maintenance or upgrades, while other IT spending is more transformative, such as the development of new applications or the implementation of new technologies.

The growth in IT spending is also a sign of the changing nature of work. The rise of automation and artificial intelligence is leading to job displacement in some industries, but it is also creating new opportunities in others. Businesses that are able to adapt to these changes will be the ones that are most successful in the long run.

Question: Comment on the following: Back in the days when wireline telephony was pretty much the only pervasive network technology, we had perhaps sixteen thousand “edge” office locations in the US. Today, according to some information provided by the Wireless Infrastructure Association (WIA), the US has over 142 thousand towers, over 200 thousand “macrocell” sites, and almost half a million outdoor small cell nodes. Add that up and you get fifty times the number of access edge locations we had in the old wireline days, and that growth has some really important side effects.

One thing that’s really important is that fiber infrastructure deployment has decisively shifted toward the access network. It used to be that we used fiber primarily in aggregation missions, starting with those SONET rings and moving into the core of IP networks. Today, we deploy far more glass in the access network, and while fiber to the home isn’t universal and may never be, fiber close to the home is already a fact.

A second important thing is that mobile fiber backhaul is making it easier to create those close-to connections for FTTN. There’s already been pressure on operators to use fiber to home/business locations where demand density is locally high enough to pay back the pass costs. If you’re going to trench fiber to a node for mobile services, you could trench from there to a residential subdivision or a shopping mall at a lower cost, so more areas are open. You can also use mobile sites as FWA locations. As a result, wireline broadband for what would normally be classified as residential use is improving in reliability and capacity.

Response: The growth in the number of wireless infrastructure locations in the US has several important side effects. First, it has led to a decisive shift toward fiber infrastructure deployment in the access network. This is a positive development, as fiber is a much more reliable and capable medium than copper. Second, it has made it easier to create close-to connections for FTTN. This is also a positive development, as it will improve the quality of broadband service for many people. Finally, it has made FWA a more viable option. This is a positive development, as it will provide more choices for consumers and help to reduce the digital divide.

However, there are also some potential negative side effects of the growth in wireless infrastructure. First, it could lead to increased competition for spectrum. This could lead to higher prices for consumers and could also make it more difficult for new entrants to enter the market. Second, it could lead to an increase in the amount of electromagnetic radiation exposure. This could have negative health effects for some people. Finally, it could lead to an increase in the amount of light pollution. This could make it more difficult for people to see the stars at night.

Overall, the growth in wireless infrastructure in the US has both positive and negative side effects. It is important to weigh the pros and cons carefully before making decisions about whether or not to support this growth.

Question: What does CIMI Corporation’s public blog say about metro?

Response: CIMI Corporation’s public blog discusses the importance of the metro in the networking industry. It argues that the metro is the best place to introduce network service features, as it is close enough to the edge to permit real-time handling and personalization, but deep enough to serve the number of users needed to create reasonable economy of scale.

The blog also discusses the possibility of making metro-centricity a design mandate. This would involve defining a kind of “movement zone” around each metro center, with the goal of containing a large proportion of the users there within that zone as they travel locally. This would improve handling and latency, and would cover about three-quarters of the population that way.

Overall, CIMI Corporation’s public blog argues that the metro is a vital part of the networking industry, and that it should be a priority for network operators.

In one of my earlier blogs, I explored the question of whether AI could be a direct threat to humans. I decided to ask Bard the broad question, and this is how it replied.

Question: Could artificial intelligence develop to the point where it was a threat to humans?

Response: There is a possibility that artificial intelligence could develop to the point where it is a threat to humans. This could happen if AI systems become so intelligent that they are able to outsmart and outmaneuver humans, or if they become so powerful that they are able to take control of critical systems, such as power grids or nuclear weapons.

However, there are also many reasons to believe that AI will not become a threat to humans. For example, AI systems are still far from being as intelligent as humans, and it is likely that they will continue to be dependent on humans for many years to come. Additionally, there are many safeguards in place to prevent AI systems from becoming too powerful, such as the development of ethical guidelines for AI research and the creation of safety mechanisms for AI systems.

Overall, it is impossible to say for sure whether or not artificial intelligence will become a threat to humans. However, it is important to be aware of the risks and to take steps to mitigate them.

Question: Will AI need Asimov’s Three Laws of Robotics to protect us?

Response: It is not clear if AI will need Asimov’s Three Laws of Robotics to protect us. These laws are designed to ensure that AI systems do not harm humans, but they may not be sufficient to prevent all potential dangers. For example, an AI system that is programmed to obey the Three Laws of Robotics could still harm humans if it is hacked or if it makes a mistake.

Additionally, the Three Laws of Robotics may not be applicable to all types of AI systems. For example, an AI system that is designed to explore the universe may not need to obey the First Law of Robotics, as it would not be in a position to harm humans.

Ultimately, whether or not AI needs Asimov’s Three Laws of Robotics to protect us is a complex question that cannot be answered definitively. However, it is important to consider the risks posed by AI and to take steps to mitigate them.

I think that Bard did a pretty good job on these questions, and the fact that it raised the potential of a hack or a mistake was, in a real sense, insightful. I also think that criticisms that Bard is “boring” compared to other chatbots trivialize the whole notion of AI. Does everything these days have to be entertaining? Can’t we find a use for something that improves our productivity or enhances our lives? Even a quick check of Bard’s capabilities demonstrated that I could find things in seconds using Bard that I might not be able to find at all with search. Some of the things I’ve tried to search for, in fact, produced no useful results even after ten or fifteen minutes of work, where Bard’s quick response was on the money. Yes, if I ask the wrong question I can get the wrong answer, but in most cases my error (or our difference in interpretation) was obvious and easily corrected.

I am going to continue to use Bard for research, because Bard has proved to me that it’s useful in that role. I don’t need it to tell me jokes or sing me songs or pretend it’s my friend. I need it to do a job, a job that’s important to me, and so far it’s been able to do just that. If you want entertainment, try watching TV.

Wireless, Edge, and the Access Network: Changes and Impacts

Back in the days when wireline telephony was pretty much the only pervasive network technology, we had perhaps sixteen thousand “edge” office locations in the US. Today, according to some information provided by the Wireless Infrastructure Association (WIA), the US has over 142 thousand towers, over 200 thousand “macrocell” sites, and almost half a million outdoor small cell nodes. Add that up and you get fifty times the number of access edge locations we had in the old wireline days, and that growth has some really important side effects.
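The arithmetic behind that “fifty times” figure is easy to verify from the WIA numbers just quoted; here’s a quick back-of-the-envelope check (the small-cell count is rounded up to half a million):

```python
# Back-of-the-envelope check of the "fifty times" claim, using the WIA
# figures quoted above (small cells rounded up to half a million).
wireline_edge_offices = 16_000
towers = 142_000
macrocells = 200_000
small_cells = 500_000

wireless_edge_sites = towers + macrocells + small_cells   # 842,000
ratio = wireless_edge_sites / wireline_edge_offices       # roughly 53x
print(f"{wireless_edge_sites:,} access edge sites, about {ratio:.0f}x the old count")
```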

One thing that’s really important is that fiber infrastructure deployment has decisively shifted toward the access network. It used to be that we used fiber primarily in aggregation missions, starting with those SONET rings and moving into the core of IP networks. Today, we deploy far more glass in the access network, and while fiber to the home isn’t universal and may never be, fiber close to the home is already a fact.

A second important thing is that mobile fiber backhaul is making it easier to create those close-to connections for FTTN. There’s already been pressure on operators to use fiber to home/business locations where demand density is locally high enough to pay back the pass costs. If you’re going to trench fiber to a node for mobile services, you could trench from there to a residential subdivision or a shopping mall at a lower cost, so more areas are open. You can also use mobile sites as FWA locations. As a result, wireline broadband for what would normally be classified as residential use is improving in reliability and capacity.

A third important thing is that we’re redefining “edge”, at least in terms of access networking, and raising the question of whether there will come a time when we could deploy service features and even edge computing outward beyond the current facility-based edge office sites. This probably won’t happen this year or even next, but we’re moving toward a point where some features and functions might be hosted further out, provided we can make a business case for applications that depend on them.

One thing you probably see from my description of the side effects is that they’re “important” for what they might bring in turn, rather than what they are in detail. So what might they bring?

The most important result of these three edge points is that business-specific access is no longer a great opportunity for operators. Staying with the US market, the total number of multi-site businesses is roughly 50,000, and these businesses average about 32 satellite locations. Operators might have hoped for over a million and a half prospective carrier Ethernet sites if everyone got on an IP VPN, but now we have to assume that, at the minimum, the primary market for carrier Ethernet would be the 50,000 main sites, and my data says that all but about ten thousand of those could easily be supported via the same fiber/PON connections used for residential broadband, just with different routing off the head end of the PON.
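A quick back-of-the-envelope, using the rough estimates in the paragraph above, shows where the “million and a half” figure and the residual dedicated carrier Ethernet market come from:

```python
# Rough sizing of the US carrier Ethernet opportunity, using the
# estimates quoted above.
multi_site_businesses = 50_000
avg_satellite_sites = 32

satellite_sites = multi_site_businesses * avg_satellite_sites   # 1,600,000
total_vpn_sites = satellite_sites + multi_site_businesses       # ~1.65M hoped-for sites

# If carrier Ethernet contracts to the main sites only, and all but
# about 10,000 of those can ride residential-grade fiber/PON...
dedicated_ethernet_sites = 10_000
print(f"Hoped-for VPN sites: {total_vpn_sites:,}")
print(f"Likely dedicated carrier Ethernet sites: {dedicated_ethernet_sites:,}")
```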

That, to me, means that operators have a dire choice to make almost immediately. Do they take steps now to condition residential-targeted fiber and FWA to support traditional MPLS VPN access, or do they accept that SD-WAN will eventually likely supplant MPLS VPNs completely and start deploying it instead? If they sit on their hands (a not-uncommon decision), they can expect to lose out to managed service providers or self-installed SD-WAN, or even to cloud-hosted SASE.

The second thing these three access-related points would do is amplify what could be called the “uNode” hosting opportunity over NFV hosting. Virtual functions in NFV were generally deployed in “cloud resource pools”, and when the NFV ISG proof-of-concept trials looked at replacing CPE with VNFs, they rapidly reached the conclusion that you needed a generalized appliance to “host” something. Shift the mission from CPE to the far edge, and you get what I’m calling a “uNode”, a sort-of-white-box designed to host small, agile feature elements. I think O-RAN already admits that you can’t expect the same sort of “function hosting” for the RAN that NFV had envisioned, and I think we can expect to see the white box take on new roles as it becomes the host of choice for far-edge feature elements.

The next of our access-related impacts is increased interest in, and value attached to, “edge injection and interconnect”. We already know that “content injection” at the metro level (where it’s typically done today) results in traffic congestion upstream, toward the injection point. If we presume that more and more content experiences will be real-time material, then we need to think about a better distribution mechanism as that material increases. Some of this will likely be handled by my uNodes (creating another reason to place them at critical junction points in the access network), but some may also be handled by injecting content deeper in the access network. If we really see social-metaverse experiences develop, which eventually is likely, that will probably require at least some interconnection of uNodes with adjacent metro centers to reduce latency.

Our final point is that access infrastructure changes will drive full convergence of mobile and wireline at the service level, because of the increased harmony at the infrastructure level. There have been operator-sponsored projects to address this for almost two decades (Project Icarus was one), but the unification of access infrastructure (and my uNodes!) will make it a lot easier. In fact, we can expect these points to create a full separation of access, transport, and services. That means continuity of calls, the Icarus goal, across wireline and wireless and even between the devices you own, but it means a lot more.

These points will reshape a lot of our industry, and they will also have to reshape regulatory policies and business models. Technically speaking, the Internet will become the universal dialtone for everything. All services, and I mean all services, will operate over the Internet and will thus become “OTT” services. That puts the infrastructure providers in a fatally compromised position, one they can escape only through a near-term effort to create a true service-over-the-Internet position for themselves, or by being rescued by regulators through settlements or subsidies.