Reading the Cloud Tea Leaves

Cloud computing’s Big Three roughly met Wall Street expectations in the first calendar quarter, but the expectations factored in a continued deceleration of cloud growth. Microsoft managed the best growth numbers, followed by Google, and AWS was third. In my view, none of this should have been much of a surprise, but I do think some thoughtful analysis of the results is in order. There are macro reasons for slowing cloud growth; cloud activity tends to mirror purchase interests and obviously the economy is in a bit of a mess. However, I think that the total picture is more complicated than simple macro forces.

One complicating factor is increased competition in the cloud space. While the combined market share below the Big Three is relatively small, cost concerns about cloud services have been increasing, and enterprises have told me they’re looking deeper into the list of potential providers. That means companies like IBM and Oracle can increase their own sales, at the expense of incumbents. The same forces are likely sustaining Google’s growth relative to Amazon; I’m hearing a bit more interest in Google from both enterprises and startups or OTTs.

I think Amazon's trailing growth can be partly attributed to the stress on startups, since that's always been a major source of business for Amazon and figures far less in the results of other cloud providers. Amazon's cloud comments have been rather trite, but they did comment that sales interest was driven by cost savings. Wall Street research has said that part of Amazon's growth issue comes from helping cost-conscious customers move to lower pricing tiers, and I find that interesting. First, because the AWS enterprises I've chatted with don't indicate that "savings" was as big a factor in their recent cloud decisions as "cost avoidance". That would reduce cloud spending rather than create growth. Second, because cutting costs is a factor with companies already in the cloud. This supports what I've heard from enterprises, which is that slack cost control in application design has created cloud cost problems. Movement to the cloud is not to cut costs, but to increase them less. Once there, enterprises increasingly realize they need to optimize applications in a way they should have done from the first.

It follows that Amazon’s future may well be tied to the future of startups, and that’s looking particularly uncertain right now. While many Street analysts still say Amazon is going to stay in the market-share lead, there are also many who think they have challenges, and I vote with that group. Because Amazon doesn’t have any meaningful enterprise premises business, no office tools, no hosting software, they are perhaps a bit more committed to the notion that everything has to move to the cloud. That they’re supporting startups who were never anywhere else (though some are now saving a boatload by self-hosting) likely hardens that view. If that’s the case, then Amazon’s tools may support a front-end vision of the cloud but their marketing/sales work doesn’t, and that means they may not be the first player to come to mind when CxOs think about a cloud commitment.

Google admits that customers are looking for ways to reduce cloud costs, but their own growth rates were strong. Part of that, as I've noted, is likely due to price shopping on the part of Azure and AWS buyers, but I've noticed Google getting mentioned a bit more often by both enterprises and operators, and so there's also likely some growth from new adoptions. I've believed all along that Google has better base-line cloud technology than either Amazon or Microsoft, but they don't have the enterprise connection of Microsoft or the web service kit Amazon has. If they want enterprise success they'll need to push more for business tools, so in the near term they're likely dependent on operator interest. Their Nephio project is an indication they realize that, but it's still a base-line tool and not something like a 5G suite.

The problem is that I don’t believe that the telcos are going to be enough, even if we don’t consider that Google is well behind Microsoft in the telco space today. I think all the cloud providers have tended to kiss telco-strategic babies when they should be pushing cloud-strategic concepts, but for Google this is a particular problem because of Google’s superiority in basic cloud tools. They’re not exploiting their own top assets. Even if they do, given the glacial speed of telco progress in new technology areas, Google needs enterprise tools.

They have some, in their suite of office tools that compete with Microsoft 365. They are now starting to push these more effectively, and to integrate them with cloud applications the same way Microsoft does. They also have Kubernetes and Istio, but perhaps because of their software legacy they've almost downplayed their role in both these and other platform tools because they've turned them over to the CNCF or some open-source group. That's a good step, but so is blowing your own horn. Google's big problem is not having any solid Teams-like offering. Google's collaborative tools, starting with Hangouts and then Chat and then G Suite and now Workspace, have had too many relaunches to build confidence. That only makes it harder to establish Google as a Microsoft competitor, and Microsoft is the gorilla in the room.

Azure sure seems to be the real success story in the cloud, based on quarterly growth numbers. Microsoft has had an advantage over other cloud providers because of its presence on the premises, and that continues to show in their cloud numbers. Microsoft has more actual "cloud migrations" than any other provider, and despite the growing interest that telcos are showing in Google Cloud, Microsoft still leads the pack in the telco space from what I see and hear. They have specific "Cloud for…" offerings for key verticals and even "horizontals" (sustainability is an example), and that's helped them reduce the length of the cloud sales cycle.

Another benefit Microsoft brings, one that's not always acknowledged, is their strong presence in the software development space. Visual Studio and GitHub are fixtures in the programming space, and while they can be applied to any cloud or data center environment, Microsoft's documentation on their tools and capabilities not surprisingly features examples from Azure. Rust, perhaps the hottest new programming language, is supported on every cloud, but Microsoft's Visual Studio IDE is a preferred platform for Rust development, and the Rust examples Microsoft offers are for Azure.

Microsoft also has Teams and Microsoft 365, and being able to integrate document production, collaboration, even phone calls, with Azure cloud applications is a powerful capability. Microsoft is working hard to advance cloud-symbiotic tools and enterprises are responding with more cross-tool solutions that still rely on Azure for the application hosting. They’re now pushing AI strongly, with OpenAI Service in Azure and Microsoft 365 Copilot. Microsoft has its fingers in almost every business pie, and that reach gives it a real advantage in the cloud, one that I think Amazon and Google will need to counter specifically if they want to beat Microsoft. Given that Microsoft is constantly adding to its inventory of symbiotic business tools, and that it has a long-standing position in key areas like software development and office productivity, it’s going to be a challenge for competitors to catch up. They’d almost have to buy into Microsoft’s tool strategy because users are already adopting it, and that’s a very weak competitive position to take.

In addition to their initiatives in AI and their incumbency in development, Microsoft also seems to be making a major thrust in the edge computing space. So far, this isn’t likely to contribute much cloud revenue because edge hosting in the cloud isn’t widely available from anyone, including Microsoft, but Microsoft wants to be sure that it gets edge development going in a direction that favors its own offerings. To prep for this, they’re defining the edge as the customer edge and they’ve created a “fog computing” category to express what edge-of-cloud hosting would do.

I think it’s interesting that Microsoft is planning for an application evolution that retains and even expands customer hosting of at least some applications/components, while at the same time blowing kisses at the “move to the cloud” story. I guess a part of that is reluctance to dis the value proposition that’s captivated the media; if Microsoft were to tell the blunt truth about cloud adoption their competitors would jump on the move as showing Microsoft isn’t fully committed to the cloud. But telling the truth is generally a good idea in the long run, and Microsoft may be approaching the point where they’ll need to reposition themselves to accommodate what enterprises are now realizing—the cloud isn’t where everything is going.

Can Money Be Made on Digital Twinning?

Is it possible to make money on a "digital twin" metaverse, on the empowerment of workers not normally empowered, on the binding of consumerism to consumers in a better way? I've talked in past blogs about the technology of such a metaverse, and I've touched a bit on the things it could benefit, but we need to be hard-headed here. Can we see a digital-twin metaverse creating an ecosystem that would be profitable enough to encourage participation? If not, technology is irrelevant.

The essential notion of digital twinning is to create a computer model of a real-time system, in order to better assess its overall state and support its operation and use. The information used to create the model is presumed to be gathered from sensors, which means IoT. The “support” could take the form of assisting human workers or driving automated systems through direct control.

From an architecture perspective, such a system would consist of four parts. First, the model, which is presumably a dataset that collects not only real-time information but also information inferred from that primary source. It’s the “digital twin” itself. Feeding it is a collection of sensors and the associated software, and that’s the second part. Third, a set of processes running off the model would be assigned the mission of establishing state and matching goals to the state. Finally, a set of processes running off the model would interface the model to external elements, which to me would include both humans and automated processes.
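To make those four parts a bit more concrete, here's a minimal structural sketch in Python. The class and method names are purely illustrative, not a reference design, and a real system would need the persistence, scale, and real-time handling this toy skips.

```python
# A minimal structural sketch of the four parts described above; the names
# and the trivial logic are illustrative only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TwinModel:
    """Part 1: the digital twin itself - observed plus inferred state."""
    observed: dict = field(default_factory=dict)
    inferred: dict = field(default_factory=dict)

class SensorFeed:
    """Part 2: sensor/IoT ingestion that keeps the model current."""
    def __init__(self, model: TwinModel):
        self.model = model
    def ingest(self, sensor_id: str, value: float):
        self.model.observed[sensor_id] = value

class StateAssessor:
    """Part 3: processes that establish state and compare it to goals."""
    def __init__(self, model: TwinModel, goals: dict):
        self.model, self.goals = model, goals
    def evaluate(self) -> dict:
        return {name: self.model.observed.get(name, 0) - target
                for name, target in self.goals.items()}

class ExternalInterface:
    """Part 4: processes that couple the model to people or automation."""
    def __init__(self, model: TwinModel, notify: Callable[[str], None]):
        self.model, self.notify = model, notify
    def publish(self, deviations: dict):
        for name, delta in deviations.items():
            if abs(delta) > 0:   # a real threshold would be mission-specific
                self.notify(f"{name} is off target by {delta}")

# Wiring the parts together for a trivial example:
model = TwinModel()
SensorFeed(model).ingest("line_temp", 82.0)
gaps = StateAssessor(model, goals={"line_temp": 75.0}).evaluate()
ExternalInterface(model, print).publish(gaps)
```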

None of these pieces are particularly challenging. I contend that a reasonably qualified team of a dozen programmers/architects could produce a prototype of this system in six to nine months. Would it be comprehensive, merchandisable? Surely not at that point, but it would be enough to demonstrate a credible model and attract support, provided of course that it was offered by a credible player. But would such a credible player emerge without the support that only a credible model could attract? That's the challenge here.

Breaking the chicken-and-egg loop here would likely require a clear addressable opportunity. To do that, we start by saying that you could divide the labor force you’d want to consider into three groups, the “office” workers, the “retail/service” workers, and the “product and production” workers. Roughly half of all workers work at a desk and so could be considered “office” workers. The remainder are fairly evenly divided between the other two groups.

Most of the currently active empowerment strategies target the office group and a limited number can also target workers in other groups. It’s hard to gather data on this except anecdotally but my contacts suggest that 70% of office workers are targeted and about 8% of each of the other groups, which means that we actually work to empower about half of workers with current technology. I argue that it’s the other half that need the digital-twin metaverse, and that would mean that we could use it to open an IT opportunity as large, in worker population, as we already target. That should surely be enough to gain credibility.

Obviously it has not, at least not in an effective way. Enterprises with a lot of workers in the second and third groups tell me that there are three issues. First, vendors and service providers aren't actively promoting offerings that target these two groups. Second, what applications and services they do see are specialized to a job and/or industry, intersecting only a small part of those two groups within their companies. Third, technologies and practices for addressing workers in these groups are different, offering no resource or management economies.

What nobody mentions is the difference in the way that various jobs and industries have to be assessed and empowered. For example, retail and service workers typically have lower unit values of labor, which means that in order to recover a given investment in empowerment you'd have to empower a larger number of workers through any contemplated project. Among the major job classifications recognized in the US, there are some that can provide a relatively high unit value of labor and some that have large numbers, but interestingly none that have both. That makes efforts at empowerment a balancing of the two factors, and every job and industry seems to require its own balancing. In some cases, this means taking a fairly innovative view of how you might address them.

I had an interesting chat with the CIO of a restaurant chain, who reached out to me to understand the way “digital-twinning” might work for a restaurant. It turns out that rather than considering a worker as a target, you have to consider the restaurant as the target, meaning you model the restaurant as an ecosystem. That shouldn’t have been a surprise given that this would be true for empowering assembly-line workers as well, but it points out that worker empowerment doesn’t necessarily mean twinning a worker, and in fact probably doesn’t. It means twinning the real-time system that workers are a part of, and that’s in my view the underlying problem with addressing those two underserved sectors.

Creating a model for a workplace, which is really what we're talking about here, is a three-level challenge. First, it takes somebody who has a detailed understanding of the workplace and the work, which means a line person. That person isn't likely to understand the process of twinning, so you also need a model specialist to translate the workplace/work insights into a model. Then you need the actual modeling tools, the model, and the interfaces between the model and the real world. That means that somebody has to support the general notion of digital twinning, and that seems intuitively like a heavy lift for a vendor who's likely trying to pick the low apples of opportunity.

Years ago, I worked on software for a company that was having a major problem with fixed asset tracking. They had computers, typewriters, desks, chairs, tables, and so forth, and of course when people moved around some of these moved with them. Other stuff stayed where it was, and some stuff disappeared. The company had tagged the asset types with serial numbers, and annually they did an inventory, sending people into each space to record what was there. They wanted to do this faster and more often, and their thought was to put an RFID tag on each asset so the process of recording it would be quicker. They envisioned an inventory taker walking to a room, shooting a room tag with an RFID gun, then entering and shooting each asset. From that, they could construct a "digital twin" of the spaces and contents.

The problem was that the twin quickly lost connection with reality, which meant you had to do inventory more often, which was more costly. The solution was to put an RFID scanner at the entry points to each space, so when something was taken out or brought in it was recorded. Now the map was up to date. However, you can see that this application, having been developed from scratch and targeting this problem, wouldn’t create much that could be leveraged for other “twinning” missions.
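Just for illustration, here's a tiny sketch of that entry/exit approach, with invented tag and room identifiers. The point is simply that each portal scan becomes an event that updates the asset-location twin immediately, instead of waiting for the next inventory sweep.

```python
# A small sketch of the portal-scan idea; identifiers are invented.

asset_locations = {"RFID-00417": "Room 210"}   # the "twin": asset -> space

def portal_scan(asset_tag: str, room: str, direction: str):
    """Called by the RFID scanner at a doorway; direction is 'in' or 'out'."""
    if direction == "in":
        asset_locations[asset_tag] = room
    elif direction == "out" and asset_locations.get(asset_tag) == room:
        asset_locations[asset_tag] = "in transit"

portal_scan("RFID-00417", "Room 210", "out")
portal_scan("RFID-00417", "Room 305", "in")
print(asset_locations)   # {'RFID-00417': 'Room 305'} - no re-inventory needed
```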

Most of the applications that could be addressed through digital twinning, even those being developed today, aren’t built on a general digital-twin model. I think the biggest challenge we face in addressing new empowerment opportunities, even opportunities in consumer empowerment, is that of silo development. We’ve probably spent ten times what it would have cost to create a general model that could have propelled the whole space forward, and we still don’t have that model.

We also don’t have broad recognition of an almost-as-important truth, which is that a digital-twinning model is a general case of a social metaverse, and that metaverses and digital twinning are two faces of the same coin. That means we’re at risk to directing metaverse development into the creation of another set of silos. Any virtual reality has to start with a model of the reality. Any participatory virtual reality has to integrate the real world with the model. Whether we visualize the result or use it to empower workers or automate processes is just the way we manage the output. The rest is all the same—a digital twin model.

Cisco, VMware, and Growth in the Telco Market

If we assume that 5G will continue to drive telco spending this year, what specific part of telco spending is getting the push, and what vendors are benefitting? According to an SDxCentral piece that cites an analyst report, the answer might be pretty revealing, but not necessarily in exactly the way that the report suggests. There's a bit of mixing of statistics in the data that can be confusing.

Let’s start by saying that 5G technology isn’t a uniform opportunity because it’s not uniform itself. We have RAN and Core, we have Control Plane and User Plane…you get the picture. At a higher level, we also have the 5G Non-Stand-Alone or NSA that lets 5G ride on an Evolved Packet Core for 4G, and we have 5G Stand-Alone (SA) where 5G RAN marries 5G Core. Generally, specialized mobile vendors like Ericsson and Nokia have played better in the RAN space because of their knowledge of the radio piece of the puzzle, but how well they can leverage that into the Core space, when Core is more about transport, is an open question.

Then there’s the fact that article I reference isn’t purely about 5G, but about telecom equipment and spending in general. It notes that Cisco dropped in market share for the telecom space in 2022, which was a surprise given that in 2021 it was a big market share gainer. VMware managed to be one of the largest share gainers. Obviously there’s a difference between revenue rankings and revenue growth rankings for vendors, and also major differences in the way that vendors fit into telecom networks overall.

One thing that seems clear is that it's important to look at 5G spending here rather than smear the story over a mixture of network technologies. One big reason is that 5G is budgeted and everything else has to compete for sustaining dollars. I'd argue that one reason for Cisco's problem in 2022 was that their primary telco products are routers, and routers address only a narrow chunk of 5G infrastructure. VMware, in contrast, is targeting the 5G space for most of its growth, and that's the space with broader potential. Microsoft was the biggest gainer in terms of growth of market share.

Still, given that 5G RAN generally deploys ahead of 5G core because 5G NSA can leverage 4G packet core infrastructure, that most deployment news seems focused on 5G Core (5G SA), and that if there’s any place where a router vendor could expect to play in 5G, Core would be the place, you have to wonder why Cisco hasn’t gained share. The answer lies in the Control/User plane separation that’s fundamental to not only 5G but 4G as well.

A mobile core (4G/LTE or 5G) has both a user-plane and control-plane element, at least at the edge where it connects with the RAN. The 5G Core connects with the RAN via what’s called a “backhaul” connection to an edge-of-core element that is 5G-specific. Within the core, it uses traditional IP routing. The majority of mobile-specific routing doesn’t take place in the core at all, but out in what 5G calls the “fronthaul” or (in currently favored distributed DU/CU architectures) the “mid-haul” area. This “routing” is a 5G user-plane activity, meaning that it’s done using a UPF and is tunnel-aware. That means that the really 5G-specific stuff isn’t really in the core at all, and that the mobile core is going to exploit routing capacity that’s already in place unless 5G drives up traffic considerably.
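To illustrate what "tunnel-aware" means in practice, here's a minimal sketch rather than anything resembling a real UPF. The rule structure is loosely inspired by the packet-detection and forwarding-rule idea in the 3GPP user plane, but the names and values are invented for the example.

```python
# Minimal sketch (not a real UPF): contrasts tunnel-aware user-plane
# forwarding with ordinary destination-based IP routing. Rule names and
# identifiers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ForwardingRule:
    next_hop: str        # where to send the inner packet
    strip_tunnel: bool   # decapsulate the tunnel before forwarding?

# An ordinary IP router only needs the destination prefix...
ip_routes = {"203.0.113.0/24": "core-router-1"}

# ...while a UPF keys its decision on the tunnel (TEID) the packet arrived
# in, because the tunnel identifies the session it belongs to.
upf_rules = {
    0x1001: ForwardingRule(next_hop="n6-internet-gw", strip_tunnel=True),
    0x1002: ForwardingRule(next_hop="edge-app-host", strip_tunnel=True),
}

def upf_forward(teid: int, inner_dst_ip: str) -> str:
    """Pick an egress based on the tunnel, not just the IP destination."""
    rule = upf_rules.get(teid)
    if rule is None:
        return "drop: unknown session"
    action = "decap+forward" if rule.strip_tunnel else "forward"
    return f"{action} {inner_dst_ip} -> {rule.next_hop}"

print(upf_forward(0x1001, "203.0.113.9"))   # user traffic to the internet
print(upf_forward(0x1002, "10.10.0.5"))     # traffic breaking out to an edge app
```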

Some had predicted that 5G would indeed drive traffic up considerably, but the truth is more complicated. Let me offer an example; you're watching an HD movie on something. That movie has a typical characteristic bit rate of about 8Mbps. Suppose you're watching it over a service with a bit rate of 10 or 20 or 100Mbps. What's the video bit rate? The same 8Mbps. You don't push bits into a device; it consumes them based on the nature of what it's doing. Given that, there's not a major chance that video bandwidth consumption would be increased by 5G, and video is by far the largest traffic source, and source of traffic growth.
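A quick back-of-envelope calculation makes the point; the 8Mbps stream rate is the figure above, and the viewing hours are an assumption picked just for illustration.

```python
# Back-of-envelope sketch: the traffic a video stream generates depends on
# its encoding rate, not on the access speed it rides over.

stream_rate_mbps = 8          # HD stream bit rate (figure from the text)
viewing_hours_per_day = 3     # assumed household viewing time

gb_per_day = stream_rate_mbps / 8 * 3600 * viewing_hours_per_day / 1000

for access_mbps in (10, 20, 100, 1000):
    # Faster access changes headroom, not consumption.
    print(f"{access_mbps:>5} Mbps access -> {gb_per_day:.1f} GB/day of video traffic")
```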

What this means is that Cisco has relatively little chance of seeing a big revenue kick from 5G Core even if we could assume they'd win the business. But the fact that 5G Core isn't really a driver of router opportunity, and that the 5G specificity of the Core is really limited to an edge-of-core or on-ramp function, means that the UPFs used in the RAN mid-haul could also be used in the Core. That in turn means that an operator is likely to use the same core-side UPFs as they used in mid-haul, and they probably got those from their RAN vendor, which almost certainly wasn't Cisco. There's a double negative here, a double negative influence that is.

Then there’s the fact that Cisco is the router market share leader, so they have more deployed routers than competitors. Marginal gains in 5G traffic, which as I said are about as much as we should expect, aren’t going to add much to capacity requirements, and that which is added won’t have a major impact on Cisco’s revenue line just because it’s bigger. Triple-negative.

VMware, on the other hand, is a comparative midget in the telco market. A million-dollar gain in revenue from 5G for Cisco could hide in a rounding error, but it might be a big increase for VMware because they earn less from the market. VMware is also focused on open RAN, which is where all the 5G-specialized opportunity is, and as I’ve already noted, Cisco isn’t. Thus, their gain from 5G is unsurprisingly bigger.

There’s a final point here that’s perhaps even more important. Going back to a blog I did on an Omdia study on telco capex, the majority of capital spending is focused on the access network, not the core. In 5G, even 5G FWA, “access” means RAN and mid-haul. If you want to look from growth in 5G opportunity other than radio-linked stuff, you’d really have to look at Open RAN and function hosting, which in my view means looking at metro and edge computing. The opportunity for vendors in 5G may well be only metro-deep.

Can Openness Be Merchandised, Even in Networks?

Everyone loves open technology, except of course vendors who have to compete with it. Still, even vendors seem to embrace it or at least rely on it in some areas, and there's growing interest in having open technologies drive us into areas where innovation seems to have stalled out. With all of these positives, though, we have our share of negatives. One is "a camel is a horse designed by committee", a second is "generalized tools can be more expensive than specialized ones", and another is "you can't monetize something you give away." Can we overcome these barriers, and are there more waiting for us?

There’s an old adage that says “The IQ of any group of people is equal to the IQ of the dumbest, divided by the number in the group.” Putting this in a more politically correct way, it says that groups require cooperative decision-making, and that requires compromises to accommodate everyone in the group, which is harder as the number increases.” Anyone who’s been involved in open-source projects or standards development has seen this, but we seem powerless to eradicate it.

Some have suggested to me that the solution is to have a single person or company launch something and then "open" it, which means that the broad membership inherits an approach set by a more controlled group of people. I've seen that work and also seen it fail, so I don't think that's the solution. The real problem, from my own experience, is that projects of any sort that get off to a bad start are very difficult to turn around. A large group, having necessarily committed a large effort, doesn't want to invalidate its collective work. You've got to start right to go right.

How, though? My best answer is to say that an open project should begin by having a single insightful architect frame the approach. Of course, identifying who that might be is a challenge in itself. An alternative is to create a number of sub-groups (no more than four) and have each contribute a high-level approach model, which for software would mean an “architecture”. The group would then discuss the advantages and disadvantages of each, and pick the model. Then, the full group takes the idea to the next level, and if it’s going well at that point, a commitment to the approach is formalized. If not, one of the other models is picked and perhaps refined based on the lessons learned.

What this seems to do is eliminate a problem that’s dogged the footsteps of virtually every network-related project I’ve worked on, which is approaches biased by old-think. When there’s a model for something in place, as there is with networks, there’s a tendency to think of the future in terms of the present, the familiar. I’ve seen three highly resourced new-network-model projects toss away much of their potential value through that fault. One was ONAP and another NFV, by the way. None ever recovered, so that’s why it’s critical not to have the problem solidified into a barrier at the start.

The second issue could be called “the curse of generalization”. NFV had a bit of this from the first, with the original goal being to transform networks by hosting virtual functions on general-purpose servers. Well, general-purpose servers are not the right platform for the most demanding of network functions, and perhaps not for very many of those that live in the data plane. White boxes with specialized chips are better, and recently it’s been reported that the cost of general-purpose CPU chips is so much higher than the cost of a specialized and even proprietary CPU that it prices “open” devices out of the market.

This problem is more insidious than the chip example, though. Software design is a delicate balance between a generalization that widens the scope of what the software can do, and a specialization that supports the initial target mission most efficiently. We see in today's market a tendency to look toward "cloud-native" and "microservice" models for something because they're more versatile and flexible, but in many cases they're also alarmingly inefficient and costly. I've seen examples where response times for a general solution increased by a factor of 25, and costs quintupled. Not a good look.

These are both major concerns for open-model work of any sort, but the last of the three may be the most difficult to address. Something is "open" if it's not "proprietary", so open technology isn't locked to a specific supplier, but free to be exploited by many. Given that, how does anyone make money with it? In the old days of open-source, companies took source code and built and supported their own applications. Even this approach posed challenges regarding how participants could achieve a return for their efforts, without which many key contributors might not sign on. Add in the growing interest in open-source tools among users less technically qualified, and you quickly get support problems that free resources can't be expected to resolve.

We seem to have defined a workable model to address this problem in the server/application space, the “Red Hat” model of selling supported open-source by selling the support. However, the model fails if the total addressable market for a given open element isn’t large enough to make it profitable to the provider. Still, it’s worked for Nokia in O-RAN; their quarter disappointed Wall Street but they beat rival Ericsson, who’s less known for open components.

The big question that even this hopeful truth leaves on the table is whether a broad-based network change could be fostered by open network technology. O-RAN hasn’t been exactly the hare in the classic tortoise-vs-hare race, and the broader networking market has a lot of moving parts. But Red Hat surely supports a hosting ecosystem that’s even broader, so are we just waiting for a hero to emerge in the open network space? I know both Red Hat and VMware would like to be just that, and maybe they can be. If VMware were to be acquired successfully by Broadcom, the combination might jump-start the whole idea.

Fixing Operator Developer Programs (and Knowing When They’re Not Fixable)

Everyone seems to love developer programs. They promise that some legion of busy programmers will rush to create stuff using your service or product, thereby driving up your sales while you sit back and rake in the proceeds. What more could you ask? Well, how about a rational developer strategy? For operators in particular, that seems to be elusive. As a software architect and a participant in both some developer programs and some initiatives to create them, I've got some views on what might be involved in creating one.

Everyone who wants to start a developer program starts with the application programming interfaces (APIs) they’re going to expose and publish, and those are important at two levels. First, APIs represent features that can be accessed and composed into valuable services. The value of the services will depend in part on the API features exposed, but most important, the value of an operator’s contribution to the program is determined by those features. If you expose useless features then you contribute nothing the developers can exploit, and your program is doomed to failure.

One classic mistake operators make here is focusing on what they’d like developers to do rather than on what developers can actually profit from doing. Often that happens because they simply make an inventory of features that they could expose without much regard for what using those features could mean in terms of benefit to the developer. Remember, the developer is trying to create an application that uses the APIs, and hoping to sell the application or a service/services based on it. It’s important to consider that developer end-game if you want to attract developers.

Most operators look at two areas for feature exposure via APIs. The first is connectivity services, meaning voice, data, or message connectivity, and the second is management and operations services, meaning features of OSS/BSS systems. Both these areas have value, but in today’s market neither likely has enough value to actually assure operator developer programs of success. To understand why, we have to look at the way a developer would build on APIs to create value.

There are two models of developer applications that could exploit an operator API set. The first model is the app model and the second the cloud model. The app model says that the developer is exploiting the APIs in an app written for a mobile device or run on a system attached to the network, and the cloud model says that the developer is exploiting the APIs through a cloud application. Both these models have potential limitations.

One common limitation is the likely geographic restriction on accessing operator APIs. Where are the APIs actually hosted? An operator-specific API likely has limited value if the user of the developer's application/service isn't in the geography of the operator or on the operator's own network. A limit in the geography from which customers can be drawn means a limit in revenue, and in some cases limited API geographic scope can hinder effective marketing. If a developer plans to exploit a wider area than the APIs can cover at reasonable QoE, they'd need to support multiple operator relationships, which means either a standard API set or agreement among the operators in a regional market on a common API model. Otherwise developers would need to build versions of the application for every operator.

Geographic support limits for the APIs also have an impact on the value of the program, and here is one place where the two possible models of service raise different issues. App-modeled developer programs and applications pose a risk in geographically limited programs because the user may roam out of the area where the app will work, or work properly. With cloud-model development, that risk may not be a factor, but the availability of cloud resources proximate to the user has to be considered. An operator's developer program may require a single cloud provider to provide effective hosting, and if that's the case then it limits the developer pool to those familiar with that provider and also may indicate that the operator should specialize their APIs for consumption on the available cloud infrastructure for maximum benefit.

If you analyze the failures of operator developer programs, which I think in the last five years have been more numerous than the successes by a good margin, you can trace the majority to a lack of foundation features to expose. Truth be told, connectivity is a given. Truth be told, OSS/BSS systems administer services, and in most cases any real developer opportunities will (as I’ve noted) either be delivered through an app, the cloud, or perhaps both. Both have connectivity implicit in the platform, and the cloud is replete with management, operations, and administration tools.

The final issue on this topic is the problem of APIs forcing a legacy development model. In theory, an API can be accessed by any suitable programming language in any application. In practice, the way the API exposes assets creates an implicit application structure, and operators need to consider this when designing their API details.

A simple example here is the "push versus pull" approach. Classic operator-think tends to look at getting status information or other data by "pulling" it, so APIs are designed to ask for something and have it delivered in response to a request. Event-driven applications expect things to be "pushed" to them. More and more modern development is being done based on event-driven principles, so having a pull/poll type of API runs counter to this, and in some cases makes the whole API set and developer program unattractive.
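Here's a simple sketch of the difference, using hypothetical endpoint names rather than any real operator API.

```python
# Pull/poll versus push/event styles; the URLs and payloads are invented
# for illustration and don't represent a real operator API.

import time
import requests  # assumed available in the environment

def handle_status(status: dict):
    print(status)

# Pull/poll style: the application repeatedly asks for status.
def poll_link_status(api_base: str, link_id: str, interval_s: int = 30):
    while True:
        resp = requests.get(f"{api_base}/links/{link_id}/status")
        handle_status(resp.json())
        time.sleep(interval_s)   # latency and wasted calls between polls

# Push/event style: the application registers once and reacts to events.
def subscribe_link_events(api_base: str, link_id: str, callback_url: str):
    requests.post(f"{api_base}/links/{link_id}/subscriptions",
                  json={"callbackUrl": callback_url, "events": ["statusChange"]})
    # The operator platform then POSTs to callback_url when status changes,
    # which fits naturally into event-driven application designs.
```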

It’s my view that there is no way to make traditional developer programs a success for operators because they rely on exposing those tired old traditional feature assets. If operators want something more from developers they have to contribute more for the developers to work with.

Is Fiber the Only Path to Universal Broadband? Is There Any Path?

OK, call me a cynic, but I think that we tend to obsess about singular solutions to complex problems. Maybe it’s just a human need, or maybe it’s easier these days to present a single story instead of an exploration of some complex set of requirements and even-more-complex set of possible solutions. In any event, one place we see this is in the broadband space in general, and in particular how subsidies to support underserved and unserved users might best be applied. The most popular answer seems to be “fiber”, but is it the right one?

Light Reading cites a study by a wireless broadband supplier that calls a pure-fiber approach into question. Is this just self-serving and opportunistic, or is there a real question of how far you can take fiber, even given government willingness to kick in subsidies? A systemic approach is needed, and I think that has to recognize that there is no single answer to how to promote quality broadband for most markets.

What is the best technology for broadband? The one with unlimited (essentially) potential capacity? Why, given that the average household cannot really justify more than roughly 100Mbps broadband? Why, when operators are already under considerable financial pressure delivering even the current broadband services to the current users? The best solution to any tech problem is the one that delivers the best return on investment, because without good ROI there isn’t any deployment. So what does that say about fiber broadband?

The ROI of broadband depends on the cost of serving a population and the revenue the served population could generate. Generally speaking, as I’ve said for decades, those things depend on demand density, which is roughly the dollar GDP a mile of infrastructure could pass in a positioning deployment. Demand density is largely a function of population density and household income, and it varies considerably depending on both these metrics.
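As a rough illustration of the demand-density idea, here's a toy calculation that follows that definition. The numbers are invented placeholders, not figures from my model.

```python
# Toy demand-density calculation: economic activity "passed" per route-mile
# of infrastructure. All inputs are made-up placeholders.

def demand_density(gdp_per_sq_mile: float, route_miles_per_sq_mile: float) -> float:
    """Dollar GDP passed per mile of infrastructure deployed."""
    return gdp_per_sq_mile / route_miles_per_sq_mile

dense_suburb = demand_density(gdp_per_sq_mile=120_000_000, route_miles_per_sq_mile=20)
rural_county = demand_density(gdp_per_sq_mile=2_000_000, route_miles_per_sq_mile=4)

print(f"suburb: ${dense_suburb:,.0f} per route-mile")
print(f"rural:  ${rural_county:,.0f} per route-mile")
```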

Could you trench a mile of fiber to serve one user? Hardly. Even half a mile per user would be an unbearable cost. The baseline strategy for fiber deployment is to "pass" households with baseline infrastructure that can then be connected to households that elect to purchase your service. If households are dense enough, that works, which means that fiber is really a community strategy, and probably a strategy that requires some population density within the target communities.

If you look at the US market, the top ten metro areas in population each have 5 million or more inhabitants. Dip down to areas with a population greater than 1 million and there are roughly 56. At the half-million-or-more level you have 113, and there are roughly 360 metro areas with a hundred thousand or more people. Only about 1,600 communities have a population greater than ten thousand, of which over a thousand have populations less than 25 thousand. The number of households in any community tends to be roughly 40% of the population.

Population density and household income correlate fairly well with willingness to pay for broadband services, and the household numbers of course correlate well with the cost of providing those services. When we have a concentrated population, a large number of people per square mile, the total revenue potential is higher per unit area than when population density is lower. When we have a large number of residential units per unit area, we have more efficient infrastructure to deliver broadband.

The reason this is important is that most fiber experts I talk with tell me that it is difficult to justify fiber broadband in communities less than ten thousand in population (4000 households), because the cost of deployment can’t be recovered fast enough from broadband subscription revenues. Roughly twenty percent of the US population live in smaller communities or unincorporated rural areas that would be difficult or impossible to serve with fiber except through a level of subsidies unlikely to secure public and political support.

There are alternative strategies to fiber to the home, of course. Fiber could be deployed to a node or to the curb, with another media then used to haul into each home. That strategy isn’t particularly useful unless you can reduce the cost of the home-haul significantly versus taking fiber the whole distance, of course, and that’s why fixed wireless access (FWA) has gained a lot of traction recently. With FWA you run fiber to an antenna site where wireless, including millimeter wave, can be used to reach homes out to a distance of one or two miles, depending on how fast you want to be and how many obstructions exist. Most operators I talk with will admit that the optimum technology strategy for broadband deployment would be a combination of fiber and FWA.

A square mile is 640 acres. A typical single-family residential subdivision has between four and six households per acre. Condos and clusters roughly double that, and apartment buildings generate ten to twenty times that density. FWA coverage, presuming a one-mile radius, would be roughly three square miles, or about 1,900 acres, which would equate to between seventy-five hundred and ten thousand single-family households. The broadband revenue from ten thousand subscribers, assuming an average of $50 per month for broadband, would be $500 thousand per month, from which we could allocate roughly $120 thousand to access infrastructure. Across that household range, the annualized access allocation works out to between $1.1 and $1.45 million per year. The cost of fiber infrastructure to support the population is estimated at between $5.7 and $7.5 million, so the rough payback period is five years, which is reasonable.

However, the cost of FWA for the same community of users is far less, estimated at only a sixth that of fiber, so we could support the high end of our community density with only $1.25 million in deployment costs, and the payback at the same revenue per user is then less than a year. Or, looking at it another way, we could support about 1,600 households (one sixth of ten thousand) with the same payback as we'd have with FTTH in a community of ten thousand units. That means we could address an additional ten thousand communities using FWA.
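Here's the arithmetic of the last two paragraphs as a quick script, using the roughly 24 percent access allocation implied by the $120 thousand-per-month figure. The inputs are the estimates above, not precise engineering costs.

```python
# Back-of-envelope payback check using the figures in the text above.

households = (7_500, 10_000)          # homes within one FWA radius
arpu_monthly = 50                     # $ per subscriber per month
access_share = 0.24                   # portion of revenue allocated to access
fiber_capex = (5_700_000, 7_500_000)  # estimated FTTH build cost ($)
fwa_capex_ratio = 1 / 6               # FWA build cost relative to fiber

for homes, capex in zip(households, fiber_capex):
    annual_access = homes * arpu_monthly * 12 * access_share
    print(f"{homes} homes: ${annual_access/1e6:.2f}M/yr toward access, "
          f"fiber payback ~{capex/annual_access:.1f} yrs, "
          f"FWA payback ~{capex*fwa_capex_ratio/annual_access:.1f} yrs")
```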

When household densities rise to roughly twenty-five per acre we reach a point where multi-story structures are essential to achieve that density, and these reduce the fiber cost while at the same time raising issues with per-household FWA because of obstruction by the structures themselves. These could be mitigated by having a single FWA antenna per building, of course. However, the number of communities that would fit these parameters is limited. Overall, FWA could increase the percentage of population covered by "quality" broadband from roughly 80% of the population to roughly 94%, according to my model. The remainder are likely beyond reach of any broadband technology except mobile cellular 5G and satellite.

I think that the article and study are correct in saying that the cost of supporting the underserved and unserved households with quality fiber broadband would be excessive. I also think it's reasonable to believe that some additional households in areas below the density needed to justify fiber could still be served via fiber from the node points where FWA towers are fed, so the actual number of households that could be served by fiber would be a bit over the 80% number. My model suggests that this could serve another 4 million households, leaving FWA to support perhaps ten to twelve million who couldn't be economically served via fiber. I also believe that of the 80% of areas that could support fiber, a quarter would be better served by FWA because ROI would be better and service competition likely higher.

To return to my opening point, there's surely a tendency to avoid complexity and complex topics, but that can hide important truths. Demand density has been an accurate predictor of broadband deployment economics for at least two decades, but using it would generate a lot of that apparently unwanted complexity. The problem is that it's simply impossible to ignore economic reality on an issue like this, and there are actually some good answers out there…not perfect ones, or always popular ones, but still better than we have now. I hope we can take the time to consider the questions in the right way, so we can find those answers.

Are We Countering Cloud Skepticism the Wrong Way?

Starting with the tech planning cycle that some enterprises start in late fall, and running into February of 2023, I got for the first time hard indications from enterprises that they were not only putting cloud projects under greater scrutiny, but even cutting back on spending for projects already approved and deployed. Public cloud revenue growth reported by the big providers has slowed, too. I've blogged about this, noting that enterprises have realized that 1) they aren't moving everything, or even actually moving much of anything, to the cloud, and 2) they've been developing cloud applications in such a way as to raise the risk of overspending.

This isn’t to say that the cloud is a universal waste of money. I’ve also noted that there is a real value proposition for public cloud computing, and it’s largely related to applications that have a highly bursty and irregular use of resources. If you self-host these kinds of applications, you have to size your server pool for the peaks, and that can leave a lot of server capacity wasted, capacity that costs you money. If you put the same applications in the cloud, the elasticity inherent in cloud services can let your hosting capacity rise and fall with load.

Some cloud proponents don’t think that’s enough. They believe that there’s a kind of intrinsic set of cloud benefits that should be considered even to the extent that they overcome situations where cloud costs are higher than data center hosting rather than lower. One article opens with a statement of this belief: “We thought cloud was all about cost savings, but that metric was wrong. We need to put a value on the agility and innovation cloud brings, not just dollars and cents.” Another asks “Why did you move to the cloud in the first place? Maybe you were thinking there would be cost savings. But even if you were wrong on that point, it’s the agility of the public cloud that has always been its primary value proposition.” Is it true that there’s more to the cloud than savings? What do enterprises themselves think, and of course what do I believe is the case?

Let’s start by exploring another quote from the first: “Cost savings is a ‘hard value’ with easy-to-measure and easy-to-define numbers. Business agility and speed to innovation are ‘soft values,’ which are hard to define and measure.” Implicitly, this means that the cloud offers a faster way for a business to use IT to address problems and opportunities. The theme here is that we’ve tried to justify moving things to the cloud by demonstrating the cloud is cheaper, when we should be looking at other justifications.

One problem is that we're not, as I said earlier, "moving" things to the cloud as much as we are doing things in the cloud we might otherwise have done within the data center. When you're looking to cut costs, you actually have to move something to do it, and I don't believe that in the main the cloud is really cheaper unless you have that elasticity of load I mentioned. So part of the problem of cloud justification may be linked to an invalid vision of what we're using the cloud for.

When people use the cloud to augment current applications, which is the dominant “hybrid cloud” model of today, what they’re doing is building an elastic, agile, front-end to their applications. In a very real sense, they’re accepting those “soft values” the article talks about. However, that hasn’t meant that they aren’t seeing a need to harden all that softness.

In the last six months, over 80% of enterprises told me that they were “now” requiring some quantification of benefits in order to approve a cloud project. Less than half said they’d required that a year ago. However, even a year ago almost 90% of enterprises said that they justified cloud adoption at least in part by savings versus deploying server/application resources in their own data centers. I could find only 6 enterprises who had ever suggested to me that they believed cloud adoption could be justified by agility and innovation speed alone, and none of my CFO contacts believed that. Since CFOs have to approve a project in most companies, that would suggest that those two values were indeed not accepted in the past. Do we have to assign a specific value, a dollar value, to both? My data suggests that we do. Can we?

We actually have been, and that’s what’s created the cloud-front-end hybrid-cloud model in the first place. Companies have been doing this for roughly four years, and when COVID and WFH hit they accelerated their cloud usage to extend application support to home and mobile workers. The fact is that for at least two years now, the majority of cloud adoption has been driven by a shift of applications toward a web-and-app delivery model. The current trend in the cloud, the trend that led to broader adoption in the last two years, is exploiting the soft-value model, but even that is running out of gas.

We don’t need to validate the soft-value model to gain cloud adoption; we did that already. What we have to do now is to fix the problems with the applications we developed, problems that likely evolved out of the fact that we didn’t impose hard targets for savings and thus overran costs. We’re doing that now, and it’s part of what’s creating the current downturn in cloud growth. The other part, as I’ve suggested, is that the initial soft-value push is running out of gas because the big impetus was WFH. We’re now agile enough for WFH, and WFH is going away.

Here’s a basic truth to consider. Almost all our business applications are designed to support office or “carpeted floor” personnel. Business agility and speed to innovation are more than speed to billing and receiving; in most cases a new product doesn’t really require any significant changes to core business systems. But that’s because what we’re calling “core business” is “core office” and the real business is off our carpeted floors, in warehouses, factories, and other often-dirty places. Truth be told, there’s a massive soft-value proposition, but it involves people we’re not even thinking about when we try to justify cloud spending.

There is about three hundred billion dollars' worth of productivity enhancements that could be addressed using IT, and as I pointed out in earlier blogs, the majority of this relates to "dirt floor" rather than "carpeted floor" workers, people who are directly involved in the production and movement of goods and the parts that create them. I believe these would be best addressed using a "digital-twin metaverse" model, a model that's designed to build a virtual representation of complex real-world industrial and transportation systems. These, because they're really edge applications and often involve movement over some distance, could be the next generation of cloud activity, creating agility where agility is really needed.

Could it be that the very place where cloud features matter the most is the place where we've been ignoring using IT to enhance worker productivity? Sure looks like it, and if we want those soft values to matter, we need to face the place where they matter the most.

The Telco-and-Cloud-Provider Partnership: Essential but Tricky

There are surely people out there who continue to believe that network operators, meaning telcos in particular, can catapult to profit growth on the back of traditional voice and connection services. There are also people who believe the earth is flat and that politics is a civilized tension between intelligent debaters. In assessment of profit opportunity, as in politics, it’s all about numbers, and the numbers aren’t on the side of the “people” in my example here. Omdia, who’s done some nice research, put out a report that’s cited (among other places) in TelecomTV. I don’t subscribe to other companies’ research, but the topic here is important and so I’ll comment on the summary material that’s widely available.

The core proposition in the report is one I agree with and I suspect that nearly every serious strategist in the industry would also agree with. Telcos can’t hunker down on basic connection services and hope that somehow they’ll be profitable again. That’s true. My research has shown that in the consumer space, there is no hope of profits from basic services. On the other hand, it also shows that there’s a significant profit to be gained in other service areas that telcos could reasonably hope to address. In fact, my numbers are a bit more optimistic than Omdia’s. Where they expect a bit over a $500 billion opportunity by 2027, my numbers suggest that there is actually almost $700 billion in consumer services and another almost $300 billion in business services to be had.

A second foundation principle of Omdia's position appears to be that the real opportunities lie so far above the network, above those basic services, that it's the cloud providers who naturally own them. Telcos should expect to partner with the cloud providers, and accept what another commentary on the report suggests might be only "a small fraction" of the revenue potential, because it would be "better than nothing."

The specific target areas the report suggests include digital music and streaming video, gaming, and smart-home services. It’s the latter that’s suggested to offer the greatest growth potential, and thus present the best opportunity for these telco/cloud-provider partnerships.

OK, I can buy a lot of this, at least to the extent that I agree there is an opportunity and there is a potential to exploit it via a public cloud relationship. However…I think that both the specifics of the opportunity and the specifics of the partnership would have to be considered, and above all that the mechanics, the technology, used to address the opportunity would be paramount in determining whether there was a useful telco opportunity to be had here.

Let’s look at two hypothetical partnerships, which we’ll call PA and PB. Let’s also say that both attempt to address the same opportunity. In PA, let’s assume we have one partner who has nearly all the assets needed to address the opportunity, and thus could really exploit the opportunity themselves. In the other PB partnership, let’s assume that there is at least a set of critical assets that still have to be developed, so neither party can really exploit the opportunity with what they have. Which partnership do you think affords a real balance of opportunity among the partners. Let’s then look at the specific target areas the report cites, and see whether they’re PA or PB opportunities.

OK, digital music. How many digital music services do we already have? Answer, too many to allow any to be highly profitable. What’s the key asset needed for the services? Answer, the music. Imagine a telco getting into this. They might try to market the digital music offering of current incumbents, but there is nothing other than sales/marketing they can contribute. Not only that, other types of business would actually have a better shot at sales/marketing in the space. I listen to one digital music source that’s bundled with another service, and I get another one subsidized by my credit card company. This is darn sure a PA partnership in my view; telcos would gain almost nothing from it.

Digital/streaming video is the same, or perhaps even worse. The essential element is the content. There are already many streaming services and they’re raising prices to try to compete, or they’re dependent on ad sales when competitors all have to scramble for the same (or often fewer) ad dollars. Telcos have already tried to market streaming video services with their broadband, and none of these ventures have taken off. Another PA. If there’s a PB lurking here, it lies in somehow personalizing and customizing both services, which could be an AI application.

Gaming? OK, here we have some PA and some PB flavor to our opportunity. On the one hand, gaming is a kind of content just like music and video, and has the same issue set. On the other hand, gaming in its massive multi-player form (MMPG) is dependent on network QoS to ensure a realistic experience. There is something here, a new dimension that not only is open to a telco to address, but that might be easier for the telco to address. Our first PB element! And is gaming a kind of metaverse? PB for sure if it is.

The same can be said for smart-home services. These services depend on devices, which are a kind of "content" and have a definite PA dimension. There is already a set of hosted services available, as OTT elements, to provide access to the devices from a website, app, or both. Another PA. However, there are service QoS dependencies to consider; if you're away and your home or local Internet is down, you can't access your devices. There's also a question of whether a "smart home" is really a special case of IoT, which might mean it's a "digital-twin metaverse" and a certain PB.

OK, without getting into the details of metaverse and AI technology, how would a telco play in this? There is a general answer to the question of making a partnership with a cloud provider work, and it was provided by a telco. It’s facilitating services. Forget marketing partnerships; telcos are notoriously bad marketers, and they don’t even operate near the top of the food chain in any of these service opportunities. You can’t sell something if you’re layers away from the buyer. Instead, what telcos have to do is facilitate, meaning build some lower-layer element that makes the retail experience better, cheaper, and easier to get into the marketplace.

Facilitation was proposed by AT&T, so it has operator credibility. However, facilitation requires two things operators aren’t exactly naturals at providing. The first thing is a sense of the retail experience being facilitated. You may not have the experience to sell a widget service, but you’d better understand widget-ness well if you expect to facilitate one. The second thing is a realistic model of the layered relationship you’re creating. Something is “facilitating” if it adds value, but it’s a valuable facilitation only if the cost of the facilitating service is reasonable given the retail price at the top, the distribution of contributed value through all the layers, and the cost to another party of replacing your facilitation with their own.
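
To make that layered-value test concrete, here’s a minimal sketch of the arithmetic; the service, the prices, and the value shares are all hypothetical assumptions of mine, not anything Omdia or AT&T has published.

```python
# A hypothetical sketch of the "valuable facilitation" test described above.
# All figures are illustrative assumptions, not real service economics.

def facilitation_is_viable(retail_price, value_share, facilitation_cost, replacement_cost):
    """Return True if buying the facilitating service beats replacing it.

    retail_price      -- what the end customer pays per month
    value_share       -- fraction of retail value the facilitation layer contributes (0..1)
    facilitation_cost -- what the telco charges the retail partner per month
    replacement_cost  -- what it would cost the partner per month to do the job itself
    """
    value_contributed = retail_price * value_share
    # The charge has to leave margin for the layers above it...
    priced_below_value = facilitation_cost < value_contributed
    # ...and has to be cheaper than the partner building its own replacement.
    cheaper_than_diy = facilitation_cost < replacement_cost
    return priced_below_value and cheaper_than_diy

# Example: a $20/month smart-home retail service where device/QoS facilitation
# contributes 30% of the value, the telco asks $5/month, and self-build would cost $7.
print(facilitation_is_viable(retail_price=20.0, value_share=0.30,
                             facilitation_cost=5.0, replacement_cost=7.0))  # True
```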

Omdia is right about the need for telcos to partner with OTTs, but I think it’s critical for telcos to frame the partnership so it doesn’t become parasitism instead. Connection services, and the telcos that sell them, are increasingly disintermediated from demand sources, because what connections are used for today is the delivery of experiences. It would be very easy, and very destructive to telco interests, for a partnership with public cloud providers to become a conduit for even more declines in telco revenue per bit. If that continues, the foundation of modern telecom could be threatened and some form of subsidization would become inevitable.

Is the Tech Dump the New Norm?

There doesn’t seem to be much good news for tech companies these days. The fact that PC sales are expected to have fallen sharply in the first quarter, with Apple estimated to have lost 40%, sure seems ominous. Is all of tech going to be under pressure? What’s behind this, and when will it end? Those are hard questions to answer, as we’ll see, but we’ll still try.

There are fundamentals issues with tech, of course. The sudden inflation we saw last year resulted in a global trend for central banks to raise interest rates. That tends to impact tech companies because many of them borrow significantly to finance growth. There were also supply chain problems that resulted in backlogs of orders, which obviously delays revenues. Higher interest rates and inflation also hit developing countries particularly hard, because their currencies weaken against the US dollar at the same time that inflation drives up prices.

Consumers are obviously pressured by all of this, and that contributes to a reduction in consumer spending on tech. Businesses are pressured because consumer spending pressure translates into profit pressure for them, and that puts their stock prices at risk. I believe that a lot of the Tech Dump of 2023, as one vendor friend of mine described it, can be traced back to the issue of stock prices.

Stock prices generally (keep that qualifier in mind, please!) track earnings, which are roughly revenues minus costs. If you want earnings to go up while revenues are going down, then costs have to go down even more. We’ve heard about the tech layoffs, and that’s one aspect of cost reduction. Another, of course, is reductions in other spending, including spending on capital equipment. Since the stuff that’s Company A’s capital purchasing is Company B’s sales, you can see how this can create one of those feedback loops.
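
To put rough numbers on that arithmetic, here’s a minimal sketch; the figures are made up purely to show how much cost has to come out when revenue shrinks but earnings are still expected to grow.

```python
# Illustrative arithmetic only: how much cost has to come out to grow earnings
# when revenue is falling. All numbers are made up for the example.

def required_cost_cut(revenue, costs, revenue_decline, earnings_growth_target):
    """Return the cost reduction (absolute and as a share of costs) needed to hit an earnings target."""
    earnings_now = revenue - costs
    new_revenue = revenue * (1 - revenue_decline)
    target_earnings = earnings_now * (1 + earnings_growth_target)
    new_costs = new_revenue - target_earnings
    cut = costs - new_costs
    return cut, cut / costs

# A company with $100M revenue and $80M costs ($20M earnings) facing a 5% revenue drop
# but wanting 5% earnings growth has to cut costs by $6M, or 7.5%.
cut, pct = required_cost_cut(100.0, 80.0, revenue_decline=0.05, earnings_growth_target=0.05)
print(f"cut ${cut:.1f}M, {pct:.1%} of costs")
```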

The potential for a destructive, self-reinforcing feedback loop in spending and cost cutting is exacerbated by short-selling hedge funds. Short sales of stocks, unlike traditional “investments” or “long” purchases, are designed to profit if the market goes down, but they can also force the markets down like any wave of selling. For the first time in my memory, we’re seeing investment companies actively promoting themselves as short sellers, and issuing reports to call out short-sale targets. The effect of this is to magnify any bad news, and I believe that much and perhaps most of the stock dump we’ve seen over the last year was created and sustained by short selling. When a stock goes down, whatever the reason, companies try to take cost management steps to boost the price again, and that often means cutting spending and staff.

Even the expense side of the business spending picture can be impacted. One good example is spending on the cloud, which recent reports show has slowed, at least in its rate of growth. On the fundamentals side, cloud spending is linked to business activity more than capital spending on gear is, so it responds quickly to a slowing of activity. On the technical side, many companies have realized that they built cloud applications the wrong way and are paying more for cloud services than they need to. Thus, they can cut back to reduce costs and help sustain their stock price.

What this all means is that there is a mixture of reasons why tech spending has fallen, and some of the big ones have little to do with the market appetite for tech products and services. The good news is that these non-demand reasons for spending pressure are relieved when the current economic angst passes. Since January, my model has consistently said that will be happening in May, and I think current financial news is consistent with that prediction. Many reports now say that the worst of inflation has passed, and that the Fed and other central banks are nearly at the end of their rate hikes. Nobody expects prices to go down much (if at all), but both consumers and businesses tend to react more to negative changes than to a steady state that’s worse than before.

There are also segments of the market that seem less impacted by these non-demand forces, and those segments have already outperformed tech in general. The most-impacted sectors of tech are the ones that rely on direct consumer purchases. Credit-card interest has been rising and inflation has increased prices, reducing disposable income and making consumers more concerned about their budgets. Next on the list are the sectors that support the consumer sectors. The least-impacted sectors are those that invest on a long depreciation cycle, such as network operators, and those that supply products typically deployed through long-standing evolutionary planning, like data center elements.

This explains why Apple suffered more from the downturn than, say, Cisco. Apple sells primarily to consumers, and Cisco sells to businesses to support capital plans that look ahead for half a decade. Perturbations in the market will obviously have less effect on the latter than on the former, which suggests that companies like Cisco will likely see less impact on revenues in their next earnings report.

All in all, I think the Tech Dump of 2023 (or of 2022 into 2023) will end, but that doesn’t mean that tech won’t still have issues, both in the short and long term. In the short term, it will take time for spending that was reduced or deferred to return, because it will take time for inflation and interest rate changes to percolate through the economy and impact stock prices. Nobody is going to push up spending till their stock recovers, which probably means until their revenue recovers. That means the same feedback loop that drove the dump will also delay full recovery.

In the long term, tech is likely to remain a target for short-sellers because tech stocks tend to price in a presumption of growth that’s often a bit optimistic. That makes it easier for short-sellers to start a run on a stock. There are also fundamentals issues; we continue to over-hype new technologies, and thus not only overvalue companies but also risk under-investing in things that could really be important, even critical, to the tech markets in the longer term. In the end, though, the markets are driven by companies who have significant earnings growth potential, and it’s hard to see that outside the tech space. So…tech may be down but it’s not out.

How Far Might Open Fiber Access Take Us?

There is no question that the most expensive part of networking is the access piece. Move inward and you quickly reach a point where hundreds of thousands of customers can be aggregated onto a single set of resources, but out in access-land it’s literally every home or business for itself. Not only that, selling a given user broadband access means having the connection available to exploit quickly, which means building out infrastructure to “pass” potential customers.

The hottest residential broadband concept in the US is fixed wireless access (FWA), which addresses the access cost problem by using RF to the user, and with at least optional self-installation. In mobile services, operators have long shared tower real estate and in some cases even backhaul. Now, we’re seeing growth in “open” fiber access, where a fiber player deploys infrastructure that’s then shared by multiple operators. Is this a concept that could solve the demand density problem for fiber? Is it the way of the future? Let’s take a look.

In traditional fiber networks, and in fact in “traditional” access networks overall, it’s customary for the service provider to deploy the access infrastructure exclusively for its own use. That means every competitor has to build out as though it were a green field, even though competitors in the same area may already have done so. Open fiber access changes this to something more like the shared-tower model of mobile services; a fiber provider deploys infrastructure and then leases access to multiple service providers. This doesn’t necessarily eliminate competitive overbuild, but it reduces the chance it would be necessary. It also lowers the barrier to market entry for new providers who may have special relationships with a set of broadband prospects they could exploit.

At this high level, open fiber access seems like a smart idea, and it might very well be just that. It might also be a total dead-end model, and it might even be a great idea in some places and a non-starter in others. The devil here is in the details, not only of the offering but of the geography.

If a given geography already has infrastructure that can support high-speed, high-quality broadband (which I’ll define as being at least 200/50 Mbps) then the value of open fiber access is limited because either fiber or a suitable alternative is already in place. The open fiber then becomes a competitor who’s overbuilding, which sort of defeats the reduce-overbuild argument’s value.

If there is no quality broadband option in an area, the question becomes one of demand density. A small community of twenty thousand, with perhaps five thousand households and several hundred business sites, might well not have a quality broadband provider today. A county spread over a thousand square miles might have the same population and also lack a quality provider. The first of our two targets would likely be a viable opportunity, provided household income was high enough to make the service profitable, but the second would simply require too much “pass cost” to even offer service, and the per-customer cost of fiber would be very high because of the separation of users.
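
To illustrate that separation-of-users point, here’s a rough sketch; the route miles and the per-mile build cost are assumptions I’ve picked purely to show the arithmetic, not modeled figures.

```python
# Rough demand-density comparison for the two hypothetical targets above.
# Route miles and per-mile costs are illustrative assumptions, not survey data.

def fiber_cost_per_household(households, route_miles, cost_per_mile):
    """Approximate build cost per household passed."""
    return (route_miles * cost_per_mile) / households

COST_PER_MILE = 60_000  # assumed blended aerial/buried cost per route mile

# Compact town: ~5,000 households reachable over ~150 route miles of plant.
town = fiber_cost_per_household(5_000, 150, COST_PER_MILE)

# Rural county: same households spread over ~1,000 square miles, perhaps 1,200 route miles.
county = fiber_cost_per_household(5_000, 1_200, COST_PER_MILE)

print(f"town: ${town:,.0f}/household, county: ${county:,.0f}/household")
# town: $1,800/household, county: $14,400/household
```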

OK, suppose that we are targeting that first community of under-served users. The next question is whether we can support the same community with our base-level 200/50 broadband using another technology, like FWA. In many cases that would be possible; what’s important is that the community be fairly concentrated (within a radius of perhaps two miles max) and that a small number (one or two) of node locations with towers could achieve line-of-sight to all the target users. If FWA works, it’s almost surely going to be cheaper than open fiber access, which means the latter would have a retail service cost disadvantage out of the box.

But here we also have to consider demand density, the economic value per mile of infrastructure based on available users and price tolerance. If demand density is high enough, then an alternative broadband option could still be profitable. Most of the areas where FWA is being deployed are already served by technologies like CATV, and where demand density is high enough it’s still profitable to deploy CATV. If you could reduce the cost of fiber through “federating” access across multiple operators, the net pass cost could be low enough to put fiber to the home (or curb) in the game.
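
Here’s a continuation of the earlier sketch showing what that federation could mean; the take rates and the number of sharing operators are, again, purely illustrative assumptions.

```python
# Illustrative sketch of "federating" access: the same pass cost is recovered
# across several retail operators instead of one. All inputs are assumptions.

def cost_per_subscriber(pass_cost_per_home, homes_passed, operators, take_rate_per_operator):
    """Build cost recovered per subscriber when the plant is shared."""
    total_build = pass_cost_per_home * homes_passed
    subscribers = homes_passed * take_rate_per_operator * operators
    return total_build / subscribers

# Single builder, 35% take rate: the pass cost per subscriber is high.
solo = cost_per_subscriber(1_800, 5_000, operators=1, take_rate_per_operator=0.35)

# Three operators sharing the plant, each signing 20% of homes (60% combined).
shared = cost_per_subscriber(1_800, 5_000, operators=3, take_rate_per_operator=0.20)

print(f"solo: ${solo:,.0f}/sub, shared: ${shared:,.0f}/sub")
# solo: $5,143/sub, shared: $3,000/sub
```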

The home/curb point is our next consideration. A positioning fiber deployment would “pass” homes, meaning it would make a node available at the curb, from which you could make connections. You still have to bring broadband into a home or business to sell broadband, and obviously you can’t do that until you actually sell a customer the service. When you do, how does the cost of that drop get paid? Does the service provider who did the deal pay it, or does the open fiber access operator do the job? If it’s the former, how do you account for the cost if the customer changes providers? What if the first provider elects to use a limited-capacity path from node to home? If it’s the latter, the open fiber access provider has to bear what are essentially installation costs. They also decide what the feed technology will be, which probably means it would be very high in capacity, as in fiber. That raises overall service costs, perhaps higher than some service providers targeting more budget-conscious users would accept.
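
To show why that drop-cost question matters, here’s a minimal sketch of the amortization arithmetic; the drop cost and tenure figures are hypothetical.

```python
# Illustrative sketch of the drop-cost question raised above: who recovers the
# install, and over how long? Figures are assumptions for the example only.

def monthly_drop_recovery(drop_cost, expected_tenure_months):
    """Surcharge per month needed to amortize the drop over expected tenure."""
    return drop_cost / expected_tenure_months

def stranded_cost(drop_cost, expected_tenure_months, actual_tenure_months):
    """Unrecovered drop cost if the customer leaves or switches providers early."""
    recovered = monthly_drop_recovery(drop_cost, expected_tenure_months) * actual_tenure_months
    return max(drop_cost - recovered, 0.0)

# A $600 fiber drop amortized over 36 months needs about $16.67/month; if the
# customer switches providers at month 18, $300 is stranded with whoever paid.
print(monthly_drop_recovery(600, 36))   # 16.67 (approximately)
print(stranded_cost(600, 36, 18))       # 300.0
```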

Then there’s the question of whether there’s a potential link between open fiber access and “government” fiber. Any level of government could decide to deploy fiber and make it available to broadband service providers. That’s already being done in a few places, and it might eventually open up whole communities with marginal demand density to high-speed fiber broadband availability.

All of these questions pale into insignificance when you consider the last one, which is “Why would an open fiber access provider not become a broadband service provider?” Access, remember, is the biggest cost of broadband overall, and the open access provider has already paid it. You could deploy and share at a wholesale rate, or you could deploy and keep all the money. What’s the smart choice? Unless you have a target geography that for some reason is easiest to address via a bunch of specialized providers, each with their own current customer relationship to exploit, keeping all the money seems the best option. Even if it’s not the first option taken, does the open fiber access provider’s potential entry into the retail service market hang over every wholesale relationship? “Eat thy customer” may not be a good adage to live by, but as new revenue opportunities disappear, the old rules of the food chain fall by the wayside.

There’s a corollary question too, which is “Haven’t we invented the CLEC model again?” I noted in an earlier blog that requiring big cloud providers to wholesale capacity would create the same sort of model regulators created when they broke up the Bell System and required local operators to wholesale access. That was supposed to be a competitive revolution, but all it really did was create an arbitrage model instead of a true facility-based deployment model. That could happen here too, but so far we’re seeing the open fiber access model bring fiber to places where it otherwise might not be deployed, and that’s a good thing.