Could Stablecoins Save Crypto?

Let’s face it, cryptocurrency is a mess. I’m sure nobody who reads my blog has any doubt that I’ve never been a fan of the concept. “Currency” that has nothing behind it is just a bubble-in-waiting. But what about the stablecoin concept? Suppose you pegged crypto to an asset? Could that solve the problem? That’s a big “Maybe”.

The framework for a stablecoin, as Wikipedia describes it, involves backing it with a reserve of a specific asset or asset class. That asset can be money (US dollars, for example), a commodity, a stock or fund, and so forth. The theory is that as long as the stablecoin represents a real asset, its value is tied to that asset. The holders of a stablecoin could demand redemption in the underlying asset if they thought the coin was in a bubble state, and the asset's value would set a floor price. It's a great idea, but there are some major caveats.

The first caveat is the reserve requirement. Suppose I mine a trillion dollars' worth of my stablecoin and back it with ten bucks. Obviously, if holders lose faith (which they should), there isn't nearly enough reserve to actually allow them to convert their stablecoin holdings into the underlying assets that "back" them. How much reserve do you need, and do you need to adjust reserve levels as the quantity of stablecoin those reserves back increases due to mining?
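
Just to make the arithmetic concrete, here's a tiny sketch (in Python, purely illustrative; the 1:1 peg, the coverage threshold, and the function names are my own assumptions, not anyone's actual mechanism) of what a reserve-coverage check tied to minting might look like.

```python
# Illustrative sketch only: a toy reserve-coverage check for a hypothetical
# stablecoin. The names and the 1:1 coverage target are assumptions, not a
# description of any real coin's mechanism.

def coverage_ratio(reserve_usd: float, coins_outstanding: float,
                   peg_usd_per_coin: float = 1.0) -> float:
    """Fraction of the outstanding coins' pegged value actually held in reserve."""
    liability = coins_outstanding * peg_usd_per_coin
    return reserve_usd / liability if liability else float("inf")

def can_mint(reserve_usd: float, coins_outstanding: float, new_coins: float,
             peg_usd_per_coin: float = 1.0, min_coverage: float = 1.0) -> bool:
    """Only allow minting if the reserve still covers every coin at the peg."""
    return coverage_ratio(reserve_usd, coins_outstanding + new_coins,
                          peg_usd_per_coin) >= min_coverage

# The extreme case from the text: ten dollars of reserve behind a trillion dollars of coins.
print(coverage_ratio(10.0, 1_000_000_000_000))      # ~1e-11, effectively no backing
print(can_mint(1_000_000.0, 900_000.0, 50_000.0))   # True: reserve still covers the peg
print(can_mint(1_000_000.0, 900_000.0, 200_000.0))  # False: minting would break coverage
```

The point isn't the code, it's that a rule like this has to be enforced somewhere, by someone who can be held accountable, or the "reserve" is just a marketing claim.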

Then there’s the question of how the reserves are held and accounted for. Suppose I back my stablecoin with what I purport to be US dollar holdings in the Last Bank of Under Foramia. Can we trust that custodian to certify reserve levels? Can we trust a promise to redeem even if they make one? Some proposed regulations would require a major-bank reserve system, but many in the crypto world object.

Suppose, though, that we have credible reserve levels and credible custodial facilities. We back our stablecoin with US dollars. The problem is that people buy crypto because they think it’s going to explode in value. Does the stablecoin appreciate faster than what backs it? If it does, then the reserve assertions become useless. If it doesn’t, why not stuff dollars in your mattress?

What if the stablecoin claims to have fixed backing but also invests the money in something that can actually appreciate? Let's suppose that we have dollar reserves and we promise to invest stablecoin funds in the S&P 500 index. OK, that might address some of these issues, but it raises the question of why you couldn't just buy an S&P 500 index fund. Could a stablecoin reserve of dollars have met the demand for redemption we might have seen in 2022? If it could, then a major chunk of fixed assets lies behind the coin, and that chunk can't be invested or it isn't really in reserve, so the upside of this model can't match the upside of the S&P.

But we're not done yet. Consider the question of the honesty of the stablecoin issuers and the exchanges that handle them. Every intermediary player in a model like this expects to make a profit, will have to make enforceable financial and legal commitments, and could be subject to hacking and other forms of fraud. If I hand you what I purport to be a dollar, you have only me to hold accountable if it isn't true. If that same dollar reaches you through a chain of a hundred people who must cooperate to pass it along, anyone along the way can defraud you, and there's a good chance you'll spend more than a buck figuring out who to sue or charge.

Blockchain security and distributed validation aren't the answer either. A stablecoin blockchain could be totally valid, but the underlying reserve might be worthless, which means that as soon as people start to doubt the value proposition the whole thing falls apart like the Dutch tulip bubble. Validation by acclamation isn't helpful here because the "value" is what needs to be validated, and even if a million crypto enthusiasts said they backed a given stablecoin, could you sue them all to recover your losses?

The issues with FTX have certainly impacted crypto's credibility, but there are two basic truths to contend with at this point. First, crypto has always been a major risk proposition because it's a natural bubble; value is set by buyer sentiment alone. Second, regulators should have taken action on this from the first, because trying to do the right thing now would create a major financial problem for many. Both points should be kept in mind as we consider stablecoins.

I don’t think stablecoins are really stable at all; they hide risk in a different and likely more effective way, but that only makes the risk worse. So, while it may be too late to get control of crypto overall, it’s perhaps not too late to get control of stablecoin, and to establish it as a kind of halfway point between the wild west of crypto and the at-least-familiar risks of Wall Street. How?

The regulatory measures that are being discussed are a part of the answer. We need to have stability in stablecoins, not just to preserve the meaning of the term but to prevent them from becoming a problem even greater than that of cryptocurrencies in general. The foundation of all currency, crypto or otherwise, is the full faith and credit of something whose faith and credit are credible to all.

The second thing we need to do is ensure that the blockchain process doesn't enshrine, or even tolerate, practices that could render all that credibility worthless. We have to protect conventional currency from counterfeiting, and we need to do the same with stablecoins. That doesn't just mean ensuring that a given token is valid, but also that tokens aren't created outside the framework of the backing that roots them. Reserves have to be maintained as needed, which means that we have to limit mining to the amount that the reserves cover.

All that’s fine, but it isn’t the big issue. That big issue is the why. Why are we creating a stablecoin? Just to demonstrate cryptocurrency and blockchain, or promote Web3 concepts? To be really stable it has to be a financial instrument that’s as trustworthy as a dollar or a stock certificate. So why not have physical currency or stock certificates? I know that “money” and “stocks” really exist more as an electronic record than as something real, but that only makes the “why” question more important.

Stablecoin can be a real value if it can present something old—trusted records of financial assets—in a more useful form. What utility are we seeing? A good answer to that will justify the other steps that need to be taken. If we can’t offer one, then we have to assume that the goal here is the same as it is for many new “investment vehicles”, and that’s to fulfill the financial markets’ ever-growing desire to create bubbles. We don’t need more of those.

Should Regulators Allow Telcos to Buy Each Other Up?

Most everyone in the networking industry has heard about the efforts by some European operators to get the EU to approve a plan that would require big tech to contribute to the cost of Internet infrastructure. I blogged about this earlier this week, in fact. Fewer know about another EU question, which is consolidation in the telecom industry. Light Reading did a piece on this, and if you think about it, the consolidation and subsidization topics are related.

The telecom industry has evolved more, perhaps, than any other piece of tech, and it’s likely due for some more evolution. Every step along the way has changed it, but one simple truth is that none of them, or even all in combination, have changed it enough.

A century and more ago, the big concerns were that the new world of telephony would be stalled either by the high "first cost" of getting service into place, or by a host of incompatible operators who would make universal calling an impossible dream. The result was either the establishment of a regulated monopoly or the creation of an arm of the government (the "postal, telegraph, and telephone" administration, or PTT) to handle the new service.

In the 1980s, we had a wave of deregulation and privatization that sort-of broke this model. I say "sort-of" because now the fear was that the giant telephony players would, if released into the free market, use their incumbency to exploit customers. They ended up having to wholesale elements of the infrastructure they had developed under regulatory protection (which had no meaningful effect) and being restricted in what services they could offer "above" the service of connectivity, which has had a very bad effect.

Roughly a decade ago, we started seeing issues in operator profit per bit. The revenue generated from connection services was falling and so was the cost of creating those services, but the latter was falling more slowly than the former. Early warnings of a loss of interest in investing in infrastructure proved premature because operators cut costs, but the primary area cut was the high-touch opex elements: a human operator, phone books, an information number to call, and prompt field service support. All these things kept the financial wolves at bay, but they combined with regulatory barriers to discourage operators from thinking the obvious: that you could improve profit per bit by raising revenues too.
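
To see how quickly that squeeze plays out, here's a little illustrative calculation; the starting values and decline rates are assumptions I've invented to show the shape of the problem, not operator data.

```python
# Illustrative sketch only: why profit per bit erodes when revenue per bit falls
# faster than cost per bit. Starting values and decline rates are invented.

revenue_per_bit = 1.00   # arbitrary index value in year 0
cost_per_bit = 0.70
revenue_decline = 0.10   # assume revenue per bit falls 10% per year
cost_decline = 0.06      # assume cost per bit falls only 6% per year

for year in range(6):
    margin = revenue_per_bit - cost_per_bit
    print(f"year {year}: revenue/bit {revenue_per_bit:.3f}, "
          f"cost/bit {cost_per_bit:.3f}, profit/bit {margin:.3f}")
    revenue_per_bit *= (1 - revenue_decline)
    cost_per_bit *= (1 - cost_decline)
```

Run the hypothetical numbers out five years and the margin shrinks from 0.30 to under 0.08; cost-cutting alone only slows the slide.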

Profit per bit needs improving, because public corporations, which is what most operators turned into, are responsible to their shareholders. They sustain their share price either with a promise of future growth or a promise of an attractive and stable dividend. Most telcos have long since passed the point where their basic connection services have much growth potential, and the benefits of cost-cutting are largely plucked too, which means that cash flow and dividend payouts are at risk. In the US, we’ve seen both AT&T and Verizon take major stock hits this year, and EU operators are asking for subsidization and consolidation relief.

What we’re seeing now in Europe, where demand density and competition levels are high, is a growing realization that something has to be done. One possible solution is to get the Internet, which is largely responsible for the profit-per-bit problem because of bill-and-keep policies, to adopt some settlement mechanism to help operators sustain infrastructure. Another is to consolidate, meaning let operators combine to increase efficiency. Both are hot potato issues. I’ve talked about the settlement issue earlier this week, so let’s look at consolidation.

Why could a smaller number of operators be helpful? The positive of consolidation is cost management. Multiple operators serving the same geography have to deploy multiple infrastructures, and staff multiple organizations. Consolidation reduces the burdens of parallelism in both cases. However, consolidation also limits competition for the obvious reason that a big competitor could buy up others and end up as a virtual monopoly. Competition is seen as essential in ensuring consumers aren’t victimized on service pricing.

Competition doesn't always work, though. Multiple players and regulatory policies don't create competition; opportunity does. If profit per bit is under pressure, the opportunities are limited. Small competitors, the very players who are logically acquisition targets, are the ones most at risk if the markets get marginal. Add in higher global interest rates and you can almost bet that small operators will have a hard time. So do we let them go out of business, which means they're not competitors any longer, or do we let somebody buy them, which also removes them as competitors? It seems that competition, in a tough economy, isn't really one of the options. It also seems to me that having a player bought out is better than having it fail.

It’s very possible that the EU’s position on consolidation is a factor in encouraging them to support some form of subsidization. The only sure mechanism for keeping smaller competitors in business and independent is to ensure that they are at least profitable, and the signs industry-wide say that’s going to be challenging without some new source of revenue. However, as I pointed out in my blog earlier this week, subsidization needs some mechanism for settlement, and coming up with the right approach, or even a workable approach, could be a significant challenge. If the EU tries and fails, you can be sure that the players it’s trying to save from acquisition/consolidation will be among the first casualties.

Couldn't anti-trust actions and sensible regulations applied to proposed M&A address the issue of competition? Generally, studies and modeling have suggested that an optimum market really needs only three or four competitors, and my own modeling suggests that even one broad-based wireline competitor in a given market area is difficult to sustain because of the high cost of infrastructure. The problem is that the nature of competition and opportunity depends on demand density at the local level.

Imagine a town of five thousand people sitting in a very rural area that has perhaps another five thousand scattered within a ten-mile radius of the town. Is there an opportunity to deploy competitive infrastructure? Not if you have to support roughly three hundred square miles of geography to serve ten thousand people, but if you could focus on the town, it's likely there is something to work with. In market geographies where settlement is fairly dense, there would be more places where competition could work than in areas where settlement is sparse, so one policy can't fit all.
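
A quick back-of-the-envelope calculation makes the difference stark; the two-square-mile town footprint is my own assumption, used only to illustrate the point.

```python
# Illustrative sketch only, using the hypothetical town from the text: 5,000
# people in the town itself and another 5,000 scattered within a 10-mile radius.
# The town footprint (2 square miles) is an assumption for illustration.
import math

radius_miles = 10
total_area = math.pi * radius_miles ** 2     # roughly 314 square miles overall
town_area = 2                                # assumed compact town footprint
rural_area = total_area - town_area

town_density = 5_000 / town_area             # about 2,500 people per square mile
rural_density = 5_000 / rural_area           # about 16 people per square mile
overall_density = 10_000 / total_area        # about 32 people per square mile

print(f"overall: {overall_density:.0f}/sq mi, town: {town_density:.0f}/sq mi, "
      f"rural: {rural_density:.0f}/sq mi")
```

The overall market looks hopeless at roughly 32 people per square mile, but the town pocket, at roughly 2,500, might support an overbuild. That's why a single policy on consolidation can't fit every geography.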

We can see examples of this in the US, which has vastly different demand densities across its national geography. Light Reading also reported that T-Mobile is looking at the possibility of deploying fiber to support about four million subscribers. The specific locations being considered aren't described, but I'd bet that they are the same sort of places that Google's fiber investments have targeted: pockets of opportunity in a wider area where opportunity is limited. It may be that T-Mobile, which offers FWA as an alternative to wireline, spotted pockets of local demand density that could justify fiber as it planned out FWA, which itself needs a reasonable level of demand density to operate profitably. This demonstrates that multiple operators may not even be directly competitive while serving the same "market" if they target specialized local conditions. So, as was the case with big-tech subsidies, this is a complex problem for regulators to deal with.

But speaking of demand density, the fact that EU operators are pushing for regulators to offer some innovative changes in policy to ensure their profitability is a bad sign. EU demand density is four or five times that of the US and ten times that of Australia or Canada. Infrastructure should be more profitable in high-density areas, which could mean that competitive overbuild is truly having an impact. You could make a case for consolidation, then.

The question we're left with is whether we're trying too hard to make connection services profitable enough to sustain not only a major incumbent but also competitive rivals. There's no differentiation to featureless bits. It's interesting that operators haven't demanded regulators lift restrictions on offering advanced services, restrictions that apply to many of the big players. Advanced services, services beyond connection, are the long-term hope of avoiding financial challenges, and an enlightened strategy there could eliminate the need for subsidies and reduce the pressure for consolidation.

But it's also interesting that where there are mechanisms that would enable operators to offer advanced services, they've either held back or they've tried and failed. I still believe that the best solution to the problem of profit per bit is to encourage operators to participate more in higher-level services that are differentiable. I'm a champion of AT&T's notion of creating what are essentially wholesale service elements, "facilitating services", that would be offered to OTTs to build from, but would likely eventually form the foundation for some of AT&T's own higher-level services. These could be the long-term solution to everyone's problems.

That makes the final, and perhaps biggest, question whether any form of short-term subsidization or consolidation relief could actually hurt longer-term operator stability. Can you make a commodity business of bit-pushing a success without ever-increasing financial relief in some form? I have my doubts, and if that short-term relief encourages operators to believe they can stay forever in their connection-services comfort zone, then it’s a bad idea.

You can't have something fundamental like the telecom/ISP space fail. It's possible that the operators are crying wolf here; they've been talking about profit-per-bit declines since at least 2012. It's also possible that there are basic issues that need to be addressed, and that they're trying to give regulators and governments fair warning. It's certain that the operators need to follow an old motto I trademarked with ExperiaSphere: "Think Outside the Bit".

The Role of Metro in Networking’s Future

The biggest question for service provider networking in 2023 may never have been asked. That question is “What role will metro play, and what architecture will support that role?” The answer will not only determine what happens to the revenues of network operators, but also how each network equipment vendor will fare, and whether upstart players like DriveNets will really change the market for routers. It will also determine just how important non-router elements are to the network.

A service provider network is, broadly speaking, a hierarchy that's made up of access edge elements and a series of concentration elements that end up creating the "core" network. In the core, the emphasis is on economies of transport; you need big iron that can push a lot of packets very fast and very cheaply. At the edge, the specific model will depend on just what kind of service the service provider is offering. An operator who offers only business broadband would look very different from one that's supporting mobile networks, and those offering consumer wireline broadband would look even more different. At the edge, whatever the model is, there's still a need to push packets economically, but since customers connect there, any personalization in customer care and service features is easiest to inject there.

Easiest perhaps, but not necessarily most economical. The problem with the access edge is that there are too many places and too few customers per access site to ensure economy of scale, particularly with hosted elements of a service. Edge computing is most economical if you can pull the hosting points inward a bit; not so far that you have too much traffic to analyze but far enough that you can justify reasonable hosting economies. The sweet spot is the “metro”.

Metro networking traditionally means the point where access elements are connected for aggregation. It also means the network piece that resides in the commercial center of a given market geography, and that combination is ideal if you want to add value by supporting real-world activities like going to work, shopping and eating out, and so forth. Collaboration and connection are most easily supported by technology that can be used by all the likely partners, and we tend to relate more with the people and places we can touch.

Metro is where we jump from features to transport, access to core, a lot of little boxes to a few big boxes, a pure connectivity mission to a feature-and-service mission. Real customer management focuses there too. Best of all, there are still a lot of metro boxes. Globally, there are thousands of metro locations, perhaps even tens of thousands, and my model has consistently said that if we were to carry edge-enabling applications far enough, we could generate a hundred thousand new hosting points, each in a metro location. Metro is thus the place where router meets switch, where connectivity meets applications, and where data centers meet networks.

And that point, that last point, is critical. There are three possible relationships between data centers and networks, and each drives metro differently, creating different requirements and opportunities…and different winners and losers.

The first possibility is that no real features are injected at the metro point at all, so any hosting there is confined to what we could call “virtual function” support. This would have a minimal impact on both services and on metro requirements, and thus would mean that metro would retain its current focus on traffic aggregation and being an on-ramp to the core. This scenario would mean little change could be expected in either metro requirements or vendor incumbencies.

The second possibility is that there are feature connections made in the metro, but the features are hosted in the public cloud, likely via relationships established between operators and cloud providers. If this is the case, then it’s likely that these features would be delivered either via a single (redundant) trunk link or integrated with the service above or outside the network, in the user’s device or via a high-level service API that the cloud provider hosts. This would increase the management complexity, but not radically change the equipment requirements.

The final possibility is that there would be functional integration between hosted features and router features injected at the metro level. This could come about either because operators directly invested in “carrier cloud” or partnered with cloud providers in a way that required tight coupling, likely because operators provided facilities and local connectivity to cloud partners. In this case, metro is the focus of future service value add overall.

I believe that AT&T’s approach to “facilitating services” is the right answer for operators. I believe that approach will require tight coupling between hosting and network, and I also believe that operators will inevitably build out carrier cloud on their own, at least in the areas where they have a major customer footprint. I don’t think that the first ships-in-the-night model will serve operator needs to boost revenue per bit, and I think the second strategy of a kiss-through-cellophane interface is only a transitional approach.

Presuming I’m right (which, of course, I’d presume), then the architecture of metro has to change radically, creating what’s much more like a large data center network with a bigger-than-usual transit routing element that bridges between access, core, and data center networks. Given the sheer number of metro areas to network, this radical change could end up impacting most operator spending on network equipment.

Which raises the question of just what the new architecture would look like. Obviously it has to be something that’s more fabric-centric, because the assembly of features into services can’t be allowed to generate a lot of latency. Obviously that means that it’s going to be very switch-like. However, it’s also still an aggregation element, and so it has to have that capability as well. Finally, it’s likely that proximate metro centers would network with each other directly rather than through the core, because these clusters of metro create what’s actually an economic unit, and economic units are the foundation of application opportunities, particularly ones based on a broader (and more sensible) conception of the metaverse.
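
To put a rough number on the latency argument, here's a trivial illustrative sketch; the per-hop figures are assumptions rather than measurements, but they show why feature assembly wants a flat, data-center-like fabric rather than long aggregation paths.

```python
# Illustrative sketch only: latency budget consumed when a service is assembled
# from several hosted features. Hop counts and per-hop delays are assumptions.

def service_latency_ms(feature_hops: int, per_hop_ms: float) -> float:
    """Round-trip latency added by chaining features across a fabric."""
    return 2 * feature_hops * per_hop_ms

# A service stitched together from four hosted features:
print(service_latency_ms(4, 0.5))   # 4 ms inside a flat metro fabric
print(service_latency_ms(4, 5.0))   # 40 ms if each feature hop has to transit farther
```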

Both major vendors (Cisco and Juniper) and startups (DriveNets) could build this sort of thing with their current product sets. It's therefore going to be a question of positioning the assets properly, and that could be done very, very quickly. DriveNets and Juniper have taken steps already, and operators tell me that Cisco has been talking about metro at the sales level for about six months, though their marketing/positioning is still lagging a bit. Broadcom, with its position in the white-box and fabric space and bolstered by VMware's hosting, virtualization, and virtual networking, might also take a run at things.

If the broader economic issues that derailed 2022 pass, as I think they will, then 2023 may be the year when this new vision solidifies.

Should Big Tech Help Telcos Build Capacity?

Should big tech, meaning the companies whose business is to deliver content over the Internet, pay for part of the cost of capacity upgrades? That’s not an academic question, because that’s what EU operators have proposed and what the EU seems to be seriously considering. The issue is tangled with broader questions of “net neutrality”, politics, public policy, and a lot of other things. It’s also been around for a long time, without any resolution. What might happen now if the EU agrees with its telcos?

The most important thing to watch in the EU decision here is less the question of whether they’ll accept the big-tech-contributes notion than how the settlement would be implemented. There are many views on this, and many ways that the idea of settlement might work. It might be a positive revolution for the space, or it might pose almost-insurmountable challenges for all.

All the telcos I interact with tell me that they believe that big tech should indeed make some contribution to cover the infrastructure costs associated with creating their conduit to their users. They point out that up until the Internet exploded, services always involved settlement among the operators who participated. If you made a phone call, you paid for making the call, and your operator then paid a termination charge to the operator who completed it. Business data services involved settlement when traffic crossed a “network-to-network interface” or NNI. The majority of operators think that some form of settlement is the best solution, given that it’s been proven in past practices.

Big tech, of course, doesn’t agree, and many Internet and “net neutrality” advocates feel the same. Not only that, the firms who are most impacted by a settlement move are the same firms who are currently under pressure because some or most of their revenues come from advertising, whose credibility as a revenue source is currently in serious question. Many think that big tech is reeling already, and that settlement costs would only add to their angst.

Would they? Would operators even benefit? The answer lies in the basis for the settlement. Past services that relied on settlement were all connection-oriented, voice calls being an example, but Internet-based activity would almost have to be settled based on traffic exchange: a fee per packet or a fee for total bytes transferred. However, unless regulators took the step of defining a formula for applying traffic-based settlement, it's likely that ISPs would negotiate with big tech, perhaps defining settlement based on multiple factors, including the number of users.

A terabyte of data delivered to a single user would represent less benefit to big-tech firms that rely on ad sponsorship or are looking for a large addressable market than a terabyte divided across a hundred users. In fact, it's very possible that ad-sponsored firms would want to settle with operators/ISPs based on the number of potential users the latter could provide.
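
Here's a simple illustrative sketch of how the settlement basis changes the bill; the rates, volumes, and user counts are invented for the example and aren't proposals from any party.

```python
# Illustrative sketch only: how the settlement basis changes what a content
# source owes an ISP. Rates, traffic volumes, and user counts are invented.

def settlement_by_traffic(terabytes: float, rate_per_tb: float) -> float:
    """Pure traffic-based settlement: pay for bytes, regardless of audience."""
    return terabytes * rate_per_tb

def settlement_blended(terabytes: float, users_reached: int,
                       rate_per_tb: float, rate_per_user: float) -> float:
    """A hypothetical blend of traffic-based and audience-based settlement."""
    return terabytes * rate_per_tb + users_reached * rate_per_user

# The same terabyte, delivered to very different audiences:
print(settlement_by_traffic(1.0, rate_per_tb=2.0))                        # 2.00 either way
print(settlement_blended(1.0, 1, rate_per_tb=2.0, rate_per_user=0.05))    # 2.05
print(settlement_blended(1.0, 100, rate_per_tb=2.0, rate_per_user=0.05))  # 7.00
```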

The settlement mechanism is important for three reasons. First, it could distort the entire Internet market if it's not fair and relatively immune from being gamed by those who want to avoid payments. Second, the mechanism for settlement would likely determine whether the big-tech-contributes notion spreads beyond the EU if it's approved there. Third, the settlement mechanism could affect the stock prices of both big-tech companies and operators.

I think that a “fair” settlement strategy would likely be seen by both big tech and the ISP community as acceptable, perhaps preferable to either risking loss of investment in Internet access or regulatory intervention. Thus, it would likely spread. That addresses our second reason why the mechanism is important, so let’s look at the other two.

If we assumed no regulatory intervention, free-market principles would likely result in a strike price for subsidies that both parties could accept. The problem is that this only works if either party can simply refuse to participate when it doesn't believe things are fair. That could mean that an ISP who didn't want to accept the offered settlement for carrying Brand X Video's content could simply refuse to deliver it. We have a similar risk today with the streaming TV aggregators and the networks (there have been threatened or even actual blackouts of networks for some users because agreement wasn't reached quickly), but public outcry would be sharp if one of the big-tech players, like Meta, suddenly became unavailable.

That illustrates the big problem the EU will face in the settlement model(s) it accepts. If you set the policies in law, you have to take responsibility for whether the parties are unequally impacted, which could make things worse. If you don’t, then you have to establish how the settlement agreements would be reached without unreasonable impact to the public.

There are technical issues that impact all of this too. One is the impact of content caching, meaning the use of content delivery networks (CDNs). A content provider pays a CDN operator for caching video, for example. Does the CDN operator settle with the ISPs, or does the content provider? If an ISP decides to be its own CDN, does that mean there’s already a form of settlement if the content provider caches with the ISP? In any CDN situation, what counts as settlement traffic?

Another question is whether only some traffic should be subject to settlement. Google has a search engine many use. Google caches some pages, too. Should searches be free from settlement? Should search results, if clicked on, require that the content owner settle, or should Google settle? If Google does cache content, does the traffic it caches get settled by the content owner or Google? And, of course, if some traffic is immune from settlement charges, how do we identify traffic types to prevent gaming the process?

There are a lot of questions here, but the first one to answer is whether the EU goes along. It’s hard to get a reading on something like this, but my friends tell me that they are leaning toward an approval. It may be just a measure to study the settlement options, but something is likely to come out of it. Will it be good for the industry?

Most likely it will cause some convulsions. Any hope of settlement subsidies will make the operators more reluctant to step outside their safe zones, particularly if their stocks gain on the news. The devil will be in the details, and there is a real risk that a bad model for settlement would hurt both operators and the big-tech firms, contributing to a problem that’s hitting tech investment overall. While the EU has, historically, been fairly smart with respect to regulatory policy, this issue is so hot, so critical, that you’d be justified in hoping that they decide not to intervene at all.

The Lessons of October

The biggest question the month of October raised, for most of us at least, is what’s happening to tech. The answer, of course, is that a lot is happening, and most of that is what’s been predictable from the first. All hype waves hit the beach, and in October we’ve seen a lot of beaching. More is to come, so is tech doomed? The good news is that tech might emerge healthier than ever.

I have to admit being frustrated by the way that the tech media and even the financial media have treated the sector over the last couple of decades. Tech media's reason is simple; "news" means "novelty", and so there's positive reinforcement to cover things in a way that sensationalizes them, simply because people are more likely to click on an attractive exaggeration (or lie, if you prefer) than on the objective truth. The financial markets' reason is also simple; you make money on bubbles, so stories that justify them are going to be promoted and expanded upon. We're not going to change either tendency, so we have to accept that there will be turmoil now and then, and this is one of those turmoil times.

The issue of hype is pervasive and we can’t examine all its impacts, but there’s also more to this than just hype. To find out just what’s going on, let’s focus on three areas in particular. Those are 5G, the cloud, and the metaverse. You can apply the lessons of those spaces to all other spaces, and each of these spaces can teach us something about the future.

Why promote sunrise? It's going to happen anyway. 5G is also inevitable, in all its various forms and with all its components. The need for improved mobile broadband drives a requirement to make each cell capable of not only higher speeds per user but also higher capacity for all the users being served there. People want to do video, HD video, on their phones, and operators who can't support that would suffer. Thus, 5G deploys, and there should never have been a question of that.

The problem is that if we say that 5G is going to prepare for more customers and will help sustain and even lower overall mobile broadband costs, nobody reads the articles. If we claim it will revolutionize networking, change how we live, they’ll eat it up. You can guess where this leads us, but to make it explicit, I’m saying that there was never any real reason to believe that simply deploying 5G could raise overall mobile broadband revenue levels. There still isn’t, and so we’re already seeing stories about 5.5G and 6G. Go far enough into the future and you can make any claims you like and keep pushing them until the future becomes the present. Then move on to another claim. Expect nothing revolutionary to come of any of this.

What’s missing in 5G is some specific mission that people would pay for. Even if 5G raises per-user broadband speed, the difference isn’t noticeable for most people because video is the big consumer of bandwidth and limited display capabilities on mobile devices make it nearly impossible to see differences in delivery rate once you can keep up with the frame changes. We can say that 5G is a victim of its own hype, because everyone involved took the easy way out to get publicity and never faced the need to develop a new mission, which really means a new set of mobile applications. It’s what you do with 5G that makes it valuable, and that’s our first lesson.

Now, the cloud. Anyone with a good statistics program could have found that most corporate data centers could achieve economies of scale within a couple percent of what cloud providers could achieve. Thus, since cloud providers expect to earn a profit on services, simple replacement of a data center server with a cloud server couldn’t possibly make financial sense. We’re seeing all sorts of stories in all sorts of places, tech and financial, that are now questioning the economics of the cloud.

But does this mean the whole cloud story was a vast corporate error? No, it means that we didn’t take the time to understand what was really driving the cloud. Moving everything to the cloud could never make sense. What does make sense is to use the cloud’s elastic resources model to address applications that have highly variable demand. Those applications can’t be efficiently supported in the data center because sizing compute resources for the peaks would reduce efficiency too much.
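
A toy cost comparison shows the point; every number here is an assumption I've made up for illustration, not real cloud or data center pricing.

```python
# Illustrative sketch only: why highly variable demand favors elastic cloud
# capacity over peak-sized data center capacity. All figures are assumptions.

hours = 24
load = [10] * 20 + [100] * 4        # assumed demand in "servers needed" per hour: quiet day, 4-hour peak

dc_cost_per_server_hour = 0.08      # assumed fully loaded on-prem cost
cloud_cost_per_server_hour = 0.12   # assumed cloud premium over on-prem

# The data center must be sized for the peak, whether or not it's being used.
dc_cost = max(load) * hours * dc_cost_per_server_hour
# Cloud capacity can (in principle) track the actual hourly demand.
cloud_cost = sum(load) * cloud_cost_per_server_hour

print(f"peak-sized data center: ${dc_cost:.2f}/day, elastic cloud: ${cloud_cost:.2f}/day")
```

With these made-up numbers the peak-sized data center runs $192 a day against $72 for the cloud, even though the cloud's hourly rate is higher. Flatten the load and the data center wins instead; elasticity, not raw price, is the cloud's real advantage.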

This sort of variable-demand mission is characteristic of the user front-end piece of almost all applications. The cloud has deployed because of that, because we've become dependent on online information, ordering, and support. But what happens as we use up those applications, as we deploy the cloud for a larger and larger percentage of the applications that justify it? Answer: cloud growth slows, which is what we're seeing and what's now being acknowledged by Wall Street.

The solution? It's the same as the solution to the 5G challenge: applications. If we want to see more cloud, we need to see more applications whose front-end needs justify the cloud's specific features. That means we have to find a different mission, a broader mission, for hosted software in our businesses and our lives. I believe that this could have been started two decades ago, because I believe that the signs of the future were visible, even clear, then. We didn't start it then, and until we do we're going to have to face the fact that hype waves break eventually.

To face the future effectively, we can’t rely on being able to define every possible new application. What we can do is to define the application model that most of these new applications would be based on, and I believe that model is the model of the “digital twin.” Yes, I know that some people I like and respect have recently deprecated the concept on LinkedIn, but I persist for a reason.

Future applications that drive accelerated investment in tech will have to follow the approach of past applications and move information technology closer to our lives. The next step in doing that is to start modeling our subjective reality in digital terms, and using that model to offer us different tools to face the real world. In order to do that, we have to represent elements of the real world in digital form, so we can use that representation to help realize people's individual roles and individual needs.

This is the meat of a real “metaverse” concept, or rather it should be. Any “metaverse” is an alternate reality, and any alternate reality has to connect in some way to the real world to make it believable. That means there has to be at least a limited digital twinning, a synchronization. The metaverse model is the general approach to that, period. Whatever drives 5G or the cloud in application terms, the model is going to end up having digital twinning and metaverse elements.
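
If you want to picture what that minimal digital twinning might look like in software, here's a rough sketch; the class and field names are hypothetical, and a real implementation would obviously be far richer.

```python
# Illustrative sketch only: a minimal "digital twin" kept in sync with
# real-world observations, which an application (or a metaverse view) reads
# instead of polling the real world directly. All names are hypothetical.
from dataclasses import dataclass, field
import time

@dataclass
class DigitalTwin:
    real_world_id: str                      # the thing being twinned: a room, a machine, a person's calendar
    state: dict = field(default_factory=dict)
    last_sync: float = 0.0

    def synchronize(self, observation: dict) -> None:
        """Fold a new real-world observation into the twin's state."""
        self.state.update(observation)
        self.last_sync = time.time()

    def view(self, keys: list) -> dict:
        """What an application or avatar actually consumes: a selective view."""
        return {k: self.state.get(k) for k in keys}

# A sensor update arrives; a client reads only the slice of reality it needs.
office = DigitalTwin("conference-room-3")
office.synchronize({"occupancy": 4, "temperature_c": 21.5, "projector": "on"})
print(office.view(["occupancy", "projector"]))   # {'occupancy': 4, 'projector': 'on'}
```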

Why, then, is Meta such a mess? We could personalize and say “Zuckerberg”. We could generalize and say “tunnel vision”. Both would be accurate. Zuckerberg is no Steve Jobs, no person of enviable vision. He may also be why such a person doesn’t really have much sway at Facebook, and the lack of insightful leadership has limited if not doomed their chances of making the metaverse a success. But the real problem is that we, as an industry, haven’t looked at the metaverse model at all. We postulate the result in the social-media space, but without looking at the pieces that have to make up and support that result.

Meta’s metaverse is messed up because it doesn’t have a model, only a goal. We want virtual reality to be the basis of a new social network paradigm that will somehow erase TikTok’s (and others’) gains and make us king again. We don’t know how, exactly, to do that, but we need it to happen so we’ll throw billions against the wall and hope somebody figures it out so we can buy it or (better yet) copy it.

Why, though, are we trying to do metaverse without a model? Because the model is complicated and the broad market has no appetite for complicated things. Yes, hype is a part of it, but the root of it all lies in a pervasive attitude of complexity avoidance. We want things simple, digested into a couple hundred words, and something like a new application model to drive 5G, the cloud, and the metaverse just can’t be explained that way…not to the public or to Wall Street. Imagine Meta talking about what would really be involved in the metaverse (assuming they know) to Wall Street analysts or reporters. Even software architects would likely be challenged by the issue, and where can they go to get educated?

Would-be metaverse innovators may even have issues getting startup funding, because venture capital is a bubble-and-hype game too. Meta's problems are opportunities because they mean that the real metaverse is yet to be described. However, what VCs want is for a metaverse hype wave to drive both IPOs and acquisitions, and Meta is making that very unlikely. The question for the market is whether a VC, a startup, or another tech giant (like Google or Microsoft) will see the right approach and implement it. IBM, hardly the poster child for innovation in the mass market, gave us the PC that drove the last real tech revolution. Might they give us the tools for the next? Even the telcos might have a role, because it seems certain that edge computing on a large scale will be needed, and "telco cloud" could give us that.

Will operators' reluctance to move beyond simple connection services keep them out of the cloud? Whether it does or not likely depends on AT&T's notion of "facilitating services", a model where an operator creates not the top-level retail information/content services but the critical components they need. Because operators have a lower internal rate of return, they can invest at lower ROIs than the OTTs and big tech can, and this means they can create a role for themselves at the boundary between the future, which requires innovation, and the connection business, which can never again be really profitable.

The good news is that, behind the scenes and with generally low levels of efficiency, we’re making incremental progress in framing the kind of software architecture that we need. We don’t know it’s a unified architecture yet, but if we hearken back to the story of the elephant behind the screen, we understand that one grope may not solve the whole problem of identification, but successive gropes will eventually let us get it right. We had nothing to support a digital-twin-metaverse concept a decade ago, and now what we lack is less the elements than the understanding of how to assemble them.

Somebody is going to get this right. They may become the next Apple, the next Facebook, the next Amazon, and even all of the above. Who will it be? We’ll probably get our first glimpse of the glorious elephant whole some time late next year.