What’s Meta’s Future, and That of the Metaverse?

Facebook/Meta booted its quarter, missing estimates by a mile and saying some things that troubled investors. It’s a rare disappointment for the company, and why it happened and what Meta will do could create a major shift in the industry. Not surprisingly, the “what Meta will do” is likely to focus on the metaverse, and that alone has major ramifications, but the metaverse is not the end of it.

Companies, in my experience, rarely confront their real issues head-on during earnings calls. They’ll say stuff that’s nibbling on the edge of real disclosure, but calculated to present the quarter in the best possible light. What I believe Meta is facing, in terms of its revenue growth and future, comes down to three factors.

The first of these is the fundamental weakness of depending on advertising as a revenue source. I’ve talked about the problems of ad sponsorship in prior blogs, but what it comes down to is that ad spending isn’t growing much, if at all, and so nobody can really hope to “win” without someone else losing. Meta’s revenue stream is inherently vulnerable, which means that they need to somehow escape what’s effectively total ad dependence.

The second factor is that Meta is built around a community concept whose upside is also limited. Social networking unquestionably exploded because of its novelty and its ability to extend engagements that were previously limited by the need to personally interact. It’s not novel any longer, and more and more users have hit the wall in terms of how much they can use the platform without interfering with their real-world life. This issue is particularly important for youth, and Meta cited that on their call. To get their community back, they need to get their novelty back.

Factor three is that Meta’s attempt to exploit the community for more revenue has alienated some, and attracted regulatory attention. You certainly don’t need whistleblowers discrediting your revenue strategy if you’re a CEO, and the fact is that many people (myself included) dropped everything Facebook long ago because of increased ad exploitation and questionable content. You cannot synthesize truth by collecting the largest possible number of lies. Given that the community has to grow to support revenue growth, but that exploiting the community for revenue is likely to create pushback, the company is in a classic Catch-22. Meta needs to break this vicious cycle of exploitation before they discredit themselves to the point where there’s little room for recovery.

Where this leaves Meta, IMHO, is simple. They need to step beyond ads, create novelty, and either police their stuff better or at least make it appear that it’s not their problem to do so. The metaverse is a part of this, obviously, but I think there are other things we need to be watching for.

One is cryptocurrency, not just as a feature but as an on-ramp to a sell-something strategy. In many gaming applications, users buy what are essentially NFTs, and this requires a payment system. Everyone knows that Meta considered launching a cryptocurrency, but I think they finally realized that what they really needed was a way to sell something; how payment was handled is a secondary point. The metaverse offers three options for selling: NFTs, metaverse hosting, and “virtual retail”. They’re all interdependent.

Creating a workable metaverse is not a simple task, as some of my blogs on the topic should show. You need a considerable amount of distributed hosting and a network that connects your hosting points with little or no latency. You also need a boatload of cloud-native software that provides the functionality, of course. All of this could be difficult or impossible for some would-be providers of a metaverse, not to mention very expensive. Meta could logically offer to host other metaverses, even allowing companies to create their own customer or supplier metaverses to be hosted by Meta.

NFTs and cryptocurrency wallets could be an important part of the features of a metaverse, and Meta could re-launch its own crypto idea or support a variety of currencies. They could also provide an NFT exchange for use by their metaverse hosting customers. Taking a cut, of course, of the action.

Something like this might also allow Meta to escape some of the regulatory backlash they’re seeing, and the pressure to suppress questionable content. If a third party hosts a metaverse on Meta, is Meta any more responsible for it than a cloud provider is for an application it hosts? They could cover a lot of this in the terms of use for metaverse hosting.

As far as their own metaverse is concerned, Meta could turn it into virtually everything, including virtual online retail. Your avatar could go into a meta-store and look at a meta-product, and when you purchased it (perhaps with meta-currency) you’d see the real thing delivered. A true virtual store. Meta could enter a lot of direct retail markets. They could frame virtual visits to doctors or other professionals as metaverse activity, again taking a cut. If virtual reality can be made realistic enough, then meta-retail (and wholesale, of course) could become the baseline strategy. Guess who could control a big piece of it?

Not everything is beer and roses for Meta, though. Microsoft, so current stories go, has bowed out of its HoloLens VR/AR glasses, which on the surface might be seen as a good thing for Meta. Maybe it is, but it’s also possible that Microsoft is less bowing out of AR than deferring it. They could believe that “immersive AR” is shooting way too far ahead of the opportunity duck; that realizing the metaverse opportunity demands getting something out there that can evolve. There are a lot of lower metaverse apples to pick, and Microsoft’s cloud position means they could gain a lot by picking a few.

Because it needs to grow its community, Meta needs something glitzy, and that takes time and technology. The fact that Meta has announced “personal distancing” limits to prevent AR groping in the metaverse is an indication that they really do intend to offer immersive, realistic AR. Because of the latency challenges at the very least, immersive AR could be a barrier to a quick realization of Meta’s goals, which might open the door for others.

Do you really need immersive AR to shop? Do you need it to collaborate, or to model a factory process? The short-term answer is that you do not, even though it might become a major asset in the longer term. What you do need is a cloud, though, which Microsoft has and Meta does not. They could become cloud customers, but I doubt that they’d want to surrender the margins on the services to a cloud provider. Given that the cost of creating a truly distributed, edge-hosted, immersive-AR metaverse would be formidable (as would the time required), Meta may have to focus initially on something less immersive, less realistic…at the risk of losing more users? Or maybe not.

The initial metaverse social framework will be novel in itself, even if it doesn’t yet have immersive AR. It would surely be broader-based, since most users won’t have AR glasses and since AR requirements would limit the extent to which social-metaverse usage could be mobile. Meta could roll the other stuff in later on, as technology improves and as the initial novelty fades, creating the need for a kicker.

This isn’t going to be an easy move for Meta, not the least because the company’s own culture has been arrogant and slow to accept the realities of their own market, including regulatory risks. They may need some leadership changes, even to the top, to make it work. Meanwhile, other companies could be in a better position to exploit Meta’s own signature concept.

Enterprise Views of Cloud Providers

According to a Street publication, Google’s cloud isn’t really gaining market share. That’s obviously not a good thing, and the truth is that Google’s cloud business has been consistently ranked fourth in the space, behind even IBM. I wondered early in January why Google wasn’t gaining, given that frankly it may well have the best cloud technology and infrastructure of the whole public cloud space. That raised the broader question of just what enterprises thought about public cloud, and what public cloud providers think about their customers, so I’ve been gathering views. This blog will present them.

First, though, I want to make a point, which is that enterprises make up less than half of the public cloud market. The majority is made up of OTTs and startups, and Amazon’s cloud business is likely Number One because they’ve gotten the largest share of this bigger chunk of opportunity. I didn’t try to survey this group because 1) I don’t have the breadth of contacts, 2) I’d have NDA issues with clients, and 3) I don’t think I’d get a truthful response in many cases. Suffice it to say that this group of cloud users tend to fear Google and Microsoft as direct competitors, so they gravitate to Amazon.

Enterprises have their own emotional reactions to the cloud players, and I can get a taste of their views of Amazon, Google, and Microsoft. I’m less comfortable with the IBM cloud data; my contacts suggest that most of IBM’s cloud business comes from relatively few larger customers, which makes accurate data difficult to come by. For the three, I think there’s enough comment to draw some insights.

The first and most obvious question is whether enterprises believe they know the offerings of those three cloud providers, and almost all of them say they do. Similarly, almost all the enterprises think that the services offered by the providers are “substantially the same”, and just short of three-quarters said that pricing differences among the providers were “minimal”. This would suggest that something other than features and price drove the decisions, or that enterprises weren’t representing their process accurately.

The next question might tell the tale. When you ask enterprises what cloud provider they engaged with first, the tally of responses by provider matches quite closely to the market shares of each. As a follow-up point, when asked whether the currently dominant provider in their business was the first provider they engaged with, just under 70% said that it was.

This, to me, suggests that cloud success is tied closely to cloud provider sales engagement with enterprises. That’s not surprising; incumbency is a powerful force. It does beg the question of what happened in the remaining 30-plus percent, though, and here is where some “issues” or “differences” emerge. Two highly related things drove those enterprises.

The first of these issues was hybrid cloud, named as a priority by over two-thirds of enterprises (some named more than one issue). It was particularly interesting to me that half this group said that hybrid cloud was an “emerging” issue for them, but at the same time indicated that the applications that created their focus on hybrid cloud were in fact options all along. That suggests to me that enterprises didn’t at first see the nature of their own future cloud usage.

That seems to be supported by the second priority, which was creating new front-end technologies for their applications to accommodate changing business needs. This got support from almost exactly the same percentage of users, and it was universally cited by the half of “hybrid cloud” users who said their applications of hybrid cloud had been options all along.

What cloud providers were seen as best in their support of one or both of these? Among the three, Microsoft was named by the largest number, over two-thirds of enterprises. That means that Microsoft’s Azure was seen as “top cloud” by two-thirds of the enterprises who changed their primary providers. Clearly, it’s this factor that has been driving Microsoft’s cloud growth. Amazon was second with about 25%, and Google was third with about 8%.

Google’s failure to gain market share, then, could be linked to the fact that it is not seen as the premier hybrid cloud player. The importance of hybrid/front-end cloud in wrestling enterprises away from their legacy cloud provider is likely the reason IBM decided to emphasize it in their own positioning. Both these points are validated by comments on how the cloud providers engaged those enterprises who did abandon their incumbent provider for another.

On the hybrid/front-end cloud issue, every enterprise who switched cloud providers at all said that Microsoft’s Azure sales team engaged them fully on the issue. A bit under half said that Amazon had, and the number who said that Google had was less than a sixth. Microsoft’s presentation was considered “highly useful” by almost all the enterprises, Amazon’s by a quarter, and Google’s (significantly) by only two enterprises, below the level of statistical significance.

In some interactive chats with these enterprises, they often noted that the initial contact they had with their cloud provider options (including the incumbent) when considering a change was unsatisfactory. The biggest complaints were things like “didn’t listen to my needs” or “gave me a canned pitch”. These all boiled down to being non-responsive, and the complaint was levied most often against Google, then Amazon, and then Microsoft. If we broaden the universe of enterprises to all cloud users I had data on, rather than just those who changed from their incumbent provider, the incumbent was less likely to be considered non-responsive (not a surprise), whoever they were. That may help explain why incumbency is so valuable; it teaches the provider to listen better.

Amazon’s biggest problem came from its channel partners. Enterprises liked them the least of all the channel partners, citing them as being less responsive to their needs. Microsoft’s channel partners got the best comments, which might mean Microsoft is doing a better job of qualifying them or training them.

Better, but still not great. None of the cloud providers are prepared to tell enterprises that most of their current applications won’t ever move to the cloud, or even suggest it could be a very long time before that happens. As a result, many enterprises don’t really see the hybrid/front-end strategy holistically, and that’s an important prerequisite for effective use of public clouds.

Hybrid cloud technology, if it is indeed the key to the cloud future, is both simple and complicated. On the simple side, the coupling between workflows in the cloud and in the data center is simple, with any number of options available. Enterprises could use any helpful set of cloud features and have little or no impact on how they link back to their data center applications. On the complicated side, if hybrid cloud is “multi-cloudized”, then it could be smarter to use portable platform software (from HPE, IBM/Red Hat, or VMware, for example) to build the cloud piece, which could then be run anywhere with minimal changes. This would also facilitate redeployment and scaling across cloud and data center boundaries.
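To make that simple side concrete, here’s a minimal sketch (in Python, with an invented endpoint and invented field names) of a cloud front end doing the web-facing work and handing the transaction to a data-center application over a plain REST exchange:

```python
# Minimal sketch of the "simple side" of hybrid cloud: a cloud-hosted
# front end does the web-facing work (validation, normalization) and
# hands the transaction to the legacy data-center application over a
# plain REST call. The endpoint and payload shape are hypothetical.
import json
import urllib.request

DATACENTER_API = "https://dc.example.internal/orders"  # hypothetical on-prem endpoint

def handle_order(raw_request: dict) -> dict:
    """Cloud front-end work: validate and normalize, then hand off."""
    if "customer_id" not in raw_request or "items" not in raw_request:
        return {"status": "rejected", "reason": "missing fields"}

    # The coupling to the data center is just a request/response exchange;
    # the cloud side could be scaled or swapped without touching the back end.
    body = json.dumps({"customer": raw_request["customer_id"],
                       "lines": raw_request["items"]}).encode("utf-8")
    req = urllib.request.Request(DATACENTER_API, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())
```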

What I hear from enterprises suggests that the cloud providers are trapped in the “move to the cloud myth” to some degree at least, and that their success with the enterprise relates to the extent to which they’re breaking out. Microsoft has an advantage here because they’re an enterprise company first and a cloud company second. IBM is in the same position, but is hampered strategically by the fact that their influence is limited to big mainframe accounts, and they’ve not effectively exploited Red Hat in broadening their position. In any event, Red Hat and OpenShift depend on being cloud-neutral for a big part of their potential success. IBM, then, should have embraced a cloud-neutral strategy to maximize hybrid cloud positioning, but that would then limit the benefit to their own cloud business.

It’s hard to say whether cloud providers are driving misconceptions about “moving to the cloud”, whether they’re victims of media hype as much as buyers are, or whether buyers still believe that’s where things are heading. Whichever is the case, it would seem that both enterprises and cloud providers would benefit from examining the current cloud world, and positioning based on what’s really happening.

Is There a Relationship Between Crypto and the Metaverse?

Just where, if anywhere, is the intersection between the metaverse concept and crypto? That might be a key question for a number of reasons, but as usual the answer will depend a lot on how we define two very fuzzy concepts. Whatever the truth is, we can also expect hype to fuzz up the results, particularly since Meta’s reporting of an objectively very bad quarter virtually assures that they’ll be looking at the metaverse to restore their opportunity. Some of what they do will surely involve crypto, and I’ll blog next week about what I think will happen, but today I want to look at the broader question of the metaverse/crypto relationship.

If you read my blog from yesterday you’ll get a sense of the multiple missions that the concept of a metaverse could serve, and also my own definition of what a metaverse is. The prevailing definition, to quote, is that a metaverse is “an artificial reality community where avatars representing people interact.” On LinkedIn, I was told that Wikipedia says that a metaverse is a collection of virtual worlds created for social connection. My thinking is that we need to broaden that to say that a metaverse is an enhanced or alternative framework representing a real-world environment or community. The second, broader, definition is a superset of the others.

A metaverse’s fundamental requirement is the ability to create a digital twin of a set of real-world things, and an alternative framework in which to represent them. That framework can be designed to create a realistic virtual community, a game, or even a representation of a factory or transportation system. Humans may or may not be represented in it, and inanimate things might be generated or twinned. There is, in my view, absolutely nothing about this broad process that demands any crypto technology.
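A bare-bones sketch of what I mean by twinning, with purely illustrative fields and an invented update source:

```python
# A digital twin reduced to its essence: an in-metaverse object whose
# state is synchronized from real-world updates. Field names and the
# telemetry source are illustrative only.
from dataclasses import dataclass, field
import time

@dataclass
class DigitalTwin:
    real_world_id: str                 # identity of the thing being twinned
    state: dict = field(default_factory=dict)
    last_sync: float = 0.0

    def sync(self, telemetry: dict) -> None:
        """Fold a real-world telemetry report into the twin's state."""
        self.state.update(telemetry)
        self.last_sync = time.time()

# A truck in a transportation metaverse-of-things, updated from a GPS feed:
truck = DigitalTwin("truck-113")
truck.sync({"lat": 40.7, "lon": -74.0, "cargo": "empty"})
```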

Whether a metaverse admits of or benefits from crypto depends on how that concept is defined. There are two general definitions out there. First, “crypto” can be a shorthand term for “cryptocurrency”, and that appears to me to be the most broadly used definition of the term. Think Bitcoin, in other words. The second definition is that crypto is short for a cryptographic, blockchain-oriented mechanism for creating an authoritative record of something. The first definition would make crypto an adjunct to the metaverse, but the second might make it a very important feature of its implementation.

Obviously, you can’t pass around real money in a virtual reality. If a metaverse has to support real commerce, meaning payments and receipts, in any way, then we have to be able to translate real-world financial value into the virtual world (the metaverse) and back. Even the ability to buy a weapon or a drink in a game, if the right is backed up by something like a credit card, means that you have to be able to represent the transaction in the metaverse, perhaps with a “local currency”. If payment between players is possible, then that currency has to be convertible to real money, and that means that if you could counterfeit it, you would be effectively creating real money. Cryptocurrency, or a blockchain-authenticated local currency, could be the solution to that. There could be other solutions too, of course.
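To illustrate why a blockchain-authenticated local currency resists counterfeiting, here’s a toy hash-chained ledger; real systems add signatures and consensus, and every name here is invented:

```python
# Toy hash-chained ledger for an in-metaverse "local currency". Each
# record commits to everything before it, so a forged or altered payment
# breaks every later hash. This only illustrates the tamper-evidence idea.
import hashlib
import json

class Ledger:
    def __init__(self):
        self.entries = []          # list of (payload, chain_hash) pairs

    def append(self, payload: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        data = prev + json.dumps(payload, sort_keys=True)
        h = hashlib.sha256(data.encode()).hexdigest()
        self.entries.append((payload, h))
        return h

    def verify(self) -> bool:
        prev = "genesis"
        for payload, h in self.entries:
            data = prev + json.dumps(payload, sort_keys=True)
            if hashlib.sha256(data.encode()).hexdigest() != h:
                return False
            prev = h
        return True

ledger = Ledger()
ledger.append({"from": "avatar-9", "to": "armorer-2", "amount": 50})
assert ledger.verify()
ledger.entries[0][0]["amount"] = 5000   # attempted counterfeit...
assert not ledger.verify()              # ...is immediately detectable
```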

That’s not the end of it. If you pay for a sword in a game, you “have” it. Could players counterfeit swords, and sell them within the game? Perhaps, depending on the rules. If they could, then you also need to be able to represent a virtual element in an authoritative way, which essentially means that things that are “real” in the metaverse might have to be represented as non-fungible tokens (NFTs). An NFT is, in a sense, a cryptocurrency, but in another sense it reflects our second definition, which is that it’s simply a validation that something is a representation of something else.
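In that second sense, an NFT is little more than an issuer-committed fingerprint of the thing it stands for. A hypothetical illustration (a real NFT would live on a shared chain):

```python
# An NFT in the "authoritative record that X represents Y" sense: the
# token is a unique fingerprint committed to by the issuer. Names are
# illustrative only.
import hashlib
import json

def mint_token(issuer: str, item: dict, serial: int) -> str:
    record = json.dumps({"issuer": issuer, "item": item, "serial": serial},
                        sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

genuine = mint_token("game-authority", {"kind": "sword", "grade": "rare"}, 7)
fake    = mint_token("player-42",      {"kind": "sword", "grade": "rare"}, 7)
assert genuine != fake   # a counterfeit can copy the look, not the token
```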

That’s a pretty broad definition, so we can say that the second application of “crypto” is a lot more nuanced. The most obvious application of that definition is the authentication of the relationship between a “digital twin” and its real-world counterpart. If we’re talking about people and avatars, that would mean assuring the metaverse that a given avatar is what it represents itself to be. Whether that assurance to the metaverse meant assurance to everyone in it would depend on whether the rules of the metaverse allowed someone or something to misrepresent itself. Can I don a disguise? If so, then my identity (the twin-to-reality association) has to be flexible. However, when I buy a sword, I need the metaverse to be as sure of who I am as a store clerk would be of my identity if the real “me” used a credit card.
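One way to picture that twin-to-reality binding: the metaverse operator attests to it with a keyed MAC, and the “clerk” checks the attestation at purchase time no matter what disguise the avatar wears. This is purely a sketch; the key handling and names are invented:

```python
# Sketch of the twin-to-reality binding: the metaverse operator attests,
# with a keyed MAC, that an avatar is bound to a verified real identity.
# The avatar may wear any disguise it likes; the attestation is what gets
# checked at purchase time. Key management here is purely illustrative.
import hmac
import hashlib

OPERATOR_KEY = b"hypothetical-operator-secret"

def attest(avatar_id: str, verified_identity: str) -> str:
    msg = f"{avatar_id}|{verified_identity}".encode()
    return hmac.new(OPERATOR_KEY, msg, hashlib.sha256).hexdigest()

def check(avatar_id: str, claimed_identity: str, attestation: str) -> bool:
    return hmac.compare_digest(attest(avatar_id, claimed_identity), attestation)

tag = attest("avatar-masked-rider", "citizen-1138")
assert check("avatar-masked-rider", "citizen-1138", tag)       # sale proceeds
assert not check("avatar-masked-rider", "citizen-9999", tag)   # imposter rejected
```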

The ability of crypto to validate identity is fundamental to many current crypto/blockchain applications, and particularly to Web3 stuff. But it’s not just “human identity” that we have to worry about. In a metaverse-of-things (MoT) IoT application, we might digitally twin a manufacturing process to allow for computer simulation of production control. It would be awkward to say the least if some foreign element could be introduced into the MoT, where it could either bollix up the works or even steal something. A twin of a truck, introduced into a transportation MoT, could end up being loaded with real goods.

In both these applications, there are implications that only add to the potential complexity. Just as you need a wallet for real money, you need a wallet for cryptocurrency, and perhaps a virtual backpack or scabbard to hold your virtual possessions. The elements of an MoT need a “locale” that holds the process, and we need a way of moving things into and out of their repositories, ways that might or might not involve a transfer of ownership. Drawing my sword isn’t the same as giving it to you, or throwing it on the ground, and changes to the ownership relationships have to be made appropriately. Through it all, the properties of the sword remain.

Except that I could “process” the twin element, which in MoT would reflect real manufacturing steps. Those have to be recorded. If I throw down my sword and it breaks, that has to be reflected in the properties of the sword, and so would having it repaired. Staying with manufacturing/transportation, there are changes of ownership that have to be managed. Crypto/blockchain isn’t the only way to do that, but it’s a logical way given that we have blockchain-based initiatives that accomplish much the same sorts of things today.
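One way to model all of this is an append-only event history for each virtual object, where drawing, giving, dropping, breaking, and repairing are distinct events with distinct consequences. A plain-Python sketch (a blockchain version would chain the record hashes); all names are invented:

```python
# The sword's lifecycle as an append-only event history: drawing it,
# giving it away, and dropping it have different ownership consequences,
# and damage or repair changes its properties. Current state is just a
# fold over the history.
events = []

def record(item: str, event: str, **details):
    events.append({"item": item, "event": event, **details})

record("sword-7", "purchased", owner="avatar-9")
record("sword-7", "drawn", owner="avatar-9")      # no ownership change
record("sword-7", "given", owner="avatar-12")     # ownership transfers
record("sword-7", "dropped", owner=None)          # abandoned in the locale
record("sword-7", "broken", condition="damaged")  # property change
record("sword-7", "repaired", condition="sound")

state = {}
for e in events:
    state.update({k: v for k, v in e.items() if k not in ("item", "event")})
print(state)   # {'owner': None, 'condition': 'sound'}
```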

Where this all leaves us is that there is definitely a role for blockchain in what may turn out to be all of the possible metaverse missions. There may also be a role for cryptocurrency and NFTs. However, and to me this is the big “however” point, there are a lot of things we need to have before either blockchains or cryptocurrency/NFTs have to be addressed. The risk of doing without them in the metaverse is silos based around identity and authenticity assurance, but every metaverse implementation could end up being a silo at the architectural level, and that’s a much greater risk.

If anyone is doing the right thing here, with the architecture and with “crypto” in combination, I’d sure like to hear about it. Takers?

More Metaverse Missions?

If you like euphonic comments, how about “more morphs mix metaverse”? It’s pretty clear already that we’re going to see the original concept of the metaverse broadened to the point where it could apply to nearly everything. That’s too bad, because there are some (more euphonics) multi-faceted metaverse missions that actually could impact the architecture and evolution of the concept, and they could get buried in the hype.

The simple definition of a metaverse is that it’s an artificial reality community where avatars representing people interact. That’s what Meta (yes, the Facebook people) seems to have intended. In a sense, this kind of metaverse is an extension of massive multiplayer gaming, and that similarity illustrates the fact that the concept of the metaverse as a platform differs from the concept of the metaverse as a service or social network.

As a platform, a metaverse is a digital twinning framework designed to mimic selected elements of the real world. We could envision a metaverse collaborative mission as one that fairly faithfully mimicked the subset of reality needed to give people a sense of communicating in a real get-together. A game is a mission where only the “identity” of the player is mimicked; the persona the avatar represents isn’t tightly coupled with the real physical person except perhaps in movement, and perhaps not even there. Maybe it’s only a matter of control. As I suggested in earlier blogs, you could also envision an IoT mission for a metaverse, where you digitally twinned not necessarily people but transportation or industrial processes.

What we’re seeing already is an attempt to link blockchain concepts, ranging from NFTs to cryptocurrencies, to a metaverse, then say that anything that involves blockchain or supports crypto is a metaverse. Truth be told, those are more readily linked with the Web3 concept, where identity and financial goals are explicitly part of the mission. That doesn’t mean that you couldn’t have blockchains and crypto and NFTs in a metaverse, only that those things don’t make something a metaverse.

So what does? An architectural model, I think, made up of three specific things.

The first is that digital-twin concept. A metaverse is an alternate reality that draws on real-world elements by synchronizing them in some way with their metaverse equivalent, their twin. Just what gets synchronized and how it’s done can vary.

The second is the concept of a locale. Artificial reality has to reflect the fact that people can’t grasp the infinite well. We live in a vast world, but we see only a piece of it, and what we see and do is contained in that piece, which is our locale. We can define locales differently, of course (a video conference could create a metaverse locale), but a locale is fundamental because it’s the metaverse equivalent of the range of our senses. This means, collaterally, that the metaverse may have to generate environmental elements and even avatars that don’t represent an actual person but play a part—the Dungeons and Dragons non-player character, or NPC.

The third thing is contextual realism. The metaverse isn’t necessarily the real world, or even something meant to mimic it, but whatever it is, it has to be able to present in a way that matches the experience target of its mission. If we’re mimicking a physical meeting, we have to “see” those in our meeting locale and they have to move and speak in a way consistent with a real-world meeting. If we’re in a game and playing a flying creature as our avatar, we have to be able to impart the feeling of flight.
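To show how the three elements compose, here’s a skeletal sketch; every class and method here is a named placeholder, not a real engine API:

```python
# A skeletal composition of the three elements: digital twins inhabit a
# locale, the locale pads itself out with generated non-player elements,
# and each inhabitant gets a view built for its own perspective.
class Locale:
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity
        self.twins = []     # avatars synchronized to real people or things
        self.npcs = []      # generated filler: crowds, scenery, D&D-style NPCs

    def admit(self, twin) -> bool:
        if len(self.twins) >= self.capacity:
            return False    # the locale bounds what anyone can perceive
        self.twins.append(twin)
        return True

    def populate(self, ambience: int) -> None:
        """Generate non-twinned elements to make the scene read as real."""
        self.npcs = [f"npc-{i}" for i in range(ambience)]

    def view_for(self, twin) -> dict:
        """Contextual realism: each inhabitant sees the locale from its own
        perspective; actual rendering would happen near the viewer."""
        others = [t for t in self.twins if t is not twin]
        return {"locale": self.name, "others": others, "ambience": self.npcs}
```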

I think that it would be possible to create a single software framework capable of supporting any metaverse model and mission, given that we could define a way of building a metaverse that could provide the general capabilities required for the three elements above. However, the specific way the architecture would work for a given mission would have to fit the subjective nature of metaverses; what makes up each of my three things above will vary depending on the mission.

A good example of this is how a metaverse is hosted. Just as a cloud-native application is a mesh of functions, so is a metaverse. The place where a given function is hosted will depend on the specific mission, and in fact some functions would have to be hosted in multiple places and somehow coordinated. For example, a “twin-me” function that would convert a person into an avatar would likely have to live locally to each person. I also speculated in a past blog that a “locale” would have to have a hosting point, a place that drew in the elements of all the twin-me functions and created a unified view of “reality” at the meeting point.

Blockchain, NFT, and crypto enthusiasts see the metaverse as a GPU function, because GPUs do the computational heavy lifting behind blockchains. I think that this focus misses all three of my points of metaverse functionality because it misses the sense of an artificial reality. The limit to a metaverse is really the limitation of creating a realistic locale, and the biggest barrier to that is probably latency, because latency limits the realism of the metaverse experience.

We could envision a collaborative metaverse with ten people in it, with all ten being in the same general real-world location. We could find a place to host our locale that would present all ten with a sense of participation, providing our “twin-me” functions were adequate. Add in an 11th person located half-a-world away, and we would now have a difficult time making that person feel equivalent to the first ten, because anything they did would be delayed relative to the rest. They’d “see” and “hear” behind the other ten, who would see and hear them delayed from their real actions.
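You can see the problem with a little minimax arithmetic: pick the locale hosting point that minimizes the worst participant latency, and the distant participant still dominates. The latencies and site names below are invented:

```python
# Why the eleventh participant hurts: choose the locale hosting point that
# minimizes the *worst* participant latency, since realism is gated by the
# slowest round trip. Latencies (ms) and site names are invented.
latency_ms = {                     # candidate edge site -> per-participant latency
    "edge-east": [12, 15, 9, 14, 11, 13, 10, 16, 12, 15, 160],
    "edge-west": [45, 40, 48, 44, 46, 41, 43, 47, 45, 42, 130],
    "edge-apac": [150, 155, 148, 152, 149, 151, 153, 150, 154, 152, 18],
}

def best_host(sites: dict) -> tuple:
    # minimax placement: the site whose worst-served participant is least bad
    return min(sites.items(), key=lambda kv: max(kv[1]))

site, lats = best_host(latency_ms)
print(site, max(lats))   # edge-west 130: no placement rescues the remote user
```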

This doesn’t mean that there’s no GPU mission in metaverse-building. I think that the twin-me process could well be GPU-intensive, and so would the locale-hosting activity, because the locale would have to be rendered from the perspective of each inhabitant/avatar. The important thing is contextual realism, which GPUs would contribute to but which latency would tend to kill. Thus, it’s not so much the GPU as where you could put it, particularly with regard to locale.

Everyone virtually sitting in a virtual room would have a perspective on the contents, and on each other. Do we create that perspective centrally for all and send it out? Not as a visual field, unless our metaverse is very simple, because the time required to transmit it would be large. More likely we’d present the room and inanimate surroundings as a kind of CAD model and have it rendered locally for each inhabitant.
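Here’s that division of labor in miniature: the scene model goes out once, per-tick deltas carry only poses, and each viewer’s device composes its own perspective. The message shapes are invented for illustration:

```python
# Ship a compact scene description once, send small per-tick deltas, and
# let each inhabitant's device render its own perspective locally.
scene_model = {                       # sent once, like a CAD model of the room
    "room": "conference-a",
    "geometry": ["table", "chairs", "window"],
}

def delta_for_tick(avatars: dict) -> dict:
    """Per-tick update: only poses cross the network, not pixels."""
    return {"poses": dict(avatars)}

def render_locally(model: dict, delta: dict, viewer: str) -> str:
    """Runs on the viewer's device: compose the frame from the viewer's
    own vantage point, which never has to cross the network."""
    others = [n for n in delta["poses"] if n != viewer]
    return f"{viewer} sees {', '.join(others)} in {model['room']}"

tick = delta_for_tick({"alice": (0, 1), "bob": (2, 3), "carol": (4, 5)})
print(render_locally(scene_model, tick, viewer="alice"))
```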

This sort of approach would tend to offload the creation of the metaverse’s visual framework from the locale host to the user’s location, but I think that business uses of a metaverse are likely to have a “local locale” host to represent their employees. That means that metaverse applications would be highly distributed, perhaps the most distributed of any cloud applications. It also means that there would be a significant opportunity for the creation of custom metaverse appliances, and of course for edge computing.

The connection between metaverse visualization and metaverse “twin-me” and gaming is obvious, and I can’t help but wonder whether Microsoft and Sony used that as part of their justification for buying gaming companies. However, there’s a lot to true metaversing that simple visualizing or twinning doesn’t cover. Microsoft, with Azure, has an inroad into that broader issue set, and Sony doesn’t. They may need to acquire other services, which begs the question of who will offer them.

The metaverse concept is really the largest potential driver of change in both computing and networking, because the need to control latency to support widely distributed communities would tend to drive both edge computing and edge meshing. That would redefine the structure of both, and open a lot of opportunities for cloud providers, vendors, and even network operators down the line. And perhaps not that far down the line; I think we could see some significant movement in the space before the end of the year.

The SD-WAN Wars are Coming

This is going to be a year of great change in networking, which means both great opportunities and great risks. In particular, we really seem to be setting up for a major shift in the virtual networking and SD-WAN space. The question, as it often is, is what exactly we’re going to be fighting over, and who’s going to take what critical positioning steps to grab control of the new situation.

SD-WAN is already hot, and its fundamental value proposition, lower-cost VPNs, is getting hotter. As I noted last week, a major improvement in consumer broadband technology drives the cost of high-speed Internet connectivity down. That makes SD-WAN more attractive than MPLS VPNs, cost-wise at least. SD-WAN as a VPN extension, or even a VPN replacement, is essentially old news here, but if the economies of consumer broadband drive more businesses to reconsider IP VPNs even where they’re available, the technology could get a boost.

Even with a boost, though, differentiation is the key to success in sales, which means that SD-WAN vendors have to push beyond the obvious. I’ll illustrate what I think is happening with three references, below.

One place they’ve been pushing is undergoing its own revolution—the cloud. Even before COVID, there was growing enterprise interest in creating Internet-based portals for customers and partners, and when WFH was added in 2020, we saw a major upswing in the use of the Internet as an employee empowerment tool. The cloud was the primary vehicle enterprises used to create portals to their legacy applications, and that’s been (and will remain) the primary driver of enterprise cloud commitment. SD-WAN, in software form, can create a direct-to-cloud connection just as it can support a thin-site connection.

This could be critical for a number of reasons, not the least being that while cloud providers are now starting to offer VPN-like services for enterprises’ cloud-hosted elements, these aren’t helpful in multi-cloud because they’re cloud-specific. On the other hand, an SD-WAN could link the cloud to the VPN, whatever cloud we’re talking about, and could also link branch offices and other remote sites, even home workers. Cloud connectivity is already recognized as a new SD-WAN driver, but it’s going to get a lot more recognized this year.

Then, of course, there’s security. The first of the three sources I promised to cite, from VentureBeat, is about what should have been the security focus all along, zero trust. I’d love to say that this piece frames the future of security, and the relationship between it and SD-WAN, but it totally misses the mark. The story doesn’t even talk about the real zero-trust model, which has to be based on substituting explicit connection permission for IP networks’ traditional promiscuous connectivity.

Like any term that gets media attention, zero-trust has gotten expanded to the point where it’s about almost anything and everything related to security. That’s probably largely due to the fact that software vendors and network vendors with established security portfolios aren’t particularly interested in seeing their business impacted by something new, but the fact is that the “trust” that we’re talking about in zero-trust is about trust in connectivity.

IP networks are inherently promiscuous in terms of connectivity, meaning that if you don’t want some connections to be made, you have to do something to block them. Traditionally that blocking has evolved into an endpoint feature, a “firewall” that stands between a user or application and the wide and evil world. However, once you decide that you’re going to create a higher-layer network service, as virtual networking and SD-WAN do, you have a chance to define connection rights there.

Back in 2019, I did a short report on SD-WAN, and in the report, I made the point that the number one requirement for SD-WAN was session awareness, meaning the ability of the software to recognize users and applications, and the network relationships (sessions) between them. Session awareness means that an SD-WAN can control what sessions are permitted, and that’s what I’ve believed from the first is the foundation not only of zero-trust security, but of security overall.

It’s possible to introduce something like session awareness via an expanded definition of a firewall, but that approach has challenges. Firewalls are per-packet elements; they look at packet headers to decide what to admit and what to reject. Making them aware of even the “allowed” IP addresses and ports, much less the list of ones not allowed, would make them impossibly complex and introduce significant latency. You need to introduce session awareness at the connection level, and manage the overhead there.
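Here’s the distinction in miniature: connection rights are checked once, at session setup, against an explicit allow-list, and the per-packet path reduces to a session-table lookup. The users, applications, and tuples are invented:

```python
# Session awareness versus per-packet filtering: permission is evaluated
# once at session setup against an explicit allow-list (zero trust means
# default-deny), after which packets forward on a cheap table hit rather
# than a firewall rule scan. All identities and tuples are invented.
ALLOWED = {                       # explicit permissions: (user, application)
    ("alice", "erp"),
    ("bob", "crm"),
}

sessions = {}                     # live session table: 5-tuple -> (user, app)

def open_session(user, app, five_tuple) -> bool:
    """Zero trust: no permission record, no connection."""
    if (user, app) not in ALLOWED:
        return False
    sessions[five_tuple] = (user, app)
    return True

def forward(five_tuple, packet) -> bool:
    """Per-packet path: a single dictionary lookup, not a rule scan."""
    return five_tuple in sessions

assert open_session("alice", "erp", ("10.0.0.5", 40001, "10.1.0.9", 443, "tcp"))
assert forward(("10.0.0.5", 40001, "10.1.0.9", 443, "tcp"), b"...")
assert not open_session("mallory", "erp", ("10.0.0.66", 4444, "10.1.0.9", 443, "tcp"))
```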

I know of only two companies in the SD-WAN space who even claim any level of session awareness, and that hasn’t changed for several years. If there were a real recognition of the security side of SD-WAN, the connection with zero trust, you’d expect to see SD-WAN vendors adding the features to their own products. They haven’t; they’ve only band-aided and fuzzed up the concept with a loose link to firewalls.

What are vendors doing? That’s the target of my second reference. Cisco is of course the gorilla of network equipment, and they’ve recently announced a link between SD-WAN and their WebEx collaboration. This is consistent with recent announcements that have linked their SD-WAN to cloud and multi-cloud. The Cisco drive is sales-friendly, tactical, but it doesn’t reflect any Cisco awareness of a seismic shift in SD-WAN and virtual networking. Yes, as I’ve noted many times, Cisco likes to be a “fast follower”, but it seems to me that their recent announcements epitomize “follower” more than “fast”.

Cisco isn’t going to drive an SD-WAN or virtual-network revolution. Like most SD-WAN players, they’re committed to simple changes to their base technology, which means that even a strong cloud position is a bit of work. Security? Forget it; they’re well behind the positioning of other SD-WAN providers.

Including arch-rival Juniper. Juniper’s acquisition of 128 Technology gave them a major edge in the technology of SD-WAN and the implementation of a true zero-trust model. Their most recent announcement on SD-WAN, the third of my references, linked their “Session Smart Routing” (128 Technology) approach with Mist management, which not only simplifies operations for the typically small-to-fringe SD-WAN sites where local support is likely unavailable, but also makes their solution more attractive as a managed service. MSPs are a major conduit for SD-WAN sales, and the operational benefits of Mist would also make the Juniper strategy just as attractive to network operators.

One way or the other, SD-WAN is going to grow significantly. As it does, it’s inevitable that the market looks further and harder for differentiation, particularly when what looks like it might be developing is a true shift from a limited SD-WAN position to a much more interesting and important virtual-network positioning. The improvements in FWA and fiber broadband are priming the pump now, but it’s going to be the cloud and security that deliver buckets of opportunity. You can bet that these areas will be getting a lot of attention in 2022, but buyers will need to beware of the tendency of vendors to position old technology to address new missions. Vendors will need to start thinking about making real enhancements to some creaky old offerings, because things will get real, and very quickly, in the SD-WAN wars.