Is the Cloud’s Future Horizontal or Vertical?

Is the road to success in the cloud horizontal or vertical?  I’m not talking about actual roads, of course, but rather about whether the pursuit of public cloud revenue should be directed at generally useful technologies (horizontal) or at specialized, focused applications (vertical).  One of these approaches has dominated over time, but the other is seeing increased interest, and one provider is arguably betting it’s going to be the better approach.

When public cloud computing first launched, it tended to be as generally targeted as possible, looking for simple server consolidation as the driver.  IaaS, or infrastructure as a service, tries to make the cloud look and work like a server, which is obviously pretty “horizontal” in my terms.  However, it eventually became clear to providers that less than a quarter of business IT could migrate to the cloud under the IaaS paradigm.

“Web services”, meaning features offered through APIs by cloud providers to facilitate developing or modifying apps to maximize cloud benefit, were the answer to this.  At this point, we saw our first signs of a different approach among providers.  Amazon stayed very generic in its tools, while Microsoft with Azure specialized their cloud services to mesh with Windows Server, creating a platform-as-a-service (PaaS) model distributable between premises and cloud.

Web services’ tendency to divide providers by approach is continuing and evolving, driven by provider desires to improve market share.  Amazon has tended to be a bit more focused on what could be called the cloud-specific market opportunities, and Microsoft on the hybrid cloud users.  These have both a horizontal and vertical dimension.

The horizontal side of the picture is the general goal of using web services to support new development framed around the separation of business applications into a distinct front-end presentation-and-device model, and back-end transaction processing model.  Businesses are very reluctant to shift the latter to the cloud for a variety of reasons already well-known.  On the other hand, the cloud is a way of extending web-server involvement in user interfacing.  Add logic that’s still presentation-centric to the web/cloud side and you get little pushback.

The specific transitional concept, the thing that has perhaps both vertical and horizontal aspects, is mobility.  Mobile apps, to optimize productivity or other value to the business, need to be more than just a web interface for a small screen.  They have to reflect the difference in how a mobile user relates to information, which means that you may want to pull a bit of the back-end transaction process forward into the cloud.  Conceptually, you may also want to do that for hybrid cloud applications as a means of offloading transaction processing tasks to a more elastic resource. With mobile you have vertical and horizontal drivers.

The impact of this transitional stuff happens at the point of the transition, which is the boundary between the front-end and transactional pieces of the app.  That boundary is Microsoft’s specific focus and strength, and Amazon’s Greengrass and Snowball are aimed (in different ways) at supporting the migration across that boundary, either persistently because of a function shift strategy or through scaling and redeployment.

Event processing is a further evolution toward a vertical mission.  Front-end and mobile apps are naturally contextual, meaning that while there may be processing steps from the initiation of an interaction to its completion, the user at least is aware of the progression and it’s therefore fairly easy to make the rest of the process as aware (as “stateful”) as it needs to be.  With events, the problem is that there is not likely to be this natural context, unless the event is “atomic” meaning that it carries a complete meaning and thus a complete requirement for processing.  For everything else, events need to be turned into transactions before they can be handed off.  That function doesn’t exist today, and so it is as easily written for the cloud as for the data center.  Great target, if you’re a cloud provider looking for some new revenue opportunity!
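
To make the “events into transactions” point concrete, here’s a minimal Python sketch of the kind of contextualization I’m describing: non-atomic events are accumulated against a correlation key until enough context exists to hand off a complete transaction.  The fields and the completion rule are purely illustrative, not anyone’s product.

```python
from collections import defaultdict

class EventCorrelator:
    """Accumulate partial events into a complete, hand-off-able transaction."""
    def __init__(self, required_fields):
        self.required = set(required_fields)   # what a complete transaction needs
        self.pending = defaultdict(dict)       # correlation key -> partial context

    def ingest(self, event):
        """Fold one event into its context; return a transaction when complete."""
        key = event["correlation_id"]
        self.pending[key].update({k: v for k, v in event.items() if k in self.required})
        if self.required.issubset(self.pending[key]):
            return {"type": "transaction", **self.pending.pop(key)}
        return None                            # still waiting for more context

# Three fragmentary events become one transaction for the back end.
correlator = EventCorrelator(required_fields=["device", "reading", "location"])
for ev in [{"correlation_id": "t1", "device": "pump-7"},
           {"correlation_id": "t1", "location": "plant-3"},
           {"correlation_id": "t1", "reading": 42.7}]:
    txn = correlator.ingest(ev)
print(txn)  # {'type': 'transaction', 'device': 'pump-7', 'location': 'plant-3', 'reading': 42.7}
```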

This is where our absent-so-far player Google comes into the picture.  Google has, in my view, the best architecture from which public cloud services can be delivered.  It’s not even close.  However, they have tended to be more evangelists for a different way of thinking about and implementing applications than a promoter of the cloud as a business asset.  As a result, they’ve neither captured the new cloud development opportunity with things like social-media startups that has fueled Amazon, nor the hybrid cloud that Microsoft has ridden to success.  To try to follow either course today would set them on a catch-up path that has little glamor or chance of success.  Instead they chart a new one.

Google, I think, sees two near-future drivers for the cloud.  One is events, and the other artificial intelligence (AI).  Google seems to see that there’s a natural symbiosis between the two, and that they could leverage that to make cloud prospects value a dualistic approach.  That might then be able to migrate “down” into the mobile front-end mission, which could then give Google a shot at an equal footing with competitors.

The symbiosis between AI and events is related to a post from CPLANE, one I reposted on LinkedIn.  What I get from the post is that our dependence on technology is far outstripping our tech literacy.  As a result, we risk having more and more new and useful things end up going nowhere because nobody understands how to get them, deploy them, or use them.  Event processing is an example of this.  We organize work into transactions because we think contextually, transactionally.  However, too many events demanding too much contextualization overload the human ability to contextualize.  AI can both handle events (which reduces the stuff that humans need to look at) and contextualize what it doesn’t handle (through complex event processing, for example).

The result of this combination could focus Google on event processing rather than hosting event processing components.  Amazon and Microsoft are more in the latter area.  This is why I think Google focused their custom silicon (announced at their Cloud Next event) on AI instead of just looking at replicating Greengrass capabilities for migrating cloud processes to the edge by loading them on CPE.

This is one of those moves by a company that could prove very smart or very destructive.  Amazon and Microsoft have won so far by appealing to the need of the moment, staying just a little out in front of merchandisable trends.  That lowers both the barrier to delivering something and the barrier to using it.  Google, as it traditionally has, seems to be taking the “make them rethink this” approach, which of course relies on thinking and on a more architected solution model.

Is this going to work for Google, lead the company back to at least parity with its cloud competitors?  I’d like to think it will, but I have to doubt it.  There has always been a cultural void between the Silicon Valley crowd and traditional businesses.  I remember watching an old commercial for the Apple Lisa (well, I said it was old, didn’t I?) where a youthful guy comes into his office on the weekend with his Irish Setter and sits down to a Lisa.  I happened to be with a VP at Chase the following day, and his comment on the commercial was “What kind of company would let someone bring their dog to work?”  That thought probably never occurred to Apple, but the Lisa failed so maybe it should have.  You can’t have instant gratification from culture revolutions; too much inertia.  That’s probably true here too.

CIMI’s SD-WAN Tutorial is Now Available

We’re happy (and relieved!) to tell you that the tutorial on SD-WAN technology we’ve promised is now available HERE as a zip file that contains the PDF.  There’s a license agreement on the cover page that describes the requirements for sharing this document, and I’ll expand on them again here.  Note that this PDF is protected from editing, and any attempt to break that protection is a violation of our license and copyright.

First, this document can be shared only in PDF form, exactly as provided on our website.  You may not edit the document in any way, including changing the text, adding information, or extracting or replacing pages.  You can print it for your own use, in low-resolution form, but you cannot share it in printed form without our express consent.

Second, you must attribute this document to me (Tom Nolle) and CIMI Corporation and acknowledge our copyright if you post a copy to another website.  You must also include a link to our website.  If you send copies of this document via email, the email must contain the attribution, copyright acknowledgement, and link.

Third, you may not use this document in any way that implies our endorsement of a product, service, or company.  That means that you may not distribute it in, or in association with, any sales/marketing event without our consent.

Next, for those of you who follow my posts on LinkedIn, I want to make an earnest request for some courtesy with regard to comments.  It’s bad form to hijack someone else’s post to promote yourself, your company, or your products or services.  I’ve made a point of calling out people who do that, and I’m especially sensitive here because this tutorial cannot be used to promote vendors/products in the SD-WAN space.  If you post what I think is a commercial, I’ll ask you to take it down.  If you don’t, then I won’t respond to any comments from you in the future, and I’ll disable comments on this thread, which means others won’t get to ask questions.  Be nice here, please.

Finally, as is always the case with what goes out under my name, I wrote every word of this document.  I invited an old friend, Dwight Linn of FAE Telecom Inc. to review and offer suggestions, and I thank him for his efforts.  I also asked a half-dozen enterprise network executives and service provider/MSP executives to review the material, and while I can’t use their names because they’ve asked me not to, I thank them as well.  No SD-WAN vendor or network equipment vendor in a related space saw this document before its release, and nobody influenced what I said in it.  You can rely on that.

What’s Nokia’s Problem, and How Far Does it Infect the Industry?

Nokia’s quarterly numbers were, to say the least, disappointing, and while telecom equipment is generally a challenge these days (Juniper had its own problem with its quarterly results), Nokia seems to be more challenged than most.  The reasons for that should be an indicator for others in the telecom equipment space, and also perhaps for the networking industry overall.

I just blogged about the problem of vendors stuck in “feature-neutral”, unable or unwilling to get beyond the old-line networking of the last decade.  That’s not Nokia’s problem.  They actually have very good technology, with well-advanced features.  What they don’t seem able to do is make people care, to get their hands around a way to advance their own strategy and turn it into a buyer’s strategy.

One problem that Nokia obviously has is “mongrelization”.  Nokia is the union of the original Nokia company, and Alcatel-Lucent, which of course is itself the union of Alcatel and Lucent.  Those of us who knew the latter two companies through the years and watched the merger progress probably agree that there was never a happy combination achieved.  Given that, it’s no surprise that adding another company to the mix resulted in a certain amount of confusion and tension.

In the old ALU and the current Nokia, a lot of product line people have complained to me that they seem to spend more time competing with other units of their own company than they do with the real outside competitors.  M&A always generates political battles as the combined staffs of two companies fight for positioning in the remaining organization.  This is certainly true with Nokia and the old ALU, but in Nokia’s case there are also some other issues.

Both ALU and Nokia were notably providers of wireless infrastructure, which of course means a combination of the two created more directly competing product strategies.  That in turn exacerbates the political tension associated with the merger, and also any other tensions that happen to be simmering inside either or both companies before they joined.

In addition to the M&A blues, Nokia also has to face the fact that European telecom companies have hardly been superstars in the marketing/positioning wars.  Marketing and positioning are key if you plan to rely on feature differentiation in your market, because you need to grease the skids of the sales education process by getting prospects to value the messages you want to convey on a sales call.  Cisco is a master of this, and Nokia is exactly the opposite.

If you’re not able to sustain feature differentiation because you don’t know how to sing, you’re stuck with price competition.  That is a truly deadly place to be in the telecom equipment space because everyone knows there’s one and only one price leader, now and forever more—Huawei.  Some people were surprised by the sharp drop Nokia had in gross margins on equipment.  Why be surprised, if you assume they’re fighting head-to-head price wars they cannot win without giving away the store?

I know from experience that ALU and Nokia always tended to look at competitive analysis in terms of pasting their features on a chart against those of others, including Huawei.  Why, I asked at one point, would you assume that Huawei would accept your features as the competitive baseline?  “Huawei is not a competitor,” I said at one meeting.  “They’re who wins if you decide not to compete.”  Not setting the feature agenda for a market when you depend on features, not price, is deciding not to compete.

Another problem that Nokia faces is a blind reliance on the “Field of Dreams” theory of telecom equipment sales.  Stick a new ascending digit as a prefix on the letter “G” and you have a new wireless market, guaranteed.  There are two problems with this thinking.  First, standards groups take forever to do anything, which means that you have little near-term new opportunity generated by a standards-driven market change.  Second, standards-driven market changes are great for the standards people and standards bodies, but the CFOs are getting increasingly wary of them.

You can see in the way that Nokia is positioning its RAN/NR assets that it’s recognizing that it needs to sell something before 5G is fully baked, and that even then it’s not clear just how much 5G adoption will drive the market overall.  We read about all these new 5G announcements in the media, but they carefully submerge the fact that most of those new cities are getting 5G/FTTN hybrids using millimeter wave for home broadband delivery, and players like Nokia aren’t leading that particular 5G niche.   Even the mobile 5G is really 5G New Radio (NR) rather than the full 5G spec.

Nokia has also proved to be just as remiss in promoting what’s probably their strongest strategic asset as Alcatel-Lucent (from which that asset was acquired) had been.  Nuage has been and is the strongest network operator SDN product in the industry…and perhaps continually the worst-marketed.  SDN was seen by router aficionados in ALU as a silly notion that just might impact router sales.  The Nuage people were largely geeky, which meant they weren’t very good at promoting their own stuff.  Both ALU and Nokia are also, as I’ve already noted, in the lower quarter of the space with respect to marketing/positioning skill.  No wonder Nuage doesn’t get a break.

Consider all of this in conjunction with another interesting truth, which is that the only large vendor in the space who’s worse at marketing and positioning is Huawei.  Their strategy has been to wait until a market develops, and then lead it in price.  The best defense against Huawei isn’t to wait till they’re eating your lunch on pricing, then whine.  It’s to force them to do what they are not good at, which is sing until buyers rush in tears to your door hoping you’ll deliver them feature redemption.

Consolidation is the only inevitable consequence of M&A, but all the things I’ve noted above are predictable, likely, consequences of consolidation.  The problem is that the efficiency impact of consolidation is a long-term thing, and all the consequences of the consolidation whose benefits you’re waiting for are meanwhile eating you alive.

If you’re being defeated by price in the present, you make buyers look to the future for reasons to buy your stuff.  Almost everything Nokia says about the network of the future, the 5G revolution, is so old and trite that an enthusiastic cheerleader could go comatose just hearing it.  Gosh, guys, don’t you have anything new to say?  How many generations of networking were supposed to be justified by things like “medical imaging”?  The future doesn’t mean science-fiction, it means credible new opportunities linked to available Nokia features.

Which should be a key point to Nokia planners, because most vendors in the space are dragging their feet on generating features, and so don’t really have them available.  Nokia has a lot of very strong assets, including the Labs it inherited with the merger with Alcatel-Lucent.  No feature asset is helpful if buyers either don’t know about it or don’t understand it well enough to value it.  That should be a much easier problem to fix, but I have to admit that it didn’t prove easy for any of the entities that combined to generate the Nokia of today.  They have to work now to make the positioning of the whole a lot better than the positioning of the parts, and other vendors like Juniper should be doing the same.

Is Verizon’s No-Content-Acquisition Approach a Good One?

With what seems to be an explosion of content provider acquisitions underway, Verizon remains committed to being a content delivery partner, not a content competitor.  With streaming video seeming to be everyone’s direction, Verizon still doesn’t have its own streaming offering.  What makes Verizon think they can buck these trends when competitor AT&T has gone the acquisition route, and what will they need to do for that to happen?

Let’s start with their current financials.  Verizon’s revenues were up 5.4% y/y in their most recent quarterly announcement.  They added almost 400,000 postpay smartphone accounts, the kind mobile operators like best.  Their postpay churn was less than 0.8% for the fifth consecutive quarter.  FiOS revenues grew by 2.3%.  Is all this enough, particularly if Verizon stays with the promise it’s repeatedly made, which was not to get into the content acquisition game?

The first point to make in answering that question is the mobile smartphone postpay numbers.  Verizon continues to do exceptionally well in that space, and it’s where Wall Street knows they should be focusing.  Mobile services are much more profitable than wireline, and operators have known that for a long time.  AT&T has been using DirecTV Now to promote its mobile service by creating a bundle deal, but so far at least, Verizon doesn’t seem to need that.

On the wireline side, particularly with home TV viewing, Verizon has more FTTH than AT&T, and it’s recognized the value of the 5G/FTTN hybrid model of broadband delivery to the home as the direction it needs to go in.  That model offers lower “pass cost” meaning less initial investment to reach a point where the operator is positioned to serve a customer without custom provisioning.  It also offers speeds that are higher than current FiOS customers select.

All this is possible because Verizon has a very high “demand density”, meaning that it has a lot of communications dollars per square mile in its service area.  High demand densities mean your infrastructure returns more on investment, and Verizon’s is a whopping seven times that of AT&T.  That would tend to give Verizon an advantage in deploying the 5G/FTTN hybrid, but Verizon is also planning to use that to compete with AT&T in areas outside Verizon’s home region.

This point about broadband competition versus video competition out of region may be a critical one.  AT&T, with both its DirecTV properties, has been poaching out-of-region TV customers by using either satellite or riding OTT on a competitor’s broadband.  In part, this strategy has grown out of the need to support its own customer base in areas where FTTH was out of the question and even 5G/FTTN hybrids might not be practical.  The combination of FiOS FTTH and 5G/FTTN could give Verizon a practical way of delivering broadband to all its key high-value customers, and at a decent price with decent margins.

The margin implications are then the step to answering the question of whether Verizon’s go-it-alone approach is workable.  Once Verizon commits to 5G/FTTN, it commits to streaming video delivery.  Once it does that, it not only can rework its FiOS delivery strategy in-region to match (it would almost have to do that), but it can then use that same platform for mobile video and out-of-region, whether the customer has Verizon 5G/FTTN broadband or some competitor’s broadband.  That would get Verizon to where AT&T says it wants to be with video, a single platform for all the customers regardless of the delivery model.

The downside of this streaming approach is that everyone who streams to the customer is dependent on the same broadband resources.  Unless an operator wants to risk regulatory intervention, they can’t make their own video better, which they can do with linear delivery.  But Verizon makes it very clear that they don’t like the linear model, and in particular the incremental cost.  If customers are going to stream from Netflix and Amazon and Hulu, even for things other than live TV, then there’s a lot of value in tuning your own TV delivery to match the streaming model and reducing the total equipment you need to support video subscriptions to customers.

A solid margin on video delivery, arising out of strong demand density, would let Verizon off the hook in terms of grabbing additional revenue by owning content.  That might then let Verizon accommodate future models of content delivery.  Listen to Verizon’s McAdam: “…we’re not going to be owning contents [sic] or we’re not going to be competing with other content providers, we’re going to be their best partner from a distribution perspective and I think that makes great sense for the company going forward.”

What is a “best partner from a distribution perspective?”  Verizon might be thinking that a company like AT&T or Comcast, with its own content assets, might not be an ideal delivery partner for competing content.  Could a distributor that owns content decline to carry competing channels that aren’t so popular their loss would cause a viewer revolt?  Could be.

A better distribution partnership could also mean putting less pressure on content providers for pricing or bundling channels.  A lot of negotiations go on now, and if you combine this point with the one about competitive content, you could see that Verizon might believe it would be able to get some deals that content providers wouldn’t make with AT&T or Comcast.

Could better broadband delivery margins let Verizon support a future when TV networks go their own way and offer content directly to the user?  It seems to me that this is the big question for Verizon, the determinant in whether its promise to stay out of content can be kept in the long term.  Linear TV almost cries out for third-party delivery bundling because the cost of getting linear delivery to a customer is too high for each network or studio to go their own way.  On the other hand, if a network has a good inventory of content, do they really need to be part of a bundle put together by any delivery company?  Could Verizon make enough margin on broadband not to stand in the way of networks’ independent delivery?  Do they even need to be a “TV provider”?

Network operators live on margins, profit per bit, and so forth.  You can’t make up for delivery network losses by having your own stuff to deliver, unless it’s truly your own and nobody else can get it.  That’s not true in the streaming video space.  Verizon obviously realizes this, and so they’re wringing the cost of delivery out of their network, focusing on making the thing that they truly can own as profitable as possible.  So is AT&T, but demand density works against them as it works for Verizon.

One thing seems clear, though.  Verizon will have to be able to deliver very strong streaming video if it plans to move to that (as it has to if it uses the 5G/FTTN hybrid) and plans to rely on content partners.  Partners don’t like problems with their stuff being delivered.  AT&T has suffered problems with its own unified platform and streaming service, and it’s still having them.  For now, Verizon has made the best choice for itself, but they’ll have to make darn sure they don’t mess it up down the line, because all the good content properties will be gone by then.

Will Open-Source in Networking Cause or Cure Commoditization?

TV viewers and network professionals alike love the idea of faceoffs.  What would happen if a hippo fought an alligator, a lion fought a leopard?  Hot stuff.  On the network side, we’ve already had “Who wins, Ethernet or IP (or maybe ATM)?” and now perhaps we’re looking at the ultimate tech faceoff, which is “Open-Source Versus Consolidation.”  We’re seeing vendor consolidation potentially driving down the number of vendors and products.  We’re seeing open-source software displace proprietary software, even the stuff embedded in devices.  Will one reinforce the other, or somehow change the market dynamic?

Tech is commoditizing, meaning that vendors in the space are losing feature differentiation.  That happens for a number of reasons, the most obvious of which is that you run out of useful features.  Other reasons include the difficulty in making less-obvious features matter to buyers, lack of insight by vendors into what’s useful to start off with, and difficulty in getting media access for any story that’s not a promise of total revolution.  Whatever the reason, or combination of reasons, it’s getting harder for network vendors to promote features they offer as the reasons to buy their stuff.  What’s left, obviously, is price.

A parallel truth is making price all the more important: networking is itself under benefit pressure.  Network operators have been struggling with falling ROI on infrastructure, and businesses have been struggling to find new productivity gains that would justify additional network investment.  Till these negative pressures are resolved, the lack of benefits to draw on means more pressure on network costs, meaning network equipment prices.

Just as there are parallel factors driving this, there are parallel responses to it.  On the vendor side, commoditization tends to force consolidation.  A vendor who doesn’t have a nice market share has little to hope for but slow decline.  A couple such vendors (like Infinera and Coriant, recently) can combine with the hope that the combination will be more survivable than the individual companies were likely to be.  Consolidation weeds out industry inefficiencies like parallel costly operations structures, and so makes the remaining players stronger.

Or that’s the theory.  Even if we assumed that regulators wouldn’t balk at the notion of an industry of competitors vanishing into the black hole of a single giant, that giant now has no option but to face the continued pressure another way.  If there were another way, all that consolidating would likely never have happened.

On the user/buyer side, the related trend is open-source software and open devices.  “Open” in the former case means that the software is developed not by a single vendor who then gains a measure of control over it (and its buyers), but by a collective who presumably operate for the good of a much larger group.  DT just joined the Linux Foundation Networking open-source group, where things like ONAP live, and they’re an operator not a software developer.  Many operators are getting into open-source in the hope it will cut their costs, improving their ROI.

Open devices are a little different and yet the same.  Open-source software arguably promotes enhanced capability, but most people believe open devices promote or at least acknowledge commoditization.  By developing a technical model for a device that anyone can build to, you increase the number of builders and the competition.  Since you can’t compete on a common device model, that means price competition, which again improves the buyers’ ROI.

Open-source also arguably commoditizes devices directly, because most functionality these days is created by software running on fairly generic hardware.  If you pull the software features away from hardware platforms by making the software available in open-source form, then the platforms themselves are robbed of feature differentiation, have to compete with the open-source solutions, and end up being only price-differentiated.

Commoditization, of course, is the same whether vendors drive it in the hope of surviving (even as an absorbed part of a larger organism) or it’s driven by creating a non-differentiable product model.  The question is not whether you end up with commoditization, but what gets commoditized, and how that impacts the market overall.

Imagine a single giant router vendor, the only one in the universe.  Would that vendor be a benign dictator of the market?  Cisco, the closest thing we have now to that super-predator, is rarely seen as “benign” even today.  The truth is that pure vendor-side commoditization probably has a natural counterforce created by the fact that as a player gets bigger, they tend to be seen as less trustworthy.  A bunch of studies have suggested that you end up with three players through this natural force-and-counterforce faceoff.

Imagine a single open-source project addressing any single functional area, whether it’s server OS or routing instance.  The project is funded by members, and presumably people don’t continue funding it if their objectives aren’t being met.  You end up with other projects with the same goal, and they in turn compete for money/membership.  Doesn’t this end up creating those same three competitors, and therefore is our faceoff a draw?

Nope.  The difference is that because software creates differentiable features, and does so with an overhead and delay modest in comparison with what hardware would generate, it promotes a series of alternative approaches to the same problem, tuned to slightly different needs.  It’s hard to name any single product area where open-source has converged on a single solution.  Even Linux (perhaps the most successful open-source thing of all time) or Java has competition.  As long as there are differences in mission, in requirements, we can expect different open-source projects providing different solutions.

That doesn’t mean that vendors are safe.  What I think we’re seeing is a lot more complicated than hippo versus croc.  Open-source software could be sucking the innovation out of proprietary technology.  We’ve talked for a decade about an increased role for software in networking, a model for network devices that’s similar to one we have for servers, laptops, and even phones.  Network vendors were hardly on board, and now it may be too late.

Buyers have the bit in their teeth.  They know that many vendors now embrace the shift to software-centric networking, but they also know that if that shift is based on proprietary software, it only substitutes one kind of lock-in for another.  Open-source is their answer, and it’s framing the features of the future for virtually everything, and as it does that it’s creating the functional requirements for the hardware that optimally supports it.  There’s an increased interest in making even specialized silicon an option; look at THIS work on FPGA use in Ethernet adapters and the P4 flow control language with plugins for various hardware elements.

Open-source isn’t perfect, though.  Most of today’s open-source projects in networking are flawed, some deeply, in their architecture.  In fact, we don’t have an overall architecture to describe how a software-centric world would serve both the functional and management needs of network buyers.  There is still time for vendors to do something truly insightful and innovative, and address a network challenge the right way, before the broad community driving open-source figures out an approach that can lead to an optimum solution.  If they don’t…well, a big enough croc can eat a hippo.

Can We Expect to See SDN Feature Evolution?

One of the questions raised by the onrush of interest in SD-WAN is where SDN is.  Obviously, two things whose acronyms start with “software-defined” should be related in some way, and I’ve noted in other blogs that SD-WAN and SDN may converge in the longer term.  What are the differences today, and where are we in coming up with a general, converged, model that would result in enhanced SDN features, including my “logical networking” capabilities?

There’s never been a lot of consistency in the SDN space.  The concept got started by some university work that suggested that separating the control and data planes of networks and moving to a centralized control point for forwarding rules would improve both network efficiency and resilience.  This stuff was the basis for the OpenFlow protocol and related device work, driven largely by the Open Network Foundation (ONF).

SDN in ONF form was a pretty radical concept.  You have some central controller that is responsible for framing the routes taken by traffic in the network, and this controller then sends forwarding instructions to each OpenFlow white-box device.  There are a lot of obvious questions raised by this architecture, such as how you ensure you don’t lose control connectivity with the devices or overload the controller.  These have reduced the adoption of ONF SDN beyond the data center, where it obviously can work.
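
For those who haven’t looked at the ONF model closely, here’s a toy Python sketch (not a real OpenFlow stack) of the control/data-plane split: the controller holds the topology view, computes a path, and pushes per-switch match/action rules, leaving the white boxes with no routing logic of their own.  The topology and rule format are invented for illustration.

```python
from collections import deque

# Controller's view: switch -> {neighbor: egress port toward that neighbor}
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s3": 2},
    "s3": {"s1": 1, "s2": 2},
}

def compute_path(src, dst):
    """Breadth-first search over the controller's topology view."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for hop in TOPOLOGY[path[-1]]:
            if hop not in seen:
                seen.add(hop)
                queue.append(path + [hop])
    return None

def push_flow_rules(path, match):
    """Emit one match/action forwarding rule per switch along the path."""
    rules = []
    for hop, nxt in zip(path, path[1:]):
        rules.append({"switch": hop, "match": match,
                      "action": {"output_port": TOPOLOGY[hop][nxt]}})
    return rules

path = compute_path("s1", "s3")
for rule in push_flow_rules(path, match={"dst_ip": "10.0.0.3"}):
    print(rule)   # in a real deployment these would be OpenFlow FLOW_MOD messages
```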

In the data center, though, the main impetus for SDN was tenant/application isolation.  Ethernet LANs have various technologies to create private VLANs, but many cloud providers didn’t like them because of the limits they imposed on the number of VLANs or the fact that these technologies had to be implemented in the devices themselves.  That gave rise to a second SDN model, based on an overlay network, one that uses traditional IP (or Ethernet) as a transport network, a kind of virtual layer 1 or tunnel network, and does its own routing at a higher layer.  That allows the overlay SDN model to support vast numbers of independent virtual networks.  Today, we see this from a variety of sources, including Nokia (who got it by acquiring Nuage) and VMware (who got theirs by acquiring Nicira).
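
To show what “overlay” means in practice, here’s a simplified VXLAN-style encapsulation sketch in Python.  A real implementation adds outer Ethernet/IP/UDP headers and handles endpoint discovery; only the 8-byte shim that carries the 24-bit virtual network identifier (VNI) is shown, and the frame contents are invented.

```python
import struct

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN-style header: flags byte (VNI-valid) plus 24-bit VNI."""
    header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

def decapsulate(packet):
    """Recover the VNI and the original tenant frame."""
    word1, word2 = struct.unpack("!II", packet[:8])
    return word2 >> 8, packet[8:]

tenant_frame = b"\xaa\xbb\xcc\xdd"            # an opaque tenant Ethernet frame
wire = encapsulate(tenant_frame, vni=5001)    # tenant 5001's traffic on the wire
vni, frame = decapsulate(wire)
print(vni, frame == tenant_frame)             # 5001 True
```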

Overlay SDN is easier to extend out of the data center, and in fact Nuage (for example) did that from the first, providing branch/workgroup connectivity to the overlay network.  That arguably creates the convergence between SDN and SD-WAN, since SD-WAN technology was at the same time moving to embrace data center and cloud connectivity as well as the branch connectivity it started with.

To make things even more complicated, bigger network vendors like Cisco tried to blunt any specific drive to SDN by offering application control over connectivity through distributed policy management of existing switches and routers.  This was also characterized (particularly by the vendors themselves) as a form of SDN, so we ended up with three different models.

It’s my view that the industry is gradually drifting toward the universal adoption of an overlay model of networking.  Bodies like the MEF (with their “Third Network”) worked out a multi-network interconnect approach for an overlay network (but didn’t define the overlay).  Overlays are proving themselves in both data center and WAN missions, and vendors are now starting to look at the features and benefits that an overlay could provide.

The most obvious place this new overlay-centricity would influence product directions is in SDN and SD-WAN, which both use overlay technology and which are already facing a converging mission.  SDN overlays could easily be extended out to users instead of simply separating data center tenants, and SD-WAN already has to provide some support for cloud applications, support that could be extended into the data center.

We now have two vendors who are implicitly positioning for the convergence of SDN and SD-WAN.  Nokia/Nuage has always had the capability, as I’ve noted above, but interestingly they’ve elected to frame an SD-WAN offering rather than a unified offering (though it is in fact unified).  VMware has positioned its “Virtual Cloud Network” as a unified overlay model, but it still has a technical separation between their NSX SDN and Velocloud SD-WAN.  The two exemplars of overlay convergence, with totally opposite approaches.  Helpful, huh?

The barriers to unification probably start with the historic aversion of the network industry to the “god box”, a single device that essentially did everything you needed.  Trying to sell this proved to be a challenge, since most buyers had a very limited set of missions and had to be made to understand stuff they never thought they’d need to value the universal capabilities of the device.  Then there was the conviction of buyers that if a box cost X dollars for ten different missions, and if they valued only one, they should be able to find a box costing X/10 to serve their needs.

Another problem is that the quadrant charts of analyst firms rate vendors/products in specific categories, and something that fits into more than one is hard to position in these reports.  A “god-box” category is unattractive and difficult to “quadrantize”, so to speak, and analysts love new product categories that justify new reports, so there’s not much incentive to reduce the product category range.

Vendor influence is another challenge for a single-overlay-network positioning.  An overlay network provides all the features a user sees, because it rides on top where the user connects.  By its very nature, it reduces the differentiation available to the underlay network, and that’s where switches and routers deploy.  If anything you could overlay SDN/SD-WAN on top of would work to build the network of the future, the big vendors see this as hanging out a sign saying “White boxes welcome!”  Since venture capitalists want “laser focus” from the startups they fund, small vendors tend to push simple missions and messages only.

That leads to the question of publicity.  We have stories on both SDN and SD-WAN, and users are led to see the two product classes as totally independent rather than as different slants on a common (overlay network) approach.  It’s difficult to sell network technology door to door, and impossible for smaller vendors.  That means getting your name and product in front of prospects on a larger scale.  What better way than to get your name into the media?  Since editorial mentions drive website visits, then sales calls, then sales, you don’t want to mess with the publicity side, and remember we have SDN and SD-WAN stories, not overlay/god-box stories.

SDN cannot evolve without broadening its scope to include what SD-WAN does, and vice versa.  Whether there’s going to be an SDN evolution seems likely to be indicated by the actions of the two SDN vendors who have SD-WAN-ish plans or capabilities.  Do their actions indicate there will be SDN evolution?

No, they don’t, and some of my friends at the vendor level have a suggestion as to why, one related to my points above.  Vendors see different buyer constituencies for data center networking and the WAN, which encourages them to keep the SDN and SD-WAN products distinct.  In addition, it’s possible that each product area might arouse the competitive push-back of vendors like Cisco, if that vendor has a product strategy in either the branch/WAN or data center.  Separating the pieces might facilitate a quiet introduction of the new stuff.

That doesn’t mean that there won’t be tighter integration of SDN and SD-WAN products down the line.  Here the two vendors can each offer indicators.  VMware, because they have a more unified positioning of SDN and SD-WAN and yet have less functional integration of their offerings in the two areas, could signal us by providing an NSX on-ramp from Velocloud.  Competitor Nokia/Nuage has fairly complete integration, not yet leveraged in positioning.  Positioning has little inertia, so they could improve their story.

All of this depends on my final point, which is the technical value of the integration.  SDN applications for overlay technology involve creating an application subnetwork using overlay addressing to isolate applications and tenants in a cloud data center.  SD-WAN uses overlay technology to create a VPN-like subnet that could extend, replace, or overlay an MPLS VPN.  What’s the intersect?

There is really one intersect, and that’s the gateway that provides user access to an application subnetwork.  That gateway has always spoken “external IP” into an IP network where users (including other applications) can access it.  The “internal IP” used within the application subnet isn’t visible to the outside.  There doesn’t seem to me to be any utility in making application components that were declared not to be exposed part of the company VPN; only the exposed gateway(s) belong there.  Logical networking enhancements only apply to visible network applications and components, which excludes the internal subnet connections of multi-component apps that SDN would typically connect.
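
A minimal sketch of that gateway pattern, with invented names and addresses: only the gateway’s “external IP” is visible on the company VPN, while the components behind it keep their internal overlay addresses and are reached only through the gateway.

```python
# Internal overlay addresses of the application subnet (never exposed to the VPN).
APP_SUBNET = {
    "order-frontend": "10.200.1.10",
    "order-pricing":  "10.200.1.11",
    "order-history":  "10.200.1.12",
}
# The gateway speaks "external IP" and exposes only the components it should.
GATEWAY = {"external_ip": "172.16.5.1", "exposes": ["order-frontend"]}

def vpn_visible_endpoints():
    """What the SD-WAN/VPN side of the network is allowed to see."""
    return {name: GATEWAY["external_ip"] for name in GATEWAY["exposes"]}

print(vpn_visible_endpoints())   # {'order-frontend': '172.16.5.1'}
```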

If my view here is correct, a vendor who decided to declare for true convergence of SD-WAN and SDN at both the technical level and the positioning level would have to validate a benefit for the convergence, at least for now.  I don’t think that this would be easy in the current market, and that probably means that the two product categories will stay separate for some time, and that overlay SDN isn’t likely to get many advanced logical networking features in the near term.

Another “Logical Network” SD-WAN Announcement

You might wonder why, if logical networking is such a great idea as I’ve suggested it is, SD-WAN vendors haven’t been flocking to it.  They still aren’t, but there’s at least movement in that direction.  Cato Networks has announced an “identity-aware routing” feature for its SD-WAN product, and I think it demonstrates that logical networking is coming—perhaps even faster than I’d thought it might.

As I noted in an earlier blog for which I provided a FIGURE, logical networking is based on a “what” versus “where” model of routing, which means that it has to work on logical identity rather than on IP addresses that indicate a network connection point rather than what’s connected there.  Cato’s term of “identity-aware” routing is certainly evocative of that capability, so it’s worth looking at a bit of the details they’ve provided.

If you refer to the figure, logical networking has the same “bottom” layer as any SD-WAN, which is the ability to encapsulate packets and route them over an SD-WAN VPN based on a set of useful routing rules and network policies.  The rest of the layers depend on the ability of the SD-WAN to recognize not just addresses but some logical identity set.  This ability is derived from the outer, “registration” layer, where a given connection is associated with a logical identity.

Cato gets their logical identity information from directory sources (Microsoft’s Active Directory or LDAP), which are frequently used tools for identifying users and services.  The information on their site isn’t highly detailed, which leaves some questions about exactly what can be done.  To make matters more complicated, there are two different “Active Directories”, one based on Windows Server (the original one) and one designed for Microsoft Azure and the cloud.  The latter, obviously, has features that address things like RESTful services, microservices, SaaS, and so forth, that the former does not.  The Cato material references Active Directory rather than Azure Active Directory, which I presume means they support the original Windows Server model.
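
As an illustration of the kind of integration this implies (not a description of Cato’s implementation, which isn’t documented at this level), here’s a hedged Python sketch that pulls identity attributes from Active Directory over LDAP using the ldap3 library.  The server name, credentials, base DN, and attribute choices are placeholders.

```python
from ldap3 import Server, Connection, ALL, NTLM

# Placeholder AD endpoint and service account for the illustration.
server = Server("ldaps://ad.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc-sdwan", password="********",
                  authentication=NTLM, auto_bind=True)

def lookup_identity(sam_account_name):
    """Map a login name to the group/department attributes an SD-WAN node might route on."""
    conn.search("dc=example,dc=com",
                f"(sAMAccountName={sam_account_name})",
                attributes=["memberOf", "department"])
    entry = conn.entries[0]
    return {"user": sam_account_name,
            "groups": list(entry.memberOf.values),
            "department": entry.department.value}

print(lookup_identity("jdoe"))
# e.g. {'user': 'jdoe', 'groups': ['CN=Engineering,...'], 'department': 'Engineering'}
```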

The description of AD-driven registration in the Cato reference I’ve provided here isn’t definitive with respect to just what can be registered.  The key sentence is “Identity-awareness completes the evolution of routing by steering and prioritizing traffic based on organizational entities — team, department, and individual users.”  To me, this means that identity routing means user identities, not application, service, or process identities.  In a later sentence, Cato says it supports “Business process QoS where prioritization is based not just on application type but the specific business process.”  This seems to suggest there is some application/process identity registration.  The examples Cato provides for directory integration seem explicit to user identity, though.
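
And here’s what “steering and prioritizing traffic based on organizational entities” might look like as a policy table, with the classes, departments, and applications invented for the example: the forwarding decision keys on who the user is and what business process the traffic belongs to, not just on a destination address.

```python
POLICY = [
    # (department, application)      -> (priority class, preferred path)
    (("Finance",     "erp-closing"),   ("P1-realtime",   "mpls-primary")),
    (("Engineering", "code-repo"),     ("P2-business",   "broadband-a")),
    (("*",           "*"),             ("P3-besteffort", "broadband-b")),
]

def classify(identity, application):
    """Return the priority class and path for this user's flow."""
    for (dept, app), decision in POLICY:
        if dept in (identity["department"], "*") and app in (application, "*"):
            return decision
    return ("P3-besteffort", "broadband-b")

print(classify({"user": "jdoe", "department": "Finance"}, "erp-closing"))
# ('P1-realtime', 'mpls-primary')
```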

This doesn’t mean that Cato doesn’t recognize applications, even if they don’t get that data from a directory registration process.  Cato’s basic SD-WAN capability includes the ability to “detect and classify hundreds of SaaS and datacenter applications regardless of port, protocol, or evasive technique and without SSL inspection”.  Obviously “hundreds” doesn’t mean “all”, and since the information doesn’t come from a formal directory, it likely doesn’t include the logical-name hierarchy that lets applications be divided into groups and dissected into components/services.

Taking an explicit directory-integrated approach to identity is smart; there are many directory resources used for access control, application component registration, and so forth.  Even single-sign-on systems have directories.  A directory system is easy to explain to users, and it doesn’t raise the prospect of maintaining a new list of stuff.  Best of all, it wouldn’t be rocket science to grab more information from a directory you’d already integrated with, or to access different ones.  Directory systems also typically include information useful in setting priorities and even in barring connections.

Whatever Cato currently provides in terms of directory-based registration of application objects/entities, there’s no question this is an advance in the direction of logical networking.  Very few SD-WAN products have any support for user/process identity today, but there are more developments in the space in the current product pipeline.  I’m glad to see Cato take a step in the right direction, even if it turns out that their capability is limited to the registration of user identities rather than full process/service/application registration.  I think they’ll end up in the right place—full registration and logical routing—eventually.

It would be easier to cast this as an avalanche of logical-network awareness if Cato followed the standard model for SD-WAN.  Instead, Cato offers what they call “SD-WAN-as-a-Service” or “Cato Cloud”, which means that rather than being an arbitrary over-the-Internet connection for the SD-WAN VPN, Cato uses a subnetwork of Cato Cloud points of presence, and routes by hop between them to make end-to-end connections.  Policy controls let users pick the routes they want, based on performance and application priority.  This is why the Cato model is an as-a-service approach; the Cato Cloud is the logical foundation for all users’ SD-WAN VPNs.

Another point of variation from the SD-WAN norm in the Cato approach is a more explicit MPLS-replacement direction.  Obviously the savings a user could achieve by completely eliminating MPLS would be larger than could be obtained simply by connecting thin-location sites another way.  Also obviously, the commitment a user makes to Cato is much more significant if a complete MPLS replacement is the target, and my enterprise surveys suggest that the biggest prospective SD-WAN users aren’t particularly eager to take this kind of risk.

The as-a-service position might also impact the sales channels accessible to Cato.  Enterprises would obviously be targets for direct sale, but today most SD-WAN is sold either by managed service providers (MSPs) or communications service providers (CSPs, meaning network operators like the telcos and cable companies).  MSPs might find the Cato approach better than the average all-in-the-Internet model, but they might also (rightfully) see Cato as a competitor.  CSPs might also see Cato more competitive to their core business services than other SD-WAN offerings.  With most new SD-WAN sales coming from the MSP/CSP channel, this could limit Cato’s success.

I like Cato’s identity direction, but it’s hard to say what impact the Cato story will have on the market in general because of the differences in their overall SD-WAN approach.  I don’t have any survey or model data on SD-WAN that could shed light on whether the broad Cato SD-WAN-as-a-Service approach is the right one for the market.  Given that, I can’t assess whether Cato’s directory-and-identity routing approach would have a direct competitive impact on the feature directions other SD-WAN vendors take.  However, I do think that Cato’s step demonstrates that the market forces driving the evolution of SD-WAN features in general are leading toward what I’ve called logical networking.  That means that other SD-WAN vendors are very likely to see the same market forces and make the same logical-networking decisions as future development priorities.

It would be nice to have more of the details on Cato’s approach here.  I told Cato about our briefing policy and asked for an analyst deck before the announcement, but no deck was provided (I was told one was in the works) and I don’t schedule briefings without an explicit document to support claims.  I’d suggest that anyone interested in the details of the Cato implementation check to see if they’ve posted more complete documentation or ask for the details.  If Cato does provide me better documentation, in the form of a website link, I’ll add it as a comment to my posting of this blog on LinkedIn.

Cloud Wars: Where Exactly is the Battlefield?

The battle for public cloud supremacy is far from over, but it may be taking a new direction.  Amazon has announced its Snowball edge appliance, which some characterize as an extension of the early Greengrass technology that let users run AWS elements on premises, meaning at the edge.  This comes as Microsoft is expected to announce significant Azure gains, which some on the Street estimate could approach 100% year over year, based in part on the symbiosis between Azure and on-premises Windows servers.  The IEEE has adopted the OpenFog Consortium’s reference architecture for edge computing, and Gartner says that in four years, 75% of enterprise data will be processed at the edge, not in the cloud.  Is an edge revolution happening?

Some of this is overblown, of course.  First, as I’ll explain, Snowball and Greengrass are really aimed at different stuff.  Second, if that 75% of data that’s edge-processed includes work handled on the premises, then current premises data centers and desktops are the edge too, and we’re already well over that 75%.  If Gartner means that 75% of data will be processed by fog computing or Snowball-like cloud extension, good luck with that one.  Clearly we need to explore what “edge computing” is, and even what the “edge” is, specifically.

The cloud computing market has been blundering through a series of misunderstandings from the first.  There was never any chance that everything was going to migrate to public cloud services.  Our model has consistently placed the maximum percentage of current IT applications migrating to public cloud at around 24%.  At the same time, though, our model said that new applications written for the cloud could almost double IT spending.  That means that the total new cloud opportunity would roughly equal what’s currently spent on legacy IT missions.

Enterprises told us, starting in about 2012, that their cloud plans were to build extensions to current business applications out from “on-ramps” to these current applications, into the cloud.  These extensions would enhance worker productivity, improve inventory management and cash flow, and serve other recognized business needs.

Cloud front-ends to existing applications may involve data-sharing between the premises and the cloud, which is what Snowball is really for.  It lets Amazon customers move large amounts of data back and forth, something that’s essential for both scaling and redeployments associated with cloud backup and for front-end transaction editing.  Amazon with Snowball is targeting the enterprises that, so far, Microsoft has been winning with Azure.

I think both Amazon and Microsoft believe that they will find it difficult to make a business case for deploying their cloud services in their own edge-hosting facilities.  Snowball and Greengrass show that Amazon has no intention of deploying edge hosting in the near term at least, and Microsoft has created its equivalent premises-edge model by linking Azure to Windows.  The problem with edge hosting is the economics.  An “edge” that’s not close to its user base isn’t an edge at all, and so edge deployment means pushing computing out close to every customer not just to a couple.  That demands a lot of real estate and also reduces the utilization efficiency of the servers.

Network operators already have the real estate, of course, but they are hardly powerhouses in the cloud computing space.  I asked a Tier One when they thought they’d deploy edge computing resources to support public cloud services, and they responded “When there’s a significant demand for them.”  I asked how that demand could develop, absent any facilities to realize it, and they shrugged.

The problem operators face is that in 2022, the end of Gartner’s four years, the opportunity for network operators to justify data centers through public cloud services is declining as a percentage of total data center opportunities.  Not only that, my model says that it would be 2023 or 2024 before the likely realization of all the network operator data center deployment opportunities combined could build out enough edge density to offer widespread edge computing.

That’s why Snowball is important.  I would argue that the opportunity Amazon is targeting with Snowball is the one enterprises are already doing, which is making the cloud into a front-end extension to transaction processing.  That can require access to a lot of data that, if it were accessed on premises, would run up transfer charges and introduce delays.  Greengrass enables the premises edge, while Snowball doubles down on the cloud as a front-end.  Amazon’s Snowball may acknowledge that enterprise edge computing isn’t exactly looming.

The premise connecting edge computing to events is simple: shorten the control loop.  The greater the distance between an event source and the event processing, the greater the latency injected into the round trip between event and response.  There are obviously events that require quick action; most of those associated with process control come to mind, and then there’s the ever-popular self-driving cars and self-regulating intersections.  The problem is that these events require a chain of future application decisions, each of which has technical, policy, and financial issues to deal with.  Finding the magic event-and-edge formula for enterprises isn’t going to be easy, or quickly accomplished.
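
To put rough numbers on the control-loop point, here’s a back-of-the-envelope Python calculation.  The figures (speed of light in fiber of roughly 200 km/ms, per-hop and processing budgets, distances) are generic assumptions for illustration, not measurements of any provider.

```python
FIBER_KM_PER_MS = 200          # ~2/3 of the speed of light in a vacuum

def round_trip_ms(distance_km, hops, per_hop_ms=0.5, processing_ms=5.0):
    """Estimate event-to-response time: propagation both ways, per-hop delay, processing."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + 2 * hops * per_hop_ms + processing_ms

for name, km, hops in [("on-premises edge", 1, 1),
                       ("metro edge site", 50, 4),
                       ("regional cloud region", 1500, 12)]:
    print(f"{name:>22}: ~{round_trip_ms(km, hops):.1f} ms event-to-response")
# on-premises edge ~6.0 ms, metro edge ~9.5 ms, regional cloud region ~32.0 ms
```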

The most likely sources of “events” that would require edge processing are actually related to things like consumer streaming video and ad delivery.  Personalization and contextualization are generally related to a combination of where a person is and what they’re doing, which can be event-based.  Since ad-related and video-related caching is also an edge function, it looks to me like edge computing will have to be driven by these applications, and any exploitation of the edge by business applications will have to either exploit “edges” deployed on the customer premises, or in-place edge resources justified another way.

The good news is that it’s easier to justify edge computing based on a small number of related applications and a limited number of users who’d have to deploy it.  Video/ad-related services are operator-targeted, and a solid hundred operators who bought into the notion would drive over 80% of all edge opportunity.

The bad news is that even with edge-hosting resources in place (justified by some other mission), we still have to deal with the question of linking event processing and transaction processing in a way that delivers clear benefits.  Both Amazon and Microsoft have been working on that, but optimal event computing means what’s now being called “Function-as-a-Service”, and it’s a different style of programming and application workflow management.  A clear model has to emerge, and then be applied to each application/vertical opportunity.
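
To illustrate what that event-to-transaction bridging might look like in a FaaS style, here’s a purely hypothetical Python sketch.  The event fields, the correlation store, and the submit_transaction() hand-off are all invented for illustration; real function platforms keep state externally and differ in the details.

```python
# Hypothetical sketch of a stateless "function" that turns raw events into
# transactions.  Event fields, the correlation store, and submit_transaction()
# are assumed for illustration; real FaaS platforms differ in detail.

from typing import Any, Dict

CORRELATION_STORE: Dict[str, list] = {}   # stand-in for external state (FaaS itself is stateless)

def submit_transaction(txn: Dict[str, Any]) -> None:
    """Stand-in for handing a completed transaction to the back end."""
    print(f"committing transaction {txn['id']} with {len(txn['steps'])} steps")

def handle_event(event: Dict[str, Any]) -> None:
    """Accumulate related events until they carry a complete meaning,
    then hand the result off as a transaction."""
    key = event["correlation_id"]
    pending = CORRELATION_STORE.setdefault(key, [])
    pending.append(event)

    if event.get("final"):                 # context complete: build the transaction
        transaction = {
            "id": key,
            "steps": [e["payload"] for e in pending],
        }
        submit_transaction(transaction)
        del CORRELATION_STORE[key]
```

The point of the sketch is the workflow shape, not the specifics: stateless functions accumulate context somewhere external until an event sequence carries a complete meaning, and only then is a transaction handed off to the back end.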

Events will eventually drive the cloud, and will eventually create edge/fog computing, but the applications that drive those events aren’t ones enterprises will run.  We’ll have to stop looking for event-killer-apps among the business applications and focus instead on the consumer space.  That’s what’s going to get those edge data centers built.

What the Network Vendors Should REALLY Fear About Amazon

The story that Amazon is looking to get into the white box switch business is probably true at some level.  Perhaps they really want to sell them, or perhaps they’re simply looking to design their own hardware for their cloud data centers.  In either case, it should indeed make the network vendors who rely on switching (which is most of them) shiver a bit.  It also should tell us something about how vendors are approaching their own business.

Back in the ‘80s, I used to teach network salesforces how to sell.  One of the mantras was “feature, function, benefit”, meaning that you told a prospect about a feature, you told them what that feature did, and then you told them how it benefited them.  The point behind this progression was that people bought stuff because it did something good for them.

A corollary concept to this is the axiom that people will buy the stuff that’s best for them.  You can connect the two and say that they’ll buy the one whose features create the most benefits, and that would also be true.  But suppose the features of every vendor, every product, are the same.  What you then buy is the cheapest.

What’s been happening in networking for the past ten years or so is that the pace of creating relevant features has slowed.  The new box is pretty much the same as the old, feature-wise.  That means that it’s the same as the old, benefit-wise, too.  That in turn means that unless it’s cheaper, there’s nothing you can say for it.  We’ve fallen into price differentiation because the new features have dried up.

This isn’t the fault of buyers; it’s the fault of sellers.  We can’t expect enterprises or network operators to pay staffs of great thinkers to come up with network features that they hope somebody will build.  They expect the sellers to do the great thinking, and the sellers have fallen down on the job.

When I started this year, I was working to figure out what the major service and product trends in networking would be.  Many of you have read my pieces on logical networking, NaaS, and SD-WAN, all of which came out of this review.  The review provided other interesting insights, and one of them was that well over half the people who were actively pursuing network procurements believed that there were network features that would have made their network applications better, but that vendors hadn’t bothered to develop them.

It gets worse.  Of that group, almost 80% said that they believed that there were business benefits left on the table because their networks couldn’t address them.  Just short of that number believed that the cloud and virtualization demanded a new model of networking.  None said they had heard of such a model from their vendors.

The point here is simple.  Amazon is hardly a switch player; they’re a mass-market retailer.  If they believe there’s an opportunity for them in switching, they believe it’s a mass-market opportunity.  No special features, no new applications to teach users: buy a box, plug it in, and may the cheapest box win.  The only alternative position that would make sense is that switching isn’t a commodity market because there are rich new things that networking could address, and only specialists can help address them.  So where’s that story?

The problem here, I think, is connectivity-centricity.  I remember when computer-to-computer communications took place at 2400 bits per second.  Today, kids would laugh at anything that slow and businesses probably couldn’t even turn on their lights with that kind of connection.  But faster connection isn’t necessarily more useful connection.  Even twenty years ago, I remember an Intel engineer telling me that in the newest and fastest chips, almost three-quarters of the incremental power was going into making the GUI pretty.  That says that the utility of compute power had more to do with presentation than with the quantity and quality of data.  Might this also be true of the networks?

I also found in my discussions with users that the term they thought best described the architecture of the future network was network-as-a-service, and as I said in a prior blog, the definition of NaaS they liked best was “a model of network service where connectivity is established where and when it’s needed.”

In our connection-centric mindset, this has been interpreted as meaning “to the sites I need connected”, but you can see that’s not really what users had in mind.  There would be little variability in how many sites a company had or where they were located, after all, so the “when” and “where” would be static.  Users obviously have something more dynamic in mind.

What?  Way back in the late 1970s, work began on a specification called the “Basic Reference Model for Open Systems Interconnection”.  The “OSI model” had seven layers (often today called “levels”), and almost everything we do in network equipment today stops with Layer 3, which means IP.  Site networking is Layer 3 networking, for example.  What’s above that?  Is there something that this almost-ancient model got that was lost somewhere along the way?

Layer 4, transport, does exist in the form of TCP, and it handles flow control.  Layer 5, the session layer, is where the kind of ad hoc relationships between users and resources fits, and remember that seems to be what users think NaaS should provide.  Layer 6 deals with information formatting and presentation, and Layer 7 is the application layer where communications control, relationships, and identities reside.  The user was presumed to connect at Layer 7.

Network vendors hunkered down at Layer 3 because it was easier, because site networking was necessary before anything else could be connected, and because the OSI model said that everything from Layer 1 to Layer 3 was “the network” and the rest was in the end-user domain.  Not only that, those additional layers explicitly created an overlay on top of “the network” itself.

Today, people in many standards groups are looking at standardizing headers and features that fall within the definition of those higher layers.  It’s not that we don’t have people claiming to have “7-layer architectures”.  The problem is that they’re inventing the other layers, not doing what the model described.

The solution to competition like Amazon, or to commodity network hardware in general, is to accept that “the network” was supposed to be virtual or logical from the first, from those early glory days of the OSI model.  What we think of as “NaaS” is really a modernized formulation of an overlay network model that provides the kind of connectivity users want from the port side, and links to those old commodity site networks on the trunk side.

We could build NaaS with a product family like SD-WAN.  We could deploy it in universal CPE (uCPE), in edge computing, in a specialized network device, in a lot of different ways.  We could integrate it with traditional Layer 2/3 (switch/router) products, and thus differentiate them.  There are plenty of ways that traditional vendors could get into the NaaS space and elevate networking beyond the commodity level.  Sure, they might be admitting to their white-box and commoditization problems, but don’t most people see those problems already?  Not admitting to morning isn’t going to stop the sun from rising.

I think that something like NaaS, something like the “logical networking” concept I’ve blogged about, is a reasonable, perhaps even optimal, solution to the challenges of networks-beyond-sites, but I’m not promoting that.  What I’m promoting is remembering the past.  Remember Novell?  They were the startup of the age in their time, framing file sharing and printer sharing and quickly becoming the player in the “network operating system” space.  In a couple of years they were gone, because they couldn’t figure out what came next: virtual resources, virtual machines, containers, and more.  Myopia is provably fatal.

Amazon’s white-box aspirations, whatever they are, threaten the current network vendors only because the current network vendors elect to fight head-to-head where there’s no more feature differentiation to be had.  The real risk that players like Amazon, Google, Microsoft, and a host of startups pose to the network giants isn’t that they’ll get into the switching business.  It’s that they’ll go higher on that old OSI stack and seize the real value.

Looking at Logical Networking from Both Sides

We looked at clouds from both sides, so the song says, and we probably need to do that with “logical networking” too.  From the bottom, SD-WAN is a simple way of extending VPNs, but from the top it’s different.  I’ve said in many past blogs that SD-WAN was the on-ramp to a new logical networking and network-as-a-service (NaaS) model.  Some vendors realize that, but not every SD-WAN implementation leads to logical networking, and from another angle, some non-SD-WAN implementations may be nosing in that direction.

The industry really doesn’t have a definition for “logical networking” (that’s no surprise since I think I first used the term).  In the context I’ve used it, logical networking means the set of tools and facilities needed to establish, maintain, and manage connectivity between logical entities regardless of where they are and how they’re connected.  It’s a network of the “What” and not the “Where”, in short.  Because virtualization necessarily creates a “What-level” abstraction, I think logical networking is an inevitable consequence of virtualization.

We don’t really have a good definition for network-as-a-service either.  Most of the definitions say it’s a subscription form of networking, something that lets you have connectivity without owning network equipment and diddling with configuration and parameterization.  But isn’t that pretty much what VPNs were supposed to be?  Or SDN?  In computing, the “as-a-service” model means that you get an application in usable form, for the asking.  Should NaaS then be networks-for-the-asking?

I asked a bunch of enterprises what NaaS meant to them, and the results were all over the place.  I took some common points and fed them back, and what I finally found the most resonance for was “NaaS is a model of network service where connectivity is established where and when it’s needed.”  Users also liked the idea of explicit connectivity rather than open, permissive, Internet-like connectivity.  They preferred using logical names for users and applications, liked the names to follow the things they represented even if they moved around, and thought that NaaS was the natural communications service for virtualization and the cloud.  Those are the features I’ve said are the foundation of “logical networking”.  Logical networking is an implementation of NaaS principles as users see them.

The next question, of course, is how you actually do logical networking, and here there’s some history in both the thinking and the implementations.  The answer is another layer, something that sits above the level of traditional IP networks (“Where” networks), which rely on network addresses that designate a specific, fixed exit port.  Perhaps the best, and least-known, example is something called the “Next-Hop Resolution Protocol” or NHRP (RFC 2332).

NHRP was designed to make it possible to extend IP connectivity over a switched-virtual-circuit network like the old frame relay and asynchronous transfer mode (ATM) networks that never made much of a hit in the market.  The notion was simple: an IP packet got to the edge of one of these networks (which, with typical standards-body interest in creating acronyms, was called an “NBMA” or non-broadcast multi-access network).  There, a routing table found the address of the NBMA exit point associated with the destination, a virtual circuit was set up (if there wasn’t one already), and the packet was sent on its way.  At the other end, normal IP routing applied.
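
Here’s a simplified Python sketch of that resolution step.  The table contents, the prefix-keyed lookup, and the circuit-setup messages are invented for illustration (a real implementation would do longest-prefix matching and speak the actual RFC 2332 protocol), but the shape of the logic is the point.

```python
# Illustrative sketch of NHRP-style resolution: map a destination to an NBMA
# exit point, set up a virtual circuit if needed, then forward the packet.
# Simplified: keyed by prefix string rather than doing longest-prefix matching.

RESOLUTION_TABLE = {
    "10.1.0.0/16": "nbma-endpoint-A",   # destination prefix -> NBMA exit address
    "10.2.0.0/16": "nbma-endpoint-B",
}

OPEN_CIRCUITS = set()  # virtual circuits already established

def forward(dest_prefix: str, packet: bytes) -> None:
    """Resolve the NBMA exit point for a destination and send the packet."""
    exit_point = RESOLUTION_TABLE[dest_prefix]
    if exit_point not in OPEN_CIRCUITS:          # set up the SVC on first use
        print(f"setting up virtual circuit to {exit_point}")
        OPEN_CIRCUITS.add(exit_point)
    print(f"forwarding {len(packet)} bytes via {exit_point}")

forward("10.1.0.0/16", b"hello")   # first packet triggers circuit setup
forward("10.1.0.0/16", b"again")   # subsequent packets reuse the circuit
```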

There’s no reason this couldn’t be done for logical networking.  We build a higher layer, corresponding to the IP network in NHRP.  We use MPLS VPNs, the Internet, or any other set of connection facilities, as though they were NBMAs, and we use our higher-layer routing to identify the right NBMA and exit point.  That’s how SD-WAN works in NHRP language, except both the higher layer and the transport layers talk IP and we’re using the SD-WAN layer to extend a private VPN address space.

There would actually be more than one “higher layer” involved if we look at all the features we’d expect from logical networking.  THIS FIGURE shows the logical layer structure needed to offer a complete logical-network implementation.  Network users (real humans or application components) are first registered, meaning that they are assigned or associated with a logical name.  The logical names would likely look like a qualified URL, something like “me.mybranch.mycompany” or “update.payroll.employeeapps.mycompany”.

Logical names would likely translate to an address, but the address would be a logical-network address.  This address would designate the place on the logical network to which the name was currently associated.  Changes in location would be reflected in the logical-name table, so a user that moved around or a redeployed or scaled application would get a new address.

Below that, there would be a logical connectivity and policy layer, where policies determined what connection associations were permitted and how they were to be treated in terms of routing, rerouting, QoS, etc.  Think of this as a firewall process that’s closed by default and opened only where a rule is provided.  This layer would then connect with the encapsulation and transport routing process that would move packets over one of the available transport networks to the exit point, where it would re-enter the logical network stack to be delivered.
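
A minimal sketch may help make that layering concrete.  Everything here is hypothetical and invented for illustration: the registry, the policy set, and the transport hand-off are stand-ins for the registration, logical-addressing, connectivity/policy, and encapsulation layers just described, not any vendor’s implementation.

```python
# Hypothetical sketch of the logical-network flow: register a logical name,
# resolve it to a logical address, apply a default-closed policy check, and
# hand the payload to an underlying transport for encapsulation.

NAME_REGISTRY = {}     # logical name -> current logical address
POLICY_RULES = set()   # (source, destination) pairs explicitly permitted

def register(logical_name: str, logical_address: str) -> None:
    """Associate (or re-associate) a logical name with its current address."""
    NAME_REGISTRY[logical_name] = logical_address

def send_via_transport(logical_address: str, payload: bytes) -> None:
    """Stand-in for encapsulation and routing over an available transport."""
    print(f"encapsulating {len(payload)} bytes toward {logical_address}")

def connect(source: str, destination: str, payload: bytes) -> None:
    """Deliver a payload only if an explicit policy rule opens the connection."""
    if (source, destination) not in POLICY_RULES:
        raise PermissionError(f"{source} -> {destination} not permitted (closed by default)")
    address = NAME_REGISTRY[destination]   # resolve the "What" to a logical address
    send_via_transport(address, payload)

# A redeployed application component re-registers under a new address, and
# traffic follows the name rather than the location.
register("update.payroll.employeeapps.mycompany", "logical:edge-7")
POLICY_RULES.add(("me.mybranch.mycompany", "update.payroll.employeeapps.mycompany"))
connect("me.mybranch.mycompany", "update.payroll.employeeapps.mycompany", b"txn")
```

The design point worth noting is the default-closed policy check: unlike Internet-style permissive connectivity, nothing flows unless a rule explicitly opens it, which matches the explicit-connectivity preference users expressed.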

If we refer back to the figure, today’s SD-WAN products typically handle the inner layer and may offer some basic features at connectivity and policy layers.  They don’t support logical addressing or registration.  Standards aiming at SD-WAN have even more limited scope; most deal only with behavior in the inner layer.  The higher layers of the figure represent the connection, policy, and security value-add points for logical networking, which is why I say that most SD-WAN solutions today don’t support the more advanced feature model I believe is emerging.

To me, logical addressing and explicit registration are the things that separate traditional overlay networking from logical networking and NaaS.  Because these things sit not only above IP but above the overlay networks of traditionally modeled SD-WANs, it’s theoretically possible that vendors who didn’t see themselves as SD-WAN vendors, at least not at first, could develop those layers and end up with a credible logical-network offering.

The most obvious source of logical networking outside SD-WAN would be SDN players.  There are probably a dozen different overlay-SDN models for data center tenant separation, and some SDN solutions aimed at cloud and container networking.  An overlay network can be turned into SD-WAN simply by adding some client “nodes” that can be stuck in appliances and hosted in branch locations or deployed in the cloud to serve cloud-hosted components.

This doesn’t get you to logical networking or NaaS, though.  SDN vendors have not seemed interested in the higher layers in my figure, in fact.  Nokia/Nuage has, from the first, supported both data center networking and branch networking, but only fairly recently has the company positioned itself as an SD-WAN player.  They do not at this point support the higher logical-networking layers of my model.  VMware bought an SD-WAN vendor and wrapped its entire NSX overlay SDN position in the tag line “Virtual Cloud Network”, but so far has not advanced into the logical-network layers either.

I’m hearing reports from operators and larger enterprises that vendors whose products are aimed more at security and cloud connectivity, and not specifically called “SD-WAN”, are emerging as well.  Some of these, say my contacts, are claiming to have more of the logical networking features that the figure illustrates.  That raises a question about the future of “SD-WAN” as a term.  If the SD-WAN market as we know it ossifies around the basic model, would vendors find it easier to claim they’ve invented another product category?  Analyst firms who do sector reports love that; they get to do more reports and ultimately charge more.  The media might like it too, since a new category seems revolutionary, and revolutions are easier to cover than evolutions.

How that will turn out may depend on the small number of SD-WAN vendors who have logical networking features already.  As I’ve noted in prior blogs, the vendors in the space are uniformly weak in positioning their assets.  Some buyers who have been involved in RFP activity over the last couple of months have complained to me that “nobody really gives us a full picture of what they can do.”  That not only implies that vendors are shooting behind the SD-WAN duck, but also that users are seeing at least some of the bigger picture.  They recognize they don’t have the full story.

Vendors outside the SD-WAN space are seeing some of that bigger picture too.  I’ve seen a couple of NDA presentations that make a stronger case for logical networking than the SD-WAN vendors make, even those who have it.  That might stimulate some better positioning from SD-WAN vendors, or it might be the opening for that “new product category”.  Who sings best, and fastest, wins.