Open Devices and the New Network Model

I want to pick up on yesterday’s blog about the “new network”, to illustrate how a network operator is responding to the pressures of profit per bit on conventional connection/access services.  Remember that operators have been facing declining profit per bit for over a decade, and this pressure is the force behind declining budgets for network enhancement and for various initiatives to lower costs.  Among those are initiatives to replace proprietary devices, which operators feel are inordinately expensive, with open software, servers, or white boxes.

AT&T has done a lot of interesting things to open up networking and reduce capex and opex, the latest of which is DDC, its distributed disaggregated chassis white-box form factor.  It’s getting a lot of attention in the telco world, as THIS Fierce Telecom article shows, and it may in fact be a game-changer in the open device or “white-box” movement.  It’s also likely to create major problems for vendors, particularly if you combine it with the “new network model” I blogged about yesterday.

AT&T is no stranger to open-model network equipment and software.  It’s already released an open operating system and an open white-box design for a 5G edge router, and it’s been active in open-source software for network transformation.  DDC is an expansion of the AT&T open model, a framework to create a highly scalable hardware design suited for more extensive deployment in 5G and other networks.  It’s also a potential factor in the reshaping of the network that I blogged about, which is important because (deliberately or otherwise) AT&T’s getting out in front of the “new network” while vendors seem to be behind.

As I noted in that earlier blog, modern networks are transforming to a new model that replaces a hierarchy of core routers with a series of on-ramp (access) routers and gateway routers that provide the bridge between the service and access portions of the future network.  Both the on-ramp and gateway routers would be variable in size/capacity depending on the area they were serving.  In this network, agile optics and electrical tunnels would create a mesh within the service network, and would provide the aggregation of electrical-layer traffic onto lambdas (wavelengths) and then fiber.  Routing importance would be diminished, with many features pushed down into the opto-electrical layer.
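To make that topology a bit more concrete, here’s a toy sketch of the model in Python.  It’s entirely my own illustration, not anything AT&T or the earlier blog specifies; the node names, roles, and capacities are invented.

```python
# Hypothetical sketch of the "new network" model: access on-ramps feed an
# agile opto-electrical mesh, which hands traffic to service gateways.
# All names and capacities below are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    role: str            # "on-ramp" or "gateway"
    capacity_gbps: int   # sized to the area the node serves

@dataclass
class OpticalMesh:
    # lambda (wavelength) tunnels between node pairs, keyed by (src, dst)
    tunnels: dict = field(default_factory=dict)

    def add_tunnel(self, src: Node, dst: Node, gbps: int) -> None:
        self.tunnels[(src.name, dst.name)] = gbps

    def path_capacity(self, src: Node, dst: Node) -> int:
        # the agile lower layer, not the routers, provides the meshing
        return self.tunnels.get((src.name, dst.name), 0)

# Two router roles, sized differently for the areas they serve
onramp = Node("metro-onramp-1", "on-ramp", capacity_gbps=100)
gateway = Node("service-gateway-east", "gateway", capacity_gbps=800)

mesh = OpticalMesh()
mesh.add_tunnel(onramp, gateway, gbps=100)   # aggregation onto a lambda

print(mesh.path_capacity(onramp, gateway))   # 100
```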

AT&T’s DDC seems to aim not only at today’s evolving needs, but at this future model.  What AT&T has done is create a router model that replaces the chassis-backplane design with clustering and cabling.  This approach isn’t as good for true transit-router missions, where any incoming traffic could in theory go to any output trunk, because cross-connecting the distributed elements would need more capacity than traditional external interfaces could easily handle.  It would work in either on-ramp or gateway missions, where most incoming traffic is headed for a small number (often only one) of output trunks.  The differences in mission would likely require only changes to the configuration of the distributed cluster and to the software hosted on the devices.

AT&T’s description of the DDC concept illustrates this point.  They first explain the chassis-backplane model, then say “But now, the line cards and fabric cards are implemented as stand-alone white boxes, each with their own power supplies, fans and controllers, and the backplane connectivity is replaced with external cabling. This approach enables massive horizontal scale-out as the system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane.”  This is saying that you can expand the “router” cluster through external connections, which of course means that you can’t assume the DDC creates a true non-blocking fabric, because some of those external paths would likely congest.  As a traditional core router, this would be a problem.  As an aggregator, a gateway or on-ramp in my new-network model, it’s far less problematic.
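Here’s a back-of-the-envelope sketch of that trade-off, again my own illustration with invented numbers rather than anything taken from AT&T’s specification.  The point is that total capacity grows as you add boxes, but the ratio of each box’s client-facing capacity to its fabric cabling decides whether worst-case any-to-any traffic could block.

```python
# Back-of-the-envelope sketch (invented numbers, not AT&T's) of why a cabled
# DDC-style cluster scales out but can't be assumed to be non-blocking.

def oversubscription(ports_per_box: int, port_gbps: int,
                     fabric_links_per_box: int, fabric_link_gbps: int) -> float:
    """Ratio of a line box's client-facing capacity to its fabric (external
    cabling) capacity; a ratio above 1.0 means worst-case any-to-any traffic
    can congest the cabling."""
    client = ports_per_box * port_gbps
    fabric = fabric_links_per_box * fabric_link_gbps
    return client / fabric

# Adding line boxes grows total capacity; there's no chassis or backplane ceiling.
line_boxes = 8
total_gbps = line_boxes * 48 * 100        # 8 boxes x 48 ports x 100G = 38,400 Gbps

# But each box's reach into the fabric is only the cabling you give it.
ratio = oversubscription(ports_per_box=48, port_gbps=100,
                         fabric_links_per_box=8, fabric_link_gbps=400)

print(total_gbps, ratio)    # 38400 1.5 -> blocking risk as a transit core

# In an aggregation (on-ramp or gateway) mission, most traffic exits on one
# trunk instead of crossing the fabric, so the same ratio matters far less.
```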

Aggregating to a point, meaning to a very limited number of destinations, is more like a hierarchy than a mesh, and that architecture has been used in data center networks for decades.  You can still provide failover paths within the distributed device mesh (as you can with some data center LAN extensions), and the nice thing is that the DDC is built from white-box routers, not from line cards, and those white-box routers are themselves suitable as on-ramp routers where traffic is limited.  They can also be aggregated into small DDC clusters, to serve higher-traffic missions and to provide growth potential (scale-out).

Imagine for a moment that the new-network vision I described in that past blog, and the DDC, combine.  We then have what’s essentially a one-device IP network, where the white-box elements of the DDC are the only real devices needed, and they’re just connected to suit the mission.  It’s obviously a lot easier for a white-box model of network devices to sweep the opportunities off the table when there’s really only one “network device” in use above the opto-electrical layer, connected into different structures to support those two new-network-model missions of access on-ramp and service gateway.
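As a rough illustration of what “connected to suit the mission” could look like, here’s a hypothetical sketch; the role names, parameters, and feature lists are mine, not anything from AT&T’s DDC specification.

```python
# Hypothetical illustration of a "one-device" IP network: the same white-box
# element type is configured into different structures for the two missions.
# Role names, parameters, and features are invented for illustration.

from dataclasses import dataclass

@dataclass
class WhiteBox:
    hostname: str

def configure(box: WhiteBox, mission: str) -> dict:
    """Return an illustrative configuration for a single white-box element."""
    if mission == "access-onramp":
        # standalone box, limited traffic, one uplink onto the optical mesh
        return {"host": box.hostname, "cluster": None,
                "uplinks": 1, "features": ["access-aggregation"]}
    if mission == "service-gateway":
        # higher-traffic mission: the same box type, clustered DDC-style
        return {"host": box.hostname, "cluster": "ddc-gw-1",
                "uplinks": 4, "features": ["gateway", "fabric-member"]}
    raise ValueError(f"unknown mission: {mission}")

print(configure(WhiteBox("wb-001"), "access-onramp"))
print(configure(WhiteBox("wb-002"), "service-gateway"))
```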

There are two things needed, in addition to DDC, to make this all work.  One is an architecture model of this future network, a model that shows all the elements, both “logical” or virtual and physical.  The other is the agile lower-layer devices that create the opto-electric mesh.  AT&T is working hard on the former piece, and the industry may be converging on, or at least recognizing, the latter.

I think it’s likely that the network gurus at AT&T already understand how the logical and physical elements of this new model would fit.  Remember that they first announced “dNOS”, a disaggregated network operating system, which became the Linux Foundation DANOS project.  Now we have the DDC, which is also “disaggregated”.  They’re working toward an overall model here, for sure.

The opto-electrical piece is something I mentioned in a prior blog about Cisco’s decision to acquire Acacia and Ciena’s challenge in maintaining margins.  I noted that pure optics was plumbing, that the pure electrical layer was under terminal price and open-model pressure (like DDC), and that both the electrical and optical players would therefore inevitably fight over the middle: the agile lower-layer trunking that would mesh electrical elements and dumb down the higher layer.  Ciena, Infinera, and Cisco all have the technology and contacts to make a go of this agile opto-electric (in the optical vendors’ view) or electro-optical (in Cisco’s view) layer.  The hold-up is the classic “Do I mess with my current quarter in the name of securing my future, or hope for divine intervention down the line?” question.

The optical guys have demonstrated they know about this new network model, but are helpless to make the internal transformation needed to address it.  If you’ve done the wrong thing consistently for five or six years, you may still theoretically be able to turn yourself around, but the odds are against it.  Recent announcements by both Ciena and Infinera suggest they’re determined to stay where they are, layer-wise.

Cisco is more of a wild card.  You can do a lot with Acacia’s stuff, including pretty much everything that needs to be done in the opto-electrical layer.  You can also just build a bottom layer into a router, assuming that all the opto-electrical features will be offered only by the big boxes.  That’s hardly an open approach, but it would be a smart move for Cisco.  For the buyer, it might seem to limit the structure of the opto-electrical layer, but in most cases it may be that the logical places for those opto-electric features are the same places where there are gateways and on-ramps.  After all, you can’t stick network gear out in parking lots; you need real estate.

Whatever structure might be proposed for the opto-electrical network, the time available to propose it is limited.  Right now, the way this opto-electric magic would work is territory open for the taking, opportunity ripe for positioning.  The “hidden battle for the hidden layer,” as I said in a prior blog, is a battle that the combatants are (for now) fairly free to define as they like.  The early positioning in this area, if compelling enough, is going to set the tone for the market, a tone that later entrants will have a very difficult time changing.  This isn’t the first time that we’ve had a small window to try to do great things, but it might well be the most important.

A solid opto-electrical open-device reference design, one that fits into AT&T’s “disaggregated” family, would solve all the router-layer problems…except for those of the vendors.  AT&T has led operators in coming up with open-model network technology.  I’ve not always agreed with their approach (ONAP is an example where I totally disagree), but both DANOS and DDC are spot-on conceptually.  It’s clear that these developments are direct results of vendor pricing and intransigence.  Vendors could stimulate the development of further members in the disaggregated family by continuing with their past attitudes, seen by operators as opportunistic foot-dragging.  It may well be that the network vendors, with their heads in the quarterly-earnings-cycle sand, have already lost the chance to respond to operator pressure to reduce network costs.

Which would leave us with enhancing benefits, meaning revenue for operators and productivity gains for enterprises.  Operators really need hosted OTT-like experiential services, which their vendors have avoided pursuing.  Enterprise network spending has stagnated because of a lack of new productivity benefits that could drive increased spending.  Network vendors have known about this for a full decade (I know because I told them, in some cases right to the CEO’s face), and they’ve not responded with architectures and products designed to broaden the benefit base for network investment.

Unlocking new benefits may be the next battleground, if cost management fails for traditional services.  If connection services can’t keep the lights on for operators, then they’ll have to climb the value chain and implement the higher-layer, benefit-enhancing stuff that vendors have been ignoring.  There are already signs that both operators and enterprises are seeing cloud providers and cloud vendors as the go-to resources for new-benefit planning.  Most of the cloud is driven by open source.  Déjà vu, anyone?