Metro-Networking in the Fog and Optical Respect

If the future is as “foggy” (yes, I mean in the sense of being edge-distributed, not murky!) as I have suggested, and if networks have to be adapted to the mission of “fog-connection”, then how does this impact metro networking specifically?  In particular, would this new mission create opportunities for the optical vendors, and could it help them “get more respect” as I’ve suggested they need to do?  Let’s look at the way things seem to be developing.

A rational fog-distributed model would be a set of CORD-modeled data centers linked with a dense optical grid that offered essentially inexhaustible capacity.  This model would coexist with current metro infrastructure, and logically you’d expect it to share metro connectivity with that infrastructure for the period of evolution to the fog model, which could last for five to seven years.

The connectivity mission for next-gen metro infrastructure, then, would consist of four services.  First, DCI-like service between fog data centers, for the purpose of connecting the resources there into uniform virtual pools.  These could be “service chains”, links between virtual functions, or horizontal connections in carrier IoT or cloud computing.  Second, SDN trunk/path connection services created between fog data centers for the purpose of building connection services at the retail level.  Third, traffic from legacy sources located in the same fog data centers but not part of the actual fog infrastructure.  Fourth and finally, wholesale or bulk fiber connectivity sold to other operators or large enterprises.

I’ve listed these in order of their importance to operator planners, most of whom see “carrier cloud” missions as their primary revenue goal.  Operators a decade ago had great hopes for cloud computing services, and even though most now believe these will be harder to promote and less profitable to offer, they still realize that some form of service-hosting revenue is the only credible way for them to boost their top lines.  So you can take the list, from the operator perspective, as a model for how to transition, or fund the transition, to metro-cloud deployments.

The most important point about these service targets is that the mixture of needs and the evolution they represent are the main reasons why optics could hope to escape plumbing status.  If the future were nothing more than fog-connection, you could run glass right to the SDN- or service-layer devices and skip an independent optical layer.  At the least, this could reduce the feature demand on optical equipment to the point where low-price players would be the only winners.  If, for example, we had only service missions one and three above, we could satisfy the need for optical transport with minimalist add-drop wavelength multiplexing.

That one-and-three example also illustrates an important point for the optical space, because if you want to get more out of the market than carrying the bits that empower other vendors, you have to somehow climb up the food chain to the services.  Mission four, which is direct optical service fulfillment, isn’t the answer either.  The margins on these connections are very limited, and in many cases operators actually shy away from selling metro transport for fear it will empower competitors and generate regulatory pressures on pricing practices.  Mission three is just merging in legacy traffic, so elevating optics has to focus on missions one and two.

If, as I’ve suggested in an earlier blog, the metro area is destined to be an enormous virtual data center, then somehow the abstraction/virtualization/orchestration stuff has to be handled.  Missions one and two are network-as-a-service (NaaS) missions, with the services delivered to software residents of the fog data centers through available computer-network interfaces and conforming to the specific feature, service, and application missions those software elements support.  To see how that could work, we have to look at missions one and two in more detail.
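
To make that NaaS notion a bit more concrete, here’s a rough sketch (in Python, with names and fields I’ve invented purely for illustration, not taken from any vendor or standards API) of what a connectivity request from a fog-resident software element might look like.  The point is that the requester expresses goals, not wavelengths or ports.

```python
from dataclasses import dataclass, field
from enum import Enum


class Mission(Enum):
    """The two NaaS missions from the list above (names are illustrative)."""
    RESOURCE_POOLING = "dci-between-fog-data-centers"   # mission one
    RETAIL_SERVICE = "sdn-trunk-for-retail-services"    # mission two


@dataclass
class NaaSRequest:
    """A connectivity intent as a fog-resident software element might express it.

    The requester states goals (endpoints, capacity, latency, who owns the
    lifecycle); how the request is realized is left to the layer that owns
    the transport resources.
    """
    mission: Mission
    endpoints: list[str]          # logical names, e.g. fog-data-center resource pools
    capacity_gbps: float
    max_latency_ms: float
    lifecycle_owner: str          # the orchestrator that will manage changes
    metadata: dict = field(default_factory=dict)


# Example: a request to stitch two fog-data-center pools into one virtual pool.
pool_link = NaaSRequest(
    mission=Mission.RESOURCE_POOLING,
    endpoints=["fog-dc-east/pool-a", "fog-dc-west/pool-a"],
    capacity_gbps=100.0,
    max_latency_ms=2.0,
    lifecycle_owner="carrier-cloud-orchestrator",
)
```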

It’s tempting to look at DCI as being a fat-pipe mission, and of course it could be.  That’s part of the “optics dilemma”; you always have the opportunity to lie down and accept being just a route others travel on.  If you go back to the CORD approach, though, you would see that DCI services should be abstracted into a virtual network-as-a-service model.  That abstraction would accept connectivity requests from applications or from virtual-function deployment and lifecycle management.  Ideally, optical players would provide the full virtualization this represents, but at the very least they’d have to be able to fit under the SDN-service-layer technology that another vendor provided.  In short, if you present a pipe interface, you’re a pipe.
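
Here’s a hedged sketch of that distinction, again with hypothetical class and method names.  In the first posture the consumer has to ask for a wavelength between ports; in the second, it asks for connectivity between logical fog-data-center endpoints and never sees the plumbing.

```python
class PipeOnlyOptics:
    """The 'pipe' posture: consumers have to know about ports and wavelengths."""

    def provision_wavelength(self, a_port: str, z_port: str, wavelength_nm: float) -> str:
        # A real product would drive ROADM/transponder configuration here;
        # this sketch just returns an identifier for the provisioned wave.
        return f"wave:{a_port}->{z_port}@{wavelength_nm}nm"


class DciAsNaaS:
    """The abstracted posture: accept connectivity requests, hide the plumbing."""

    def __init__(self, optics: PipeOnlyOptics):
        self._optics = optics
        self._services: dict[str, str] = {}

    def request_connectivity(self, endpoint_a: str, endpoint_z: str,
                             capacity_gbps: float) -> str:
        # Translate a logical request (something like the NaaSRequest sketched
        # earlier) into optical-layer actions.  In a fuller version,
        # capacity_gbps would drive grooming and wavelength selection.
        a_port = self._resolve(endpoint_a)
        z_port = self._resolve(endpoint_z)
        handle = f"dci-svc-{len(self._services) + 1}"
        self._services[handle] = self._optics.provision_wavelength(a_port, z_port, 1550.12)
        return handle  # the requester gets a service handle, never a wavelength

    def _resolve(self, endpoint: str) -> str:
        # Hypothetical mapping of a logical fog-DC endpoint to a physical port.
        return f"port::{endpoint}"
```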

This is even more true for the second mission, which is to provide actual new-age service connectivity.  Think of this as a mission like connecting a point in the metro area where there’s a customer demarcation to another point where the main concentration of user access points is found, or where the HQ of the enterprise is located.  Here again, in modern terms, this is a NaaS mission.  Can an optical vendor be a virtual network on-ramp here?  Or at least, can it synchronize optical configuration and reconfiguration with the higher-layer service mission?

In theory, you could be a player in the virtual network game either at the data-plane/control-plane level or at the control-plane level alone.  A data-plane NaaS optical player would focus on creating an on-ramp to optical paths that could groom some set of electrical interfaces to optical wavelengths.  It would make no sense to do this without adding in control or management interfaces to connect and reconnect stuff.  A control-plane-only player would provide a means of connecting optical lifecycle management to service-layer lifecycle management.
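
To illustrate the control-plane-only role, here’s a small sketch of an adapter that maps service-layer lifecycle events onto optical lifecycle actions.  The event names, path states, and the OpticalManager interface are all assumptions made for the example, not any real product’s API.

```python
from typing import Protocol


class OpticalManager(Protocol):
    """Whatever management interface the optical layer actually exposes (assumed)."""

    def set_path_state(self, path_id: str, state: str) -> None: ...


class ControlPlaneAdapter:
    """A control-plane-only coupling: no user data ever touches this code.

    It maps service-layer lifecycle events onto optical lifecycle actions so
    the two layers stay synchronized.  Event names and states are invented.
    """

    EVENT_TO_STATE = {
        "service-activated": "in-service",
        "service-suspended": "maintenance",
        "service-retired": "decommissioned",
    }

    def __init__(self, optical: OpticalManager, service_to_path: dict[str, str]):
        self._optical = optical
        self._service_to_path = service_to_path  # service id -> optical path id

    def on_service_event(self, service_id: str, event: str) -> None:
        path_id = self._service_to_path.get(service_id)
        state = self.EVENT_TO_STATE.get(event)
        if path_id and state:
            self._optical.set_path_state(path_id, state)
```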

The linkage with the service layer raises a question I’ve discussed in an earlier blog, which is the relationship between optical-layer provisioning and electrical-layer service changes.  I stand by my original view here, which is that there is no value in having a service request at the electrical layer modify optical configuration directly.  In fact, there’s a negative value, because you’d have to be concerned about whether a single user could then impact shared resources.  At most, you could use an electrical-layer service request to trigger a policy-based reconsideration of optical capabilities.
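
Here’s a minimal sketch of what that policy-based coupling might look like, using an invented utilization threshold: individual electrical-layer requests only add to an aggregate demand picture, and the policy alone decides whether the shared optical layer should be reconsidered.

```python
class OpticalPolicyMediator:
    """Electrical-layer requests never touch optical configuration directly.

    They only update an aggregate demand picture; a policy decides whether
    the shared optical layer should be reconsidered at all.  The threshold
    and return values are purely illustrative.
    """

    def __init__(self, path_capacity_gbps: float, expand_at_utilization: float = 0.8):
        self.path_capacity_gbps = path_capacity_gbps
        self.expand_at_utilization = expand_at_utilization
        self.committed_gbps = 0.0

    def on_electrical_service_request(self, requested_gbps: float) -> str:
        # Record the demand; do NOT reconfigure the optical layer per request.
        self.committed_gbps += requested_gbps
        utilization = self.committed_gbps / self.path_capacity_gbps
        if utilization >= self.expand_at_utilization:
            # Policy-based reconsideration: flag the shared resource for
            # capacity planning or automated expansion, under operator control.
            return "trigger-optical-reconsideration"
        return "no-optical-action"


# A single 10G retail request against a 400G path changes nothing at the
# optical layer; only sustained aggregate growth crosses the policy threshold.
mediator = OpticalPolicyMediator(path_capacity_gbps=400.0)
print(mediator.on_electrical_service_request(10.0))   # -> no-optical-action
```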

You can see from this that the ability to configure and reconfigure optical paths is valuable in context rather than directly, which means that obtaining service context from above is critical.  That context can be expressed in a NaaS service request if the optical player has a data/control connection, or in a pure control/management request if the optical player couples to services indirectly by policy support.  Without context, either the optical layer has to respond to conditions visible only to itself, or it has to rely on some external (and possibly human-driven) coupling.  Neither is optimal; both reduce optical value and potentially increase opex.

I think we’re at a crossroads here for metro optics, which means for optics overall, since the metro space is going to be the bright spot in optical capex.  Take the path to the left and you focus only on achieving the lowest possible cost per bit, and you become pure plumbing.  Take the path to the right and you have to shoot tendrils upward to intersect with the trends that are driving fog dispersal of compute resources in the carrier cloud.  I’m planning to do an assessment of where the pure-play optical players seem to be heading, and over time look at how the market and vendor plans seem to be meshing as we move forward.