The Access Revolution: What’s Driving It and How Do We Harness It?

All networking reduces to getting connected to your services, which means access.  In the past, and in a business sense, the “divestiture” and “privatization” trends first split access from long-haul, then recombined them.  The Internet has also changed access networking, creating several delivery models inexpensive enough to serve consumers.  Today, virtualization is creating its own set of changes.  So where is access going?

The first important point to make is that the notion that access is “heading to multi-service” is false.  It’s been there for decades.  The evolution of “the local loop” from analog voice to multi-service voice/data/digital started back in the era of ISDN.  Cable companies have confronted voice/data/video for over two decades.  The question is less how many services there are than how those services are separated.

What is true is that consumer data services based on the Web generated new interest in multi-service access, because consumer data needs rapidly evolved to the point where voice traffic was a rounding error on a broadband path.  “Convergence,” meaning the elimination of service-specific silos, is just good business.  And consumer Internet and VoIP were the dawn of the real changes in access.

Many of us, perhaps in many geographies most of us, use VoIP.  Most stream video as an alternative to channelized linear RF delivery.  The simple truth is that for the consumer market, the access revolution is a predictable consequence of the increased demand for data/Internet capacity.  The bandwidth available is exploitable under the Internet model by any service (hence Skype), and that drives a desire to consolidate services onto that new fat pipe, displacing service-specific access and its associated cost.

Business services haven’t moved as decisively.  In part, that is because businesses were always consumers of both voice (TDM) and data, and there was no sudden change like the one that drove consumer Internet demand.  Over time, though, companies increased their own data demand in support of worker productivity gains, and also put up e-commerce sites.

Where we are now with access relates to this business evolution.  Companies have typically retained service-specific access technology, though TDM voice access is rapidly being replaced by VoIP via SIP trunking.  At the same time, physical access media have begun to shift toward fiber, which means we first saw consolidation of access channels onto the same fiber trunks, and more recently we’re starting to see access multiplexing climb the OSI stack toward L2/L3.

It’s this ride up the OSI ladder that’s being driven increasingly by virtualization.  Network virtualization, NaaS, or whatever you want to call it doesn’t have to be classic SDN; it could be built on tunnels, or MPLS, or something else entirely.  The point is that if you have a service at a given OSI level, you can use some form of multiplexing below that level to create ships-in-the-night parallel conduits that share the access media.  You can do this multiplexing/virtualization at L2 if you have L3 services, and so forth.  You have multi-service at the level of service consumption, but the carrier underneath may be anything from OSI Layer 1 through Layer 3.
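
As a thought experiment, here’s a minimal sketch of that ships-in-the-night idea in Python, standing in for whatever the real control plane would be.  The tag values and service names are purely illustrative; the point is simply that one physical access line demultiplexes traffic by a lower-layer identifier, and each higher-layer service sees only its own conduit.

```python
# Minimal sketch: several services share one access line, separated by a
# lower-layer tag (think VLAN ID, MPLS label, or tunnel ID).  Illustrative only.
from dataclasses import dataclass


@dataclass
class VirtualConduit:
    tag: int        # lower-layer multiplexing identifier
    service: str    # higher-layer service riding this conduit


class AccessLine:
    """One physical pipe; hands each tagged frame to its own service."""

    def __init__(self, conduits):
        self.by_tag = {c.tag: c for c in conduits}

    def deliver(self, tag, payload):
        conduit = self.by_tag.get(tag)
        if conduit is None:
            raise ValueError(f"no conduit provisioned for tag {tag}")
        return f"{payload} -> {conduit.service}"


# Two services, invisible to each other, on the same access medium.
line = AccessLine([VirtualConduit(100, "L3 VPN"), VirtualConduit(200, "Internet")])
print(line.deliver(100, "corporate traffic"))
print(line.deliver(200, "web traffic"))
```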

Virtualization is a more dynamic solution than threading new fiber or copper, and that added dynamism facilitates dynamic service delivery.  We all know that SDN, NFV, and the cloud postulate ad hoc services, and if those services were indeed to grow in popularity and become significant consumers of bandwidth, they would tend to break the static-bandwidth access model of today.

Dynamism at the service level may drive access changes, but access changes then drive changes to services, even the basic connection services.  You can sell somebody extemporaneous capacity to ride through a period of heavy activity, but only if the access connection will support that incremental capacity.  Turning up the turbo-dial isn’t useful if you have to wait two weeks or more to get your access connection upgraded.
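
To put that constraint in toy form, with numbers I’ve invented for illustration: an ad hoc boost is only sellable if it fits inside the headroom of the access pipe that’s already in place.

```python
# Toy headroom check: elastic capacity is bounded by the physical access line.
def can_sell_burst(access_line_mbps, committed_mbps, requested_burst_mbps):
    """Approve an ad hoc boost only if the access pipe can actually carry it."""
    headroom = access_line_mbps - committed_mbps
    return requested_burst_mbps <= headroom


# A 100 Mbps access line with 80 Mbps already committed can absorb a 15 Mbps burst...
print(can_sell_burst(100, 80, 15))   # True
# ...but not a 40 Mbps one; that sale waits on a physical upgrade.
print(can_sell_burst(100, 80, 40))   # False
```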

Our vision of elastic bandwidth, so far at least, is surpassingly shortsighted.  I’ve surveyed enterprises about how they’d use it, and in almost nine of every ten cases their answer boils down to “Reduce cost!”  They expect to cut back on their typical bandwidth and burst above it when needed.  If that doesn’t turn out cheaper for them, forget elastic bandwidth.  That means that business service access changes probably have to be driven by truly new, higher-layer services rather than by tweaks to basic connectivity.
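
The enterprise math, sketched here with hypothetical prices (mine, not from any survey or tariff), looks something like this; if the elastic number isn’t smaller, elastic bandwidth doesn’t get bought.

```python
# Back-of-the-envelope cost test for elastic bandwidth; all prices hypothetical.
def flat_cost(rate_mbps, price_per_mbps_month):
    return rate_mbps * price_per_mbps_month


def elastic_cost(base_mbps, price_per_mbps_month,
                 burst_mbps, burst_hours, burst_price_per_mbps_hour):
    return (base_mbps * price_per_mbps_month
            + burst_mbps * burst_hours * burst_price_per_mbps_hour)


flat = flat_cost(100, 8.00)                      # 100 Mbps committed all month
elastic = elastic_cost(60, 8.00, 40, 50, 0.10)   # 60 Mbps base, 40 Mbps burst for 50 hours
print(f"flat: ${flat:.2f}  elastic: ${elastic:.2f}  buy elastic: {elastic < flat}")
```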

Even new services like cloud-hosted applications or NFV-delivered virtual features can be inhibited by lack of access capacity.  If the new service or feature requires more bandwidth, it may be impossible to deliver suitable QoE over the same tired access path, unless the operator had the foresight to pre-deploy something faster.  Two operators serving the same customer, or even the same building, might compete as much on residual capacity as on “price” in a strict sense.  “Bandwidth in waiting,” waiting for something new to deliver, is also bandwidth waiting for operators to exploit for new revenues.  This is the trend now driving access evolution for business services.

The business service flagship, Carrier Ethernet, shows this trend most clearly.  The MEF’s Third Network concept is an attempt to define, first and foremost, an access line as a pathway for multiple virtual networks, and that’s the end-game of the current trends.  The Third Network redefines Carrier Ethernet, but at the same time redefines what it carries.  As ad hoc provisioning of services becomes possible, services that benefit from it become business targets for operators.  Further, if resolving access limitations through virtualization is what makes ad hoc services work, it follows that the limitations virtualization cannot address, notably the basic capacity of the pipe, have to be minimized somehow or service evolution suffers.

One thing this would do is drive much more aggressive fiber deployment to multi-tenant facilities.  Even the facility owners might want this sort of thing, and we already have some facilities in some areas served by multi-fiber bundles from several operators.  Imagine what will happen if we see more dynamic services, and if elastic bandwidth actually means something!

That means that “service multiplexing” and ad hoc services, rather than ad hoc capacity alone, are the key.  Cloud computing, offsite data storage, anything that creates additional capacity requirements as an offshoot of the service delivery, is the only credible type of driver for access virtualization changes on a large scale.  Any of these could produce a carrier revenue model based on ad hoc service sales that depend on ad hoc capacity availability.  That implies oversupply of access capacity.

The question is whether the revenue model for pre-positioning access assets could be made to work.  Certainly at the physical media level, as with fiber deployment, it makes sense.  However, physical media doesn’t become an OSI layer without something to do the conversion.  We’d need to think about how to either up-speed connections dynamically or meter the effective bandwidth of a superfast pipe unless the buyer pays to have the gates opened up.  We also need to think about how to utilize “services” that appear ad hoc on an access line.  How do you distinguish them from an elaborate hacking attempt?
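
One plausible shape for the metering option, sketched with illustrative numbers and names of my own, is an over-provisioned pipe whose effective rate is capped until the buyer pays to open the gates; the fiber is already there, so the up-speed needs no truck roll.

```python
# Sketch of "bandwidth in waiting": pre-deployed fiber, metered effective rate,
# ad hoc up-speed when the customer pays.  Illustrative model, not an operator's.
class MeteredAccess:
    def __init__(self, physical_mbps, default_cap_mbps):
        self.physical_mbps = physical_mbps   # what the fiber can actually carry
        self.cap_mbps = default_cap_mbps     # what the customer currently pays for

    def effective_rate(self):
        return min(self.cap_mbps, self.physical_mbps)

    def open_gates(self, purchased_mbps):
        """Ad hoc up-speed: raise the cap, never beyond the physical pipe."""
        self.cap_mbps = min(purchased_mbps, self.physical_mbps)


line = MeteredAccess(physical_mbps=1000, default_cap_mbps=100)
print(line.effective_rate())   # 100 -- pre-positioned capacity sits waiting
line.open_gates(500)           # customer buys a dynamic boost
print(line.effective_rate())   # 500
```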

That’s the forgotten point about access evolution, I think.  More often than not, we have static device and interface facilities feeding a static connectivity vision.  We’ll have to work out a way to harness dynamism by converging changes in service delivery and service consumption to match our new flexibility.  Otherwise access evolution could be just another trend that falls short.