Service Lifecycle Management 101: Modeling Techniques

Picking the best approach to service modeling for lifecycle management is like picking the Final Four; there’s no shortage of strongly held opinions.  This blog is my view, but as you’ll see I’m not going to war to defend my choice.  I’ll lay out the needs, give my views, and let everyone make their own decision.  If you pick something that doesn’t meet the needs I’m describing, then I believe you’ll face negative consequences.  But as a relative once told me a decade or so ago, “Drive your own car, boy!”

I’ve described in prior blogs that there are two distinct layers in a service model—the service layer and the resource layer.  The need to reflect the significant differences in the missions of these layers, without creating a brittle structure, effectively defines a third layer: the boundary layer, where I’ve recommended that the actual process of laying out the abstractions should start.  I’m going to start the modeling discussion at the top, though, because that’s where the service really starts.

The service layer describes commercial issues, parameters, elements, and policies at the business level.  These models, in my view, should be structured as intent models, meaning that they create abstract elements or objects whose exposed properties describe what they do, not how.  The beauty of an intent model is that it describes the goal, which means that the mechanisms whereby that goal can be met (which live within the intent model) are invisible and equivalent.

I’ve done a fair amount of intent modeling, in standards groups like IPsphere and the TMF, in my own original ExperiaSphere project (spawned from the TMF Service Delivery Framework work, TMF519), in the CloudNFV initiative where I served as Chief Architect, and in my new ExperiaSphere model that addressed the SDN/NFV standards as they developed.  All of these recommended different approaches, from TMF SID to Java classes to XML to TOSCA.  My personal preference is TOSCA because I believe it’s the most modern, the most flexible, and the most complete approach.  We live in a cloud world; why not accept that and use a cloud modeling approach?  But what’s important is the stuff that is inside.

An intent model has to describe functionality in the abstract.  In network or network/compute terms, that means it has to define the function the object represents, the connections it supports, the parameters it needs, and the SLA it asserts.  When intent models are nested, as they would be in a service model, they also have to define, internally, the decomposition policies that determine how objects at the next level are linked to this particular object.  All of this can be done with any of the modeling approaches I’ve mentioned, and probably with others as well.
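To make that concrete, here’s a minimal Python sketch of such an element.  The class and field names are my own invention, not drawn from any standard; it’s only meant to show the shape of the exposed versus hidden attributes:

```python
from dataclasses import dataclass, field

@dataclass
class IntentElement:
    """An intent-model element: exposed attributes say what it does, not how."""
    function: str                                    # e.g. "L3VPN"
    ports: list = field(default_factory=list)        # connections it supports
    parameters: dict = field(default_factory=dict)   # inputs it needs
    sla: dict = field(default_factory=dict)          # guarantees it asserts
    # Internal and never exposed: policies that pick subordinate decompositions
    _decomposition_policies: list = field(default_factory=list)

vpn = IntentElement(
    function="L3VPN",
    ports=["site-A", "site-B"],
    parameters={"bandwidth_mbps": 100},
    sla={"availability": 0.9995},
)
```

The point of the sketch is that a consumer of `vpn` sees only the four exposed attributes; everything about how the VPN is built stays inside the black box.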

When these models spawn subordinates through those decomposition policies, there has to be a set of relationships defined between the visible attributes of the superior object and those of its subordinates, to ensure that the intrinsic guarantees of the abstract intent model are satisfied.  These can operate in both directions; the superior passes a relationship set based on its own exposed attributes to subordinates, and it takes parameters/SLA exposed by subordinates and derives its own exposed values from them.
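As a sketch of the upward direction, a superior might derive its exposed SLA from its subordinates’ exposed values.  The combining rules below (availability multiplies for elements in series, latency adds) are illustrative only; real derivations depend on topology and policy:

```python
def derive_sla(subordinate_slas):
    """Derive a superior's exposed SLA from its subordinates' exposed SLAs.

    Illustrative rules: availability of elements in series multiplies,
    latency adds.
    """
    availability = 1.0
    latency_ms = 0.0
    for sla in subordinate_slas:
        availability *= sla["availability"]
        latency_ms += sla["latency_ms"]
    return {"availability": availability, "latency_ms": latency_ms}

core = {"availability": 0.9999, "latency_ms": 20.0}     # hypothetical subordinate
access = {"availability": 0.999, "latency_ms": 5.0}     # hypothetical subordinate
derived = derive_sla([core, access])
```

The downward direction would be the mirror image: the superior translates its own exposed attributes into the parameter sets it passes to each subordinate.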

It follows from this that any level of the model can be “managed”, provided that there are exposed attributes to view and change and that there’s something that can do the viewing and changing.  It also follows that if there’s a “lifecycle” for the service, that lifecycle has to be derived from or driven by the lifecycles of the subordinate elements, down to the bottom.  That means that every intent-model element or object has to have a “state” and a table that defines how events are processed in each state.  Thus, each one has to specify an event interface and a table that contains the processes to be used for all the state/event intersections.
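A toy state/event table might look like the following.  The states, events, and processes here are hypothetical, chosen only to show the mechanism of state/event intersections driving lifecycle progression:

```python
def activate(elem):            # commit resources for this element
    elem["resources"] = "committed"

def report_fault(elem):        # flag a fault for escalation
    elem["alarm"] = True

def ignore(elem):              # do nothing at this intersection
    pass

# Each state/event intersection names the process to run and the next state.
STATE_EVENT_TABLE = {
    ("Ordered", "ACTIVATE"): (activate, "Active"),
    ("Ordered", "FAULT"):    (ignore, "Ordered"),
    ("Active",  "FAULT"):    (report_fault, "Degraded"),
}

def handle_event(elem, event):
    process, next_state = STATE_EVENT_TABLE[(elem["state"], event)]
    process(elem)
    elem["state"] = next_state

svc = {"state": "Ordered"}
handle_event(svc, "ACTIVATE")
```

Because the table is data, not code, any properly written process that recognizes the model can play the element’s role, which is exactly the distribution/replacement property described below.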

Events in this approach are signals between superior and subordinate models.  It’s critical that they be exchanged only across this one specific adjacency, or we’d end up with a high-level object that knew about/from something inside what’s supposed to be an opaque abstraction.  When an event happens, it’s the event that would trigger the model element to do something, meaning that it’s the event that activates the lifecycle progression.  That’s why this whole state/event thing is so important to lifecycle management.

A service model “instance”, meaning one representing a specific service contract or agreement, is really a data model.  If you took that model in its complete form and handed it to a process that recognized it, the process could handle any event and play the role of the model overall.  That makes it possible to distribute, replicate, and replace processes as long as they are properly written.  That includes not only the thing that processes the model to handle events, but also the processes referenced in the state/event table.  The model structures all of service lifecycle management.

It’s easy to become totally infatuated with intent modeling, and it is the most critical concept in service lifecycle management, but it’s not the only concept.  Down at the bottom of a tree of hierarchical intent models will necessarily be something that commits resources.  If we presume that we have a low-level intent model that receives an “ACTIVATE” event, that model element has to be able to actually do something.  We could say that the process that’s associated with the ACTIVATE in the “Ordered” state does that, of course, but that kind of passes the buck.  How does it do that?  There are two possibilities.

One is that the process structures an API call to a network or element management system that’s already there, and asks for something like a VLAN.  The presumption is that the management system knows what a VLAN is and can create one on demand.  This is the best approach for legacy services built from legacy infrastructure, because it leverages what’s already in use.
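As a sketch of that first option, assuming a purely hypothetical management-system client (no real vendor API is implied), the process bound to ACTIVATE might look like this:

```python
class StubNMSClient:
    """Stand-in for an existing EMS/NMS; a real one would expose a vendor API."""
    def create_vlan(self, vlan_id, ports):
        # A real client would issue the management-system API call here.
        return {"status": "created", "vlan_id": vlan_id, "ports": ports}

def on_activate(nms, params):
    """Process bound to ACTIVATE in the "Ordered" state: ask the NMS for a VLAN."""
    result = nms.create_vlan(params["vlan_id"], params["ports"])
    if result["status"] != "created":
        raise RuntimeError("VLAN activation failed")
    return result

out = on_activate(StubNMSClient(), {"vlan_id": 100, "ports": ["ge-0/0/1", "ge-0/0/2"]})
```

The intent model stays abstract; only this bottom-level process knows there’s an NMS behind it.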

The second option is that we use something model-driven to do the heavy lifting all the way down to infrastructure.  TOSCA is a cloud computing modeling tool by design, so obviously it could be used to manage hosted things directly.  It can also describe how to do the provisioning of non-cloud things, but unless you’re invoking that EMS/NMS process as before, you’d have to develop your own set of processes to do the setup.

Where YANG comes in, in my view, is at this bottom level.  Rather than having a lot of vendor and technology tools you either inherit and integrate or build, you could use YANG to model the tasks of configuring network devices and generate the necessary (NETCONF) commands to the devices.  In short, you could reference a YANG/NETCONF model in your intent model.  The combination is already used in legacy networks, and since legacy technology will dominate networking for at least four to five more years, that counts for a lot.
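A sketch of what that reference might look like inside a bottom-level intent element.  The module name and device are invented for illustration, and the realization step is stubbed rather than rendering real NETCONF RPCs:

```python
# A bottom-level intent element that references a YANG model instead of a
# local process, handing realization off to NETCONF tooling.
vlan_element = {
    "function": "L2Connect",
    "realization": {
        "type": "yang-netconf",
        "yang_module": "example-vlan-config",   # hypothetical module name
        "targets": ["switch-1.example.net"],    # hypothetical device
    },
}

def realize(elem):
    r = elem["realization"]
    if r["type"] == "yang-netconf":
        # A real implementation would render NETCONF <edit-config> RPCs from
        # the YANG module; here we just describe the step.
        return f"netconf edit-config via {r['yang_module']} to {len(r['targets'])} device(s)"
    raise ValueError("unknown realization type: " + r["type"])

step = realize(vlan_element)
```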

I want to close this by making a point I also made in the opening.  I have a personal preference for TOSCA here, based on my own experiences, but it’s not my style to push recommendations that indulge my personal preferences.  If you can do what’s needed with another model, it works for me.  I do want to point out that at some point it would be very helpful to vendors and operators if models of services and service elements were made interchangeable.  That’s not going to happen if we have a dozen different modeling and orchestration approaches.

The next blog in this series will apply these modeling principles to the VNFs and VNF management, which will require a broader look at how this kind of structured service model supports management overall.


Service Lifecycle Management 101: Principles of Boundary-Layer Modeling

Service modeling has to start somewhere, and both the “normal” bottom-up approach and the software-centric top-down approach have their pluses and minuses.  Starting at the bottom invites creating an implementation-specific approach that misses a lot of issues and benefits.  Starting at the top ignores the reality that operators have an enormous sunk cost in network infrastructure, and a revenue base that depends on “legacy” services.  So why not start in the middle, which, as we saw in the last blog, means the boundary layer?

A boundary-layer-driven approach has the advantage of focusing where the capabilities of infrastructure, notably the installed base of equipment, meet the marketing goals as defined by the service-level modeling.  The trick for service planners, or for vendors or operators trying to define an approach that can reap the essential justifying benefits, is a clear methodology.

The most important step in boundary-layer planning for service lifecycle management and modeling is modeling legacy services based on OSI principles.  Yes, good old OSI.  OSI defines protocol layers, but it also defines management layers, and this latter definition is the most helpful.  Services, says the OSI management model, are coerced from the cooperative behavior of systems of devices.  Those systems, which we call “networks”, are of course made of the devices themselves, the “physical network functions” that form the repository of features that NFV is targeting, for example.

Good boundary-layer planning starts with the network layer.  A service or resource architect would want to first define the network behaviors that are created and exploited by current infrastructure.  Most network services are really two-piece processes.  You have the “network” as an extended set of features that form the communications/connection framework that’s being sold, and you have “access”, which is a set of things that get sites connected to that network framework.  That’s a good way to start boundary planning—you catalog all the network frameworks—Internet, VPN, VLAN, whatever—and you catalog all the access pieces.

You can visualize networks as being a connection framework, to which are added perhaps-optional hosted features.  For example, an IP VPN has “router” features that create connectivity.  It also has DNS and DHCP features to manage URL-versus-IP-address assignment and association, and it might have additional elements like security, tunnels, firewalls, etc.  The goal of our network behavior definition is to catalog the primary network services, like IP VPN, and to then list the function/feature components that are available for it.

From the catalog of services and features, we can build the basic models at the boundary layer.  We have “L3Connect” and “L2Connect” for example, to express an IP network or an Ethernet network.  We could also have an “L1Connect” to represent tunnels.  These lowest-level structures are the building-blocks for the important boundary models.

Let’s go back to IP VPN.  We might say that L3Connect is an IP VPN.  We might further classify IP VPN into “IPSubnet”, which is really an L2Connect plus a default gateway router.  We might say that an L1Connect plus an SD-WAN access set is also an IP VPN.  You get the picture, I think.  The goal is to define elements that can be nuclear, or be made up of a combination of other elements.  All of the elements we define in the boundary layer relate both to what each looks like as a service and to how it’s realized through a device or device system.

Don’t get caught up in thinking about retail services at this point.  What we want to have is a set of capabilities, and a mechanism to combine those capabilities in ways that we know are reasonable and practical.  We don’t worry about the underlying technology needed to build our L2Connect or whatever, only that the function of a Level 2 connection resource exists and can be created from infrastructure.

The boundary-layer functions we create obviously do have to be sold, and do have to be created somehow, but those responsibilities lie in the resource and service layers, where modeling and lifecycle management defines how those responsibilities are met.  We decompose a boundary model element into resource commitments.  We decompose a retail service into boundary model elements.  That central role of the boundary element is why it’s useful to start your modeling there.

I think it’s pretty self-evident how you can build boundary models for legacy services.  It’s harder to create them when there is no service you can start with, where the goal of the modeling is to expose new capabilities.  Fortunately, we can go back to another structural comment I made in an earlier blog.  All network services can be built as a connection model, combined with in-line elements and hosted elements.  An in-line element is something that information flows through (like a firewall), and a hosted element is something that performs a service resembling what a network endpoint might do (a DNS or DHCP server).  A connection model describes the way the ports of the service relate to traffic.  Three connection models are widely recognized: “LINE” or “P2P”, which is point-to-point; “LAN” or “MP”, which is multipoint; and “TREE”, which is broadcast/multicast.  In theory you could build others.
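That structure can be sketched as data; the element names below are illustrative, not drawn from any standard catalog:

```python
from enum import Enum

class ConnectionModel(Enum):
    LINE = "P2P"    # point-to-point
    LAN = "MP"      # multipoint
    TREE = "TREE"   # broadcast/multicast

# A service: a connection model plus in-line and hosted elements.
service = {
    "connection": ConnectionModel.LAN,
    "in_line": ["firewall"],     # traffic flows through these
    "hosted": ["dns", "dhcp"],   # these behave like network endpoints
}
```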

If we presume that new services would be defined using these three most general models, we could have something that says that a “CloudApplication” is a set of hosted elements that represent the components, and a connection model that represents the network service framework in which the hosted elements are accessible.  Users get to that connection model via another connection model, the LINE or access model, and perhaps some in-line elements that represent things like security.

If new services can be built this way, it should be obvious that there are some benefits in using these lower-level model concepts as ways to decompose the basic features like L2Connect.  That’s an MP connection model built at L2, presumably with Ethernet.  If this approach of decomposing to the most primitive features is followed uniformly, then the bottom of the boundary layer is purely a set of function primitives that can be realized by infrastructure in any way that suits the functions.  L3Connect is a connection model of MP realized at Level 3, the IP level.  You then know that you need to define an MP model, and make the protocol used a parameter of the model.
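That uniform decomposition might be sketched like this, with the protocol level carried as a parameter of a single MP model (the names are mine, not a standard’s):

```python
def make_connect(level):
    """Reduce the Connect primitives to one MP connection model plus a protocol parameter."""
    protocol = {2: "Ethernet", 3: "IP"}[level]
    return {"connection_model": "MP", "protocol": protocol}

l2_connect = make_connect(2)   # the L2Connect primitive
l3_connect = make_connect(3)   # the L3Connect primitive
```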

Even cloud applications, or cloud computing services, can be defined.  We could say that an ApplicationService is a hosted model, connected to either an L2Connect or L3Connect service that’s realized as an MP model.  How you host, meaning whether it’s containers or VMs, can be a parameter of the hosting model if it’s necessary to know at the service layer which option is being used.  You could also have a purely “functional” hosting approach that decomposes to VMs or containers or even bare metal.

There is no single way to use the boundary layer, but for any given combination of infrastructure and service goals, there’s probably a best way.  This means it’s worth taking the time to find your own best approach before you get too far along.

In our next piece, we’ll look at the modeling principles for the service, boundary, and resource layers to lay out what’s necessary in each area, and what might be the best way of getting it.

Service Lifecycle Management 101: The Boundary Layer

This is the third blog in my series on service management, and like the past two (on the resource layer and service layer) this one will take a practical-example focus to try to open more areas of discussion.  I don’t recommend anyone read this piece without having read the other two blogs.

The service layer of a network model is responsible for framing commercial presentation and integrating OSS/BSS processes.  The resource layer manages actual deployment and FCAPS/network operations.  The goal should be to keep these layers independent, so that technology changes don’t impact service definitions, and retail considerations impact technical deployment only insofar as they change technical requirements.  The “boundary layer” that makes this happen is the topic today, and this layer is actually a kind of elastic speed-match function that will expand and contract, absorb functions and emit them, depending on just how an operator frames services and deploys them.

In a perfect world (meaning one aimed at presenting us with ease and pleasure!) we’d see a boundary layer exactly one object thick.  A resource layer would emit a set of “Behaviors” or “Resource-Facing Services” (RFSs) that would then be assigned commercial terms by a corresponding service-layer element.  We’d end up with a kind of boundary dumbbell, with one lump in the service layer and one in the resource layer, and a 1:1 bar linking them.

To understand why the boundary layer is probably more complicated, let’s look at an example.  Suppose we have a “Behavior” of “L3VPN”, which offers IP virtual-network capability.  We might have four different models for creating it—IP/MPLS, IP/2547, hosted virtual routers, and SD-WAN.  These technologies might be available in some or all of the service area, and might be able to deliver on any SLA offered for L3VPNs or only a subset.  That sets up our example.

Suppose an order for L3VPN comes in, and asks for an SLA and served locations that fit all the models, or even just two of them.  We could presume that the resource layer would decide which to use, based on cost metrics.  Suppose instead that we had no options that did everything.  We’d now select multiple implementations, and to support that we’d have to ensure that each deployed L3VPN had a “gateway” port that let it attach to other implementations.  We’d pick the implementations based on cost, as before.  So far, so good.

Now suppose some clever marketing person said that because SD-WAN was hot, they wanted to have a specific SD-WAN offering.  We now have two choices.  First, we could define a specific service of SD-WAN VPN, which would decompose only into SD-WAN implementations.  Second, we could introduce a “TechType” parameter into the L3VPN model, which could then guide the decomposition below.  It’s this situation that opens our boundary discussion.
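A hedged sketch of that second choice; the implementation names and costs below are invented, and a real decomposition policy would weigh geography and SLA as well as cost:

```python
IMPLEMENTATIONS = ["IP/MPLS", "IP/2547", "hosted-router", "SD-WAN"]

def decompose_l3vpn(params):
    """Pick an implementation for the L3VPN Behavior: a 'TechType' parameter,
    if present, constrains the choice; otherwise cost drives it."""
    candidates = [i for i in IMPLEMENTATIONS
                  if params.get("TechType") in (None, i)]
    # Illustrative relative costs standing in for real policy evaluation.
    cost = {"IP/MPLS": 3, "IP/2547": 4, "hosted-router": 2, "SD-WAN": 1}
    return min(candidates, key=lambda i: cost[i])

unconstrained = decompose_l3vpn({})                        # cheapest wins
constrained = decompose_l3vpn({"TechType": "IP/MPLS"})     # parameter wins
```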

Defining two services that are identical except in how they can be decomposed is an invitation to management/operations turmoil.  So passing a parameter might be a better solution, right?  But should the decomposition of that parameter-driven implementation choice lie in the service or resource layer?  Whether we do SD-WAN or IP/MPLS VPN is implementation, if we presume the SLA and location requirements can be satisfied either way.  But wasn’t it a commercial decision to allow the technical choice in the first place?  That might suggest that we needed to make the choice in the service layer.

A boundary-layer strategy could accommodate this by exposing parameters that let resource-layer decomposition reflect technology selection and other factors based on retail service commitments.  You could consider the boundary layer, and implement it, as part of either the resource layer or the service layer, and where services are numerous and complicated you could make it a separate layer administered by agreement of the architects in both areas.

You have to be careful with boundary functions, and that’s a good reason to keep them independent of both layers.  Any parameters that don’t describe a basic SLA and yet are exchanged between the service and resource layers could end up complicating both layers at best, and at worst creating brittle or condition-specific models and implementations.  A good example: if you decide, on consideration, to withdraw a specific implementation of our L3VPN model, service definitions whose parameter-based decomposition relied on that implementation would now be broken.  That could be fixed for new customers, but what happens when a service model for an active service instance is changed?

The boundary layer is probably the logical place to establish service and infrastructure policy, to integrate management practices between services and resources, and to create a “team mentality” around transformation and new service models.  Not surprisingly, it’s not a place on which operators (or vendors) have focused.  Most seem to think that having a boundary layer at all is an admission of failure, or perhaps an accommodation to the “antiquated” divided management model that now prevails between OSS/BSS and NMS.  Not true.

High inertia in the service-creation process is arguably a direct result of the view that somehow a service is a resource, that all services derive directly from inherent protocol-centric resource behaviors.  If that view could be battered down we could end up with more agile services even if no infrastructure changes were made, and the explicit focus on a boundary function is IMHO mandatory in doing the battering.

Infrastructure lives in a world of long capital cycles, from five to fifteen years.  Services live in a world where six months can be interminable.  With an explicit notion of a boundary layer, we could set both service and resource planners on the task of creating abstract components from which future services would be built.  Is that a formidable task?  Surely, but surely less formidable than the task of building infrastructure that anticipated service needs even five years out.  Less formidable than defining a service strategy that had to look ahead for a full capital cycle to prep installed resources to support it.

Services at the network level reduce to those pipes and features I talked about in the first blog of this series.  As simplistic as that may be, it is still a step whose value is recognized in software architecture—you define high-level classes from which you then build lower-level, more specific, ones.  We have many possible kinds of pipes, meaning connection tunnels, and clearly many types of features.  We could divide each of these base classes into subclasses—real or virtual for “pipes”, for example, or hosted experiences versus in-line event/control tasks for “features”.  If the boundary layer of the future was equipped with a set of refined abstractions that could easily map up to services or down to implementations, we need only find examples of both to create a transformed model, at any point we desire.

The boundary layer, in fact, might be the best possible place to apply standards.  There are many variations in operator practices for service management and resource management today, and many different vendor and technology choices in play even without considering SDN, NFV, and the cloud.  Standards, to be effective, would have to accommodate these without constraining “transformation” and at the same time stay applicable to the critical “what we have now” dimension.  That’s what the boundary layer could accomplish.

No software development task aimed at abstraction of features and resources would ever be undertaken without some notion of a converging structure of “classes” that were built on and refined to create the critical models and features.  Since both services and infrastructure converge on the boundary layer, and since the goal of that layer is accommodation and evolution in parallel, it’s probably the place we should start when building our models of future services.  Since we have nothing there now, we would be free to pick an approach that works in all directions, too.

In the next blog in this series, I’ll look at how starting service modeling with the boundary point might work, and how it would then extend concepts and practices to unify transformation.

Service Lifecycle Management 101: The Service Layer

Yesterday I talked about service transformation through lifecycle management, starting with how to expose traditional networking services and features through intent models.  Today, I’m going to talk about the other side of the divide—service modeling.  Later, we’ll talk about the boundary function between the two, and still later we’ll take up other topics like how service architects build services and how portals then activate them.

The resource layer that we’ve already covered is all about deployment and management of actual resources.  The service layer is all about framing commercial offerings from contributed “Behaviors” or “Resource-Facing Services” (RFS) and sustaining commercial SLAs.  Again, there’s nothing to say that a common model and even common decomposition software wouldn’t be able to deal with it all.  I think TOSCA would enable that, in fact.  Whether that single-mechanism approach is useful probably depends on whether any credible implementations exist, which operators will have to decide.

The top of the service layer is the actual commercial offerings, which the TMF calls “Products” because they get sold.  These retail services would be represented by a model whose properties, ports, and parameters (those “three Ps”) are the attributes that the buyer selects and the operator guarantees.  The goal of the whole service modeling process is to decompose this single retail service object (which you will recall is an intent model) into a set of deployments onto real resources.  That, in my view at least, could include functional decomposition, geographic/administrative decomposition, or a combination thereof.

Functional decomposition means taking the retail service properties and dividing them functionally.  A VPN, for example, could be considered to consist of two functional elements—“VPNCore” and “VPNAccess”.  It would be these two functional elements that would then have to be further decomposed to create a set of Behaviors/RFSs that included our “Features” and “Pipes” primitive elements, and that were then deployed.

Geographic/administrative decomposition is essential to reflect the fact that user ports are usually in a specific place, and that infrastructure is often uneven across a service geography.  For example, a “VPNAccess” element might decompose into at least an option for cloud-hosted VNFs where there was infrastructure to support that, but might decompose into “pipes” and physical devices elsewhere.

Probably most services would include a mixture of these decomposition options, which I think means that what we’re really looking for is a set of policies that can control how a given higher-layer element would be decomposed.  The policies might test geography, functionality, or any other factor that the operator (and customer) found useful.  Because the policies are effectively based on retail element parameters, the relevant parameters have to be passed from the retail element, down as far as they’d need to be tested in policies.
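A geographic policy of the kind just described might be sketched this way; the zone names and subordinate elements are invented for illustration:

```python
# Where cloud infrastructure exists, "VPNAccess" decomposes to hosted VNFs;
# elsewhere, to physical devices.  Zone data is hypothetical.
CLOUD_ZONES = {"metro-east", "metro-west"}

def decompose_vpn_access(site_zone):
    """Decomposition policy: test geography, return subordinate elements."""
    if site_zone in CLOUD_ZONES:
        return ["Pipe:access", "Feature:vnf-vcpe"]
    return ["Pipe:access", "Feature:physical-cpe"]

in_footprint = decompose_vpn_access("metro-east")
out_of_footprint = decompose_vpn_access("rural-north")
```

The retail parameter being tested here is just the site location; a fuller policy set would also test the functionality and SLA parameters passed down from the retail element.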

Policies that control decomposition have, as their goal, selecting from among the possible deployment options associated with a given retail feature.  These features could be considered analogous to the TMF’s “Customer-Facing Services” or CFSs, but the latter seem to be more likely to be rigorous functional or administrative divisions, almost “wholesale” elements.  What I’m describing is more arbitrary or flexible; all you care about is getting to the point where resources are committed, not to creating specific kinds of division.  My functional and geographic examples are just that; examples and not goals.

If we presume (as I do) that the service domain is the province of “service architects” who are more customer- or market-oriented than technically oriented, it follows that the Behaviors/RFSs that are exposed by the resource layer are probably not going to be directly composed into services, or even directly decomposed from them.  Another likely role of the boundary layer is to frame the resource-layer offerings in the desired lowest-level units of service composition.

In our hypothetical technical and marketing meetings, a team from each group would likely coordinate what the boundary layer would expose to the service layer.  From this, service architects would be able to compose services based on pricing, features, and SLA requirements, and if technology/vendor changes were made down in the resource layer, those changes wouldn’t impact the services, as long as the paramount rule was followed: intent models must support common capabilities regardless of how they decompose.

The service layer is also where I think that federation of service elements from others would be provided.  If we have a service-layer object, we could have that object decompose to a reference to a high-level service object (a wholesale service element) provided by a third party.  The decomposition would activate an order process, which is a model decomposition, in another domain.  This could be a 1:1 mapping, meaning a single object in the “owning” service model decomposes to a like object in the “wholesale” service model, or the wholesale federated option could be one decomposition choice.

The service/resource boundary would set the limit of customer/CSR management visibility, and also the visibility a retail provider had into the service contribution of a wholesale partner.  Remember that every service/resource element is an intent model with those three Ps.  One of them includes the SLA the element is prepared to assert to its superiors, and every element at every level is responsible for securing that SLA either by selecting subordinates that can do it, or by providing incremental management practices.

The management practices are important because if we presume our service/resource boundary, then we would probably find that the network management and network operations processes, people, and tools would be directed to the resource layer, and the service management and OSS/BSS tools and practices at the service layer.  That would again open the possibility that the modeling and decomposition might differ on either side of that boundary, though I stress that I believe a single process from top to bottom would preserve management flexibility just as well.

I’ve tended to describe these models as hierarchies—a single thing on top and a bunch of decomposed subordinates fanning out below.  If you looked at the complete inventory of models for a given operator there would be a lot of “things on top”, and the trees below would often be composed of the same set of elements, with some adds and removals for each. That’s a form of the same kind of component reuse that marks modern software development processes.

One value of this complex “forest” of model trees is that we could define one set of service definitions that a customer might be able to order from a portal, and another that would require customer service intervention.  That would maximize self-service without risking instability.  In any case, the SLAs of the intent models would drive the portal’s management state, so the customer wouldn’t be able to directly influence the behavior of shared resources.

It’s also true that some “component reuse” would be more than just reusing the same model to instantiate independent service tenancies.  Some features, like the subscriber management or mobility management elements of IMS/EPC, are themselves multi-tenant.  That means that our modeling has to be able to represent multi-tenant elements as well as create tenant-specific instances.  After all, a federated contribution to a service might well be multi-tenant and as long as it did what the three Ps promised, we’d never know it because the details are hidden in the black box of the intent model.

We can’t say as much about the service layer as you might think, other than to say that it enforces commercial policies.  The real details of the service layer will depend on the boundary layer, the way that the service and resource policies and models combine.  There are a number of options for that, and we’ll look at them in the next blog in this series.

Service Lifecycle Management 101: Defining Legacy Services via Intent Modeling

One of the challenges of transforming the way we do networking is the need for abstraction and the difficulty we experience in dealing with purely abstract things.  What I’ll be doing over the next week or ten days (depending on what else comes up that warrants blogging) is looking at the process of building and running the network of the future, as a means of exploring how technologies should be used to get us there.  I’m hoping that the process example will make the complexities of this evolution easier to understand.

We know by now that the best way to do transformation is to automate the service lifecycle from top to bottom, and the best way to do that is to model the services and decompose the models to drive software processes.  There’s probably not much disagreement on this, but there’s a lot of mystery in the area of how that can be done.  The goal here is to work to dispel some of that mystery.

The approach I’m going to advocate here is one that separates the commercial (service-domain) and technical (resource-domain) activity, and that is based on intent modeling and abstraction.  I’m a fan of top-down approaches, but in this case I’m going to start at the bottom because we have a network already, and the first test of any new network methodology is whether it can embrace where we are and so jump from that to where we want to be.

Network “services” at the high level are made up of two things—“pipes” and “features”.  A pipe is something that has two ends/ports and provides for the movement of information through it.  A feature has some indeterminate number of ends/ports, and the outputs are a complex function of the inputs.  Everything from access connections to the Internet can be built using these two things.
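The two primitives can be captured as a pair of tiny abstractions. This is a hedged sketch, with class and field names invented for illustration; it only encodes the structural distinction the text draws, namely that a pipe has exactly two ends while a feature has an indeterminate number.

```python
from dataclasses import dataclass, field

@dataclass
class Pipe:
    """Two ends/ports; moves information through unchanged."""
    name: str
    ports: tuple = ("in", "out")   # always exactly two ends

@dataclass
class Feature:
    """Indeterminate number of ends/ports; outputs are a complex
    function of the inputs."""
    name: str
    ports: list = field(default_factory=list)

# Everything else is built from these two primitives:
connection = Pipe("Connection")
vpn = Feature("VPN", ports=["site-a", "site-b", "site-c"])

assert len(connection.ports) == 2
assert len(vpn.ports) == 3
```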

When network architects sit down to create a model of the network of the future, they’ll be building it from nuclear pieces that we’d likely recognize, things like “VPN”, “VLAN”, “Application”, and “Connection”.  The temptation might be to start with these elements, but a good software architect would say that you have to go back to the most basic form of things to standardize and optimize what you’re doing.  So “Connections” are “pipes”, and all of the other things we have listed here are “Features”.  Keep this in mind as we develop this.

Our network architects should start their processes by defining the things that infrastructure as it now exists can deliver.  A “network” today is a layer of protocols that has specific properties, meaning that it is a combination of pipes and features that combine to deliver specific capabilities.  I’ve called these capabilities Behaviors in my ExperiaSphere work, and they are roughly analogous (but not exactly so) to the TMF’s notion of Resource-Facing Services (RFS).  All of the technical pieces of current retail or wholesale services are Behaviors/RFSs.

An RFS should be functional, not procedural, meaning that it should describe what happens and not how it’s done.  If we have an RFS called “VPN”, that means in our convention that it’s a Level 3/IP private network feature with an unspecified number of access ports.  It doesn’t mean it’s MPLS or RFC2547 or SD-WAN; all of these are means of implementing the VPN RFS.  The same is true for our “Firewall” feature, our “IMS” feature, and so on.

When our network architects are done with their process, they’d have a list of the “feature primitives” that are used to create services based on current technology.  This is an important fork in the road, because it now defines how we achieve service automation and how we take advantage of the feature agility of virtualization.

The goal of service automation is to define a set of models and processes that will deliver each of our abstract features no matter what they’re built on.  That means that all mechanisms for building a VPN would be structures underneath the general structure “VPN”.  We have to define “VPN” in terms of its properties, its ports, and the parameters (including SLA) that it either accepts or asserts, then we have to ensure that every mechanism for implementing the VPN supports exactly that set of things, no more or less.
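The "exactly that set of things, no more or less" rule can be expressed as a simple conformance check. This is an illustrative sketch only; the contract fields and the particular properties and parameters shown are invented, not taken from any real VPN model.

```python
# A "three Ps" contract for the abstract VPN feature. All keys and values
# here are hypothetical examples, not a real service definition.
VPN_CONTRACT = {
    "properties": {"layer": 3, "private": True},
    "ports": {"access"},   # port *types*; the number of ports is unspecified
    "parameters": {"bandwidth", "latency-sla", "availability-sla"},
}

def conforms(impl, contract=VPN_CONTRACT):
    """An implementation must expose exactly the contract: no missing
    items and no extras."""
    return (impl["properties"] == contract["properties"]
            and impl["ports"] == contract["ports"]
            and impl["parameters"] == contract["parameters"])

mpls_vpn = {
    "properties": {"layer": 3, "private": True},
    "ports": {"access"},
    "parameters": {"bandwidth", "latency-sla", "availability-sla"},
}
# A hypothetical implementation that omits the SLA parameters:
sdwan_vpn = dict(mpls_vpn, parameters={"bandwidth"})

assert conforms(mpls_vpn)
assert not conforms(sdwan_vpn)
```

Because equivalence is checked exactly, any implementation that passes can be substituted for any other underneath the same "VPN" intent model.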

Ideally, you’d think up your properties, ports, and parameters based on the functionality you’re describing, but in the real world it’s likely that you would do a rough cut based on foresight, then refine it as you subdivide “VPN” into the various ways you could create one.  Our team of network architects would probably take this approach, and at the end of their work they’d have a complete list of the “three Ps” for each feature.  This would become input into the work of network engineers who would take charge of the feature implementations.

A feature, as a high-level abstraction, can be implemented in any way or combination of ways that conforms to the high-level description (our three Ps).  In today’s networks, implementation variations are caused by geographic scope, administrative areas, technology or vendor differences, and so forth.  For a given feature, like our VPN example, the first step would be to list out the various implementation options and the way that a given service would be assigned a set of these options.  Thus, decomposition of an abstract VPN feature starts by examining the way that an implementation (or set of implementations) is selected.  For each, you’d then have network engineers describe the process of deploying the implementations they’re responsible for, and mapping between their parameters and those of the high-level feature.
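The first decomposition step, selecting among implementation options, can be sketched as a small policy function. The implementation list and selection keys (region, technology) are invented for illustration; real selection policies would be whatever the network engineers define.

```python
# Hypothetical catalog of implementations of the abstract "VPN" feature,
# differing by geography and technology as the text describes.
IMPLEMENTATIONS = [
    {"name": "mpls-east", "region": "east", "technology": "MPLS"},
    {"name": "mpls-west", "region": "west", "technology": "MPLS"},
    {"name": "sdwan-any", "region": "any",  "technology": "SD-WAN"},
]

def select_implementation(region, prefer=None):
    """Pick an implementation by technical policy only; no commercial
    terms are visible at this level."""
    for impl in IMPLEMENTATIONS:
        if impl["region"] in (region, "any"):
            if prefer is None or impl["technology"] == prefer:
                return impl["name"]
    return None

assert select_implementation("east") == "mpls-east"
assert select_implementation("north") == "sdwan-any"   # falls to the "any" option
```

The function stands in for the boundary-layer decision; everything below it (how "mpls-east" actually deploys) stays hidden inside the selected implementation.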

The implementations referenced here would not need to be modeled or described in any specific way, as long as their model/description was suitable for decoding by something (a DevOps tool, YANG, etc.) and as long as the model could be referenced and activated by the high-level selection process just described.

I mentioned the policy-based selection of implementation alternatives, and this would make up what I’d call a boundary layer, meaning that in theory there is a set of processes that link retail services to network services, and could be considered to be divided among those two things in any reasonable way.  The only caveat is the requirement of isolation; you should never mix commercial and technical policies because that would risk setting up a brittle service definition that might expose implementations up in the service layer where they definitely don’t belong.  See below!

The other pathway from the high-level feature model inventory is to service architects who want to build new features, those that perhaps were not available with standard network devices but would be suitable for the application of virtualized features.  An example might be a publish-and-subscribe tool for event distribution.  The service architect would define the high-level feature (“PandS”) and would also provide the three “Ps” for it.  The result would then again be turned over to a network or server/software engineer for translation into a deployment and lifecycle implementation.

To return to an earlier point, it’s important in this model that the implementation differences that are reflected when decomposing the high-level objects have to be technical policies and not business policies.  What we want here is for the “Behaviors” or RFSs that are exposed by infrastructure to be the boundary between the service domain and the resource domain.  Customer portals and customer service reps should not be making implementation decisions, nor should commercial issues be exposed directly to resources.  It’s OK to have parameters passed that guide selection, but these should be technically and commercially neutral.

I’ve noted before that I think a modeling approach like TOSCA is the best way to describe services, even ones that have nothing to do with hosted features.  However, you can see that since the decomposition of our RFSs or Behaviors into actual resource commitments is hidden inside the RFS/Behavior object, there’s no reason why we couldn’t let the decomposition be handled any way that works, meaning that it could take advantage of vendor features, previous technology commitments, etc.

If this approach is followed, we could build models of current services and describe those services in such a way that automated lifecycle processes could operate, reducing opex.  These same processes would also work with implementations of current features that had been translated to virtual-function form, facilitating the evolution to SDN and NFV where the business case can be made.

Some will say that by tapping off opex benefits, this approach could actually limit the SDN and NFV deployments.  Perhaps, but if it’s possible to utilize current technology more efficiently, we need to identify that mechanism and cost/benefit analyze it versus more radical transformations.

In the next blog on this topic, I’ll talk about the service layer of the model, and in the one following it the way that we could expect to see these two layers integrated—those pesky boundary functions.  The other topics will develop from these three main points; stay tuned!

This Year is the Crossroads for Networking

There seem to be a lot of forces driving, suggesting, or inducing major changes in the networking industry.  As indicators, we have mergers at the service provider, equipment vendor, and content provider level, and we have proposed breakups of hardware and software at the equipment level.  Another hardware player broke itself to death, selling pieces to multiple parties.  To this mix, add in the fact that it’s clear that in the US, regulatory policy is going to shift decisively, and you have a recipe for some chaotic times.  Let’s look at what’s happening, and how it might all shake out.  I warn you that this is a long blog that covers a lot of territory, but this is our industry and we have to work through it thoroughly.

The heart of the present situation is the “Internet business model” that evolved from a time when (sadly) Wall Street analysts were framing (accidentally) the structure of the industry of the future.  In the ‘90s, when the Web was driving massive changes in consumer data usage, the Street took it on themselves to value ISPs based on their subscriber count.  One picked $50,000 per customer, for example.  Needless to say, it didn’t take ISPs long to realize that the key to getting a high valuation for your stock was to have a lot of customers.  This led to the “all-you-can-eat” and “bill-and-keep” model of Internet pricing, a sharp change from the traditional settlement model of earlier data services.

That current business model divides the Internet into the “top” and “network” players.  In a pure bill-and-keep world, the former players (including consumers) pay to attach to the Internet via their ISP, but not for what they send or receive.  The ISPs bill, and keep what they get, which from almost the first has meant that priority services are difficult to create, even if regulations permit it.  You can’t expect others to offer your traffic priority when you got paid for the feature and they didn’t.  The wireline world says that you can use unlimited data, and that’s sometimes true in wireless broadband too, but in both cases, you may have your speed throttled at some point if you go over a limit.  That’s the “all-you-can-eat” part.  The marginal cost of bandwidth is near-zero.

In this world, fortune has favored those in the top layer, the OTTs, because they provide the meat of the service.  What separates the Internet from the prior data services is that the Internet is really not a communications service but a content service.  However, it overlays on a communication/connection service, and the underlay (network) part is available to all.  OTT versions of underlay services (VoIP is an example, as is the Internet VPN) have competed with the service providers’ native services, and the ISPs have little ability to recover the cost of the incremental traffic generated either by cannibalizing their current services or carrying new OTT services.  The result has been a steady decline in revenue per bit carried.  That puts pressure on capex, which puts pressure on the network equipment vendors.

If content is the service, then it makes sense for service providers to want to buy content producers.  Comcast and NBC, AT&T and both DirecTV and Time Warner, are examples of buying yourself higher on the value chain.  So, arguably, is Verizon and Yahoo.  On the network equipment side, we’ve seen partnerships like the Cisco/Ericsson relationships and outright mergers like Alcatel-Lucent and Nokia.  That hasn’t solved the problem of capex pressure on buyers, and that means network operators are turning to lower-cost commodity hardware and working hard to avoid vendor lock-ins.

The lock-in piece is especially problematic to bigger vendors, who rely on symbiosis among their product lines to give them control of large deals.  This has been a worry for network operators for a decade, and it’s the driving force behind AT&T’s Domain 2.0 approach to dividing networking into pieces that assign vendors to limited roles.  Companies who try to take a big bite out of network functionality are hit hardest, and smaller players hardest of all.  Brocade had big dreams and big opportunities, but not big enough thoughts and execution.

The commodity hardware issue is an offshoot of a bunch of trends, not the least of which is the Facebook-driven Open Compute Project, but which also includes hosted routing and NFV.  If network equipment vendors are going to lose the hardware part of their revenue stream anyway, it could make sense to unbundle the network operating system software and sell it separately.  Otherwise there’s a rampant open-source market stimulus, and you lose everything.  Some vendors have already taken this approach, at least in part, including Cisco rivals Arista and Juniper.

Cisco is now reportedly looking at a revolutionary step, unbundling its hardware/software elements (Cisco says this is “unsubstantiated”, which isn’t exactly the same as false).  I think that this move would represent a logical response to conditions, but I also think that if it’s done it will validate the commoditization of network hardware.  Just because Cisco will reportedly offer a code-named Lindt standalone network software stack to run on white-box gear doesn’t mean that everyone will buy it.  If you admit to commodity hardware, you admit to interchangeable software.  The capital burden of the hardware doesn’t lock buyers to you anymore.  True or not true in Cisco’s case, unbundling software from hardware has started among competitors, and is a sign of an industry facing pressures that can only mount over time.

I’ve outlined the business-model backdrop for all of this because the business model issue is what’s driving things, and that drive will continue.  Because the regulatory changes are very likely to impact the business model of the Internet, the future of networking will depend on how both providers and equipment vendors respond.

The latest round on the regulatory side is the undoing of the extensions in Internet privacy that were put into place by the Obama administration in its last days.  These rules would impose restrictions on how the ISPs themselves can collect and use personal data, and they were opposed by many from the first on the basis that they continued the policy of offering benefits to the “top” of the Internet that were denied to the bottom, or network, providers.  They were not in effect at the point of repeal, so what’s the impact?  It could be a lot.

Service providers are heavily regulated and thus regulation-driven in planning.  Having seen the writing on the wall with respect to personal data, they either stayed out of the ad game or dabbled in it by buying up OTT companies (Verizon and Yahoo come to mind).  If the ISPs have the same right to collect personal data without consent as the OTTs like Google and Facebook, then they could be competitors in the ad-driven or personal-context-driven services spaces.  Even more significantly, the overall regulatory trends both here in the US and internationally seem to be shifting toward a more balanced model (which by the way has been recommended by experts all along).  Such a shift might overturn things like prohibitions on paid prioritization and settlement.  It could change literally everything, and that is the point on which the future turns.

“Could”, because relaxing regulations in general could lead operators to believe they’ve been given a new lease on connection-services life, not a new opportunity to look beyond them.  Whether we’ll continue to see commoditization and consolidation or a new age in networking will depend on whether operators can see a clear path to those new opportunities.  If they don’t see one, they’ll fall back—and fall literally as an industry—on the familiar.

Both network equipment vendors and service providers are victims of decades of inertial, supply-side-driven, behavior.  They fear competition more than they seek opportunity, and that is true on both the seller and buyer sides.  Now we have what might be the killer opportunity, a chance for the turtle to grow muscular hind legs to overcome the inherent race disadvantage.  Will the operators seize that opportunity?  Will vendors encourage them?  We don’t know yet, and whether they do or not determines whether network equipment stays separate from experience-based services.  That determines whether it’s valuable, and whether vendors who provide it can prosper.

The key to vendor success in the future isn’t to divorce hardware and software to take advantage of and promote commoditization at the same time (at best a zero-sum game, and probably worse than that), but to focus on what operators are going to need to have to address their own problems (which vendors and operators know today) and opportunities (which both parties need to think about, in terms of where they are and how they might be exploited).

What are the impacts of being cleared to collect personal data?  First, the collecting has to happen, and if you’re not a party to the relationship directly (you don’t own Facebook or provide an alternative) then you have to be able to detect what’s happening from the outside.  What are the implications of using the interior data of a web relationship with a third party?  Second, you have to be able to monetize what you learn, which means that you either have to support ad-sponsored stuff or you have to offer paid services whose personalization is based somehow on collection of information.

The collecting part of this is really problematic because the interior data is already largely encrypted via https.  Further, it’s not clear that just having Congress void the FCC ruling on collecting private data would give ISPs the right to actually tap the conversation.  They would very likely have to limit themselves to learning things from the visible aspects of the relationships—the IP addresses that they necessarily see, and the DNS requests that they also see and in most cases actually field (your default DNS is usually set by your access provider, though you can override that).  What can be gleaned from the minimal data actually available?  Alone, not much—far less than the OTTs who provide the services being used would have available.  In combination, perhaps a lot.  If you know everything the user has going on, even at a superficial level, you have context.
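What that superficial correlation might look like can be sketched as a simple categorization of DNS queries. This is purely illustrative: the domain names and categories are invented, and a real system would also fold in addresses, timing, and movement data.

```python
from collections import Counter

# Hypothetical mapping from queried domains to activity categories.
CATEGORY = {
    "video.example.com":  "streaming",
    "social.example.com": "social",
    "maps.example.com":   "navigation",
}

def context_profile(dns_log):
    """Summarize a user's DNS queries into category counts, which is
    roughly the level of context recoverable when payloads are
    encrypted end to end."""
    return Counter(CATEGORY.get(query, "other") for query in dns_log)

log = ["video.example.com", "social.example.com",
       "video.example.com", "unknown.example.net"]
profile = context_profile(log)
assert profile["streaming"] == 2
assert profile["other"] == 1
```

No single entry tells you much; the value, as the text argues, is in the aggregate picture of what the user has going on.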

Ad personalization is all about context.  Personalized services are all about context.  There is surely an event-creating process associated with getting data that new regulations would allow you to get, but the bigger problem is making it valuable.  The irony is that if operators had looked at the IoT opportunity sensibly, they’d have seen that the correlation of contextual information is the real gold mine in IoT too.  They might then have seen that common technology features could support IoT and their new ad opportunity.

New ad opportunities are nice, particularly if you see your current competition being created and funded by that source, but advertising is still at a zero year-over-year growth rate, and online advertising isn’t doing much better.  It’s also true that there is little opportunity for ISPs to snoop into web streams; all it takes is an HTTPS session to stomp that out.  There is an opportunity for operators to digest information they get from Internet addresses and, most of all, from DNS hits.  There’s an opportunity to correlate voice/SMS session data with other things like movement of the user, and to contextualize voice/SMS contacts with subsequent searching and other web activity.  Operators can share in that, but they can’t own the space because the actual providers of the services have the edge, those OTTs who have created social media and web video.

This is where the vendors need to start thinking.  Contextual data from a lot of sources, generated as events and combined using complex event processing, is the only way that the regulatory shift can really benefit the network operators, unless they want to actually compete with OTTs.  If that’s what they want, then they’ve always had the option.  Look at Verizon and Yahoo (again).

The contextual opportunity—both from the service context side and the IoT side—is also probably the last hope of operators and network vendors to pull the industry out of commoditization.  Even if you could improve operational efficiency with service automation, you don’t have anything more than a quick fix to the profit-per-bit problem.  Only new revenue can save you in the long term.  My modeling has consistently said that there is over nine hundred billion dollars per year in new opportunity that could be exploited by network operators.  None of it comes from enhanced versions of connectivity and transport services—it’s all from carrier cloud.  That’s already transforming vendor business, driving more data center switching while routing is under pressure.  If vendors think realistically about what carrier cloud will mean, they can be a player and even a driver in addressing that new revenue.  If not, then either another vendor will step up and win, or operators will go entirely to open-source software and commodity hardware.

Huawei’s sales were up this quarter, and profits were stagnant.  Even the network equipment price leader can’t make a business success out of the current market, and it’s only going to get harder to do that if “the current market” is what you think will save you.  On the other hand, the future is truly bright.

A Somewhat-New Approach to VNFs

Most of you know that I like the concept of a VNF platform as a service (VNFPaaS) as a mechanism for facilitating onboarding and expanding the pace of NFV deployment.  That’s still true today, but I had some recent conversations with big network operators, and they tell me that it’s just not in the cards.  The time it would take to redirect industry efforts along those lines is too great; NFV would be obsolete by the time the steps were taken.  They did have an alternative that I think has merit; the notion of supporting a series of cloud PaaS frameworks as coequal VNFPaaS models.  There may even be a sign of commercial (vendor) interest in that approach.

The NFV specifications don’t define a specific hosting environment for VNFs.  Presumptively, they are deployed on a VM with a complete set of supporting platform software (operating system and middleware) bundled with them.  There are also minimal definitions to establish how a VNF in such a framework would link to the rest of the NFV management and orchestration structure.  This combination generates a lot of variability, and that means that prepping VNFs for deployment is hardly a standard process.  VNFPaaS would declare a single framework for deployment and execution, one that could provide a single set of APIs to which VNFs could be written.  Obviously, that would facilitate the onboarding and use of VNFs, but whose framework would the VNFPaaS be?  That question can’t be answered in a world where consensus has to be reached for progress, and when consensus means competitive trade-offs by all those involved.

The “multi-VNFPaaS” approach says that while software development takes place on multiple platforms, there are a small number that dominate.  One example is Java, which is supported on nearly all the hardware out there.  Suppose, instead of trying to get everyone to agree on one platform, we took the group of popular ones and prepped a VNFPaaS for each?  Staying with the Java example, we could define a specific set of Classes that represented the extensions to basic Java needed to make VNFs work with the central NFV processes.  You could have cooperative efforts define these platform-specific VNFPaaSs or you could let the platform’s owners do the heavy lifting to qualify their stuff.

Operators might prefer having full portability of VNFs across platforms, but that’s not in the cards.  They say that because most software is written to one single platform, it would be difficult to port software from one platform to another.  Basic service APIs wouldn’t line up.  Today, for example, we’re still working on a perfect solution for porting between Java and Microsoft’s .NET, even though we have more than a decade of experience.  Thus, while it might seem that we’d be giving up portability by accepting multiple VNFPaaSs, the truth is that probably wasn’t a realistic option anyway.

Even if we accept the notion of multiple platforms for VNFPaaS, there’s still the question of coming up with those Classes or APIs that provide the links between the VNFs and NFV.  We end up with a set of “basic service APIs” that access operating system and middleware services and that are outside the VNFPaaS scope, and another set of APIs that have to connect to the NFV foundation or core services and infrastructure.  This other set of APIs should be “adapters” that convert a convenient API structure for the platform involved to whatever API is used by the NFV core implementation.
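The adapter idea can be illustrated with a minimal sketch. Every class and method name below is invented for the purpose; the NFV specifications define no such APIs, and the point is only the shape of the translation, with a platform-convenient call on top and the core implementation's native form underneath.

```python
class NFVCoreAPI:
    """Stand-in for one NFV core implementation's (implementation-
    specific) management interface."""
    def post_event(self, vnf_id, payload):
        # A real core would dispatch this to MANO/VNFM; here we just
        # return what would be sent, so the translation is visible.
        return ("core-event", vnf_id, payload)

class VNFPaaSAdapter:
    """The single, convenient API a VNF is written against; it hides
    whichever core implementation sits behind it."""
    def __init__(self, core):
        self._core = core

    def report_status(self, vnf_id, status):
        # Translate the platform-side call into the core's native form.
        return self._core.post_event(vnf_id, {"status": status})

adapter = VNFPaaSAdapter(NFVCoreAPI())
result = adapter.report_status("vnf-42", "UP")
assert result == ("core-event", "vnf-42", {"status": "UP"})
```

Swapping in a different core implementation means writing a new adapter, not rewriting the VNF, which is the portability the multi-VNFPaaS approach is after.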

Which is?  We really need to have some specific API here, which means that we really need to have a specific structure in mind for the VNF management process (VNFM).  There is a presumption in the current specs that VNFM has two pieces, one that is co-resident with the VNF and the other that is centralized and shared.  Leaving aside for the moment just how functionality is divided among these and what the specific APIs are for each, making this approach work in a multi-platform VNFPaaS would be a matter of the implementation of the API within the platform.  You could envision a platform that took responsibility for all VNFs using a remote service, one that left everything to a local process embedded in the VNF, one that used a local service bound to the VNF, or any combination of these.

What has to be standardized in this case is the relationship between the VNF-bound VNFM and the central VNFM.  It wouldn’t be too hard to do that if we had a complete picture of what a VNF deployment looked like in terms of network address space and components, but we really don’t have that as yet.  What could be done without it?  Well, the problem is that unless we assume that every NFV implementation exposes a common central VNFM API, we’re stuck with customization of the platform’s management APIs to match the APIs of the core NFV software.  That would mean that VNFs wouldn’t be fully portable even within a given platform, because different NFV MANO/VNFM implementations might use different APIs.

The good news in this area is that we have a number of activities working on streamlining the onboarding of VNFs.  The specific details of these are difficult to obtain because most of the initiatives haven’t produced public output, but since it’s hard to see how you could automate a service lifecycle without any standard interfaces to work with, they should at least expose the issues.  That would probably be a useful step in solving the problems.

The reason for the “probably” qualifier is that we’re still not thinking systemically about the problem of service lifecycle management.  There are multiple management models, ranging from the “resource management only” approach that says you just keep things working as planned and it will all work out, through the “service-driven” approach that says all problems are detected at the service layer and remediation is driven from the top.  Which are we talking about?  There are issues of the network model for NFV—you have services and customers who might be “tenants” and have to be kept separate at one level, yet in most cases will resolve onto the Internet address space together.  You also have your own NFV control processes, which have to be isolated from tenants, and yet have to provide them services.  Till all of this stuff is laid out, it’s going to be difficult to apply the lessons of onboarding VNFs to the broader problem of service lifecycle management.

I don’t want to put on rose-colored glasses here, though.  Facilitating VNF onboarding removes a barrier to VNF deployment, but a removed barrier is significant to a driver only if they are on a journey where the barrier is encountered.  We still have major issues with making a broad business case for NFV.  My view that such a business case demands addressing service lifecycle management and its automation hasn’t changed.  Some of the new initiatives on onboarding could also expose issues and opportunities for lifecycle management overall, and this could be instrumental in proving (or disproving) whether operations efficiencies are enough to drive NFV forward.

The Future of Satellite Broadband

I blogged about cable and telco broadband last week, which leaves us a third significant broadband source—satellite.  The advantages of satellite broadband are obvious; you can get it anywhere in the world.  The disadvantages have been equally obvious—higher cost and performance issues on at least some delay-sensitive applications.  There are rumors that NFV or 5G or both will promote satellite, and also rumors that 5G could deal it a mighty blow.  As usual, we’ll have to wade through the stories to get a realistic picture.

Satellite broadband for commercial applications (leaving out military, in other words) consists largely of three categories—fixed satellite broadband, mobile broadband to ships, aircraft, etc., and “leased line” broadband offering point-to-point bandwidth.  We can expect demand in all these spaces to increase over the next five years, and for some specific markets demand could easily triple.  All that is good news.

The bad news, for the satellite players at least, is that there’s been a rampage of launch activity and scheduled activity, and most of the new birds are the HTS (high-throughput satellite) variety.  How much?  Start with the fact that current satellite broadband capacity is on the order of a terabit per second.  One satellite design (ViaSat-3), which will see multiple launches, has a per-bird capacity that exceeds the total capacity of all the broadband data satellites currently in use, more than that full terabit.  The industry will probably see capacity grow by five times or more by 2020, and then double or triple again by 2022.  The result is that the Street expects the cost of satellite bandwidth to decline to about a quarter of current levels by 2020, and full HTS deployment alone could drive it down to half again that level by 2022.
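Working through that arithmetic with the rough figures quoted (these are the text's forecast numbers, not actual launch manifests, and the low end of each range is used):

```python
# Capacity trajectory, starting from roughly a terabit per second today.
current_capacity_tbps = 1.0
capacity_2020 = current_capacity_tbps * 5   # "five times or more by 2020"
capacity_2022 = capacity_2020 * 2           # "double or triple again by 2022"

# Price trajectory, normalized to current levels = 1.0.
price_2020 = 1.0 / 4        # about a quarter of current levels by 2020
price_2022 = price_2020 / 2  # "half again that level" by 2022

assert capacity_2022 == 10.0   # at least ~10 Tbps
assert price_2022 == 0.125     # roughly an eighth of current pricing
```

Even at the conservative end, that is a tenfold capacity increase against a price floor near one-eighth of today's, which frames the elasticity question the rest of the post takes up.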

All of this is the familiar geosynchronous market, too.  Low- and medium-earth-orbit (LEO/MEO) plans are even more ambitious, and they could add four or five terabits of capacity by 2022, and offer lower latency for real-time communications and other applications that are sensitive to geosynchronous propagation delay.  It’s harder to estimate the total impact of the LEO/MEO satellites because the usage and geometry of the path can involve multiple satellites and thus make load predictions difficult.  Suffice it to say that if the plans are carried out, thousands of new LEO/MEO satellites could be up there by 2022.

Obviously, the big question is how the demand growth and the supply growth will play together.  Satellite data service pricing tends to be negotiated for the longer term, so the biggest changes in price will probably begin in 2020 when current data carriage contracts are expiring in significant numbers.  Current contract pricing already shows a steep discount, and so it’s reasonable to expect to see signs of price/demand elasticity by the end of this year.  However, you can’t judge how that will impact the market without knowing what the total addressable market is.  A lot of that depends on rather subtle aspects of geography, demography, and consumer behavior.

The consensus of the Street’s positive analysts on the space is that all three of the satellite broadband commercial data opportunities have large upsides.  The negative analysts all say (of course) just the opposite.  I’ll look at the segments and try to sort out a realistic view of each.

Satellite fixed consumer broadband is potentially the most interesting of all the spaces because of price/demand elasticity.  A sharp increase in capacity and a corresponding drop in prices would enable as much as 30x growth in this space, most of it coming from emerging market areas.  The challenge is that the equipment needed on the ground side is still costly for these markets, though there may be an opportunity to “hybridize” satellite broadband with other (fixed wireless access) services to reduce per-household cost.

Some Street analysts say that there could be a billion new satellite broadband customers unlocked at the new price points.  I’m doubtful.  A billion new users, meaning a billion new VSAT terminals?  I don’t see the data to prove that such a market exists, and I’m particularly doubtful that the initial cost of the terminal fits developing-market budgets.  FWA would lower the cost of access but reduce the number of satellite users by sharing the VSAT.  Then there’s the fact that the population concentrations you’d inevitably have to presume for a billion new users would be attractive enough to justify looking at terrestrial solutions instead.

I think that the realistic growth opportunity for this space is on the order of 5x by 2022, the time when the pricing curve will be declining the most.  That’s good, but the interesting thing is that my model suggests that the cost reductions needed to boost satellite broadband in developing markets would reduce the revenue from current satellite broadband applications by down-pricing bandwidth, to the point where in the pre-2022 period you’d see a slight decline in revenue.  This means that you’d need to make up the revenue loss by increasing deployment in major markets, and for satellite broadband that doesn’t seem to be in the cards.

The aircraft and marine broadband space looks more promising, and could even provide some of that revenue relief.  As people get more dependent on their phones, they come to expect to be able to get WiFi anywhere they’re spending time.  Today we see an uptake rate of less than 10% on most aircraft broadband services, and slightly less for maritime services.  The model says that maritime broadband could attain 100% penetration if it were free, and could achieve 40% penetration if costs were half what they are today.  That is within the range of reduction possible with HTS and LEO/MEO technology.  This could make every cruise ship a candidate for significant bandwidth—hundreds of megabits even for mid-sized ones.  This is probably the opportunity that has the best chance of creating demand enough to sustain overall revenue for the industry in the face of slipping bandwidth cost.
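My maritime penetration model boils down to a price/penetration curve anchored on three points: roughly 10% uptake at today’s prices, 40% at half price, and 100% if the service were free.  A minimal sketch of that curve, using simple linear interpolation between the anchors (the function name and piecewise-linear shape are my illustrative assumptions, not the model itself):

```python
# Hypothetical piecewise-linear penetration curve for maritime broadband,
# anchored on the three data points cited in the text: ~10% uptake at
# today's price, ~40% at half price, ~100% if free. Purely illustrative.
def maritime_penetration(price_ratio: float) -> float:
    """price_ratio: service price as a fraction of today's price (0..1)."""
    points = [(0.0, 1.00), (0.5, 0.40), (1.0, 0.10)]  # (price, penetration)
    for (p0, u0), (p1, u1) in zip(points, points[1:]):
        if p0 <= price_ratio <= p1:
            # linear interpolation between the two surrounding anchors
            return u0 + (u1 - u0) * (price_ratio - p0) / (p1 - p0)
    return points[-1][1]  # at or above today's price

print(maritime_penetration(0.5))   # 0.4
print(maritime_penetration(0.75))  # 0.25, midway between the 40% and 10% anchors
```

Note the curve steepens as price approaches zero, which is the usual shape of price/demand elasticity in underpenetrated markets, and why the HTS-driven price declines matter so much here.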

Aircraft is another big opportunity for satellite, because there are already airlines that offer satellite-based WiFi free and because consumers who are able to slake their thirst for social media on planes could well become even more dependent on broadband, and thus demand it any time they travel or stay anywhere.  Early data suggests that the uptake rate for broadband on aircraft is only about 30% even when it’s free, and while my model says that changing social trends could bring in-flight WiFi use to 100% on all flights, that wouldn’t be likely until after 2022.  Some consumer survey data suggests that the greatest near-term opportunity would lie in flights between 3 and 6 hours’ duration.  This would equate to a trip of about 1,500 to 2,500 miles, which is a bit higher than the average flight mileage in the US.  International market data seems to show a slightly lower mileage per flight.

One important trend in the airline space is the move to offer consumer broadband and video streaming as an alternative to in-flight entertainment.  Airline policies on entertainment vary widely by market, but in general the airlines offer it on longer flights, flights falling into that 3-to-6-hour sweet spot.  By deferring the cost of entertainment systems, airlines can justify subsidizing WiFi onboard, which then gets the airlines closer to the “free” WiFi that could bring radical changes to the market.   However, we can be fairly certain based on terrestrial content delivery practices that aircraft would end up caching their feature videos aboard, reducing the need to support a separate satellite stream per viewer.

Satellite “leased line” services, meaning point-to-point broadband, are in my view the most problematic of the opportunities.  There are absolutely locations where industrial operations or tourism demand broadband service but are too isolated to build out terrestrial infrastructure.  However, we all know that fiber optic cables span most of the world’s oceans today, and multiple times.  I think there is a clear disaster recovery opportunity here, and I also think that beyond 2022 we could see satellite leased line services supplementing terrestrial services for mobile coverage, etc.  I don’t see this application contributing much before that date.

The challenge we have here is that even if we saw total satellite broadband use double by 2020, which is possible, that would still represent demand equal only to about 40% of the bandwidth that will be available by then, neglecting any contribution from LEO/MEO.  Remember that one future HTS satellite could provide as much capacity as everything we have in orbit now.  The big question is whether anything else could come along.  What that might be falls into two categories, one a “supply-side” story and the other a “demand-side” story.
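That 40% figure follows directly from the numbers already cited, and it’s worth seeing the arithmetic laid bare:

```python
# Where the 40% comes from: ~1 Tbps of satellite broadband demand today,
# doubled by 2020, set against ~5x capacity growth (GEO/HTS only, no LEO/MEO).
demand_2020 = 1.0 * 2        # Tbps, if total usage doubles
supply_2020 = 1.0 * 5        # Tbps, ~5x capacity growth
utilization = demand_2020 / supply_2020
print(f"{utilization:.0%}")  # 40%
```

Add the LEO/MEO terabits on top of that supply number and the utilization picture gets worse still, which is the heart of the oversupply thesis.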

The supply-side theory is that modern virtualization initiatives (SDN, NFV, 5G) will level the playing field with respect to including satellite services in a higher-level retail (or MVNO) framework.  The problem with this is that, as all three of these named technologies have proven in their own spaces, just making something technically possible doesn’t mean you’ve made business sense of it.  Absent a demand driver that makes some technical limitation an impediment to making a boatload of money, I don’t see supply-side initiatives opening any meaningful opportunity.  However, the principles of these three initiatives might help operationalize evolving satcomm infrastructure more effectively, which as we’ll see could be important.

The demand-side story runs up against the reality that the really good broadband markets are served today with terrestrial technology because they are really good.  Satellite is not going to get cheaper than terrestrial options, particularly if we do start deploying FWA to enhance FTTN, starting just beyond 2020.  The aviation and marine markets are significant in terms of number of VSAT terminals, and both will likely contribute to considerable demand growth after 2020, but unless the pace of HTS deployment slows (unlikely) the gap between demand and supply will expand as we move into the next decade.

From a technology/equipment perspective, as opposed to a satellite provider/capacity perspective, the picture we’ve painted generates some interesting shifts and opportunities.  As unit bandwidth cost falls and satellite provider profits are pressured, the role of operations efficiency grows.  That’s particularly true when what satellite networks have to deliver is in effect what a terrestrial ISP delivers, with all the subnet addressing, gateway definitions, DNS/DHCP, video caching and CDN, and so forth.  The demand growth we’re seeing is all going to happen in spaces where even more IP management is linked into satellite service management.  There are definitely things emerging from SDN, NFV, and (in the future) 5G that could help control and even reduce the cost of creating and sustaining IP connectivity rather than just satellite connectivity.

So where do we end up here?  Leading up to about 2020, I think it’s clear that supply of satellite bandwidth will outstrip demand, and at the same time operations costs for the terrestrial part of the link (including onboard aircraft) will rise with the complexity of the delivery models being adopted.  The result will be profit compression on the part of the satellite providers, which is consistent with the negative thesis of the Street.

Beyond 2020 things get more complicated.  The negatives will continue to pile up unless there are steps taken to radically increase the delivery demand in the areas of aircraft and marine satellite broadband; no other options offer much.  5G will kick in here, but in my view 5G offers more negatives to the satellite industry than positives, because the FWA hybridization with FTTN will lower fiber-infrastructure delivery costs further and make it nearly impossible to extend satellite usage significantly except in the rural parts of developing economies, which clearly have fairly low revenue potential.  We could see satellite play a role in things like IoT, but absent a clear and practical IoT model (which we don’t have) we can’t say how much of a role that will be.

The net, in my view, is that satellite broadband will face trying times and profit compression for at least five years, and very possibly longer.  That will start to impact the launches planned beyond 2022, unless we figure out a new mission.  If that’s to be done, then we need to start thinking about it pretty quickly.

Who’s Winning the Telco/Cable Battle?

There has recently been a lot of media attention focused on the cable providers, not only because they’ve been emerging as players in some next-gen technologies like SDN, NFV, and the cloud, but because they’ve been gaining market share on the telcos after losing it for years.  All of this seems tied to trends in television viewing and broadband usage, but it’s hard to say exactly what factors are driving the bus, and so hard to know where it’s heading.

One thesis for the shift is that because cable infrastructure is fairly constant throughout the service area, cable companies can deliver broadband services more consistently.  Telcos usually have zones where they can justify high-capacity broadband infrastructure, where customer density and economic status is high, but others where it’s plain DSL.  There can easily be a factor of 100 between the fastest and slowest broadband available, and cable rarely has anything like that ratio.

Another thesis is television viewing.  Because TV is dominated by channelized video services, the competition for “broadband” was really a competition for video.  Cable infrastructure is inherently superior to DSL (and, most agree, to satellite) for delivering channelized video.  The slower DSL connections have to husband programming to avoid congestion on the access line, and I think this was a major factor in inducing AT&T to move to satellite video delivery.

The third theory is that it’s really mobile broadband that’s the culprit.  Telcos have been focusing increasingly on mobile services because they’re more profitable, and as a result they’ve been scrimping on modernization of their wireline services, both Internet and video.  The cable companies’ primary revenue and profit center is the delivery of TV and wireline broadband, so it’s not surprising that they’ve put more into those areas, and are reaping the reward.

There are other factors too, which might form the basis of their own thesis or might be a complication in one or more of the others.  Telcos came late to the TV delivery market, and had an initial advantage in being the new kid, able to cherry-pick geographies and tune services to beat competitive offerings.  Those benefits have now passed on.  Cable companies have been a bit more successful in consolidating than telcos, and up to the AT&T/Time Warner deal (yet to be approved, but it probably will be) the cable companies have had a leg up on getting their own content properties.  All of these points are factors.

The current situation is that cable companies, who lost customers to telcos from the time when telco TV launched, have started to gain market share back.  The shift is slow because it generally requires some considerable benefits to drive consumers to go through the hassle of changing their TV, Internet, and phone, but it’s already quite visible among new customers.  At the same time, there is an indication that TV isn’t the powerful magnet that it used to be.  Verizon reported that its vanilla-minimum-channel offering ended up taking about 40% of renewals and new service deals.  Streaming video has changed the game.

Streaming video’s immediate impact on both cable companies and telcos is to shift viewing away from channelized programming, even in the home.  This means that the inherent advantage of cable for channelized delivery is minimized, but it also means that satellite TV isn’t going to save low-speed DSL companies from cable predation and that you’ll need better WiFi and data service to the home.  The phone or tablet, or the streaming stick or smart TV, is the TV of the future, and it needs a data connection.  So far, the net advantage is with cable companies.

The next level of impact here is the mobile/TV symbiosis.  AT&T’s plan to offer unmetered mobile streaming to its DIRECTV customers, along with possible symbiotic features/services to enhance viewing of a telco offering on the telco’s own mobile network, would open ways to empower TV and fixed broadband providers who have a mobile service, which cable companies do not.  This is almost certainly why Comcast is looking to offer some sort of MVNO service that, like Google’s Fi, feeds on WiFi wherever possible.  Comcast has public WiFi hubs, and could certainly deploy more.

In my view, the future of wireline services is tied to the mobile device, which means that if cable companies don’t secure some form of MVNO offering that can give them some latitude in pricing video streaming, they are going to lose market share again, and probably very quickly.  Some on the Street think cable companies will romp wild and free for as much as five or six years, but I think they could end up losing share even in 2017.

All of this frames infrastructure planning too.  For telcos, it means that there is a renewed reason to look at streaming video to the home, but in the form of a pure on-demand service.  Things like sporting events and news could remain “magnetic” enough to justify channelized video, but you’d be better off using your streaming bandwidth to support on-demand streaming consumption.  Five or six people aren’t going to watch the same show at the same time on individual phones or tablets, after all.  For cable companies, it means you need to have a WiFi-centric MVNO or you’re dead.

This could all frame some of the 5G issues.  One of the applications of 5G that operators want to see is enhanced mobile speed—which would make video delivery easier and lower the operator cost to support a given number of streaming consumers in a cell.  Another is Fixed Wireless Access (FWA), which would use 5G radio technology at the end of an FTTN connection to make the last jump to homes and businesses.  These drive, in a sense, wireless and wireline convergence.  They also make network slicing more valuable, because all of a sudden, we could see a lot of new MVNO candidates.  Operators like Sprint and T-Mobile would almost surely be candidate partners for the cable companies because they’re not wireline competitors.  These two, by the way, are partners with Google in Fi.

The net here in my view is that there is no winner and no truly meaningful trend in wireline broadband or video at all, there is only a set of mobile-driven trends.  The people who can be players in the mobile space can pick their features and battles like the telcos did a decade ago in channelized video.  Those who can’t plan in mobile are now going to face major problems, and if 5G or 5G-like convergence emerges by 2020, they’ll have a serious problem creating a survivable business model by 2022.

Ciena’s Liquid Spectrum: Are They Taking It Far Enough?

The Ciena announcement of what they call Liquid Spectrum has raised again the prospect of a holistic vision of network capacity management and connectivity management that redefines traditional OSI-modeled protocol layers.  It also opens the door for a seismic shift in capex from Layers 2 and 3 downward toward the optical layer.  It could jump-start SDN in the WAN, make 5G hopes of converging wireline and wireless meaningful, and introduce true end-to-end software automation.  All this, if it’s done right.  So is it?  I like Ciena’s overall story, including Liquid Spectrum, but I think they could tell, and perhaps do, more.

Liquid Spectrum, not surprisingly, is about systematizing agile optics to make the optical network into more of a collective resource than a set of dedicated and often service-specific paths.  There is no question that this could generate significantly better spectrum utilization, meaning that more traffic could be handled overall versus traditional optical networking.  There’s also no question that Liquid Spectrum does a better job of operationalizing spectrum management versus what even agile-optics systems have historically provided.

Liquid Spectrum is a better optical strategy, but that point raises two questions of its own.  First, is it the best possible optical strategy?  Second, and more important, should it be an “optical” strategy at all?  These two questions are related, and harder to answer, so let’s start with the simple case and work up.

The most basic use of optical networking for services would be to provide optical routes to enterprise or cloud/web customers for things like data center interconnect (DCI).  For this mission, Liquid Spectrum is a significant advance in terms of simple provisioning of the connections, monitoring the SLA, and effecting restoration processes if the SLA is violated.  If the operator has latent revenue opportunity for this kind of service, then Ciena is correct in saying that it can bring in considerable new money.

As interesting as DCI is to those who consume (or sell) it, it’s hardly the basis for vast optical deployments even in major metro areas.  The primary optical application is mass-market service transport.  Here, the goal isn’t to create new services as much as new service paths, since truly new connection services would be very difficult to define in an age where IP and Ethernet are so totally adopted.  Liquid Spectrum’s ability to improve overall spectrum efficiency could mean that more transport capacity for services would be available per unit of capex, which is an attractive benefit.  The coming improved metrics/analytics of Liquid Spectrum will improve this area further.

It should be possible to combine some of the principles of intent-modeled networking, meaning SLA-dependent hierarchies, to define optical transport as a specific sub-service with an SLA that optical agility offered by Liquid Spectrum could then meet.  Since optical congestion management and path resiliency would be addressed invisibly within these SLAs and model elements, the higher layers would see a more reliable network, and the operations cost of that configuration should be lower.  It’s hard to say exactly how much because the savings are so dependent on network topology and service dynamism, but we’re probably talking about something on the order of a 10% reduction in network operations costs, which would equate to saving about a cent of every revenue dollar.
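A quick worked version of that cent-per-revenue-dollar arithmetic, reading the claim backwards.  The network-operations share of revenue is my assumption, chosen because it’s what makes the two cited figures consistent:

```python
# Reverse-engineering the claim: if a ~10% cut in network operations cost
# saves about one cent of every revenue dollar, netops must run roughly
# 10% of revenue. The share below is an assumption, not a reported figure.
netops_share_of_revenue = 0.10   # assumed: netops is ~10% of revenue
reduction = 0.10                 # ~10% opex reduction from optical agility
savings_per_revenue_dollar = netops_share_of_revenue * reduction
print(f"${savings_per_revenue_dollar:.2f} per revenue dollar")  # $0.01
```

Run the same arithmetic with the “ten times that amount” of other strategies and you’re talking about a dime per revenue dollar, which is why the comparison stings.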

That’s not insignificant, but it’s not profound given that other strategies under review could save ten times that amount.  The reason why optical networking, even Liquid Spectrum, falls short of other cost reduction approaches is the tie to automation of the service lifecycle.  Obviously, you can’t automate the service lifecycle down so deep that services aren’t visible.  Service automation starts at the protocol layer where service is delivered because that’s where the money meets the network.  Optics is way down the stack, invisible unless something breaks, which means that to make something like Liquid Spectrum a meaningful pathway to opex savings, you have to tie it to service lifecycle management.

Ciena provides APIs to do just that, and they cite integration with their excellent Blue Planet orchestration platform.  There’s not much detail on the integration; Blue Planet is mentioned only in the title of a slide in the analyst deck and the slide itself shows the most basic of diagrams—a human, a box (Blue Planet) and the network.  This leaves open the critical question of how optical agility is exploited to improve service lifecycle management.  Should we look at optical agility as the tail of the service lifecycle automation dog?

You absolutely, positively, do not want to build a direct connection between service-layer changes and agile optics, because you risk having multiple service requests collide with each other or make inconsistent requests for transport connectivity.  What needs to happen is an analysis of the transport conditions based on service changes, and the way that has to happen would necessarily be reflected in how you model the “services” of the optical layer and the services of the layers above.  We don’t have much detail on Blue Planet’s modeling approach, and nothing on the specific way that Liquid Spectrum would integrate with it, so I can’t say how effective the integration would be.
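One way to picture the arbitration that paragraph calls for is a transport-layer broker that admits or rejects service-layer capacity requests against the committed state of each optical path, so requests reconcile before anything touches the optics.  This is a hypothetical sketch of the pattern, not Blue Planet’s actual model; every name here is illustrative:

```python
# Hypothetical sketch: a transport-layer broker that arbitrates service-layer
# capacity requests instead of letting them drive agile optics directly.
# All class and method names are illustrative, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class OpticalPath:
    name: str
    capacity_gbps: float
    committed_gbps: float = 0.0

    @property
    def headroom(self) -> float:
        return self.capacity_gbps - self.committed_gbps

@dataclass
class TransportBroker:
    paths: dict = field(default_factory=dict)

    def request(self, service_id: str, path_name: str, gbps: float) -> bool:
        """Admit a request only if the path can honor it; a rejected service
        must renegotiate at the service layer rather than collide in the optics."""
        path = self.paths[path_name]
        if gbps <= path.headroom:
            path.committed_gbps += gbps
            return True
        return False

broker = TransportBroker(paths={"metro-a": OpticalPath("metro-a", 400.0)})
print(broker.request("svc-1", "metro-a", 300.0))  # True: fits within headroom
print(broker.request("svc-2", "metro-a", 200.0))  # False: would oversubscribe
```

The design point is the single serialization choke point: because every request passes through one admission check against committed state, two service-layer changes can never issue inconsistent demands to the optical layer, which is exactly the collision risk a direct service-to-optics connection creates.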

Another thing we don’t have is a tie between Liquid Spectrum and SDN or “virtual wire” electrical-layer technology.  There are certainly cases where connectivity might require optical-level granularity in capacity and connection management, but even today those are rare, and if we move more to edge-distributed computing they could become rarer still.  It would be logical to assume that optical grooming was the bottom of a grooming stack that included electrical virtual-wire management as the layer above.  I think Ciena would have been wise to develop a virtual-wire strategy to unite Blue Planet and their optical products.  Logically, Ciena’s packet-optical approach could be integrated with modern SDN thinking, and it’s a referenced capability for Blue Planet, but nothing is said in the preso about packet optical or Ciena’s products in that space.

There have been a lot of optical announcements recently, and to be fair to Ciena none of them are really telling a complete network-infrastructure-transformation story.  ADVA, who also has a strong orchestration capability, did an even-more-DCI-centric announcement too, and Nokia told an optical story despite having, in Nuage, an exceptional SDN story to tell.  Product compartmentalization is driven by a lot of things, ranging from the way media and analysts cover technology to the desire to accelerate the sales cycle by focusing on a product rather than boiling the ocean.  However, it can diminish the business case for something by demanding that it be considered alone when it’s really part of a greater whole.

You have to wonder whether this compartmentalization issue is a part of a lot of technology problems.  Many emerging technologies, even “revolutions”, have been hampered by compartmentalization.  NFV and SDN both missed many (perhaps most) of the benefits that could drive them forward because they were “out of scope”.  It seems that biting off enough, these days at least, is equated to biting off too much.

I think Ciena needs to bite a bit deeper.  They have an almost unparalleled opportunity here, an opportunity to create a virtual-wire-and-optics layer that would not only improve operations efficiency but reduce the features needed in Layers 2 and 3 of the network.  That would make it easier to replace Ethernet and IP devices with basic SDN forwarding.  Sure these moves would be ambitious, but Ciena’s last quarter didn’t impress the Street.  They need some impressive quarters to follow.  Competition is tough in optics, and the recent success of the open-optical Facebook Voyager initiative shows that it would be easy to subsume optical networking in L2/L3 devices rather than absorb electrical-layer features in optical networks.  If Ciena and other optical vendors lose that battle, it’s over for them, and only a preemptive broad service-to-optics strategy can prevent the loss.

Ciena has the products to do the job, and Liquid Spectrum is a functional step along the way.  It’s also an example of sub-optimal positioning.  You can argue that the major challenge Ciena faces is that it wants to be strategic but sells and markets tactically.  If you have a broad product strategy you need to articulate it so that your product symbiosis is clear.  If that doesn’t happen, it looks like you’re a hodgepodge and not a unified company.  Ciena has a lot of great stuff and great opportunities, including Liquid Spectrum.  They still need to sing better.