The Ups and Downs of Cisco’s “Self-Publishing” Network

Light Reading ran an interesting article featuring a Cisco presentation on “The Self-Publishing Network”.  The points Cisco is quoted on are provocative, and while I don’t fully agree with them, I think they reflect some important changes in the way we visualize networks, network services, and service lifecycle automation.

The basic premise is that network operators need to think of their networks in three layers—resources, orchestration/automation, and operations/business support (OSS/BSS).  I’ve said much the same thing in my own blogs, with the same layers in fact, so I don’t disagree with that premise.  We used to have a two-layer structure where OSS/BSS linked (via manual processes) directly to the resource layer, but we’ve moved into an age of APIs and models, and that introduces the orchestration and automation layer in between the two original layers.
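
To make the layering concrete, here’s a minimal Python sketch of the three-layer structure.  Everything in it (the class names, the order format, the configure call) is invented for illustration; the point is simply that the OSS/BSS talks to an orchestration API, and only the orchestration layer touches resource APIs.

```python
class ResourceLayer:
    """Network devices and hosted features, configured via device APIs."""
    def configure(self, target: str, settings: dict) -> None:
        print(f"configuring {target} with {settings}")

class OrchestrationLayer:
    """Translates service orders into resource configurations."""
    def __init__(self, resources: ResourceLayer):
        self.resources = resources

    def deploy_service(self, order: dict) -> None:
        # Decompose the order into per-resource settings (trivial here).
        for target, settings in order["resources"].items():
            self.resources.configure(target, settings)

class OSSBSS:
    """Business systems see only the orchestration API, never devices."""
    def __init__(self, orchestrator: OrchestrationLayer):
        self.orchestrator = orchestrator

    def fulfill(self, order: dict) -> None:
        self.orchestrator.deploy_service(order)

oss = OSSBSS(OrchestrationLayer(ResourceLayer()))
oss.fulfill({"resources": {"edge-router-1": {"vlan": 100}}})
```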

The article and Cisco also make a good point about what I’ll call “resource modeling”, the use of things like YANG and NETCONF to control network resources and provide a vendor-independent approach to coercing service-cooperative behavior from switches and routers.  In my view, though, this really creates a kind of “sublayer” structure within the middle orchestration/automation layer.  As I noted yesterday, I postulated in my ExperiaSphere work that network services were made up of a service layer and a resource layer, each with its own orchestration and modeling, joined by having the service layer bind to “Behaviors” that the resource layer asserts.
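
Here’s a rough Python rendering of that binding idea, with invented names throughout; it isn’t the ExperiaSphere implementation, just an illustration of a resource layer asserting named Behaviors that the service layer binds to without seeing how each is implemented.

```python
from typing import Callable, Dict

# Behaviors asserted by the resource layer; the bodies are stand-ins.
behaviors: Dict[str, Callable[[dict], None]] = {
    "VPN": lambda params: print("building VPN from router configs", params),
    "Firewall": lambda params: print("deploying hosted firewall", params),
}

def bind(service_element: str, params: dict) -> None:
    """Service-layer binding: look up a Behavior by name and invoke it."""
    behaviors[service_element](params)

bind("VPN", {"sites": ["A", "B"]})
bind("Firewall", {"at": "site-A"})
```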

All of this works fine as long as we’re talking about networks built up from switching and routing devices.  The challenge comes when you add in hosted features, either augmenting/replacing traditional switching and routing or living above them, perhaps all the way up to the OTT layers where experiences rather than connections are the service targets.  When you get to hosted stuff, you run into the problem I’ve noted before (including yesterday), which is that management of the service at the functional level becomes separated from management of the resources that make the service up.

The article quotes the Cisco spokesperson as advocating the abandonment of things like SNMP and CORBA as “exotic”, in favor of network-centric approaches like YANG.  Even for connectivity services like IP or Ethernet, this doesn’t recognize that a set of software-generated, server-hosted features has to be orchestrated at the resource level, and should be orchestrated more like the cloud works than like network devices work.

The Cisco model, stated in what I’d call “cloud terms”, would be something like this.  At the top, you have a commercial service offering that includes a mixture of functions, some related to traditional connectivity and some to non-connection features.  The commercial offering would be realized by a service-level model, from which the deployment of the service would be controlled (and automated).  The bottom of the service-level model branches would be bound across to resource behaviors, some of which would be native network device behaviors (for which YANG is fine) and some of which would be software-hosted feature behaviors for which something cloud-like, such as TOSCA, would be more sensible.  These resource behaviors would control the actual infrastructure.
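
A sketch of how that split might look in code, again with invented names and stand-in payloads: each resource behavior routes to the back-end suited to its resource type, device behaviors toward YANG/NETCONF-style configuration and hosted-feature behaviors toward a TOSCA-style cloud orchestrator.

```python
def realize_device_behavior(device: str, config: dict) -> None:
    # In a real system this would be a NETCONF edit-config carrying
    # YANG-modeled XML; here it just prints a stand-in payload.
    print(f"NETCONF edit-config on {device}: {config}")

def realize_hosted_behavior(feature: str, template: dict) -> None:
    # In a real system this would submit a TOSCA service template to a
    # cloud orchestrator; here it just prints a stand-in request.
    print(f"deploying {feature} from template: {template}")

def realize(behavior: dict) -> None:
    """Route each behavior to the back-end suited to its resource type."""
    if behavior["type"] == "device":
        realize_device_behavior(behavior["target"], behavior["config"])
    else:
        realize_hosted_behavior(behavior["target"], behavior["template"])

realize({"type": "device", "target": "core-router", "config": {"bgp": "on"}})
realize({"type": "hosted", "target": "vFirewall",
         "template": {"nodes": ["vm-1"], "image": "fw:latest"}})
```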

There is absolutely no way you could contend that YANG is a viable model for deploying applications in the cloud.  Why, then, should we even be thinking about it as a means of deploying features (which, in software terms, are equivalent to application components) in the cloud?  As always, there are two possible explanations for this.  First, Cisco is being Cisco-the-marketing-giant, and since it effectively owns YANG (having bought Tail-f, the primary developer/promoter of it), is simply trying to own that middle orchestration/automation layer.  Second, Cisco has IP blinders on.

If the first explanation is true, Cisco has a problem because the operators are really looking for strategies that support carrier-cloud-based services in the long run.  While I think NFV has gone way off track, there are already many in the NFV community who think “cloud-native” is the way to go.  NFV even now is based mostly on TOSCA-related modeling.  5G, which promotes hosted features, would surely drive operators more in the TOSCA/cloud direction as it deploys.  You can’t own the orchestration/automation layer by promoting a modeling approach that’s already been rejected.  Still, Cisco has a history of pushing its own approach in defiance of logic and market commitment for as long as it can, then pretending it had the other (right) approach all along.

If the second explanation is true, then Cisco is stuck in IP-neutral.  They think of “services” as being nothing more than IP connectivity.  Operators are doomed to build only dumb networks, using, of course, Cisco devices.  This would be, IMHO, a worse problem for Cisco, because it would risk Cisco isolating itself from where operators know they need to be going: toward higher-level, higher-revenue services, and toward the implementation of agile virtual elements of mobile and content services.  It’s bad enough to get the modeling for these new opportunities wrong (which the first explanation would suggest), but to get the mission wrong altogether would be a big problem.

I also think Cisco is wrong in proposing that OSS/BSS systems be modernized in orchestration terms, unless you want to make service orchestration a sublayer of the OSS/BSS process, which flies in the face of the way that service orchestration has to tie in specific features and functions.  In any event, I think the clean approach is to assume that the top of the resource layer exposes the abstractions that are consumed to fulfill functional requirements, which makes the service orchestration and modeling process a consumer of abstract resources.  The OSS/BSS should then be a consumer of abstract services, neither knowing nor caring about how they’re made up, only how they’re offered to customers.

Ironically, this is exactly where Cisco seems to have been heading with the self-publishing notion.  Any layer publishes abstractions that are consumed as the input to the layer above.  You build “services” from resource “behaviors” (to use my ExperiaSphere term).  You build customer relationships by selling them services.  When you add anything as the “output” of any layer, you can then exploit it up the line and make money from it.  You can “publish” the capabilities of any layer to the layer above, and since there’s that inter-layer exchange, the result looks a lot like the old OSI model, where each service layer uses the features of the one below.
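
A toy Python rendering of that publish-and-consume chain (all names invented): each layer publishes a catalog of abstractions, the layer above composes from that catalog, and nothing above a layer looks beneath it.

```python
class Layer:
    def __init__(self, name: str):
        self.name = name
        self.catalog: dict = {}

    def publish(self, abstraction: str, realization) -> None:
        self.catalog[abstraction] = realization

resource = Layer("resource")
resource.publish("VPN-Behavior", lambda: "router configs")

service = Layer("service")
# The service layer consumes resource abstractions to build its own.
service.publish("Business-VPN", lambda: resource.catalog["VPN-Behavior"]())

# The OSS/BSS consumes abstract services, not resources.
print(service.catalog["Business-VPN"]())  # -> "router configs"
```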

Publishing is interesting, but you can’t publish what you don’t have, or what you can’t present in an organized way.  Intent modeling, as Cisco suggests, is a key piece of the notion because it lets services, service features, or service resources be represented by their capabilities, not their implementations (a minimal sketch of the idea follows below).  Cisco has a lot of good points in its self-publishing approach, but if it wants it to be more than marketing eye candy, it needs to align it more with clouds and less with networks.  Without that shift, this is more about a self-aggrandizing network than a self-publishing one.
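
Here’s that minimal sketch, with hypothetical class names: a capability is expressed as an abstract intent, and any implementation that satisfies the intent can stand behind it, which is exactly what lets a published abstraction hide its realization.

```python
from abc import ABC, abstractmethod

class ConnectivityIntent(ABC):
    """What the layer above sees: a capability, not an implementation."""
    @abstractmethod
    def connect(self, endpoints: list) -> None: ...

class MPLSVPN(ConnectivityIntent):
    def connect(self, endpoints):
        print("provisioning MPLS VPN for", endpoints)

class SDWANOverlay(ConnectivityIntent):
    def connect(self, endpoints):
        print("building SD-WAN overlay for", endpoints)

def publish_service(impl: ConnectivityIntent, endpoints: list) -> None:
    # Consumers bind to the intent; either implementation satisfies it.
    impl.connect(endpoints)

publish_service(MPLSVPN(), ["site-A", "site-B"])
publish_service(SDWANOverlay(), ["site-A", "site-B"])
```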