So the NFV ISG Wants to Look at Being Cloud-Like: How?

The ETSI NFV ISG is having a meeting, one of whose goals is to explore a more cloud-centric model of NFV.  Obviously, I’d like to see that.  The question is what such a model would look like, and whether it (in some form) could be achieved from where we are now, without starting another four-year effort.  There are certainly some steps that could be taken.

A “cloud model of NFV” has two functional components.  First, the part of NFV that represents a deployed service would have to be made very “cloud-friendly”.  Second, the NFV software itself would have to be optimized to exploit the scalability, resiliency, and agility of the cloud.  We’ll take these in order.

The first step would actually benefit the cloud as well as NFV.  We need a cloud abstraction on which we can deploy, one that represents everything capable of hosting functions and applications.  The model today is about hosts or groups of hosts, and there are different mechanisms to deploy containers versus VMs and different processes within each.  All of this complicates the lifecycle management process.

The biggest NFV challenge here is dealing with virtual CPE (vCPE).  Stuff that’s hosted on the customer prem, in a cloud world, should look like a seamless extension of “the cloud”, and the same is true for public cloud services.  This is a federation problem, a problem of agreeing on a broad cloud abstraction and then agreeing to provide the mechanisms to implement it using whatever mixture of technology happens to be available.  The little boxes for vCPE, the edge servers Amazon uses in its Greengrass Lambda extension, and big enterprise data centers are all just the edge of “the cloud” and we need to treat them like that.

If we had a single abstraction to represent “the cloud” then we would radically simplify the higher-level management of services.  Lifecycle management would divide by “in-cloud” and “not-in-cloud” with the latter being the piece handled by legacy devices.  The highest-level service manager would simply hand off a blueprint for the cloud piece to the cloud abstraction and the various domains within that abstraction would be handed their pieces.  This not only simplifies management, it distributes work to improve performance.
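To make the idea a bit more concrete, here’s a minimal sketch (in Python, with invented names like HostingDomain and CloudAbstraction) of what a single cloud abstraction fronting multiple hosting domains might look like.  It’s an illustration of the concept, not an implementation of any NFV spec: the higher-level service manager hands the whole “cloud piece” to one abstraction, and each domain gets only its own slice.

```python
# A minimal sketch of a single "cloud" abstraction that fronts many hosting
# domains (vCPE boxes, public cloud regions, enterprise data centers). All
# names here (HostingDomain, CloudAbstraction, deploy_blueprint) are invented.

from dataclasses import dataclass, field


@dataclass
class HostingDomain:
    """One edge of 'the cloud': a prem box, a cloud region, a data center."""
    name: str
    technology: str  # e.g. "container", "vm", "edge-device"

    def deploy(self, piece: dict) -> str:
        # In a real system this would call Docker, OpenStack, a device agent, etc.
        return f"{piece['function']} deployed on {self.name} via {self.technology}"


@dataclass
class CloudAbstraction:
    """The single abstraction the service manager sees; it hides the domains."""
    domains: dict = field(default_factory=dict)

    def register(self, domain: HostingDomain) -> None:
        self.domains[domain.name] = domain

    def deploy_blueprint(self, blueprint: list) -> list:
        # The higher-level manager hands the whole cloud piece to this one
        # abstraction; each domain receives only its own slice of the work.
        return [self.domains[p["domain"]].deploy(p) for p in blueprint]


if __name__ == "__main__":
    cloud = CloudAbstraction()
    cloud.register(HostingDomain("prem-box-1", "edge-device"))
    cloud.register(HostingDomain("metro-dc-east", "vm"))
    blueprint = [
        {"function": "firewall-vnf", "domain": "prem-box-1"},
        {"function": "video-cache", "domain": "metro-dc-east"},
    ]
    for line in cloud.deploy_blueprint(blueprint):
        print(line)
```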

Our next point is that “Cloudy VNFs,” to coin an awkward term, should be for all intents and purposes cloud application components, no different from a piece of a payroll or CRM system.  If one breaks you can redeploy it somewhere else, and if it runs out of capacity you can replicate and load-balance it.  Is this possible?  Yes, but only potentially, because the VNF attributes that would make those behaviors available aren’t necessarily there.

If I have a copy of an accounting system that runs out of capacity, can I just spin up another one?  The problem is that I have a database to update here, and that update process can’t be duplicated across multiple instances unless I have some mechanism for eliminating collisions that could result in erroneous data.  Systems like that are “stateful” meaning that they store stuff that will impact the way that subsequent steps/messages are interpreted.  A “stateless” system doesn’t have that, and so any copy can be made to process a unit of work.
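Here’s a toy illustration of that distinction, with nothing NFV-specific about it; the classes and names are invented purely to show why one design can be replicated casually and the other can’t.

```python
# A toy illustration of the stateful/stateless distinction. Names are invented.

class StatefulMeter:
    """Keeps a running total internally; two instances would diverge."""
    def __init__(self):
        self.total = 0

    def record(self, amount: int) -> int:
        self.total += amount          # state lives inside the instance
        return self.total


def stateless_record(store: dict, key: str, amount: int) -> int:
    """Any copy of this function can do the work; state lives in a shared
    back-end store, which is where collisions must be resolved."""
    store[key] = store.get(key, 0) + amount
    return store[key]


if __name__ == "__main__":
    a, b = StatefulMeter(), StatefulMeter()
    a.record(5)
    b.record(7)
    print(a.total, b.total)           # 5 7 -- the two instances disagree

    shared = {}
    stateless_record(shared, "acct-1", 5)
    stateless_record(shared, "acct-1", 7)
    print(shared["acct-1"])           # 12 -- any instance sees the same result
```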

A pure data-plane process, meaning get-a-packet-send-a-packet, is only potentially stateless.  Do you have the chance of queuing for congestion, or do you have flow control, or do you have ancillary control-plane processes invoked to manage the flow between you and partner elements?  If so then there is stateful behavior going on.  Some of these points have to be faced in any event; queuing creates a problem with lost data or out-of-order arrivals, but that also happens just by creating multiple paths or by replacing a device.  The point is that a VNF would have to be examined to determine if its properties were consistent with scaling, and new VNFs should be designed to offer optimum scalability and resiliency.

We see this trend in the cloud with functional programming, lambdas, or microservices.  It’s possible to create stateless elements by pushing state and context control to the back end, but software that was written to run in a single device never faced the scalability/resiliency issue and so probably doesn’t do what’s necessary for statelessness.

Control-plane stuff is much worse.  If you report your state to a management process, it’s probably because it requested it.  Suppose you request state from Device Instance One, and Instance Two is spun up, and it gets the request and responds.  You may have been checking on the status of a loaded device to find out that it reports being unloaded.  In any event, you now have multiple devices, so how do you obtain meaningful status from the system of devices rather than from one of them, or each of them (when you may not know about the multiplicity)?
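One hedged illustration of what “status of the system of devices” might mean in practice: aggregate the per-instance reports into a single answer before handing anything back to the management process, so a freshly spun-up (and therefore idle) instance can’t masquerade as the state of the whole element.  The field names below are invented.

```python
# A minimal sketch of reporting status for a scaled-out "system of devices"
# rather than for any single instance. Field names are invented.

def system_status(instance_reports: list) -> dict:
    """Aggregate per-instance reports into one answer for the management process."""
    if not instance_reports:
        return {"instances": 0, "state": "unknown"}
    return {
        "instances": len(instance_reports),
        "peak_load_pct": max(r["load_pct"] for r in instance_reports),
        "mean_load_pct": sum(r["load_pct"] for r in instance_reports) / len(instance_reports),
        "state": "degraded" if any(r["state"] != "up" for r in instance_reports) else "up",
    }


if __name__ == "__main__":
    reports = [
        {"instance": "vnf-1", "state": "up", "load_pct": 92},  # the loaded original
        {"instance": "vnf-2", "state": "up", "load_pct": 3},   # freshly scaled-out copy
    ]
    print(system_status(reports))
```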

All this pales into insignificance when you look at the second piece of cloud-centric NFV, which is the NFV software itself.  Recall that the ETSI E2E model describes a transactional-looking framework that controls what looks like a domain of servers.  Is this model a data-center-specific model, meaning that there’s a reasonably small collection of devices, or does this model cover an entire operator infrastructure?  If it’s the former, then services will require some form of federation of the domains to cover the full geography.  If it’s the latter, then the single-instance model the E2E diagram describes could never work because it could never scale.

It’s pretty obvious that fixing the second problem would be more work than fixing the first, and perhaps would involve that first step anyway.  In the cloud, we’d handle deployment across multiple resource pools by a set of higher-layer processes, usually DevOps-based, that would activate individual instances of container systems like Docker (hosts or clusters) or VM systems like OpenStack.  Making the E2E model cloud-ready would mean creating fairly contained domains, each with their own MANO/VNFM/VIM software set, and then assigning a service to domains by decomposing and dispatching to the right place.

The notion of having “domains” would be a big help, I think.  That means that having a single abstraction for “the cloud” should be followed by having one for “the network”, and both these abstractions would then decompose into domains based on geography, management span of control, and administrative ownership.  Within each abstraction you’d have some logic that looks perhaps like NFV MANO—we need to decompose a service into “connections” and “hosting”.  You’d also have domain-specific stuff, like OpenStack or an NMS.  A high-level manager would orchestrate a service into high-level requests against those abstract services, and each request would invoke a second-level manager that divides the work by domain.
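As a rough sketch of that two-level decomposition (all the domain names and mappings below are hypothetical), the high-level manager deals only with “hosting” and “connection” requests against the two abstractions, and each abstraction picks a concrete domain by geography:

```python
# A rough sketch of two-level decomposition: level one splits an order into
# abstract "hosting" and "connection" requests, level two maps each request
# to a concrete domain. All names are hypothetical.

CLOUD_DOMAINS = {"us-east": "nfv-domain-nyc", "us-west": "nfv-domain-sfo"}
NETWORK_DOMAINS = {"us-east": "nms-east", "us-west": "nms-west"}


def decompose_service(order: list) -> list:
    work = []
    for item in order:
        if item["type"] == "hosting":
            domain = CLOUD_DOMAINS[item["location"]]
        else:  # "connection"
            domain = NETWORK_DOMAINS[item["location"]]
        work.append({"domain": domain, "request": item})
    return work


if __name__ == "__main__":
    order = [
        {"type": "hosting", "location": "us-east", "feature": "firewall-vnf"},
        {"type": "connection", "location": "us-east", "endpoints": ["site-a", "site-b"]},
    ]
    for task in decompose_service(order):
        print(task["domain"], "<-", task["request"]["type"])
```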

We don’t have that now, of course.  Logically, you could say that if we had a higher-layer system that could model and decompose, and if we created those limited NFV domains, we could get to the good place without major surgery on NFV.  There are some products out there that provide what’s needed to do the modeling and decomposing, but they don’t seem to be mandatory parts of NFV.

I’d love to be able to go to meetings like this, frankly, but the problem is that as an independent consultant I have to do work that pays the bills, and all standards processes involve a huge commitment in time.  To take a proposal like this to a meeting, I’d have to turn it into a contribution, defend it in a series of calls, run through revision cycles, and then face the probability that the majority of the body isn’t ready to make radical changes anyway.  So, instead I offer my thoughts in a form I can support, which is this blog.  In the end, the ISG has the ability to absorb as much of it as they like, and discard what they don’t.  That’s the same place formal contributions would end up anyway.

Who Will Orchestrate the Orchestrators (and How)?

What exactly is “service automation” and who does it?  Those are the two questions that are top of the list for network operators and cloud providers today, and they’re ranking increasingly high on the list of enterprises as well.  As the complexity of networks increases, as technology changes introduce hosted elements in addition to discrete devices, and as cloud computing proliferates, everyone is finding that the cost of manual service operations is rising too fast, and the error rate even faster.  Something obviously needs to be done, but it’s not entirely clear what that something is.

Part of the problem is that we are approaching the future from a number of discrete “pasts”.  Application deployment and lifecycle management have been rolled into “DevOps”, and the DevOps model has been adopted in the cloud by users.  Network service automation has tended to be supported through network management tools for enterprises and service providers alike, but the latter have also integrated at least some of the work with OSS/BSS systems.  Now we have SDN and NFV, which have introduced the notion of “orchestration” of both application/feature and network/connection functions into one process.

Another part of the problem is that the notion of “service” isn’t fully defined.  Network operators tend to see services as being retail offerings that are then decomposed into features (the TMF’s “Customer-Facing Services”, or CFSs).  Cloud providers sometimes see the “service” as the ability to provide platforms to execute customer applications, which separates application lifecycle issues from service lifecycle issues.  The trend in cloud services is adding “serverless” computing, which raises the level of features that the operator provides and makes their “service” look more application-like.  Enterprises see services as being something they buy from an operator, and in some cases what they have to provide to cloud/container elements.  Chances are, there will be more definitions emerging over time.

The third piece of the problem is jurisdictional.  We have a bunch of different standards and specifications bodies out there, and they cut across the whole of services and infrastructure rather than embracing it all.  As a result, the more complex the notion of services becomes, the more likely it is that nobody is really handling it at the standards level.  Vendors, owing perhaps to the hype magnetism of standards groups, have tended to follow the standards bodies into disorder.  There are some vendors who have a higher-level vision, but most of the articulation at the higher level comes from startups because the bigger players tend to focus on product-based marketing and sales.

If we had all of the requirements for the service automation of the future before us, and a greenfield opportunity to implement them, we’d surely come up with an integrated model.  We don’t have either of these conditions, and so what’s been emerging is a kind of ad hoc layered approach.  That has advantages and limitations, and balancing the two is already difficult.

The layered model says, in essence, that we already have low-level management processes that do things like configure devices or even networks of devices, deploy stuff, and provide basic fault, configuration, accounting, performance, and security (FCAPS) management.  What needs to be done is to organize these into a mission context.  This reduces the amount of duplication of effort by allowing current management systems to be exploited by the higher layer.

We see something of this in the NFV approach, where we have a management and orchestration (MANO) function that interacts with a virtual infrastructure manager (VIM), made up presumably of a set of APIs that then manage the actual resources involved.  But even in the NFV VIM approach we run into issues with the layered model.

Some, perhaps most, in the NFV community see the VIM as being OpenStack.  That certainly facilitates the testing and deployment of virtual network functions (VNFs) as long as you consider the goal to be one of simply framing the hosting and subnetwork connections associated with a VNF.  What OpenStack doesn’t do (or doesn’t do well) is left to the imagination.  Others, including me, think that there has to be a VIM to represent each of the management domains, those lower-layer APIs that control the real stuff.  These VIMs (or more properly IMs, because not everything they manage is virtual) would then be organized into services using some sort of service model.  The first of these views makes the MANO process very simple, and the second makes it more complicated because you have to model a set of low-level processes to build a service.  However, the second view is much more flexible.
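Here’s a minimal sketch of that second view, built around an invented InfrastructureManager interface.  The point is simply that every management domain presents the same deploy call, and the service model is data that names which IM realizes each element; none of this reflects actual ETSI interfaces.

```python
# A minimal sketch of the "many IMs" view: each management domain exposes the
# same deploy interface, and a service model names which IM handles each
# element. Class and field names are invented for illustration.

from abc import ABC, abstractmethod


class InfrastructureManager(ABC):
    """One IM per management domain: OpenStack, a container cluster, an NMS..."""

    @abstractmethod
    def deploy(self, element: dict) -> str:
        ...


class OpenStackIM(InfrastructureManager):
    def deploy(self, element: dict) -> str:
        return f"Nova/Neutron deploy of {element['name']}"


class LegacyNmsIM(InfrastructureManager):
    def deploy(self, element: dict) -> str:
        return f"NMS provisioning of {element['name']}"


IMS = {"openstack-dc1": OpenStackIM(), "mpls-core": LegacyNmsIM()}

# The service model is just data: each element names the IM that realizes it.
SERVICE_MODEL = [
    {"name": "vFirewall", "im": "openstack-dc1"},
    {"name": "site-vpn", "im": "mpls-core"},
]

if __name__ == "__main__":
    for element in SERVICE_MODEL:
        print(IMS[element["im"]].deploy(element))
```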

There are also layers in the cloud itself.  OpenStack does what’s effectively per-component deployment, and there are many alternatives to OpenStack, as well as products designed to overcome some of its basic issues.  To deploy complex things, you would likely use a DevOps tool (Chef, Puppet, Ansible, Kubernetes, etc.).  Kubernetes is the favored DevOps for container systems like Docker, which by the way does its own subnetwork building and management and also supports “clusters” of components in a native way.  Some users layer Kubernetes for containers with other DevOps tools, and to make matters even more complex, we have cloud orchestration standards like TOSCA, which is spawning its own set of tools.

What’s emerging here is a host of “automation” approaches, many overlapping, and those that don’t overlap covering a specific niche problem, technology, or opportunity.  This is both a good thing, perhaps, and a bad thing.

The good things are that if we visualize deployment and lifecycle management as distributed partitioned processes we allow for a certain amount of parallelism.  Different domains could be doing their thing at the same time, as long as there’s coordination to ensure that everything comes together.  We’d also be able to reuse technology that’s already developed and in many cases fully proven out.

The bad thing is the coordination requirement I just mentioned.  Ships passing in the night is not a helpful vision of the components of a service lifecycle automation process.  ETSI MANO, SDN controllers, and most DevOps tools are “domain” solutions that still have to be fit into a higher-level context.  That’s something we don’t really have at the moment.  We need a kind of “orchestrator of orchestrators” (OofO) approach, and that is in fact one of the options.  Think of an uber-process that lives at the service level and dispatches work to all of the domains, then coordinates their work.  That’s probably how the cloud would do it.
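A hedged sketch of what that uber-process might look like, using Python’s thread pool as a stand-in for real domain orchestrators; the domain functions here are invented placeholders, not real MANO or controller APIs.

```python
# A minimal sketch of an "orchestrator of orchestrators": an uber-process that
# dispatches pieces of a service to domain orchestrators in parallel and then
# coordinates the results. The domain functions are hypothetical stand-ins.

from concurrent.futures import ThreadPoolExecutor
import time


def nfv_domain(piece: str) -> str:
    time.sleep(0.1)                      # stand-in for MANO deploying VNFs
    return f"NFV domain deployed {piece}"


def sdn_domain(piece: str) -> str:
    time.sleep(0.1)                      # stand-in for an SDN controller
    return f"SDN domain connected {piece}"


def orchestrate(service: dict) -> list:
    """Dispatch each domain's piece concurrently, then gather everything so the
    service is only declared active when all domains have finished."""
    domains = {"nfv": nfv_domain, "sdn": sdn_domain}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(domains[name], piece)
                   for name, piece in service.items()}
        return [f.result() for f in futures.values()]   # the coordination point


if __name__ == "__main__":
    print(orchestrate({"nfv": "vCPE features", "sdn": "site-to-site path"}))
```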

The cloud, in fact, is contributing a lot of domain-specific solutions that should be used where available, and we should also be thinking about whether the foundation of the OofO I just mentioned should be built in the cloud and not outside it, in NFV or even OSS/BSS.  That’s a topic for my next blog.

Can We Make ETSI NFV Valuable Even If It’s Not Optimal?

Network Functions Virtualization (NFV) has been a focus for operators for five years now.  Anyone who’s following my blog knows I have disagreed with the approach the NFV ISG has taken, but that’s the approach it took.  The current model will never, in my view, be optimal, as I’ve said many times in past blogs and media interviews.  The question now is whether it can be useful in any way.  The answer is “Yes”, providing that the industry, and the ISG, take some steps quickly.  The goal of these steps is to address what could be serious issues without mandating a complete redesign of the software, now largely based on a literal interpretation of the ETSI ISG’s End-to-End model.

The current focus of NFV trials and deployments is virtual CPE (vCPE), which is the use of NFV to substitute for traditional network-edge appliances.  This focus has, IMHO, dominated the ISG to the point where they’ve framed the architecture around it.  However, the actual deployments of vCPE suggest that the real-world vCPE differs from the conceptual model of the specs.  Because of the central role of vCPE in early NFV activity, it’s important that these issues be addressed.

What was conceptualized for vCPE was a series of cloud-hosted features, each in its own virtual machine, and each linked to the others in a “service chain”.  What we actually see today for most vCPE is a general-purpose edge device that is capable of receiving feature updates remotely.  This new general-purpose edge device is more agile than a set of fixed, purpose-built, appliances.  Furthermore, the facilities for remote feature loading make a general-purpose edge device less likely to require field replacement if the user upgrades functionality.  If vCPE is what’s happening, then we need to optimize our concept without major changes to the ETSI model or implementation.

Let’s start with actual hosting of vCPE features in the cloud, which was the original ETSI model.  The service-chain notion of features is completely impractical.  Every feature adds a hosting point and chain connection, which means every feature adds cost and complexity to the picture.  My suggestion here is that where cloud-hosting of features is contemplated, abandon service chaining in favor of deploying/redeploying a composite image of all the features used.  If a user has a firewall feature and adds an application acceleration feature, redeploy a software image that contains both to substitute for the image that supports only one feature.  Use the same VMs, the same connections.

Some may argue that this is disruptive at the service level.  So is adding something to a service chain.  You can’t change the data plane without creating issues.  The point is that the new-image model versus new-link model has much less operations intervention (you replace an image) and it doesn’t add additional hosting points and costs.  If the cost of multi-feature vCPE increases with each feature, then the price the user pays has to cover that cost, and that makes feature enhancement less attractive.  The ETSI ISG should endorse the new-image model for cloud-hosted vCPE.
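To illustrate the new-image model (and only that; the image-building step below is a placeholder, not a real build pipeline), adding a feature rebuilds one composite image and redeploys it into the same hosting point, with no new VMs or chain links.

```python
# A toy sketch of the "new-image" model: adding a feature rebuilds one
# composite image and redeploys it into the same hosting point, rather than
# adding a new VM and a new chain connection. All names are invented.

from dataclasses import dataclass, field


@dataclass
class VcpeInstance:
    hosting_point: str                   # the one VM (or device) we keep reusing
    features: set = field(default_factory=set)
    image: str = ""

    def _build_image(self) -> str:
        # Stand-in for building/selecting an image containing all features.
        return "vcpe-image[" + "+".join(sorted(self.features)) + "]"

    def add_feature(self, feature: str) -> str:
        """Replace the running image with one that includes the new feature;
        hosting point and connections are unchanged."""
        self.features.add(feature)
        self.image = self._build_image()
        return f"redeployed {self.image} on {self.hosting_point}"


if __name__ == "__main__":
    site = VcpeInstance("vm-edge-07", {"firewall"})
    print(site.add_feature("app-acceleration"))
    # -> redeployed vcpe-image[app-acceleration+firewall] on vm-edge-07
```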

Let’s now move to the market-dominant vCPE approach, which is a general-purpose edge device that substitutes for cloud resources.  Obviously, such a hosting point for vCPE doesn’t need additional hosting points and network connections to create a chain.  Each feature is in effect inserted into a “virtual slot” in an embedded-control computing device, where it runs.

One of the primary challenges in NFV is onboarding virtual functions and ensuring interoperability of VNFs.  If every general-purpose edge device vendor takes their own path in terms of the device’s hosting features and local operating system, you could end up with a need for a different VNF for every vCPE device.  You need some standard presumption of a local operating system, a lightweight device-oriented Linux version for example, and you need some standard middleware that links the VNF to other VNFs in the same device, and to the NFV management processes.

What NFV could do here is define a standard middleware set to provide those “virtual slots” in the edge device and support the management of the features.  There should be a kind of two-plug mechanism for adding a feature.  One plug connects the feature component to the data plane in the designated place, and the other connects it to a standard management interface.  That interface then links to a management process that supplies management for all the features included.  Since the whole “chain” is in the box, it would be possible to cut in a new feature without significant (if any) data plane interruption.
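A rough sketch of the two-plug idea, with everything below invented for illustration: one plug splices a feature into the in-box data path at a designated slot, and the other registers it with a single standard management interface.

```python
# A rough sketch of the "two-plug" middleware idea: each feature plugs into a
# data-plane slot at a designated place in the internal chain, and into one
# standard management interface. All names here are hypothetical.

class ManagementBus:
    """The single management plug shared by all features in the box."""
    def __init__(self):
        self.reports = []

    def report(self, feature: str, status: dict) -> None:
        self.reports.append((feature, status))


class EdgeDevice:
    def __init__(self, mgmt: ManagementBus):
        self.slots = []                  # ordered "virtual slots" on the data path
        self.mgmt = mgmt

    def insert_feature(self, position: int, name: str, handler) -> None:
        """Plug one: splice the handler into the data plane at 'position'.
        Plug two: register it with the standard management interface."""
        self.slots.insert(position, (name, handler))
        self.mgmt.report(name, {"state": "installed", "slot": position})

    def forward(self, packet: str) -> str:
        for _, handler in self.slots:    # packets traverse the in-box "chain"
            packet = handler(packet)
        return packet


if __name__ == "__main__":
    box = EdgeDevice(ManagementBus())
    box.insert_feature(0, "firewall", lambda p: p + "|fw-ok")
    box.insert_feature(1, "accelerator", lambda p: p + "|accel")
    print(box.forward("pkt-1"))          # pkt-1|fw-ok|accel
    print(box.mgmt.reports)
```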

This same approach could be taken for what I’ll call the “virtual edge device” approach.  Here, instead of service-chaining a bunch of features to create agility, the customer buys a virtual edge device, which is a cloud element that will accept feature insertion into the same image/element.  Thus, the network service user is “leasing” a hosting point into which features could be dynamically added.  This provides a dynamic way of feature-inserting that would preserve the efficiency of the new-image model but also potentially offer feature insertion with no disruption.

The second point where the NFV community could inject some order is in that management plug.  The notion here is that there is a specific, single, management process that’s resident with the component(s) and interacts with the rest of the NFV software.  That process has two standard APIs, one facing the NFV management system (VNFM) and the other facing the feature itself.  It is then the responsibility of any feature or VNF provider to offer a “stub” that connects their logic to the feature-side API.  That simplifies onboarding.

In theory, it would be possible to define a “feature API” for each class of feature, but I think the more logical approach to take would be to define an API whose data model defines parameters by feature, and includes all the feature classes to be supported.  For example, the API might define a “Firewall” device class and the parameters associated with it, and an “Accelerator” class that likewise has parameters.  That would continue as a kind of “name-details” hierarchy for each feature class.  You would then pass parameters only for the class(es) you implemented.
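A minimal sketch of what that name-details data model could look like; the feature classes and parameter names below are purely illustrative, not proposed standards.  A VNF passes values only for the classes it implements, and anything outside the schema is flagged rather than silently accepted.

```python
# A minimal sketch of the "name-details" data model: one API schema lists
# every supported feature class and its parameters. Classes and parameter
# names are purely illustrative.

FEATURE_SCHEMA = {
    "Firewall":    {"rule_set", "default_action", "log_level"},
    "Accelerator": {"compression", "cache_mb"},
    "SdWan":       {"underlays", "path_policy"},
}


def validate_feature_config(config: dict) -> list:
    """Check a VNF's declared parameters against the schema; unknown classes
    or parameters are reported rather than silently accepted."""
    problems = []
    for feature_class, params in config.items():
        allowed = FEATURE_SCHEMA.get(feature_class)
        if allowed is None:
            problems.append(f"unknown feature class: {feature_class}")
            continue
        for name in params:
            if name not in allowed:
                problems.append(f"{feature_class}: unknown parameter {name}")
    return problems


if __name__ == "__main__":
    vnf_config = {
        "Firewall": {"rule_set": "default-deny", "default_action": "drop"},
        "Accelerator": {"cache_mb": 512},
    }
    print(validate_feature_config(vnf_config) or "config OK")
```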

The next suggestion is to formalize and structure the notion of a “virtual infrastructure manager”.  There is still a question in NFV as to whether there’s a single VIM for everything or a possible group of VIMs.  The single-VIM model is way too restrictive because it’s doubtful that vendors would cooperate to provide such a thing, and almost every vendor (not to mention every new technology) has different management properties.  To make matters worse, there’s no organized way in which lifecycle management is handled.

VIMs should become “infrastructure managers” or IMs, and they should present the same kind of generalized API set that I noted above for VNFM.  This time, though, the API model would present only a set of SLA-type parameters that would then allow higher-level management processes to manage any IM the same way.  The IM should have the option of either handling lifecycle events internally or passing them up the chain through that API to higher-level management.  This would organize how diverse infrastructure is handled (via separate IMs), how legacy devices are integrated with NFV (via separate IMs), and how management is vertically integrated while still accommodating remediation at a low level.
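Here’s a hedged sketch of that generalized IM API, with invented names and parameter values: every IM returns the same SLA-style parameter set, and a lifecycle event is either remediated locally or escalated through the upward-facing API.

```python
# A rough sketch of a generalized IM API: every IM exposes the same SLA-style
# parameters and can either remediate a lifecycle event itself or escalate it
# through the upward-facing API. All names and values are invented.

from typing import Callable, Optional


class InfrastructureManager:
    def __init__(self, name: str, escalate: Callable[[str, dict], None]):
        self.name = name
        self.escalate = escalate         # upward API to higher-level management

    def get_sla(self) -> dict:
        """Same parameter set for every IM, so higher layers manage them alike."""
        return {"availability_pct": 99.95, "latency_ms": 12, "capacity_pct": 64}

    def handle_event(self, event: dict) -> Optional[str]:
        # Remediate locally when the IM can, escalate when it can't.
        if event["kind"] == "instance-failed":
            return f"{self.name}: redeployed {event['target']} locally"
        self.escalate(self.name, event)
        return None


if __name__ == "__main__":
    def higher_level(im_name: str, event: dict) -> None:
        print(f"escalated from {im_name}: {event}")

    im = InfrastructureManager("openstack-dc1", higher_level)
    print(im.handle_event({"kind": "instance-failed", "target": "vFirewall"}))
    im.handle_event({"kind": "site-power-loss", "target": "dc1"})
```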

The final suggestion is aimed at the problem I think is inherent in the strict implementation of the ETSI E2E model, which is scalability.  Software framed based on the functional model of NFV would be a serialized set of elements whose performance would be limited and which would not be easily scalable under load.  This could create a major problem should the failure of some key component of infrastructure cause a “fault cascade” that requires a lot of remediation and redeployment.  The only way to address this is by fragmenting NFV infrastructure and software into relatively contained domains which are harmonized above.

In ETSI-modeled NFV, we have to assume that every data center has a minimum of one NFV software instance, including MANO, VNFM, and VIM.  If it’s a large data center, then the number of instances would depend on the number of servers.  IMHO, you would want to presume that you had an instance for each 250 servers or so.

To make this work, a service would have to be decomposed into instance-specific pieces and each piece then dispatched to the proper spot.  That means you would have a kind of hierarchy of implementation.  The easiest way to do this is to say that there is a federation VIM that’s responsible for taking a piece of service and, rather than deploying it, sending it to another NFV instance for deployment.  You could have as many federation VIMs and layers thereof as needed.
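A toy sketch of that federation idea (names invented): the federation VIM deploys nothing itself, it simply decomposes the service into per-domain pieces and forwards each piece to the NFV instance, or nested federation VIM, that owns that domain.

```python
# A toy sketch of the "federation VIM": instead of deploying anything itself,
# it decomposes a service into per-instance pieces and forwards each piece to
# the NFV software instance that owns that domain. Names are invented.

class NfvInstance:
    """Stand-in for one MANO/VNFM/VIM software set (e.g. per ~250 servers)."""
    def __init__(self, name: str):
        self.name = name

    def deploy(self, piece: list) -> str:
        return f"{self.name} deployed {', '.join(piece)}"


class FederationVim:
    def __init__(self, instances: dict):
        # domain name -> NfvInstance (or another FederationVim, one level down)
        self.instances = instances

    def deploy(self, service: dict) -> list:
        # Dispatch each domain's piece; a nested FederationVim would simply
        # repeat the same decomposition at the next level of the hierarchy.
        return [self.instances[domain].deploy(piece)
                for domain, piece in service.items()]


if __name__ == "__main__":
    fed = FederationVim({
        "dc-east-pod1": NfvInstance("dc-east-pod1"),
        "dc-west-pod3": NfvInstance("dc-west-pod3"),
    })
    service = {
        "dc-east-pod1": ["vFirewall", "vRouter"],
        "dc-west-pod3": ["vCDN-cache"],
    }
    for result in fed.deploy(service):
        print(result)
```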

All of this doesn’t substitute completely for an efficient NFV software architecture.  I’ve blogged enough about that to demonstrate what I think the problems with current NFV models are, and what I think would have to be done at the bottom to make things really good again.  These fixes won’t do that, but as I said at the opening of this blog, my goal isn’t to make current NFV great or even optimal, but rather to make it workable.  If that’s done, then we could at least hope that some deployment could occur, that fatal problems with NFV wouldn’t arise, and that successor implementations would have time to get it right at last.

What to Expect in Network Operators’ Fall Planning Cycle

Network operators generally do a fall technology plan to frame their following-year budget.  The timing varies with geography and operator, but most are active between mid-September and mid-November.  This year, a fair number of operators have done some pre-planning, and we can actually see the results in their quarterly earnings calls, as well as the calls of the network equipment vendors.  I’ll track the plans as they evolve, but this is a good time to baseline things.

Nearly all the operators reported that lower capex could be expected for 2017, and most have actually spent a bit ahead of their budget plans.  As a result, the 4th quarter is looking a bit soft, and you can see that in the guidance of both the equipment vendors and the operators themselves.  This shouldn’t come as a surprise, given that operators are feeling the pressure of declining profit per bit, which makes investment in infrastructure harder to justify.

Among the operators who have done some pre-planning, three issues have been raised.  First is whether SDN and NFV could bring about any meaningful change in revenue or profit, and for some at least, if “not” then “what might?”  Second is whether there is a potential for a change in regulatory climate that could help their profits, and third is just what to expect (if anything) from 5G.  We’ll look at each of these to get a hint of what might happen this fall and next year.

What operators think of either SDN or NFV is difficult to say because the response depends on who you’re talking to.  The CTO people are the most optimistic (not surprisingly, given that they include the groups working on the standards), and the CFO people tend to be the least.  Among the specific pre-plan operators, the broad view is “hopeful but not yet committed”.  There is general agreement that neither technology has yet made a business case for broad adoption, and that means neither has a provable positive impact on the bottom line.

Perhaps the biggest issue for this fall, based on the early input, is how a better business case could be made.  Nobody disagrees that both SDN and NFV will play a role in the future, but most operators now think that “automation”, by which they mean the automated service lifecycle management I’ve been blogging about, is more important.  Full exploitation of automation is outside the scope of both SDN and NFV in current projects and plans, and there is no standards body comparable to the ONF or ETSI NFV ISG to focus efforts.

“No standards body” here is interesting because of course the TMF is a body that could drive full service lifecycle automation.  It didn’t come up as much among pre-planning users, in large part because only the CIO organizations of operators seem to have much knowledge of or contact with the TMF.  In my view, the TMF also tends to generate its documents for consumption by its own members, using their own terminology.  That makes it harder for operator personnel who aren’t actively involved to understand them, and it reduces their media coverage as well.  In any event, the TMF doesn’t seem to be pushing “automation”, and so we’re a bit adrift on the SDN/NFV side for the fall planning cycle.

The regulatory trends are another up-in-the-air issue.  In the US, the Republican takeover of the FCC seems to be intent on reversing the pro-OTT mindset of previous FCCs, particularly the Wheeler Chairmanship that preceded the current (Pai) one.  Under Wheeler the FCC declared that the Internet was a telecommunications service regulated under Title II, which gave the FCC the ability to control settlement and pricing policies.  Wheeler took that status as a launching-pad for ruling against settlement among ISPs and paid prioritization, both of which could help ISP (and thus network operator) business models.  Pai seems determined to eliminate that classification, but even if he does the position could change with a change in administration in Washington.  There’s talk of Congress passing something to stabilize the net neutrality stance, but that might never happen.

Outside the US, regulatory trends are quite diverse, as has been the case for a decade or more.  However, operators in both Europe and Asia tell me that they see signs of interest in a shift to match the US in accepting paid prioritization and settlement.  If that were to happen, it could at least provide operators with temporary relief from profit compression by opening a revenue flow from OTTs to operators for video.  That would probably boost both legacy infrastructure spending and work on a longer-term revenue and cost solution.  However, operators don’t know how to handicap the shift of policy, and so far it’s not having a big impact on planners.

The final area is the most complicated—5G.  Generally, operators have accepted that they’ll be investing in 5G, with the impact probably peaking in 2021-2022, but the timing and the confidence operators have in a specific infrastructure plan varies considerably.  In the US, for example, there is considerable interest in using 5G with FTTN as a means of delivering high bandwidth to homes in areas where FTTH payback is questionable.  Operators in other countries, particularly those where demand density is high, are less interested in that.  Absent the 5G/FTTN connection, there isn’t a clear “killer justification” or business case for 5G in the minds of many operators.  “We may be thinking about an expensive deployment justified by being able to use the ‘5G’ label in ads,” one operator admits.

The 5G issue is where pre-planners think the overall focus for fall planning will end up.  Some would like to see a 5G RAN-only evolution, including those with FTTN designs.  Others would like to see the convergence of wireless and wireline in the metro, meaning the elimination or diminution of investment in Evolved Packet Core for mobile.  Still others with MVNO partner aspirations like network slicing.  Everyone agrees that it’s not completely clear to them that 5G evolution will improve things, and they say they’ll go slow until that proof is out there.  The pre-planners didn’t see IoT support as a big near-term driver for 5G, interestingly.

The 4G transition came along, operators say, at a critical point in market evolution, where the advent of smartphones and the growth in mobile phone usage drove demand upward sharply and outstripped old technologies.  There’s a question among operators whether that kind of demand drive will work for 5G, in no small part because it’s not clear whether competition will stall ARPU growth or even drive it down.  Operators would invest to fend off competition as long as service profits overall were promising, but it’s not clear to them whether they will be.  They’ll try to find out this fall.

Which raises the last point, the last difficulty.  Operators have historically relied on vendor input for their technology planning, under the logical assumption that it did little good to speculate about technologies that nobody was offering.  The problem is that the vendors have demonstrably failed to provide useful technology planning support in areas like SDN and NFV, and are failing in 5G by most accounts.  The pre-planners think that vendors still think that operators are public utilities engaged in supply-side market expansion.  Build it, and they will come.  The operators know that’s not a reasonable approach, but their own efforts to move things along (such as the open-source movement in both SDN and NFV) seem to have very long realization cycles and significant technology uncertainties.

We’re in an interesting time, marketing-wise.  We have a group of buyers who collectively represent hundreds of billions in potential revenue.  We have a group of sellers who don’t want to do what the buyers want and need.  The good news is that there are some signs of movement.  Cisco, who more than any other vendor represents a victory of marketing jive over market reality, is reluctantly embracing a more realistic position.  Other vendors are taking steps, tentatively to be sure, to come to terms with the new reality.  All of this will likely come to focus this fall, whether vendors or operators realize it or not.  There’s a real chance for vendors here, not only the usual chance to make the most of the fall planning cycle, but a broader chance to fill market needs and boost their own long-term success.