Why Is Carrier Cloud on the Back Burner for Carriers?

I noted in my blog yesterday that I was surprised and disappointed by the fact that the network operators didn’t seem to have given much thought to carrier cloud in their fall technology planning cycle.  I got a decent number of emails from operators, including some I’d surveyed, explaining why that seemed to be (and for most, was) the case.  I thought I’d share a summary with you.

The net of the comments was that for better or worse, the operators have come to view “carrier cloud” as the outcome of things they’re doing rather than as a technical objective.  About half those who commented to me said that they believed that over time as much as a quarter of all their capex would be spent on servers and cloud technology.  However, they were all over the map in terms of how they believed they’d get to that point.

NFV remains the favored technology to drive carrier cloud, despite the fact that there is relatively little current indication that the connection exists, much less is growing.  This is in sharp contrast to my own modeling of the carrier cloud opportunity, which says that nothing will happen with NFV/carrier cloud in 2017 and that only about 4% of carrier cloud opportunity in 2018 comes from NFV.  In fact, the attitudes on carrier cloud demonstrate how difficult it is to survey users on many technology trends.  Two years ago, an even larger percentage of operators told me that in 2017 NFV would be driving carrier cloud.  My model always said “No!”

The second-favored technology to drive carrier cloud is 5G, and interestingly the percentage of operators who say 5G will be the driver in 2020 is almost exactly the same as the percentage who said, two years ago, that NFV would be.  The majority of this group still think that NFV is the real driver, and they believe carrier cloud comes about because of NFV involvement in 5G implementation.

It’s really difficult to say what 5G would do for carrier cloud, largely because it’s difficult to say what 5G will do overall, both functionally and in a business sense.  A third of the comments I got from operators that favored 5G as a carrier cloud driver admitted that 5G “has a long way to go” before real adoption can be expected.  In other dialogs I’ve had with operators, they indicated that their current 5G planning focused on the radio access network (RAN).  Some said they wanted to extend FTTN with 5G instead of DSL/copper, but most thought they’d do the upgrade for competitive and capacity reasons.

Those who think 5G boosts NFV, which boosts carrier cloud, are thinking mostly of a 5G goal of making network service features interconnectable and “roamable” to the same extent that connectivity is.  The problems with this vision are 1) there is no currently approved approach for VNF federation in NFV, 2) there’s no significant VNF deployment except in premises devices, 3) many operators don’t like the notion of constructing services from components like that, fearing it would eliminate a large-provider advantage, and 4) we still don’t have a 5G standard in this area (and probably won’t get one till next year).

The actual place where 5G might help carrier cloud is in the residential broadband space.  I’ve been blogging for almost a year on the fact that operators told me the most interesting 5G application was DSL replacement in FTTN deployments, and Verizon has now announced it will be starting to deploy that way halfway through 2018.  Clearly the thing that needs 5G capacity versus DSL capacity would be video, and video turns out to be the thing my model says is the best near-term driver of carrier cloud.

In 2017, video delivery enhancements and advertising caching (CDN and related tools) accounted for almost 60% of the opportunity driver for carrier cloud, and you’ve got to go out to 2020 before it drops below the 50% level.  Obviously there hasn’t been much uptick in the adoption of carrier cloud for video/ad hosting, but here’s an important point—you can’t deliver traditional FiOS video over 5G/FTTN.  You’d have to stream; thus, it is very likely that the Verizon-style 5G/FTTN model would require augmented caching for video and ad delivery.

The good thing about this particular carrier cloud driver is that it would very likely create a demand for edge-caching, meaning edge-hosting, meaning edge-concentrated carrier cloud.  FTTN terminates in central offices where there’s real estate to establish carrier cloud data centers.  These data centers could then be expected to serve as hosting points for other carrier cloud applications that are not yet capable of justifying one-off deployments of their own.

By 2020, when video/ad support finally drops below 50%, the biggest uptick in carrier cloud driver contribution comes from the 5G/IMS/EPC area, meaning the virtual hosting of 5G-and-mobile-related elements.  This is because as wireline DSL/FTTN is replaced by 5G/FTTN, there’s certain to be symbiotic use of that home 5G.  One of the easiest ways to stall out a new RAN technology is to have no handsets capable of using it, which happens in part because there are no 5G cells to use those handsets in.  If many homes have local 5G, then why not let those same 5G connections support the homeowner?  In fact, why not let those FTTN links to 5G-for-home also serve as 5G RAN cells for mobile services?  You end up with a lot of 5G deployment, enough to justify handset support for 5G.

The big carrier cloud opportunity starts to show at about this same point (2020) and by 2022 it makes up half of the total carrier cloud driver opportunity.  It’s the shift to event/contextual services, including IoT.  The edge data centers that are driven by 5G/FTTN are available for event processing and the creation of contextual, event-driven services that most cloud providers won’t be able to supply for lack of edge data centers.  This is what finally gives the network operators a real edge in cloud services.

Of course, they may not take the opportunity and run with it.  You can fairly say that the big problem with carrier cloud is that it’s driven by a bunch of interdependent things and not one single thing, and that’s probably why operators don’t think of it as a specific technology planning priority.  They need to learn to think a different way, and I’m trying now to find out if there are real signs that’s going to happen.  Stay tuned!

Operators’ Technology Plans for 2018: In a Word, “Murky”

We are now past the traditional fall technology planning cycle for the network operators, and I’ve heard from the ones that cooperate in providing me insight into what they expect to do next year and beyond.  There are obviously similarities between their final plans and their preliminary thinking, but they’ve also evolved their positions and offered some more details.

Before they got started, operators told me there were three issues they were considering.  First, could SDN and NFV be evolved to the point where they could actually make a business case and impact declining profit per bit?  Second, was there hope that regulatory changes would level the playing field with the OTTs?  Third, what could really be done with, and expected of, 5G?  They’ve addressed all of these, to a degree.

There’s some good news with respect to NFV, and some not-so-good.  The best news in a systemic sense is that operators have generally accepted the notion that broader service lifecycle automation could in fact make a business case.  The not-so-good news in the same area is that operators are still unconvinced that any practical service lifecycle automation strategy is being offered by anyone.  For SDN and NFV, the good news is that there is gradual acceptance of the value of both technologies in specific “low-apple” missions.  The bad news is that operators aren’t clear as to how either SDN or NFV will break out of the current limitations.

From a business and technology planning perspective, operators think they have the measure of vCPE in the context of business edge services.  They believe that an agile edge device could provide enough benefits to justify vCPE deployment, though most admit that the ROIs are sketchy.  They also believe that the agile-edge approach is a reasonable way to jump off to cloud-edge hosting of the same functions, though most say that their initiatives in this area are really field trials.  That’s because virtually no operators have edge-cloud deployments to exploit yet.

It’s interesting to me that SDN and NFV haven’t introduced a new set of vendors, at least not yet.  About two-thirds of the operators say that the vendors they’re looking hardest at for SDN and NFV are vendors who are incumbents in their current infrastructure.  The biggest place that’s not true is in “white-box” switching, and in that space, operators are showing more interest in rolling their own based on designs from some open computing and networking group than in buying from a legacy or new vendor.

In the NFV space, computer vendors are not showing any major gains in strategic influence, which is interesting given that hosting is what separates NFV from device-based networking.  The reason seems to be that “carrier cloud” is where servers deploy, and so far, NFV is confined to agile CPE and doesn’t contribute much to proactive carrier-cloud (particularly edge-cloud) deployment.  Somewhat to my own surprise, I didn’t see much push behind “carrier cloud” in the planning cycle.  I think that’s attributable to a lack of strategic focus among the computer vendors, and lack of a single decisive driver.

The lack of a decisive driver is reflected in my own modeling of the market opportunity for carrier cloud.  Up to 2020, the only clear opportunity driver is video and advertising, and operators have both regulatory and competitive concerns in both these areas.  Video on demand and video streaming are both slowly reshaping content delivery models, but there seems little awareness of the opportunity to use operator CDNs as a carrier-cloud on-ramp, and frankly I’m disappointed to see this.  I hope something changes in 2018.

On the regulatory side, note my blog on Monday on the FCC’s move.  Operators are both hopeful and resigned about the proposed change, which is somewhat as I’d feared.  They recognize that unless the FCC imposes Title II regulation on operators, it has little chance of imposing restrictions on settlement and paid prioritization.  They also believe that whatever happens in the US on “net neutrality” up to 2020, it’s not going to reimpose Title II.  Thus, their primary concern is that a change in administration in 2020 could reverse the current ruling and restore Title II.  That limits the extent to which they’d make an aggressive bet on paid prioritization and settlement.

The US operators I’ve talked with are cautious about even moving on settlement, fearing that a decision to charge video providers (in particular) for delivery would result in consumer price hikes and potential backlash on the whole regulatory scheme.  Thus, they seem more interested in the paid prioritization approach, offering at least content providers (and in some very limited cases, consumers) an opportunity to pay extra for special handling.

Outside the US, operators believe that if the US applies paid prioritization and settlement to the Internet, many or even most other major markets would follow suit.  However, they don’t think it would happen overnight, and that makes the confidence that US operators feel in the longevity of the regulatory shift very important.

For 2018 and probably 2019, I don’t see any signs that regulatory changes will have a major impact on technology choices.  SDN could be facilitated by paid prioritization, but current plans don’t include SDN because the shift couldn’t be easily reversed if policies changed.  Fast lanes may come, but they won’t drive near-term technology changes.

Any hopes of changes, at least technology changes, come down to 5G.  In areas where mobile services are highly competitive (including the US and EU), 5G deployment may be mandatory for competitive reasons alone.  In the US and some other markets, 5G/FTTN combinations offer the best hope of delivering “wireline” broadband at high speeds to homes and small business/office locations.  All of this adds up to the likelihood that 5G investment is baked into business plans, and that’s what I’ve been told.

Baking it into technology plans is a given at one level (it’s mandated) but difficult at another (what do you bake in?).  Like almost everything else in tech, 5G has been mercilessly overhyped, and associated with a bunch of stuff whose 5G connection is tenuous at best.  Let me give you some numbers to illustrate this.

Of the operators who’ve talked to me on the topic, 47% say that there’s a credible link between 5G and NFV, and 53% say 5G doesn’t require NFV.  On SDN, 36% say there’s a credible link and 64% say there isn’t.  On carrier cloud, 68% say there’s a credible link to 5G and 32% say “No!”  Fully 85% say that you could do 5G credibly with nothing more than changes to the radio access network (RAN).  So where does this leave 5G technology planning?

5G core specifications won’t be ratified for almost a year, and it’s not clear to operators how much of the 5G capabilities that are then finalized in standards form will be translated into deployed capabilities, or when.  Much of 5G core deals with feature/service composability, and some operators argued that this sort of capability hasn’t been proved in the higher-value wireline business services market.

Where this has left operators this fall is in a position of management support for 5G deployment but only limited technical planning to prepare for it.  The sense I get is that operators are prepared to respond to competitive 5G pressure and do what the market demands, but they really hope (and perhaps believe) that 5G won’t be a technical planning issue before 2020 or even 2021.

Across the board, that same confusion seems to hold.  In fact, this year’s planning cycle is less decisive than any I can recall in 30 years, though some of the winning technologies of prior cycles never really made any impact (ATM comes to mind).  Might the lack of direction, the emphasis on response rather than on tactical planning, be a good sign?  Perhaps the market can pick better technologies than the planners, and it appears that for 2018 at least, the planners are looking to the market for direction.

Sorry, ONAP, I Still Have Questions

The ONAP Amsterdam release is out, and while there are reports that the modular structure eases some of the criticisms made of ONAP, I can’t say that it’s done anything to address my own concerns about the basic architecture.  I’ve tried to answer them by reviewing the documentation on ONAP, without success.  They’re important, basic, questions, and so I’ll address them here and invite someone from ONAP to answer them directly.

Let me start by talking about a VNF, which is the functional unit of NFV.  A VNF is a hosted feature, something that has to be deployed and sustained, like any software component.  VNFs have network connections, and these can be classified into three general categories.  First, there are the data-plane connections that link a VNF into a network.  Firewall VNFs, for example, would likely have two of these, one pointing toward the network service and the other toward the user.  Second, there are management connections representing portals through which the management of the element can be exercised.  SNMP ports and CLI ports are examples.  Third, there may be a connection provided for user parametric control, to do things like change the way a given TCP or UDP port is handled by a firewall.
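To make those three connection categories concrete, here is a minimal sketch of how a hypothetical VNF descriptor might enumerate them for a firewall VNF.  The class and field names are my own illustration, not anything drawn from the ONAP or NFV ISG specifications.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ConnectionRole(Enum):
    DATA_PLANE = "data-plane"       # links the VNF into the service data path
    MANAGEMENT = "management"       # SNMP/CLI/portal access for operations
    USER_PARAMETRIC = "parametric"  # user-facing parameter control

@dataclass
class VnfConnection:
    name: str
    role: ConnectionRole
    protocol: str                   # e.g. "Ethernet", "SNMP", "HTTPS"

@dataclass
class VnfDescriptor:
    vnf_type: str
    connections: List[VnfConnection] = field(default_factory=list)

# A hypothetical firewall VNF: two data-plane ports, one management
# port, and one user parametric port.
firewall = VnfDescriptor(
    vnf_type="firewall",
    connections=[
        VnfConnection("net-side", ConnectionRole.DATA_PLANE, "Ethernet"),
        VnfConnection("user-side", ConnectionRole.DATA_PLANE, "Ethernet"),
        VnfConnection("snmp-agent", ConnectionRole.MANAGEMENT, "SNMP"),
        VnfConnection("rule-portal", ConnectionRole.USER_PARAMETRIC, "HTTPS"),
    ],
)
```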

When we deploy a VNF, we would do a bunch of stuff to get it hosted and make whatever connections it has accessible.  We would then exercise some sort of setup function to get the management and user parametrics set up to make the function operational.  Lifecycle processes might have to renew the connections and even change management and user parameters.  The connection management part of deployment involves making the connections addressable in an internal (private) address space, and exposing into the “service address space” any connections that are really visible to the user.

I think it’s clear that the process of “deployment”, meaning getting the VNF hosted and connected, has to be a generalized process.  There is no reason why you’d have to know you were doing a firewall versus an instance of IMS, just to get it hosted and connected.  A blueprint has to describe what you want, not why you want it.
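As an illustration of what a mission-agnostic blueprint might look like, here is a sketch in which the hosting and connection requirements are described without any reference to what the function does.  The structure and key names are hypothetical, not taken from any ONAP or ETSI artifact.

```python
# A hypothetical, mission-agnostic deployment blueprint: it says what to
# host and what to connect, never why.  Nothing below depends on whether
# the image is a firewall or a piece of IMS.
blueprint = {
    "hosting": {
        "image": "vendor-vnf-image:1.2",   # placeholder image name
        "vcpus": 2,
        "memory_gb": 4,
    },
    "connections": {
        # internal (private) address space used during deployment
        "internal_subnet": "10.10.0.0/24",
        # connections exposed into the service address space
        "expose": ["net-side", "user-side"],
        # connections kept private to the operator
        "private": ["snmp-agent", "rule-portal"],
    },
}

def deploy(bp: dict) -> None:
    """Generic deployment walk-through: host, wire, expose."""
    print(f"hosting {bp['hosting']['image']} "
          f"({bp['hosting']['vcpus']} vCPU / {bp['hosting']['memory_gb']} GB)")
    print(f"wiring connections into {bp['connections']['internal_subnet']}")
    for port in bp["connections"]["expose"]:
        print(f"exposing {port} into the service address space")

deploy(blueprint)
```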

In the management and user parameterization area, it is possible that you will not have a generalized interface.  All SNMP MIBs aren’t the same, and certainly not all firewall implementations have the same interface (often it’s a web portal the device exposes) for changing parameters.  If we don’t need to set up the VNF because the user is expected to do that, then management and parameterization are non-issues.  If we do have to set it up (whether there’s a standard interface or not) then we need to have what I’ll call a proxy that can speak the language of that interface.  Logically, we’d ask that proxy to translate from some standard metamodel to the specific parameter structure of the interface.
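A sketch of what I mean by a “proxy” is below: a common metamodel on one side, an interface-specific translation on the other.  The class names and the SNMP-flavored example are purely illustrative assumptions on my part, not anything any body has standardized.

```python
from abc import ABC, abstractmethod
from typing import Dict

class ManagementProxy(ABC):
    """Translates a standard parameter metamodel into one specific
    management/parametric interface (SNMP MIB, CLI, web portal, etc.)."""

    @abstractmethod
    def apply(self, metamodel_params: Dict[str, str]) -> None:
        ...

class SnmpFirewallProxy(ManagementProxy):
    """Hypothetical proxy for a firewall that is configured via SNMP."""

    # Illustrative mapping from standard metamodel keys to vendor OIDs.
    OID_MAP = {
        "block_port": "1.3.6.1.4.1.99999.1.1",
        "log_level":  "1.3.6.1.4.1.99999.1.2",
    }

    def apply(self, metamodel_params: Dict[str, str]) -> None:
        for key, value in metamodel_params.items():
            oid = self.OID_MAP.get(key)
            if oid is None:
                raise ValueError(f"no mapping for metamodel key {key!r}")
            # In a real proxy this would be an SNMP SET; here we just log it.
            print(f"SNMP SET {oid} = {value}")

# Onboarding a different firewall means supplying a different proxy;
# the orchestration logic above it never changes.
SnmpFirewallProxy().apply({"block_port": "23", "log_level": "warning"})
```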

Given this, the process of onboarding a VNF would be the process of describing the hosting/connection blueprint (which probably can be done with existing virtualization and DevOps software) and providing or identifying the proper proxy.  I would submit that there is nothing needed that’s VNF-specific beyond this, and nothing that’s specific to the mission of the VNF.

OK, so given this, what are my concerns with Amsterdam?  The answer is that a good, promotional, industry-responsive description of ONAP’s VNF handling would look like what I just offered.  If I were ONAP, I’d start with that generalized approach, stated explicitly and powerfully.  I might then say “We are releasing, with Amsterdam, the metamodel for VoLTE and residential gateway (vCPE), and also a proxy for the following specific management/parameter interfaces.”  I’d make it clear that any VNF provider could take one of the proxies and rebuild it to match their own interfaces, thus making their stuff deployable.  This would be a highly satisfactory state of affairs.

ONAP hasn’t done that.  They talk about the two “use case” applications but don’t say that their support for them is a sample adaptation of what’s a universal VNF lifecycle management capability.  So is it?  That’s my question.  If there is any degree of VNF or service specificity in the ONAP logic, specificity that means that there really is a different set of components for VoLTE versus consumer broadband gateway, then this is being done wrong and applications and VNFs may have to be explicitly integrated.

The blueprint that describes deployment is the next question.  Every VNF should deploy pretty much as any other does, using the same tools and the same set of parameters.  Every service or application also has to be composable to allow that, meaning a blueprint has to be created that describes not only the structure of the service or application, but also defines the lifecycle processes in some way.

People today seem to “intent-wash” everything in service lifecycle management.  I have an intent model, therefore I am blessed.  An intent model provides a means of hiding dependencies, meaning that you can wrap anything that has the same external properties in one and it looks the same as every other implementation.  If something inside such a model breaks, you can presume that the repair is as hidden (in a detail sense) as everything else is.  However, that doesn’t mean you don’t have to author what’s inside to do the repair.  It doesn’t mean that if the intent-modeled element can’t repair itself, you don’t have to somehow define what’s supposed to happen.  It doesn’t mean that there isn’t a multi-step process of recommissioning intent-modeled components, and that such a process doesn’t need to be defined.
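To show what “authoring what’s inside” actually implies, here is a minimal sketch of an intent-modeled element that tries to repair itself and escalates when it can’t.  The method names and the escalation step are my own assumptions, not anything defined by ONAP or any standards body.

```python
class IntentModeledElement:
    """Hides its implementation behind external properties (the intent),
    but still has to define what happens when something inside breaks."""

    def __init__(self, name: str, redeploy, notify_parent):
        self.name = name
        self._redeploy = redeploy            # internal repair action
        self._notify_parent = notify_parent  # escalation when repair fails

    def on_fault(self, fault: str) -> None:
        # Step 1: attempt the internal, hidden repair.
        if self._redeploy(fault):
            print(f"{self.name}: repaired internally, intent still met")
            return
        # Step 2: the element can't meet its intent; someone has to have
        # defined what happens next -- here, escalate to the parent model.
        print(f"{self.name}: internal repair failed, escalating")
        self._notify_parent(self.name, fault)

# Illustrative wiring: a redeploy that fails half the time and a parent
# handler that would trigger service-level recommissioning.
import random
element = IntentModeledElement(
    "vFirewall-instance",
    redeploy=lambda fault: random.random() < 0.5,
    notify_parent=lambda name, fault: print(f"parent recommissions {name} ({fault})"),
)
element.on_fault("host failure")
```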

I don’t see this stuff either.  I’m not saying it’s not there, but I do have to admit that since operators tell me that this is the sort of thing they’d need to know to place a bet on ONAP, it’s hard to see why it wouldn’t be described triumphantly if it is there.

ONAP may not like my criticism and doubt.  I accept that, and I accept the possibility that there’s some solid documentation somewhere on the ONAP wiki that explains all of this.  OK, ONAP people, assume I missed it (there is a lot of stuff there, candidly not structured to be easily reviewed at the functional level), and send me a link.  Send me a presentation that addresses my two points.  Whichever you do, I’ll read it, and alter my views as needed based on what I find.  If I didn’t miss it, dear ONAP, then I think you need to address these questions squarely.

While I’m offering ONAP an opportunity to set me straight, let me do the same for the NFV ISG.  This is how NFV deployment should work.  Show me where this is explained.  Show me how this model is referenced as the goal of current projects, and how current projects align with it.  Do that and I’m happy to praise your relevance in the industry.  If you can’t do that, then I humbly suggest that you’ve just defined what your next evolutionary phase of work should be targeting.

To both bodies, and to everyone else in the NFV space who talks with the media and analyst communities and wants publicity, I also have a point to make.  You are responsible for framing your story in a way that can be comprehended by the targets of your releases and presentations.  You should never presume that everyone out there is a member of your group, or can take what might be days or weeks to explore your material.  Yes, this is all complicated, but if it can’t be simplified into a media story then asking for publicity is kind of a waste of time, huh?  If it can’t be explained on your website with a couple diagrams and a thousand words of text, then maybe it’s time to revisit your material.

The FCC Neutrality Order: It’s Not What You Think

We have at least the as-yet unvoted draft of the FCC’s new position on Net Neutrality, and as accustomed as I am to reading nonsense about developments in tech, the responses here set a new low.  I blogged about the issues that the new FCC Chairman (Pai) seemed to be addressing here, and I won’t reprise all the details.  I’ll focus instead on what the draft says and how it differs from the position I described in the earlier blog, starting with some interesting “two-somes” behind the order.

There are really two pieces of “net neutrality”.  The first could be broadly called the “non-discrimination” piece, and this is what guarantees users of the Internet non-discriminatory access to any lawful website.  The second is the “prioritization and settlement” piece, and this one guarantees that no one can pay to have Internet traffic handled differently (prioritized) or be required to pay settlement among ISPs who carry the traffic.  The public debate has conflated the two, but in fact the current action is really aimed at the second.

There are also two competing issues in net neutrality.  The first is the interest of the consumers and OTTs who are using the Internet, and the second the profit interest of the ISPs who actually provide the infrastructure.  The Internet is almost totally responsible for declining profit per bit, and at some point this year or next, it will fall below the level needed to justify further investment.  While everyone might like “free Internet”, there will be no race to provide it.  A balance needs to be struck between consumer interest and provider interest.

As a practical matter, both the providers and the OTTs have powerful financial interests they’re trying to protect, and they’re simply manipulating the consumers.  Stories on this topic, as I said in my opening paragraph, have been simply egregious as far as conveying the truth is concerned.  The New York Attorney General is investigating whether some comments on the order were faked, generated by a third party usurping the identities of real consumers.  Clearly there’s a lot of special interest here.

Finally, there are two forums in which neutrality issues could be raised.  The first is the FCC and the second the Federal Trade Commission (FTC).  The FCC has a narrow legal mandate to regulate the industry within the constraints of the Communications Act of 1934 as amended (primarily amended by the Telecommunications Act of 1996).  The FTC has a fairly broad mandate of consumer protection.  This is a really important point, as you’ll see.

So, what does the new order actually do?  First and foremost, it reverses the previous FCC decision to classify the Internet as a telecommunications service (regulated under Title II of the Communications Act of 1934).  This step essentially mandates an FCC light touch on the Internet because the Federal Courts have already invalidated many of the FCC’s previous rules on the grounds they could be applied only to Telecommunications Services.

All “broadband Internet access services”, independent of technology, would be classified as information services.  The classification includes mobile broadband, and also includes MVNO services.  People/businesses who provide broadband WiFi access to patrons as a mass consumer service are included.  It excludes services to specialized devices (including e-readers) that use the Internet for specialized delivery of material and not for broad access.  It also excludes CDNs, VPNs, and Internet backbone services.  The rule of thumb is this: if it’s a mass-market service to access the Internet, then it’s an information service.

The classification is important because it establishes the critical point of jurisdiction for the FCC.  The FCC is now saying that it would be too restrictive to classify the Internet under Title II, but without that classification the FCC has very limited authority to regulate the specific behavior of the ISPs.  Thus, the FCC won’t provide much in the way of specific regulatory limits and penalties.  It couldn’t enforce them, and perhaps it could never have done so.  Everything they’ve done in the past, including non-discrimination, has been appealed by somebody based on lack of FCC authority, and the Title II classification was undertaken to give the FCC authority to do what it wanted.  Absent Title II, the FCC certainly has no authority to deal with settlement and prioritization, and probably has insufficient authority to enforce non-blocking and non-discrimination.  That doesn’t mean “net neutrality” goes away, as the stories have said.

The FCC will require that ISPs publish their terms of service in clear language, including performance, and this is where the FCC believes that “neutrality” will find a natural market leveling.  The order points out that broadband is competitive, and that consumers would respond to unreasonable anti-consumer steps (like blocking sites, slowing a competitor’s offerings, etc.) by simply moving to another provider.

The order also points out that the “premier consumer protection” body, the FTC, has authority to deal with situations where anti-competitive or anti-consumer behavior arises and isn’t dealt with effectively by competitive market forces.  Thus, the FCC is eliminating the “code of conduct” that it had previously imposed, and is shifting the focus of consumer protection to the FTC.  As I noted earlier, it’s never been clear whether the FCC had the authority to impose “neutrality” except through Title II, and so the fact is that we’ve operated without strict FCC oversight for most of the evolution of the Internet.

The FTC and the marketplace are probably not enough to prevent ISPs from offering paid prioritization or from requiring settlement to deliver high-volume traffic.  In fact, one of the things I looked for in the order was the treatment of settlement among ISPs, a topic particularly dear to my heart since I’ve opposed the current “bill and keep” practice for decades, and even co-authored an RFC on the topic.  The order essentially says that the FCC will not step in to regulate the way that ISPs settle for peering with each other or through various exchanges.  Again, the FCC says that other agencies, including DoJ antitrust and the FTC, have ample authority to deal with any anti-competitive or unreasonable practices that might arise.

Paid prioritization is similarly treated; the FCC has eliminated the rules against it, so ISPs are free to work to offer “fast-lane” behavior either directly to the consumer or to OTTs who want to pay on behalf of their customers to improve quality of experience.  This may encourage specific settlement, since the bill-and-keep model can’t compensate every party in a connection for the additional cost of prioritization.  We should also note that paid prioritization could be a true windfall for SD-WAN-type business services, since the economics of high-QoS services created over the top with paid prioritization would surely be a lot better than current VPN economics.  You could argue that SD-WAN might be the big winner in the order.

The OTTs will surely see themselves as the big losers.  What they want is gigabit broadband at zero cost for everyone, so their own businesses prosper.  Wall Street might also be seen as a loser, because they make more money on high-flyers like Google (Alphabet) or Netflix than on stodgy old AT&T or Verizon.  VCs lose too because social-media and other OTT startups could face higher costs if they have to pay for priority services.  That might mean that despite their grumbling, players like Facebook and Netflix could face less competition.

It will be seen as an improvement for the ISPs, but even there a risk remains.  Network operators have a very long capital cycle, so they need stability in the way they are regulated.  This order isn’t likely to provide that, for two reasons.  First, nobody believes that a “new” administration of the other party would leave this order in place.  Second, only legislation could create a durable framework, and Congress has been unable to act on even major issues; it has avoided weighing in on Internet regulation for 20 years now.  Thus, realizing the full benefits of the order may be elusive, because operators might be reluctant to believe the changes will persist long enough to justify changing their plans for investment in infrastructure.

The long-term regulatory uncertainty isn’t the only uncertainty here.  The Internet is global, and its regulation is a hodgepodge of competing national and regional authorities, most of whom (like the FCC) haven’t had a stable position.  “We brought in one side and gave them everything they wanted, then we brought in the other side and gave them everything they wanted,” is how a lawmaker in the US described the creation of the Telecom Act in 1996.  That’s a fair statement of regulatory policy overall; the policies favor the firms who favor the current political winners.

My view, in the net?  The FCC is taking the right steps with the order, and that view shouldn’t surprise those who’ve read my blog over the last couple of years.  Net neutrality is not being “killed”, but enforcement of the first critical part of it (what consumers think neutrality really is) is shifted to the FTC, whose power of enforcement is clear.  There is no more risk that ISPs could decide what sites you could visit than there has been—none, in other words.  It’s not a “gift to telecom firms” as one media report says, it’s a potential lifeline for the Internet overall.  This might reverse the steady decline in profit per bit, might restore interest in infrastructure investment.  “Might” if the telcos believe the order will stand.

It’s not going to kill off the OTTs either.  There is a risk that the OTTs will be less profitable, or that some might raise their rates to cover the cost of settlement with the ISPs.  Will it hurt “Internet innovation?”  Perhaps, if you believe we need another Facebook competitor, but it might well magnify innovation where we need it most, which is in extending broadband at as high a rate and low a cost as possible.

If the ISPs are smart, they’ll go full bore into implementing the new position, offering paid prioritization and settlement and everything similar or related, and demonstrating that it doesn’t break the Internet but promotes it.  That’s because there could be only about three years remaining on the policy before a new FCC threatens to take everything back.  The only way to be sure the current rules stay in place is to prove they do good overall.

Cisco’s Quarter: Are They Really Facing the Future at Last?

Cisco reported its quarterly numbers, which were still down in revenue terms, but they forecast the first growth in revenue the company has had in about 2 years of reports.  “Forecast” isn’t realization of course, but the big question is whether the gains represent what one story describes as “providing an early sign of success for the company’s transition toward services and software”, whether it’s mostly a systemic recovery in network spending, or just moving the categories of revenue around.  I think it’s a bit of everything.

Most hardware vendors have been moving to a subscription model for all their software elements, which creates a recurring revenue stream.  New software, of course, is almost always subscription-based, and Cisco is somewhat unusual among network vendors in having a fairly large software (like WebEx) and server/platform business.

Cisco’s current-quarter year-over-year data shows a company that’s still feeling the impact of dipping network equipment opportunity.  Total revenue was off 2%, infrastructure platforms off 4%, and “other products” off 16%.  Security was the big winner, up 8%, with applications up 6% and services up 1%.  If you look at absolute dollars (growth/loss times revenue), the big loser was infrastructure and the big winner was applications.

Here’s the key point, the point that I think at least invalidates the story that this is an “early sign of success” for the Cisco shift in emphasis.  Infrastructure platforms are over 57% of revenue as of the most recent quarter.  Applications are about 10%, Security about 5%, and Services about 25%.  Two categories of revenue—applications and security—that are showing significant growth combine to make up only 15% of revenue, and that 57% Infrastructure Platforms sector is showing a significant loss.  How can gains in categories that account for only 15% of revenue offset losses in a category that accounts for almost four times as much revenue?

Two percent of current revenues for Cisco, the reported year-over-year decline, is about $240 million.  To go from a 2% loss to a 2% gain, which is where guidance is, would require $480 million more revenue from those two gainer categories, which now account for about $1.8 billion in total.  Organic growth in TAM of that magnitude is hardly likely in the near term, and a change in market share in Cisco’s favor is similarly unlikely.  What’s left?
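As a back-of-the-envelope check on those numbers, here is the arithmetic, assuming a quarterly revenue base of roughly $12 billion (the base implied by the 2%-equals-$240-million figure above); treat it as illustrative, not as Cisco’s reported detail.

```python
quarterly_revenue = 12.0e9                      # rough base implied by the text
decline = 0.02 * quarterly_revenue              # the reported 2% decline
swing_to_2pct_gain = 0.04 * quarterly_revenue   # -2% to +2% is a 4-point swing
gainer_base = 0.15 * quarterly_revenue          # applications + security, ~15% of revenue

print(f"2% decline            = ${decline/1e6:,.0f} million")            # ~ $240M
print(f"revenue swing needed  = ${swing_to_2pct_gain/1e6:,.0f} million")  # ~ $480M
print(f"gainer categories now = ${gainer_base/1e9:,.1f} billion")         # ~ $1.8B
print(f"implied growth needed = {swing_to_2pct_gain/gainer_base:.0%}")    # ~ 27%
```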

The essential answer is M&A.  Cisco has a decent hoard of cash, which it can use to buy companies that will contribute a new revenue stream.  However Cisco classifies the revenue, getting about half a billion dollars more would give it everything it needs.  Cisco is being smart by using cash and M&A to diversify, to add products and revenue to offset what seems the inevitable diminution of Cisco’s legacy, core, products’ contribution.  So yes, Cisco is transforming, but less by a transition toward software and services than by the acquisition of revenues from outside.

It may seem this is an unimportant distinction, but it’s not.  The problem with “buying revenue” through M&A is that you easily run out of good options.  It would be better if Cisco could fund its own R&D to create innovative products in other areas, but there are two problems with that.  First, what would an innovator in another “area” want with a job at Cisco?  Cisco probably has experts in its current focus areas, which doesn’t help if those areas are in perpetual decline.  Second, it might take too long; if current infrastructure spending (at 57% of revenue) is declining at a 4% rate, Cisco’s total revenue will take a two-and-a-quarter-percent hit.  To offset that in sectors now representing 15% of revenue, Cisco would need gains there of about 15%, right now.  That means that at least for now, Cisco needs M&A.
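The same kind of quick arithmetic, using only the percentages cited in this paragraph, shows why the offset requirement is so steep.

```python
infrastructure_share = 0.57    # infrastructure's share of total revenue
infrastructure_decline = 0.04  # its annual decline rate
gainer_share = 0.15            # applications + security share of revenue

hit_to_total = infrastructure_share * infrastructure_decline
required_gain = hit_to_total / gainer_share

print(f"hit to total revenue : {hit_to_total:.2%}")   # ~2.28%
print(f"offset growth needed : {required_gain:.0%}")  # ~15%
```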

Most of all, it needs a clear eye to the future.  You can’t simply run out to the market and look for companies to buy when you need to add something to the bottom line.  The stuff you acquire might be in at least as steep a decline as the stuff whose decline you’re trying to offset.  If you know where things are going you can prevent that, and you can also look far enough out to plan some internal projects that will offer you better down-line revenue and reduce your dependence on M&A.

Obviously, it’s not easy to find acquisitions to make up that needed half-billion dollars.  Cisco would have to be looking at a lot of M&A, which makes it much harder to pick out winners.  And remember that the losses from legacy sectors, if they continue, will require an offset every year.  A better idea would be to look for acquisitions that Cisco could leverage through its own customer relationships, and that would represent not only that clear current symbiosis but also future growth opportunity.  That kind of M&A plan would require a whole lot of vision.

Cisco has spent $6.6 billion this year on the M&A whose prices have been disclosed, according to this article, of which more than half was for AppDynamics.  Did that generate the kind of revenue gains they need?  Hardly.  It’s hard to see how even symbiosis with Cisco’s marketing, products, and plans could wring that much from the M&A they did.  If it could, it surely would take time and wouldn’t help in the coming year to get revenues from 2% down to 2% up.

To be fair to Cisco, this is a tough time for vision for any network vendor, and a tough industry to predict.  We have in networking an industry that’s eating its heart to feed its head.  The Internet model under-motivates the providers of connectivity in order to incentivize things that consume connectivity.  Regulations limit how aggressively network operators could elect to pursue those higher-layer services, which leaves them to try to cut costs at the lower level, which inevitably means cutting spending on equipment.

That which regulation has taken away, it might give back in another form.  The FCC will shortly announce its “end of net neutrality”, a characterization that’s fair only if you define “net neutrality” much more broadly than I do, and also believe that the FCC was the right place to enforce real net neutrality in the first place.  Many, including Chairman Pai of the FCC, think that the basic mission of preventing blocking and discrimination, which forms the real heart of net neutrality, belongs in the FTC.  What took it out of there was less about consumer protection than OTT and venture capital protection.

The courts said that the FCC could not regulate pricing and service policy on services that were “information services” and explicitly not subject to that kind of regulation.  The previous FCC then reclassified the Internet as a telecommunications service, and the current FCC is now going to end that.  Whether the FCC would end all prohibitions on non-neutral behavior is doubtful.  The most it would be likely to do is accept settlement and paid prioritization, which the OTT players hate but which IMHO would benefit the ISPs to the point of improving their willingness to capitalize infrastructure.

What would network operators do if the FCC let them sell priority Internet?  Probably sell it, because if one ISP didn’t and another did, the latter would have a competitive advantage with respect to service quality.  Might the decision to create Internet QoS hurt business VPN services?  No more than SD-WAN will, inevitably.

Operators could easily increase their capex enough to change Cisco’s revenue growth problems into opportunities.  Could Cisco be counting on the reversal of neutrality?  That would seem reckless, particularly since Cisco doesn’t favor the step.  What Cisco could be doing is reading tea leaves of increasing buyer confidence; they do report an uptick in order rates.  Some of that confidence might have regulatory roots, but most is probably economic.  Networking spending isn’t tightly coupled to GDP growth in the long term (as I’ve said in other blogs) but its growth path relative to GDP growth still takes it higher in good times.

The question is what tea leaves Cisco is reading.  Their positioning, which is as strident as always, is still lagging the market.  Remember that Cisco’s strategy has always been to be a “fast follower” and not a leader.  M&A is a better way to do that because an acquired solution can be readied faster than a developed one, and at lower cost.  But fast following still demands knowing where you’re going, and it also demands that you really want to be there.  There is nowhere network equipment can go in the very long term but down.  Value lies in experiences, which means software that creates them.  I think there are players out there that have a better shot at preparing for an experience-driven future than any Cisco has acquired.

What Cisco probably is doing is less preparing for “the future” than slapping a band-aid on the present.  They are going to leak revenue from their infrastructure stuff.  The market is going to create short-term wins for other companies as the networking market twists and turns, and I think Cisco is grabbing some of the wins to offset the losses.  Regulatory relief would give them a longer period in which to come to terms with the reality of networking, but it won’t fend off the need to do that.  The future doesn’t belong to networking at this point, and Cisco has yet to show it’s faced that reality.

MEF 3.0: Progress but not Revolution

We have no shortage of orchestration activity in standards groups, and the MEF has redoubled its own Lifecycle Service Orchestration (LSO) efforts with its new MEF 3.0 announcement.  The overall approach is sound at the technical level, meaning that it addresses things like the issues of “federation” of service elements across provider boundaries, but it also leaves some gaps in the story.  Then there’s the fact that the story itself is probably not completely understood.

Virtualization in networking is best known through the software-defined network (SDN) and network functions virtualization (NFV) initiatives.  SDN replaces a system of devices with a different system, one based on different principles in forwarding.  NFV replaces devices with hosted instances of functionality.  The standards activities in the two areas are, not surprisingly, focused on the specific replacement mission of each.  SDN focuses on how forwarding substitutes for switching/routing, and NFV on how you make a hosted function look like a device.

The problem we’ve had is that making a substitution workable doesn’t make it desirable.  The business case for SDN or NFV is hard to make if at the end of the day, the old system and the new are equivalent in every way, yet that’s the “replacement” goal each area has been pursuing.  Operators have shifted their view from the notion that they could save enough in capital costs by the change to justify it, to the notion that considerable operational savings and new-service-opportunity benefits would be required.  Hence, the SDN and NFV debates have been shifting toward a debate on service lifecycle management automation (SLMA).

Neither SDN nor NFV put SLMA in-scope for standardization, which means that the primary operations impact of both SDN and NFV is to ensure that the opex and agility of the new system isn’t any worse than that of the old.  In fact, NFV in particular is aiming at simple substitution; MANO in NFV is about getting a virtual function to the state of equivalence with a physical function.  It’s the lack of SLMA capability that’s arguably hampering both SDN and NFV deployment.  No business case, no business.

The MEF has taken a slightly different approach with its “third network”, and by implication with MEF 3.0.  The goal is to create not so much a virtual device or network, but a virtual service.  To support that, the LSO APIs are designed to support “federated” pan-provider control of packet and optical elements of a service, and also the coordination of higher-layer features (like security) that are added to basic carrier Ethernet.

There are three broad questions about the MEF approach.  First is the question of federation; will the model address long-standing operator concerns about it?  Second is the question of carrier-Ethernet-centricity; does the MEF really go far enough in supporting non-Ethernet services?  Finally, there’s the overarching question of the business case; does MEF 3.0 move the ball there?  Let’s look at each.

Operators have a love/hate relationship with federation, and I’ve worked for a decade trying to help sort things out in the space.  On one hand, federation is important for operators who need to provide services larger than their own infrastructure footprint.  On the other, federation might level the playing field, creating more competitors by helping them combine to offer broader-scope services.  There’s also the problem of how to ensure that federation doesn’t create a kind of link into their infrastructure for others to exploit, by seeing traffic and capacity or by competing with their own services.

Facilitating service federation doesn’t address these issues automatically, and I don’t think that the MEF takes substantive steps to do that either.  However, there is value to facilitation, and in particular for the ability to federate higher-layer features and to integrate technology domains within a single operator.  Thus, I think we can say that MEF 3.0 is at least useful in this area.

The second question is whether the MEF goes far enough in supporting its own notion of the “third network”, the use of carrier Ethernet as a platform for building services at Level 3 (IP).  I have the launch presentation for the MEF’s Third Network, and the key slide says that Carrier Ethernet lacks agility and the Internet lacks service assurance (it’s best-efforts).  Thus, the Third Network has to be agile and deterministic.  Certainly, Carrier Ethernet can be deterministic, but for agility you’d have to be able to deliver IP services and harmonize with other VPN and even Internet technologies.

While the basic documents on MEF 3.0 don’t do much to validate the Third Network beyond claims, the MEF wiki does have an example of what would almost have to be the approach—SD-WAN.  The MEF concept is to use an orchestrated, centrally controlled, implementation of SD-WAN, and they do define (by name at least) the associated APIs.  I think more detail in laying out those APIs would be helpful, though.  The MEF Legato, Presto, and Adagio reference points are called out in the SD-WAN material, but Adagio isn’t being worked on by the MEF, and as a non-member I’ve not been able to pull the specs for the other two.  Thus, it’s not clear to me that the interfaces are defined enough in SD-WAN terms.

Here again, though, the MEF does something that’s at least useful.  We’re used to seeing SD-WAN as a pure third-party or customer overlay, and usually only on IP.  The MEF extends the SD-WAN model both to different technologies (Ethernet and theoretically SDN, but also involving NFV-deployed higher-layer features), and to a carrier-deployed model.  Another “useful” rating.

The final point is the business-case issue.  Here, I think it’s clear that the MEF has focused (as both SDN and NFV did) on exposing service assets to operations rather than on defining any operations automation or SLMA.  I don’t think you can knock them for doing what everyone else has done, but I do think that if I’ve declared SDN and NFV to have missed an opportunity in SLMA, I have to do the same for the MEF 3.0 stuff.

Where this leaves us is hard to say, but the bottom line is that we still have a business-case dependency on SLMA and still don’t have what operators consider to be a solution.  Would the MEF 3.0 and Third Network approach work, functionally speaking?  Yes.  So would SDN and NFV.  Can we see an easy path to adoption, defined and controlled by the MEF itself?  Not yet.  I understand that this sort of thing takes time, but I also have to judge the situation as it is and not how people think it will develop.  We have waited from 2012 to today, five years, for a new approach.  If we can’t justify a candidate approach at the business level after five years, it’s time to admit something was missed.

There may be good news on the horizon.  According to a Light Reading story, Verizon is betting on a wholesale SD-WAN model that would exploit the MEF 3.0 approach, and presumably wrap it in some additional elements that would make it more automated.  I say “presumably” because I don’t see a specific framework for the Verizon service, but I can’t see how they’d believe a wholesale model could be profitable to Verizon and the Verizon partner, and still be priced within market tolerance, unless the costs were wrung out.

We also have news from SDxCentral that Charter is looking at Nuage SD-WAN as a means of extending Ethernet services rather than of creating IP over Ethernet.  That would be an enhanced value proposition for the Third Network vision, and it would also establish that SD-WAN is really protocol-independent at the service interface level, not just in the support for underlayment transport options.  This is the second cable company (after Comcast) to define a non-MPLS VPN service, and it might mean that this will be a differentiator between telco and cableco VPNs.

How much the MEF vision alone could change carrier fortunes is an issue for carriers and for vendors as well.  Carrier Ethernet is about an $80 billion global market by the most optimistic estimates, and that is a very small piece of what’s estimated to be a $2.3 trillion communications services market globally.  Given that, the MEF’s vision can succeed only if somehow Ethernet breaks out of being a “service” and takes a broader role in all services.  There’s still work needed to support that goal.

Are Fiber Network Players Really Playing Well Enough?

We are seeing more signs of the fiber challenge and opportunity, and more uncertainty about how it will play out, especially in terms of winners and losers.  Ciena continues to take sensible steps, Infinera continues to stumble, and making sense of these seeming contradictions is the challenging part of assessing fiber’s future.

It’s not like we don’t all know that fiber deployment has nowhere to go but up.  Wireless alone could double fiber in service by 2025, and there’s a lot of global interest in increasing the commitment to fiber access, especially FTTN combined with 5G.  The challenge for fiber network players like Ciena and Infinera is that they don’t sell glass, but systems, and the role of those systems in a fiber-rich future is much more difficult to determine.

Most network hardware includes fiber interfaces, so single-mission point-to-point or even multipoint connections don’t require the equipment fiber networking vendors offer.  What you need their gear for is building “fiber networks”, which are connective Layer-One structures that provide optical multi-hop paths, aggregation, and distribution of capacity.  If you’re a fiber vendor, you either have to focus on expanded applications of fiber networking, or you have to bet on expansion in the few areas of fiber deployment that are essentially point-to-point but do require or justify specialized devices.

Infinera seems to have taken the second option, talking more about things like subsea cables for intercontinental connection.  Yes, we’re likely to have more of that, but no, it’s not likely to be a huge growth opportunity.  Data center interconnect is another area that they’ve identified, and while surely the cloud will increase the need for that, it’s not exactly a household-scale opportunity.  Of the 7.5 million business sites in the US, for example, only about 150,000 represent any scale of data center, and my surveys say that only 8,000 even represent multiple data centers of a single business.

Ciena has done a better job in positioning optical networking as a target, and focusing on what I think is the fundamental truth for optical network vendors—you need to have a connective, multi-hop, complex Layer One infrastructure opportunity if you want to have an opportunity for discrete optical network products versus glass connected to the interfaces of electrical-layer stuff.  Even Ciena, though, may not be going quite far enough or being quite explicit enough.

It’s helpful here to look at the extreme case.  What would magnify the value of optical networking in its true sense?  Answer, diminution of electrical networking in the same sense.  Put in the reverse, the more connectivity we manage at the optical layer the more the electrical layer looks like a simple edge function.

This is a clear description of what a combination of agile optics and “virtual wires” would be.  If Level 1 is fully connective (in virtual wire form), fully resilient in recovery from faults, and fully elastic in terms of capacity, then higher protocol layers are just the stuff that creates an interface and divides traffic up among the virtual pipes.  SD-WAN is a good example; if you’re going to build services on an overlay principle you’d achieve the lowest cost and simplest operation by overlaying them on the most primitive underlay you can build—a virtual wire.

Virtual wires can be distinguished from optical paths by the presumption that a virtual wire is a Level 1 element that carries traffic but doesn’t participate in any data-plane protocol exchange.  Optical paths can be viewed as an implementation option for virtual wires, but probably not one broadly applicable enough to fulfill their potential.  The problem is that not everyone can have a piece of an optical pipe serving them; you need to have some electrical-layer tail connectivity that aggregates onto the higher-capacity optical routes.  That’s what Ciena just announced, with the notion of a packet-technology edge function.
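As a way of pinning down the distinction, here is a small sketch of the data model I have in mind: a virtual wire is defined purely by its endpoints and capacity, an optical path is just one implementation of it, and the packet edge is where protocol awareness lives.  All of the names here are my own illustration, not anything from a Ciena product or an IETF specification.

```python
from dataclasses import dataclass

@dataclass
class VirtualWire:
    """A Level 1 element: carries traffic between two endpoints but takes
    no part in any data-plane protocol exchange."""
    a_end: str
    z_end: str
    capacity_gbps: int

@dataclass
class OpticalPath(VirtualWire):
    """One possible implementation of a virtual wire."""
    wavelength_nm: float = 1550.0

@dataclass
class PacketEdge:
    """The electrical-layer tail: protocol awareness lives only here, and it
    simply divides traffic among the virtual wires beneath it."""
    site: str
    uplinks: tuple

metro_wire = OpticalPath(a_end="edge-office-12", z_end="metro-hub", capacity_gbps=100)
edge = PacketEdge(site="edge-office-12", uplinks=(metro_wire,))
print(f"{edge.site} aggregates onto a {metro_wire.capacity_gbps}G wire to {metro_wire.z_end}")
```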

“Edge” is important here, because the closer you can get a fiber network—even a “fiber network” that’s including electrical/packet tail connections—to the edge, the more you can absorb into it in terms of features, functions, and benefits.  That absorption is what increases the value of fiber networks, and networking, and raises the revenue potential for vendors in the space.

If we look at edge computing in abstract, it’s tempting to see connectivity requirements as nothing more than a greater number of DCI paths, because edge computing is computer and data center connection if considered on its own.  The thing is, we have to consider it in the context of what else is at the edge.

The majority of edge computing sites will be sites where telecom/cablecom and wireless services converge.  Think telco central office.  There is already considerable traffic in and out of these locations, much of which is concentrated using its own specific equipment.  Historically, the “metro network” was a network created with optics (SONET) and supported through on-ramp add-drop multiplexers that offered operators a way of clumping a variety of traffic sources onto fast fiber paths.  If edge computing comes along, it adds to the stuff that needs clumping, and could potentially further justify the notion of a separate optical-layer network.

Ciena and Infinera already have “metro network” products and strategies, and it seems to me that edge computing is effectively an update to these strategies, a way of providing virtual wires to extend optics, perhaps even virtual-wire services to end users.  Ciena talks about some of the specific value propositions for 10 and 100GigE to the edge, but they really should explore two issues.  First, how do you keep the various higher-speed packet interfaces that the future will demand from being realized as simple glass between boxes rather than as elements of an optical network?  Second, how can you turn packet-edge into service-virtual-wire?

Virtual private networks can be created without switches/routers in a variety of ways, all of which are likely to offer lower service costs and greater operator profits.  Even things like content delivery networks and mobile packet core can be built that way, and we’re already seeing examples of this.  The logical pathway for operators to achieve better profits is to use cheaper technology—both in capex and opex terms—to create services.  Virtual wires would be a good way to start, because they can link in with SD-WAN, with virtual switch/router instances, and even with NFV-hosted service elements.

Optical players like Ciena and Infinera have an opportunity to anticipate what is likely an inevitable shift in how services like these are created, but it’s not one that will be automatically realized.  Vendors have to sing their own song, and sing effectively, if they want their buyers to listen.  Ciena has taken more positive steps in this direction, but even they’re not quite where they need to be.  Infinera has some hard choices to face.

A good, and sadly deceased, friend of mine, Ping Pan, was an advocate of a virtual wire concept.  He was one of the architects of the IETF effort on “pseudowires”, in fact, and if we’d had all the mature thinking on the cloud, virtual switches, virtual routers, instances of forwarding software, and SD-WAN, that we now have, he’d have seen the connection.  Edge instances of forwarding processes can combine with virtual wires to create all but the largest-scale services.  Interestingly, he was working at Infinera during some of his work on pseudowires.  They should have listened.

Exploiting the Full Scope of IoT Opportunity

IoT has been contending for the title of most-hyped technology of our time, and that cause got a big boost at a recent T3C Summit event.  According to SDxCentral’s summary of a panel at the event, “…it makes sense that in the Internet of Things (IoT) boom, with its expected 20 billion to 50 billion connected devices by 2020, there’s money to be made by telcos.”  The title of the article characterizes this as a “multi-billion-dollar opportunity.”  Not necessarily, or even probably, unless you look way beyond the obvious.

IoT suffers, as most modern technology developments do, from “bracket creep”.  It gets good ink.  Therefore vendors append the IoT tag to anything that remotely resembles it.  Therefore there’s a constant advance in claims and stories that attract reporters and editors.  Therefore it gets good ink.  You get the picture.  So, yes, we may well end up with 20 to 50 billion connected devices by 2020, but my model says that far less than a tenth of 1% of those devices will be in any way visible to the telcos, much less earning them revenue.

The reason I’m harping on this is that we’re seeing another great market opportunity suffer from stupid positioning.  Any telco that thinks it will make its future numbers from IoT is not only doomed to disappointment on that specific opportunity, it’s probably also overlooking the real opportunity in IoT.  The wrong focus is not only wrong, it usually precludes having the right focus, which in this case is edge computing.

Another article, this one from Light Reading’s Carol Wilson, quotes the PCCW VP of Engineering as saying that “Competing in the digital services space doesn’t mean going up against web-scalers, it means doing edge-computing….It all comes back to FOG and edge cloud architecture.”  That’s the real point, and IoT could indeed earn operators billions if they listened.

Operators have one unique advantage in the fog space—they have real estate.  There are about 70,000 edge offices of telcos worldwide, and another 30,000 deeper-sited offices, for a total of roughly a hundred thousand locations.  It’s tough to put a data center in the back of a truck and make all the necessary connections; you need permanent real estate, and operators have the space to host roughly a hundred thousand incremental fog data centers without buying any new buildings.  Amazon, Google, and other OTTs don’t have that advantage, so it would make sense for operators to exploit their real estate assets.

This ties into IoT for two reasons.  First, IoT isn’t about on-the-Internet sensors at all, because the majority of sensors are designed to be used in private applications.  If we put those billions of connected devices directly on the Internet, we’d have billions of hacks and spoofs and spend tens of billions on security making it look like the devices weren’t really there at all.  The fact is that the model of IoT we’ll see dominating is one where the sensors are on a private network that might not even use Internet technology at all (home sensor networks typically don’t).  The sites where they’re located are already connected, so there’s zero revenue associated with connecting those sensors.

Where the revenue comes from is digesting, summarizing, and correlating sensor data.  As I’ve said in other blogs, nobody is going to succeed in IoT if every application has to deal with raw sensor data.  Apart from access, security, and sheer ability to field all the requests, it would be too much work to write something like that and the result would be so sensor-specific it would be brittle.  An army would be needed to keep it up to date.

A better approach would be to presume that there are trusted providers who subscribe to sensor information and do various cuts and splices to create insight.  For example, if there’s a sensor that records ambient temperature in a bunch of places, you could look at patterns of change to detect conditions that range from a sudden cold front to a power failure.  In traffic terms, you could assess traffic patterns at a high level and even predict when a mess of cars in one area was going to translate to a mess in another because of movement along predictable routes.  There are many, many, types of insight that could be drawn, and many applications that would want to take a stab at drawing it.
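
As a concrete (and purely hypothetical) illustration of that kind of insight service, here’s a minimal Python sketch of a process that subscribes to temperature readings and distinguishes an area-wide change from an isolated one.  The sensor names, thresholds, and detection rule are assumptions made for the example, not a description of any real offering.

```python
# Illustrative only: a correlation service that turns raw temperature telemetry into an "insight".
# Thresholds and the detection rule below are assumptions for the sketch.
class TemperatureInsight:
    def __init__(self, drop_threshold=5.0, area_fraction=0.6):
        self.last = {}                        # sensor_id -> last reading (deg C)
        self.drop_threshold = drop_threshold  # how big a drop counts as significant
        self.area_fraction = area_fraction    # how widespread a drop must be to be "area-wide"

    def ingest(self, readings):
        """readings: dict of sensor_id -> current temperature (deg C); returns an insight string."""
        drops = [sid for sid, temp in readings.items()
                 if sid in self.last and self.last[sid] - temp >= self.drop_threshold]
        self.last.update(readings)
        if not readings:
            return "no data"
        if len(drops) / len(readings) >= self.area_fraction:
            return "area-wide drop: possible cold front or shared power failure"
        if drops:
            return "isolated drop at %s: possible local fault" % drops
        return "normal"

svc = TemperatureInsight()
svc.ingest({"s1": 21.0, "s2": 20.5, "s3": 21.2})           # first pass establishes a baseline
print(svc.ingest({"s1": 14.0, "s2": 13.9, "s3": 14.5}))    # all sensors dropped together -> area-wide
```

The application consuming this output never touches raw sensor data; it subscribes to the digested insight, which is exactly the value layer described above.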

Who provides all this good stuff, and where is it run?  That brings us to the second reason this ties into IoT: edge computing is close to the source of telemetry.  Quick access means timely analysis and correlation, which means edge-processed IoT events can lead to more timely insights.  That makes these event-analysis processes valuable in themselves, meaning something others would subscribe to for a fee.  Not only that, edge locations can initiate responses with lower delay, so if the application demands reaction and not just analysis, you could sell the hosting of the reactive process at the edge more easily than somewhere deeper.

Connecting IoT devices is a silly mission.  Sure, operators could offer 5G connectivity (at a cost) to users, but would the users pay when some vendor offered them a free local connection to the same devices using WiFi or ZigBee or some other protocol?  Picture AT&T going to New York City and telling the city it can telemetrize every intersection by adding 5G sensors, while ConEd says it will simply carry the traffic over the power connection.  Everyone with a current Internet connection can simply use it to get access to sensors connected to some local control point.  Not a good business for operators to get into, in short.

Turning sensor data into useful, actionable, intelligence?  That’s a whole different story.  Here we have an opportunity to add value, which is the surest way to add revenue.  The challenge is that it’s not at all clear how regulators would treat this kind of telco mission.  Regulatory policy on higher-level services has traditionally said that telcos have to offer such things through a separate subsidiary.  That could preclude their exploiting regulated assets, which in most cases would include real estate.  How that subsidiary was capitalized might also be an issue, and this combination makes it much harder for operators to exploit their advantages.

It also makes it a lot harder for IoT to happen, at least happen in an optimal way.  It’s hard to pick a group that has better assets to develop the market, and enlightened policy would try to encourage them to do that rather than put barriers in place.  I don’t know what other group of companies could even make the kind of investment needed in edge computing, and I don’t know whether we can really get to IoT without it.  Perhaps this is something regulators in major markets need to think about while planning policy changes.

Can NFV Make the Transition from vCPE to “Foundation Services?”

Suppose we decided that it was time to think outside the virtual CPE box, NFV-wise.  The whole of NFV seems to have fixated on the vCPE story, so much so that it’s fair to ask whether there’s anything else for NFV to address, and if so what exactly would the other options look like.

vCPE has two aspects that make it a subset (perhaps a small one) of NFV overall.  One is that it’s been focused on hosting in a general-purpose box that sits on the customer premises, outside the carrier cloud.  The other is that it’s a single-tenant, single-service model.  The first point means that unless NFV moves beyond vCPE, NFV can’t promote the carrier cloud overall.  The second means that it’s very difficult to extend NFV to anything but business services, which limits bottom-line impact.  If these are the limitations, then we should expect that “extended” NFV has to address both.

In theory, there’s nothing to prevent “vCPE” from being virtualized inside the carrier cloud, and many operators and vendors will hasten to say that even as they focus on premises-device-based implementations.  The practical truth is that unless you have a fairly extensive edge-hosted carrier cloud in place, it would be difficult to find a good spot to put the vCPE VNFs other than on premises.  You don’t want to pull traffic too far from the natural point of connection to add features like firewall and encryption, and any extensive new connection requirements would also increase operations complexity and cost.

There’s also an incremental-cost issue to address.  A service has to be terminated in something, meaning that it has to offer a demarcation interface that users can consume, plus whatever premises features are expected for the service.  An example is consumer or even small-branch broadband; you need to terminate cable or FiOS, for example, and provide a WiFi router, which means the baseline features alone probably account for most of the device cost.  Adding in firewalls and other elements won’t add much, so removing them to the cloud won’t save much.

The “tenancy” question is even more fundamental.  Obviously, something hosted on a customer’s premises isn’t likely to be multi-tenant, and it’s very possible that the focus on vCPE has inadvertently created an NFV fixation on single-tenant VNFs.  That’s bad because the great majority of service provider opportunity is probably based on multi-tenant applications.

If you want to host a website, you don’t spin up an Internet to support it.  In many cases you don’t even spin up a new server, because the hosting plan for most businesses uses shared-server technology.  If you believe in wireless, do you believe that every customer gets their own IMS and EPC?  Is 5G network slicing likely to be done on a slice-per-phone basis?  IoT presumes shared sensors, virtual or real.  Almost everything an OTT offers is multi-tenant, and the operators want to reap the service opportunities that OTTs now get almost exclusively.  Might that demand multi-tenant thinking?  Thus, might it demand multi-tenant NFV?

There are huge differences between a vCPE application and virtual IMS or EPC.  The one that immediately comes to mind is that “deployment” is something that’s done once, not something done every time a contract is renewed.  The fact is that multi-tenant VNFs would probably have to be deployed and managed as cloud components rather than through the traditional NFV MANO processes, for the simple reason that the VNFs would look like cloud components.

This raises an important question for the cloud and networking industries, and one even more important for “carrier cloud” because it unites the two.  The question is whether NFV should be considered a special case of cloud deployment, or whether NFV is something specific to per-user-per-service vCPE-like deployments.  Right now, it’s the latter.  We have to look at whether it should or could become the former.

The first step is to ask whether you could deploy a multi-tenant service element using NFV.  At the philosophical level this would mean treating the network operator as the “customer” and deploying the multi-tenant elements as part of the operator’s own service.  There’s no theoretical reason why the basic NFV processes couldn’t handle that.  If we made this first-stage assumption, then we could also presume that lifecycle management steps would serve to scale it or replace faulted components.  The key is to ensure that we don’t let the operator’s customers have control over any aspect of shared-tenant element behavior.  Again, no big deal; users of a corporate network service wouldn’t have control over that service as a shared-tenant process; the network group would control it.
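
A minimal sketch of that tenancy rule, assuming element names and an authorization check of my own invention, might look like this: the operator is the “customer” of record for shared elements, and end customers can only act on elements they themselves own.

```python
# Hypothetical sketch of the tenancy rule described above.  Element names and the policy
# function are assumptions for illustration, not any NFV specification.

SHARED_ELEMENTS = {
    "ims-core": {"owner": "operator"},          # shared-tenant foundation element
    "vcpe-firewall-acme": {"owner": "acme-corp"},  # per-customer vCPE element
}

def authorize_lifecycle_action(requester: str, element: str, action: str) -> bool:
    """Only the owning tenant may scale, heal, or redeploy an element; a finer-grained
    policy could also key off the specific action requested."""
    owner = SHARED_ELEMENTS[element]["owner"]
    return requester == owner

assert authorize_lifecycle_action("operator", "ims-core", "scale-out") is True
assert authorize_lifecycle_action("acme-corp", "ims-core", "scale-out") is False
assert authorize_lifecycle_action("acme-corp", "vcpe-firewall-acme", "redeploy") is True
```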

One fly in the ointment that I came across early on is that many of these advanced shared-tenant features are themselves pieces of a larger application.  IMS and EPC go together in 4G networks, for example.  If you deploy them independently, which you likely would do since they are separate pieces of the 3GPP mobile infrastructure model, then you’d have to know where one was put so you could integrate it with the other.  In the original CloudNFV plan, these kinds of features were called “foundation services” because they were deployed once and then built into multiple missions.

Foundation services are like applications in the cloud.  They probably have multiple components, and they probably have to be integrated in an access or workflow sense with other applications.  At a minimum, the integration process would have to support a means of referencing foundation services from other services, including other foundation services.  In “normal” NFV, you would expect the service elements to be invisible outside the service; not so here.
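
What “referenceable” might mean in practice is sketched below, under the assumption of a simple registry of my own devising: a foundation service like IMS publishes its access point once it’s deployed, and other services—EPC, a 5G slice, even another foundation service—resolve and bind to that shared instance rather than deploying their own copy.

```python
# Sketch of the "referenceable" property a foundation service would need.  The registry
# API and endpoint names here are assumptions for illustration.

class FoundationRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name: str, access_point: str):
        """A foundation service (e.g. IMS) records where it can be reached once deployed."""
        self._services[name] = access_point

    def resolve(self, name: str) -> str:
        """Another service binds to the shared instance instead of deploying its own copy."""
        return self._services[name]

registry = FoundationRegistry()
registry.publish("ims", "sip.edge-office-17.operator.example")
epc_config = {"ims_endpoint": registry.resolve("ims")}   # integrate EPC with the deployed IMS
print(epc_config)
```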

This relationship between foundation services and NFV may be at the heart of NFV’s future.  Somebody asked me, on my blog yesterday, what the value proposition was for the deployment of cloud elements via NFV.  The only possible answer is improved lifecycle management, meaning management across the spectrum of legacy and software-hosted elements.  NFV doesn’t handle that today, though it should, and so NFV is not clearly useful in foundation service applications.  Despite people in places like AT&T saying that NFV is fundamental to 5G, then, it’s not clear NFV is needed, or even useful, there.

You can’t create the future by declaring it, people.  If we want NFV to take the next step, then it has to do what’s necessary.  We have ample evidence both of the truth of this and of the direction that step has to take.  Is it easier to do nothing?  Sure, but “nothing” is what will result.

Are the “Issues” With ONAP a Symptom of a Broader Problem?

How do you know that software or software architectures are doing the wrong thing?  Answer: they’re doing something that only works in specific cases.  That seems to be a problem with NFV software, including the current front-runner framework, ONAP.  The initial release, we’re told by Light Reading, will support a limited set of vCPE VNFs.  One application (vCPE) and a small number of functions not only don’t make NFV successful, they raise the question of how the whole project is coming together.

Linux is surely the most popular and best-known open-source software product out there.  Suppose that when Linux came out, Linus Torvalds said “I’ve done this operating system that only works for centralized financial applications and includes payroll, accounts receivable, and accounts payable.  I’ll get to the rest of the applications later on.”  Do you think that Linux would have been a success?  The point is that a good general-purpose tool is first and foremost general-purpose.  NFV software that “knows” it’s doing vCPE or that has support for only some specific VNFs isn’t NFV software at all.

NFV software is really about service lifecycle management, meaning the process of creating a toolkit that can compose, deploy, and sustain a service that consists of multiple interdependent pieces, whether they’re legacy technology elements or software-hosted virtual functions.  If every piece of a service has to be interchangeable, meaning it must support multiple implementations, then you either have to be able to make every alternative for a given piece look the same, or you have to author the toolkit to accommodate every current and future variation.  The latter is impossible, obviously, so the former is the only path forward.

To make different implementations of something look the same, you either have to demand that they be the same looking from the outside in, or you have to model them to abstract away their differences.  That’s what “intent modeling” is all about.  Two firewall implementations should have a common property set that’s determined by their “intent” or mission—which in this case is being a firewall.  An intent model looks like “firewall” to the rest of the service management toolkit, but inside the model there’s code that harmonizes the interfaces of each implementation to that abstract intent-modeled reference.
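
Here’s a minimal sketch of that idea in Python, assuming two hypothetical firewall implementations with different native interfaces.  The toolkit sees only the abstract “firewall” intent; the adapter code inside each model harmonizes a particular implementation to it.

```python
# Minimal intent-model sketch.  Both "vendor" implementations are invented for illustration;
# the point is the common abstract interface, not the specific calls.
from abc import ABC, abstractmethod

class FirewallIntent(ABC):
    """What the service management toolkit sees: the 'firewall' intent, nothing vendor-specific."""
    @abstractmethod
    def allow(self, cidr: str, port: int): ...
    @abstractmethod
    def block_all(self): ...

class VendorAFirewall(FirewallIntent):
    def allow(self, cidr, port):
        print(f"vendorA: add-rule permit {cidr} dport={port}")   # maps the intent to a CLI-style call
    def block_all(self):
        print("vendorA: set default-policy deny")

class VendorBFirewall(FirewallIntent):
    def allow(self, cidr, port):
        print(f"vendorB: POST /rules {{'src': '{cidr}', 'port': {port}, 'action': 'accept'}}")
    def block_all(self):
        print("vendorB: PUT /policy {'default': 'drop'}")

def compose_service(firewall: FirewallIntent):
    # The toolkit works only against the intent; either implementation can be swapped in.
    firewall.block_all()
    firewall.allow("10.0.0.0/8", 443)

compose_service(VendorAFirewall())
compose_service(VendorBFirewall())
```

Swapping VendorAFirewall for VendorBFirewall changes nothing in the composing toolkit, which is exactly the property a generalized lifecycle manager needs.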

If there’s anything that seems universally accepted in this confusing age of SDN and NFV, it’s the notion that intent models are critical if you want generalized tools to operate on non-standardized implementations of service components.  How did that get missed here?  Does this mean that there are some fundamental issues to be addressed in ONAP, and perhaps in NFV software overall?  Can they be addressed at this point?

From the very first, NFV was a software project being run by a traditional standards process.  I tried to point out the issues in early 2013, and the original CloudNFV project addressed them by defining what came to be known as “intent modeling”.  EnterpriseWeb, the orchestration partner in CloudNFV, took that approach forward into the TMF Catalyst process, and has won awards for its handling of “onboarding” and “metamodels”, the implementation guts of intent modeling.  In short, there’s no lack of history or support for the right approach here.  Why, then, are we apparently on the wrong track?

I think the heart of the problem is the combination of the complexity of the problem and the simplicity of ad-sponsored media coverage.  Nobody wants to (or probably could) write a story on the real NFV issues, because a catchy title gets all the ad servings you’re ever going to get on a piece.  Vendors know that and so they feed the PR machine, and their goal is to get publicity for their own approach—likely to be minimalistic.  And you need a well-funded vendor machine to attend standards meetings or run media events or sponsor analyst reports.

How about the heart of the solution?  We have intent-model implementations today, of course, and so it would be possible to assemble a good NFV solution from what’s out there.  The key piece seems to be a tool to facilitate the automated creation of the intent models, to support the onboarding of VNFs and the setting of “type-standards” for the interfaces.  EnterpriseWeb has shown that capability, and it wouldn’t be rocket science for other vendors to work out their own approaches.

It would help if we accepted the fact that “type-standards” are essential.  All VNFs have some common management properties, and all have to support lifecycle steps like horizontal scaling and redeployment.  All VNFs that have the same mission (like “firewall”) should also have common properties at a deeper level.  Remember that we defined SNMP MIBs for classes of devices; why should it be any harder for classes of VNF?  ETSI NFV ISG: If you’re listening and looking for useful work, here is the most useful thing you could be doing!
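
To illustrate what a “type-standard” might look like (the names here are mine, not from any standard), the sketch below layers a mission-specific firewall contract on top of a base lifecycle contract that every VNF would expose—much as a device-class MIB layers on a common MIB.  Onboarding a vendor VNF would then amount to implementing the relevant contract.

```python
# Hypothetical "type-standard" hierarchy: a common lifecycle contract plus a per-class contract.
from abc import ABC, abstractmethod

class VNFTypeStandard(ABC):
    """Lifecycle properties every VNF exposes, whatever its mission."""
    @abstractmethod
    def scale_out(self, instances: int): ...
    @abstractmethod
    def redeploy(self): ...
    @abstractmethod
    def health(self) -> str: ...

class FirewallTypeStandard(VNFTypeStandard, ABC):
    """Mission-specific properties shared by every firewall VNF, regardless of vendor."""
    @abstractmethod
    def rule_count(self) -> int: ...

# A vendor's firewall VNF would be onboarded by implementing FirewallTypeStandard,
# so generic lifecycle tooling and firewall-specific tooling both know what to expect.
```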

The media could help here too.  Light Reading has done a number of NFV articles, including the one I opened with.  It would be helpful if they’d cover the real issues here, including the fact that no international standards group or body that shares the biases of the NFV ISG has any better chance of getting things right.  This is a software problem that software architectures, and architects, have to solve for us.

It may also help that we could get a new body working on the problem.  ETSI is setting up a zero-touch automation group, which is interesting given that the NFV ISG should have addressed that in its MANO work, that the TMF has had a ZOOM (Zero-touch Orchestration, Operation, and Management) project since 2014, and that automation of the service lifecycle is at least implicit in almost all the open-source MANO work out there, including ONAP.  A majority of the operators supporting the new ETSI group tell me they’d like to see ONAP absorbed into it somehow.

These things may “help”, but optimal NFV demands optimal software, and that’s hard to achieve if you’ve started off with a design that ignores a simple truth: no efficient service lifecycle management is possible if all the things you’re managing look different and require specific and specialized accommodation.  This isn’t how software is supposed to work, particularly in the cloud.  We can do a lot by adding object-intent-model abstractions to the VNFs and integrating them that way, but it’s not as good an approach as starting with the right software architecture.  We should be building on intent modeling, not trying to retrofit it.

That, of course, is the heart of the problem and an issue we’re not addressing.  You need software architecture to do software, and that architecture sets the tone for the capabilities in terms of functionality, composability, and lifecycle management.  It’s hard to say whether we could effectively re-architect the NFV model the right way at this point without seeming to invalidate everything done so far, but we may have to face that to keep NFV on a relevant track.