A Structure for Abstraction and Virtualization in the Telco Cloud

It is becoming clear that the future of networking, the cloud, and IT in general lies in abstraction.  We have an increasing number of choices in network technology, equipment vendors, servers, operating systems (and distros), middleware…you get the picture.  We have open-source software and open hardware initiatives, and of course open standards.  With this multiplicity of options comes more buyer choice and power, but multiplicity has its downsides.  It’s hard to prevent vendor desires for differentiation from diluting choice, and differences in implementation make it difficult to create efficient and agile operations.

Abstraction is the accepted way of addressing this.  “Virtualization” is a term often used to describe the process of creating an abstraction that can be mapped to a number of different options.  A virtual machine is mapped to a real server, a virtual network to real infrastructure.  Abstraction plus mapping equals virtualization, in other words.

The challenge we have isn’t acceptance of the notion of abstraction/virtualization, but the growing number of things that need to be virtualized and the even-faster-growing number of ways of looking at it.  Complex virtualization really means a modeling system to express the relationships of parts to the whole.  In my ExperiaSphere work on service lifecycle automation, I proposed that we model a service in two layers, “service” and “resource”, and I think we are starting to see some sense of structure in virtualization overall.

The best way to look at anything these days is through cloud-colored glasses, and the cloud offers us some useful insights into that broader virtualization vision.  “Infrastructure” in the cloud has two basic features, the ability to host application components or service features, and the ability to connect elements of applications and services to create a delivered experience.  We could visualize these two things as being the “services” offered by, or the “features” of, infrastructure.

If you decompose infrastructure, you end up with systems of devices, and here we see variations in how the abstraction/virtualization stuff might work.  On the network side, the standard structure is that a network is made up of a cooperative community of devices/elements, and that networks are committed to create connection services.  Thus, devices>networks>connection-services in progression.  On the hosting or computing side, you really have a combination of network devices and servers that collectively frame a data center hardware system, which in turn hosts a set of platform software tools that combine to create the hosting environment.

There are already a couple of complicating factors entering the picture.  First, “devices” at the network and hosting levels can be virtualized themselves.  A “router” might be a software feature hosted in a virtual machine assigned to a pool of servers.  Second, the virtual machine hosting (or container hosting) might be based on a pool of resources that don’t align with data center boundaries, so the virtual division of resources would differ from the physical division.  Container pods or clusters or swarms are examples; they might cross data center boundaries.

What we end up with is a slightly more complicated set of layers, which I offer HERE as a graphic to make things easier to follow.  I’ve also noted the parts of the structure covered by MANO and ONAP, and by the Apache Mesos and DC/OS combination that I think bears consideration by the ONAP people.

At the bottom of the structure, we have a device layer that hosts real, nuclear, hardware elements.  On top of this is a virtual-infrastructure layer, and this layer is responsible for mapping between the real device elements available and any necessary or useful abstraction thereof.  One such abstraction might be geographical/facility-oriented, meaning data centers or interconnect farms.  Another might be resource-pool oriented, meaning that the layer creates an abstract pool from which higher layers can draw resources.

One easy illustration of this layer and what it abstracts is the decision by an operator or cloud provider to add a data center.  That data center has a collection of real devices in it, and the process of adding the data center would involve some “real” and “virtual” changes.  On the real side, we’d have to connect that data center network into the WAN that connects the other data centers.  On the virtual side, we would need to make the resources of that data center available to the abstractions that are hosted by the virtual-infrastructure layer, such as cloud resource pools.  The “mapping processes” for this layer might contain policies that would automatically augment some of the virtual-infrastructure abstractions (the resource pools, for example) with resources from the new data center.

Above the virtual-infrastructure layer is the layer that commits virtual resources, which I’ll call the “virtual resource” layer.  This layer would add whatever platform software (OS and middleware, hypervisor, etc.) and parameterization are needed to transform a resource pool into a “virtual element”, a virtual component of an application or service, a virtual device, or something else that has explicit functionality.  Virtual elements are the building-blocks for services, which are made up of feature components hosted in virtual elements or of the coerced behavior of devices or device systems.
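
To make the layering a bit more concrete, here’s a minimal Python sketch of the three layers as I’ve described them.  It’s purely illustrative; the class names, fields, and methods are my own inventions, not part of any standard or product.

```python
# Illustrative only: a toy rendering of the three-layer structure described
# above.  DeviceLayer, VirtualInfrastructureLayer, and VirtualResourceLayer
# are invented names for this sketch, not standard or product APIs.

class DeviceLayer:
    """Real, atomic hardware elements: servers, switches, routers."""
    def __init__(self):
        self.devices = []                     # e.g. {"id": "srv-001", "type": "server"}

    def onboard_data_center(self, dc_name, devices):
        # Onboarding a whole data center adds all of its devices at once.
        for d in devices:
            d["dc"] = dc_name
            self.devices.append(d)


class VirtualInfrastructureLayer:
    """Maps real devices into abstractions such as resource pools."""
    def __init__(self, device_layer):
        self.device_layer = device_layer
        self.pools = {}                       # pool name -> list of device ids

    def build_pool(self, pool_name, selector):
        # A "mapping policy": any device matching the selector joins the pool,
        # so adding a data center can automatically augment the pool.
        self.pools[pool_name] = [
            d["id"] for d in self.device_layer.devices if selector(d)
        ]


class VirtualResourceLayer:
    """Commits pooled capacity plus platform software as virtual elements."""
    def __init__(self, infra):
        self.infra = infra

    def allocate(self, pool_name, platform):
        # Pair a pooled device with platform software (OS, hypervisor,
        # middleware) to create a functional virtual element.
        device_id = self.infra.pools[pool_name].pop()
        return {"host": device_id, "platform": platform}


# Usage: onboarding a new data center, then drawing a virtual element from it.
devices = DeviceLayer()
devices.onboard_data_center("nj-01", [{"id": "srv-001", "type": "server"},
                                      {"id": "rtr-001", "type": "router"}])
infra = VirtualInfrastructureLayer(devices)
infra.build_pool("compute", lambda d: d["type"] == "server")
resources = VirtualResourceLayer(infra)
print(resources.allocate("compute", platform="linux+kvm"))
```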

If we accept this model as at least one possible layered framework for abstraction, we can also map some current projects to the layers.  ONAP and NFV MANO operate at the very top, converting virtual resources into functional components, represented in MANO by Virtual Infrastructure Managers and Virtual Network Functions.  ONAP operates higher as well, in service lifecycle management processes.

Below the ONAP/MANO activities are the layers that my ExperiaSphere stuff calls the “resource-layer models”.  In my view, the best current framework for this set of features is found in the DC/OS project, which is based on Apache Mesos.  There are things that I think are needed at this level that Mesos and DC/OS don’t provide, but I think they could be added on without too much hassle.

Let’s go back now to DC/OS and Mesos.  Mesos is an Apache cluster management tool, and DC/OS adds in features that abstract a resource cloud to look like a single computer, which is certainly a big step toward my bottom-layer requirements.  It’s also something that I think the telcos should have been looking at (so is Marathon, a mass-scale orchestration tool).  But even if you don’t think that the combination is a critical piece of virtualization and telco cloud, it demonstrates that the cloud community has been thinking of this problem for a long time.

Where I think DC/OS and Mesos could use some help is in defining non-server elements, resource commissioning, and data center assignment and onboarding.  The lower layer of my model, the Device Layer, is a physical pool of stuff.  It would be essential to be able to represent network resources in this layer, and it would be highly desirable to support the reality that you onboard entire data centers or racks and not just individual servers or boxes.  Finally, the management processes to sustain resources should be defined here, and from here they should be coupled upward to be associated with higher-layer elements.

I think this is a topic that needs to be explored, by the ONAP people, the NFV ISG, and perhaps the Open Compute Project, as well as Apache.  We need to have a vertically integrated model of virtualization, not a bunch of disconnected approaches, or we’ll not be able to create a uniform cloud hosting environment that’s elastic and composable at all levels.  And we shouldn’t settle for less.

The Cloud and the Battle for “Everyware”

Even in an industry, a world, committed to hype, reality always wins in the end.  Cloud computing is an example of this tenet, and what’s interesting is less the validity of the central point than the way that cloud reality is reshaping the industry.  Most interesting of all is the relationship between the cloud and open-source.

When public cloud computing first came along, I did my usual industry surveys and modeling, and what emerged from the process was a couple of key points.  First, no more than 24% of current applications could justifiably be ported to the cloud.  Second, over 80% of the actual opportunity for public cloud services would come from developing cloud applications that had never run elsewhere.  Finally, public cloud would never displace enterprise data centers to any significant degree.

What we are seeing in cloud computing today is a reflection of these points.  Cloud-specific applications dominate, and hybrid cloud dominates, even now.  Increased competition among cloud providers, and the constant need for differentiation, has generated a whole cloud industry of “web services” that present hosted feature add-ons to basic cloud services.  This is one of the reasons why we’re seeing cloud-specific applications.  Now the same forces are acting in the hybrid cloud area.

Hybrid clouds are a symbiotic relationship between enterprise data centers and public cloud services.  Given that, it’s obvious that somebody with a foot in both spaces would have an advantage in defining the critical connecting features, and that has benefitted Microsoft significantly.  In my surveys, Microsoft’s cloud has outperformed the competition, even though non-enterprise applications have pushed Amazon into the overall lead in public cloud services.  Amazon and Google know this, and both companies have been struggling to create a credible outpost for their own cloud services within the data center.

The obvious way to create the hybrid link to your cloud service is to offer a premises-hosted element that appears to be a part of your cloud.  Amazon has done this with Greengrass.  Google is working with Cisco to develop an open hybrid strategy, and is said to be especially under pressure to make something happen, hybrid-wise, because of Google’s third-place position in the public cloud market.  Amazon is now working its own Linux distro, Amazon Linux 2, into the game, and some say that Google is hoping Kubernetes, the popular container orchestrator that Google developed initially, will provide it with hybrid creds.  Unfortunately for Google, everyone supports Kubernetes, including Amazon and Microsoft.

While the competitive dynamic in the public cloud space, and hybrid cloud impact on that dynamic, get a lot of buzz, the biggest and longest-lasting impact of the hybrid cloud may be on “platform software”, meaning the operating system and middleware elements used by applications.  Amazon and Salesforce have made no secret of their interest in moving off Oracle DBMS software to an open platform, something that would lower their costs.  If public cloud platforms gravitate totally to open source, and if public cloud operators continue to add web services to build cloud-specific functionality that has to hybridize with the data center, isn’t it then very likely that the public cloud platform software will become the de facto platform for the hybrid cloud, and thus for IT overall?

What we’re talking about here is “cloudware”, a new kind of platform software that’s designed to be distributable across all hosting resources, offering a consistent development framework that virtualizes everything an application uses.  Hybrid cloud is a powerful cloudware driver, but working against this happy universality is the fact that cloud providers don’t want to have complete portability of applications.  If they don’t have their own unique features, then they can only differentiate on price, which creates the race to the bottom nobody particularly wants to be a part of.

It’s already clear that cloudware is going to be almost exclusively open-sourced.  Look at Linux, at Docker, at Kubernetes, at OpenStack, and you see that the advances in the cloud are already tied back to open source.  A big part of the reason is that it’s very difficult for cloud providers to invent their own stuff from the ground up.  Amazon’s Linux 2 and the almost-universal acceptance of Kubernetes for container cloud demonstrate that.  Open-source platform software is already the rule, and cloudware is likely to make it almost universal.

The biggest question of all is whether “cloudware” will end up becoming “everyware”.  Open-source tools are available from many sources, including giants like Red Hat.  Is it possible that cloudware would challenge these incumbents, and if so what could tip the balance?  It’s interesting and complicated.

At the level of broad architecture, what’s needed is fairly clear.  To start with, you need something that can virtualize hosting, modeled perhaps on Apache Mesos and DC/OS.  That defines a kind of resource layer, harmonizing the underlying infrastructure.  On top of that you’d need a platform-as-a-service framework that included operating system (Linux, no doubt) and middleware.  It’s in the middleware that the issue of cloudware/everyware hits.

Everyone sees mobility, or functional computing, or database, or whatever, in their own unique and competitively biased way.  To create a true everyware, you need to harmonize that middleware, which means that you need an abstraction layer for it just as we have for hardware or hosting.  For example, event-driven functional computing could be virtualized, and then each implementation mapped to the virtual model.
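
As a simple illustration of that middleware-abstraction idea, here’s a hedged Python sketch: one virtual “functional computing” interface, with provider-specific adapters mapped underneath it.  The adapter classes and their behavior are hypothetical; they don’t represent any real provider API.

```python
# A minimal sketch of middleware abstraction for functional computing.
# FunctionPlatform is the virtual model an application would write to;
# the adapters are invented stand-ins for provider or on-prem mappings.

from abc import ABC, abstractmethod

class FunctionPlatform(ABC):
    """The virtual model: what a true 'everyware' would expose."""
    @abstractmethod
    def deploy(self, name, handler):
        ...

class ProviderAAdapter(FunctionPlatform):
    def deploy(self, name, handler):
        # A real adapter would call provider A's event/function service here.
        return f"provider-A:{name}"

class OnPremAdapter(FunctionPlatform):
    def deploy(self, name, handler):
        # ...and this one would target an on-premises event framework.
        return f"on-prem:{name}"

def deploy_everywhere(platforms, name, handler):
    # The application is written once against the abstraction; the mapping
    # layer decides where each copy actually lands.
    return [p.deploy(name, handler) for p in platforms]

print(deploy_everywhere([ProviderAAdapter(), OnPremAdapter()],
                        "bill-audit", handler=lambda event: event))
```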

If we are evolving toward a fairly universal hybrid platform, then either that platform has to evolve from current enterprise open-source stuff like Red Hat, or emerge from the cloud.  Proponents from either camp have an opportunity to frame the universal “everyware” of the future, but they also face specific challenges to their moves to do that.

For cloud providers, the problem is lack of unity.  The cloud is not the only place applications run; it’s not even the dominant place.  Not only that, differentiation- and profit-driven moves to enhance the web services available to cloud applications create not one vision of cloudware, but a vision for every provider.  Enterprises who think in terms of hybrid cloud confront the issue of premises data center hosting; those who think in terms of multicloud confront the diversity of implementations for basic cloud service features.

The premises players have their own special challenge, which is that the cloud changes everything, at least with respect to application architectures and developer strategies.  It’s hard to see how you could build an event-driven app in the data center unless you wanted to host stuff all over the world where your events originated.  That means that the premises players have to cede the critical future development trends to the cloud providers.

The battle to control “everyware” may be the defining issue in 2018, because it will not only establish market leadership (and maybe even survival) in both the cloud and platform software spaces, but will influence the pace of cloud adoption and application modernization.  This is the cloud’s defining issue for the coming year, and it will also play a major role in defining how we evolve to carrier cloud and hosted services.  Keep an eye on it; I know I’ll be watching!

How NFV Can Save Itself in 2018

Network Functions Virtualization (NFV) has generated a lot of buzz, but it became pretty clear last year that the bloom was off the rose in terms of coverage and operator commitment.  Does this mean that NFV was a bad idea?  Is all the work that was done irrelevant, or about to become so?  Are vendor and operator hopes for NFV about to be dashed for good?

NFV wasn’t a bad idea, and still isn’t, but the fulfillment of its potential is in doubt.  NFV is at a crossroads this year, because the industry is moving in a broader direction and the work of the ISG is getting more and more detailed and narrow.  The downward direction collides more and more with established cloud elements, so it’s redundant.  The downward direction has also opened a gap between the business case and the top-level NFV definitions, and stuff like ONAP is now filling that gap and controlling deployment.

I’ve noted in many past blogs that the goal of efficient, agile, service lifecycle management can be achieved without transforming infrastructure at all, whether with SDN or NFV.  If we get far enough in service automation, we’ll achieve infrastructure independence, and that lets us stay the course with switches and routers (yes, probably increasingly white-box but still essentially legacy technology).  To succeed in this kind of world, NFV has to find its place, narrower than it could have been but not as narrow as it will end up being if nothing is done.

The first step for NFV is to hitch its wagon to the ONAP star.  The biggest mistake the ETSI NFV ISG made was limiting its scope to what was little more than how to deploy a cloud component that happened to be a piece of service functionality.  A new technology for network-building can never be justified by making it equivalent to the old ones.  It has to be better, and in fact a lot better.  The fact is that service lifecycle automation should have been the goal all along, but NFV’s scope couldn’t address it.  ONAP has a much broader scope, and while (as its own key technologists say) it’s a platform and not a product, the platform has the potential to cover all the essential pieces of service lifecycle automation.

NFV would fit into ONAP as a “controller” element, which means that NFV’s Management and Orchestration (MANO) and VNF Manager (VNFM) functions would be active on virtual-function hosting environments.  The rest of the service could be expected to be handled by some other controller, such as one handling SDN or even something interfacing with legacy NMS products.  Thus, ONAP picks up a big part of what NFV doesn’t handle with respect to lifecycle automation.  Even though it doesn’t do it all, ONAP at least relieves the NFV ISG of the requirement to work on a broader area.
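
To illustrate the controller idea, here’s a small Python sketch of an orchestrator dispatching pieces of a service to different controller domains.  The domain names and the dispatch interface are invented for the example; they aren’t ONAP’s actual APIs.

```python
# Illustrative only: a service orchestrator routing each service element to
# the controller domain that knows how to realize it.  The domain names and
# functions are invented; real ONAP controllers have their own interfaces.

CONTROLLERS = {
    "vnf":    lambda e: f"MANO/VNFM deploys {e['name']} on hosted infrastructure",
    "sdn":    lambda e: f"SDN controller provisions {e['name']}",
    "legacy": lambda e: f"NMS adapter configures {e['name']}",
}

def deploy_service(elements):
    # Each element declares its domain; the orchestrator dispatches it and
    # collects the results into an overall service lifecycle record.
    return [CONTROLLERS[e["domain"]](e) for e in elements]

service = [
    {"name": "vFirewall",   "domain": "vnf"},
    {"name": "site-vpn",    "domain": "sdn"},
    {"name": "edge-router", "domain": "legacy"},
]
for step in deploy_service(service):
    print(step)
```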

The only objections to this step may come from vendors who want to push their own approaches, or from some operators who have alternative open-platform aspirations.  My advice to both groups is to get over it!  There can be only one big thrust forward at this point, and it’s ONAP or nothing.

The second step for NFV is probably going to get a lot of push-back from the NFV ISG.  That step is to forget a new orchestration and management architecture and focus on adapting cloud technology to the NFV mission.  A “virtual network function” is a cloud component, period.  To the greatest extent possible, deploying and sustaining them should be managed as any other cloud component would be.  To get to that point, we have to divide up the process of “deployment” into two elements, add a third for “sustaining”, and then fit NFV to each.

The first element is the actual hosting piece, which today is dominated by OpenStack for VMs or Docker for containers.  I’ve not seen convincing evidence that the same two elements wouldn’t work for basic NFV deployment.

The second element is orchestration, which in the cloud is typically addressed through DevOps products (Chef, Puppet, Heat, Ansible) and with containers through Kubernetes or Marathon.  Orchestration is about how to deploy systems of components, and so more work may be needed here to accommodate the policy-based automation of deployment of VNFs based on factors (like regulations) that don’t particularly impact the cloud at this point.  These factors should be input into cloud orchestration development, because many of them are likely to eventually matter to applications as much as to services.
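
To show the kind of policy-based placement I mean, here’s a small hedged Python sketch: ordinary capacity-based placement with a regulatory constraint layered on top.  The hosts, fields, and policy are invented for illustration.

```python
# A toy example of policy-constrained VNF placement: capacity selects the
# candidates, but a regulatory policy (here, an allowed-countries list) can
# narrow them further.  All names and fields are invented for illustration.

hosts = [
    {"id": "dc-us-1", "country": "US", "free_cpu": 16},
    {"id": "dc-de-1", "country": "DE", "free_cpu": 32},
]

def place(vnf, hosts):
    candidates = [h for h in hosts if h["free_cpu"] >= vnf["cpu"]]
    # Policy filter: regulations may pin a function to a jurisdiction.
    if "allowed_countries" in vnf:
        candidates = [h for h in candidates
                      if h["country"] in vnf["allowed_countries"]]
    if not candidates:
        raise RuntimeError("no host satisfies both capacity and policy")
    # Simple heuristic: the host with the most free capacity wins.
    return max(candidates, key=lambda h: h["free_cpu"])["id"]

print(place({"name": "lawful-intercept", "cpu": 8,
             "allowed_countries": ["DE"]}, hosts))   # -> dc-de-1
```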

The final element is the management (VNFM) piece.  Cloud application management isn’t as organized a space as DevOps or cloud stacks, and while we have this modern notion of intent-modeled services, we don’t really have a specific paradigm for “intent model management”.  The NFV community could make a contribution here, but I think the work is more appropriately part of the scope of ONAP.  Thus, the NFV people should be promoting that vision within ONAP.

The next element on my to-do-for-NFV list is to think outside the virtual CPE.  NFV quickly got obsessed with the vCPE application, service chaining, and other things related to that concept.  This has, in my view, created a huge disconnect between NFV work and the things NFV will, in the long term, have to support to be successful.

The biggest problem with vCPE is that it doesn’t present even a credible benefit beyond business services.  You always need a box at the point of service termination, particularly for consumer broadband where WiFi hubs combine with broadband demarcations in virtually every case.  Thus, it’s difficult to say what you actually save through virtualization.  In most current vCPE business applications, you end up with a premises box that hosts functions, not cloud hosting.  That’s even more specialized as a business case, and it doesn’t drive the carrier cloud deployment that’s critical for the rest of NFV.

Service chaining is another boondoggle.  If you have five functions to run, there is actually little benefit in having the five separately hosted and linked in a chain.  You now are dependent on five different hosting points and all the connections between them, or you get a service interruption.  Why not create a single image containing all five features?  If any of the five break, you lose the connection anyway.  Operations and hosting costs are lower for the five-combined strategy than the service-chain strategy.  Give it up, people!
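
A quick back-of-the-envelope calculation shows why.  Assume, purely for illustration, that each hosted function is 99.9% available and ignore the connections between them (which only widen the gap):

```python
# Availability of five chained functions versus one combined image,
# assuming an illustrative 99.9% availability per hosted element.

per_function = 0.999

chained  = per_function ** 5   # five separately hosted functions in series
combined = per_function        # one image containing all five features

print(f"five-element chain: {chained:.5f}")   # ~0.99501
print(f"single image:       {combined:.5f}")  # 0.99900
```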

The beyond-vCPE issue is that of persistence and tenancy.  Many, perhaps even most, credible NFV applications are really multi-tenant elements that are installed once and sustained for a macro period.  Even most single-tenant NFV services are static for the life of the contract, and so in all cases they are really more like cloud applications than like dynamic service chains.  We need to have an exploration of how static and multi-tenant services are deployed and managed, because the focus has been elsewhere.

We have actually seen some successful examples of multi-tenant service elements in NFV already; Metaswitch’s implementation of IMS comes to mind.  The thing that sets these apart from “typical” NFV is that you have a piece of service, an interface, that has to be visible in multiple services at the same time.  There has to be some protection against contamination or malware for such services, but there also has to be coordination in managing the shared elements, lest one service end up working against the interests of others.

Nothing on this list would be impossible to do, many items wouldn’t even be difficult, and all are IMHO totally essential.  It’s not that a failure to address these points would cause NFV to fail as a concept, but that it could make NFV specifications irrelevant.  That would be a shame, because a lot of good thinking and work has gone into the initiative to date.  The key now is to direct both past work and future efforts in a direction that moves the ball for the industry as a whole, not for NFV as an atomic activity.  That’s going to be a bitter pill for some, but it’s essential.

The Driving Technologies for Network Operators in 2018

If you’re a tech analyst, you just have to do a blog on what to expect in the coming year, no matter how overdone the topic might be.  OK, here’s mine.  What I want to do is look at the most significant trends and issues, the ones that will shape the market for years to come and also establish vendor primacy in key product areas.  I’ll also make note of a couple things that I don’t expect to happen.  Obviously, this is a prediction, but it’s based on modeling and information from the buyers and sellers.

The top issue on my list is SD-WAN.  It’s important because it’s the first broad initiative that separates “services” from “infrastructure”, and that’s critical for transformation of the operator business model.  While the term doesn’t have quite that vague but positive meaning that we’ve come to know and love from other tech developments, there are significant differences in the meaning, application, and implementation of the concept.  In 2018, the pressures of a developing market are likely to start narrowing the field in all respects.

SD-WAN is an “overlay” technology whether or not the specific implementation uses traditional overlay tunnel technology.  You build an SD-WAN over top of some other connection technologies, most often VLANs, VPNs, and the Internet, so it’s not strictly a “transformation” technology with respect to infrastructure.  What an SD-WAN network offers is an independent service layer, a totally controllable address space on top of any supported combination of connections, in any geography.  Because you can support most SD-WAN technologies with a software agent, you can add cloud hosts to the SD-WAN without trying to convince cloud providers to build your boxes into their infrastructure.

The concept of overlay tunnels has been around for a long time, of course, so it’s not so much technology that’s going to make a difference in 2018, it’s application.  Business services are an easier target for competing access and service providers, because you can sell to businesses more easily.  Try selling home broadband door to door and you’ll see what I mean.  Managed Service Providers have already gotten the message, but in the last quarter of 2017 it’s become clear that the big news is going to be competitive access providers, including cable companies.  SD-WAN, for them, can generate both an in-area business service model without having to redeploy infrastructure, and a global service footprint exploiting someone else’s access.  This is an irresistible combination.

SD-WAN isn’t just for business services, either.  You can use overlay technology for any connectivity mission, for video delivery, for a form of slicing of infrastructure, and as the basis for cloud multi-tenancy.  At least a couple SD-WAN vendors are already seeing that broader set of opportunities, and I think that’s the tip of the iceberg.

One reason is competitive pressure.  SD-WAN is a sure bet for cable companies or any operator who has national/international service aspirations and a limited access network geography.  We can also already see signs that SD-WAN will be a greater part of telco service plans.  For the telcos, it also offers the opportunity to simplify infrastructure, lower business service costs, and exploit consumer-level broadband access for smaller branch locations, particularly where there aren’t a lot of business customers and justifying carrier Ethernet is harder.  By creating an architectural separation between services and infrastructure, SD-WAN makes both more agile, and facilitates a lot of the other market-shaping technologies for 2018.  If even one significant group of operators gets the point, everyone else will follow.

Despite the critical value of the technology, winning in the SD-WAN space in 2018 may not be as easy as just tossing a product out there and waiting for someone to notice.  Operators have a different agenda for SD-WAN.  They might want to integrate it with NFV and virtual CPE, for example.  They certainly want to have as much management automation as possible.  They’ll need to be able to link it to current business services, perhaps MPLS, perhaps VLAN, perhaps both.  They will probably want to look at having “interior” network elements that work with edge elements, because that offers them a differentiator.  They may also want to avoid products that have focused on selling direct to enterprises, since these would obviously not offer operators much opportunity.

The next market-shaper in 2018 is zero-touch automation of service and infrastructure processes.  We have been dabbling around the edges of this since 2012, but it’s become clear (again, mostly in the last quarter) that things are finally getting serious.  The TMF has worked on the issue for over a decade, and they have good engagement with the CIOs in operators, but they apparently haven’t been able to move the ball as much as operators want.  If you read the white paper that was issued by the ETSI Zero-touch network and Service Management ISG (ZSM ISG), you’ll see that it overlaps a lot of the TMF ZOOM stuff, and it creates a kind of functional overlay on the NFV ISG.

Technically, zero-touch automation is a collision of the need to support diverse goals and the need to do it with a single technology, a single architecture.  We have operations people who focus on the infrastructure, OSS/BSS people who focus on services and administration, and CTO people who do the standards and propose future technology options.  We somehow have to blend the personalities, and the areas they champion, into a single model.  Since we’ve been gradually developing bottom-up stuff in all these areas for years, you can understand how that blending might pose a challenge.

In fact, the biggest challenge the ZSM ISG will have to surmount is standards.  If this is another standards process, it will create media buzz, attract support, spend forever getting tiny details straight, miss the big picture, and eventually lose the audience.  On the other hand, if this body starts looking like a software engineering case study that loses touch with the problem set, it will end up disconnected from the goals that have driven operators to create the group in the first place.  It’s a delicate balance, one that no standards body in two decades has been able to strike.  I can’t promise it will be struck by the ZSM ISG, but by the end of the year we’ll know whether this initiative has failed.  If it does fail, then I think we’ve seen the end of relevant standards for software-centric services.

This is another challenging space for vendors.  Operators have a growing preference for open-source tools in the service lifecycle automation space, which limits commercial opportunity.  They also want full-spectrum solutions rather than components, so it might be wise for any player in the space to look at how they might integrate with ONAP/ECOMP.  That could avoid having to develop a flood of add-on tools and elements, maintain compatibility with vendor offerings, support SDN and NFV…you get the picture.

And speaking of open-source, guess what our next market-shaper is?  Operators have been divided for some time on just how they advance their own cause.  Standards bodies end up dominated by vendors because there are more of them, and because vendors build the products that networks have to be built from.  Operators are generally unable, for anti-trust reasons, to form operator-only bodies or even bodies where operators dominate.  There’s been operator interest in open-source software for service management for at least ten years that I’m aware of (I was a member of a TMF open-source group that was launched to address operator interest in the topic back in 2008).  While open-source is a market-shaper, the real insight behind this is AT&T’s “ah-ha!” moment.

AT&T, I believe, recognized that even open-source wasn’t going to do the job, because vendors would dominate open-source projects as easily as standards activities.  Their insight, which we can see in how their ECOMP service management software developed and how their “white-box OS” is currently developing, was to do a big software project internally, then release it to open-source when it’s largely done.  Vendors are then faced with either spending years trying to pervert it, or jumping on board and reaping some near-term benefits.  You can guess what’s going to happen in 2018.

This isn’t going to be a play for the small vendors, unless you want to build into an architecture like ONAP/ECOMP.  The buy-in to participate in the essential industry forums and attend all the trade shows and events is considerable in itself, and then you have to be able to sustain the momentum of your activity.  Historically, most open-source has been driven by vendors who didn’t want to try to sustain critical mass in proprietary software, but recently there has been growing operator interest.  Operators want to build something internally, then open-source it, which limits commercial software opportunity across the entire space that operators might target with open-source.  Watch this one; it could make you or kill you.

The final market-shaper for 2018 is 5G-and-FTTN broadband services.  While we’ve had a number of technical advances in the last five years that raise the speed of copper/DSL, we can’t deliver hundred-meg broadband reliably from current remote nodes, even fed by fiber.  If there’s going to be a re-architecting of the outside plant for “wireline” broadband, it has to be based on something with better competitive performance.  That’s what makes 5G/FTTN look good.  Early trials show it can deliver 100-meg-or-better in many areas, and probably could deliver at least a half-a-gig with some repositioning or adding of nodes.  It puts telcos on a level playing field with respect to cable CATV technology, even with recent generations of DOCSIS.  Competition, remember?

The important thing about the 5G/FTTN hybrid is that it might be the technical step that spells the end of linear-RF TV delivery.  Cable operators have been challenged for years trying to address how to allocate CATV spectrum between video RF and IP broadband.  5G/FTTN raises the stakes in that trade-off by giving telcos even more broadband access capacity to play with, and if we see significant movement in the space in 2018, then we should expect to see streaming supplant linear RF for TV.

The downside for 5G/FTTN may be the rest of 5G.  Operators I’ve talked with rate the 5G/FTTN-millimeter wave stuff their top priority, followed by the 5G New Radio (NR) advancements.  There’s a lot of other 5G stuff, including federating services beyond connection, network slicing, and so forth.  None of these get nearly the backing in the executive suites of operators, though of course the standards types love them all.  Will the sheer mass of stuff included in 5G standards weigh down the whole process?  It seems to me that the success of any piece of 5G in 2018 will depend in part on how easily it’s separated from the whole.

How do you play 5G?  In 2018, anyone who thinks they can make a bundle on anything other than 5G/FTTN is probably going to be seriously disappointed, but other than the major telco equipment vendors in both the RAN/NR and fiber-node space, vendors will be well advised to look for adjunct opportunities created by 5G.  Video could be revolutionized, and so could business services, and 5G/FTTN could be a major driver for SD-WAN too.  A symbiotic ecosystem might evolve here, in which case that ecosystem could create most of the 2018 and even 2019 opportunity.

Now for a few things that will get a lot of attention in 2018 but won’t qualify as market-shapers.  I’ll give these less space, and we may revisit some of them in 2019.

The first is carrier cloud.  I’m personally disappointed in this one, but I have to call things as they seem to be going.  My model never predicted a land-rush carrier cloud opportunity in 2018; it said we’d add no more than about 1,900 data centers, largely due to the fact that the main drivers of deployment would not have hit.  Up to 2020, the opportunity is driven mostly by video and ad caching, and the big growth in that won’t happen until 5G/FTTN starts to deploy in 2019.  We will see an uptick in data centers, but probably we’ll barely hit my model forecast.  Check back next year!

Next is net neutrality, which the FCC decided it would not play a significant role in enforcing.  There is talk about having the courts reverse the FCC, or Congress changing the legislative framework that the FCC operates under, restoring neutrality.  Possible?  Only very dimly.  The courts have affirmed the FCC’s right to decide which “Title” of the Communications Act applies to ISPs, so that discretion will likely be upheld here too.  The same courts have ruled that, without Title II, the FCC lacks the authority to impose the neutrality rules.  Congress has never wanted to take a role in setting telecom policy, and in any event the same party controls Congress as controls the FCC.  The order will likely stand, at least until a change in administration.  How operators will react to it is also likely to be a yawn in 2018; they’ll wait to see whether there’s any real momentum to change things back, and won’t risk adding to it.

Another miss in 2018 is massive SDN/NFV deployment.  Yes, we have some of both today, and yes, there will be more of both in 2018, but not the massive shift in infrastructure that proponents had hoped for.  Operators will not get enough from either SDN or NFV to boost profit-per-bit significantly.  Other market forces could help both SDN and NFV in 2019 and 2020, though.  We’ll get to that at the end of next year, of course.  The fact is that neither SDN nor NFV were likely to bring about massive transformational changes; the limited scope ensures that.  Operators are already looking elsewhere, as I note earlier in this blog.  Success of either SDN or NFV depends on growth in the carrier cloud, and 2018 is too early to expect much in that area.

Were we to see rapid movement in all our market-shaping technologies, we could expect 2018 to be a transformation land-rush.  Even just two of them would likely result in a boom in the industry for at least some of the players, and a single one would be enough to change the game noticeably and set things up for 2019.  In my own view, we should look at 2018 in just those terms—we’re teeing up technology for full exploitation in 2019 and 2020.  Don’t let that futurism lull you into delay, though.  The winners in 2019 and 2020 will almost surely be decided next year, and you’re either in that group by year’s end, or you’re at enormous risk.

I wish all of you a Happy and prosperous New Year.

Service Lifecycle Modeling: More than Just Intent

I blog about a lot of things, but the topic that seems to generate the most interest is service lifecycle automation.  The centerpiece of almost every approach is a model, a structure that represents the service as a collection of components.  The industry overall has tended to look at modeling as a conflict of modeling languages; are you a YANG person, a TOSCA person, a TMF SID person?  We now have the notion of “intent modeling”, which some see as the super-answer, and there are modeling approaches that could be adopted from the software/cloud DevOps space.  How do you wade through all of this stuff?

From the top, of course.  Service lifecycle automation must be framed on a paramount principle, which is that “automation” means direct software handling of service events via some mechanism that associates events with actions based on the goal state of each element and the service overall.  The notion of a “model” arises because it’s convenient to represent the elements of a service in a model, and define goal states and event-to-process relationships based on that.
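
Here’s a minimal Python sketch of that event-to-action idea: each modeled element carries a state, and a table maps state/event pairs to the process that should handle them.  The states, events, and actions are invented for illustration, not taken from any particular implementation.

```python
# A toy state/event table for service lifecycle automation: events are
# steered to processes based on the current state of each modeled element.
# States, events, and action names are invented for this sketch.

LIFECYCLE = {
    ("deploying",  "deploy-complete"): ("active",     "notify_parent"),
    ("active",     "fault"):           ("recovering", "attempt_local_redeploy"),
    ("recovering", "redeploy-ok"):     ("active",     "notify_parent"),
    ("recovering", "redeploy-failed"): ("failed",     "escalate_to_parent"),
}

def handle(element, event):
    new_state, action = LIFECYCLE[(element["state"], event)]
    element["state"] = new_state
    return action            # in a real system, this would invoke the process

elem = {"name": "FirewallVNF", "state": "active"}
print(handle(elem, "fault"))             # -> attempt_local_redeploy
print(handle(elem, "redeploy-failed"))   # -> escalate_to_parent
```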

The problem with this definition as a single modeling reference is the term “service elements”.  A service is potentially a collection of thousands of elements.  Many of the elements are effectively systems of lower-level elements (like a router network), or complex elements like hosted virtual functions that have one logical function and multiple physical components.  The structural reality of networks generates four very specific problems.

Problem number one is what are you modeling?  It is possible to model a service by modeling the specific elements and their relationships within the service itself.  Think of this sort of model as a diagram of the actual service components.  The problem this has posed is that the model doesn’t represent the sequencing of steps that may be needed to deploy or redeploy, and it’s more difficult to use different modeling languages if some pieces of the process (setup of traditional switches/routers, for example) already have their own effective modeling approaches.  This has tended to emphasize the notion of modeling a service as a hierarchy, meaning you are modeling the process of lifecycle management not the physical elements.

The second problem is simple scale.  If we imagine a model as a single structure that represents an entire service, it’s clear in an instant that there’s way too much going on.  Remember those thousands of elements?  You can imagine that trying to build a complete model of a large service, as a single flat structure, would be outlandishly difficult.  The issue of scale has contributed to the shift from modeling the physical service to modeling the deployment/redeployment steps.

Problem three is the problem of abstraction.  Two different ways of doing the same thing should look the same from the outside.  If they don’t, then making a change to how some little piece of a model is implemented could mean you have to change the whole model.  Intent modeling has come to be a watchword, and one of its useful properties is that it can collect different implementations of the same functionality under a common model, and can support hierarchical nesting of model elements, an essential property when you’re modeling steps or relationships rather than the real structure.

Problem four is suitability and leveraging.  We have many software tools already available to deploy hosted functions, connect things, set up VPNs, and so forth.  Each of these tools has proved itself in the arena of the real market, they are suitable to their missions.  They are probably not suitable for other missions; you wouldn’t expect a VPN tool to deploy a virtual function.  You want to leverage stuff where good stuff is available, meaning you may have to adopt multiple approaches depending on just what you’re modeling.  I think that one of the perhaps-fatal shortcomings of SDN and NFV work to date is the failure to exploit things that were already developed for the cloud.  That can be traced to the fact that we have multiple modeling approaches to describe those cloud-things, and picking one would invalidate the others.

As I noted above, it may well have been the recognition of these points that prompted the concept of intent models.  An intent model is an abstraction that asserts specific external properties related to its functionality, and hides how they’re implemented.  There’s no question that intent models, if properly implemented, offer a major advance in the implementation of service lifecycle automation, but the “properly implemented” qualifier here is important, because they don’t do it all.
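
As a hedged illustration of those external properties, here’s a Python sketch of an intent model that exposes one abstraction while hiding several interchangeable implementations.  The class names and selection logic are my own inventions.

```python
# Illustrative intent model: the parent sees only the abstraction and its
# asserted properties; decomposition picks an implementation and hides it.
# FirewallIntent and the implementation classes are invented for the sketch.

class FirewallIntent:
    """External face: the only thing a higher-level model is allowed to see."""
    properties = {"function": "firewall", "throughput_gbps": 1}

    def __init__(self, implementations):
        self._implementations = implementations   # hidden interior

    def decompose(self, site):
        # Pick whichever implementation fits the target site; the parent
        # neither sees nor cares which one was chosen.
        for impl in self._implementations:
            if impl.fits(site):
                return impl.deploy(site)
        raise RuntimeError("no implementation fits this site")

class HostedVnfImpl:
    def fits(self, site):   return site["has_cloud_pool"]
    def deploy(self, site): return f"VNF hosted near {site['name']}"

class ApplianceImpl:
    def fits(self, site):   return True            # fallback: a physical box
    def deploy(self, site): return f"appliance shipped to {site['name']}"

fw = FirewallIntent([HostedVnfImpl(), ApplianceImpl()])
print(fw.decompose({"name": "branch-12", "has_cloud_pool": False}))
```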

Back in the old IPsphere Forum days, meaning around 2008, we had a working-group session in northern NJ to explore how IPsphere dealt with “orchestration”.  The concept at the time was based on a pure hierarchical model, meaning that “service” decomposed into multiple “subservices”, each of which was presumed to be orchestrated through its lifecycle steps in synchrony with the “service” itself.  Send an “Activate” to the service and it repeats that event to all its subservices, in short.  We see this approach even today.

One of the topics of that meeting was a presentation I made, called “meticulous orchestration”.  The point of the paper was that it was possible that the subordinate elements of a given model (an “intent model” in today’s terminology) would have to be orchestrated in a specific order and that the lifecycle phases of the subordinates might not just mimic those of the superior. (Kevin Dillon was the Chairman of the IPSF at the time, hope he remembers this discussion!).

The important thing about this concept, from the perspective of modeling, is that it demonstrated that you might need to have a model element that had no service-level function at all, but rather simply orchestrated the stuff it represented.  It introduced something I called in a prior blog “representational intent.”  If you are going to have to deploy models, and if the models have to be intent-based and so contain multiple implementations at a given level, why not consider thinking in two levels: the model domain and the service domain?

In traditional hierarchical modeling, you need a model element for every nexus, meaning the end of every fork and every forking point.  The purpose of that model element is to represent the collective structure below, allowing it to be an “intent model” with a structured interior that will vary depending on the specific nature of the service and the specific resources available at points where the service has to be delivered or host functionality.  It ensures that when a recovery process for a single service element is undertaken and fails to complete, the recovery of that process at a higher level is coordinated with the rest of the service.

Suppose that one virtual function in a chain has a hosting failure and the intent model representing it (“FirewallVNF” for example) cannot recover locally, meaning that the place where the function was formerly supported can no longer be used.  Yes, I can spin up the function in another data center, but if I do that, will the connection that links it to the rest of the chain not be broken?  The function itself doesn’t know that connection, but the higher-level element that deployed the now-broken function does.  Not only that, it’s possible that the redeployment of the function can’t be done in the same way in the new place because of a difference in technology.  Perhaps I now need a FirewallVNF implementation that matches the platform of a new data center.  Certainly the old element can’t address that; it was designed to run in the original place.

You see how this has to work.  The model has to provide not only elements to represent real service components, but also elements that represent the necessary staging of deployment and redeployment tasks.  Each level of such a structure models context and dependency.
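
A small Python sketch may help show what such a “representational” element looks like: a node with no service function of its own, whose only job is to stage its subordinates in order and re-run the dependent steps when something has to be redeployed.  Everything named here is invented for illustration.

```python
# Toy "representational intent": a parent node stages its children in order,
# so a redeployment re-runs the connection steps a leaf element cannot see.
# Node, FirewallVNF, and the lambdas are invented for this sketch.

class Node:
    def __init__(self, name, children=None, deploy=None):
        self.name = name
        self.children = children or []
        self._deploy = deploy                 # set only on leaf elements

    def deploy(self, site):
        if self._deploy:                      # leaf: a real service component
            return [self._deploy(site)]
        steps = []
        for child in self.children:           # representational node: stage
            steps += child.deploy(site)       # subordinates in a fixed order
        return steps

chain = Node("vCPE-chain", children=[
    Node("FirewallVNF",  deploy=lambda s: f"host firewall in {s}"),
    Node("chain-links",  deploy=lambda s: f"rebuild service chain in {s}"),
])
print(chain.deploy("dc-east"))
print(chain.deploy("dc-west"))   # moving the function re-stages both steps
```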

There are other approaches to modeling a service, of course, but the hierarchical approach that defines structure through successive decomposition is probably the most understood and widely accepted.  But even that popular model has to be examined in light of the wide-ranging missions that transformation would be expected to involve, to be sure that we were doing the right thing.

You could fairly say that a good modeling approach is a necessary condition for service lifecycle automation, because without one it’s impractical or even impossible to describe the service in a way that software can be made to manage.  Given that, the first step in lifecycle automation debates should be to examine the modeling mechanism to ensure it can describe every service structure that we are likely to deploy.

There are many modeling languages, as I said at the opening.  There may be many modeling approaches.  We can surely use different languages, even different approaches, at various places in a service model, but somehow we have to have a service model, something that describes everything, deploys everything, and sustains everything.  I wonder if we’re taking this as seriously as we must.

Missions and Architectures: Can the Two Meet?

What do Juniper and Nokia have in common, besides the obvious fact that both are network equipment providers?  The answer may be that the two are both trying to gain traction by making generalized SDN products more mission-specific.  “Laser focus?”  Juniper has announced a multi-cloud application mission for Contrail, and Nokia’s Nuage SDN product is getting a lot of operator traction as an SD-WAN platform.

What do they have in common with the major operator focus?  Apparently not much.  At this very moment, ETSI has formalized its zero-touch automation initiative, which appears to be aimed at broadening the architectural target of automation.  Is this “laser unfocus?”  Is there something going on here that we need to be watching?  I think so.

If you’ve followed the path of SDN and NFV, you know that both concepts burst on the scene to claims and stories that were nothing short of total infrastructure revolution.  Neither of the two has achieved anything like that.  SDN has had success in cloud data centers and in a few other applications, but has had no measurable impact on network infrastructure or operations so far.  NFV has been adopted in limited virtual-CPE applications and in some mobile missions, and in both cases has achieved these limited goals by narrowing its focus.  For vendors who need something to happen, this is a reasonable thing.

The common issue with SDN and NFV is one I’ve blogged about often.  “Transformation” isn’t a half-hearted thing you can tiptoe into.  The definition of the term is “a thorough or dramatic change”, after all.  If you apply it as network operators have for a decade now, it means a revolution in the business model of network operators, created and sustained through a comparable revolution in infrastructure.  In short, it’s big, and in the interest of making progress, neither SDN nor NFV have proved big enough.

Big change demands big justifying benefits to achieve a reasonable return on investment, and the problem with both SDN and NFV is that they have too narrow a scope to deliver those benefits.  In particular, both technologies focus on the infrastructure, not the total business model, and that’s where transformation has to start.  That decision by ETSI to launch a new zero touch automation group (properly called “Zero touch network and Service Management Industry Specification Group” or for short, ZSM ISG) is an acceptance of the need for a wider swath of realizable benefits, and also I think of the fact that the current processes, including both the ETSI NFV ISG and the TMF, are not going to achieve that goal fast enough, if at all.

Vendors aren’t going to throw themselves on the fire, though, so you’d have to assume that there was buyer receptivity for narrower missions, and there is.  Operators want transformation at one level, but at another level they also want to, even have to, do something right now.  Vendors, who are already seeing SDN and NFV take on the trappings of multi-generational abstract research, are certainly interested in making their numbers in the coming quarter.  It’s these two levels of behavior that we’re seeing in the news I cited in my opening, and the “right now” camp is resonating with vendors with that same short-term goal.

That leads to the question of whether an architecture can even work at this point, given the mission-focused, disconnected progress.  In past blogs, I’ve pointed out that it’s difficult to synthesize a total service automation solution from little disconnected pieces.  Yet, we are seeing one camp moving toward bigger architectures and another moving to smaller applications of current architectures.  Surely having the goals rise while the applications of technology sink isn’t a happy situation.

The need to unify architectures and missions created the “proof-of-concept-low-apple” view of transformation.  You sing a broad and transforming tune, but you focus your attention on a set of limited, clearly useful, impact-conserving pieces of the problem.  The theory is that you can apply what you’ve learned to the problem at large.  We know now that isn’t working; if you focus on small things you tend to design for limited scope, and transformation (you will recall) is revolutionary by definition.  Hence our split, with vendors making even the already-limited-in-scope stuff like SDN and NFV more mission-specific, and operators looking to a new and broader body to handle the real transformation problem.

Is either of these things going to end up in the right place?  Can we, at this point, address the broader goals of proving “necessity and benefit of automating network and service operation in next generation networks”, as the ZSM ISG white paper suggests?  Will vendors, seeking quarterly returns through limited applications, be able to later sum these limited things into a glorious whole?

Those who won’t learn history are doomed to repeat it, so the saying goes.  The operators have now accepted the biggest problem with their earlier NFV initiative—it didn’t take a big enough bite out of even the low apple.  We can assume, since their quote demonstrates the correct scoping of the current effort, that mistake won’t be repeated.  Vendors like Juniper and Nokia should see that enterprises and service providers all want transforming changes, so we can assume that they will frame their local solutions so as to make them extensible.  What we can’t assume is that operators won’t make a different mistake by failing to understand the necessity of a strong technical architecture for their transformed future.  Or that vendors will somehow synthesize an overall approach from steps whose limited size and scope were designed to avoid facing the need for one.

Recall that we have, in the industry today, two camps—the “architecture” camp and the “mission” camp.  Whatever you think the balance between the two should be, there is one crystal truth, which is that only missions can get business cases.  You can’t talk about transformation without something to transform.  What is less accepted as a truth, but is true nevertheless, is that absent an architecture, a bunch of mission-specific advances turn into mission-specific silos.  That’s never efficient, but if what you’re trying to do is transform a whole ecosystem, it’s fatal.  The pieces of your service won’t assemble at all, much less optimally, and you’ll have to constantly convert and adapt.

Right now, we have years of work in technologies for network transformation without comparable thinking on the question of how to combine them to create an efficient operational ecosystem.  We are not going back to redo what’s been done, so we have to figure out a way of retrofitting operations to the sum of the needs the missions create.  This is a very different problem, and perhaps a new body will be able to come at it from a different direction.  It’s not an easy fix, though.  The mission guys don’t speak software and the architecture guys can’t understand why we’re not talking about abstract programming techniques and software design patterns.  The two don’t talk well to each other, because neither really understands the other.

So, do we do kumbaya songfests by the fire to introduce them?  No, because remember it’s the mission guys who have the money.  If we want to get a right transformation answer out of the ZSM ISG, then software people will have to learn to speak mission.  They’ll have to talk with the mission people, frame a set of mission stories with their architecture at the center of each, and convince the mission people that the approach makes the business case for their mission.  Not that it makes all business cases, not that it’s the superior, modern, sexy, way of doing things.  That it makes the business case for each buyer and still, well-hidden under the technical covers, makes it for all and in the right way.

Is there a pathway to doing this?  If there is, then getting on it quickly has to be the goal of the ZSM ISG process, or we’ve just invented NFV ISG 2.0, and vendors will be carving out little missions that match incomplete solutions.

What Does Verizon’s Dropping IPTV FiOS Mean for Streaming Video?

Verizon is reportedly abandoning its streaming video platform, according to multiple online technology sources.  That, if true, raises some very significant questions because it could mean that Verizon has abandoned streaming as a delivery strategy for FiOS TV.  If that’s true, then what does it mean for the 5G/FTTN hybrid model of broadband that Verizon has been very interested in?

I can’t confirm the story that FiOS IPTV is dead, but it sure seems from the coverage that there are multiple credible sources.  Verizon has been dabbling with the notion of IPTV for FiOS for some time, and for most of the period it was seen as a way of competing with the AT&T DirecTV franchise, which can deliver video outside AT&T’s wireline footprint, meaning in Verizon’s region.  The best defense is a good offense, so IPTV could have let Verizon take the fight back to AT&T.  I think that AT&T was indeed the primary force in the original Verizon IPTV plan.

AT&T has further complicated the situation since FiOS IPTV was first conceptualized.  DirecTV Now, which for at least some AT&T mobile customers is available cheaply with no charge against data usage, elevates DirecTV competition into the mobile space.  You could argue that what Verizon really needs is a way of offering off-plan viewing of TV shows to its mobile customers, to counter what’s clearly a trend toward special off-plan content deals from competitors.

A true unlimited plan without any video throttling would support streaming without off-plan special deals, of course, but for operators who have content properties in some form, the combination of those content elements and mobile services offers better profit than just allowing any old third-party video streaming player to stream over you.  Even, we point out, a competitor.

On the other side of the equation is the fact that if Verizon really plans to replace DSL tail circuits from FTTN nodes with 5G millimeter wave and much better broadband, would it not want to be able to sell “FiOS” to that customer group?  Some RF engineers tell me that it is theoretically possible to broadcast a full cable-TV channel complement over 5G/FTTN.  However, you use up a lot of millimeter-wave bandwidth with all those RF channels, and remember that viewers are more interested in bundles with fewer channels, even in a la carte video.  Surely it would be easier to just stream shows over IP.  Streaming over IP would also be compatible with mobile video delivery, something that could be helpful if Verizon elected to pair up its 5G/FTTN millimeter-wave stuff with traditional lower-frequency 5G suitable for mobile devices.  Or went traditional 5G to the home.
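To put rough numbers on that bandwidth point, here is a back-of-envelope sketch in Python.  Every figure in it (channel count, per-channel bit rate, homes per node, viewing behavior) is an assumption of mine for illustration, not a Verizon number or a measured 5G capacity.

```python
# Back-of-envelope only; every figure here is an illustrative assumption,
# not operator data or a measured 5G/FTTN capacity.

CHANNELS_IN_FULL_LINEUP = 200     # assumed "full cable-TV complement"
MBPS_PER_HD_CHANNEL = 6           # assumed HD encoding rate per channel
HOMES_PER_NODE = 50               # assumed homes served by one FTTN node
STREAMS_PER_ACTIVE_HOME = 2       # assumed simultaneous streams per viewing home
SHARE_OF_HOMES_VIEWING = 0.6      # assumed fraction of homes watching at peak

# Broadcasting the whole linear lineup consumes node capacity whether or not
# anyone is watching a given channel.
full_lineup_mbps = CHANNELS_IN_FULL_LINEUP * MBPS_PER_HD_CHANNEL

# Streaming over IP consumes capacity only for what is actually being watched.
streaming_mbps = (HOMES_PER_NODE * SHARE_OF_HOMES_VIEWING
                  * STREAMS_PER_ACTIVE_HOME * MBPS_PER_HD_CHANNEL)

print(f"Full RF-style lineup per node: ~{full_lineup_mbps} Mbps, always on")
print(f"IP streaming at peak viewing:  ~{streaming_mbps:.0f} Mbps, demand-driven")
```

Under those assumptions, the linear lineup ties up several times the peak streaming load, and the gap only widens as bundles get skinnier, which is the a la carte point above.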

So does the fact (assuming again the story is true) that Verizon is bailing on IPTV FiOS mean it’s not going to do 5G/FTTN or won’t give those customers video?  I think either is incredibly unlikely, and there is another possible interpretation of the story that could be more interesting.

Look at the home TV when FiOS first came out.  It had an "antenna" and a "cable" input, and the latter accommodated a set-top box that delivered linear RF video.  Look at the same TV today.  It has an Internet connection and, increasingly, a set of features to let you "tune" to Internet streaming services.  A growing number also have features to assemble a kind of streaming channel guide.  The point is that if we presumed that everything we watched was streamed, we wouldn't need an STB at all unless we lacked both a smart TV and a device (Apple TV, Chromecast, Fire TV, Roku, or whatever) that could adapt a non-Internet TV to streaming.

In this light, a decision by Verizon to forego any IPTV version of FiOS looks a lot smarter.  Why invent an STB for video technology that probably every new model of TV could receive without such a device?  In my own view, the Verizon decision to drop IPTV FiOS plans is not only non-destructive to its 5G/FTTN strategy, it serves that strategy well.  So well, in fact, that when enough 5G/FTTN rolls out, Verizon is likely to start phasing in the streaming model to its new FiOS customers, then to them all.

Even the competitive situation favors this kind of move.  A pure STB-less streaming model is much easier to introduce out of area, to competitive provider mobile customers, etc.  It has lower capex requirements, it’s more suited to a la carte and “specialized bundle” models, and thus gives the operator more pricing flexibility.  Add to that the fact that the cable operators, who currently have to assign a lot of CATV capacity to linear RF channels, are likely to themselves be forced to adopt IP streaming, and you can see where Verizon would be if they somehow tried to stay with RF.

You might wonder why all of this seems to be coming to a head now, when it was at least a possible model for the last decade.  I think the answer is something I mentioned in a recent blog; mobile video has essentially separated “viewers” from “households.”  If you give people personal video choices, they tend to adopt them.  As they do, they reduce the “watching TV as a family” paradigm, which is what’s sustained traditional viewing.  My video model has suggested that it’s the early-family behavior that sets the viewing pattern for households.  If you give kids smartphones, as many already do, then you disconnect them from family viewing very quickly.

Time-shifting has also been a factor.  The big benefit of channelized TV is that you only have to transport a stream once.  If you’re going to time-shift, the benefit of synchronized viewing is reduced, and probably to the level where caching is a suitable way of optimizing delivery bandwidth.  Anyway, if you presumed that “live” shows were cached at the serving office level, you could multi-cast them to the connected homes.  Remember, everyone needs to have a discrete access connection except where you share something like CATV channels.
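The "transport once" arithmetic is easy to make concrete.  The sketch below compares pure unicast delivery with serving-office caching plus multicast; as before, every number is an assumption chosen only to show the shape of the comparison.

```python
# Minimal sketch of the transport-once argument; all figures are assumptions.

HOMES_ON_SERVING_OFFICE = 10_000  # assumed homes behind one serving office
SHARE_WATCHING_LIVE = 0.25        # assumed fraction watching live at the same time
DISTINCT_LIVE_PROGRAMS = 40       # assumed distinct live shows being watched
MBPS_PER_STREAM = 6               # assumed HD stream rate

live_viewers = int(HOMES_ON_SERVING_OFFICE * SHARE_WATCHING_LIVE)

# Pure unicast streaming: one copy per viewer crosses the metro/core to reach
# the serving office.
unicast_gbps = live_viewers * MBPS_PER_STREAM / 1000

# Cache live shows at the serving office and multicast them outward: the
# metro/core carries one copy per distinct program; each home still gets its
# own stream over its own access connection.
cached_gbps = DISTINCT_LIVE_PROGRAMS * MBPS_PER_STREAM / 1000

print(f"Unicast into the serving office:   ~{unicast_gbps:.1f} Gbps")
print(f"With office caching and multicast: ~{cached_gbps:.2f} Gbps")
```

Either way, the access connections still carry one stream per viewer, which is why the discrete-access point matters everywhere except where something like CATV channels is shared.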

I think that far from signaling that Verizon isn’t committed to streaming, the decision to drop the IPTV FiOS platform is a signal that they’re committed to where streaming is heading, rather than to a channelized view of a streaming model.  If channels are obsolete, so for sure are set-top boxes.

Enterprise Budgets in 2018: More Questions but Some Clarity

Network operators obviously buy a lot of network gear, but so do enterprises.  In my past blogs I’ve tended to focus on the operator side, largely because my own practice involves more operators and their vendors than it does enterprises.  To get good enterprise data, I have to survey explicitly, which is too time-consuming to do regularly.  I did a survey this fall, though, and so I want to present the results, and contrast them with what I’ve gotten on the operators this year.

Enterprises have a shorter capital cycle than operators, typically three years rather than five, and they experience faster shifts in revenue than operators usually do.  As a result, their IT spending is more variable.  They also traditionally divide IT spending into two categories—“budget” spending that sustains current IT commitments, and “project” spending that advances the use of IT where a favorable business case can be made.

The biggest question that I've tried to answer with respect to enterprise IT has always been where that balance of budget/project spending can be found.  In "boom" periods, like the '90s, I found that project spending led budget spending by almost 2:1.  Where IT spending was under pressure, the ratio shifted back to 1:1, then below, and that's what has happened since 2008 in particular.  In 2006, which was my last pre-recession survey of enterprises, project spending was 55% of total IT spending.  It slipped to its lowest recession-era level in 2009, at 43%, recovered through 2015, and then began to slip again.

This year, project spending was 49% of total IT spending, and enterprises suggest that it could fall as low as 39% in 2018, which if true would be the lowest level since I first surveyed in 1982.  It could also rise to as much as 54%, which would be good, and that is obviously a fairly broad range of possibilities.  Interestingly, some Wall Street research is showing the same thing, though they don't express their results in exactly the same terms.  The point one Street report makes is that IT was once seen as an "investment area" and is now seen as a "cost area", noting that the former state arose because it was believed that IT investment could improve productivity.
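If you're wondering how a ratio like 2:1 squares with the percentages I've been quoting, the conversion is simple arithmetic, and this small Python sketch restates it.  The figures it prints are just the survey numbers cited above, used only to illustrate the conversion, not new data.

```python
# Convert between a project:budget spending ratio and project share of total
# IT spending.  The numbers are the survey figures cited in the text.

def project_share(ratio_project_to_budget: float) -> float:
    """Project spending as a fraction of total (project + budget) spending."""
    return ratio_project_to_budget / (1.0 + ratio_project_to_budget)

print(f"2:1 project-to-budget -> {project_share(2.0):.0%} project share")  # boom-era level
print(f"1:1 project-to-budget -> {project_share(1.0):.0%} project share")  # under-pressure level

# Going the other way, from the shares cited in the text back to ratios:
for label, share in [("2006", 0.55), ("2009", 0.43), ("this year", 0.49)]:
    ratio = share / (1.0 - share)
    print(f"{label}: {share:.0%} project share = {ratio:.2f}:1 project-to-budget")
```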

CIOs and CFOs in my survey agreed that 2018 would see more IT spending, but they disagreed on the project/budget balance, with CIOs thinking there would be more project spending to take advantage of new productivity opportunities, and CFOs thinking that they'd simply advance the regular modernization of infrastructure.  It's this difference in perspective that I think accounts for the wider range of project/budget balance projections for next year.

Where this aligns with network operator thinking is fairly obvious.  I noted that operators had a lot of good technology plans and difficulty getting them budgeted as recommended.  That seems to be true for enterprises too.  CIOs think that there’s more that IT could do, but CFOs aren’t yet convinced that these new applications can prove out in business terms.

That's the heart of the problem with the enterprise, a problem that in a sense they share with the operators.  Absent a benefit, a business case, you can't get approval for new tech projects in either sector.  In the enterprise, it's productivity gains that have to be proved, and with operators it's new revenue.  In both sectors, we have a mature market where the low apples, the high-return opportunities, have already been picked.  What's left either depends on unproven technology or offers benefits that are harder to quantify.  In either case, approvals are harder to get now.  That won't change until a new paradigm emerges.

Tech isn’t a paradigm, it’s the mechanization of one.  You can improve software, software architecture, data mining, or whatever you like, and what you have done is valuable only if you can use that change to make a business case, to improve productivity or revenues.  We’re good at proposing technology changes, less good at validating the benefits of the changes.  Till that improves, we’ll probably under-realize on our key technology trends.

Except, perhaps, in one area.  Technology that reduces cost is always credible, and enterprises tell me that an astounding nine out of ten cost-saving technology projects proposed this year are budgeted for 2018.  This includes augmented cloud computing, some hyperconvergence projects, and in networking the application of SD-WAN.  In productivity-driven projects, only three in ten were approved.

It’s interesting to see how vendor influence interacts with project priority, and here there are some differences between operators and enterprises.  Operators have always tended to be influenced most by the vendors who are the most incumbent, the most represented in their current infrastructure and recent purchases.  Enterprises have tended to shift their vendor focus depending on the balance of future versus past, and the project/budget balance is an indicator there too.  This year, the vendors who gained influence in the enterprise were the ones that the enterprise associated with the cloud—Microsoft, Cisco, and (trailing) IBM.  There’s a different motivation behind each of the three.

Microsoft has been, for some time, the dominant cloud computing influence among enterprises, not Amazon.  I've noted in the past that a very large chunk of public cloud revenues come from social media and content companies, not from enterprises.  Microsoft's symbiotic cloud positioning, leveraging data center and public cloud hybridization, has been very favorably received.  Twice as many enterprises name Microsoft as a trusted strategic partner in the cloud as name Amazon.

Microsoft has some clear assets here.  First, they have a data center presence and a cloud presence.  Vendors who rely totally on Linux servers have the disadvantage of sharing Linux with virtually every other server vendor, whereas Microsoft has its own software technology on prem.  They also have, as a result of that, a clear and long-standing hybrid cloud vision.  Finally, they can exploit their hybrid model to use the cloud as a tactical partner for apps that need more elastic resources, faster deployment, and more agility.  It's winning for Microsoft, so far.

Cisco as a leading influence frankly surprised me, but when I looked at the reason behind the choice it makes sense.  To a CIO, the transformation to a hybrid cloud is a given.  That is seen as being first and foremost about the network accommodation of more complex and diverse hosting options, which implicates the corporate VPN, which usually means Cisco.  Cisco is also the only prime network vendor seen as having direct cloud computing credentials.

Cisco doesn't have the clear public-cloud link that Microsoft has, which means that they can't reap the full benefit of hybridization in hosting.  Some enterprises think this makes Cisco pull back from opportunities that need to be developed at the cloud service level, confining them more to the network than they might like.  Others note that Cisco is getting better at cloud advocacy.  Their recent purchase of cloud-management provider Cmpute.io may be a sign of that; it could give them entrée into hybridization deals.

Third-place IBM didn’t surprise me, in large part because IBM has always had very strong strategic account control among enterprises.  IBM did slip in terms of influence, though.  Its major competitors, HPE and Dell, slipped less and thus gained a bit of ground.  Still, both companies have only started to recover from a fairly long slide in terms of enterprise strategic influence.  There’s at least some indication that either or both could displace IBM by the end of the year.

IBM’s assets, besides account control, lie in its software resources, but it’s still struggling to exploit them in a cloud sense.  Having divested themselves of a lot of hardware products, they have the skin-in-the-game problem Cisco has, and unlike Microsoft their own cloud services haven’t exactly blown competition out of the market.  Among the largest enterprises, IBM is still a power.  Elsewhere is another story.

Enterprises will spend more on tech in 2018, largely because they tend to budget more directly in relationship to revenue expectations than operators do.  Their biggest focus will be the modernization of what they already have, which will drive them first to things like higher-density servers, second to container software to improve hosting efficiency, and third to the cloud, which is where some potential for productivity enhancement and some focus on cost management collide.  If that collision generates good results in 2018, we can expect a decisive shift to productivity-directed measures, a shift toward the cloud.

Network Operator Technology Plan to Budget Transition: First Look

I continue to get responses from operators on the results of their fall planning cycle, on my analysis of those results, and on their planning for 2018.  The interesting thing at this point is that we’re in the period when technology planning, the focus of the fall cycle, collides with the business oversight processes that drive budgeting, the step that’s now just starting and will mature in January and February.  In fact, it’s this transition/collision that’s generating the most interesting new information.

Typically, the fall technology cycle leads fairly directly to budget planning.  In the '90s, the first decade for which I had full data, operators told me that "more than 90% of their technology plans were budgeted as recommended".  That number fell just slightly in the first decade of this century, and in fact it held at over 80% for every year up to last year.  In 2016, only about three-quarters of the technology plans were budgeted, and operators told me this year that they expected less than two-thirds of their technology plans to be funded next year.

The problem, say the operators, is that it is no longer fashionable or feasible for CFOs to blindly accept technical recommendations on network changes.  In the '90s, half the operators indicated that CFOs took technology plans as the primary influence on budget decisions.  By 2009 that had fallen to a third, and today only 17% of operators said that technology plans were the primary driver.  What replaced the technology plans was business case analysis, which in 1985 (when I specifically looked at this issue) was named as even an important factor in just 28% of cases.

Part of the reason for the shift here, of course, was the breakup of the Bell System and the Telecom Act of 1996.  Regulated monopolies don’t really need to worry much about business case, after all.  But remember that the biggest difference in behavior has come since 2015, and through all that period the regulatory position of operators was the same.  The simple truth is that operators are finally transitioning from being de facto monopolies, with captive markets, into simple competitors, and as such they can’t just toss money to their network weenies.

So how did this impact technology plans?  Well, 5G plans were budgeted at a rate over 80%, but nearly all the stuff that’s been approved relates to the millimeter-wave 5G hybridization with FTTN, and the remainder still relates to the New Radio model.  Everything beyond those two topics is just trials at this stage, and in many cases very limited trials at that.  But 5G, friends, is the bright spot.

SDN was next on the list of stuff that got more budget than average.  Operators said that almost half of SDN projects were budgeted, but remember that the majority of these projects involved data center switching.  If we looked outside the data center and restricted ourselves to “SDN” meaning “purist ONF-modeled” SDN, about a quarter of the technology plans were budgeted.

NFV fared worse, to no surprise.  It had less than a third of its planned projects budgeted, and nearly all of those that won were related to virtual CPE and business edge services.  The actual rate of mobile-related (IMS/EPC) success was higher, but the number of these projects was small and the level of commitment beyond a simple field trial was also limited.

Worst on the list was “service lifecycle automation”, which had less than a quarter of the technology plans budgeted.  According to operators, the problem here is a lack of a consistent broad focus for the projects.  The fuzzier category of “OSS/BSS modernization” that I’ve grouped into this item did the best, but the goals there were very narrow and inconsistent.  Three operators had ambitious closed-loop automation technology plans, none of which were approved.

Interestingly, the results in all of these categories could be improved, say the CFO organizations, if CTO, CIO, and COO teams could come up with a better business case, or make a better argument for the benefits being claimed.  According to the operators, such improvements could still be worked into budgets with little reduction in spending even as late as May, but if nothing significant is done by then, fixing the justifications would likely yield only limited spending next year, with more budget in 2019.

The second thing that came out of the comments I received is that even operators who weren’t among the 77 in my survey base were generally in accord with the results of the fall technology planning survey.  There were a few that were not, and those I heard from were generally associated with atypical marketing opportunities or competitive situations.  National providers with no competition and business-only providers made up the majority of the dissenters.  I suspect, but can’t prove, that those who said their own views/results had been very different were expressing technology optimism more than actual different experiences, but since I don’t have past survey results to validate or invalidate this judgment, I have to let the “why” go.  Still, I do need to say that among non-surveyed operators, the view on SDN and NFV is a bit more optimistic.

A couple of other points that emerged are also interesting.  None of the operators who got back to me after the fall planning cycle thought that they would likely take aggressive action in response to any relaxation in net neutrality rules.  They cited both the fear that the current rules could later be reinstated and the fear of backlash, with the former a general concern and the latter related mostly to things like paid prioritization and settlement.  Operators need consistency in policy, and so far they don't see a lot of that in most global regulatory jurisdictions.  I'd point out that in most markets, a commission is responsible for policy but operates within a legislative framework, and thus it would take a change of law to create more consistent policy.

Another interesting point that I heard from operators was that they're becoming convinced that "standards" in the traditional sense are not going to move the ball for them going forward.  In fact, about ten percent of operators seem to be considering reducing their commitment to participation in the process, which means sending fewer people to meetings or assigning fewer people to work specifically on formal standards.  On the other hand, three out of four said they were looking to commit more resources to open-source projects.

Operators have had a love/hate relationship with standards for at least a decade, based on my direct experience.  On the one hand, they believe that vendors distort the formal standards process by pushing their own agendas.  Operators, they point out, cannot in most markets control a standards body or they end up being guilty of anti-trust collusion.  They hope that open-source will be better for them, but they point out that even in open-source organizations the vendors still tend to dominate with respect to resources.  That means that for an operator to advance their specific agenda, they have to do what AT&T has done with ECOMP, which is develop internally and then release the result to open-source.

The final point was a bit discouraging; only one in ten operators thought they'd advance significantly on transformation in 2018, but then there was never much hope that 2018 would be a big year.  The majority of operators said in 2016 that transformation would get into high gear sometime between 2020 and 2022.  That's still what they think, and I hope that they're right.

Are We Seeing the Sunset of Channelized Live TV?

There is no question that the video space and its players are undergoing major changes.  It’s not clear where those are leading us, at least not yet.  For decades, channelized TV has been the mainstay of wireline service profit, and yet it’s more threatened today than ever before.  Where video goes, does wireline go?  What then happens to broadband?  These are all questions that we can explore, but perhaps not quite answer.

With VoIP largely killing traditional voice service profit growth and Internet revenue per bit plummeting, operators have come to depend on video in the form of live TV to carry their profits forward.  At the same time, the increased reliance on mobile devices for entertainment has radically increased interest in streaming non-live video from sources like Amazon, Hulu, and Netflix.  The combination has also generated new live-TV competition from various sources, including Hulu and AT&T, and more ISPs are planning to offer some streaming options in the future.

At the same time, the fact that streaming means “over the Internet” and the Internet is agnostic to the specific provider identity means that many content sources can now think about bypassing the traditional TV providers.  The same sources are looking to increase profits, and so increase the licensing fees charged to the TV providers.  Those providers, also looking for a profit boost, add their own tidbit to the pricing to users, which makes users unhappy with their channelized services and interested in streaming alternatives.

I've been trying to model this for about five years, and I think I've finally managed to get some semi-useful numbers.  Right now, in the US market, my research and modeling say that about a third of all TV viewers regularly use some streaming service, and about 12% have a "live" or "now" option.  It appears that 8% or so exclusively stream, meaning they have truly cut the cord.  This data has to be viewed with some qualifiers, because many young people living at home have only streaming service but still get live TV from the household source.  It's this that accounts for what Nielsen has consistently reported as fairly constant household TV viewing and, at the same time, for a rise in streaming-only users.  Households and users aren't the same thing.

Wireline services serve households and mobile services serve users.  The fact that users are adopting streaming because of increased mobile dependence isn’t news.  What is news is that this year it became clear that traditional channelized viewing was truly under pressure at the household level.  People seem to be increasingly unhappy with the quantity and quality of original programming on traditional networks, and that translates to being less willing to pay more every year for the service.

In my own limited survey of attitudes, what I found was that about two-thirds of viewers don’t think their TV service offers good value.  That number has been fairly steady for about four years.  Of this group, the number who are actively looking to reduce their cost has grown over that four years from about a fifth to nearly half.  Where TV providers have offered “light” or “a la carte” bundles, users tend to adopt them at a much higher rate than expected.  All of this is a symptom that TV viewing is under growing pressure, and that the “growing” is accelerating.

The most obvious consequence of this is the desire of cable/telco giants and even streaming video players to get their own video content.  Comcast buys NBC Universal, AT&T wants Time Warner, and Amazon and Netflix are spending a boatload on original content.  I don't think anyone would doubt that this signals the belief of the TV delivery players that content licensing is going to be ever more expensive, so they need to be at least somewhat immune to the cost increases.  Ownership of a TV network is a great way to limit your licensing exposure, and also to hedge cost increases, because you'll collect a higher rate from other players.

Another obvious impact of a shift toward streaming is that you don't need to own the wireline infrastructure that touches your prospects.  You don't need to own any infrastructure at all, and that means that every operator who streams can feed on the customers of all its competitors.  Those who don't will become increasingly prey rather than predator.  And the more people start to think in terms of finding what they want when they need it, rather than viewing what's on at a given time, the less value live TV has as a franchise.

I think it’s equally obvious that the TV industry has brought this on themselves, in a way.  For wireline TV players, their quest for mobile service success has promoted a whole new kind of viewing that’s seducing users away from traditional TV, even where it leaves household connections intact.  A family of four would likely select a fairly fat bundle to satisfy everyone, but if half the family is out with friends viewing mobile content, will they need the same package?  Competition for higher access speeds as a differentiator also creates more bits for OTTs to leverage, and encourages home consumption of streaming.

The quality of material is also an issue.  If you want "old" material you clearly can get most of it from somebody like Amazon, Hulu, or Netflix.  If you want new material, you're facing much shorter production seasons, a reported drop in the quality of episodes, and higher prices.  Every time someone finds that their favorite shows have already ended their fall season (which apparently ran about two months this year) and goes to Amazon to find something to view, they become more likely to turn to streaming even for new material, because they've come to expect more from it.

To me, the revolutionary truth we’re finally seeing is that “viewers” are increasingly separating from “households”.  We’ve all seen families sitting in a restaurant, with every one of them on their phones, ignoring the others.  Would they behave differently at home?  Perhaps many think they should, but markets move on reality and not on hopes, and it seems that personal mobile video is breaking down the notion of collective viewing, which means it’s breaking down channelized TV bundles, which means it’s eroding the whole model of channelized TV.

If you need to stream to reach mobile users, if mobile users are the ones with the highest ARPU and the greatest potential in producing future profits, and if streaming is going to reshape viewer behavior more to favor stored rather than live shows, then when streaming hits a critical mass it will end up reshaping the industry.  That probably won't happen in 2018, but it could well happen in 2019, accelerated by 5G/FTTN deployments.

I’ve been mentioning 5G and fiber-to-the-node hybrids in my recent blogs, and I think it’s warranted.  This is the part of 5G that’s going to be real, and quickly, and it has many ramifications, the shift toward streaming being only one of them.  5G/FTTN, if it truly lowers the cost of 100 Mbps and faster “wireline” Internet radically, could even boost competition in the access space.  New York City did an RFI on public/private partnership strategies for improving broadband speed and penetration, and other communities have been interested in municipal broadband.  The 5G/FTTN combination could make it possible in many areas.

Ah, again the qualifying “could”.  We don’t know how much 5G/FTTN could lower broadband cost, in part because we do know that opex is the largest cost of all.  Current players like Verizon have the advantage of an understanding of the total cost picture, and a disadvantage in that they have so many inefficient legacy practices to undo.  New players would have to navigate the opex side in a new way, to be a pioneer in next-gen closed-loop practices that nobody really has proved out yet.  We’ll surely see big changes in video, but the critical transition of “wireline” from channelized to streaming will take some time.