A Structure for Abstraction and Virtualization in the Telco Cloud

It is becoming clear that the future of networking, the cloud, and IT in general lies in abstraction.  We have an increasing number of choices in network technology, equipment vendors, servers, operating systems (and distros), middleware…you get the picture.  We have open-source software and open hardware initiatives, and of course open standards.  With this multiplicity of options comes more buyer choice and power, but multiplicity has its downsides.  It’s hard to prevent vendor desires for differentiation from diluting choice, and differences in implementation make it difficult to create efficient and agile operations.

Abstraction is the accepted way of addressing this.  “Virtualization” is a term often used to describe the process of creating an abstraction that can be mapped to a number of different options.  A virtual machine is mapped to a real server, a virtual network to real infrastructure.  Abstraction plus mapping equals virtualization, in other words.
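To make the “abstraction plus mapping” equation concrete, here’s a minimal Python sketch of the idea; the server names and the capacity-based selection rule are purely illustrative assumptions, not any particular hypervisor’s logic.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    free_cores: int

@dataclass
class VirtualMachine:
    """The abstraction: the user sees only the capacity they asked for."""
    vcpus: int
    host: Optional[Server] = None

def map_vm(vm: VirtualMachine, pool: List[Server]) -> VirtualMachine:
    """The mapping: bind the abstraction to any real server that can hold it."""
    for server in pool:
        if server.free_cores >= vm.vcpus:
            server.free_cores -= vm.vcpus
            vm.host = server
            return vm
    raise RuntimeError("no capacity in the pool")

pool = [Server("dc1-rack3-srv07", free_cores=4), Server("dc2-rack1-srv02", free_cores=16)]
vm = map_vm(VirtualMachine(vcpus=8), pool)
print(vm.host.name)   # the caller never chose a server; the mapping layer did
```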

The challenge we have isn’t acceptance of the notion of abstraction/virtualization, but the growing number of things that need to be virtualized and the even-faster-growing number of ways of looking at them.  Complex virtualization really demands a modeling system to express the relationships of the parts to the whole.  In my ExperiaSphere work on service lifecycle automation, I proposed that we model a service in two layers, “service” and “resource”, and I think we are starting to see some sense of structure in virtualization overall.

The best way to look at anything these days is through cloud-colored glasses, and the cloud offers us some useful insights into that broader virtualization vision.  “Infrastructure” in the cloud has two basic features, the ability to host application components or service features, and the ability to connect elements of applications and services to create a delivered experience.  We could visualize these two things as being the “services” offered by, or the “features” of, infrastructure.

If you decompose infrastructure, you end up with systems of devices, and here we see variations in how the abstraction/virtualization stuff might work.  On the network side, the standard structure is that a network is made up of a cooperative community of devices/elements, and that networks are committed to create connection services.  Thus, devices>networks>connection-services in progression.  On the hosting or computing side, you have a combination of network devices and servers that collectively frame a data center hardware system, which in turn hosts the platform software tools that combine to create the hosting environment.

There are already a couple of complicating factors entering the picture.  First, “devices” at the network and hosting levels can be virtualized themselves.  A “router” might be a software feature hosted in a virtual machine assigned to a pool of servers.  Second, the virtual machine hosting (or container hosting) might be based on a pool of resources that don’t align with data center boundaries, so the virtual division of resources would differ from the physical division.  Container pods or clusters or swarms are examples; they might cross data center boundaries.

What we end up with is a slightly more complicated set of layers, which I offer HERE as a graphic to make things easier to follow.  I’ve also noted the parts of the structure covered by MANO and ONAP, and by the Apache Mesos and DC/OS combination that I think bears consideration by the ONAP people.

At the bottom of the structure, we have a device layer that hosts real, atomic hardware elements.  On top of this is a virtual-infrastructure layer, and this layer is responsible for mapping between the real device elements available and any necessary or useful abstraction thereof.  One such abstraction might be geographical/facility-oriented, meaning data centers or interconnect farms.  Another might be resource-pool oriented, meaning that the layer creates an abstract pool from which higher layers can draw resources.

One easy illustration of this layer and what it abstracts is the decision by an operator or cloud provider to add a data center.  That data center has a collection of real devices in it, and the process of adding the data center would involve some “real” and “virtual” changes.  On the real side, we’d have to connect that data center network into the WAN that connects the other data centers.  On the virtual side, we would need to make the resources of that data center available to the abstractions that are hosted by the virtual-infrastructure layer, such as cloud resource pools.  The “mapping processes” for this layer might contain policies that would automatically augment some of the virtual-infrastructure abstractions (the resource pools, for example) with resources from the new data center.
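Here’s a hedged sketch of what that kind of mapping policy could look like, assuming a hypothetical virtual-infrastructure layer that keeps a simple registry of pools and runs admission policies when a data center is onboarded; none of the names or thresholds correspond to a real product.

```python
# Hypothetical virtual-infrastructure layer: onboarding a data center
# triggers policies that decide which abstract pools absorb its resources.

def onboard_data_center(dc, pools, policies):
    """Apply each policy; a policy returns the pools the data center should join."""
    for policy in policies:
        for pool_name in policy(dc):
            pools.setdefault(pool_name, []).append(dc["name"])
    return pools

# Example policies (purely illustrative).
def low_latency_policy(dc):
    return ["edge-pool"] if dc["metro_edge"] else []

def bulk_capacity_policy(dc):
    return ["general-compute-pool"] if dc["servers"] >= 500 else []

pools = {"edge-pool": ["dc-east-1"], "general-compute-pool": ["dc-east-1"]}
new_dc = {"name": "dc-west-2", "servers": 800, "metro_edge": False}

print(onboard_data_center(new_dc, pools, [low_latency_policy, bulk_capacity_policy]))
# {'edge-pool': ['dc-east-1'], 'general-compute-pool': ['dc-east-1', 'dc-west-2']}
```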

Above the virtual-infrastructure layer is the layer that commits virtual resources, which I’ll call the “virtual resource” layer.  This layer would add whatever platform software (OS and middleware, hypervisor, etc.) and parameterization are needed to transform a resource pool into a “virtual element”, a virtual component of an application or service, a virtual device, or something else that has explicit functionality.  Virtual elements are the building blocks for services, which are made up of feature components hosted in virtual elements or coerced behavior of devices or device systems.
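You could picture the virtual-resource layer as a function that combines pool capacity, platform software, and parameters into something with explicit functionality.  The sketch below is only an illustration of that idea, with invented classes and fields, not a model of any actual orchestrator.

```python
class ResourcePool:
    """Abstract pool exposed by the virtual-infrastructure layer below."""
    def __init__(self, vcpus, memory_gb):
        self.vcpus, self.memory_gb = vcpus, memory_gb

    def allocate(self, vcpus, memory_gb):
        if vcpus > self.vcpus or memory_gb > self.memory_gb:
            raise RuntimeError("pool exhausted")
        self.vcpus -= vcpus
        self.memory_gb -= memory_gb
        return {"vcpus": vcpus, "memory_gb": memory_gb}

def build_virtual_element(pool, platform, parameters, role):
    """Combine pool capacity, platform software, and parameters into a
    'virtual element' with explicit functionality (a vRouter, an app component)."""
    allocation = pool.allocate(parameters["vcpus"], parameters["memory_gb"])
    return {"role": role, "platform": platform, "allocation": allocation}

pool = ResourcePool(vcpus=256, memory_gb=1024)
vrouter = build_virtual_element(pool, ["linux", "kvm", "vpp"],
                                {"vcpus": 8, "memory_gb": 32}, "virtual-router")
print(vrouter["role"], vrouter["allocation"])
```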

If we accept this model as at least one possible layered framework for abstraction, we can also map some current projects to the layers.  ONAP and NFV MANO operate at the very top, converting virtual resources into functional components, represented in MANO by Virtual Infrastructure Managers and Virtual Network Functions.  ONAP operates higher as well, in service lifecycle management processes.

Below the ONAP/MANO activities are the layers that my ExperiaSphere stuff calls the “resource-layer models”.  In my view, the best current framework for this set of features is found in the DC/OS project, which is based on Apache Mesos.  There are things that I think are needed at this level that Mesos and DC/OS don’t provide, but I think they could be added on without too much hassle.

Let’s go back now to DC/OS and Mesos.  Mesos is an Apache cluster management tool, and DC/OS adds in features that abstract a resource cloud to look like a single computer, which is certainly a big step toward my bottom-layer requirements.  It’s also something that I think the telcos should have been looking at (so is Marathon, a mass-scale orchestration tool).  But even if you don’t think that the combination is a critical piece of virtualization and telco cloud, it demonstrates that the cloud community has been thinking of this problem for a long time.

Where I think DC/OS and Mesos could use some help is in defining non-server elements, in resource commissioning, and in data center assignment and onboarding.  The lower layer of my model, the Device Layer, is a physical pool of stuff.  It would be essential to be able to represent network resources in this layer, and it would be highly desirable to support the reality that you onboard entire data centers or racks and not just individual servers or boxes.  Finally, the management processes to sustain resources should be defined here, and coupled upward from here so they can be associated with higher-layer elements.
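As an illustration of those three requirements, here’s a sketch of a device-layer inventory in which servers and network boxes are peers, onboarding works at the rack or data center level, and each device carries a management telemetry reference that higher layers can couple to.  The record types and field names are my own invention, not anything Mesos or DC/OS defines today.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    """A physical element: servers AND network boxes live in the same layer."""
    device_id: str
    kind: str           # "server", "switch", "optical-mux", ...
    health_feed: str    # where management telemetry is published, so higher
                        # layers can couple upward to it

@dataclass
class Rack:
    rack_id: str
    devices: List[Device] = field(default_factory=list)

@dataclass
class DataCenter:
    dc_id: str
    racks: List[Rack] = field(default_factory=list)

def onboard(inventory: List[DataCenter], dc: DataCenter) -> None:
    """Onboard a whole data center (or rack) at once, not box by box."""
    inventory.append(dc)

inventory: List[DataCenter] = []
onboard(inventory, DataCenter("dc-west-2", [
    Rack("rack-01", [Device("srv-0101", "server", "telemetry/dc-west-2/srv-0101"),
                     Device("tor-0101", "switch", "telemetry/dc-west-2/tor-0101")]),
]))
print(sum(len(r.devices) for dc in inventory for r in dc.racks), "devices onboarded")
```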

I think this is a topic that needs to be explored, by the ONAP people, the NFV ISG, and perhaps the Open Compute Project, as well as Apache.  We need to have a vertically integrated model of virtualization, not a bunch of disconnected approaches, or we’ll not be able to create a uniform cloud hosting environment that’s elastic and composable at all levels.  And we shouldn’t settle for less.

The Cloud and the Battle for “Everyware”

Even in an industry, a world, committed to hype, reality always wins in the end.  Cloud computing is an example of this tenet, and what’s interesting is less the validity of the central point than the way that cloud reality is reshaping the industry.  Most interesting of all is the relationship between the cloud and open-source.

When public cloud computing first came along, I did my usual industry surveys and modeling, and a couple of key points emerged from the process.  First, no more than 24% of current applications could justifiably be ported to the cloud.  Second, over 80% of the actual opportunity for public cloud services would come from developing cloud applications that had never run elsewhere.  Finally, public cloud would never displace enterprise data centers to any significant degree.

What we are seeing in cloud computing today is a reflection of these points.  Cloud-specific applications dominate, and hybrid cloud dominates, even now.  Increased competition among cloud providers, and the constant need for differentiation, has generated a whole cloud industry of “web services” that present hosted feature add-ons to basic cloud services.  This is one of the reasons why we’re seeing cloud-specific applications.  Now the same forces are acting in the hybrid cloud area.

Hybrid clouds are a symbiotic relationship between enterprise data centers and public cloud services.  Given that, it’s obvious that somebody with a foot in both spaces would have an advantage in defining the critical connecting features, and that has benefitted Microsoft significantly.  In my surveys, Microsoft’s cloud has outperformed the competition, even though non-enterprise applications have pushed Amazon into the overall lead in public cloud services.  Amazon and Google know this, and both companies have been struggling to create a credible outpost for their own cloud services within the data center.

The obvious way to create the hybrid link to your cloud service is to offer a premises-hosted element that appears to be a part of your cloud.  Amazon has done this with Greengrass.  Google is working with Cisco to develop an open hybrid strategy, and is said to be especially under pressure to make something happen, hybrid-wise, because of Google’s third-place position in the public cloud market.  Amazon is now working its own Linux distro, Amazon Linux 2, into the game, and some say that Google is hoping Kubernetes, the popular container orchestrator that Google developed initially, will provide it with hybrid creds.  Unfortunately for Google, everyone supports Kubernetes, including Amazon and Microsoft.

While the competitive dynamic in the public cloud space, and hybrid cloud impact on that dynamic, get a lot of buzz, the biggest and longest-lasting impact of the hybrid cloud may be on “platform software”, meaning the operating system and middleware elements used by applications.  Amazon and Salesforce have made no secret of their interest in moving off Oracle DBMS software to an open platform, something that would lower their costs.  If public cloud platforms gravitate totally to open source, and if public cloud operators continue to add web services to build cloud-specific functionality that has to hybridize with the data center, isn’t it then very likely that the public cloud platform software will become the de facto platform for the hybrid cloud, and thus for IT overall?

What we’re talking about here is “cloudware”, a new kind of platform software that’s designed to be distributable across all hosting resources, offering a consistent development framework that virtualizes everything an application uses.  Hybrid cloud is a powerful cloudware driver, but working against this happy universality is the fact that cloud providers don’t want to have complete portability of applications.  If they don’t have their own unique features, then they can only differentiate on price, which creates the race to the bottom nobody particularly wants to be a part of.

It’s already clear that cloudware is going to be almost exclusively open-sourced.  Look at Linux, at Docker, at Kubernetes, at OpenStack, and you see that the advances in the cloud are already tied back to open source.  A big part of the reason is that it’s very difficult for cloud providers to invent their own stuff from the ground up.  Amazon’s Linux 2 and the almost-universal acceptance of Kubernetes for container cloud demonstrate that.  Open-source platform software is already the rule, and cloudware is likely to make it almost universal.

The biggest question of all is whether “cloudware” will end up becoming “everyware”.  Open-source tools are available from many sources, including giants like Red Hat.  Is it possible that cloudware would challenge these incumbents, and if so what could tip the balance?  It’s interesting and complicated.

At the level of broad architecture, what’s needed is fairly clear.  To start with, you need something that can virtualize hosting, modeled perhaps on Apache Mesos and DC/OS.  That defines a kind of resource layer, harmonizing the underlying infrastructure.  On top of that you’d need a platform-as-a-service framework that included operating system (Linux, no doubt) and middleware.  It’s in the middleware that the issue of cloudware/everyware hits.

Everyone sees mobility, or functional computing, or database, or whatever, in their own unique and competitively biased way.  To create a true everyware, you need to harmonize that middleware, which means that you need an abstraction layer for it just as we have for hardware or hosting.  For example, event-driven functional computing could be virtualized, and then each implementation mapped to the virtual model.
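Here’s a hedged Python sketch of that “virtual model plus mapped implementations” pattern applied to event-driven functional computing: one abstract interface, with per-provider adapters behind it.  The adapter calls are placeholders, not real provider SDK calls, and the names are invented.

```python
from abc import ABC, abstractmethod

class FunctionPlatform(ABC):
    """The virtual model: what an event-driven function service looks like
    to application code, independent of who hosts it."""
    @abstractmethod
    def deploy(self, name: str, handler: str) -> None: ...
    @abstractmethod
    def bind_event(self, name: str, event_source: str) -> None: ...

class ProviderAAdapter(FunctionPlatform):
    """Maps the virtual model onto one provider's API (calls are placeholders)."""
    def deploy(self, name, handler):
        print(f"[provider-a] create function {name} -> {handler}")
    def bind_event(self, name, event_source):
        print(f"[provider-a] subscribe {name} to {event_source}")

class OnPremAdapter(FunctionPlatform):
    """Maps the same model onto a data-center event framework."""
    def deploy(self, name, handler):
        print(f"[on-prem] register {handler} as {name}")
    def bind_event(self, name, event_source):
        print(f"[on-prem] route {event_source} events to {name}")

def ship(platform: FunctionPlatform):
    platform.deploy("fraud-check", "handlers.fraud:check")
    platform.bind_event("fraud-check", "payments/authorized")

ship(ProviderAAdapter())   # same application code,
ship(OnPremAdapter())      # two different mappings
```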

If we are evolving toward a fairly universal hybrid platform, then either that platform has to evolve from current enterprise open-source stuff like Red Hat, or emerge from the cloud.  Proponents from either camp have an opportunity to frame the universal “everyware” of the future, but they also face specific challenges to their moves to do that.

For cloud providers, the problem is lack of unity.  The cloud is not the only place applications run; it’s not even the dominant place.  Not only that, differentiation- and profit-driven moves to enhance the web services available to cloud applications create not one vision of cloudware, but a vision for every provider.  Where enterprises who think in terms of hybrid cloud confront the issue of premises data center hosting, those who think in terms of multicloud confront the diversity of implementations for basic cloud service features.

The premises players have their own special challenge, which is that the cloud changes everything, at least with respect to application architectures and developer strategies.  It’s hard to see how you could build an event-driven app in the data center unless you wanted to host stuff all over the world where your events originated.  That means that the premises players have to cede the critical future development trends to the cloud providers.

The battle to control “everyware” may be the defining issue in 2018, because it will not only establish market leadership (and maybe even survival) in both the cloud and platform software spaces, but will influence the pace of cloud adoption and application modernization.  This is the cloud’s defining issue for the coming year, and it will also play a major role in defining how we evolve to carrier cloud and hosted services.  Keep an eye on it; I know I’ll be watching!

How NFV Can Save Itself in 2018

Network Functions Virtualization (NFV) has generated a lot of buzz, but it became pretty clear last year that the bloom was off the rose in terms of coverage and operator commitment.  Does this mean that NFV was a bad idea?  Is all the work that was done irrelevant, or about to become so?  Are vendor and operator hopes for NFV about to be dashed for good?

NFV wasn’t a bad idea, and still isn’t, but the fulfillment of its potential is in doubt.  NFV is at a crossroads this year, because the industry is moving in a broader direction while the work of the ISG is getting more and more detailed and narrow.  That downward focus collides more and more with established cloud elements, which makes it redundant.  It has also opened a gap between the business case and the top-level NFV definitions, and stuff like ONAP is now filling that gap and controlling deployment.

I’ve noted in many past blogs that the goal of efficient, agile, service lifecycle management can be achieved without transforming infrastructure at all, whether with SDN or NFV.  If we get far enough in service automation, we’ll achieve infrastructure independence, and that lets us stay the course with switches and routers (yes, probably increasingly white-box but still essentially legacy technology).  To succeed in this kind of world, NFV has to find its place, narrower than it could have been but not as narrow as it will end up being if nothing is done.

The first step for NFV is to hitch its wagon to the ONAP star.  The biggest mistake the ETSI NFV ISG made was limiting its scope to what was little more than how to deploy a cloud component that happened to be a piece of service functionality.  A new technology for network-building can never be justified by making it equivalent to the old ones.  It has to be better, and in fact a lot better.  The fact is that service lifecycle automation should have been the goal all along, but NFV’s scope couldn’t address it.  ONAP has a much broader scope, and while (as its own key technologists say) it’s a platform and not a product, the platform has the potential to cover all the essential pieces of service lifecycle automation.

NFV would fit into ONAP as a “controller” element, which means that NFV’s Management and Orchestration (MANO) and VNF Manager (VNFM) functions would be active on virtual-function hosting environments.  The rest of the service could be expected to be handled by some other controller, such as one handling SDN or even something interfacing with legacy NMS products.  Thus, ONAP picks up a big part of what NFV doesn’t handle with respect to lifecycle automation.  Even though it doesn’t do it all, ONAP at least relieves the NFV ISG of the requirements of working on a broader area.
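The division of labor I’m describing might look something like the sketch below, where lifecycle automation dispatches each element of a service model to whichever controller claims it.  This is not ONAP’s actual controller API; it’s just an illustration of the plug-in relationship, with invented names.

```python
from abc import ABC, abstractmethod

class Controller(ABC):
    """Lifecycle automation delegates domain-specific work to controllers."""
    @abstractmethod
    def handles(self, element: dict) -> bool: ...
    @abstractmethod
    def deploy(self, element: dict) -> None: ...

class NfvController(Controller):
    """Covers the MANO/VNFM role: hosted virtual functions only."""
    def handles(self, element): return element["type"] == "vnf"
    def deploy(self, element): print("instantiate VNF:", element["name"])

class LegacyNmsController(Controller):
    """Everything NFV doesn't cover, e.g. coercing existing devices via an NMS."""
    def handles(self, element): return element["type"] == "legacy"
    def deploy(self, element): print("configure device via NMS:", element["name"])

def deploy_service(model, controllers):
    for element in model:
        next(c for c in controllers if c.handles(element)).deploy(element)

deploy_service(
    [{"type": "vnf", "name": "vFirewall"}, {"type": "legacy", "name": "edge-router-9"}],
    [NfvController(), LegacyNmsController()],
)
```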

The only objections to this step may come from vendors who want to push their own approaches, or from some operators who have alternative open-platform aspirations.  My advice to both groups is to get over it!  There can be only one big thrust forward at this point, and it’s ONAP or nothing.

The second step for NFV is probably going to get a lot of push-back from the NFV ISG.  That step is to forget a new orchestration and management architecture and focus on adapting cloud technology to the NFV mission.  A “virtual network function” is a cloud component, period.  To the greatest extent possible, deploying and sustaining them should be managed as any other cloud component would be.  To get to that point, we have to divide up the process of “deployment” into two elements, add a third for “sustaining”, and then fit NFV to each.

The first element is the actual hosting piece, which today is dominated by OpenStack for VMs or Docker for containers.  I’ve not seen convincing evidence that the same two elements wouldn’t work for basic NFV deployment.

The second element is orchestration, which in the cloud is typically addressed through DevOps products (Chef, Puppet, Heat, Ansible) and with containers through Kubernetes or Marathon.  Orchestration is about how to deploy systems of components, and so more work may be needed here to accommodate the policy-based automation of deployment of VNFs based on factors (like regulations) that don’t particularly impact the cloud at this point.  These factors should be input into cloud orchestration development, because many of them are likely to eventually matter to applications as much as to services.
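A sketch of the kind of policy-based placement I mean appears below: regulatory constraints prune the candidate hosts before ordinary capacity-based scheduling runs.  The policies, field names, and the simple “most free vCPUs” choice are assumptions for illustration, not features of any DevOps or orchestration tool.

```python
# Sketch of policy-filtered placement: regulatory constraints prune the
# candidate hosts before ordinary capacity-based scheduling runs.

def place(vnf, hosts, policies):
    candidates = [h for h in hosts if all(p(vnf, h) for p in policies)]
    if not candidates:
        raise RuntimeError(f"no compliant host for {vnf['name']}")
    return max(candidates, key=lambda h: h["free_vcpus"])   # simple capacity choice

def data_sovereignty(vnf, host):
    return host["country"] in vnf["allowed_countries"]

def lawful_intercept(vnf, host):
    return host["li_capable"] or not vnf["needs_lawful_intercept"]

hosts = [
    {"name": "dc-fr-1", "country": "FR", "free_vcpus": 40, "li_capable": True},
    {"name": "dc-us-3", "country": "US", "free_vcpus": 96, "li_capable": False},
]
vnf = {"name": "vEPC-UP", "allowed_countries": {"FR", "DE"}, "needs_lawful_intercept": True}
print(place(vnf, hosts, [data_sovereignty, lawful_intercept])["name"])   # dc-fr-1
```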

The final element is the management (VNFM) piece.  Cloud application management isn’t as organized a space as DevOps or cloud stacks, and while we have this modern notion of intent-modeled services, we don’t really have a specific paradigm for “intent model management”.  The NFV community could make a contribution here, but I think the work is more appropriately part of the scope of ONAP.  Thus, the NFV people should be promoting that vision within ONAP.
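To show the kind of paradigm I think is missing, here’s a minimal sketch of “intent model management”: each element advertises its intended state, reports whether it’s being met, and remediation is driven by the gap rather than by device-specific alarms.  The class and fields are hypothetical.

```python
# A minimal sketch of intent-model management: act on the gap between the
# intended and observed state, not on box-level alarms. Names are illustrative.

class IntentModeledElement:
    def __init__(self, name, intended):
        self.name = name
        self.intended = intended          # e.g. {"instances": 3, "max_latency_ms": 20}
        self.observed = dict(intended)    # starts conformant

    def conformant(self):
        return self.observed == self.intended

    def remediate(self):
        print(f"{self.name}: redeploying to restore intent {self.intended}")
        self.observed = dict(self.intended)

def manage(elements):
    for e in elements:
        if not e.conformant():
            e.remediate()

fw = IntentModeledElement("vFirewall", {"instances": 3, "max_latency_ms": 20})
fw.observed["instances"] = 2              # an instance failed somewhere below
manage([fw])                              # management acts on the intent gap
```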

The next element on my to-do-for-NFV list is to think outside the virtual CPE.  NFV quickly got obsessed with the vCPE application, service chaining, and other things related to that concept.  This has, in my view, created a huge disconnect between NFV work and the things NFV will, in the long term, have to support to be successful.

The biggest problem with vCPE is that it doesn’t present a credible benefit beyond business services.  You always need a box at the point of service termination, particularly for consumer broadband where WiFi hubs combine with broadband demarcations in virtually every case.  Thus, it’s difficult to say what you actually save through virtualization.  In most current vCPE business applications, you end up with a premises box that hosts functions, not cloud hosting.  That’s an even more specialized business case, and it doesn’t drive the carrier cloud deployment that’s critical for the rest of NFV.

Service chaining is another boondoggle.  If you have five functions to run, there is actually little benefit in having the five separately hosted and linked in a chain.  You are now dependent on five different hosting points and all the connections between them; lose any of them and you get a service interruption.  Why not create a single image containing all five features?  If any of the five break, you lose the connection anyway.  Operations and hosting costs are lower for the five-combined strategy than for the service-chain strategy.  Give it up, people!
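The arithmetic behind that argument is easy to work through.  Assuming, purely for illustration, 99.9% availability per hosting point and 99.95% per inter-function connection, a five-element chain loses most of a “nine” relative to a single combined image:

```python
# Worked example of the chaining argument: a serial chain's availability is
# the product of its parts. The 99.9%/99.95% figures are assumptions, not data.

def chain_availability(n_functions, host_avail=0.999, link_avail=0.9995):
    return (host_avail ** n_functions) * (link_avail ** (n_functions - 1))

def combined_availability(host_avail=0.999):
    return host_avail          # five features in one image: one hosting point

print(f"5-element chain: {chain_availability(5):.4%}")    # ~99.30%
print(f"combined image:  {combined_availability():.4%}")  # 99.90%
```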

The beyond-vCPE issue is that of persistence and tenancy.  Many, perhaps even most, credible NFV applications are really multi-tenant elements that are installed once and sustained for a macro period.  Even most single-tenant NFV services are static for the life of the contract, and so in all cases they are really more like cloud applications than like dynamic service chains.  We need to have an exploration of how static and multi-tenant services are deployed and managed, because the focus has been elsewhere.

We have actually seen some successful examples of multi-tenant service elements in NFV already; Metaswitch’s implementation of IMS comes to mind.  The thing that sets these apart from “typical” NFV is that you have a piece of service, an interface, that has to be visible in multiple services at the same time.  There has to be some protection against contamination or malware for such services, but there also has to be coordination in managing the shared elements, lest one service end up working against the interests of others.

Nothing on this list would be impossible to do, many items wouldn’t even be difficult, and all are IMHO totally essential.  It’s not that a failure to address these points would cause NFV to fail as a concept, but that it could make the NFV specifications irrelevant.  That would be a shame, because a lot of good thinking and work has gone into the initiative to date.  The key now is to direct both past work and future efforts toward results that move the ball for the industry as a whole, not just for NFV as an atomic activity.  That’s going to be a bitter pill for some, but it’s essential.

The Driving Technologies for Network Operators in 2018

If you’re a tech analyst, you just have to do a blog on what to expect in the coming year, no matter how overdone the topic might be.  OK, here’s mine.  What I want to do is look at the most significant trends and issues, the ones that will shape the market for years to come and also establish vendor primacy in key product areas.  I’ll also make note of a couple things that I don’t expect to happen.  Obviously, this is a prediction, but it’s based on modeling and information from the buyers and sellers.

The top issue on my list is SD-WAN.  It’s important because it’s the first broad initiative that separates “services” from “infrastructure”, and that’s critical for transformation of the operator business model.  While the term doesn’t have quite the vague-but-positive meaning we’ve come to know and love from other tech developments, there are still significant differences in the meaning, application, and implementation of the concept.  In 2018, the pressures of a developing market are likely to start narrowing the field in all respects.

SD-WAN is an “overlay” technology whether or not the specific implementation uses traditional overlay tunnel technology.  You build an SD-WAN over top of some other connection technologies, most often VLANs, VPNs, and the Internet, so it’s not strictly a “transformation” technology with respect to infrastructure.  What an SD-WAN network offers is an independent service layer, a totally controllable address space on top of any supported combination of connections, in any geography.  Because you can support most SD-WAN technologies with a software agent, you can add cloud hosts to the SD-WAN without trying to convince cloud providers to build your boxes into their infrastructure.
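Conceptually, an SD-WAN edge (or the software agent running in a cloud VM) just maps overlay destinations in that private address space onto whatever underlay transport is available, as in the sketch below.  The addresses, transports, and the simplified /24 lookup are illustrative assumptions, not any vendor’s forwarding logic.

```python
# Sketch of the overlay idea behind SD-WAN: the service layer sees one private
# address space, and each edge (or cloud-hosted agent) maps overlay
# destinations to whatever underlay transport happens to be available.

OVERLAY_FORWARDING = {
    # overlay prefix          (underlay transport, remote endpoint)
    "10.200.1.0/24": ("mpls-vpn",  "branch-nyc-edge"),
    "10.200.2.0/24": ("internet",  "vpc-agent-us-east"),   # cloud workload via agent
    "10.200.3.0/24": ("broadband", "branch-home-office"),
}

def forward(dest_ip: str) -> tuple:
    """Pick the underlay path for an overlay destination (longest-prefix match
    is simplified to a /24 lookup here)."""
    prefix = ".".join(dest_ip.split(".")[:3]) + ".0/24"
    try:
        return OVERLAY_FORWARDING[prefix]
    except KeyError:
        raise RuntimeError(f"no overlay route for {dest_ip}") from None

print(forward("10.200.2.17"))   # ('internet', 'vpc-agent-us-east') -- the app
                                # never knows which underlay carried the packet
```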

The concept of overlay tunnels has been around for a long time, of course, so it’s not so much the technology that’s going to make a difference in 2018, it’s the application.  Business services are an easier target for competing access and service providers, because you can sell to businesses more easily.  Try selling home broadband door to door and you’ll see what I mean.  Managed Service Providers have already gotten the message, but in the last quarter of 2017 it became clear that the big news is going to be competitive access providers, including cable companies.  SD-WAN, for them, can generate both an in-area business service model without having to redeploy infrastructure, and a global service footprint exploiting someone else’s access.  This is an irresistible combination.

SD-WAN isn’t just for business services, either.  You can use overlay technology for any connectivity mission, for video delivery, for a form of slicing of infrastructure, and as the basis for cloud multi-tenancy.  At least a couple SD-WAN vendors are already seeing that broader set of opportunities, and I think that’s the tip of the iceberg.

One reason is competitive pressure.  SD-WAN is a sure bet for cable companies or any operator who has national/international service aspirations and a limited access network geography.  We can also already see signs that SD-WAN will be a greater part of telco service plans.  For the telcos, it also offers the opportunity to simplify infrastructure, lower business service costs, and exploit consumer-level broadband access for smaller branch locations, particularly where there aren’t a lot of business customers and justifying carrier Ethernet is harder.  By creating an architectural separation between services and infrastructure, SD-WAN makes both more agile, and facilitates a lot of the other market-shaping technologies for 2018.  If even one significant group of operators gets the point, everyone else will follow.

Despite the critical value of the technology, winning in the SD-WAN space in 2018 may not be as easy as just tossing a product out there and waiting for someone to notice.  Operators have a different agenda for SD-WAN.  They might want to integrate it with NFV and virtual CPE, for example.  They certainly want to have as much management automation as possible.  They’ll need to be able to link it to current business services, perhaps MPLS, perhaps VLAN, perhaps both.  They will probably want to look at having “interior” network elements that work with edge elements, because that offers them a differentiator.  They may also want to avoid products that have focused on selling direct to enterprises, since these would obviously not offer operators much opportunity.

The next market-shaper in 2018 is zero-touch automation of service and infrastructure processes.  We have been dabbling around the edges of this since 2012, but it’s become clear (again, mostly in the last quarter) that things are finally getting serious.  The TMF has worked on the issue for over a decade, and they have good engagement with the CIOs in operators, but they apparently haven’t been able to move the ball as much as operators want.  If you read the white paper that was issued by the ETSI Zero-touch network and Service Management ISG (ZSM ISG), you’ll see that it overlaps a lot of the TMF ZOOM stuff, and it creates a kind of functional overlay on the NFV ISG.

Technically, zero-touch automation is a collision between the need to support diverse goals and the need to do it with a single technology, a single architecture.  We have operations people who focus on the infrastructure, OSS/BSS people who focus on services and administration, and CTO people who do the standards and propose future technology options.  We somehow have to blend the personalities, and the areas they champion, into a single model.  Since we’ve been gradually developing bottom-up stuff in all these areas for years, you can understand how that blending might pose a challenge.

In fact, the biggest challenge the ZSM ISG will have to surmount is standards.  If this is another standards process, it will create media buzz, attract support, spend forever getting tiny details straight, miss the big picture, and eventually lose the audience.  On the other hand, if this body starts looking like a software engineering case study that loses touch with the problem set, it will end up disconnected from the goals that have driven operators to create the group in the first place.  It’s a delicate balance, one that no standards body in two decades has been able to strike.  I can’t promise it will be struck by the ZSM ISG, but by the end of the year we’ll know whether this initiative has failed.  If it does fail, then I think we’ve seen the end of relevant standards for software-centric services.

This is another challenging space for vendors.  Operators have a growing preference for open-source tools in the service lifecycle automation space, which limits commercial opportunity.  They also want full-spectrum solutions rather than components, so it might be wise for any player in the space to look at how they might integrate with ONAP/ECOMP.  That could avoid having to develop a flood of add-on tools and elements, maintain compatibility with vendor offerings, support SDN and NFV…you get the picture.

And speaking of open-source, guess what our next market-shaper is?  Operators have been divided for some time on just how they advance their own cause.  Standards bodies end up dominated by vendors because there are more of them, and because vendors build the products the networks have to be built from.  Operators are generally unable, for anti-trust reasons, to form operator-only bodies or even bodies where operators dominate.  There’s been operator interest in open-source software for service management for at least ten years that I’m aware of (I was a member of a TMF open-source group that was launched to address operator interest in the topic back in 2008).  While open-source is a market-shaper, the real insight behind this is AT&T’s “ah-ha!” moment.

AT&T, I believe, recognized that even open-source wasn’t going to do the job, because vendors would dominate open-source projects as easily as standards activities.  Their insight, which we can see in how their ECOMP service management software developed and how their “white-box OS” is currently developing, was to do a big software project internally, then release it to open-source when it’s largely done.  Vendors are then faced with either spending years trying to pervert it, or jumping on board and reaping some near-term benefits.  You can guess what’s going to happen in 2018.

This isn’t going to be a play for the small vendors, unless you want to build into an architecture like ONAP/ECOMP.  The buy-in to participate in the essential industry forums and attend all the trade shows and events is considerable in itself, and then you have to be able to sustain the momentum of your activity.  Historically, most open-source has been driven by vendors who didn’t want to try to sustain critical mass in proprietary software, but recently there has been growing operator interest.  Operators want to build something internally, then open-source it, which limits the commercial software opportunity in any space operators might target with open-source.  Watch this one; it could make you or kill you.

The final market-shaper for 2018 is 5G-and-FTTN broadband services.  While we’ve had a number of technical advances in the last five years that raise the speed of copper/DSL, we can’t deliver hundred-meg broadband reliably from current remote nodes, even fed by fiber.  If there’s going to be a re-architecting of the outside plant for “wireline” broadband, it has to be based on something with better competitive performance.  That’s what makes 5G/FTTN look good.  Early trials show it can deliver 100-meg-or-better in many areas, and probably could deliver at least a half-a-gig with some repositioning or adding of nodes.  It puts telcos on a level playing field with respect to cable CATV technology, even with recent generations of DOCSIS.  Competition, remember?

The important thing about the 5G/FTTN hybrid is that it might be the technical step that spells the end of linear-RF TV delivery.  Cable operators have been challenged for years trying to address how to allocate CATV spectrum between video RF and IP broadband.  5G/FTTN raises the stakes in that trade-off by giving telcos even more broadband access capacity to play with, and if we see significant movement in the space in 2018, then we should expect to see streaming supplant linear RF for TV.

The downside for 5G/FTTN may be the rest of 5G.  Operators I’ve talked with rate the 5G/FTTN-millimeter wave stuff their top priority, followed by the 5G New Radio (NR) advancements.  There’s a lot of other 5G stuff, including federating services beyond connection, network slicing, and so forth.  None of these get nearly the backing in the executive suites of operators, though of course the standards types love them all.  Will the sheer mass of stuff included in 5G standards weigh down the whole process?  It seems to me that the success of any piece of 5G in 2018 will depend in part on how easily it’s separated from the whole.

How do you play 5G?  In 2018, anyone who thinks they can make a bundle on anything other than 5G/FTTN is probably going to be seriously disappointed, but other than the major telco equipment vendors in both the RAN/NR and fiber-node space, vendors will be well advised to look for adjunct opportunities created by 5G.  Video could be revolutionized, and so could business services, and 5G/FTTN could be a major driver for SD-WAN too.  A symbiotic ecosystem might evolve here, in which case that ecosystem could create most of the 2018 and even 2019 opportunity.

Now for a few things that will get a lot of attention in 2018 but won’t qualify as market-shapers.  I’ll give these less space, and we may revisit some of them in 2019.

The first is carrier cloud.  I’m personally disappointed in this one, but I have to call things as they seem to be going.  My model never predicted a land-rush carrier cloud opportunity in 2018; it said we’d add no more than about 1,900 data centers, largely due to the fact that the main drivers of deployment would not have hit.  Up to 2020, the opportunity is driven mostly by video and ad caching, and the big growth in that won’t happen until 5G/FTTN starts to deploy in 2019.  We will see an uptick in data centers, but probably we’ll barely hit my model forecast.  Check back next year!

Next is net neutrality, where the FCC has decided it will not play a significant role in enforcement.  There is talk about having the courts reverse the FCC, or Congress changing the legislative framework that the FCC operates under, restoring neutrality.  Possible?  Only very dimly.  The courts have affirmed the FCC’s right to decide which “Title” of the Communications Act applies to ISPs, and they will likely do so again here.  Without Title II, the same courts ruled the FCC lacks the authority to impose the neutrality rules.  Congress has never wanted to take a role in setting telecom policy, and in any event the same party controls Congress as controls the FCC.  The order will likely stand, at least until a change in administration.  How operators will react to it is also likely to be a yawn in 2018; they’ll wait to see whether there’s any real momentum to change things back, and won’t risk adding to it.

Another miss in 2018 is massive SDN/NFV deployment.  Yes, we have some of both today, and yes, there will be more of both in 2018, but not the massive shift in infrastructure that proponents had hoped for.  Operators will not get enough from either SDN or NFV to boost profit-per-bit significantly.  Other market forces could help both SDN and NFV in 2019 and 2020, though.  We’ll get to that at the end of next year, of course.  The fact is that neither SDN nor NFV were likely to bring about massive transformational changes; the limited scope ensures that.  Operators are already looking elsewhere, as I note earlier in this blog.  Success of either SDN or NFV depends on growth in the carrier cloud, and 2018 is too early to expect much in that area.

Were we to see rapid movement in all of these market-shaping technologies, we could expect 2018 to be a transformation land-rush.  Even two of them would likely produce a boom in the industry for at least some of the players, and a single one would be enough to change the game noticeably and set things up for 2019.  In my own view, we should look at 2018 in just those terms—we’re teeing up technology for full exploitation in 2019 and 2020.  Don’t let that futurism lull you into delay, though.  The winners in 2019 and 2020 will almost surely be decided next year, and you’re either in that group by year’s end, or you’re at enormous risk.

I wish all of you a Happy and prosperous New Year.