Why SDN and NFV Need an Operationalization Model

We’ve gone through a whole series of industry events that swirl around the notion of the next-gen network.  I’ve blogged a bit about the TMF, NFV, SDN, and fiber conferences, and as people comment on the LinkedIn posts relating to the blogs it’s interesting how often the discussions end up on the topic of “operationalization”.  This is a term I use (I don’t know if it’s fair to say I invented it as some have told me; I never checked) to describe the tuning of network technology to suit modern operations requirements.  Every network revolution is an operations revolution, or it should be.  That’s not been happening, and that’s a major and almost universal disconnect that all our hero technologies have to address, or they fail.

Changes in operations are inevitable when you change what a service is.  In the old days, services were created by deploying service-specific technology.  Manage the boxes, manage the services.  Billing, provisioning, and all of the human and business processes of operators could drive to what was basically a singular destination—a lineman or a truck roll or a provisioning task.  Order of service, order of network.  IP convergence broke the 1:1 notion of services/networks because you now had an increased inventory of generalized infrastructure that handled basic connectivity and transport and then a set of service silos that imposed the features per service.

This is the point where “service management” and “network management” took their separate routes.  Interestingly, OSS/BSS didn’t take either of the service/network paths, it stayed focused on the administrative processes of networking.  This is why, IMHO, the TMF came up with the concepts of “Product Domains” and “Service Domains” and “Resource Domains”; operations processes now needed to be a bit multi-personality at some level because of the diverging notion of service and network.

Most operators have successfully glued the administrative, service, and network processes together in an adequate sense, but nearly all operators have been telling us all along that their accommodations haven’t been efficient.  Some operators take weeks to provision a simple VPN, and most operators will say that the process takes at least ten times as long as it should.  They also say that their overall operations costs are far higher than they can tolerate, and they view them as being far higher than they need to be.  So arguably the same pressures that are driving things like SDN and NFV—which are pressures to reduce management costs at a pace at least as fast as revenue per bit is falling—should be driving operations modernization.  They aren’t, or at least have not been.

All that is coming to a head now because cloud computing services, software-defined networking, and network functions virtualization all incorporate the critical concept of virtualization.  A virtual environment manipulates abstractions that convert to resource assignments as needed.  This breaks another level of coupling between services and networks, and also threatens the administrative operations relationship with both.  This is because services are now defined as relationships among abstract things, while only real things can carry traffic and earn revenue.

To me, there’s a logical truth here.  Administrative and business processes for operators are focused on manipulating service relationships.  Service relationships in the virtual network of the future are based on orchestrated high-level functional abstractions that create the services.  These abstractions are then converted into resource commitments by a second-level process.  So there are multiple levels of orchestration implicit in next-gen virtualization-based networks.  SDN defines, or at least should define, how the functional abstractions that represent connectivity and transport are realized on infrastructure.  NFV could, or should, define how functional abstractions of any sort are realized by hosting software components and interconnecting them.

But anything that’s committing shared resources to specific service missions is also going to have a problem of management visibility.  You have to record the resource-to-service relationships you’ve created when you orchestrate something or you can’t provide resource state to service consumers.  Even knowing the address of a shared resource MIB isn’t enough, though, because 1) you have to protect resource MIBs from commands that would alter their functional state relative to other users, and 2) you have to somehow present a MIB for the abstract object that the orchestration created because that’s what the management system thinks is being managed, however the connection is made.  You could never reflect the resource details of an NFV deployment of a firewall to a management system for firewalls; it wouldn’t know what to do with the variables.
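The binding-recording point can be sketched in a few lines of Python.  This is a minimal sketch, with class names and variable names I’ve invented for illustration (not drawn from any NFV or SNMP specification): orchestration records which resource realizes which abstract object, and the management view it exposes is a filtered, read-only projection of the resource MIB, never the shared MIB itself.

```python
# Sketch: record resource-to-service bindings at orchestration time and
# expose a read-only, filtered management view of the abstract object.
# All names here are illustrative, not from any NFV specification.

class ResourceMIB:
    """Stands in for a shared resource's real management variables."""
    def __init__(self, variables):
        self.variables = dict(variables)

class Binding:
    """Records which resource variables realize one abstract object."""
    def __init__(self, service, abstract_object, resource, visible_vars):
        self.service = service
        self.abstract_object = abstract_object
        self.resource = resource
        self.visible_vars = visible_vars  # what this service may see

    def management_view(self):
        # Project only the variables meaningful to the abstract object;
        # the shared-resource MIB is never exposed or writable.
        return {v: self.resource.variables[v] for v in self.visible_vars}

# A shared host runs VNFs for many services; each service sees only a
# "firewall-shaped" view, not the host's full state.
host = ResourceMIB({"cpu_load": 0.4, "tenant_count": 12, "fw_sessions": 310})
fw = Binding("vpn-svc-001", "virtual-firewall", host, ["fw_sessions"])
print(fw.management_view())
```

The shared host’s `cpu_load` and `tenant_count` never leak into the service’s view, which is the MIB-protection point in miniature.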

When we consider all of this, it’s hard not to assert that there can’t be something like “NFV orchestration” or “NFV management” except in the context of a higher-layer set of orchestration and management processes.  One path to that goal is for the ISG to define a model of orchestration and management that, because it’s “virtual”, can envelop real devices or real management/control interfaces as much as virtual functions.  Another path is for another body to publish a higher-level model that can wrap around NFV.

I think that a higher-level management model has to start with the notion of “objects” that represent our functional abstractions.  These abstractions could represent NFV deployments, legacy control interfaces, even legacy devices.  They could also represent collections of lower-level objects, so you could build up a service by assembling functional components at several levels.  The TMF has envisioned that in its notion of “Resource-Facing” and “Customer-Facing” services that can be orchestrated in a structured way—in theory.  This orchestration has to not only decompose the object, it has to record the relationships—the “bindings”—between the components of each object, down to the atomic resource connections.  Then it has to create some management image of each object that makes sense to a management system.  Why?  Because you can’t traverse management nodes in a problem-determination process if some of the places where function becomes structure are totally opaque.  What is the state of a composite object, no matter how that object is created?  It’s the composite state of the components of that object, so that has to be known, and known explicitly, or you are N..O..W..H..E..R..E.  Which is where I assert SDN and NFV strategies generally are today.  Till we get this right, we are just dabbling in virtualization, and we’re being naïve if we believe anyone will deploy SDN and NFV on a large scale.
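The “composite state” idea reduces to a simple recursive rule, which the sketch below illustrates.  The object tree and state values are invented for the example; the point is only that a composite’s state is derived explicitly from its components, never assumed.

```python
# Sketch: a composite object's state is derived from its components'
# states. The "objects" are illustrative functional abstractions, not
# TMF or ETSI-defined types.

def composite_state(obj):
    """An object is either a leaf with a state string, or a dict of
    child objects. A composite is 'up' only if every component is."""
    if isinstance(obj, str):                 # leaf resource state
        return obj
    child_states = [composite_state(c) for c in obj.values()]
    if all(s == "up" for s in child_states):
        return "up"
    if any(s == "down" for s in child_states):
        return "down"
    return "degraded"

service = {
    "access": "up",
    "vpn-core": {"tunnel-a": "up", "tunnel-b": "degraded"},
}
print(composite_state(service))   # degraded: one binding is impaired
```

If the bindings between levels aren’t recorded, there is no tree to walk, and no way to compute that answer at all.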

NFVI: We Need Something to Run On!

NFV has two “visible” components—the virtual network functions that provide service features and the management and orchestration logic that deploys and sustains them.  It also has a critical, complicated, often-ignored third function, which is the network functions virtualization infrastructure (NFVI).  NFVI is the pool of resources that VNFs and MANO can exploit, and its contents and its relationship with the rest of NFV could be very important.  I’ve blogged a bit on elements of NFVI like data path acceleration, and now I want to look at the topic more broadly.

The first point on NFVI is that it’s not clear what’s in it.  Obviously whatever hosts VNFs must be part of NFVI, and so must the resources to connect among the VNFs in a service.  But even these two statements aren’t as definitive as they seem.

Operators tell me that they intend to host VNFs in the cloud (100%), on virtualized server pools not specifically part of the cloud (94%), and on bare server metal (80%).  All of these hosting resources are generally accepted as being part of NFVI.  However, almost three-quarters of operators also say they expect to host VNFs on network devices—either on boards slipped into device chassis slots or directly on a device or chip.  There’s a bit more ambiguity as to whether these additional hosting resources are valid NFVI, and that could have significant consequences.

One such consequence relates to the “connect among VNFs” mission.  If everything in NFVI is a cloud element then cloud network-as-a-service technology can provide connectivity among the VNFs.  If we presume that some of the stuff we’re “stuffing” into NFVI isn’t cloud-driven, we have to ask at the minimum how we’d drive the connectivity.  OpenStack Neutron connects among cloud elements, but what connects without a cloud?  But “Neutron-or-No-Neutron” is less a question than the question of legacy network elements, a question that hits you right between the eyes as soon as you say that NFVI includes anything that’s on or in a network device.

Even if we were to make an edge router into a cloud element by running Linux on a board, installing KVM, and supporting OpenStack, you’d still have the problem of the gadget being somewhere out there, separated from traditional virtual-network interconnection capabilities available inside a cloud data center.  This configuration, which would be typical for edge-hosted services that extend business VPNs, demands that the connection between edge-hosted VNFs and any interior VNFs be made over “normal” infrastructure.  It’s not a simple matter of making the cloud board a member of the customer VPN, because that would make it accessible and hackable, and it would also raise the problem of how it is managed as a VPN edge device when other service VNFs are presumably inside the carrier cloud.

Even this isn’t the end of it.  It’s almost certain that even cloud-hosted VNFs will have to be integrated with legacy service elements to create a complete user offering.  If VNF MANO is fully automated and efficient and the rest of the service operationalization processes are still stuck in the dark ages, how do we achieve any service agility or operating efficiencies?  There’s a simple truth few have accepted for NFV, and that is that everything that has to be orchestrated has to be managed as a cooperative unit to ensure that the cooperation that was set up is sustained through service life.

All of this is daunting enough, but there’s more.  One of the things I learned through two projects building service-layer structures (ExperiaSphere and CloudNFV) is that “cooperative systems” being deployed are downright uncooperative sometimes.  Linux has versions, Python has versions, OpenStack has versions, we have guest OS and host OS and hypervisors and accelerators.  Every piece of platform software is a puzzle element that has to be assembled, and any time something changes you have to fit things together again.  People involved in software design and deployment know that, and apply structured principles (application lifecycle management, or ALM) to this, but systemic applications like NFV add complexity to the problem because their platforms will surely evolve (at least their components will) during service operation.

I had a recent call with Wind River about this, and I think they have a good idea.  Wind River is best known for embedded software technology, but they have this thing they call a “Carrier-Grade Communications Server” that’s a platform for NFV and is designed to address three very important NFVI requirements.  First, stability.  Wind River proposes to advance the whole platform software stack in a coordinated way so that the underlying elements will always stay in sync with each other.  Second, optimization.  Things like OpenStack and the data plane are hardened in terms of availability and accelerated in CGCS, which can make a huge difference in both performance and in uptime.  Third, universality.  Wind River supports any guest OS (even Windows), so they don’t impose any constraints on the virtual-function machine images or other management and orchestration components that will have to run on the platform.

You could integrate your own NFV platform, but I suspect that anyone who’s not done this already doesn’t know what they’re in for.  NFVI is going to require that a distro-style strategy be implemented—each component is going to have to have a formal “release level”, whether it’s a software component or a physical device or switch.  That release level has to mate functionally with the things that are expected to use the element, and coordinated changes to release level and partner control functions will be critical if NFV is to run at acceptable levels of performance and availability.

There’s still a major open question on how NFVI relates to the rest of NFV, and one of my questions is whether Wind River might build a layer on top of the integrated platform it now calls CGCS to add in at least some form of what the ETSI activity calls the “Virtual Infrastructure Management” function.  I also wonder if they might provide tools to harmonize the management of each orchestrated, cooperating set of components in a service.  The further they go with CGCS in a functional sense, the more compelling their strategy will be to operators…and even to enterprises committed to cloud computing as a future IT framework.

Taking a Deeper Look at “Orchestration”

I made a comment in an earlier blog about “orchestration” in the context of service chaining, and a few people messaged me saying they thought the whole orchestration topic was confusing.  They’re right, and since it’s very important for a wide range of things in tech these days this is a good time to try to organize the issues and solutions.  Wikipedia says orchestration is “the automated arrangement, coordination, and management of complex computer systems, middleware, and services.”  In this definition, we can see “orchestration” intersecting with four pretty important and well-known trends.

At the highest level, orchestration is the term that’s often applied to any process of automated deployment of application or service elements.  The popular “DevOps” or “development/operations” process integration of application development and lifecycle management is orchestration of components during deployment.  In fact, anyone who uses Linux likely uses scripting for orchestration.  It’s a way of doing a number of complicated and coordinated tasks using a simple command, and it reduces errors and improves efficiency.

Orchestration came into its own with virtualization and the cloud, because any time you increase the dynamism of a system you increase the complexity of its deployment and management.  When you presume applications made up of components, and when you define resources abstractly and then map them as needed, you’re creating systems of components that have nothing much in the way of firm anchors.  That increases complexity and the value of automation.

From a populism perspective, component orchestration, virtualization, and the cloud make up the great majority of orchestration applications.  Despite this, most network people probably heard the term first in connection with the network functions virtualization initiative, now an ETSI industry specification group.  In the published end-to-end model for NFV, the ISG has a component called “MANO” for “management and orchestration”, and even MANO is starting to enter the industry lexicon.  After all, what vendor or reporter is going to ignore a new acronym that has some PR legs associated with it?

Down deep inside, though, this is still about “the automated arrangement, coordination, and management of complex computer systems, middleware, and services,” so orchestration can be broken down into three pieces.  First, you have to deploy (“arrange”) the elements of the thing you’re orchestrating, meaning you have to make them functional on a unit level.  Second, you have to connect them to create a cooperating system and parameterize them as needed to secure cooperative behavior (“coordination”).  Finally, you have to sustain the operation of your complex system through the period of its expected life (“management”).
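Those three phases can be sketched in a few lines of Python.  This is a toy skeleton with class and method names I’ve invented for the illustration—nothing here is drawn from the ETSI MANO specification—but it shows the shape of arrange/coordinate/manage as distinct, ordered steps over one shared record of what was deployed.

```python
# Sketch of the three orchestration phases: arrange (deploy on a unit
# level), coordinate (connect and parameterize), manage (sustain).
# All names are invented for illustration.

class Orchestrator:
    def __init__(self):
        self.deployed = {}

    def arrange(self, components):
        # Unit-level bring-up of each element.
        for name in components:
            self.deployed[name] = {"status": "deployed"}

    def coordinate(self, connections):
        # Bind the units into a cooperating system.
        for a, b in connections:
            self.deployed[a].setdefault("links", []).append(b)
            self.deployed[b].setdefault("links", []).append(a)

    def manage(self):
        # Sustaining phase: in real life an event loop over the service
        # lifetime; here, a single health check over the same record.
        return all(c["status"] == "deployed" for c in self.deployed.values())

orch = Orchestrator()
orch.arrange(["vFirewall", "vNAT"])
orch.coordinate([("vFirewall", "vNAT")])
print(orch.manage())   # True
```

Note that `manage` can only work because it reads the same record that `arrange` and `coordinate` wrote—which is exactly the management-needs-to-know-what-deployment-did point made later in this piece.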

Orchestration processes we know about can be divided into two categories—script-based and model-based.  Script-based orchestration is process-based in that it takes the set of steps needed to deploy, connect, and sustain a system and collects them so they can be played back when needed and so that they can reflect dynamic changes along the execution path.  For example, if you deploy a given component to a VM, you must “save” that VM identity to reference the component there at a later point in parameterization and connection.  You can see from this simple point that process-based orchestration is very limited because it’s procedural, and it’s hard to reflect lifecycle handling in procedural form—the script’s organization is too complicated.  That brings us to model-based orchestration.

Model-based orchestration defines relationships and behaviors and not procedures.  The basic notion can be summarized like this:  “If you tell me what the proper system of the complex computer systems, middleware, and services looks like, then I can apply policies for automated arrangement, coordination, and management of that system.”  Model-based orchestration is fairly intuitive in describing multi-component application and service deployments because you can draw a diagram of the component relationships—which is the big step in creating the model.

It’s my view that orchestration in the future must become increasingly model-based because model-based orchestration can be viewed as a process of abstraction and instantiation, which is what virtualization is all about.  The abstraction of a service or application is an “object” that decomposes into a set of smaller objects (the “components”) whose relationships can be described both in terms of message flows (connections) and functionality (parameterization).  By letting orchestrated applications and services build up from “objects” that hide their properties, we have the same level of isolation that’s present in layered protocols.  It’s a proven approach to making complex things manageable.
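A minimal sketch of that abstraction-and-instantiation recursion, with a model vocabulary I’ve invented purely for illustration: a service model is a tree of abstractions, and anything the model can’t decompose further becomes an atomic resource commitment.

```python
# Sketch: model-based orchestration as recursive decomposition. A
# service model is a tree of abstractions; leaves become resource
# commitments. The model vocabulary is invented for illustration.

MODEL = {
    "vpn-service": ["connectivity", "security"],
    "connectivity": ["tunnel"],
    "security": ["firewall-vnf"],
}

def instantiate(abstraction, commitments=None):
    """Decompose an abstraction; anything not in MODEL is atomic."""
    if commitments is None:
        commitments = []
    children = MODEL.get(abstraction)
    if children is None:
        commitments.append(abstraction)   # atomic: commit a real resource
    else:
        for child in children:
            instantiate(child, commitments)
    return commitments

print(instantiate("vpn-service"))   # ['tunnel', 'firewall-vnf']
```

The model says what the service is; the decomposition produces the how—which is the essential difference from a script, where the how is all you have.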

There are a couple of important points we can now take up about orchestration, and the leading one is horizontal scope.  If we’re looking for “the automated arrangement, coordination, and management of complex computer systems, middleware, and services” you can see that it would defeat the whole benefit case for orchestration if we left out important stuff.  A complex of computer systems, middleware, and services has to be orchestrated as a whole.  Another is vertical scope; we have to take a task we’ve automated through to completion or we leave a human coordinator hanging at the end of an automated process.  We can look at the current orchestration poster-child, NFV, to see why scope matters.

Suppose there’s a resource—a system, middleware component, service, or whatever—that’s outside the orchestration system’s control.  What that means is that you cannot commission the cooperative system of components using orchestration because parts are left out.  Whatever benefit you thought you were getting in error reduction and operations efficiency is now at risk because not only are some of the pieces not connected, but coordinating things at the “orchestrated” and “not-orchestrated” boundaries will waste further time, and errors are very likely to be introduced there.  This is why I’ve been arguing that you can’t define orchestration for virtual functions alone and then assert a goal of improved opex and service agility.  You’re not controlling the whole board, and that’s a horizontal-scope failure.

Vertical scope issues are similarly a potential deal-breaker.  Whatever you model about a service or application to deploy it must also automate the process of lifecycle management.  In fact, logically speaking, deployment is only a step in lifecycle management.  By separating orchestration and management in MANO, the ISG has set itself on the path of taking management of service lifecycle processes out of the orchestration domain.  They simply present some MIBs to an undefined outside process set (EMS, OSS/BSS) and say “handle this!”  Management has to know something about what deployment processes did, and however you “virtualize” resources or components, you have to collaterally virtualize management.  One model to rule them all.

This is my big concern with the ETSI ISG’s work, and the basis for my big hope for TMF intervention.  We need to rethink management in future virtualization-dependent services because we need to make it part of orchestration again.

Does SDN Hype Hurt?

Light Reading has reported that on one of the OFC panels, both vendors and operators expressed concern about the “hype” and even the number of standards bodies associated with SDN.  You all know my view about hype and any new technology; we seem to live in a “bandwagon” age where anything that has momentum gets more support and attention.  That breeds the “piling on” notion that many in the industry believe is creating problems for us all.

Any group of people tends to acquire a kind of self-preservation mindset, and if you happen to be your company’s representative on the HotOpticalForum, you’re probably always casting about to find new and publicized things for that group to do.  Not only does this create a lot of competition and collision among bodies, it also often introduces topics or issues in a way that’s far from constructive, and sometimes buries issues that are important.  Some examples from the current OFC event are helpful here.

We’re hearing a lot about the integration of SDN and optical, usually in the form of integrating OpenFlow and optical.  The framework for this is a set of optical enhancements to OpenFlow that allow a forwarding rule to be defined as a pairing of port/wavelength combinations.  OK, I admit that you could use this to cross-connect optical paths, but what about TDM (no wavelengths there) and in any case aren’t we already interconnecting optical using everything from TL1 to GMPLS?

Why use OpenFlow here?  There is no “forwarding table”, no mask-and-match process.  When a packet arrives at an OpenFlow switch and finds no forwarding rule, it can be kicked to the controller for handling instructions.  We can’t kick a wavelength to a controller and we can’t examine the contents for insight either.  Optical cross-connect is also much more likely to be determined by transport policy than by application needs.  You can let an application control a VPN but not your underlying fiber network; you can imagine what chaos would result from sharing control of resources that support multiple applications, services, and users.
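To make the mask-and-match objection concrete, here’s a toy version of the operation at the heart of an OpenFlow forwarding rule (simplified far below the actual specification, with invented values): the whole mechanism presupposes a digital header you can mask and compare, which a raw wavelength simply doesn’t offer.

```python
# Toy mask-and-match, the core operation of an OpenFlow forwarding
# table. The point: matching presupposes parseable header bits, which
# an optical wavelength doesn't have. Values are illustrative.

def matches(packet_field, rule_value, rule_mask):
    """True if the packet field equals the rule value under the mask."""
    return (packet_field & rule_mask) == (rule_value & rule_mask)

# Match "destination in 10.0.0.0/8" against a 32-bit address field.
dst = 0x0A0000FE          # 10.0.0.254
rule_value = 0x0A000000   # 10.0.0.0
rule_mask = 0xFF000000    # /8 prefix mask
print(matches(dst, rule_value, rule_mask))   # True
```

With a lambda there is no `packet_field` to feed into that function, and no table-miss packet to kick to a controller—hence the argument that OpenFlow adds little here that TL1 or GMPLS don’t already provide.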

Then we have service chaining.  About half the SDN aficionados think that “orchestration” in NFV can be done by SDN.  Wrong.  SDN could be used by NFV to make connections in a service chain, but so could any sort of tunneling protocol.  Most chaining-dependent services are business services with long-term contracts.  If we wanted to create a chain of software elements on which to base such a service today, in any Ethernet or IP network, we have the tools already.  No SDN need apply.  I’m not saying that SDN couldn’t do the job, but that it’s not necessary for the job, and to drive SDN deployment it has to add something to what we already have.

One specific comment in the LR piece that I particularly empathize with is the one that says we’re focusing so much on “northbound” and “southbound” APIs for SDN controllers that we’re forgetting about “east” and “west”, meaning inter-controller or SDN federation.  Federation is important because SDN domains can’t be enormous or current technology limits on performance and availability for central controllers will be hard to accommodate.  You’re likely to have a bunch of SDN domains, much as you have IP subnets.  Are subnets in the IP sense a suitable SDN federation strategy?  If “Yes!” then we need a controller per subnet.  Does that make cost and management sense?  If “No!” then we need to know what we think links SDN domains.

But the big question raised by all of this is management.  Yes, you know that there’s no shortage of standards groups looking at “SDN management” but let me assure you that there’s a fundamental disconnect because we don’t have an SDN management problem.  We have a management problem, period.  Network and service management practices and processes are unsuitable for the current notion of what a service is and how that service is created through a community of cooperative resources.  There is a critical need for a top-down approach to network and service management as a category.  That approach should decompose management practices down to the level of service/network technology, and that’s where the notion of SDN management rightfully belongs.

OK, who among all the alphabet-soup-united-nations collection of management bodies is responsible for this?  We have “liaison” between the bodies but the fact is that a given body tends to be assigned a role by virtue of being there first.  I’d argue that the TMF is the right place to build these high-level standards, but what happens if new lower-level technologies evolve and the TMF hasn’t put a high-level model in place?  We then have a high-level model evolving in a way that’s either constrained by piece-part thinking or we have a high-level model that does everything it needs but doesn’t work with any of the technology choices coming online.

The biggest casualty of the multiplicity of standards is the top-down approach because it’s never clear who’s supposed to be at the top.  And without top-down definitions of what a network service is, what a resource pool is, what a management model is, we have absolutely no clear path to connect network evolution to an optimized business case, and therefore we have little chance of making our revolutions as big and powerful as they could be.

Vaccinating Yourself Against OFC Hype

We seem to be in the heart of a true avalanche of industry events, and that always creates a parallel avalanche of news.  There’s always the question of whether a given news item is actually relevant, of course, and I thought it might be helpful to look at optical networking before the action gets underway.  Maybe we can identify some of the points that should be addressed.  After all, this is an important space.

The thing to remember about optical is that it’s all about cost.  If you go back to my OSI model blog, you recall that each layer builds on the services of the layer below, and only those services.  What’s deeper in the mix is invisible—abstracted away.  Since few services are directly and exclusively “optical” services (dark fiber or “dark lambdas” would be about it), optical provides underlying carriage to the service layers.  It has to do that at the lowest possible cost, overall.

There are three broad mechanisms that could allow optical technology to reduce cost.  The most obvious is by reducing the cost of transport itself.  If you cut the price of a hundred gigs of transport in half you improve service margins, reduce customer prices, or both.  But here we have to remember that the cost of fiber optics isn’t just the cost of the interfaces; you have to consider the cost of the fiber itself and most importantly the cost of laying the glass in the first place.  Some form of wavelength-division multiplexing has been important for the last decade or so because it allows operators to leverage a single fiber bundle by using multiple wavelengths to support multiple parallel optical paths.

We hear about the “lower-transport-cost” angle mostly in discussions about dense wavelength-division multiplexing (DWDM) because that gives us the most leverage.  However, it’s obvious that 1) putting more fiber strands in the ground will also increase capacity along a path, 2) there are only so many bits you need to transport between points “A” and “B”, and 3) the total cost of the infrastructure needed to support the service determines its cost base, so as fiber cost falls it becomes less a component of total cost and it’s less useful to make it cheaper.

That brings us to the second mechanism, which is the good old “flattening layers” approach.  If you think about an OSI-modeled network, you’d see a set of overlay network technologies/protocols, often supported by layers of devices.  Obviously if you could create a service with a single layer the elimination of the rest would reduce the costs.  The notion of flattening layers through SDN is one of the popular mantras of the optical crowd these days, and there is some sense to it.

But only some.  Truth be told, we’ve been flattening layers ever since we put optical interfaces on routers or switches.  Electrical devices today typically support optical connections directly, which means lower-layer optical devices are required only if we have to support parallel (and presumably different) higher-layer stacks on a common fiber path.  Optical multiplexing is also needed to take full advantage of DWDM, since typically a single router couldn’t fill all the optical pipes.  However, since most fiber is deployed in the metro and since the metro is an aggregation network that concentrates paths toward its core, most of the paths are probably rather thinly traveled and this approach isn’t universally helpful.

Where that leaves us is operations cost management, and this is another place where SDN is presumed to have a mission (even NFV).  If we could visualize a network with agile optics that could be reconfigured easily, we can visualize a network that can restore services, redirect capacity, and do all manner of other useful things.  Agile optics has always been presumed useful because it could replace at least some of the electrical steering functions of higher layers, and if you could operationalize the agility that means you harness at least parts of two of my cost-reducing mechanisms.

The challenge here is defining exactly how this would work.  OSI-modeled protocols don’t expect to see path reconfigurations underneath them, and when physical paths change in most IP networks you’ll still have to have convergence at the IP level, so you have service disruptions.  What would be helpful would be a redefinition of the OSI relationship between Level 3 (IP) and the lower-layer protocols that would accommodate more intelligence at those lower layers.

An alternative approach, the one I think is frankly the right one, is to assume that “SDN” creates a completely new set of services with their own rules of topology management and addressing, and that these new services are built using a single layer that rides above optics.  The optical layer is then designed to deliver “path services” that support this higher layer.  Some of the higher-layer service protocols can look like IP or whatever you like insofar as their user interfaces are concerned, but you redefine the internal control protocols.  This is more in keeping with the notion of SDN, which seeks to eliminate adaptive routing.  My suggestion?  Make that an option, but also create new options for new services that weren’t designed based on the antiquated requirements of networking in the ‘60s and ‘70s.

I’ve always believed that we needed a two-layer model for SDN—an agile software-and-application-centric layer that focuses on service connectivity and a lower layer that’s focusing on reliable transport and suitable service quality.  What links the two of them is policy, which gets me back to the notion of NFV.  It may be that what we need for optics is more a part of NFV than a part of SDN, because in my vision we have “service networks” that absorb user requirements and aggregate them to drive “transport networks” through binding policies.
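The policy binding between those two layers can be sketched very simply.  The class-of-service names and path classes below are invented for the example; the point is only that the service layer aggregates user requirements and the policy translates them into transport demand, with neither layer manipulating the other directly.

```python
# Sketch of the two-layer SDN model: a service layer absorbs user
# requirements and drives the transport layer through a binding
# policy. All policy and class names are invented for illustration.

POLICY = {  # maps a service-layer QoS class to a transport path class
    "business-critical": "low-latency-path",
    "best-effort": "bulk-path",
}

def bind(service_requests):
    """Aggregate service demands into transport path requirements."""
    transport_demand = {}
    for service, qos in service_requests:
        path_class = POLICY[qos]
        transport_demand.setdefault(path_class, []).append(service)
    return transport_demand

demand = bind([("vpn-a", "business-critical"),
               ("video-b", "best-effort"),
               ("vpn-c", "business-critical")])
print(demand)
```

The transport layer sees only aggregated path-class demand, never individual applications—which is how you avoid the chaos of letting applications share control of resources that serve many services and users.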

Every industry show is about revolutions because revolutions sell tickets.  We can expect a bunch of purported revolutions from OFC too, but measure what everyone’s saying against my three points on value creation, and compare them to my model of layered SDN, before you buy into it.  It could be that the story is just an attractive billboard.

SDN: Create Buzz or Create Value?

The closing panel of the ONS raised what I think was the most relevant question of all for SDN—how can its value be promoted?  It’s a critical question for any technology revolution to answer because without the right answer to it, no revolution will really ever occur.  But while the question of value is the right one, the answers can easily head SDN in the wrong direction.

I thought that the most interesting factor in the panel comments was their “we have to make them realize” tone.  The real point, perhaps, should have been “we have to give them something to realize.”  If you dig into the results of my last three surveys of enterprises, you see some interesting points.

Point one:  SDN literacy among enterprises is still a full third below the levels needed to sustain a normal market.  This point could justify a level of realization-building of the sort the panel suggests, but according to my surveys the biggest problem with SDN is the fuzziness.  One respondent said “It would be easier to say what isn’t SDN than to say what it is.”  There are probably four or five purported SDN models out there, and what’s interesting is that most people who are really literate about SDN would recognize that there are perhaps two or three at the most.

Buyers need to be able to understand the features of something, so that they can align those features with benefits to make a business case.  What the features of something like SDN would be, if you presume four or five distinct versions of it, would be understandably difficult to establish.  Some people call hosted routers “SDN”.  Some say that SDN is a software overlay, a new tunnel-based network on top of current networking.  Some say it’s OpenFlow and some say it’s APIs.  Where does this leave the enterprise buyer trying to dig up a benefit to justify an SDN project?

Point two:  The most popular form of “SDN mission” actually being deployed isn’t particularly applicable to the enterprise.  At the core of 85% of the SDN projects reported to us in our last three surveys, we found that multi-tenant segmentation was the driving mission.  OK, how many multi-tenant enterprises do you know of?  Can we reasonably expect enterprises to understand the benefits of SDN in their data centers when what we’re citing as the SDN mission is separating multiple tenants?  Does the enterprise host other businesses, or perhaps hackers, and give each their own virtual network?

In theory, enterprises could use virtual networks to separate applications.  In theory, you could construct a whole new model of security and access control, a completely fresh vision of application delivery control and acceleration, and even a new model of connectivity using the same SDN technology that can support multi-tenancy.  We don’t do that.  If we had an ONS event to showcase multi-tenancy to enterprises, how many would find it relevant?  If we had one to showcase application control through segmentation, how many wouldn’t find it relevant?  But the reason we don’t have the event isn’t a failure of agenda management, it’s that we don’t have a product.

Point three:  The most popular form of SDN doesn’t save any money.  If we create virtual networks by doing tunnel-based overlays on top of existing routers and switches, we’ve still got the same hardware and we’ve actually added a layer of stuff to manage.  People talk all the time about how SDN is a threat to Cisco, but it’s a threat only if SDN displaces existing switches and routers.  Show me a white-box switch vendor who’s setting the world on fire.  The fact is that what’s true for NFV according to operators is also likely true for SDN; you can save the same amount of money just by beating up a vendor on price.

The model of “SDN” that would probably impact capital costs most is the hosted-router model that I’d argue isn’t SDN except in the mind of the media (and the vendors who push it).  Can you save money hosting router software?  The network operators think that there’s only limited benefit there, in no small part because the big issue with cost is complexity and operationalization, which this form of SDN will surely not reduce and might actually increase.

Point four:  We have trivialized SDN into being trivial.  In an effort to make SDN news, to get URL clicks for publications and PR mentions for vendors, we’ve left every complicated issue out of our processes.  We don’t write or talk about them, or even develop them.  No application segmentation.  No next-gen operations and automation.  No new network services—we just use SDN to produce the same Level 2/3 services we had before it ever came along.

If we can totally control network forwarding we can totally re-imagine how a service would work.  Where’s the imagination?  Wouldn’t a new combined L2/L3 service with secure, controllable properties make sense?  With total control over packet forwarding we could surely build one, but where is it, or even a discussion of the possibility?  Nowhere, because it’s easier to shill nonsense than to do something substantive.

So I have a message for ONS.  If you want SDN to be real, make it really useful.  I think it can be.  I can see a half-dozen truly relevant things that SDN could do.  I can see a dozen benefits SDN would be able to harness.  I can see future service models possible with SDN and impossible or impractical without it.  Why not try to talk about actual utility, gang?  Instead of making SDN success look like it’s dependent on making that used-car buyer understand why this car is best for them when it probably isn’t, make it best for them.  Or have we laid off all the product types and hired mediameisters?

NFV Implementations: Are We Anywhere Yet?

I promised some of my blog readers that I’d do a deeper dive into the current state of NFV implementations.  One of the challenges that poses is defining just what an NFV implementation is, because NFV means different things to different people and because there’s an ungodly amount of NFV hype out there.  Another challenge is getting information on offerings, and when you combine these two things there aren’t many that pass the filters.

Network functions virtualization defines the way services can be created by taking software implementations of “network functions” and “virtualizing” them so that they can be orchestrated (deployed and interconnected) on various hardware platforms and managed in some efficient and cohesive way.  Just from this simple statement you can see that there are two primary pieces—the functions themselves and the management and orchestration framework that runs them.  Most NFV announcements to date have been announcements of network functions, but an “NFV implementation” has to do the management and orchestration stuff—what the NFV ISG calls “MANO”.  These implementations are the ones I’ll talk about here, and to be fair I’ll go through the small list in alphabetical order.  If you are one of the vendors listed here, or if you purport to have a complete NFV implementation including MANO and can provide public document references for it, we’ll take a briefing on your solution and report on it in detail.
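
To illustrate that split, here’s a toy sketch (every name is hypothetical; this is not the ISG’s MANO design, just the shape of the idea) in which the VNFs are opaque software units and “MANO” is whatever places them and wires them together:

```python
# A toy illustration of the functions-vs-MANO split: the orchestrator only
# needs a descriptor of what to deploy, a placement decision per VNF, and a
# way to realize each interconnection.

def deploy_service(descriptor, place_vnf, connect):
    """Minimal MANO-like loop: place each VNF, then realize each link.

    descriptor: {"vnfs": [names], "links": [(a, b), ...]}
    place_vnf:  callable(name) -> host id (the orchestration decision)
    connect:    callable(host_a, host_b) (the interconnection step)
    """
    placement = {vnf: place_vnf(vnf) for vnf in descriptor["vnfs"]}
    for a, b in descriptor["links"]:
        connect(placement[a], placement[b])
    return placement
```

Everything interesting about a real implementation lives behind `place_vnf` and `connect`: resource selection, the virtualization layer, and (critically, as we’ll see below) management integration.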

Don’t expect a nice clean column chart here.  Even MANO is a bit fuzzy.  First, it includes both the processes needed to deploy and interconnect functional components and the processes needed to manage them.  Second, the specifications aren’t final at this point, so just what ETSI might call for here is still a bit up in the air.  Third, and perhaps most important, the mission of NFV as it’s perceived by the operators themselves has evolved from one of simple capex reduction to one of enhancements in service agility and reductions in operations costs.  These points have been true, and largely visible, almost from the first and so “NFV implementations” could be expected to have some variability of features depending on how they deal with each of them.

I’ll start with Alcatel-Lucent CloudBand.  CloudBand is a cloud computing architecture that has been enhanced to support NFV.  One diagram Alcatel-Lucent showed at one point told the story nicely by showing CloudBand as a block, with a right-side component for cloud services and a left-side component for NFV.  The left-side component was based on what Alcatel-Lucent calls “Carrier Platform as a Service”, a set of APIs that provide virtual functions with access to orchestration and management services.

CloudBand uses cloud software (OpenStack or CloudStack) to deploy both cloud components and virtual functions and there appears to be at least a level of dualism possible between them; in theory you should be able to author “cloud applications” that would use CPAS and thus look like virtual functions.  In theory you could also deploy VNFs as cloud components bypassing CPAS, which would appear to open CloudBand to a wider range of VNF resources but at the expense of some management and orchestration integration.

Deployment and orchestration at the high level is the responsibility of the CloudBand Management System, which provides for deployment using DevOps-like “recipes”.  These tools can be accessed through a console or via the CPAS API set, so management can be deployed as a part of the VNF service/application or in a more centralized way.  There are no details on the nature of management integration, but operators report it to be handled fairly traditionally, consistent generally with the ETSI NFV specifications as they’re evolving.

CloudBand gets a lot of points for credibility from the operators, and also from me, but I’m still not confident in their management approach.  One concern is that for NFV to work, VNFs have to be off-the-shelf and not part of any vendor ecosystem, meaning they can’t use custom APIs.  Another is that I don’t agree with the ISG notion of VNF management, which is what Alcatel-Lucent seems to be following.  I think there are issues with performance, security and integration that CloudBand management still needs to answer.

Cisco, not surprisingly, has a different slant but one that in some ways is harmonious with that of Alcatel-Lucent.  Both companies have what is effectively a PaaS API set that defines their MANO capabilities and provides for integration of service components.  Both have a developer program that’s based on this ecosystem.  Where Cisco has a different slant is in the basic framework.  Alcatel-Lucent’s CloudBand is a cloud architecture adapted to NFV.  Cisco’s Evolved Services Platform is essentially a Cisco-proprietary service-layer hosting framework for “service modules” that can envelop an arbitrary set of logical elements.  You deploy service modules, and that deployment uses whatever tools are necessary.  The modules are a form of a packaged service, with management and orchestration elements built in.  This structure means that Cisco can envelop its current offerings into its ESP/NFV approach without changes.  Since ESP consumes the Evolved Services Network (ESN) which envelops both Cisco’s SDN APIs and legacy APIs, ESP is also compatible with current and evolving networks.

How this fits with NFV is a matter of some interpretation.  It’s our view that Cisco will provide tools that will build service modules from virtual functions and will use sanctioned NFVI to deploy on.  It would appear that Cisco’s approach would allow service modules to deploy on NFVI even if the modules were based on something other than ISG-sanctioned VNFs or used non-standard orchestration.  It would also appear that there is a basic set of orchestration tools that can deploy services based on multiple service modules.  Cisco could likely resolve the management integration issues with their service-module approach, but it might take custom code from someone, so that issue is still a question for me.

A more problematic provider of NFV is Dell, who has recently said it would be offering NFV solutions and is also a sponsor of the CloudNFV project and a PoC (with Red Hat) that’s based purely on OpenStack.  The “problem” part here is that while I obviously believe that CloudNFV is a complete and responsive NFV approach (I architected the concept and served as Chief Architect through late January), Dell does not appear to have productized it yet nor have we found a commitment to productize it at all.  There are, as of today, no mentions of CloudNFV found using Dell’s search function on its website.  There is one press release that indicates that Dell has taken over leadership of the project, but no specific product commitment.  Thus, it’s not clear whether Dell’s commitment to leadership translates into actually offering CloudNFV as a product, and I can’t rate their capabilities without some specific product documentation.  You can refer to the CloudNFV website for information on how it works, and the architecture described is open and available without restriction as the site says, but I still can’t name a commercial source for it and so can’t rate it as an implementation.  Dell is showing signs of wanting to be simply an infrastructure (NFVI) player rather than someone who offers a complete solution.  Am I wrong?  Then set up a briefing, Dell, and tell me what’s up.

The next NFV player on our list is HP, with its OpenNFV architecture.  HP has extensive material on OpenNFV available on its website, and that material suggests that it has the two-level model of functional/structural orchestration that I think is the right answer.  Obviously, HP’s approach is based on OpenStack, and HP provides an HP NFV Director for orchestration and management integration.  The whole story is spelled out in what HP calls its “OpenNFV Reference Architecture” (NFV RA), and the architecture includes all of the pieces needed for a credible NFV story, including OSS/BSS integration.  I have some reservations about the dependency of the HP management approach on the ETSI model of “VNF Managers”, but HP enhances it by providing some default VNF management tools inside the Director to augment or replace embedded VNF Managers where they aren’t found or are incomplete.  This is better than “naked ISG-style management” but in my view it’s not good enough.  Still, if you roll all of the RA into a product you have the most complete picture of NFV implementation presented by any vendor.

The problem is that the RA is an RA not a product, at least as far as operators have reported to us.  HP doesn’t have all the pieces yet and at least some operators are indicating that HP is looking to ecosystem partners for some of the pieces so they won’t be doing everything themselves.  I’m told that HP will undertake professional services contracts to deploy the RA, but the word is that this won’t be fully available much before year-end.

Oracle also has a potential issue with NFV “productization”.  They’ve made at least some presentations on their NFV approach, and I’ve listened to them and find a lot of merit in what they’re proposing.  The problem is that at the moment they don’t have an “NFV product” per se.  Their NFV story is made up of an alignment between ETSI E2E diagrams and Oracle elements, but it’s not clear whether all these elements are available in suitable form, and it’s not clear what the specific mission and features of each are.  There are things in their repertoire that are clearly suitable VNFs, and there are things that are definitely candidates for some element or another of MANO.  What I can’t find after a review of their website is a solution brief on NFV or something that takes the block diagrams they show in their presentations and aligns them with specific Oracle product elements.

On the plus side, Oracle is talking explicitly about both the notion of multi-layer orchestration and that of data collection, abstraction, and distribution.  That kind of discussion isn’t in the material I could find from Alcatel-Lucent, Cisco, or HP and both these points are critical to successful NFV in my view.  You could spin a compliant management story off Oracle’s approach as it stands; the only question would be whether they implemented it.  With the proper pieces implemented, Oracle could be a strong NFV contender.  Operators tell me they believe they will be able to trial Oracle strategies late in 2014 or early in 2015, but I can’t confirm that yet.

So that’s the NFV situation as I see it now.  You can see from this that NFV is still in the tire-kicking stage overall, and so I can’t really evaluate which of these strategies is best or even which is complete.  Management is the big issue for NFV because it’s management that has to bridge current ISG work to current operator goals for operations efficiency and service agility.  I think it’s going to be hard to say that any NFV approach is sound until the management issues are fully resolved.  One driver for that might be the TMF NFV Catalyst activity that’s scheduled to result in demos in Nice in early June, and another could be a TMF draft on an evolved OSS/BSS model for the virtual age—I can’t speculate on when the latter might happen.  Given that service agility and operations efficiency are the two primary operator goals for NFV as of our fall survey, and that operationalization is critical to meeting those goals, it’s also possible that somebody will step up with a solution—maybe even one of the vendors I’ve named here.  I’ll report on things as they develop.

A Tale of Seven Layers (More or Less)

One of the things we always hear about during SDN events is the SDN/optical relationship and the topic of “flattening” the network.  You’d think that network architects everywhere were out there with mallets, eyeing their infrastructure for possible points of attack.  Another thing we hear about is “Level 4/7 networking”, so apparently we’re looking for SDN to add layers above our three traditional ones and remove them below.

All of this references the venerable (mid-1970s) OSI model and its seven layers.  The model was developed to offer a reasonable way of dividing functionality in a network, to accommodate the fact that a route between two addressable points was necessarily the sum of a number of hops between nodes that handled traffic.  The model defines three basic functional groupings; the hop or subnet portion (layers 1 and 2), the network portion (layer 3) and the endpoint portion (layers 4-7).  As a rule it’s the network portion of the model that defines the user’s connectivity service.  Lower layers build infrastructure over which the routes are laid, and higher layers manage user flows between network endpoints and applications at those endpoints.  Read the documents and you’ll see this is (while simplified) essentially the picture.

One thing this means is that “the network” doesn’t route traffic above Level 3.  When somebody says they support “Level 4-7” what they are saying is that they are application-aware in handling, because all those layers share the same network address.  One thing that we’re doing, not just in SDN but in networking in general, is allowing for things like “class of service routing” of traffic that offers different routes between endpoints based on the needs of the application or services.  Being “aware” of higher OSI layers could facilitate this kind of handling.

The “could” qualifier here is based on two questions.  First, does the actual network (the lower layers) support class-of-service routing in the first place?  If you can’t do anything different for Application “A” versus “B” in a handling sense, what difference does it make whether you can tell them apart at the network service level?  Second, can you determine, by inspecting the messages at these higher OSI layers, what a given message actually is?  While we have Levels 4-7 in the OSI model, for example, the usage of these layers is opaque to the network by definition and so there’s no easy way to enforce standardization of how these layers are described or used.  For example, most Internet traffic really goes up to only Level 4 (TCP/UDP), and it’s perfectly possible to generate eight or ten or more layers instead of the formal seven, as long as the endpoints that are using the message stream agree on the handling.
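
In practice, “Level 4 awareness” usually amounts to nothing more than reading the transport header and applying a port-to-application convention.  Here’s a sketch (the mappings and queue names are invented for illustration) of both the classification step and the handling step, which makes the paragraph’s second question concrete: the classification is only useful if the handling can differ.

```python
# Port-to-application mapping is a convention, not something the OSI model
# enforces above Level 3 -- which is exactly why "L4-7 awareness" is fuzzy.

WELL_KNOWN = {80: "http", 443: "https", 53: "dns", 5060: "sip"}

def classify(protocol: str, dst_port: int) -> str:
    """Map a flow's transport-layer fields to an application label."""
    if protocol not in ("tcp", "udp"):
        return "non-transport"      # e.g. ICMP: nothing above L3 to read
    return WELL_KNOWN.get(dst_port, "unknown")

def select_queue(protocol: str, dst_port: int) -> str:
    """Handling only matters if the network below can act on the label."""
    return {"sip": "low-latency"}.get(classify(protocol, dst_port), "best-effort")
```

If the lower layers offer only one class of handling, `select_queue` collapses to a constant, and the classification work buys you nothing.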

The point here is that more layers on top of Level 3 aren’t necessarily useful.  If you want application-specific routing then you’ll want to explore the question of how applications can be identified and how special handling can be indicated per application, but otherwise you may be better off ignoring those higher layers.

The flattening stuff is also complicated.  Remember, the OSI model presumes that the “route” between two points is made up of a number of “hops”, each of which connects nodal elements that can apply the end-to-end addressing rules to move a packet through the chain of hops to its destination.  So Levels 1 and 2 are typically associated with node-to-node connectivity and endpoint connectivity is handled at Level 3.  What would you get by “flattening”?  It turns out it depends on how you do it.

Imagine, back in the days when data links were very unreliable, the result of sending a packet over a chain of ten or twenty nodes/links without any error detection and correction along the way.  The endpoints could still recover from errors in theory, but if the error rate is high enough the chances of getting a packet across all the links successfully shrink to near zero.  Lower-layer protocols are thus very helpful when you have low transport reliability, or when you have a large number of nodal hops between endpoints (because you’ll waste a lot of capacity retransmitting stuff end to end).  We can argue that the importance of Levels 1 and 2 as discrete layers is indeed declining, so we might be able to push Layer 3 directly over the media.
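
The arithmetic behind that argument is simple enough to write down.  With a per-link error probability and a hop count, end-to-end success without any hop-by-hop recovery is just the product of the per-link successes:

```python
# Back-of-envelope for the paragraph above: with per-link error rate p and
# n links, end-to-end delivery without hop-by-hop recovery is (1-p)**n.

def e2e_success(p_link_error: float, n_links: int) -> float:
    return (1 - p_link_error) ** n_links

# A 1% link error rate is tolerable over one hop but painful over twenty:
#   e2e_success(0.01, 1)  ~ 0.99
#   e2e_success(0.01, 20) ~ 0.82   (nearly one packet in five resent end to end)
# At the error rates of early data links (say 5%), twenty hops is hopeless:
#   e2e_success(0.05, 20) ~ 0.36
```

On a modern fiber path with error rates orders of magnitude lower, the exponent stops hurting, which is exactly why the discrete lower layers matter less than they did.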

The opposite may also be true.  We need Layer 3 to handle “nodes” that make intermediate decisions on packet routing where there are multiple possible exit paths.  In a traditional data network, one that only connects user endpoints, you can see that any node is likely to have to make that decision so you need Level 3 handling.  But look at the Internet for a moment.  We have broadband access, and that access “connects” us to the Internet.  The connection process is in most cases a kind of expressway between the home/business and the Internet point-of-presence.  If you send an email or IM or make a call or access a website that happens to terminate next door, you’re not going to be routed out your access line and into the destination line out there at the edge of the network.  You’ll be hauled back to the Internet point of presence for the routing.  And as services move from being directly connected (like P2P) toward being services mediated through a server (social networking, email, content delivery, etc.) you’re really not connecting to a user endpoint at all.

In the metro area, where we have wireline access and wireless backhaul, we’re looking at a world that’s really not all that “connective”.  Everything that starts at an Internet access point ends up at an Internet service point before it goes anywhere else.  Why steer along the way?  You can create an opto-electrical pipe from the user to the Internet POP and that’s all you need.

What all this means is that flattening the network is possible.  In fact, the flattening is already happening, and has been happening all along with Internet access, so you don’t need SDN to do it.  You could, however, flatten networks using SDN, and it would be potentially beneficial: any OSI layer you can omit is a layer you don’t have to populate with devices and manage using expensive support resources.  But most of the drive behind flattening isn’t created by SDN.  Optimized metro networking that takes advantage of flattening should therefore include a careful architecting of the whole structure, not just the lower layers, so that you can really eliminate the handling and error risk that would otherwise force you to keep the functionality you’re trying to flatten out of existence.

So where are we?  We have industry trends that can validate network awareness of additional layers of the OSI model.  We have industry trends that can validate an elimination of some of the traditional network-oriented (as opposed to endpoint-oriented) layers.  SDN can serve a useful mission by supporting either or both these sets of trends, but it’s not itself a driver nor is it sufficient in itself to re-architect the network.  We should expect to see either more-layer or flattened-layer advocates explain how their stuff works within the framework of real applications before we declare it to be valuable.

Exploring the Flavors of Federation

SDN is the senior partner in the reformation of the network, and this week’s ONF event makes it certain that SDN will be getting more attention.  There’s no question that operators are working hard on applying SDN to address their problems and opportunities, but it should be clear to everyone that SDN is also a work in progress.  It shares the management challenges I’ve pointed out as NFV issues, for example.  It also shares NFV’s dependence on a reasonable approach to federation.

There have been many discussions and even some announcements on the topic of SDN federation, but like SDN itself, SDN federation is one of those multi-level things, and it will also clearly have to accommodate whichever of the multiple SDN models (three, at a minimum) is in play.  This is likely a good time to step back and explore the issues of SDN federation so we can assess products and standards as they emerge.

In “classic” SDN, a central controller linked to a set of SDN-enabled switches establishes forwarding tables in those switches, and those tables are what create the “service”.  The controller could pre-configure switches with forwarding rules, or it could (at least with OpenFlow) wait until a switch reported a packet for which it had no rule, then supply the rule on demand.  Clearly the second of these mechanisms could be process-intensive, and few believe that a single, “flat” SDN network based on classic OpenFlow would scale even to the scope of some large private networks today.  However, you could divide a network into “zones” and give each zone a controller, then let the controllers connect upward in some structured hierarchy to build a larger network.  This is the primary objective of SDN federation, though there are also some who are interested in applying controller-level coordination to services that cross administrative boundaries—user to carrier or between carriers.
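
The two rule-installation styles can be sketched in a few lines.  This mimics the control pattern without using any real controller API (all names are invented; a production OpenFlow controller would match on far more than the destination):

```python
# Proactive pre-configuration vs. reactive "packet-in" handling, in miniature.

class Switch:
    def __init__(self):
        self.flow_table = {}            # match -> action

    def forward(self, packet, controller):
        match = (packet["dst"],)
        if match not in self.flow_table:
            # Reactive path: no rule, so punt to the controller (packet-in)
            # and install whatever rule comes back.
            self.flow_table[match] = controller.compute_rule(packet)
        return self.flow_table[match]

class Controller:
    def __init__(self, topology):
        self.topology = topology        # dst -> output port

    def preconfigure(self, switch):
        # Proactive path: push all rules before any traffic arrives.
        for dst, port in self.topology.items():
            switch.flow_table[(dst,)] = port

    def compute_rule(self, packet):
        return self.topology[packet["dst"]]
```

The reactive path costs a controller round-trip on the first packet of every new flow, which is precisely why a single flat controller domain scales poorly and why zoned hierarchies are attractive.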

It’s important to note here that not all SDN concepts involve central control, and that some SDN architectures that do provide central control also provide a means of federation.  This is a good way to start our discussion, in fact, because if we were to view each of the SDN “zones” as “areas” or “subnets” of an IP network, it’s clear that we could make SDN federation work simply by applying the mechanisms for inter-area routing we already have in IP.  BGP, for example, is used by a number of SDN providers (Alcatel-Lucent, Juniper) as at least one means of exchanging addressing information among what I’ve been calling SDN “zones”.  That demonstrates the first of the options for SDN federation, which is what I’ll call functional federation.

In functional federation, SDN zones are black boxes that obey SDN rules inside but present a traditional IP (or Ethernet, or in theory any established L2/L3 protocol) sub-area structure at their boundary.  The goal of functional federation is to interconnect the service that SDN creates, not the technology that creates that service.  No SDN controller in functional federation would need to know anything about its neighbors, and in fact it wouldn’t know its neighbors were even implemented via SDN—they could be legacy enclaves of devices.

This is the big benefit of functional federation, in fact.  We tend to build IP and Ethernet networks today by connecting functional enclaves or zones, and if we use the mechanisms now being used to connect these zones to connect SDN zones, we can transition to SDN in a gradual way based on local benefits.  Also, the largest current application of SDN is in data center networking, and most data center SDN applications create one or more Ethernet subnets that can then be connected to the rest of the network through traditional IP gateways.  This means that functional federation resolves most of the federation needs of enterprises.

Not all of them, of course.  If SDN “zones” interconnect in many places, if they involve complex optimization of traffic or QoS, or both, then it may be necessary to adopt SDN principles end to end across multiple zones.  This requires another level of federation, what I’ll call structural federation because it connects how a service is built and not how a service works.  NEC’s recent announcement of hierarchical SDN control is an example of structural federation.

In structural federation, controllers interact with each other to participate in the creation of routes.  When a packet has to be routed from point A to B, the goal of structural federation is to get a complete picture of how that route has to be threaded across multiple zones so that it can be set up by instantiating each of the segments in each of the zones at one time.  Otherwise, the first packet in the flow has to activate controller intervention in every successive zone if “stimulus” route creation is used.
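
The flow described above can be sketched as a parent controller that asks each zone for its segment of the route and instantiates only once the complete picture exists.  Everything here (zone names, the gateway naming, the `plan`/`install` interface) is a hypothetical illustration, not any vendor’s federation protocol:

```python
# Structural federation in miniature: plan the whole end-to-end route across
# zones first, then install every segment at once.

def federated_setup(zone_path, zone_controllers, src, dst):
    """zone_path: ordered zone names the route crosses.
    zone_controllers: name -> object with plan(entry, exit) and install(seg)."""
    segments = []
    entry = src
    for i, zone in enumerate(zone_path):
        # Exit through an inter-zone gateway, except in the final zone.
        exit_point = dst if i == len(zone_path) - 1 else f"gw-{zone}"
        segments.append((zone, zone_controllers[zone].plan(entry, exit_point)))
        entry = exit_point
    # Only now instantiate -- avoiding the hop-by-hop "stimulus" setup in
    # which the first packet triggers controller work zone by zone.
    for zone, segment in segments:
        zone_controllers[zone].install(segment)
    return segments
```

The two-phase shape (plan everything, then install everything) is the whole point; it’s also where the coordination, security, and performance barriers listed below bite hardest.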

Structural federation is clearly an “SDN” approach where functional federation is a legacy approach, so for SDN it should be important to get structural federation to work.  There are some barriers, though; issues we need to watch for as vendors announce their approaches.  One is service model coordination.  Logically speaking, structural federation has to either work independent of the L2/L3 service that SDN is creating, or it has to “know” the constraints of the service to ensure that its federation processes are compatible.  Another is the security of the processes; federation is a good way of leaking bad information into SDN, just as it is into IP.  A third is performance and availability.  The more layers of controller we have, the more controllers have to be operating efficiently (and simply operating) in order for new flows to be accommodated.

Functional federation will likely serve the needs of SDN for some time, particularly because vendors are creating hybrid approaches of BGP, MPLS, and SDN to exploit the model.  For SDN to meet its full potential in building new services with new forwarding rules, though, we’ll need a good structural approach.  I think we’re making progress in that direction, but there’s still plenty of useful work that should, and could, be done.

Who Wins in the New Age of Tech?

Our industry is driven by a lot of almost-hidden forces as well as by the more obvious buyer/seller tension.  One of the most significant is the responsibility that a public company has to its shareholders.  While we talk all the time about how companies need to “listen to their customers”, the fact is that a public company isn’t responsible to customers except in the sense of any warranties and truth-in-representation rules.  It has very broad responsibility to its shareholders, and we may be seeing an example of that right now.

In the last month, both Cisco and Juniper have announced they are going to issue corporate debt (bonds) and use the proceeds in part to buy back shares.  In and of itself this may not seem like it matters, but it is a very strong signal about the future as seen by these two companies and so we need to look closely at what it means and how it works.

The goal of a company is to provide gains for its shareholders; that’s what “investment” is all about.  To do that, the company has to work to increase its stock price.  That stock price is typically based on its earnings per share, meaning the net profit the company makes divided by the number of shares of stock outstanding.  If a company is increasing its profits it would typically see its share price rise, and of course the opposite is also true.  Companies can increase profits and share price by cutting costs or by increasing sales, and these are the mechanisms we hear most about.  They can also increase share price by reducing the number of shares, and that’s what we’re seeing now.
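
The arithmetic is worth seeing with numbers (made up here purely for illustration): buying back shares lifts earnings per share even when profit is completely flat.

```python
# EPS = net profit / shares outstanding. Retire shares and EPS rises with
# no change at all in the underlying business.

def eps(net_profit: float, shares_outstanding: float) -> float:
    return net_profit / shares_outstanding

profit = 2_000_000_000                    # flat year over year
before = eps(profit, 1_000_000_000)       # 2.00 per share
after = eps(profit, 900_000_000)          # ~2.22 after retiring 10% of shares
```

An 11% EPS gain with zero profit growth: that’s the lever a buyback pulls, and it’s why reaching for it says something about expectations for the other two levers, sales and costs.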

Many companies have “stock repurchase” programs, and little is said about them in the technical press.  These programs, which have to be filed with the SEC, allow a company to buy back and retire shares of stock, reducing the total shares outstanding and increasing the earnings per share as a result.  It’s more unusual for companies to borrow money to fund these programs, but there are reasons for it, like the fact that much of their cash on hand is held offshore and would be taxed if repatriated.  However, the fact that both Cisco and Juniper feel that they have to buy back more shares is a pretty strong signal that they can’t meet market expectations with normal profit growth alone.  That’s an important truth.
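The arithmetic behind this is simple enough to sketch.  Here is a minimal illustration of why retiring shares lifts earnings per share even with flat profits; all the figures are hypothetical, not Cisco’s or Juniper’s actual financials.

```python
def eps(net_income, shares_outstanding):
    """Earnings per share: net profit divided by shares outstanding."""
    return net_income / shares_outstanding

# Hypothetical numbers: $2B annual profit, held perfectly flat.
net_income = 2_000_000_000
shares = 1_000_000_000          # 1B shares before the buyback

before = eps(net_income, shares)

# Borrow to repurchase and retire 10% of the outstanding shares.
shares_after = shares * 0.90
after = eps(net_income, shares_after)

print(f"EPS before buyback: ${before:.2f}")   # $2.00
print(f"EPS after buyback:  ${after:.2f}")    # $2.22
print(f"EPS growth with zero profit growth: {after / before - 1:.1%}")
```

The point: an 11% EPS “improvement” materializes without a single extra dollar of profit, which is exactly why a debt-funded buyback reads as a confession that organic growth is over.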

I’ve blogged for some time about how the network industry and the IT industry alike have been, in effect, eating their own young.  By falling into a cost-management mindset, both have accepted a buyer goal of “spend-less-each-year” that inevitably lowers sales industry-wide, and thus spurs first market-share battles and eventually (and inevitably) commoditization and contraction.  I think that Cisco’s and Juniper’s decision to borrow to buy back stock is an indicator that margins in networking are now dead and will never be resurrected.  Even the high end of network equipment, the seemingly resilient IP routing, is doomed to commoditization by the admission of the two prime advocates of that market.

There’s no point crying over this now; I think the Cisco/Juniper move demonstrates that it’s too late in networking, and I also think that the IBM decision to sell off its COTS business shows the same thing is happening in servers.  What we have to do is decide whether technology has run its course and can’t generate more value, and if that’s not true then decide how the new value can be generated.  The answer, I think, is a combination of chips, software and professional services.

Hardware, including network hardware, is essential but it’s not particularly differentiable because the features of hardware today are created by software, even if the software is embedded rather than purchased and run independently.  Inevitably hardware will lose margins, so inevitably you have to view it as a platform for software.  Vendors like Cisco and Juniper had a chance to re-frame their stuff as platforms for hosted features but elected to try to sustain a doomed model instead.  As a result, we have SDN and NFV, which formalize the migration of software features off the non-responsive network hardware onto already-commoditized COTS servers.

Commoditization, of course, is driving down prices all along the food chain.  I remember when there were a few hundred sites with multiple T1 (1.5 Mbps) lines and perhaps a dozen with T3 (45 Mbps).  Today you can buy residential Internet at speeds of 100 Mbps, and hardly anyone would regard 1.5 Mbps as even marginally adequate for consumer Internet.  Low prices and mass markets are synonymous.  So, of course, are technical illiteracy and mass markets, which is what brings us to professional services.

We can make computers affordable by all, we can make every device smart, we can augment every aspect of our lives with technology…but not if it means we all have to be technologists.  Even relatively high-tech buyers like businesses and network operators are finding it difficult or impossible to sustain the skill levels needed to adopt the state-of-the-art stuff that’s out there.  And as we drive more toward software features we create more complicated integration and management, which means we need more professional skills.  At some point, what a company can do to help its buyers install and use their stuff becomes more important than the stuff itself, and we’re already on the edge of that today.

The other thing that we need to have for this mass-market technology is semiconductors.  We can take a ten-thousand-dollar server and shrink it down to a thousand bucks, but inside it we still need the processor and memory and so forth.  Yes, all this stuff will be under margin pressure, but that’s been true for years now, so the vendors have learned to live with it.  And unlike network equipment, which can’t proliferate organically as it gets cheaper, consumer electronics has already demonstrated it can expand almost endlessly.

What all this adds up to is a changing of the guard.  I remember when IBM was the sole tech giant.  I remember Cisco’s rise and what seemed to be a shift away from computing to networking.  What we’re now going to see is chip companies (Intel, AMD) and software players (Oracle, Microsoft) taking the lead if they play their cards well.  Who will lead in SDN?  Chip and software guys.  Same with NFV, same with IT, mobility, everything.  The traditional models of both IT and networking have now fallen too far behind market reality to catch up.  That’s what Cisco and IBM and Juniper are signaling.  The future lies with the new generation.