Making the Most from Intel’s Altera Deal

The decision by Intel to acquire custom-chip-and-FPGA vendor Altera is another indicator that networking may be driving incremental opportunity in the server space.  As the WSJ points out, more than half of Intel’s operating profits come from servers, though personal-system chips account for most of the company’s revenues.  You can’t afford to ignore a change in your profit-cow’s pasture, but what exactly is driving the changes?

There’s a lot you can do with Altera silicon, of course, but the big applications probably relate to networking.  FPGAs are a great way to handle the small, complicated, communications-oriented functions that tend to bog down traditional servers.  If servers are going to serve in a network mission, you’d expect them to need that special stuff.

There is no question that servers have been getting more network-centric.  That’s been true since we exited the age of batch processing, in fact.  There’s no question that higher-speed connections to servers have started to influence server design, or that pools of servers linked in a virtualized data center or a cloud will change both the role of networks in connecting servers and the design of server network interfaces.  Both of those have been true for some time too.  I think we have to look further.

One obvious factor in the shift in server design is the transfer of functionality from network devices to servers and server pools.  Both NFV and SDN would have the effect of making servers more “appliance-like” in what they did, and Altera’s FPGA technology has always been targeted in part at custom appliances and network devices.  The only question is whether the trend is large enough to justify Intel’s M&A decision.

I’ve been modeling NFV impact for a couple of years now, and my numbers have been pretty consistently showing that optimal deployment of NFV would generate about 100,000 new data centers worldwide, making NFV the largest single source of new data centers.  It’s hard to say how many servers would be involved here because of the variety of density options, but I think it’s fair to say that we could be looking at tens of millions.  Nobody would sneeze at a market that big.

Intel seems to have grasped the scope of NFV opportunity some time ago.  Wind River’s Titanium Server (formerly Carrier-Grade Communications Server) targets the VNF hosting market and is the basis of a number of partnerships with server vendors.  Intel is also a Platinum member of OPNFV, the group driving an open-source reference implementation for NFV.

If we have all these new servers, we may have something that on the surface seems to defy NFV’s notion of “COTS,” because specialized server needs can justify specialized server technology if there’s enough volume to fill a pool of servers.  Resource efficiency is critical in NFV and the cloud, and you don’t want to segment resource pools unnecessarily, but a requirement for special data handling that’s pervasive enough justifies its own pool.  Thus, if Intel fails to address the need, a pool of NFV servers could fill up with another vendor’s chips.

Whether all this interest, including Altera, will pay off for Intel may depend on the pace and manner of NFV adoption, meaning the question of what an “optimal deployment” of NFV would be.  The simple reality of any technology shift is that it happens to the extent that its benefits pay back the investment needed to support the shift.  Given that the primary benefits of NFV deployment (operations efficiency and service agility) are not in fact delivered by in-ETSI-scope NFV, that may pose some questions for Intel as it digests Altera.

Perhaps the biggest question is whether the SDN track to success could be as valuable as the NFV track, or whether it could hurt Intel and servers in the long term.  I pointed out in a couple of earlier blogs that if you took the “manager” concept of NFV and pulled it out to create a plug-and-play version of infrastructure and manager, you could then drive that combination without NFV and direct it at pure NaaS missions.

If you operationalize NaaS using OSS/BSS orchestration from any of the vendors who supply it, you could deliver at least a big piece of the operations efficiency and service agility benefits that NFV needs.  You could boost revenues with NaaS.  Given that, and given the fact that vendors like Cisco might love the idea of proving out network efficiency and revenue benefits with legacy devices, might that reduce the incentive to move to NFV?  It depends.

I think that as services evolve to incorporate more mobile/behavioral elements, network infrastructure will evolve toward the cloud.  Operationalizing the cloud is clearly a mission NFV could support, since VNFs and elements of mobile/behavioral services would look much the same.  The trick is to make this happen, because of those pesky NFV scope issues I’ve talked about.

For Intel, that may be the challenge to be faced.  Legacy network practices have blundered along in the traditional circuit/packet model and they could be on the verge of escaping that mold.  NFV could be instrumental, even pivotal, in making that escape, but I think it’s becoming clear that other stuff could also force the change in the network service and infrastructure model.  That wouldn’t prevent a cloud revolution for operators, but it could divide the transformation process into two phases—operations-driven and service-cloud-driven.  The result might be a delay of the second phase because the first could address (for a time) the converging revenue/cost-per-bit curves.

Intel needs to have the future of networking set by NFV and the cloud.  That means that they need to drive NFV PoCs toward field trials that include service lifecycle management, operations efficiency, and agility.  And that may not be easy.

In nearly every operator, NFV PoCs are totally dominated by the standards and CTO people, and the scope of the trials has been set at what the term “PoC” stands for in the first place—proof of concept.  Can “NFV” work?  Somebody will have to promote broader operations integration here, and Intel would be a very logical source.  They could do that in three ways.

The first is to push both the NFV ISG and the OPNFV group to formally encourage PoC extensions, or new PoCs, that focus on operations and infrastructure-wide orchestration.  This is a given, I think, but the problem is that there will likely be enormous resistance from network vendors.  I think more will be needed.

The second approach would be to work closely with server vendors to take a broader and more aggressive stance on the scope of NFV.  HP has a very strong NFV suite and is a server partner of Intel and Wind River, for example.  Alcatel-Lucent and Ericsson have their own server strategies based on Intel.  Could Intel promote a kind of NFV-server-vendor group that could develop its own tests and trial strategy that’s aimed at broader orchestration and agility goals?

The final approach would be to actually field the necessary technology elements as Intel or Wind River products.  For this approach to generate any utility, Intel would have to preempt ongoing standards efforts and perhaps even create channel conflicts with its current server partners.  I think this is a last-resort strategy; they might do this if all else failed.

To my mind, Intel is committed to Door Number Two here whether it realizes it or not, largely by default.  Otherwise it’s exposed to the risk that NFV won’t develop fast enough to pay off on the Altera investment, and also to the risk that NFV itself will either be delayed in realizing those heady server numbers, or fail to realize them altogether.  I don’t think those would be smart risks for Intel to take.

How SDN Models Might Decide the “Orchestration Wars”

One of the interesting things about SDN is that it may be getting clearer because of things going on elsewhere.  We still have a lot of SDN-washing, more models of what people like to call “SDN” than most would like, but there’s some clarity emerging on just how SDN might end up deploying.  I commented on SDN trends in connection with HP’s ConteXtream deal last week, and some of you wanted a more general discussion, so here goes!

There have been three distinct SDN models from the first.  The most familiar is the ONF OpenFlow “purist” SDN, the one that seems to focus on SDN controllers and white-box switches.  The second is the overlay model, popularized by Nicira, which virtualizes networks by adding a layer on top, actually a superlayer of Level 3, not Level 4 as some would like to think.  The final model is “software-controlled routing/switching,” represented in hardware by most vendors (Cisco in particular) and in software by virtual-router products like Vyatta from Brocade.

Virtual switching and routing, and overlay SDN, are both the result of a desire to virtualize networks just as we did with computing.  In effect, these create a network-as-a-service framework.  That can be done with ONF-flavored SDN too, but the white-box focus has tended to push this model to a data center switching role, and to a role providing for explicit forwarding control in the other models.

Too many recipes spoil the broth more decisively than too many chefs.  Diversity of approach and even mission isn’t the sort of thing that creates market momentum.  What I think is changing things for SDN is a pair of trends that are themselves related.  The first is rapidly growing interest in explicit network-as-a-service, both for cloud computing and for inclusion in retail service offerings.  The second is NFV, and it may be that NFV is why the NaaS interest was finally kindled in earnest.

NFV postulates infrastructure controlled by a “manager” of some sort.  Initially this was limited to a virtual infrastructure manager, but many in the ISG are now accepting a “connection manager” responsible for internal VNF connectivity, and some are accepting that this might be extended to support connection to the user.  The important notion here is the “manager” concept itself.  You have to presume (the ISG isn’t completely clear here so “presuming” is a given) that a manager gets some abstract service model and converts it into a service.  That’s a pretty good description of NaaS.

If a manager can turn an abstraction into connections under the control of MANO in NFV, it’s not rocket science to extend the notion to applications where there’s no NFV at all.  I could use an NFV VIM, for example, to deploy cloud computing.  I could use a “connection manager” to deploy NaaS as a broad retail and internal service.
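
The “manager” pattern described here is easy to sketch in code.  The classes and fields below are purely illustrative (nothing in the ETSI specs defines these names); the point is simply that a manager accepts an abstract service model that says *what* is wanted and is responsible for *how* it gets realized:

```python
# Illustrative sketch of the "manager" concept: an abstract service model
# goes in, a realized service comes out.  All names here are hypothetical.
from abc import ABC, abstractmethod


class ServiceModel:
    """Describes what the service should do, not how it's built."""
    def __init__(self, name, endpoints, bandwidth_mbps):
        self.name = name
        self.endpoints = endpoints          # e.g. ["siteA", "siteB"]
        self.bandwidth_mbps = bandwidth_mbps


class Manager(ABC):
    """Anything that can turn a ServiceModel into a running service."""
    @abstractmethod
    def realize(self, model: ServiceModel) -> dict:
        ...


class ConnectionManager(Manager):
    """Realizes connectivity from an abstract model -- essentially NaaS."""
    def realize(self, model: ServiceModel) -> dict:
        # A real implementation would drive an SDN controller or device
        # configuration; here we just record the full-mesh intent.
        pairs = [(a, b) for a in model.endpoints
                 for b in model.endpoints if a < b]
        return {"service": model.name,
                "connections": pairs,
                "bandwidth_mbps": model.bandwidth_mbps}
```

The same `Manager` contract could front a VIM for cloud deployment or a connection manager for retail NaaS, which is exactly why the abstraction travels beyond NFV.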

In NFV, most would think of the connection manager as controlling SDN, meaning that there’s an SDN controller down below.  That would likely be true for NFV inter-VNF connections, and it could also be true for edge connections to NFV services.  But logically most “connections” beyond the NFVI in NFV would be made through legacy infrastructure, so connection managers should be able to control that too.  Some would use OpenDaylight, and others might simply provide a “legacy connection manager” element.

It’s this that makes things so interesting, because if we can use connection managers to create NaaS and we can have legacy connection managers, we can then use legacy infrastructure for NaaS.  The manager-NaaS model then becomes a general way of addressing infrastructure to coerce it into making a service that can be sold.
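
That coercion step can be made concrete with a small dispatch sketch, again with hypothetical names: the orchestration layer asks for a connection abstractly, and a broker decides whether an SDN controller or a legacy configuration path actually makes it:

```python
# Hypothetical sketch: abstract connection requests dispatched to either
# an SDN path or a legacy path, so NaaS works over both.
class SDNConnectionManager:
    def connect(self, a, b):
        # would push forwarding rules via an SDN controller
        return f"sdn-flow:{a}<->{b}"


class LegacyConnectionManager:
    def connect(self, a, b):
        # would drive CLI/NETCONF configuration on existing devices
        return f"legacy-circuit:{a}<->{b}"


class ConnectionBroker:
    """Routes abstract connection requests to the right manager."""
    def __init__(self):
        self.managers = {"sdn": SDNConnectionManager(),
                         "legacy": LegacyConnectionManager()}

    def connect(self, a, b, domain):
        return self.managers[domain].connect(a, b)
```

The caller never knows or cares which path was used, which is the essence of the manager-NaaS model.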

In TMF terms, this might mean that “managers” are providers of resource-facing services.  If that’s true, then orchestration at the service level, meaning within OSS/BSS, might be able to do all of the higher-level orchestration involved in NFV.  “Higher-level” here would mean above MANO, above the process of deploying and controlling virtual functions.

Oracle sort of positioned itself in this camp with their NFV strategy.  I commented that it was the most operations-centric view, that it had TMF-style customer-facing and resource-facing services, and that it seemed to be positioned not as a limited implementation of VNF orchestration but as a broader approach, perhaps one “above” NFV.

I’ve been saying for quite a while that you need total cross-technology, vertically-integrated-with-OSS/BSS orchestration to make the service agility and operations efficiency business cases for NFV.  There have always been three options for getting there.  First, you could extend NFV and MANO principles upward.  Second, you could extend OSS/BSS downward.  Third, you could put an orchestration stub between the two that did the heavy lifting of matching the environments of OSS/BSS and NFV.  How would an SDN-and-legacy NaaS model influence which of these options would be best, or most likely to prevail?
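
The third option, the stub, is perhaps the least familiar, so here is a minimal sketch of the idea under stated assumptions: the stub takes a service order shaped the way an OSS/BSS might emit it and splits the work between a MANO-side deployer and a legacy provisioner.  Neither interface is standardized in this form; every name here is invented for illustration:

```python
# Hypothetical sketch of an orchestration stub sitting between OSS/BSS
# and NFV, translating service-level orders into environment-level actions.
class OrchestrationStub:
    def __init__(self, mano, legacy):
        self.mano = mano        # callable: deploys VNF-based pieces
        self.legacy = legacy    # callable: provisions legacy pieces

    def fulfill(self, order):
        """Split an OSS/BSS service order across the two environments."""
        results = []
        for item in order["items"]:
            if item["kind"] == "vnf":
                results.append(self.mano(item["spec"]))
            else:
                results.append(self.legacy(item["spec"]))
        return results
```

The attraction of the stub is that neither the OSS/BSS nor MANO has to change; the cost is one more moving part to operationalize.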

It might not change much, even if the NaaS story comes about.  The NFV ISG has taken a very narrow position on its mission—it’s about VNFs.  If you presume that the evolution to NFV comes about because services are converted from appliance/device-based to VNF-based, then the easiest way to orchestrate would likely be to extend MANO upward.  If you presume that NFV deploys to improve service agility and operations efficiency, then orchestration has to provide those things, and even if you orchestrated VNF-based versions of current services you’d still have the same operations problems unless something attacked that area too.

There’s some pressure from operators conducting NFV trials to broaden the trial to include operations, and also some to demonstrate specific efficiency and agility benefits.  However, these trials and PoCs are based on the ISG model of NFV and so they’ve been slow to advance out of the defined scope of that body.  Operators haven’t told me of any useful SDN orchestration PoCs or trials, and most of the operations modernization work in operators is tied up in long-term transformation projects.

That’s what’s creating the race, IMHO.  NFV could win it by growing “up”, literally, toward the higher operations levels and “out” to embrace legacy elements and broader connection-based services.  SDN could win it by linking itself convincingly to an operations orchestration approach, and OSS/BSS could win it by defining strong SDN and NFV connections for itself.

Who will win is probably up to vendors.  OSS/BSS has always moved at a pace that makes glaciers look like the classic roadrunner.  NFV is making some progress on generating a usefully broad mission, but not very quickly.  So I’m thinking that the question will come down to SDN.  Can SDN embrace an orchestration-and-manager model?  The competitive dynamic that might be emerging is what will answer that question.