Making the Most of Intel’s Altera Deal

The decision by Intel to acquire custom-chip-and-FPGA vendor Altera is another indicator that networking may be driving incremental opportunity in the server space.  As the WSJ points out, more than half of Intel’s operating profits come from servers, though personal-system chips account for most of the company’s revenues.  You can’t afford to ignore a change in your profit-cow’s pasture, but what exactly is driving the changes?

There’s a lot you can do with Altera silicon, of course, but the big applications probably relate to networking.  FPGAs are a great way to handle those small but complicated communications-oriented functions that tend to bog down traditional servers.  If servers are going to serve in a network mission, you’d expect them to need that special stuff.

There is no question that servers have been getting more network-centric; that’s been true since we exited the age of batch processing, in fact.  There’s also no question that higher-speed connections to servers have started to influence server design, or that pools of servers linked in a virtualized data center or a cloud will change both the role of networks in connecting servers and the design of servers’ network interfaces.  Both of those have been true for some time too.  I think we have to look further.

One obvious factor in the shift in server design is the transfer of functionality from network devices to servers and server pools.  Both NFV and SDN would have the effect of making servers more “appliance-like” in what they do, and Altera’s FPGA technology has always been targeted in part at custom appliances and network devices.  The only question is whether the trend is large enough to justify Intel’s M&A decision.

I’ve been modeling NFV impact for a couple of years now, and my numbers have pretty consistently shown that optimal deployment of NFV would generate about 100,000 new data centers worldwide, making NFV the largest single source of new data centers.  It’s hard to say how many servers would be involved here because of the variety of density options, but I think it’s fair to say that we could be looking at tens of millions.  Nobody would sneeze at a market that big.
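As a rough sanity check on that “tens of millions” claim, the arithmetic is simple.  The 100,000-data-center figure comes from the model cited above; the per-site density values below are my own illustrative assumptions, not outputs of that model:

```python
# Rough sanity check on the modeled server counts.  The 100,000-data-center
# figure is from the model cited in the text; the per-site density range is
# an illustrative assumption, not part of that model.
new_data_centers = 100_000

for servers_per_site in (100, 250, 500):  # assumed density options
    total = new_data_centers * servers_per_site
    print(f"{servers_per_site} servers/site -> {total / 1_000_000:.0f}M servers total")
```

At any plausible density, the total lands in the tens of millions, which is the point.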

Intel seems to have grasped the scope of NFV opportunity some time ago.  Wind River’s Titanium Server (formerly Carrier-Grade Communications Server) targets the VNF hosting market and is the basis of a number of partnerships with server vendors.  Intel is also a Platinum member of OPNFV, the group driving an open-source reference implementation for NFV.

If we have all these new servers, we may have something that on the surface seems to defy NFV’s notion of “COTS,” because specialized server needs can justify specialized server technology when the volume is large enough to fill a bunch of servers.  Resource efficiency is critical in NFV and the cloud, and you don’t want to segment resource pools unnecessarily, but a requirement for special data handling that’s pervasive enough justifies its own pool.  Thus, if Intel failed to address the need, a pool of NFV servers could end up filled with another vendor’s chips.

Whether all this interest, including Altera, will pay off for Intel may depend on the pace and manner of NFV adoption—the question of what an “optimal deployment” of NFV would be.  The simple reality of any technology shift is that it happens to the extent that its benefits pay back on the investment needed to support the shift.  Given that the primary benefits of NFV deployment—operations efficiency and service agility—are not in fact delivered by in-ETSI-scope NFV, that may pose some questions for Intel as it digests Altera.

Perhaps the biggest question is whether the SDN track to success could be as valuable as the NFV track, or whether it could hurt Intel and servers in the long term.  I pointed out in a couple of earlier blogs that if you took the “manager” concept of NFV and pulled it out to create a plug-and-play combination of infrastructure and manager, you could then drive that combination without NFV and direct it at pure NaaS missions.

If you operationalize NaaS using OSS/BSS orchestration from any of the vendors who supply it, you could deliver at least a big piece of the operations efficiency and service agility benefits that NFV needs.  You could boost revenues with NaaS.  Given that, and given the fact that vendors like Cisco might love the idea of proving out network efficiency and revenue benefits with legacy devices, might you reduce the incentive to move to NFV?  It depends.

I think that as services evolve to incorporate more mobile/behavioral elements, network infrastructure will evolve toward the cloud.  Operationalizing the cloud is clearly a mission NFV could support, since VNFs and elements of mobile/behavioral services would look much the same.  The trick is making this happen despite those pesky NFV scope issues I’ve talked about.

For Intel, that may be the challenge to be faced.  Legacy network practices have blundered along in the traditional circuit/packet model and they could be on the verge of escaping that mold.  NFV could be instrumental, even pivotal, in making that escape, but I think it’s becoming clear that other stuff could also force the change in the network service and infrastructure model.  That wouldn’t prevent a cloud revolution for operators, but it could divide the transformation process into two phases—operations-driven and service-cloud-driven.  The result might be a delay of the second phase because the first could address (for a time) the converging revenue/cost-per-bit curves.

Intel needs to have the future of networking set by NFV and the cloud.  That means that they need to drive NFV PoCs toward field trials that include service-lifecycle-management, operations efficiency, and agility.  And that may not be easy.

In nearly every operator, NFV PoCs are totally dominated by the standards and CTO people, and the scope of the trials has been set at what the term “PoC” stands for in the first place—proof of concept.  Can “NFV” work?  Somebody will have to promote broader operations integration here, and Intel would be a very logical source.  They could do that in three ways.

The first is to press both the NFV ISG and the OPNFV group to formally encourage PoC extensions, or new PoCs, that focus on operations and infrastructure-wide orchestration.  This is a given, I think, but the problem is that there will likely be enormous resistance from network vendors.  I think more will be needed.

The second approach would be to work closely with server vendors to take a broader and more aggressive stance on the scope of NFV.  HP has a very strong NFV suite and is a server partner of Intel and Wind River, for example.  Alcatel-Lucent and Ericsson have their own server strategies based on Intel.  Could Intel promote a kind of NFV-server-vendor group that could develop its own tests and trial strategy that’s aimed at broader orchestration and agility goals?

The final approach would be to actually field the necessary technology elements as Intel or Wind River products.  For this approach to generate any utility, Intel would have to preempt its own efforts to encourage standards progress, and perhaps even create channel conflicts with its current server partners.  I think this is a last-resort strategy; they might do it if all else failed.

To my mind, Intel is committed to Door Number Two here whether it realizes it or not, largely by default.  Otherwise it’s exposed to the risk that NFV won’t develop fast enough to pay off on the Altera investment, and also to the risk that NFV itself will either be delayed in realizing those heady server numbers, or fail to realize them altogether.  I don’t think those would be smart risks for Intel to take.