Brocade Says a Lot About NFV: Is it the Right Stuff?

Most of our discussions of the competitive landscape in networking involve the larger firms: Alcatel-Lucent, Cisco, Ericsson, Huawei, Juniper, and NSN.  While it’s true these firms have the most influence and the greatest resources, they also have a powerful incentive to take root in the current market and become trees.  Mammals may be smaller, but mobility makes them more interesting, and in market terms the wolves of the SDN/NFV revolution might well be the smaller, more agile firms.

One such company is Brocade, who transformed itself from a storage network vendor to a broader-based network vendor with the Foundry acquisition.  More recently Brocade acquired the Vyatta soft switch and router, and it’s here that they became interesting from the perspective of SDN and NFV.  They’re a big player in OpenDaylight in the SDN space, and last year they took a data-center-centric vision of NFV on the road and got surprising carrier traction.

What Brocade said was essentially what enterprises already know: the network in an IT-dominated world starts with and orbits around the data center.  You have to architect the data center to build the network of the future.  What you host service features on for NFV, and what you make money with in cloud computing, is data center equipment married effectively to networking.  It’s all true of course, and it was a very different story than operators were hearing from the Big Six.  But Brocade wasn’t able to turn the story into actionable insights, and they lost the strategic influence they’d gained with it.  Now they want it back, and their earnings call yesterday gives us a chance to see what they think might work, and gives me a chance to express what I think about their approach.

Brocade saw an uptick in both SAN and WAN—five and nine percent sequentially, respectively.  On the call, they reiterated their key point—it’s about the data center—and augmented it by saying that the industry transformation now underway favored them.  That’s certainly true, and validated by the strategic influence uptick they had last year just by taking their story out to carrier executives.  They can get in to see the big guys, and tell a story there, which is important.

They also made a point about software networking, which is absolutely key.  The SDN and NFV revolutions are dilutive to the network equipment providers, and so there’s little incentive for them to drive things in a software direction.  Yet software is where we need to go.  The best strategy for a little guy against a giant is to lay a trap, which in this case is to accentuate the aspect of the revolutionary future that large players will be most uncomfortable with.  Your hope is to get them to pooh-pooh the thing that’s really important, because for them it’s really inconvenient.

Brocade thinks this formula is going to work, and they’re forecasting IP growth of 6-15% q/q.  That’s growth any of the Big Six would kill for, even at the low end.  A big part of their gains is going to come from their marriage of Ethernet fabric and software—from SDN—and that’s an area where Brocade management specifically feels they’ve got an agility edge over the incumbents.  They mentioned SDN and, most significantly, NFV on their call—in fact, they talked more about NFV than any of their larger competitors.  There is absolutely no question that Brocade thinks the network revolutions of our time are their big opportunity.

So where are the issues?  First, I’d have to point out that Brocade got a big pop in credibility last spring when their data-center-centric NFV story hit.  The problem was that they’d lost virtually all of it by the fall, and the reason was that they couldn’t carry the story forward in enough detail.  In the fall, operators said that they thought Brocade had the right idea but didn’t have a complete NFV solution baked yet.  They still felt that way this spring, despite the fact that Brocade plays in some of the NFV proofs of concept.  Thus, operators are on the fence with respect to whether Brocade is a wolf in my forest of competitor trees, or just crying wolf.

There’s some justification for that.  Brocade tends to position their Vyatta stuff as “NFV” when in fact it’s a prospective virtual function in the big NFV picture.  Where does that picture come from, if Brocade isn’t yet articulating it?  If the answer is “one of the Big Six” then Brocade is setting itself up to be a small follower to a giant leader.  If one of the major firms can drive NFV, why would they leave any spot for Brocade?  Not to mention the fact that the whole premise here is that those firms won’t drive it because it undermines their revenue base.  But if neither Brocade nor the Big Six offer the NFV solution architecture, Brocade is depending on an outside vendor like an IT giant to set the stage for it.

Might HP, or Oracle, or IBM, or Intel field the total NFV architecture of the future?  Sure, but in their own sweet time.  So here’s my view; Brocade has de-positioned its competitors but now it has to position itself and its allies.  A smart move at this point would be to lay out the right NFV architecture, address the questions, and assign the major roles to friendly players.  If Brocade could do that it could accomplish two critical things.

Thing One would be that it could help drive a business case for NFV.  Operators today, at the executive level, tell me that they are working to prove NFV technology but their trials are not yet comprehensive enough to prove a business case.  That’s because their scope is too limited—service agility and operations efficiency are secured meaningfully only if you can span the network, not just a tiny hosted piece of it.  If Brocade can advance NFV, their victory in positioning there could be meaningful.  Otherwise they’re playing checkers with the Big Six.

Thing Two is that it could make them a kind of “fair broker of NFV”, a player who is seen as understanding the process well enough to fit the pieces together in the right way.  That they have a nice big piece themselves only proves they have some skin in the game.

So that’s where things stand with Brocade, as we head into a critical period.  Operators tend to do strategic technology planning for the coming year in the September-November period.  This would be a good time for Brocade not to just stand, but to stand tall.


HP’s and IBM’s Numbers Show a Faceoff–On NFV?

HP reported its results and the numbers were favorable overall, with the company delivering year-over-year revenue growth for the first time in three years.  The only fly in this sweet ointment was that the great majority of the gains came in the PC division, which saw a 12% increase in revenue that management attributed to a gain in market share.  Even HP doesn’t think that sort of gain can be counted on for the future, and so it’s really the rest of the business where HP’s numbers have to be explored.  To do that, it’s helpful to contrast them with IBM’s.

HP revenues were up a percent overall, while IBM’s fell by two percent.  At HP, hardware revenues were up overall and software was off, while at IBM the opposite was true.  Both companies saw revenues in services slip.  For HP, the PC gains were a big boost for hardware (and IBM doesn’t have a PC business any more), but the industry-standard server business also improved for HP, and IBM is selling that business off to Lenovo.

It’s hard not to see this picture as an indication that IBM is betting the farm on software, and I think that the Lighthouse acquisition and the deal with Apple reinforce that view.  IBM, I think, is banking on mobility generating a big change in the enterprise and they want to lead the charge there.  HP, on the other hand, seems to be staying with a fairly “mechanical” formula for getting itself together, having announced no real strategic moves to counter IBM’s clear bet on mobility.  The major reference to IBM on HP’s call was the comment that HP’s ISA business benefitted from IBM’s sale of that line to Lenovo.

One bets on hardware, one on software.  One focuses on transformation in a tactical sense, and one on the strategic shifts that might transform the market.  Both companies took a dip in share price in the after-hours market, so clearly the Street wasn’t completely happy with either result (no surprise).  Which bet is the better one?

I think that hardware is a tough business these days, no matter where you are.  The margins are thin and they’re going to get thinner for sure.  You can divide systems into two categories—ISA and everything else—and the former is going to get less profitable while the latter dies off completely.  Given this, you can’t call hardware a strategic bet.  But, every cloud sale, every NFV deployment, every application deployment, will need hardware to run on.  If HP can sustain a credible position in the ISA business as IBM exits, it becomes the only compute giant that can offer hardware underneath the software tools that will create the cloud, NFV, maybe even SDN.  HP, who has not only servers but some network gear as well, is in a position to perhaps be a complete solution for any of these new applications.  IBM will have to get its revenue from software and services alone.

But it’s software and services that are likely to drive the revolution.  If IBM is guessing right about the future of mobile devices in empowering workers, they could tap into a significant benefit case that would drive integration and consulting revenues as well as software.  They could shape the next generation IT paradigm.  The question is whether they will do all of that, given that in the near term they’re not likely to gain much from the effort.  It’s always been a challenge to get salespeople to work on building a business instead of making quota in the current quarter.  Can IBM do that?

Interestingly, the 2015 success of both companies might depend on the same thing, something neither of them is truly prepared for—NFV.  Neither IBM nor HP mentioned NFV in their calls, but NFV may be the narrow application of a broad need that both IBM and HP could exploit to improve their positions.

NFV, in its ETSI form, is about deploying cooperative software components to replace a purpose-built appliance.  This has some utility in the carrier space, but most operators say that capex savings from this sort of transformation wouldn’t provably offset the additional complexity associated with orchestrating all these virtual functions.  What’s needed is a broader application of management and orchestration (MANO) that can optimize provisioning and management of anything that mixes IT and network technology—including cloud computing.

IBM and HP are both betting big on the cloud.  If the cloud has to be agile and efficient, it’s hard to see how you could avoid inventing most of the MANO tools that the ETSI ISG is implying for NFV.  Thus, it’s easy to see that a vendor with a really smart NFV strategy might end up being able to improve operational agility and efficiency for all IT/network mixtures, and boost the business case.  NFV might actually be critical for cloud services to support mobile productivity enhancement, both in making the applications themselves agile and efficient and in managing mobility at the device level.  MANO is the central value of NFV; make it generalized and you have a big win.  But it’s probably easier to generalize something you have than to start from bare metal, and it’s hard to say what HP or IBM has already.
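
To make that concrete, here’s a minimal sketch (in Python, with every name invented for illustration) of why generalized MANO matters: the same deploy/monitor/remediate lifecycle loop applies whether the thing being managed is a VNF, a cloud application component, or a mix of the two.

```python
# A hedged sketch of generalized MANO: one lifecycle loop for VNFs and cloud
# application components alike.  Everything named here is hypothetical and
# only illustrates the shape of the argument, not any ETSI or vendor design.

import random

def deploy(component: str) -> dict:
    return {"name": component, "healthy": True}

def monitor(instance: dict) -> bool:
    # Stand-in for real telemetry; randomly report an occasional failure.
    return random.random() > 0.2

def remediate(instance: dict) -> dict:
    # Redeploy on failure -- identical handling for VNFs and cloud apps.
    return deploy(instance["name"])

# A "service" mixing a network function with ordinary application tiers.
service = [deploy(c) for c in ["edge-firewall-vnf", "crm-web-tier", "crm-db"]]
for cycle in range(3):
    service = [inst if monitor(inst) else remediate(inst) for inst in service]
print([inst["name"] for inst in service])
```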

HP had a strong NFV story in its OpenNFV positioning, but operators still tell me that there’s not much meat to this beyond OpenStack.  IBM has a much better implementation—SmartCloud Orchestrator is a TOSCA-based model for cloud MANO that could be easily converted to a complete MANO story—but they have been so silent in the space that most operators say they’ve not had an NFV presentation from IBM at all.

It’s my view that MANO is the point of greatest risk for HP, and not just in NFV.  If IBM were to come out swinging with SmartCloud Orchestrator even as it’s currently structured, they could claim better operationalization of all virtual-resource computing.  That gets them a seat at a lot of tables.  Furthermore, it would make it harder for HP to link its own hardware exclusively to network and cloud opportunities.  If you can drive the business case for the cloud, you can probably assume you can sell all the pieces to the buyer.  If somebody else (like IBM) drives the software side, it’s in their interest to commoditize the hardware part—throw it open for all comers to reduce the price, raise the ROI, and marginalize others who might want to control the deal.

I’m not trying to say that NFV is the answer to everyone’s fondest wishes.  I’m saying that wish realization will involve a lot of the NFV pieces, so it would be easier for someone who has NFV to stick quarters under a lot of pillows than it would be for somebody who has nothing in place at all.  Watch these two companies in their positioning of MANO-like tools; it may be the signal of which will emerge as the winner in 2015 and beyond.


A Carrier’s Practical View of SDN

Yesterday I talked about the views of a particular operator on NFV trials and evolution, based on a conversation with a very knowledgeable tech guru there.  That same guru is heavily involved in SDN evolution and it’s worthwhile to explore the operator’s SDN progress and directions.

A good place to start is with the focus of SDN interest, and where the operator thinks SDN trials and testing have to be concentrated.  According to this operator, metro, mobile, and content delivery are the sweet spots in the near term.  It’s not that they don’t believe in SDN in the data center or SDN in the cloud or in NFV, but that these applications are less immediately critical and offer less potential benefit.  In the case of data center SDN, obviously, the drive would depend on a large enough data center build-out to justify it, so it’s contingent on cloud and NFV deployment.

The issue the operator wants to address in the metro is that metro networks are in general aggregation networks and not connection networks, but we build them with connection network architectures.  Metro users are connected not to each other, directly, but to points of service where user experiences (including messaging or calling) are provided.  One logical question, asked by my contact here, is “What is the optimum connection architecture for aggregation in the metro?”  Obviously that will be different for residential wireline, wireless backhaul, and CDNs.  With SDN they should be able to create it.

For residential wireline networks, for example, the operator is very interested in using SDN as a means of managing low-layer virtual pipes that groom agile optics bandwidth.  One obvious question is whether emerging SDN-optical standards have any utility, and the operator thinks that will depend on the nature of top-layer management.  “Logically we’d probably control each layer separately, with the needs of the higher layer driving the commitments of the layer below.  But what if there is no top-layer management?”  The operator sees having an SDN controller do everything as a fall-back position should there be no manager-of-managers or policy feedback to link optical and electrical provisioning.

Even here, the operator is changing their view.  At one time they believed that it was essential for optical equipment to understand the ONF OpenFlow-for-optics spec, but now they’re increasingly of the view that having OpenDaylight speak a more convenient optical-control language out of one plugin and OpenFlow out of another would be a more logical approach.
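
Here’s a rough sketch of what that plugin-per-layer idea might look like.  This is not OpenDaylight code; the classes and methods are hypothetical, and the point is only that electrical-layer demand can drive optical-layer grooming through a thin coordination layer.

```python
# Illustrative only: one plugin speaks an optical-control language, another
# speaks OpenFlow, and a coordinator stands in for the "top-layer management"
# the operator isn't sure will exist.  All names are invented.

class OpticalPlugin:
    def groom_lambda(self, a_end: str, z_end: str, gbps: int) -> str:
        # Would translate to whatever the optical gear actually understands.
        return f"lambda:{a_end}-{z_end}:{gbps}G"

class OpenFlowPlugin:
    def build_path(self, a_end: str, z_end: str, transport: str) -> str:
        # Would push flow rules onto the packet layer over the given transport.
        return f"flows:{a_end}->{z_end} over {transport}"

class LayerCoordinator:
    """Higher-layer needs drive the commitments of the layer below."""
    def __init__(self):
        self.optical = OpticalPlugin()
        self.packet = OpenFlowPlugin()

    def provision(self, a_end: str, z_end: str, gbps: int) -> str:
        transport = self.optical.groom_lambda(a_end, z_end, gbps)
        return self.packet.build_path(a_end, z_end, transport)

print(LayerCoordinator().provision("metro-hub-1", "central-office-9", 10))
```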

Mobile SDN, as I’ve said in other blogs, seems to cry out for the notion of a new SDN-based service model that would, through forwarding control, create the agile path from the PGW to the cell where the user is currently located.  But the operator would also like to see some thinking around whether mobile Internet, and content in particular, don’t suggest a completely different model for forwarding everything to mobile users.  “Why couldn’t I make every mobile user a kind of personal-area network and direct traffic into that network from cache points, gateways, whatever?  We need some outside-the-box thinking here.”

This particular point raises a question for SDN management, the one that’s the most interesting to this particular operator.  If a collection of devices is designed to provide a well-known service like Ethernet, we have established FCAPS practices that we can draw on, based on the well-understood presumptions of correct behavior and established standards.  How do you represent something that isn’t a well-known service?  What would the “management formula” for it be?  According to my contact here, the utility of SDN may depend on the question of how management interplays with controller behavior when you create something new and different.

Management in SDN is an issue in any event, and at many levels.  First, while it is true that central control of forwarding can create a “service”, can that central point provide a management view?  Obviously the fact that the controller knows what the state of the nodes and paths in an OpenFlow network should be doesn’t mean that the real world conforms to that view.  In fact, if we could assume that sort of thing we’d have declared “management by endorsement” the right answer to all our problems ages ago.  But what is the state of a node?  Absent adaptive behavior at the nodal level, what happens when a node fails?  If the adjacent nodes “see” their trunk to the failed node go down, they could poison all the forwarding entries pointing to that trunk, in which case the controller would presumably get route requests for the packets the impacted rules had been forwarding.  But will it?  Is there still a path to the controller?  And what about the state of the hardware itself?  Don’t we need to read device MIBs?  If we do, how is the state of a node correlated with the state of a service?
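
A tiny sketch of that first-level problem: the controller’s intended state and the state the devices themselves report are two different things, and a management view has to reconcile them.  The data structures here are purely illustrative, not OpenFlow or MIB models.

```python
# Sketch of intended-versus-observed reconciliation; names and structures
# are invented for illustration.

INTENDED = {"node-A": "up", "node-B": "up", "node-C": "up"}

def poll_device_state() -> dict:
    # In practice this would come from device MIBs or controller status checks.
    return {"node-A": "up", "node-B": "down", "node-C": "up"}

def reconcile(intended: dict, observed: dict) -> list[str]:
    """Return the discrepancies a management view would have to surface."""
    return [n for n, state in intended.items() if observed.get(n) != state]

for node in reconcile(INTENDED, poll_device_state()):
    # A real controller would now re-route: poison rules pointing at the node
    # and expect route requests from its neighbors -- if a control path to
    # those neighbors still exists.
    print(f"{node}: observed state disagrees with controller intent")
```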

The second level is representing service-independent devices in a service-driven management model, where we expect Ethernet services to be built using gadgets that have Ethernet MIBs.  Here’s a specific question from the operator:  Assume that you have a set of white boxes providing Ethernet and IP forwarding at the same time, for a number of VPN and VLAN services.  These boxes have to look like something to a service management system, so what do they look like?  Is every box both a router and a switch, depending on who’s looking?  Is there one big virtual router-and-switch instance created for management purposes?  If so, who creates it and parses out the commands that manage it?

This particular operator ran into these questions when considering how NFV would see, use, or create SDN services.  Look at a service chain as an example.  In “virtual terms” it’s a string of pearls, a linear threading of processes by connections.  But what connections, in particular?  How does NFV “know” what the process elements in the service chain expect to see in the way of connectivity?  The software has to be written to some communications API, which presumes some communication service below.  What is it?  A “logical string of pearls” might be three processes in an IP subnet, or linked with GRE tunnels, or whatever.  How do we describe to NFV what the processes need so we can set them up, and how do we combine the needs of the processes with the actual infrastructure available for connecting them to come up with specific provisioning commands?  And remember, if we say that a given MANO “script” has all the necessary details in it, then how do we make that script portable across different parts of the network and different vendors?
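
One hedged way to think about the answer is a connectivity descriptor carried alongside the VNF list, abstract enough that MANO could map it onto an IP subnet, GRE tunnels, or an explicit SDN path.  The field names below are my own invention, not anything from the ISG.

```python
# Illustrative service-chain descriptor: what the chained processes need,
# stated abstractly, so the realization can vary by infrastructure.

service_chain = {
    "name": "business-vcpe",
    "vnfs": ["firewall", "nat", "dns"],
    "links": [
        {"from": "firewall", "to": "nat", "needs": {"type": "L3", "mtu": 1500}},
        {"from": "nat", "to": "dns", "needs": {"type": "L3", "mtu": 1500}},
    ],
}

def realize(link: dict, infrastructure: str) -> str:
    """Map an abstract link requirement onto whatever the local network offers."""
    options = {"ip-subnet": "place both VNFs on a shared subnet",
               "gre": "build a GRE tunnel",
               "sdn": "program an explicit forwarding path"}
    return f"{link['from']}->{link['to']}: {options[infrastructure]}"

for link in service_chain["links"]:
    print(realize(link, "sdn"))
```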

Metro missions seem to dodge many of these issues because the metro network is already kind of invisible in residential broadband, mobile, and CDN applications.  Progress there, this operator hopes, might answer some of the questions that could delay other SDN missions, and management hopes that progress will come—not only from their efforts but from trials and deployments of other operators.  I hope so too.


A Look at an Operator’s NFV Position

I had an interesting discussion late last week with a true thought leader in the service provider networking space.  Not a VP but a senior technical person, this individual is involved in a wide range of both SDN and NFV activities for his company, and also involved with other operators in their own efforts.  It was interesting to hear the latest in our “revolutionary” network technologies from someone inside, and I’ll spend a couple blogs recounting the most important points, first for NFV and then for SDN.  I’ve changed a few unimportant points here for confidentiality reasons.

According to my contact, his company is certain to start deploying some “real NFV field trials” and also early customer offerings in 2015 and very likely to be doing something at the field trial level in late 2014.  However, it’s interesting to note that the provider he represents is taking a kind of “NFV by the layers” approach, and perhaps even more interesting to know why.

Early NFV initiatives for the operator are focused on service chaining applications aimed at virtual CPE, with the next priority being metro/mobile and content.  Service chaining is considered a “low apple” NFV opportunity not only because it involves fairly simple technologies, but also because the customer value proposition is simple and the provider’s costs can be made to scale reasonably well.  It can also prove out NFV orchestration.

The service chaining application for the business side is looking at two questions: whether you can really build a strong user value proposition for self-provisioned access-edge service features, and whether the best model for the application would be one where a custom box hosts the features on the customer premises or one where a cloud data center hosts them.  The reason for this particular focus is that the provider does not believe that NFV management is anywhere near mature enough to secure a significant improvement in operations efficiency, so service agility would have to be the primary driver.

The challenge on the demand side is a debate over whether business users would buy edge services beyond the obvious firewall and other security services if they were offered.  An example of such a service is DHCP per branch, which could at least let branch offices run local network applications if they lose access to a corporate VPN.  Similarly, having some form of local DNS could be helpful where there are branch servers.  Other services might include facilities monitoring, virus scanning, and local network and systems management.

There’s an internal debate on the credibility of on-demand portals.  Some provider sales personnel point out that buyers have not been beating the doors down for these services, but research seems to suggest that may be because they’re not inclined to be thinking about what they might buy were it offered; it’s not offered today and so they don’t have any reason to evaluate the benefit.  There’s also a question of how much these services would have to be integrated with centralized IT support to sell them to larger enterprises, who are the easiest sales targets because of the revenue potential.

On the residential side, the provider is really interested in how “Internet of Things” awareness, driven by home control initiatives from players like Apple, might open the door to a home monitoring application.  The reason is that this operator has concluded that residential gateway applications are not a good service chaining opportunity; the devices now used are inexpensive and typically installed for a long period, and central hosting would be mandatory if the goal was to replace customer-prem equipment.  If home control could be sold and made credible on a large enough scale, and with a high enough level of feature sophistication, could it justify the application?

The next layer of interest for this operator is the management piece.  As I’ve noted, the operator doesn’t think the NFV management story is baked at this point, and they’re not sure how much could be gained in efficiency under a full implementation.  If NFV practices could improve overall management efficiency by 15% or more, then it would be fairly easy to justify using NFV MANO even to operationalize legacy components, but nobody is offering much of a story in that area yet and this operator won’t have an NFV deployment of enough scale to test management practices unless/until service chaining is deployed for both residential and business trials.  My contact is hoping to see NFV management advances that would let them test MANO more broadly than for pure VNFs but isn’t hopeful.  That means the second layer of NFV wouldn’t get wrung out until 2015.

The issue of breadth of MANO also applies in the third layer of NFV testing, which is the way in which NFV might interwork with SDN.  Here the primary area of interest is the metro/mobile network where EPC and CDN at the “logical network” level combine with agile optics and aggregation networks at the physical level.  The issue for the operator in this case has been a lack of clarity on how SDN and NFV interwork, something that they’ve pressed with both the ONF and the NFV ISG.

The particular area of concern is the management intersection.  NFV management, you’ll recall, is something this operator thinks is fairly immature, and they have a similar view on SDN management.  How the two could be combined is the function of two unbounded variables, as they say, and yet somehow there has to be a solution because the most obvious application of both SDN and NFV is the metro intersection of mobile EPC and CDN.  The operator would like to run a trial in this space in 2015 but so far is having issues defining who to work with.

This operator’s view of NFV justification is simple.  The “capex reduction” model offers them limited benefits, to the point that they wonder whether feature-agile CPE and portal-based service chaining would be a better deal.  They are interested in the service agility justification for NFV but they’re not sure whether the buyers really have enough agile demand to justify agile supply.  They are very interested in management/operations efficiency but they don’t think anyone is telling that story thoroughly.

This detailed look at NFV progress seems to show the same problem my survey earlier this year showed.  Operators are still grappling with sizing the business benefits of NFV, and part of that grappling is simply figuring out what benefits are actually out there.  We are definitely solving NFV problems, answering NFV questions, in early trials.  We’re just not attacking the big ones yet, and until we do we can’t judge just how far NFV can go and what it can do.


Finding the One Driver for the Future

Networking has, for decades, seemed to advance based on changes in how we do stuff.  We progressed from TDM to packet, from SNA to IP in business networks, and now we’re moving (so they say) from legacy IP and Ethernet to SDN and NFV and from electrical to optical.  Underneath this seeming consistency is an important point, which is that we had not a whole bunch of shifts in networking but two, and not on the “supply side” but on the demand side.

Starting back in the ‘50s when we began to apply computing to business, we realized that information had to be collected to be optimally useful.  Yes, you can distribute computing power and information to workers, but you have to collect stuff for analysis and distribution from a central point.  If you don’t believe that, consider how well your bank would work if every teller had to keep an independent record of the account of every customer who walked into a branch to make a deposit or withdrawal.

When computing made what was arguably the first of its major advances—in the mid-60s, with the advent of the IBM System/360 mainframe—we were still pushing bits at about 1200 per second.  Even 20 years later we were still dealing with WAN data rates measured in kilobits, at a time when we’d already advanced to minicomputers and PCs.  The point is that a public network based on relatively low-speed analog and TDM created a kind of network shortfall, and there was a lot of investment to be made simply exploiting the information centralization that had occurred while we were poking around with Bell 103 and 212 modems.

The challenge we have now is that we caught up.  We’ve had startling advances in network technology and so we can now connect and deliver the stuff we’ve centralized.

The second shift came about with the Internet and the intersection of the Internet with our first trend.  The Internet gave us the notion of “hosting”, or “experience networking”, where we used communications not to talk with each other but with some centralized resource.  Broadband made that access efficient enough to be valuable for education, shopping, and entertainment.  We’re now pushing broadband to the consumer to the point where bandwidth that would have cost a company ten grand a month (T3 access) twenty years ago is less than a hundred a month today.

Some people, Cisco most notably, postulate in effect that what should happen now is a kind of reversal of the past.  Centralized information and content burst out of its cage as network costs were driven downward; the network was the limiting factor.  Now the idea is that the network’s greater capacity will justify a bunch of new content, new applications, new stuff that will drive up usage and empower greater network investment.

I’m not a fan of this view.  Lower cost of distribution can reduce the barriers to accepting new applications or experiences, but it can’t create the experiences or information.  Videoconferencing is a good example; a decade of promoting videoconferencing has proven that if we give it away people will take it, but they’ll avoid paying for it in the majority of cases.  Networking can’t move forward by doing stuff for free; you can’t earn an ROI on a free service.

What limits the scope, the value, of networking today?  You could argue that it’s not anything in networking at all but something back inside, the information or experience source.  Back in the mid-60s I heard a pioneer IT type in a major corporation tell executives that the computer could double the productivity of their workers.  Twenty years later, my surveys showed that almost 90% of executives still believed that was possible, and only a small percentage less believe it today.  But they believe that information will do the job and not connection.  The networking revolution of the future is dependent on IT, on backfilling the information/experience reservoir with more stuff to deliver.  The cloud, or how the cloud evolves, is more important to networking than SDN or NFV because it could answer the questions Why do we want to do this and How will we make money on it?

That doesn’t mean that we have to sit on our hands.  SDN and NFV represent mechanisms for adapting what the network can do and how cheaply it can do it.  They can change the basic economics of networking so that things that were impossible a decade ago become practical or even easy now.  Mobile networking is that kind of new force, and so what we should be looking to now to transform both networking and IT is how SDN and NFV and the cloud would intersect with the mobile trend.

Back in the mid-60s we were collecting transaction information by keypunching information from retail records.  How much broadband do you think businesses would be consuming now if that application was still the driver of data movement?  At some point in the future, when every worker has a kind of super-personal-assistant in the form of a mobile device and uses this gadget in every aspect of their jobs, we’ll look back on today’s models of business productivity and laugh.  Same with entertainment.  But it’s just as laughable to assume that we’d advance networking without mobility as to assume that punched cards could drive broadband deployment.

The battle for network supremacy and the battle for IT supremacy have always been symbiotic in the past.  Cisco’s success was as much due to the impact of the PC on business networking, and the shift away from SNA that the PC created, as it was to the Internet—maybe even more.  The question is whether the next big thing will be, as past ones have been, a step by a new player into a niche created by another, or a leap by a player who has both network and IT credentials.  Cisco and IBM, arguably the giants in their respective fields, hope it’s the latter and that they’ll do the leaping.  The standards processes, the VCs, those who want to continue both network and IT populism hope that we can somehow do the former and advance as an industry.

Can we?  None of our past successes in networking or IT were fostered by standards and collective action.  I’d hope, as most of you likely do, that it can be different this time, but great advances in an information age are likely to demand great changes with massive scopes of impact, and it’s not going to be easy to let go of all our little projects and envision a great one.  But only a great change can bring great results.  Somehow we have to fuse IT and networking together, and into mobility.  Otherwise we’re going to cost-manage until we’re promoting accounting degrees instead of computer science degrees.


What Would Cisco, IBM, or Others Have to Do to Win at the IT/Network Boundary?

Yesterday, in the wake of earnings calls from both Cisco and IBM, I blogged that IBM was at least working to build fundamental demand for its stuff by engaging with Apple to enhance mobile productivity for enterprises.  I then commented that the challenge would be in converting this kind of relationship into some structured “middleware” that could then be leveraged across multiple business applications.  My closing point was that almost half of the total feature value of new middleware was up for grabs, something that could reside in the network or in IT.  It’s time to dig into that point a bit more.

If you look at normal application deployment, you see what’s totally an IT process.  Even multiple-component applications are normally deployed inside a static data center network configuration, and so it’s possible to frame networking and IT as separate business-support tasks, cooperating and interdependent but still separate.  While most companies unite IT and networking under a CIO, most still have a network head and an IT head.

The cloud, SDN, and NFV potentially change this dynamic.  OpenStack has an API set (Nova) to deploy compute instances and another (Neutron) to connect them.  At least some of the SDN models propose to move network functionality into central servers, and NFV is all about hosting network features.  The broad topic of “network-as-a-service” or NaaS reflects the goal of businesses to make networks respond directly to application and user needs, making them in a true sense subservient to IT.  If you apply virtual switches and components and overlay technology (like VMware) then you can create a world where applications grab all the glory in an overlay component and networking is all about plumbing.
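
For readers who haven’t touched OpenStack, here’s roughly what that Nova/Neutron split looks like using the openstacksdk client.  It assumes a configured cloud entry called “mycloud” and placeholder image and flavor IDs, so treat it as illustrative rather than production code.

```python
# Minimal sketch: Neutron-side calls create connectivity, Nova-side calls
# deploy a compute instance onto it.  Placeholders marked below.
import openstack

conn = openstack.connect(cloud="mycloud")

# Neutron side: create the network the instances will share.
net = conn.network.create_network(name="app-net")
conn.network.create_subnet(network_id=net.id, ip_version=4,
                           cidr="10.0.0.0/24", name="app-subnet")

# Nova side: deploy a compute instance attached to that network.
server = conn.compute.create_server(
    name="app-server",
    image_id="IMAGE_ID",    # placeholder
    flavor_id="FLAVOR_ID",  # placeholder
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```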

The question, of course, is how NaaS is provided.  NaaS is like any other kind of networking—you have to be able to model what you want, deploy it, and manage it.  Clearly you can’t manage virtual resources alone; real bits get pushed by underlying switches even in the VMware model.  Furthermore, Nova could in theory offer “hosting as a service” and be as disruptive to the current data center model as Neutron and NaaS would be to networking.  The point is that there’s a big functional chunk around this “as-a-service” stuff.

And it’s up for grabs.  Virtually no network operators and few enterprises believe that the new model of the cloud is mature and well-understood.  If you focus on the subset of enterprises who are looking for those compelling new productivity benefits—the ones that could drive new tech spending—then no statistically significant portion of the base believes they’re ready to deploy this new model.

The closest we’ve come to reality in our tech evolutions to date is with the cloud and its relationship with NFV.  Cloud computing for enterprises has been mostly about server consolidation; users tend to deploy fairly static application models to the cloud.  While this is helpful to a point, most enterprises agree that point-of-activity empowerment through the marriage of mobility of devices and agility of information is the best hope for new benefit drivers.  This kind of stuff is far more dynamic, which is where NFV could come in.

Service features can also be static, as most of the “service chaining” proofs of concept and hype demonstrate.  A company who buys a VPN is likely to need a pretty stable set of connection adjunct tools—firewall, NAT, DHCP, DNS—and even if they buy some incremental service they’re likely to keep it for a macro time once they decide.  Thus, a lot of the NFV stuff isn’t really much different from server consolidation; it’s a low apple.  The question is whether you can make something dynamic deployable and manageable.  The Siri-like example I’ve used, and the question “What’s that?” illustrate information dynamism, and you could apply the question to a worker in front of a pipe manifold or electrical panel or to a consumer walking down a commercial boulevard.

My point in all of this is that the essential element in agile cloud or NFV deployment is highly effective management/orchestration, or MANO.  IBM’s answer to MANO in the cloud is its “SmartCloud Orchestrator”, which is, as far as I know, the only commercial MANO tool based on TOSCA, the standard I think is the best framework for orchestrating the cloud, SDN, and NFV.  Some inside IBM tell me that they’re looking at a “Service Orchestrator” application of this tool for NFV, and that it’s also possible NFV and the cloud will both be subsumed into a single product, likely to retain the current name.
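
To give a flavor of what “TOSCA-based” means: a TOSCA service template declares node types and the relationships between them, and the orchestrator walks that graph to decide what to deploy and in what order.  The sketch below is a simplified, TOSCA-inspired structure rendered as a Python dict, not a literal TOSCA document or anything from IBM’s product.

```python
# TOSCA-flavored illustration only: node templates plus a trivial dependency
# walk of the kind a TOSCA orchestrator performs.

service_template = {
    "node_templates": {
        "web_app": {
            "type": "tosca.nodes.SoftwareComponent",
            "requirements": [{"host": "app_server"}],
        },
        "app_server": {
            "type": "tosca.nodes.Compute",
            "capabilities": {"host": {"properties": {"mem_size": "4 GB"}}},
        },
    }
}

def deployment_order(template: dict) -> list[str]:
    """Hosts come before the things hosted on them."""
    nodes = template["node_templates"]
    hosted = {n for n, d in nodes.items() if d.get("requirements")}
    return [n for n in nodes if n not in hosted] + sorted(hosted)

print(deployment_order(service_template))
```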

So here’s IBM, explicitly targeting productivity enhancement and having the best current core tool for agile-component MANO.  You see why I say that Cisco has to get on the ball.  It’s far from certain that IBM actually plans to broaden SmartCloud Orchestrator to target the full SDN/NFV/cloud universe, or that they could be successful if they did.  After all, most of you reading this have probably never heard of the product.

Cisco’s ACI is an SDN strategy that says that the current network can be made agile through the addition of APIs that allow applications to manipulate services more effectively and create them from infrastructure.  It’s a more logical approach for most enterprises and even operators because it protects a rather large asset base, but the VMware approach and in particular the partnership with Arista demonstrates there’s another way.  All you have to do is build an overlay network and couple it downward to cheaper boxes.  You get agility immediately, and as you age out your current technology you can replace it with plumbing.  If you connect IBM’s SmartCloud approach to this, you get something that could answer a lot of buyer questions in the cloud, SDN, and NFV.

The big bugaboo for IBM here, and for VMware and Arista and Cisco and everyone else, is the management part.  We still, as an industry, don’t have a validated model for managing transient virtual assets and the transient services or applications created from them.  We are thus asking everyone to dumb down to best efforts at the very moment we’re asking workers to rely on point-of-activity empowerment to make them more productive.

This makes management the boundary point that IT and networking have to vie to reach.  For IBM, coming from the potential strong base of productivity apps designed for tablet/cloud and with the best MANO offering available from a big player, success could be little more than delivering what’s promised.  For Cisco, it’s a matter of creating a complete solution for agile applications and resources that’s credible, not just Chicken-Little-the-Sky-is-Falling PR about how video traffic is destined to swamp networks if everyone doesn’t invest in more bits.  And of course, somebody else might step up.  We’re in the early stages of the future here and there’s plenty of maneuvering room.


The Fight at the Network/IT Border–and the Fighters

Anyone who believes in “cyclical spending” or “refresh cycles” or “secular recovery” in tech should take another look at the numbers after both Cisco and IBM reported yesterday.  We are obviously in a general economic recovery and yet tech is stagnant.  As it happens, though, the same two companies’ reports offer some insights into what comes next.  We have people who are thinking strategically, and those who are not.  We have people singing their positioning song too low, and others too stridently.  It’s like a soap opera.

The significant thing about both Cisco and IBM is that both companies are stuck in revenue neutral at best.  IBM has suffered revenue losses for nine quarters, and Cisco also continued its year-over-year decline in the revenue line.  Given that these companies are the market leaders in their respective spaces, there’s really only one possible conclusion, which is that buyers are trying to cut costs and technology is one place they’re cutting.  That shouldn’t surprise these guys; they see themselves cutting (Cisco plans to slash another 6,000 jobs) but somehow everyone doesn’t get the message.

There is nothing complicated about technology deployment.  A company these days has not one tech budget but two.  The first is the money that simply sustains their current capabilities.  The second is money that is added to the pot because there are new business benefits compelling enough to meet the company’s ROI targets—the “project budget”.  It’s this budget that creates longer-term revenue growth because it adds to the pool of deployed technology.  The sustaining budget is always under pressure—do more for less.  Historically, in good times, the project budget is bigger than the sustaining budget.  For the last six years, companies have reported their project budgets shrinking, and now we’re almost at the 60:40 sustaining versus project level.

I know that at least some people in both IBM and Cisco know this, because I’ve had conversations about it.  The interesting thing is that the two companies, facing the same problem, are responding very differently.

IBM’s theory is simple.  We have to reignite long-term technology commitment, and that’s what their focus on mobility is designed to do.  The theory is that mobile empowerment is the largest single opportunity to gain worker productivity, so it brings the largest possible benefit case to the table.  IBM wants to be the leader there, largely by embracing Apple’s aspirations in the enterprise tablet space and combining them with IBM’s software and cloud goals.

This is going to take a while.  The facts about project budgets have been known for a long time, so you have to ask why everyone hasn’t jumped on this.  The reason is that it’s much harder to drive a productivity-justified new project than just to replace an old server or router.  IBM is committing to a shift that will likely take two or three years to play out.  They should have started sooner (the signs have been there for almost a year and a half) but at least they’re starting.

Where IBM is going wrong here is in their positioning.  If you are doing something new, something that nobody else is doing, something that’s probably too hard for others to do easily, you don’t sit with your hands over your mouth, you sing like a bird.  IBM should be touting their new story from the rooftops, but they haven’t managed to get even savvy media to grok what they’re up to.  As usual, IBM is relying on sales channels to drive their story into the market, and that’s not good enough.  The salesforce needs a strong marketing backstop to be productive, and IBM continues to demonstrate it’s lost its game there.

Cisco?  Well here we have almost the opposite situation.  Cisco simply does not want to admit that new benefit paradigms are needed.  They want us to believe that a bunch of teen-agers who are downloading and viewing content at marginal returns that are falling by half year over year should be supported in their appetites no matter what the ROI is.  They want us to believe that all our household appliances and all our business devices are little R2D2s, eager to connect with each other in a vast new network with perhaps the chance of taking over from us humans, but with little else in the way of specific benefits to drive it.  Cisco thinks traffic sucks.  It sucks dollars from their buyers’ wallets into Cisco’s coffers.  All you have to do is demonstrate traffic, and Cisco wins.  Nonsense.  In fact, Cisco’s biggest problem now is that it’s expended so much time positioning drivel that it may be hard to make anyone believe they have something substantive.

To be fair to Cisco, they have a fundamental problem that IBM doesn’t have.  Worker productivity and even network services are driven by experiences now largely created by software at the application layer.  IBM understands applications, and Cisco has never been there.  The comments that came out recently that Cisco needs to embrace software are valid, but not valid where they were aimed.  It’s not about software-defined networks; it’s about software that’s doing the defining.  Cisco has confused the two, and now its fear of the first is barring it from the second.

No vendor is going to invest money or PR to shrink its own market.  SDN and NFV and the cloud—our trio of modern tech revolutions—are all about market shrinkage because they’re all about cost savings.  They’re less-than-zero-sum games, unless you target the revolutions at doing something better and not cheaper.

Cisco wants to be the next IBM, which raises the question of what happens to the current IBM.  IBM has weathered more market storms than any tech company; Cisco is an infant by comparison.  For Cisco to really take over here, they have to take advantage of IBM’s weakness, which they can’t do by doubling down on their own.  Think software, Cisco, in the real sense.  You have, or had, as much credibility in the mobile space as IBM.  Why didn’t you realize that SDN and NFV and the cloud were going to create opportunities for new benefits, services, and experiences that would drive up the total “R” and thus justify higher “I”?

Cisco has aligned with Microsoft, as IBM has aligned with Apple.  Microsoft is a solid Clydesdale against Apple’s Thoroughbred in terms of market sizzle, and they have the same problem of being locked out of emerging benefits as Cisco does.  But Cisco could still use the Microsoft deal to lock up the middleware and cloud models that would validate mobile empowerment and suck them down into the network layer.

That’s the key here for the whole IT and networking space.  About a quarter of all the value of new technology that new benefits could drive is explicitly going to IT, and another quarter to networking.  The remaining half is up for grabs, clustered around the soft boundary between the “data center” and “the network”.  If IBM can grab the real benefit case, support it fully with both IT and IT/network technology, it can move that boundary downward and devalue Cisco’s incumbency and its engagement model.  If Cisco can grab it, they can move the boundary up.  One of them is singing a sweet but dumb tune, and the other is playing a great tune in their own mind.  Whoever fixes the problem first wins it all.


Is it Time to Consider Private-Cloud-as-a-Service?

Despite the fact that every vendor, editor, and reporter likely thinks that media attention to a concept should be sufficient to drive hockey-stick deployment, in the real world a bit more is needed.  One of the major challenges that all of our current technology revolutions—the cloud, SDN, and NFV—share is operations cost creep.  Savings in capital costs, which are the primary focus of these technology changes, are all too easily consumed by increases in operations costs caused by growing complexity or simple unfamiliarity.  That can poison a business case to the point where the status quo is the only possibility.

Yesterday, a startup called Platform9 came out of stealth with a SaaS cloud-based offering that manages “private clouds” using principles broadly inherited from Amazon’s public cloud.  I use the term “private cloud” in quotes here to suggest that you should also be able to apply the Platform9 tools to virtualized data centers, which are inherently technology ancestors to most private cloud deployments.  The primary target for the company, in fact, seems to be businesses who have adopted virtualization and want to take the next step.  There also seem to be other potential enhancements the company could make that further exploit the flexibility of the term “private cloud”, which I’ll get to presently.

At a high level, Platform9 is a management overlay on top of hypervisor resources that builds on OpenStack but is designed to be an operationally more effective and complete way of viewing resources/infrastructure, applications, and users.  Details of the virtualization framework—ranging from containers to VMware—are harmonized through the tools so that users get the same management interface regardless of infrastructure.  That’s helpful to large companies in particular because many have evolved into virtualization in a disconnected way and now have multiple incompatible frameworks to deal with.

The Platform9 services provide for resource registration through a set of infrastructure views, and this is what an IT type would use to build up the private cloud from various virtualization pools.  Application or Enterprise Architects or even end users could then use a self-service portal to obtain compute and storage resources for the stuff they need to run.  The IT side (or anyone else, for that matter) can use panels to get the status of resources and instances allocated.
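
To illustrate that workflow (and to be clear, this is a generic sketch, not Platform9’s actual API), the register-then-self-provision pattern looks something like this:

```python
# Generic private-cloud self-service sketch: IT registers virtualization
# pools as "infrastructure views", application owners provision against them.
# All names and structures are invented for illustration.

class PrivateCloud:
    def __init__(self):
        self.pools = {}        # pool name -> available capacity (vCPUs)
        self.allocations = []  # records of self-service requests

    def register_pool(self, name: str, vcpus: int) -> None:
        """IT-side step: add a virtualization pool to the infrastructure view."""
        self.pools[name] = self.pools.get(name, 0) + vcpus

    def self_provision(self, user: str, vcpus: int) -> str:
        """Portal-side step: grant resources from whichever pool can fit them."""
        for name, free in self.pools.items():
            if free >= vcpus:
                self.pools[name] = free - vcpus
                self.allocations.append({"user": user, "pool": name, "vcpus": vcpus})
                return f"{user} granted {vcpus} vCPUs from {name}"
        return f"request by {user} denied: no pool has {vcpus} vCPUs free"

cloud = PrivateCloud()
cloud.register_pool("vmware-cluster-1", 64)
cloud.register_pool("kvm-cluster-1", 32)
print(cloud.self_provision("app-team", 16))
print(cloud.self_provision("analytics-team", 80))
```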

I’m not totally clear on where Platform9 fits with respect to DevOps tools.  Logically they should be a part of the “inside” processes at the IT level, and the assertion that the APIs are OpenStack compatible suggests that as well.  Presumably higher-level application deployment automation could exercise the user self-service interface, which might provide a second-level orchestration option that I think is the right answer for complex application and service deployment.

The goal here, obviously, is to make it possible for enterprise IT to deploy virtualized resources as private cloud services that have the same convenience as public cloud services would.  Certainly the Platform9 mechanism is likely to be considerably easier than gluing OpenStack onto current virtualized resource pools, and that could facilitate adoption of a true private cloud framework.  I think you could even assume that Platform9 would reduce the operations cost of virtualization, at least where there was a significant level of dynamism in terms of how machine images were assigned to resources.  After all, the boundary between virtualized data centers and private clouds is a bit arbitrary.

There are APIs and developer programs to support third-party extension of the platform and obviously Platform9 intends to add functionality.  Some of the features I’ve cited here (container and VMware, particularly) are future enhancements not available in the current beta release, and I’m sure the company expects to have to enhance the platform over time.  They’ll need to because there’s likely to be a lot of other approaches to the same problem.

As I noted earlier, I think the company should look at further generalization of that “private cloud” term to broaden the range of IT environments it can accommodate.  To offer IT on a self-service basis, it’s probably not optimal to think that all of it is deployed on a private cloud.  Obviously some is likely deployed on a public cloud or would be cloudburst there or failed over.  Equally obviously, some IT operations are neither based on virtualization nor on cloud computing; they’re the old business-as-usual multi-tasking server apps.  The point is that it is very unlikely that everyone will be all private cloud in the strict, explicit, sense and so the benefits of Platform9 would be limited unless it extends itself to cover the range of hosting options actually used.  This kind of expansion could let Platform9 provision PaaS and SaaS cloud services, hybrid cloud, and pretty much cover the opportunity space.

Another area I’d like to see the company address is that of operationalizing the infrastructure itself.  Cloud adoption is a combination of deployment and lifecycle management.  Some of the cost of private cloud is associated with the registration of the available resources and the commitment of those resources, but some is also associated with sustaining those resources using automated tools.  I suspect that Platform9 believes that third parties can enhance its offerings with automated lifecycle management, and if that’s the case I’d prefer they be explicit about that goal and also talk a bit about the APIs and the progress they’re making in having partners use them for this important task.  The company may also have some plans of its own in this area; it lists SLA management as a future.

I think that the Platform9 approach is interesting (obviously or I’d not have blogged about it).  It demonstrates that there’s more to the cloud than capex reduction, and that in fact operational issues can be profound problems for cloud adoption.  It demonstrates that there’s value to abstraction of “the cloud” so that users are more insulated from the technical details of cloud software.  If the company evolves their offering correctly, they have the potential to be successful.

This also demonstrates that the whole opex thing is perhaps one of those in-for-a-penny issues.  Ideally, private cloud deployment shouldn’t be exclusively private, or exclusively cloud, or even exclusively deployment.  It should be cradle-to-grave application lifecycle management, both in the traditional sense of ALM and in the more cloud-specific sense of managing the application’s resources to fulfill the expectations of the users.  We’ve had a tendency in our industry to talk about “opex” in a sort-of-two-faced way.  On the one hand, we say that it’s likely a larger cost than capex, which is true if we count the totality of operations lifecycle management.  On the other, we tend to grab only a piece of that large problem set.  Platform9’s real value will be known only when we know just how far they intend to go.

The timing of this is interesting.  We have clearly plucked most of the low-hanging fruit, in terms of cloud opportunity.  Absent an almost suicidal downward spiraling of costs driven by competition among providers, the IaaS cloud has to draw more on operations efficiency or it will stall.  We will likely see enhancements to cloud stack software to accommodate this, improved management/orchestration coming out of the NFV space, and additional commercial offerings.  All that is good because we need to optimize the benefit case for the cloud or face disappointment down the line.


How a Little Generalizing Could Harmonize SDN, NFV, and NGN

I’ve done a couple of blogs on SDN topics, but one of the important questions facing everyone who’s considering SDN is how it would fit in the context of NFV.  For network operators, NFV may well be the senior partner issue-wise, since NFV is explicitly aimed at improving capital efficiency, service agility, and operations efficiency and (at least for now) it doesn’t seem to be advocating a fork-lift for the network overall.  But what is the relationship?  It’s complicated.

The NFV ISG is at this point largely silent about the role of SDN in support of NFV.  In large part, this is because the ISG made a decision to contain the scope of its activities to the specifics of deploying virtual functions.  At some point this will have to spread into how these virtual functions are connected, but the details on that particular process haven’t been released by the body.  Still, we may be able to draw some useful parallels between the way that NFV MANO exercises virtual function deployment processes and how it might exercise SDN.

In the current spec, MANO drives changes to the NFV Infrastructure through the agency of a Virtual Infrastructure Manager or VIM.  In a sense, the VIM is a handler, a manager that harmonizes different cloud deployment APIs behind a standard interface to MANO, so that orchestration doesn’t have to know the details of the underlying resource pool.  Presumably OpenStack would be one of these options, with things like Neutron exercised through a VIM.
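
To make the handler idea concrete, here’s a minimal sketch in Python, with invented class and method names (nothing here comes from the ISG spec or the OpenStack API): MANO sees one abstract VIM interface, and each implementation hides the details of its own resource pool.

```python
# A hypothetical sketch of the VIM-as-handler idea.  Class and method names
# are invented for illustration; a real VIM would drive actual cloud APIs.
from abc import ABC, abstractmethod


class VirtualInfrastructureManager(ABC):
    """The common face a VIM presents to MANO orchestration."""

    @abstractmethod
    def deploy(self, vnf_descriptor: dict) -> str:
        """Instantiate a VNF and return a deployment identifier."""

    @abstractmethod
    def connect(self, endpoints: list, network_model: str) -> str:
        """Realize a connection model (e.g. 'l2-segment') among endpoints."""


class OpenStackVIM(VirtualInfrastructureManager):
    """One possible handler: maps the common interface onto OpenStack-style calls."""

    def deploy(self, vnf_descriptor: dict) -> str:
        # A real implementation would call the cloud's compute APIs; this is a stub.
        return f"stack-{vnf_descriptor['name']}"

    def connect(self, endpoints: list, network_model: str) -> str:
        # A real handler would exercise the networking layer's plugin for the model.
        return f"net-{network_model}-{len(endpoints)}-ports"


# MANO never sees the cloud details, only the abstract VIM interface.
vim: VirtualInfrastructureManager = OpenStackVIM()
print(vim.deploy({"name": "firewall-vnf"}))
print(vim.connect(["vnf-a", "vnf-b"], "l2-segment"))
```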

The first question here is how the capabilities of resources, meaning the cooperative behaviors of functional systems, can be represented.  What does a network of any sort use to describe something like a VPN or a VLAN?  In OpenStack this is done by referencing a “model”, a logical structure with known properties that can be realized on a given infrastructure through custom plugins (the implementation has evolved from the old Quantum networking to Neutron).  The Neutron approach, then, is to have some high-level abstraction set representing network behaviors, and then to provide a plugin that implements them on specific gear using specific interfaces or APIs.

My view is that these models are the key to creating a useful representation of SDN for NFV.  If we assume that a “model” is anything for which we have at least one plugin available and which has some utility at the MANO level, then this approach lets us define any arbitrary set of network behaviors as models.  That unfetters SDN from its current limitation, which is that we tend to think of it as just another way of creating Ethernet or IP networks.
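
As a purely illustrative sketch, assuming a hypothetical plugin registry rather than anything from the Neutron codebase, the idea is that any named behavior with at least one plugin able to realize it becomes usable at the MANO level.

```python
# Hypothetical sketch of the model-plus-plugin idea.  The registry, decorator,
# and plugin functions are invented for illustration only.

PLUGIN_REGISTRY = {}   # maps a model name to the plugins that can realize it


def register_plugin(model_name):
    """Decorator: associate a realization function with a model name."""
    def wrapper(fn):
        PLUGIN_REGISTRY.setdefault(model_name, []).append(fn)
        return fn
    return wrapper


@register_plugin("vpn")
def vpn_on_mpls(params):
    return f"MPLS VPN connecting sites {params['sites']}"


@register_plugin("service-chain")
def chain_via_openflow(params):
    return f"OpenFlow path through {params['hops']}"


def realize(model_name, params):
    """MANO-side call: any model with at least one plugin is deployable."""
    plugins = PLUGIN_REGISTRY.get(model_name)
    if not plugins:
        raise ValueError(f"No plugin can realize model '{model_name}'")
    return plugins[0](params)   # pick the first capable plugin for simplicity


print(realize("vpn", {"sites": ["NYC", "LON"]}))
print(realize("service-chain", {"hops": ["firewall", "nat", "router"]}))
```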

The question is where to apply it.  NFV has an explicit requirement for inter-VNF connectivity, just as any cloud deployment architecture does.  At the most basic NFV level, SDN would simply be a way of connecting the VNFs, which would make it logically subordinate to the VIM, just as Nova and Neutron are subordinate to OpenStack.  I think many in the ISG (perhaps most) see it this way, but in my view there are two problems with the notion.  One is that it doesn’t offer a solution to end-to-end networking and so can’t address the full benefit case operators are tagging as an NFV target.  The other is that applying it would tend to make NFV nothing more than OpenStack, in which case the effort to date wouldn’t really move the ball much.

The alternative is to presume that there’s a handler, like a VIM, that manages network services.  A VIM could then be a specific case of a general Infrastructure Manager (IM), one responsible for harmonizing the various APIs that control resources with a common interface or model manipulated by MANO.  This approach has already been suggested in the ISG, though it hasn’t been fully finalized.  We could still invoke the “Network-as-a-Service” IM from inside a VIM for connectivity among VNFs, but we could also use it to orchestrate the non-NFV service elements likely to surround NFV features in a real network.
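
Here’s a hedged sketch of that generalization, again with invented names: a compute-oriented VIM and a network-service IM are both Infrastructure Managers behind the same kind of interface, the VIM delegates inter-VNF connectivity to the network IM, and MANO could also call the network IM directly for non-NFV service elements.

```python
# Illustrative sketch of a general Infrastructure Manager (IM).  The class
# names and the realize() signature are assumptions, not from the ISG spec.
from abc import ABC, abstractmethod


class InfrastructureManager(ABC):
    @abstractmethod
    def realize(self, model: str, params: dict) -> str:
        """Turn an abstract model into a committed resource behavior."""


class NetworkServiceIM(InfrastructureManager):
    """Controls connectivity resources (legacy or SDN) through their own APIs."""

    def realize(self, model: str, params: dict) -> str:
        return f"network:{model}:{params}"


class ComputeVIM(InfrastructureManager):
    """Controls hosting resources; delegates connectivity to a network IM."""

    def __init__(self, network_im: InfrastructureManager):
        self.network_im = network_im

    def realize(self, model: str, params: dict) -> str:
        hosted = f"compute:{model}:{params.get('vnfs')}"
        # Inter-VNF connectivity is handled by the network IM, not the VIM itself.
        wiring = self.network_im.realize("l2-segment", {"members": params.get("vnfs")})
        return f"{hosted} + {wiring}"


net_im = NetworkServiceIM()
vim = ComputeVIM(net_im)
# MANO can use either manager: hosted features go through the VIM, while
# non-NFV service elements go straight to the network IM.
print(vim.realize("vnf-pool", {"vnfs": ["fw", "nat"]}))
print(net_im.realize("vpn", {"sites": ["HQ", "branch-1"]}))
```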

This defines a challenge for the ISG, one that has existed from the very first.  There is logically a need to automate service deployment and management overall.  That need has to be filled by something that can orchestrate any service elements into a cooperative relationship, not just VNFs.  If the ISG defines, in its approach to MANO, something that can be generalized to support this high-level super-MANO, then it defines a total solution to service agility and operations efficiency.  It also defines the IM and the model(s) that IM represents as the way that SDN and NFV relate.  If the ISG doesn’t take that bold step, then it cannot define an NFV/SDN role because it doesn’t cover all the places the two technologies have to complement each other.

All this implies that there may be two levels of MANO, one aimed at combining logical service elements and one aimed at coordinating the resources associated with deploying each of those elements.  The same technology could be used for both—the same modeling and object structure could define MANO at all levels—or you could define a different model “below” the boundary between logical service elements and service resource control.  I’m sure you realize that I’m an advocate of a single model, something that works for NFV but works so independently of infrastructure (through the model abstractions of IMs) that it could deploy and manage a service that contained no VNFs at all.
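
To show what a single model at every level might mean in practice, here’s an illustrative sketch (all names and structures invented): a service is a tree of elements, and the same recursive orchestration code handles both the logical level and the resource level, including a service that contains no VNFs at all.

```python
# Sketch of the "one model at every level" argument.  StubIM stands in for any
# infrastructure manager; the service definitions are invented examples.

class StubIM:
    """Stand-in for any infrastructure manager with a realize() entry point."""

    def __init__(self, kind):
        self.kind = kind

    def realize(self, model, params):
        return f"{self.kind} realizes {model} with {params}"


class ServiceElement:
    def __init__(self, name, children=None, im=None, model=None, params=None):
        self.name = name
        self.children = children or []   # logical sub-elements (upper MANO level)
        self.im = im                     # infrastructure manager, if this is a leaf
        self.model = model
        self.params = params or {}

    def orchestrate(self):
        """One recursive pass serves both 'levels' of MANO."""
        if self.im:
            return [self.im.realize(self.model, self.params)]
        results = []
        for child in self.children:
            results.extend(child.orchestrate())
        return results


# A service mixing a hosted (VNF) element with a purely legacy element...
business_vpn = ServiceElement("business-vpn", children=[
    ServiceElement("vfirewall", im=StubIM("nfv-vim"), model="vnf", params={"image": "fw"}),
    ServiceElement("access", im=StubIM("legacy-nms"), model="ethernet-access", params={"site": "HQ"}),
])
# ...and a service containing no VNFs at all, deployed by the same code.
plain_vpn = ServiceElement("plain-vpn", children=[
    ServiceElement("core", im=StubIM("sdn-controller"), model="l3-vpn", params={"sites": 3}),
])

print(business_vpn.orchestrate())
print(plain_vpn.orchestrate())
```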

You probably see the dilemma here, and also the fact that this particular ISG dilemma is one that’s shared with other bodies, including the ONF.  There’s a tremendous tendency to use scope control as a means of assuring that the specific needs of a process can be met, but that can create a situation where you know how to do something limited yet can’t address enough of the problem set to develop a compelling benefit case.  No standard is helpful if it solves a problem but can’t develop a business case to justify its own deployment.  Sometimes you have to think bigger or think “irrelevant”.

The ISG contributed what might well be the seminal concept of NGN, which is MANO.  It also contributed the notion of a “Manager” that represents infrastructure behaviors at the service level and allows service architects to build services without pulling infrastructure details into their composition.  What it now has to do is fully exploit its own contributions.  Unlike SDN work, NFV work arguably already sits above the network, where services are focused.  If NFV can grow down, by generalizing its handlers and fully exploiting its notion of models, then it could not only drive its own business case, it could drive SDN deployment too.

At the end of the day, there’s only one network.  Somebody has to orchestrate and manage it.


What the VMware/Arista Deal May Mean to SDN and Networking

The deal between Arista and VMware may turn out to be one of the pivotal developments in SDN, and one of the pivotal steps in the evolution of networking.  Just how far it will take us is, at this point, not clear, because there’s the usual mixture of issues, both tactical and strategic, to consider.  And, as my use of the word “may” in the first sentence shows, it’s still possible this will be a flash in the pan.

Everything that’s happening in enterprise networking and a lot of what’s happening in service provider networking is linked to data center evolution.  A big part of that is the notion of multi-tenancy, but for the enterprise the most important driver is the continued use of virtual resources to leverage gains in physical server power and increased componentization of software.  The point is that for a decade now, everything important in enterprise networking has been driven from the data center, and that means the data center is a point of focus for vendor power struggles.

IBM has been the historical giant in data center evolution, but for the whole of the time that the data center has been getting more important, IBM has been losing strategic influence.  This can be attributed to an early withdrawal from networking (which took IBM out of the main event in connectivity), lagging virtualization and cloud positioning (IBM has struggled to be a “fast follower” there), and most recently a proposed withdrawal from x86 servers.  IBM’s loss here has left the critical data center space more up for grabs than would normally have been the case.

Cisco and VMware have been the two trying hardest to do the grabbing, with HP a close third.  My surveys of enterprises have shown that it’s these three companies who are driving the bus in terms of both tactical and strategic evolution of the data center.  Of the three, obviously, only Cisco really takes things from a pure network perspective, and interestingly Cisco has been the one gaining strategic influence the most.  Cisco can be said to have established a physical ecosystem strategy for the data center, countering the logical ecosystem strategy espoused by VMware.  The conflict between these approaches is at the heart of the Cisco/VMware falling out.

The challenge for VMware, though, is that virtual/logical networking won’t move real packets.  You have to be able to actually connect stuff using copper and fiber, and even the early Nicira white papers always made it clear that there was a real switching network underneath the virtual software-based SDN they promoted.  VMware was leaving Cisco’s camel’s nose free to enter the tent, and I think that’s where Arista comes in.  Arista is both a physical-network buffer against Cisco’s success so far in the data center and the representative of a position Cisco doesn’t want to take—that networks are dependent on software even when they’re physical networks.

What all this means is that VMware and Arista will surely become the most significant challenge to Cisco’s continued gains in strategic influence.  If we see Cisco’s numbers fall short this week, it will likely be in part because Cisco has been unable to push a pure-hardware vision for the data center against even the limited VMware/Arista partnership we’ve had up to now.  Expect a full-court press from the pair in coming quarters.

The strategic question here relates to another of my blog points last week.  The best approach for SDN is likely to be a hybrid of physical and logical networking, an overlay network constructed on a more malleable model of physical networking.  The Street thinks that one of the goals of the expanded relationship between VMware and Arista is to create this explicit hybridization.  That’s bad for Cisco because it would validate the software vision of Arista and the hybrid model of SDN that has (IMHO) always been the greatest threat to incumbents.

What VMware/Arista could do is take advantage of the fact that building cloud or virtual data centers tends to build application networks.  In an enterprise, an application is kind of like a cloud tenant in that applications are deployed separately, often through their own ALM/DevOps processes.  Because the networks are application-specific at their core, the network has the potential to gain specific knowledge of application network policies without any additional steps.  You can figure out what an application’s traffic is by using DPI to pull it from a vast formless flow, but if you’ve already deployed that application using specific tools on what could easily be application-specific subnets, you already know all that you need to know.
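
As a hypothetical illustration of that point, with invented data and field names, deriving application network policy from what the deployment process already recorded might look like this, with no DPI involved:

```python
# Invented example: the deployment record is what an ALM/DevOps process might
# already know about an application it just placed on its own subnet.

deployment_record = {
    "app": "order-entry",
    "subnet": "10.20.30.0/24",
    "tiers": {"web": ["10.20.30.10"], "db": ["10.20.30.20"]},
}


def policies_from_deployment(record):
    """Turn what the deployment process already knows into network policy."""
    app = record["app"]
    return [
        # The application's subnet identifies its traffic without inspection.
        {"app": app, "match": {"dst_subnet": record["subnet"]}, "action": "mark-priority"},
        # Tier placement implies which flows should be permitted between tiers.
        {"app": app,
         "match": {"src": record["tiers"]["web"], "dst": record["tiers"]["db"]},
         "action": "permit-db-traffic-only"},
    ]


for rule in policies_from_deployment(deployment_record):
    print(rule)
```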

The partnership between application deployment and virtual networking, and the extension of that partnership down into the physical layer, is what’s important here.  Because VMware has a handle on application deployment in a way Cisco does not, the alliance forces Cisco to think more aggressively about its ACI positioning.  It also means that we could see other vendors, recognizing that logical/physical network hybrids are likely the focus of the biggest marketing contest in the industry, take their own shot at the space.

All of this is happening as the ONF is trying to push for accelerated deployment of SDN, and they may get their wish in that sense.  However, there aren’t standards in place to create what the Arista/VMware hybrid can produce.  Accelerating “SDN” in the broad sense may well change the focus of SDN to higher-layer functionality and away from diddling with device forwarding one entry at a time.  That would be good for the industry if the change of focus can be accommodated quickly, but it would be bad if what happens is an ad hoc logical/physical dynamic created by competition.  That would almost certainly reduce the chances the next generation of network devices would be truly interoperable, at least in the systemic sense.

That’s the biggest point here.  What Arista/VMware may do is create a whole new notion of what a network is, a notion that goes deeper into applications, software, and deployment than any previous notion.  That new notion could change the competitive landscape utterly, because it changes what everyone is competing for.
