AT&T and Juniper: Consistent Signals of an Uncertain Future

Juniper reported their numbers yesterday, and so did one of Juniper’s key clients, AT&T.  Just a day before, AT&T had announced that Juniper (and Amdocs) were added to AT&T’s “User-Defined Network Cloud”.  Now, some contrasts between the two companies’ reports create a worthy topic for analysis.

When you read through Street analysis on AT&T, it’s clear that the only thing that’s making much of an impression is wireless growth.  This isn’t surprising given that wireless is the key driver for profit for all the operators, and in fact the bright spot for capex that would be most likely to drive vendor success.  Every analyst who covers AT&T talks about the company’s prospects in continuing to add customers, the notion of reduced churn, and whether AT&T can upsell existing post-pay smartphone customers to tablets.  There isn’t a single word about “infrastructure” or the cloud program Juniper was selected for.

The benchmark comment on Juniper from ISI makes an interesting parallel read.  They say that they’ve “…long viewed Juniper as the ‘network innovator’…” but that like many such companies, Juniper had “reached too far seeking growth in adjacent markets and lost sight of OpEx discipline.”  The company was now coming back to its roots by imposing a stark cost-cutting program and shedding some of those adjacent lines of business.

Juniper’s primary area of growth was switching, which can reasonably be linked to the cloud (and thus to Juniper’s AT&T Domain 2.0 status).  But the number of cloud mentions in AT&T’s call was limited—three.  Juniper mentioned it over 30 times.  What this reflects is the fact that while cloud investment might mean a lot to the vendors, it’s still pretty much background noise to the network operators, who are faced with the same dilemma their suppliers are—making the Street happy about the current quarter.

I’ve known, and worked with, Juniper since literally the week their first CEO started his job.  The company has always had strong technology, and their vision of what was next—which included the first articulation of the cloud opportunity by any vendor—has also been unparalleled.  The problem they’ve faced has been one of marketing/positioning.  Only one Juniper CMO has ever shown the necessary fire to drive the company’s vision forward.  That one left years ago.

The notion that Juniper “reached too far” in seeking growth is valid only when viewed through the lens of this marketing deficit.  Juniper’s MobileNext family could have been a strong contender in the space that the Street is now watching most closely—wireless.  They had a content strategy that could have been one of the top approaches to that other important space.  But these strategies lived only in the technical sense.  Juniper could never explain them to outsiders, never link them to the specific customer business goals that would have validated them with buyers and earned revenue.

There’s no option other than “reaching” in the network equipment business.  Anyone who’s not spinning their own private delusions surely knows that without a Congressional mindset that effectively bars Huawei from selling into the US carrier space, Juniper would not have seen anything good in terms of top-line growth.  Routing and switching are commodities, and you win in either space mostly by being an ecosystemic player.  AT&T’s notion of creating roles and zones of procurement has helped save Juniper from the real risk, which was not reaching enough.  And that is the risk that Juniper now faces.

Juniper cannot win in “the cloud”.  You build clouds with servers and software, and Juniper has none of the former and has never “reached” appropriately in the latter space.  If any network vendor will own the cloud, that vendor is Cisco because of UCS.  The Street’s determination to force strategic contraction on Juniper is simply pushing it into a niche that can never again be defended against commoditization.

The Domain 2.0 win Juniper has achieved is interesting considering who else was named—Amdocs.  If you talk to service providers today (or in fact at any point in the last eight years) you’d hear that they need vendor support for their transformation process.  One area of critical concern is the junction between services and the network, the place where service agility and operations efficiencies are created.  Juniper’s only hope, then and now, is to utterly own that space.  It’s represented by a combination of SDN and NFV, and that combination is another of the things Juniper’s positioning has booted.

As technology, Contrail is a respectable solution in the SDN space, but you don’t need an SDN controller win to pull Contrail through, and demanding one as a condition of success will have the inevitable and unfavorable result of promoting the “nothing” path of the “all-or-nothing” choice.  Juniper needed only to look above the northbound APIs to find differentiation.  That’s also where NFV lives.  But to do any of this means both a strong marketing position and an understanding of the software side of networking.  Juniper’s old CEO from Microsoft (Johnson) didn’t understand network software.  The Street has made it clear that they want the new CEO only to understand how to shed businesses and people to cut costs.

Part of Domain 2.0 is aimed at creating operations layers that will pull service value up out of the network and into hosted software.  What AT&T would like more than anything else is for that software to be totally open-source.  A year ago, I think that operators would still have accepted the idea that their vendors can lead.  Now, the problem is that the Street is imposing requirements on those operators as much as on the vendors, and these requirements dictate revolutions in operations.  Amdocs is the real potential winner in Domain 2.0, but there would still be room for Juniper (and others) to build something that acts as the shim between the highly monolithic and historically unresponsive OSS/BSS space and both the agile customer opportunity and the evolving network infrastructure.  But time is running out, unless Amdocs boots their own chance of creating that layer.  And they well may do that.  For Juniper, now, only mistakes by others can save them from themselves and their parasitic financial friends.


NFV Openness: Is it Even Possible?

One of the issues I’ve gotten concerned about regarding NFV is the openness of a vendor’s NFV architecture.  The NFV ISG clearly intends to have a high level of interoperability across NFV elements and solutions, but there are some factors that are hampering a realization of that goal.  Most of these aren’t being discussed, so we may be on track for some NFV grief down the line.

The first issue is incomplete implementation of NFV despite claims of support.  There are three elements to NFV according to the specifications: the MANO or management/orchestration core, the virtual network functions (VNFs) themselves, and the NFV infrastructure (NFVI).  According to my rough count, less than 10% of NFV implementation claims even represent all three functional areas.  It’s not necessary to actually offer everything, but what I think a vendor has to do is express where each of the three pieces is expected to come from and how they’re integrated with their own stuff.
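
To make that concrete, here’s a trivial sketch (Python, with invented vendor details) of what “expressing where the pieces come from” might look like: a claim that names a source for each of the three functional areas, and a check that flags anything left unexplained.

```python
# Toy "coverage check" for an NFV claim: the three functional areas the ISG
# defines, and where a hypothetical vendor says each one comes from.  The
# vendor and product names are invented for illustration.

claim = {
    "MANO": "in-house orchestrator",
    "VNF": "partner ecosystem (vFirewall, vNAT, IMS elements)",
    "NFVI": None,   # the claim is silent on infrastructure
}

missing = [area for area, source in claim.items() if source is None]
if missing:
    print("claim does not explain where these come from:", ", ".join(missing))
else:
    print("all three functional areas are accounted for")
```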

The biggest NFV offenses come from vendors who provide NFVI or VNFs and call it “NFV”.  Virtually anything could be claimed to be part of NFV infrastructure.  For servers and platforms, all you really need to do is be able to run a VNF, after all.  For network connectivity, you need to be able to support SDN or some form of explicit virtual network provisioning.  The question is whether an “NFVI” vendor offers a handler that exposes a logical abstraction of their infrastructure.  NFVI credibility depends on that.
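
Here’s a minimal sketch, in Python and with entirely hypothetical class and method names, of what such a handler could look like: a small abstract interface that hides whether the platform underneath is OpenStack, bare metal, or something else, which is roughly what “exposing a logical abstraction of infrastructure” implies.

```python
# A minimal, hypothetical sketch of an "NFVI handler": it exposes a logical
# abstraction (hosting and virtual connectivity) so a MANO layer never needs
# to know what platform sits underneath.

from abc import ABC, abstractmethod


class NfviHandler(ABC):
    """Abstract view of infrastructure that a MANO layer can drive."""

    @abstractmethod
    def deploy_vnf(self, image: str, cpu: int, mem_gb: int) -> str:
        """Host a VNF image somewhere appropriate; return a handle."""

    @abstractmethod
    def connect(self, vnf_handles: list[str], network_name: str) -> None:
        """Provide explicit virtual connectivity among hosted VNFs."""


class ToyOpenStackHandler(NfviHandler):
    """Illustrative implementation; a real one would call Nova/Neutron APIs."""

    def __init__(self) -> None:
        self._next_id = 0

    def deploy_vnf(self, image: str, cpu: int, mem_gb: int) -> str:
        self._next_id += 1
        handle = f"vnf-{self._next_id}"
        print(f"deploy {image} ({cpu} vCPU, {mem_gb} GB) -> {handle}")
        return handle

    def connect(self, vnf_handles: list[str], network_name: str) -> None:
        print(f"attach {vnf_handles} to virtual network {network_name}")


if __name__ == "__main__":
    nfvi = ToyOpenStackHandler()
    fw = nfvi.deploy_vnf("vFirewall", cpu=2, mem_gb=4)
    nat = nfvi.deploy_vnf("vNAT", cpu=1, mem_gb=2)
    nfvi.connect([fw, nat], "service-chain-net")
```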

The second issue in openness is supersetting of the VNF-to-MANO interfaces.  I am not a fan of the idea that VNFs even talk directly with MANO, simply because there’s no MANO today to talk with, and thus no current code could already be calling on MANO services.  That implies everything has to be rewritten for NFV.  However, even if we’re willing to do that, it’s already clear that vendors are going to define their own “platform services” for VNFs that are not called for or specified in the NFV documents.  That means that a given vendor’s VNFs could pull that vendor’s MANO through, because nobody else could support its extended VNF-to-MANO interfaces.
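
A purely illustrative sketch of that lock-in mechanism follows; the interface and method names are invented, not taken from the ISG documents.  A VNF coded against the baseline services runs with anyone’s MANO, while one coded against a vendor superset runs with only one.

```python
# Illustrative only: a "baseline" set of MANO-facing platform services versus
# a vendor superset.  A VNF written against VendorMano's extras can only run
# with that vendor's MANO, which is how a few VNFs can pull a whole MANO in.

class BaselineMano:
    def report_health(self, vnf_id: str, status: str) -> None:
        print(f"{vnf_id} reports {status}")

    def request_scale_out(self, vnf_id: str) -> None:
        print(f"scale-out requested for {vnf_id}")


class VendorMano(BaselineMano):
    # Proprietary extension: nothing in the spec defines this call.
    def request_gold_path(self, vnf_id: str, latency_ms: int) -> None:
        print(f"{vnf_id} wants a premium path under {latency_ms} ms")


def portable_vnf(mano: BaselineMano) -> None:
    mano.report_health("vnf-a", "OK")              # works with any MANO


def locked_in_vnf(mano: VendorMano) -> None:
    mano.request_gold_path("vnf-b", latency_ms=5)  # works with one vendor only


if __name__ == "__main__":
    portable_vnf(BaselineMano())
    locked_in_vnf(VendorMano())
```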

Every NFV implementation has to start with something, and that means that there will be a limited number of VNFs to be used.  If those VNFs can create a kind of walled garden by their relationship with proprietary extensions of the NFV specification for VNF-to-MANO connections, then every new service might end up pulling in its own MANO, and any hope of securing efficient operations is lost.

The third issue in openness is that of management integration.  The NFV ISG presumes that there will be a management element integrated with the VNFs, and that this element will then expose management interfaces and capabilities.  There are several issues with this approach, IMHO, but the “openness” problem is that the way VNF managers manage their subordinate VNFs isn’t standardized.  A management system could be pulled into an NFV deployment by the customized management approach a vendor offers, and then that management system could pull in the vendor’s other VNFs and even NFVI.

Of the specific problems we face in NFV openness, two stand out.  One is the challenge of abstract infrastructure.  The ideal option for NFV would be to have a set of “models” that define both hosting of virtual functions and network-building, in such a way that any server and network platform (respectively) could be harnessed to implement the model.  That includes all the current network devices and EMS/NMS service creation interfaces.  I’ve yet to see a truly compliant standard approach to this kind of definition; OpenStack’s Nova and Neutron come close, but that’s all.

The second is the management side.  Here we have to deal with a couple of new wrinkles in modern virtualization-based services.  One is that any “virtual” or “abstract” service is presented to higher layers as an abstraction, and abstractions have to be managed as what they look like and not just by managing the subordinate elements.  A buyer who gets a virtual branch access device needs that virtual device to look like the real thing, yet the NOC that supports it has to see the components.  This kind of management dualism is critical.  The second wrinkle is that unless we have an accepted standard for higher-level orchestration—one that recognizes all the elements of the service end-to-end—we can’t cede management tasks into MANO.  Doing so would mean that an implementation of MANO would have to rely on non-standard mechanisms to present management views and parse a tree of service structure from the user level down to the resource level.  That’s the end of openness.
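
As a rough sketch of that dualism (the structure and the aggregation rule are mine, not anything from the ISG material), the same component-state record can be exposed two ways: as a single derived status for the virtual device the buyer sees, and as the full component detail the NOC needs.

```python
# Sketch of "management dualism": one underlying record of component states,
# presented either as a single abstract device (customer view) or as the full
# component list (NOC view).  The aggregation rule here is deliberately naive.

components = {
    "vFirewall-01": "up",
    "vRouter-03": "up",
    "chain-link-7": "degraded",
}


def customer_view(parts: dict[str, str]) -> str:
    """The abstraction is managed as what it looks like: one branch device."""
    if all(state == "up" for state in parts.values()):
        return "virtual branch device: up"
    if any(state == "down" for state in parts.values()):
        return "virtual branch device: down"
    return "virtual branch device: impaired"


def noc_view(parts: dict[str, str]) -> list[str]:
    """The NOC still needs to see every hosted element and connection."""
    return [f"{name}: {state}" for name, state in parts.items()]


if __name__ == "__main__":
    print(customer_view(components))
    for line in noc_view(components):
        print(" ", line)
```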

This raises what might well be the most critical question for NFV openness, which unfortunately may not have anything to do with NFV per se.  NFV is about virtual functions, yet it’s likely that no future service will have more than about a third of its functionality virtualized.  We’ll still be using traditional VPNs to provide connectivity to all those virtual branch access devices.  NFV’s MANO is not directed at solving the management/orchestration problems for these complex services.  Nothing else is either, at the moment.  So the question is whether we can define how part of a service is managed/orchestrated without knowing the big management/orchestration picture into which it must fit.

Some analyst firms have indicated that NFV field trials will start this year, and they’re right.  Many say that no real deployments will happen this year, and they’re sort-of-wrong there.  There will be deployments that vendors and even operators will represent as being “NFV”.  Those deployments will be almost totally lacking in the essential elements of an open strategy.  Part of that is because the ISG specifications for NFV are not complete so they can’t be complied with yet.  A bigger part is that most vendors don’t want open NFV, they want an NFV solution that only they can supply.  That kind of opportunism is to be expected in a profit-driven world, but that doesn’t mean we have to turn a blind eye to it.  If operators want an open NFV approach, it’s going to be up to them to force the issue—or more directly force the resolution of the issues I’ve exposed here.


IBM and Intel Show the Cloud is Complicated

We’re starting to get some earnings reports from tech bellwethers now, and so it’s time to take a reading of the space and try to assess how tech might develop through the balance of 2014.  We’re still early in the game, particularly for networking, but we do have some insights from the IT hardware side available in Intel’s and IBM’s numbers.  We’ll start with Intel, whose quarter was a combination of good, not-bad, and unexpectedly (and maybe seriously) bad.

Data center revenues were up for Intel, particularly for cloud-related stuff, and that’s clearly a good thing for Intel.  Given that the cloud could be one of those rob-Peter-to-pay-Paul things, the fact that data center products overall were strong suggests one of two possibilities.  First, perhaps the cloud isn’t going to rob Peter as much as expected.  Second, perhaps the cloud investment leads the impact of cloud hosting on data center spending by more than expected.  I think it’s a bit of both, but mostly that the real impact of the cloud won’t come from displacing current in-data-center hosting of applications but in creating new applications that are cloud-specific.

PC client chip revenues were stronger than many had expected, though I think that it’s clear that the impact of tablets on PC sales has been overestimated.  There’s a chunk of the market that will stay with the PC for a very long time, perhaps even perpetually, and the biggest impact of tablets will likely be erosion of average selling price for the PCs, something that isn’t likely to hit Intel as hard as it would the PC vendors themselves.  However, it’s likely this is what has caused Intel to take a hit on gross margins.

The big issue for Intel is the big miss in mobile, which was down a whopping 61% year over year.  The company hasn’t been able to gain any real traction in the LTE space, largely because the Windows smartphone designs that would be the most logical targets for Intel chips aren’t selling well compared with competition from Apple and Android (and likely won’t do much better any time soon).  While Intel has been successful with its “Internet of Things” activity built around the Atom CPUs, their success has been limited to vehicle entertainment systems and POS terminals.  That Intel has decided to wrap these applications in the glamour title “Internet of Things” suggests to me that they realize they’re not going to gain much in the mobile space and so they’re preparing a substitute strategy.

The challenge for Intel is that there’s no brand recognition with either of their success stories in IoT, and that where brand matters (tablets and phones) they’re not a player.  Given that it’s very doubtful that Google would let Intel do something distinctive in terms of software features to brand an Android device, Intel is again dependent on Windows no matter how many Android designs they spin out.  What Intel needs is a brand of its own, something more updated than “Intel Inside”.

And “brand” is something that could come from the software side.  I was disappointed that Intel had no comment on software on their call, especially since Intel’s Wind River activity has arguably the best (and maybe only) NFV server platform story out there, and NFV servers could be the largest incremental server opportunity in the market for the balance of this decade.  Intel needs to be thinking about what IBM did, and of course that leads us to the IBM call.

IBM also had its mixture of news.  Service margins were up, but the most significant upside was in software, and WebSphere in particular.  IBM is essentially admitting to not being a hardware company in the longer term (it’s selling off its x86 business, and its other hardware lines were all disappointing), and they can’t be a services play alone and differentiate themselves from others like Accenture.  Thus, software.

To me, the problem is the fact that IBM is explicitly betting on the cloud, and yet it’s not linking the cloud convincingly to its software.  In my surveys last fall, IBM was behind HP in terms of customer regard for their cloud story, even in the enterprise sectors where IBM was strongest.  The reason was that IBM didn’t seem to be able to articulate a cloud strategy as much as a series of cloud tactics.  Look at this from the call:  “In the quarter we announced a $1.2 billion investment to globally expand our SoftLayer cloud hubs. We launched BlueMix, our new platform-as-a-service to speed deployment of hybrid clouds. We acquired both Aspera and Cloudant to extend our capabilities in big data and cloud.”  Where is the strategy here?  You can’t say “My cloud strategy is to say ‘cloud’ a whole bunch of times.”  I counted 31 “cloud” references on the call, but not a clear statement of why IBM’s approach was better.  OK, you can say that this was a financial call, but I think you can make the same statement about the collateral on IBM’s website.

What’s frustrating is that IBM has all the pieces, at least as many as anyone else and perhaps more.  In what I think is the critical fusion of cloud, SDN, and NFV, IBM has the asset base that maps to the right answer better than anyone else does.  Yet IBM’s position in that critical fusion can’t be explained by carriers or IBM’s customers in general.  Outside of IBM’s base, we found few enterprises who could explain IBM’s cloud position, and that was also true of carriers.

All of this still seems to be rooted in the waltz into sales/technical without taking the detour through marketing.  Marketing is what’s supposed to build your brand, to link you to the trends that matter to your prospects.  Marketing should have articulated the IBM cloud vision, made that vision compelling, and then associated the vision with specific IBM initiatives.  Should have, but hasn’t so far.

It would be risky to say that IBM is in serious trouble here, given the company’s reputation for transforming itself successfully through a half-dozen revolutions in tech.  For now, the problem is that hardware isn’t going to get better, and the loss of x86 systems is going to disengage IBM from a lot of package opportunities.  There is a cloud business for IBM to build, but they need to be able to explain just what that business is, and differentiate their vision from that of HP.  And that, friends, is likely to be the challenge for many IT and networking vendors in this earnings season.


Are We Looking at a Context-Driven Mobile Revenue Model?

You have to love a market that, when faced with seemingly insurmountable contradictions in business and technology trends, proceeds to generate new ones unapologetically.  We had that yesterday with the story that Sprint was considering shutting down the WIMAX assets of Clearwire, then another story that Google might be aiming to be an MVNO.  So how is it that wireless is “good” for Google and bad for Sprint?  Is it just WIMAX or something else?

WIMAX is a licensed wireless technology (at least in Clearwire’s form) that is probably best suited to supporting the access of “migratory” users rather than “mobile” ones, using laptops or tablets and not phones.  You could think of it as a kind of for-fee super-WiFi that could cover a much larger geography.  It’s that angle that seems to make Sprint’s decision odd—if people are jumping all over hospitality WiFi, then why kill off its bigger brother?

The probable reason is that hospitality WiFi is killing it.  There are only a finite number of places where yuppies and bored teens can roost to get online, and in most cases the establishments themselves are using WiFi access to attract them.  It’s free.  There are other locations like airports where network operators, cable companies or even municipalities have provided WiFi access that’s free or at least has only a modest episodic cost.  Sure, these choices aren’t going to cover that wonderful park in the city or a nice bench along the river, but they cover enough to dampen the whole WIMAX business model.

And remember, you don’t have to say that there’s no value for WIMAX, only that the spectrum would be more valuable if you used it for 4G, and that’s the deciding factor here I think.  If you look at return-on-cell, LTE is likely to do you more good, given the hospitality WiFi competition I’ve already noted.  So what this says is that the mobile user is more valuable than the migratory user.

But what can we deduce about Google’s purported interest in being an MVNO, where there’s no spectrum or spectrum value involved?  For those who don’t know what that is, an MVNO is a Mobile Virtual Network Operator, a retail carrier who buys capacity on somebody’s wireless network to serve their customers instead of building out their own network.  Obviously Google would have to pay the wholesale partner for that capacity and the margins would be thin.  If Google picked only one partner they’d probably alienate every other carrier, who might then punish Google by removing or weakening their support for Android.  It’s said that Google is interested in offering its wireless MVNO service in its “Google Fiber” footprint locations, but that’s kind of impossible unless you think a mobile user in that footprint would never go anywhere else.  Google would have to market and support any MVNO service nationally, IMHO.

What Google gains from MVNO status in my view isn’t fiber-footprint competition with the incumbent LECs.  Anyone who thinks Google wants to be a general access provider is delusional.  The current guys would fall into fits of unbridled joy at the mere prospect of such a move from Google, because Google’s margins on that kind of business would be razor-thin and they’d have to become opponents of neutrality just to stay alive.  Nor does Google want to compete with the wireless carriers; how could you undercut the guy you’re buying from, and anyway who has more wiggle room to operate at thin margins, Google or a LEC?

What’s likely underneath Google’s move is leveraging its brand.  Google wants to have a success in IPTV.  It wants to be a giant in mobile LBS, in advertising in general.  But deep inside Google knows that at some point every OTT player has the same underlying problem, which is that the global budget for advertising won’t keep their stockholders happy even if it all gets turned into online ad revenue.  The fact is that online advertising is probably lowering adspend because advertisers use online capabilities to target more tightly, reducing their costs overall.  If you stay ad-driven you’re a termite in a lifeboat—you inevitably eat your salvation.  Google has to sell something at some point. Selling means exploiting your brand.

But there are two questions arising from this.  First, Google could unquestionably deploy WiFi on a large scale, and has already done deals with retail chains and with municipalities (“Muni-Fi”).  Given that most people do content-viewing and real searching while roosting rather than walking, could Google get more traction from its own WiFi?  Second, what is it that Google might sell by leveraging an MVNO deal?

The answers here could be related.  If you assume Google’s primary target isn’t the migratory user but the true mobile user, then it makes sense to think in terms of MVNO rather than to think about WiFi.  And second, if you assume that Google wants to sell contextual services to mobile users, then everything makes sense.

Contextual services are services that are designed to recognize what the user is doing when they make a request.  A user’s location is part of context, and so is the location of other users who are close socially, and local traffic conditions and weather conditions and perhaps who’s being called or chatted/SMSed with.  Google has a lot of stuff that establishes context in a search or query sense, and they may just be thinking about leveraging all of that stuff to create one or more mobile-paid services.
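
Here’s a rough sketch of what “context” might look like as a data structure, and how it could narrow the interpretation of a request.  The fields, the question, and the rules are all invented for illustration; this isn’t a description of anything Google has actually built.

```python
# Invented example of a contextual request: the same spoken question gets a
# narrower interpretation when location, movement, weather, and social signals
# are attached to it.

from dataclasses import dataclass


@dataclass
class Context:
    location: str          # e.g. "downtown, near 5th and Main"
    moving: bool           # walking/driving vs. roosting
    weather: str           # "rain", "clear", ...
    nearby_contacts: list  # socially close users who are physically close


def interpret(question: str, ctx: Context) -> str:
    """Narrow a vague question using context instead of open-ended NLP."""
    if question == "where should we eat?":
        if ctx.weather == "rain" and ctx.moving:
            return "nearest indoor restaurant along your current route"
        if ctx.nearby_contacts:
            return f"places midway between you and {ctx.nearby_contacts[0]}"
        return f"top-rated restaurants near {ctx.location}"
    return "fall back to a general search"


if __name__ == "__main__":
    ctx = Context("downtown, near 5th and Main", True, "rain", ["Alex"])
    print(interpret("where should we eat?", ctx))
```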

Microsoft’s Cortana personal assistant is something that just might have generated some Google concern, and of course there’s Apple’s Siri.  Everyone is trying to get to the “Her” stage, a personal agent that can accept speech questions and provide answers.  As I said in an earlier blog, “Her” is a Hollywood dream if you want to get gestalt with the gadget, but you could narrow the interpretive/response range if you could assume context generated the question.  Might Google see contextual agent applications as their money tree?  If they do, then MVNO makes sense for them.


Context: The Missing Link Between Analytics and Utility

We’re hearing a lot about how analytics is going to change networking, how it’s essential in SDN, NFV, and the cloud, and maybe also critical to improving your social standing, good looks, and financial health, and even perhaps making you run faster.  How big can “big data” get?  Apparently the sky’s the limit.  As usual, most of this is just crap, and so we need to look at the question of applying “analytics” to network services to sort out any good ideas from the inevitable chaff of marketing claims.

First, “Knowledge is Power” not “data”.  In order for “data” to become knowledge, you need the critical notion of context.  When I know that traffic is heavy, it’s nice.  If I know what road it’s on and when, it’s nicer.  If I know it’s on the road I plan to use, it’s power.  The point here is that collecting information on network behavior does little good if you can’t apply that data contextually in some way, and there are two approaches you can take to get to context.  Then you have to turn that “knowledge” into power separately.

The first approach to data context in network analytics is one of baselining.  If traffic is “heavy” it has to be heavy relative to something, and in baselining you attempt to define a normal state, as a set of variable values or more likely a range of values.  When data falls outside the range at any point, you take that as an indication of abnormal behavior, which means that you undertake some action for remediation (the “power” part).  However, getting baselines for variables won’t create context, because you can’t relate the conditions across measurement points with anything in particular.  Baselining, or simply range-testing in analytics, isn’t particularly helpful, and most people who do anything useful with it really mean past-state analysis when they say “baselining”.
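
As a trivial sketch of what range-testing amounts to (all numbers made up): each variable gets an accepted band, and anything outside the band is flagged, but nothing in the exercise says what the excursion means or how conditions at different measurement points relate.

```python
# Minimal range-testing ("baselining") sketch with made-up numbers: it can
# flag an abnormal value, but it cannot by itself say what the condition means
# or how conditions at different measurement points relate to each other.

baselines = {
    "link_utilization_pct": (10.0, 70.0),
    "packet_loss_pct": (0.0, 0.5),
    "latency_ms": (1.0, 25.0),
}

sample = {"link_utilization_pct": 83.0, "packet_loss_pct": 0.2, "latency_ms": 31.0}

for name, value in sample.items():
    low, high = baselines[name]
    if not (low <= value <= high):
        print(f"{name}={value} is outside its accepted range {low}-{high}")
```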

What some analytics approaches advocate is to look at the state of the network holistically, with all variables considered individually relative to their accepted range of values.  You then essentially pattern-match to decide what past state this present one corresponds to, and you accept the way that past state was interpreted as being the context of the current conditions.  The NOC said this was Friday-and-I-have-no-date traffic last time it happened, so that’s what I’ll call it this time.  Presumably, if we can determine the remedies taken last time and apply them (or at least suggest them) automatically, we can respond correctly.  However, we have to assume 1) that our baseline map has accurately established context based on past-state analysis, and 2) that somebody has created rules for response that can be applied to the current situation.  Most analytics processes don’t really address the latter of the two issues; it’s up to an operations specialist to somehow create general scripts or policies and make them runnable on demand.
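
A toy version of that pattern-matching step, with invented states and remedies: represent current conditions as a vector, find the closest labeled past state, and suggest whatever remediation was recorded for it.

```python
# Toy past-state matching: find the labeled historical state closest to the
# current condition vector and suggest the remediation recorded for it.  All
# states, labels, and remedies here are invented for illustration.

import math

history = [
    ((0.85, 0.1, 12.0), "friday-evening-peak", "pre-position capacity"),
    ((0.30, 2.5, 40.0), "fiber-degradation", "dispatch field tech"),
    ((0.95, 0.2, 15.0), "event-driven-surge", "apply temporary QoS policy"),
]

current = (0.88, 0.15, 13.0)   # utilization, loss %, latency ms

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

label, remedy = min(history, key=lambda h: distance(h[0], current))[1:]
print(f"looks like '{label}'; suggested action: {remedy}")
```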

The second approach to gaining context is to take a service-driven approach.  A network asserts a service set that’s consumed by its users.  Each service is dependent on resource behaviors that fulfill it, and if you understand what behaviors are associated with a given service you can correlate the state of these behaviors with the services.  Now if “Behavior 4” has a variable out of range, you can presume that means that the services depending on Behavior 4 will be impacted.

The critical requirement in a service-based analytics application is that there be a correlation between the data you collect and “services”.  That means either that you have to measure only service-specific variables and ignore resource state, or that you understand the way resources relate to the services.
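
Here’s a sketch of that correlation with hypothetical behaviors and services: once you know which resource behaviors each service depends on, an out-of-range behavior translates directly into a list of impacted services.

```python
# Hypothetical service-to-behavior map: when "Behavior 4" goes out of range,
# the services that depend on it can be flagged immediately, which is the
# context that raw resource data by itself does not give you.

service_dependencies = {
    "VPN-AcmeCorp": ["Behavior 1", "Behavior 4"],
    "vCPE-BranchNet": ["Behavior 2"],
    "CDN-Overlay": ["Behavior 4", "Behavior 5"],
}

def impacted_services(bad_behavior: str) -> list[str]:
    return [svc for svc, deps in service_dependencies.items() if bad_behavior in deps]

print(impacted_services("Behavior 4"))   # ['VPN-AcmeCorp', 'CDN-Overlay']
```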

Resource relationships to services depend on whether the service network is a provisioned network or a connection network.  In a provisioned network, you make a specific connectivity change to accommodate the service, and so you presumably have some knowledge of how you did it.  In cloud networking, for example, if you use Neutron to set up connections among hosted elements, you know what connections you set up.  In connection networks, the members of the network are mutually addressable, and so you don’t have to do anything special to let them talk.  Instead you have to know how a given connection would be carried, which means an analysis of the state of the forwarding rules for the nodes.
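
For the connection-network case, this toy example illustrates what “knowing how a given connection would be carried” involves: walking the forwarding rules node by node to see which elements a flow actually crosses.  The topology and rules are invented.

```python
# Toy forwarding-rule walk for a connection network: since members are
# mutually addressable, the way to tie a connection to resources is to trace
# which nodes its traffic would actually cross.  Topology and rules invented.

forwarding = {
    "edge-A": {"10.1.2.0/24": "core-1"},
    "core-1": {"10.1.2.0/24": "edge-B"},
    "edge-B": {"10.1.2.0/24": "local"},
}

def trace(start: str, prefix: str) -> list[str]:
    path, node = [start], start
    while forwarding.get(node, {}).get(prefix) not in (None, "local"):
        node = forwarding[node][prefix]
        path.append(node)
    return path

print(trace("edge-A", "10.1.2.0/24"))   # ['edge-A', 'core-1', 'edge-B']
```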

One thing all of this demonstrates, if you think it through, is that there are really two networks here—a service network and a resource network.  There are also then two ways of looking at network conditions—based on how they impact services directly (the service-network view) and based on the health of the resources, working on the theory that healthy resources would support services as designed.

You might think this means that the service context is useless, but the opposite is true.  That’s because there are two levels of “service” in a network.  One level defines the “functional behavior” of the service and is created and sustained by maintaining functional relationships among elements, and the other defines the “structural behavior” of the service, which is created by that connection network (or networks).  Resources, or infrastructure, asserts its own services.  When we talk about a service view of something in relation to analytics we’re not talking about the retail functional relationships but rather the structural relationships—which is good because it’s the resources we have data from.

For new technologies like SDN and NFV I think this dualism is critical, both to allow analytics to be used effectively and to make operations of a network practical.  Where a “service” is coerced from “multi-tenant” resources by network/service policies, you can’t spend a lot of time fiddling with individual connected users and applications because you set up the multi-tenant connection network to avoid that.  In that case, you have to consider the whole connection network as a service.

The final point here, the “power” part of knowledge, is making something happen with what you now know.  The service-based framing of network analytics means that you have something ecosystemic you can use as context—the connection-network service you defined.  Logically, if that’s your framework then you have to be able to take service experience and pull resource conditions out of it to create your analysis, which means that analytics has to be related to some sort of service, and in a way that allows you to collect resource data for that service on demand.  This is the thing you need to look for when somebody talks “network analytics” to you.


And the IT Giants’ Prospects?

I’ve talked about the fortunes of the service providers and the network equipment vendors in past blogs, and so it’s logical now to talk about the IT giants who are players in the networking space.  None of these firms are likely to be targets of M&A in at least the traditional sense, and Dell has already gone private.  Others in the space aren’t likely to follow suit, so the question is really not one of survival as much as of prospering, and perhaps acquiring some other players.  Even network guys?  We’ll see.

Let’s start with the obvious giant, IBM.  There is no technology company I respect more, because there’s no other technology company that has shown it can weather major shifts in the market and technology.  I learned programming on an IBM computer, worked with the first true IBM mainframe.  They launched the mainframe and the PC in a very real sense.  Their Systems Network Architecture (SNA) was the foundation of enterprise networking.  They have patents and R&D in every area of IT.  You have to take these guys seriously.

The problem IBM has is that old guys like me don’t make up the market.  There was a time when IBM as a brand was solid gold, but IBM doesn’t have the brand they used to because their stuff is hardly populist.  Selling off the PC (laptop, desktop) business to Lenovo made financial sense but not brand sense.  But the big problem is marketing.  If you sell to the Fortune 500 you don’t normally need it, but if you want to shift your focus and image you certainly do.

IBM has to become a cloud company, and in the networking space it has to become an SDN and NFV company.  I was at a European networking show where IBM had a booth, consisting of a sad little flag, two bored people, and nothing else.  It was worse than not being there, and yet OpenDaylight had a nice booth.  And in NFV, I don’t see IBM having a real story at all even though they have virtually all the assets.  I could build NFV from the ground up more easily using IBM products than from the products of any other vendor, but you’d never know it unless you dug in more than anyone is likely to.

Might they do some M&A to fix their problem?  No, not because the M&A part isn’t likely but because you can’t fix a problem with the choir director by hiring more singers.  IBM needs to orchestrate in more ways than one.

HP is in a much better place in the singing sense.  In the cloud space, HP is a player with servers and software and even a public cloud offering.  They also have an SDN story and one of the better (maybe even the best) NFV story.  The problem HP has is not that they don’t have a story but that they may still be a tad short on substance.  In the cloud, SDN, and NFV space HP is still following a roadmap instead of sitting happily at the destination.

NFV is IMHO the critical test for HP.  NFV is the near-term application for a level of orchestration and management that will eventually touch everything in the networking and IT space.  It used to be that HP OpenView was almost the household word of network management, and HP needs to make HP Open-something the household word of orchestration down the line.  The only way to ensure that happens is for them to be a player in NFV as soon as possible.  They are working on it, working better than any other IT player is, but they might still do some canny M&A to get there faster.  It’s hard to say what would be best for them to pick up because I can’t read their technology trend line with management and orchestration yet.  Watch these guys though, because if any IT player moves in this space HP is one who might.

My next name is Intel and that may surprise you.  Intel is known as a chip player, but they’ve been quietly looking more and more into software.  The Wind River Carrier Grade Communications Server is impressive; arguably the best open-source platform for carrier cloud, SDN, and NFV.  If you were to add orchestration and management to it, the combination might be so powerful it would establish Intel as the kingpin in that space, giving them a completely killer NFV approach.

The challenge for Intel in management and orchestration is that they don’t have anything going there in an open-source sense.  They’ve had a relationship with Tail-f but I’m not convinced that’s going to provide them what they need, and the relationship might actually discourage both Intel and other partners from cohabiting to create a better solution.  If Intel does any relevant M&A I think that the management and orchestration space is where it would make sense.

Dell is another potential network-market server/IT vendor.  Like HP, Dell has a pretty decent portfolio of stuff for the networking space, and like HP it’s largely based on open source (Dell/Red Hat for example).  The challenge is that open source is inherently non-differentiating, and Dell needs to have a differentiated strategy if they want to compete in the space with the likes of HP or Cisco (who we covered among the network vendors).

HP has management distractions, and so obviously does Dell, who needs to figure out how to run as a private company so they can do a re-IPO later on and make everyone who participated in the privatization rich.  That means doing a lot better than expected, which means doing more than push servers and PCs the same way as always.  The carrier market could be huge for Dell, and it could be that Dell has to pull partner Red Hat’s carrier train against Intel/Wind River too.  A big order, one that Dell may have to do a lot of M&A to fill, but they have no stock currency to buy with.  We’ll see what happens.

So there we have it.  The IT giants are safe in their core markets for now.  They have some M&A incentive, but not to buy network equipment players.  This is all about software now.  Open source software in areas where you can’t differentiate yourself easily, to poison the well for others who might try.  Special sauce where something special is possible.  That’s what to look for.


Consolidation Risks among Network Vendors

When I did my review of the Street’s view of consolidation in the service provider space, some of you wondered about the network equipment vendors.  After all, it’s hard to imagine how a buyer industry so pressed for profits it has to collapse into itself via consolidation could avoid putting some price pressure on its vendors.  If that happens (which clearly it is already) then the vendors come under consolidation pressure as well, as a target or as someone looking to acquire to bulk up or build up.  But some more than others.

Who is “safe?”  Obviously Huawei doesn’t have anything to worry about.  It’s not going to be bought and it’s unlikely it would go out and grab up one of the other network names as long as there are issues in Congress with selling to the big US operators.  They could do M&A in the enterprise space or in software, and I think that’s likely.  The equipment guys really don’t have much that Huawei needs; they need ammunition in the NMS, OSS/BSS, and orchestration spaces.  This is where I think Huawei should focus their own M&A telescopes.

Cisco is similarly immune from being acquired, but they do have a risk.  For a long time activist investors have considered jumping into Cisco (as they have with rival Juniper), but this time to force a breakup.  Cisco has a bunch of fast-growing but small product areas and a behemoth legacy switch/routing business that has nowhere to go, profit-wise, but down.  They could do some more M&A, but the fact is that Cisco is torn right now between “buying revenue” and “buying R&D” (an issue for a lot of vendors).  They may wait a bit to see which would do them more good.  They really have a good asset set; I think their challenge is just one of priorities.

Another player I think is unlikely to be bought is Ericsson.  The company has a good thing going right now, reducing its exposure to commoditizing hardware and focusing more on professional services.  The open-source pressure in the network space is likely to help Ericsson, since most operators see themselves either buying integrated packages from open source vendors like Red Hat or Wind River, or integrating with a partner.  I think Ericsson may stand back on acquiring something in the near term, though.  Their primary assets are in the OSS/BSS space and other than picking up some software players to add technology value in orchestration or other network-related areas, I think they’ll ride the fence.

The “well…maybe” players start (alphabetically) with Alcatel-Lucent.  I think Alcatel-Lucent has a strong product portfolio, but they have a pretty high level of expense and they are also a bit too monolithic and glacial to contend with a fast-moving market.  I don’t think that they are at imminent risk, but they would be vulnerable to a major shift in technology like SDN or NFV if they couldn’t harness it to their benefit.  Their positioning is particularly vapid, and that’s been an ongoing problem for them.  It’s simply too early to say whether they can track trends or look exciting.  If they’re looking for M&A I’d suggest that management/orchestration might be the place to focus.  That would give them more opportunity per invested dollar I think.

Next in the “maybe” group is NSN.  They have a good but narrow product portfolio, something that can create some very significant risks.  Mobile infrastructure has been a kind of “stay-the-course-fools-paradise” because margins there have circled the drain closer to the rim than the rest of networking.  That doesn’t end the downward slurp, though.  Not only that, Huawei clearly sees mobile as its own big priority and there’s nobody you want less in a competitive situation than them.  NSN’s question is whether it takes a risk by holding to current product boundaries or takes a risk in expanding them.  As long as they’re on the fence there, they won’t acquire anything big, I think.  If they make a big move, watch to see if it steps outside the mobile box.  If it doesn’t then NSN may be looking to be adopted instead of having a single parent.

In the “could-be-acquired-or-worse” category we’ll again go in alphabetical order and start with Brocade.  While the company had a significant blip in strategic traction last year because of the Vyatta deal and some semi-good-if-perhaps-accidental NFV positioning, they lost all of it by the fall survey because they just couldn’t seem to follow up with a cohesive story.  The problem Brocade has is that they are really a data-center player without much of a cloud or NFV strategy and those are what will drive data center networking.  Their spring 2013 success showed that being stridently different will get you attention, so they need to do that again, but also follow up by doing something stridently useful.

Next on the list is obvious: Juniper.  The company just announced staff cuts as their new CEO tries to make friends with activist investors.  The problem is that, as a US company, you can’t sustain yourself in a commoditizing market by trying to fight Huawei on price, and if you cut costs you can’t ramp up R&D or M&A like you need to.  The problem Juniper has is its price/sales ratio and P/E.  The former is about 2.8 and the latter about 31 as of yesterday; Brocade’s are 2.14 and about 16.  That would suggest to most that Juniper’s stock is pricey, discouraging M&A.  And if you buy back stock and cut costs, your near-term ratios are probably going to move even higher.  The worst problem is that while the company has many strong things it could do, it’s too preoccupied with cost management to do them.

For anyone who’s a potential acquisition target, the big question is who would take the buyer role.  I don’t think there would be much value in any of the network vendors buying another network vendor.  The computer vendors are the obvious play, and here we have Dell, HP, IBM, Microsoft, and Oracle.  But I don’t think any of these companies would move to acquire one of the network players.  HP and Dell already have some networking gear.  IBM is selling off x86 server business because the margins stink, and networking would truly suck for them.  OEM is better.  Oracle is I think smart enough to see that they don’t want to be in the commodity hardware business.  So…I think all of the possible acquisition targets are likely stuck in 2014, which means they’d better be buffing themselves up for either a rosier life as an independent or a more attractive tidbit for a bigger player to acquire.


Cisco’s OpFlex: We Have Sound AND Fury

Cisco has never been shy about taking a different (and often frankly opportunistic) path with respect to “revolutions” like the cloud, SDN, and NFV.  I’d be the last guy to say that Cisco was all for an open-happy-band-of-brothers approach to competition but I’d also be last to expect that they would be.  We’re all in business to make money, and if Cisco takes a position in a key market like SDN that seems to favor…well…doing nothing much different, you have to assume they have good reason to believe that their approach will resonate with buyers.  Even if their story is confusing.  So it is with OpFlex.

Classical OpenFlow SDN uses a central controller to manage the routes in a network.  This controller uses OpenFlow to communicate forwarding rules to the network devices, and this process can be supported either in “reactive” or “proactive” mode.  In the reactive model, a switch tries to find a rule for something and if it fails, kicks the “something” back to the controller to get a rule.  In the proactive mode the controller is expected to pre-load the devices with complete and consistent forwarding tables.
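
A compressed sketch of the two modes follows.  This is a pseudo-controller written in Python with invented objects, not calls into any real OpenFlow library; it just shows the difference between punting unmatched traffic to the controller (reactive) and pre-loading the tables (proactive).

```python
# Sketch of the reactive vs. proactive OpenFlow models.  This is a pseudo-
# controller with invented objects, not calls into a real OpenFlow library.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match -> action

    def handle(self, match, controller):
        if match in self.flow_table:                 # rule already installed
            return self.flow_table[match]
        return controller.packet_in(self, match)     # reactive: punt upward


class Controller:
    def packet_in(self, switch, match):
        """Reactive mode: compute a rule on demand and push it down."""
        action = f"forward({match}) via port 2"
        switch.flow_table[match] = action
        return action

    def preload(self, switches, routes):
        """Proactive mode: install complete, consistent tables up front."""
        for sw in switches:
            sw.flow_table.update(routes.get(sw.name, {}))


if __name__ == "__main__":
    sw, ctl = Switch("s1"), Controller()
    ctl.preload([sw], {"s1": {"10.0.0.0/8": "forward via port 1"}})
    print(sw.handle("10.0.0.0/8", ctl))      # hits the pre-loaded rule
    print(sw.handle("192.168.0.0/16", ctl))  # misses, triggers packet-in
```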

So how about OpFlex?  OpFlex isn’t “an alternative” to OpenFlow as some have suggested; it has nothing to do with the forwarding tables in a direct sense.  It’s my view that you could still use OpenFlow inside a network that was controlled by OpFlex at a high level.  You could also use traditional devices.  Like other Cisco initiatives, OpFlex appears to be aiming higher, working to translate application needs into policies and then communicate those policies to the places in the network where traffic control is applied.

Policies, and policy exchanges, are what OpFlex is about.  It’s convenient to visualize OpFlex as creating a kind of “policy network” that exists in parallel with the real network.  This network has three tiers—a Policy Controller/Repository, Policy Elements, and Managed Objects, which are “objects” more in the abstract or software sense.  The goal of OpFlex is to create a tree of policy distribution that ends with Policy Elements that can “resolve” a policy.  It’s the resolution of policies that control the devices.  Policy Elements are linked to one or more MOs, and these MOs are representations of (abstractions of) real/virtual network elements.
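
To make the three-tier picture easier to hold, here’s a toy rendering of the policy tree.  It follows my reading of the draft, but the class names and the resolution logic are mine, not Cisco’s.

```python
# Toy rendering of an OpFlex-style policy tree as I read it: a controller
# holds policies, policy elements "resolve" them for the managed objects they
# own, and only the resolution step touches anything device-like.  The class
# names and the resolution logic are illustrative, not taken from the draft.

class ManagedObject:
    def __init__(self, name):
        self.name = name
        self.applied = None

    def apply(self, policy):
        self.applied = policy
        print(f"{self.name}: enforcing '{policy}'")


class PolicyElement:
    def __init__(self, managed_objects):
        self.managed_objects = managed_objects

    def resolve(self, policy):
        for mo in self.managed_objects:
            mo.apply(policy)


class PolicyController:
    def __init__(self):
        self.repository = {}
        self.elements = []

    def declare(self, name, policy):
        self.repository[name] = policy

    def distribute(self, name):
        for element in self.elements:
            element.resolve(self.repository[name])


if __name__ == "__main__":
    controller = PolicyController()
    controller.declare("web-tier", "isolate from db-tier except tcp/5432")
    controller.elements.append(PolicyElement([ManagedObject("leaf-switch-1"),
                                              ManagedObject("vswitch-host-9")]))
    controller.distribute("web-tier")
```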

Where this connects with user reality is in the notion of “Endpoints”.  Endpoints are devices, virtual or real, and when they “connect” they are registered and assigned a policy.  It’s that policy and the handling it represents that is distributed using OpFlex.  It appears that you can also define “roles” or application structures within an Endpoint, giving them separate policies.  All that adds up to a way of doing application-based handling in an open and distributed way, presuming that everyone implements OpFlex consistently.

If I’m reading the draft RFC right (Cisco, feel free to send me a correction if I’m not) then this whole process has the effect of creating a kind of policy-domain set that overlays on a normal IP or Ethernet network.  This means that the topology management and basic device forwarding stays as it is except to the extent that a Managed Object behavior is applied to the device and that MO responds to a policy.  If there were no policies, the network would presumably function as it is today.  If central policy control is lost it would appear that a default policy could be applied within the policy tree, even down to the device level.

The OpFlex concept is consistent with Cisco’s larger vision of SDN, which has been that it’s about application or software control, starting with APIs and going down to whatever you put in the data path.  That doesn’t have to be anything much different, though it can be.  Cisco seems to imply that there will be an OpFlex link to OpenDaylight; certainly they’re not trashing OpenDaylight or Insieme.  I see OpFlex as being an intermediary layer in the Cisco approach, a means of folding an API-driven SDN vision that’s always been kind of about policy into an “open” approach.  Is it open?  Yes; Cisco is going to do a reference open-source policy controller and the protocol is being submitted as an IETF standard.

You can now see why I say this isn’t really much related to OpenFlow, though Cisco positions it as being an example of a “declarative” versus OpenFlow’s “imperative” model.  Actually Cisco is describing the “reactive” OpenFlow model I described earlier when it says “imperative”, and I’m not convinced that model is ever a good idea outside the lab.  Most SDN users would want precomputed routes and failure modes.  A better comparison would be to say that OpFlex as an architecture would allow policy-based application control with current devices.  In that regard, it’s really not that different from an OpenDaylight controller with a legacy ACI plugin on the bottom.

Part of Cisco’s approach appears to be applicable to real current problems.  MOs can have statistics and state indicators that can be observed, and it appears to me that an MO hierarchy might be able to support “derived state” accumulated up an MO chain from the individual devices (real or virtual).  It also appears that you could view OpFlex as a kind of SDN federation approach since it seems that a Policy Element could “represent” a complete SDN/OpenFlow network and thus OpFlex and the policy tree could mediate traffic handling across multiple SDN domains.  But the big problem here is my need to continually use qualifiers like “could” or “appears”.  By presenting such an obvious OpenFlow counterpunch as the centerpiece of OpFlex PR material, Cisco has hidden any real mission statement and we’re left trying to dredge it out.

One thing I don’t like about this, at least as I’m interpreting it, is that what we’re doing here is establishing application networking policies that are distributed down to the lower network layers.  It appears to me that you could establish policy hierarchies where lower-level networking recognized three or four classes of service and then map application policies to transport policies, but I’m not sure why you’d want to define higher-level policies that wouldn’t be implemented granularly.  I favor the notion of an SDN network that’s layered, with the top level a pure overlay structure (like Nicira) and the bottom level a pure-policy transport-grade-of-service structure.  This I think would be a simpler model because connectivity and grade of service consumption are mapped only at one place—the boundary between the layers.
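
Here’s the kind of mapping I have in mind, reduced to a toy lookup: many fine-grained application policies collapse onto a handful of transport classes at the boundary between the layers.  The class names and thresholds are made up.

```python
# Made-up illustration of mapping granular application policies onto a small
# set of transport classes at the overlay/transport boundary, which is the
# only place connectivity and grade of service would need to be reconciled.

TRANSPORT_CLASSES = ["best-effort", "assured", "low-latency", "isolated"]

def to_transport_class(app_policy: dict) -> str:
    if app_policy.get("compliance_isolation"):
        return "isolated"
    if app_policy.get("max_latency_ms", 1000) <= 10:
        return "low-latency"
    if app_policy.get("min_availability", 0.0) >= 0.999:
        return "assured"
    return "best-effort"

print(to_transport_class({"max_latency_ms": 5}))         # low-latency
print(to_transport_class({"min_availability": 0.9995}))  # assured
print(to_transport_class({"app": "batch-backup"}))       # best-effort
```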

There’s nothing in the announcement that makes it clear how things like orchestration and management would work in a policy-driven world, though as I indicated you could infer some hooks in OpFlex if you look closely.  I think this is the biggest technical downside for Cisco and the biggest business downside too.  You can’t create network value with protocols, and at the end of the day that’s what OpFlex is.  Since I can easily map a concept like this into the broad “structured intelligence and derived operations” model I’ve been talking about, I have to believe Cisco could have done that too.

I see a bit of a rush-job positioning here, too.  “OpFlex” is a term that’s widely used in other industries so if you search on it you get a bunch of extraneous hits.  The OpenFlow jabs seem furiously defensive, and the state of the material is deficient even for a company like Cisco who values inspiration way more than buyer education.  I think Cisco is seeing the handwriting on the wall (and it’s Chinese), and they’re leaping to the attack here.  Maybe they should have waited a bit.


Playing Offense or Defense: Technology versus Consolidation

One of the inevitable results of commoditization is consolidation, and Wall Street (Oppenheimer in particular) has started predicting who among the “providers” in telecom might be acquired by somebody else.  It’s worth looking at their list of eaters and “eatees” to see what we can learn about industry direction.

Top of everyone’s list is T-Mobile, and most realize the challenges that the company faces—mostly competitive, but also marketing and churn.  The problem for mobile operators is less one of roaming (as some have suggested) and the need to build giant service areas than one of advertising scope.  Mobile services are promoted and sustained by brand recognition, and that’s achieved largely through TV ads.  If you buy ad space on a primo show and half or more of the audience can’t get service from you because you don’t have towers in their area, you’re wasting your time.

That makes the cost of running an aspiring mobile telco about the same as that of running an established one, and obviously the revenue is a lot less if you’re trying to fight your way up.  But I’m not sure that T-Mobile is an easy grab for somebody for that same reason—return on investment for the buying company is going to be harder to achieve.  T-Mobile can’t likely be acquired by Verizon or AT&T, so if there’s going to be any action, look for it to come from a cable company who wants to leverage mobile services for quad-play.  I’m skeptical even there.

Level 3 is another company that gets an acquisition-target nod, and here I think the thesis is better though still complicated.  From a profit and ROI perspective the company isn’t exactly sterling, and I don’t see that changing at this point.  What Level 3 could offer is national backbone capacity and peering with the major access ISPs.  That could be valuable 1) for mobile backhaul, if somebody like T-Mobile really wanted to push their cells into everyone’s territory, or 2) if the overturning of neutrality sticks or a new rule permits inter-provider settlement, provider-pays content delivery, or QoS peering.  I don’t think that Verizon or AT&T need Level 3, though, so again we’re left looking at either offshore players or a cable MSO to be the acquirer.  If the FCC and Congress cooperate on neutrality and settlement, this could happen.

DirecTV is second on Oppenheimer’s list and third overall among the broader Street players.  I think there’s value in video delivery for sure—it’s actually about the only thing that can be truly profitable in terms of consumer network services.  The problem with M&A here is less value than “valuable to whom?”  A company with a video franchise already has little reason to pick up DirecTV because they already have content services so that rules out the cable companies and AT&T and Verizon.  The only thing that DirecTV could add is fringe coverage, where customer density and opportunity were too thin in a geography to make any wireline delivery of content pay.  I think that’s a marginal game, so I think DirecTV is on the block only for an offshore player.

Probably the most “interesting” speculation on provider M&A is somebody buying Rackspace.  The thesis here ranges from the sublimely stupid (“Telcos have to get into the cloud”) to the semi-sensible (“Operators could wring a lot better margins out of Rackspace’s infrastructure, and quickly become players in the cloud”).  This is actually a tough call, because it is true that as former public utilities the telcos could surely deliver better results and also tolerate low ROIs better.  However, it’s not clear how much a telco would have to pay for Rackspace, and unless the telco has specific technical options to reduce costs and specific service objectives beyond IaaS, it’s hard for me to see how this works. Still, Rackspace was the impetus behind OpenStack, which is the favored telco cloud.  Cable companies might be a better bet since they typically have less invested in the cloud than the telcos do (both AT&T and Verizon already have cloud services).  They’d also need to have some specific technical strategy to raise service agility and drive down costs, though.

I think this last point is the most interesting, because in truth the M&A prospects for all the companies I’ve named would be better if we assumed that the prospective buyer had a good idea of how to raise revenue and lower cost that goes beyond consolidative economies, which are always potential drivers for M&A.  Here I’d point to AT&T and their Domain 2.0 strategy, which promises to create a new and more agile/efficient telecom.  If that could be done, then AT&T could benefit from any of these acquisitions except that of Level 3, and it’s possible they could use Level 3 CDN capability to deliver content from U-verse out of area.

The key point about consolidation driven by commoditization is that it can be both promoted by and defended against with measures that drive up margins.  If the buying player can wring better profits from what it acquires, it is more likely to take the plunge.  If the target company can create better margins internally, it can perhaps sustain itself independently or make the price of acquisition unattractive.  SDN and NFV are both shifts that could boost efficiency and agility.  The cloud could improve revenues.  At the technology level we have a path toward better profitability, if we can harness these trends.

Necessity is the mother of invention.  The same forces that encourage consolidation also promote the need for operational efficiency and service agility, those two common buzzwords of our time.  As we consider whether the industry will commoditize and consolidate, we need to consider whether it could fight its way out of both painful consequences by simply doing a better and more efficient job.  I think that’s very possible, and to reference AT&T’s initiatives again, I think people are really trying to do that right now.


Beating Huawei’s 20% Game

It’s probably no surprise to anyone that Huawei turned in record earnings in the last quarter, and I’m sure that the other network vendors have even more to worry about now than before.  So do network operators, whose own revenue and profit pressures have been driving them to reduce costs.  Nobody in the whole of the network equipment market can possibly have missed the drive for “transformation” by operators.  You can transform by radically cutting capex, radically cutting opex, radically raising revenue, or a combination of these factors.  With three choices to work with, why is it that Huawei’s competitors have been unable to frame a challenge to the Chinese giant?

SDN and NFV have been driven by capex considerations, by the simple notion that if you want to lower your cost, you should spend less on equipment.  The problem is that operators know in their hearts that capex reduction is the worst of the three transformation approaches.  One of the thought-leader giants of NFV made the comment in an open meeting that capex wasn’t the real driver for NFV: “If we want 20% capex reduction we’ll just beat up Huawei on price.”  That’s a telling comment because it shows both the high hurdle that capex-driven transformation would have to clear and why Huawei is winning.

Could either SDN or NFV realize a greater-than-20% capex reduction?  Overall, meaning network-wide, I think the answer is clearly “No!”  Both technologies have strong capex benefits, but in relatively specialized missions.  Service chaining, the poster-child application for both SDN and NFV, is actually a very difficult application to justify at all: the profitable applications are limited to business services, and because those services are sold on long-term contracts they could likely be supported by simple cloud-hosted multi-tenant elements instead.  It would be possible to redo networking completely to optimize for both SDN and NFV, but building a totally new networking model and evolving to it successfully from where we are is a very big problem.  Too big for vendors to bother with, and so they present narrow and half-hearted solutions, which Huawei can trump on price alone.

How about opex reduction?  Well, there we have a similar issue.  Right now we have operators investing in cloud computing, where the cloud community has a growing list of orchestration/DevOps tools available.  They invest in SDN, which has yet to settle on a true management model, and they’re starting to deploy NFV even though there’s no indication that its own MANO processes will fully address even local management needs (how do you represent a service chain as a MIB when the user wants to see only a virtual device and the operator needs to see all the hosted elements and connections?).  Opex reduction is, in my view, very feasible, but it’s not going to happen unless everyone accepts that you can’t gain much by improving opex on ten percent or less of your infrastructure.  However far you think SDN or NFV or the cloud might go, it’s darn sure going to start off at less than 10% of infrastructure, so early benefits will never be significant.  That means it’s back to pressing Huawei on price, and they win.
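To make that management-model question concrete, here’s a minimal sketch of the two-view problem.  All the class and field names are hypothetical illustrations, not any actual MANO, SNMP, or OSS data model: one service chain has to be collapsed into a single virtual device for the customer and expanded into all of its hosted elements for the operator.

```python
# Hypothetical illustration only: one service chain, two management views.
# None of these names come from a real MANO, MIB, or OSS schema.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class HostedElement:
    """One VNF instance plus the resource it actually runs on."""
    name: str      # e.g. "firewall-vnf-1"
    host: str      # e.g. "server-07.pod3"
    status: str    # e.g. "up" or "degraded"


@dataclass
class ServiceChain:
    service_id: str
    customer_label: str                       # what the buyer thinks they bought
    elements: List[HostedElement] = field(default_factory=list)

    def customer_view(self) -> Dict:
        """Collapse the chain into a single 'virtual device'.

        The customer sees one box whose status is the worst status of
        anything inside it; the hosting detail is hidden."""
        worst = "up" if all(e.status == "up" for e in self.elements) else "degraded"
        return {"device": self.customer_label, "status": worst}

    def operator_view(self) -> Dict:
        """Expose every hosted element and where it runs."""
        return {
            "service_id": self.service_id,
            "elements": [vars(e) for e in self.elements],
        }


# Example: a "virtual CPE" built from two chained VNFs.
chain = ServiceChain(
    service_id="svc-001",
    customer_label="Virtual CPE",
    elements=[
        HostedElement("firewall-vnf-1", "server-07.pod3", "up"),
        HostedElement("nat-vnf-1", "server-12.pod1", "degraded"),
    ],
)
print(chain.customer_view())   # {'device': 'Virtual CPE', 'status': 'degraded'}
print(chain.operator_view())
```

The hard part, of course, isn’t producing the two views; it’s keeping them consistent when the operator rehosts or scales the underlying elements, which is exactly the kind of management behavior that MANO hasn’t yet pinned down.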

Increasing revenue is the last element, and that can in turn be divided into two categories: improving time-to-revenue on current services and offering new services.  “Service agility” (meaning the time from conceptualizing a service to making it available for deployment, plus the time from order to deployment) is one of the operator hot buttons.  But again, a “service” is more than a cloud- or NFV-hosted element or an SDN data center.  How agile are we going to be if we tie the Road Runner of SDN/NFV to one of those big rocks that keep falling on our coyote friend?

New services are also problematic.  The majority of “new services” people talk about are things like social networking, which are ad-funded.  Total ad spending worldwide is less than the revenue of one big carrier, so even if operators got all of it (which they won’t, because less than half is likely to be even addressable online), it wouldn’t really do much.  What operators need is new services that people will pay for, and that means either for-fee content services (like Netflix) or business services that drive major productivity gains and so can justify a healthy fee for the service itself.  But equipment vendors settle for saying “Internet traffic is exploding” or touting the “Internet of Everything,” and they do nothing to prepare operators for what for-fee new services might look like.  Huawei’s price-leader approach gives operators at least an assured path to higher profit, so Huawei wins again.

This is a big, complicated industry, but you can take comfort in statistics at the high level.  Right now, operators worldwide spend about 18 cents of every revenue dollar on capex.  We can transform them only by shrinking that 18% number, either by increasing revenue or by cutting the capex itself to something like 14% (our hypothetical 20% price reduction from “beating up Huawei”).  Operators would love to do better than that on cost and increase revenue too, but they need solutions, not market platitudes.  They need, as they’ve said in my surveys for eight years or so, for vendors to step up and support their transformation needs.
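For what it’s worth, the arithmetic in that paragraph is simple enough to sanity-check; the figures below are just the illustrative numbers used above, not survey data.

```python
# Sanity check on the capex arithmetic above (illustrative numbers only).
capex_share = 0.18   # roughly 18 cents of every revenue dollar goes to capex
price_cut = 0.20     # the hypothetical 20% "beat up Huawei" discount

new_share = capex_share * (1 - price_cut)
print(f"Capex share after a {price_cut:.0%} price cut: {new_share:.1%}")
# Prints: Capex share after a 20% price cut: 14.4%
```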

Vendors need that too, because that 20% pricing push on Huawei is working.  And because Huawei is at least as likely to See the Light on opex and service revenue increases as other network vendors are.  Eight years ago, when operators were signaling their distaste for their vendors’ transformation support, Huawei was nothing in software and nothing in management.  They were simple box-pushers, and now they are becoming not only competitive but dominant in things like mobile infrastructure, where there’s a big software element.  They’re jumping into OSS/BSS (their Fastwire acquisition just this year proves that).  They’re active in NFV.  These guys mean business, and business beyond being the player who gets beaten up for the extra 20% price concession.  Imagine how well they’ll do if their competitors hunker down on legacy technology and vapid positioning.  We may see, and soon.
