Are We Looking at a Context-Driven Mobile Revenue Model?

You have to love a market that, when faced with seemingly insurmountable contradictions in business and technology trends, proceeds to generate new ones unapologetically.  We had that yesterday with the story that Sprint was considering shutting down the WiMAX assets of Clearwire, then another story that Google might be aiming to be an MVNO.  So how is it that wireless is “good” for Google and bad for Sprint?  Is it just WiMAX or something else?

WiMAX is a licensed wireless technology (at least in Clearwire’s form) that is probably best suited to supporting “migratory” users rather than “mobile” ones, meaning users on laptops or tablets and not phones.  You could think of it as a kind of for-fee super-WiFi that could cover a much larger geography.  It’s that angle that seems to make Sprint’s decision odd—if people are jumping all over hospitality WiFi, then why kill off its bigger brother?

The probable reason is that hospitality WiFi is killing WiMAX.  There are only a finite number of places where yuppies and bored teens can roost to get online, and in most cases the establishments themselves are using WiFi access to attract them.  It’s free.  There are other locations, like airports, where network operators, cable companies, or even municipalities have provided WiFi access that’s free or at least carries only a modest episodic cost.  Sure, these choices aren’t going to cover that wonderful park in the city or a nice bench along the river, but they cover enough to dampen the whole WiMAX business model.

And remember, you don’t have to say that there’s no value in WiMAX, only that the spectrum would be more valuable if you used it for 4G, and that’s the deciding factor here, I think.  If you look at return-on-cell, LTE is likely to do you more good, given the hospitality WiFi competition I’ve already noted.  So what this says is that the mobile user is more valuable than the migratory user.

But what can we deduce about Google’s purported interest in being an MVNO, where there’s no spectrum or spectrum value involved?  For those who don’t know the term, an MVNO is a Mobile Virtual Network Operator, a retail carrier that buys capacity on somebody else’s wireless network to serve its customers instead of building out its own.  Obviously Google would have to pay the wholesale partner for that capacity, and the margins would be thin.  If Google picked only one partner it would probably alienate every other carrier, who might then punish Google by removing or weakening their support for Android.  It’s said that Google is interested in offering its wireless MVNO service in its “Google Fiber” footprint locations, but that’s kind of impossible unless you think a mobile user in that footprint would never go anywhere else.  Google would have to market and support any MVNO service nationally, IMHO.

What Google gains from MVNO status, in my view, isn’t fiber-footprint competition with the incumbent LECs.  Anyone who thinks Google wants to be a general access provider is delusional.  The current guys would fall into fits of unbridled joy at the mere prospect of such a move from Google, because Google’s margins on that kind of business would be razor-thin and Google would have to become an opponent of neutrality just to stay alive.  Nor does Google want to compete with the wireless carriers; how could you undercut the guy you’re buying from, and anyway who has more wiggle room to operate at thin margins, Google or a LEC?

What’s likely underneath Google’s move is leveraging its brand.  Google wants to have a success in IPTV.  It wants to be a giant in mobile LBS, in advertising in general.  But deep inside Google knows that at some point every OTT player has the same underlying problem, which is that the global budget for advertising won’t keep their stockholders happy even if it all gets turned into online ad revenue.  The fact is that online advertising is probably lowering adspend because advertisers use online capabilities to target more tightly, reducing their costs overall.  If you stay ad-driven you’re a termite in a lifeboat—you inevitably eat your salvation.  Google has to sell something at some point. Selling means exploiting your brand.

But there are two questions arising from this.  First, Google could unquestionably deploy WiFi on a large scale, and has already done deals with retail chains and also “Muni-Fi” deals.  Given that most people do content-viewing and real searching while roosting rather than walking, could Google get more traction from its own WiFi?  Second, what is it that Google might sell to leverage with an MVNO deal?

The answers here could be related.  First, if you assume Google’s primary target isn’t the migratory user but the true mobile user, then it makes sense to think in terms of an MVNO rather than WiFi.  Second, if you assume that Google wants to sell contextual services to mobile users, then everything makes sense.

Contextual services are services that are designed to recognize what the user is doing when they make a request.  A user’s location is part of context, and so is the location of other users who are close socially, and local traffic conditions and weather conditions and perhaps who’s being called or chatted/SMSed with.  Google has a lot of stuff that establishes context in a search or query sense, and they may just be thinking about leveraging all of that stuff to create one or more mobile-paid services.
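
To make the idea concrete, here’s a minimal Python sketch of what a contextual service might do: fold location, weather, and social signals into a single context object and use it to narrow an ambiguous request.  Everything here—the `Context` fields, the queries, the responses—is my own invention for illustration, not anything Google has announced.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    location: str                                        # e.g. from GPS
    weather: str                                         # e.g. from a weather feed
    nearby_friends: list = field(default_factory=list)   # social proximity

def interpret(query: str, ctx: Context) -> str:
    """Use context to narrow an ambiguous request."""
    if query == "find food":
        if ctx.weather == "rain":
            return f"indoor restaurants near {ctx.location}"
        return f"outdoor cafes near {ctx.location}"
    return query  # no contextual narrowing available

ctx = Context(location="riverfront", weather="rain")
print(interpret("find food", ctx))  # indoor restaurants near riverfront
```

The point of the sketch is that the same three-word query produces a different, more useful answer depending on context—which is exactly the interpretive narrowing a paid contextual service would sell.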

Microsoft’s Cortana personal assistant is something that just might have generated some Google concern, and of course there’s Apple’s Siri.  Everyone is trying to get to the “Her” stage, a personal agent that can accept speech questions and provide answers.  As I said in an earlier blog, “Her” is a Hollywood dream if you want to get gestalt with the gadget, but you could narrow the interpretive/response range if you could assume context generated the question.  Might Google see contextual agent applications as their money tree?  If they do, then MVNO makes sense for them.


Context: The Missing Link Between Analytics and Utility

We’re hearing a lot about how analytics is going to change networking, how it’s essential in SDN, NFV, and the cloud, and maybe also critical in improving your social standing, good looks, and financial health; it might even make you run faster.  How big can “big data” get?  Apparently the sky’s the limit.  As usual, most of this is just crap, and so we need to look at the question of applying “analytics” to network services to sort out any good ideas from the inevitable chaff of marketing claims.

First, it’s “Knowledge is Power”, not “data is power”.  In order for data to become knowledge, you need the critical notion of context.  When I know that traffic is heavy, it’s nice.  If I know what road it’s on and when, it’s nicer.  If I know it’s on the road I plan to use, it’s power.  The point here is that collecting information on network behavior does little good if you can’t apply that data contextually in some way, and there are two approaches you can take to get to context.  Then you have to turn that “knowledge” into power separately.

The first approach to data context in network analytics is baselining.  If traffic is “heavy” it has to be heavy relative to something, and in baselining you attempt to define a normal state, as a value for each variable or, more likely, a range of values.  When data falls outside the range at any point, you take that as an indication of abnormal behavior, which means you undertake some action for remediation (the “power” part).  However, getting baselines for variables won’t create context, because you can’t relate the conditions across measurement points to anything in particular.  Baselining, or simply range-testing in analytics, isn’t particularly helpful, and most people who do anything useful with it really mean past-state analysis when they say “baselining”.
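
The baselining idea can be sketched in a few lines of Python: define “normal” statistically from history and flag anything outside the range.  The utilization numbers below are invented; a real system would baseline each variable separately, and probably per time-of-day.

```python
import statistics

def build_baseline(history, k=3.0):
    """Define 'normal' as mean +/- k standard deviations of past values."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return (mean - k * sd, mean + k * sd)

def out_of_range(value, baseline):
    """True if the value falls outside the normal range (abnormal behavior)."""
    low, high = baseline
    return value < low or value > high

history = [40, 42, 41, 39, 43, 40, 41, 42]   # past link utilization, %
baseline = build_baseline(history)
print(out_of_range(41, baseline))  # False: within normal range
print(out_of_range(95, baseline))  # True: abnormal, trigger remediation
```

Note what the sketch *doesn’t* do: it tells you a number is abnormal, but nothing about what that abnormality means across measurement points—which is exactly the missing-context problem described above.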

What some analytics approaches advocate is looking at the state of the network holistically, with all variables considered individually relative to their accepted range of values.  You then essentially pattern-match to decide what past state this present one corresponds to, and you accept the way that past state was interpreted as the context of the current conditions.  The NOC said this was Friday-and-I-have-no-date traffic last time it happened, so that’s what I’ll call it this time.  Presumably, if we can determine the remedies taken last time and apply them (or at least suggest them) automatically, we can respond correctly.  However, we have to assume 1) that our baseline map has accurately established context based on past-state analysis, and 2) that somebody has created rules for response that can be applied to the current situation.  Most analytics processes don’t really address the second issue; it’s up to an operations specialist to somehow create general scripts or policies and make them runnable on demand.
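
Mechanically, this past-state matching amounts to a nearest-neighbor lookup against labeled snapshots.  Here’s a sketch; the state vectors, labels, and remedies are invented for illustration.

```python
import math

# Past network-wide snapshots, each labeled with how the NOC interpreted it
# at the time, plus the remediation that was applied.
past_states = [
    {"vars": [0.3, 0.2, 0.1], "label": "normal",       "remedy": None},
    {"vars": [0.9, 0.8, 0.7], "label": "friday-peak",  "remedy": "shift traffic to backup trunk"},
    {"vars": [0.1, 0.9, 0.1], "label": "link-failure", "remedy": "reroute around link 2"},
]

def classify(current):
    """Pattern-match the current state vector to the closest past state."""
    return min(past_states, key=lambda s: math.dist(s["vars"], current))

match = classify([0.85, 0.82, 0.68])
print(match["label"])   # friday-peak
print(match["remedy"])  # shift traffic to backup trunk
```

The two assumptions in the paragraph above map directly onto the two dictionaries: the `label` is only as good as the past-state analysis that produced it, and the `remedy` field only exists if somebody took the trouble to record one.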

The second approach to gaining context is to take a service-driven approach.  A network asserts a service set that’s consumed by its users.  Each service is dependent on resource behaviors that fulfill it, and if you understand what behaviors are associated with a given service you can correlate the state of these behaviors with the services.  Now if “Behavior 4” has a variable out of range, you can presume that means that the services depending on Behavior 4 will be impacted.
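
Using the “Behavior 4” example, the service-driven correlation amounts to a simple dependency lookup; the service and behavior names below are hypothetical.

```python
# Each service depends on a set of resource "behaviors" that fulfill it.
service_map = {
    "VPN-A":  {"Behavior 1", "Behavior 4"},
    "VPN-B":  {"Behavior 2", "Behavior 3"},
    "IPTV-1": {"Behavior 4", "Behavior 5"},
}

def impacted_services(bad_behavior):
    """Return the services that depend on an out-of-range behavior."""
    return sorted(s for s, deps in service_map.items() if bad_behavior in deps)

print(impacted_services("Behavior 4"))  # ['IPTV-1', 'VPN-A']
```

The lookup is trivial; the hard part, as the next paragraph explains, is building and maintaining the `service_map` itself—knowing which resource behaviors actually underpin which services.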

The critical requirement in a service-based analytics application is that there be a correlation between the data you collect and “services”.  That means either that you have to measure only service-specific variables and ignore resource state, or that you understand the way resources relate to the services.

Resource relationships to services depend on whether the service network is a provisioned resource or a connection network.  In a provisioned resource, you make a specific connectivity change to accommodate the service, and so you presumably have some knowledge of how you did it.  In cloud networking, for example, if you use Neutron to set up connections among hosted elements, you know what connections you set up.  In connection networks, the members of the network are mutually addressable, and so you don’t have to do anything special to let them talk.  Instead you have to know how a given connection would be carried, which means an analysis of the state of the forwarding rules for the nodes.

One thing all of this demonstrates, if you think it through, is that there are really two networks here—a service network and a resource network.  There are also then two ways of looking at network conditions—based on how they impact services directly (the service-network view) and based on the health of the resources, working on the theory that healthy resources would support services as designed.

You might think this means that the service context is useless, but the opposite is true.  That’s because there are two levels of “service” in a network.  One level defines the “functional behavior” of the service and is created and sustained by maintaining functional relationships among elements, and the other defines the “structural behavior” of the service, which is created by that connection network (or networks).  Resources, or infrastructure, assert their own services.  When we talk about a service view of something in relation to analytics we’re not talking about the retail functional relationships but rather the structural relationships—which is good, because it’s the resources we have data from.

For new technologies like SDN and NFV I think this dualism is critical, both to allow analytics to be used effectively and to make operations of a network practical.  Where a “service” is coerced from “multi-tenant” resources by network/service policies, you can’t spend a lot of time fiddling with individual connected users and applications because you set up the multi-tenant connection network to avoid that.  In that case, you have to consider the whole connection network as a service.

The final point here, the “power” part of knowledge, is making something happen with what you now know.  The service-based framing of network analytics means that you have something ecosystemic you can use as context—the connection-network service you defined.  Logically, if that’s your framework then you have to be able to take service experience and pull resource conditions out of it to create your analysis, which means that analytics has to be related to some sort of service, and in a way that allows you to collect resource data for that service on demand.  This is the thing you need to look for when somebody talks “network analytics” to you.


And the IT Giants’ Prospects?

I’ve talked about the fortunes of the service providers and the network equipment vendors in past blogs, and so it’s logical now to talk about the IT giants who are players in the networking space.  None of these firms are likely to be targets of M&A in at least the traditional sense, and Dell has already gone private.  Others in the space aren’t likely to follow suit, so the question is really not one of survival as much as of prospering, and perhaps acquiring some other players.  Even network guys?  We’ll see.

Let’s start with the obvious giant, IBM.  There is no technology company I respect more, because there’s no other technology company that has shown it can weather major shifts in the market and technology.  I learned programming on an IBM computer, worked with the first true IBM mainframe.  They launched the mainframe and the PC in a very real sense.  Their Systems Network Architecture (SNA) was the foundation of enterprise networking.  They have patents and R&D in every area of IT.  You have to take these guys seriously.

The problem IBM has is that old guys like me don’t make up the market.  There was a time when IBM as a brand was solid gold, but IBM doesn’t have the brand they used to because their stuff is hardly populist.  Selling off the PC (laptop, desktop) business to Lenovo made financial sense but not brand sense.  But the big problem is marketing.  If you sell to the Fortune 500 you don’t normally need it, but if you want to shift your focus and image you certainly do.

IBM has to become a cloud company, and in the networking space it has to become an SDN and NFV company.  I was at a European networking show where IBM had a booth, consisting of a sad little flag, two bored people, and nothing else.  It was worse than not being there, and yet OpenDaylight had a nice booth.  And in NFV, I don’t see IBM having a real story at all even though they have virtually all the assets.  I could build NFV from the ground up more easily using IBM products than from the products of any other vendor, but you’d never know it unless you dug in more than anyone is likely to.

Might they do some M&A to fix their problem?  No, not because the M&A part isn’t likely but because you can’t fix a problem with the choir director by hiring more singers.  IBM needs to orchestrate in more ways than one.

HP is in a much better place in the singing sense.  In the cloud space, HP is a player with servers and software and even a public cloud offering.  They also have an SDN story and one of the better (maybe even the best) NFV stories.  The problem HP has is not that they don’t have a story but that they may still be a tad short on substance.  In the cloud, SDN, and NFV spaces HP is still following a roadmap instead of sitting happily at the destination.

NFV is IMHO the critical test for HP.  NFV is the near-term application for a level of orchestration and management that will eventually touch everything in the networking and IT space.  It used to be that HP OpenView was almost the household word of network management, and HP needs to make HP Open-something the household word of orchestration down the line.  The only way to ensure that happens is for them to be a player in NFV as soon as possible.  They are working on it, harder than any other IT player is, but they might still do some canny M&A to get there faster.  It’s hard to say what would be best for them to pick up, because I can’t read their technology trend line in management and orchestration yet.  Watch these guys, though, because if any IT player moves in this space HP is one who might.

My next name is Intel and that may surprise you.  Intel is known as a chip player, but they’ve been quietly looking more and more into software.  The Wind River Carrier Grade Communications Server is impressive; arguably the best open-source platform for carrier cloud, SDN, and NFV.  If you were to add orchestration and management to it, the combination might be so powerful it would establish Intel as the kingpin in that space, giving them a completely killer NFV approach.

The challenge for Intel in management and orchestration is that they don’t have anything going there in an open-source sense.  They’ve had a relationship with Tail-f but I’m not convinced that’s going to provide them what they need, and the relationship might actually discourage both Intel and other partners from cohabiting to create a better solution.  If Intel does any relevant M&A I think that the management and orchestration space is where it would make sense.

Dell is another potential network-market server/IT vendor.  Like HP, Dell has a pretty decent portfolio of stuff for the networking space, and like HP it’s largely based on open source (Dell/Red Hat for example).  The challenge is that open source is inherently non-differentiating, and Dell needs to have a differentiated strategy if they want to compete in the space with the likes of HP or Cisco (who we covered among the network vendors).

HP has management distractions, and so obviously does Dell, who needs to figure out how to run as a private company so they can do a re-IPO later on and make everyone who participated in the privatization rich.  That means doing a lot better than expected, which means doing more than push servers and PCs the same way as always.  The carrier market could be huge for Dell, and it could be that Dell has to pull partner Red Hat’s carrier train against Intel/Wind River too.  A big order, one that Dell may have to do a lot of M&A to fill, but they have no stock currency to buy with.  We’ll see what happens.

So there we have it.  The IT giants are safe in their core markets for now.  They have some M&A incentive, but not to buy network equipment players.  This is all about software now.  Open source software in areas where you can’t differentiate yourself easily, to poison the well for others who might try.  Special sauce where something special is possible.  That’s what to look for.


Consolidation Risks among Network Vendors

When I did my review of the Street’s view of consolidation in the service provider space, some of you wondered about the network equipment vendors.  After all, it’s hard to imagine how a buyer industry so pressed for profits it has to collapse into itself via consolidation could avoid putting some price pressure on its vendors.  If that happens (which clearly it is already) then the vendors come under consolidation pressure as well, as a target or as someone looking to acquire to bulk up or build up.  But some more than others.

Who is “safe”?  Obviously Huawei doesn’t have anything to worry about.  It’s not going to be bought, and it’s unlikely it would go out and grab one of the other network names as long as there are issues in Congress with selling to the big US operators.  They could do M&A in the enterprise space or in software, and I think that’s likely.  The equipment guys really don’t have much that Huawei needs; they need ammunition in the NMS, OSS/BSS, and orchestration spaces.  This is where I think Huawei should focus their own M&A telescopes.

Cisco is similarly immune from being acquired, but they do have a risk.  For a long time activist investors have considered jumping into Cisco (as they have with rival Juniper), but this time to force a breakup.  Cisco has a bunch of fast-growing but small product areas and a behemoth legacy switch/routing business that has nowhere to go, profit-wise, but down.  They could do some more M&A, but the fact is that Cisco is torn right now between “buying revenue” and “buying R&D” (an issue for a lot of vendors).  They may wait a bit to see which would do them more good.  They really have a good asset set; I think their challenge is just one of priorities.

Another player I think is unlikely to be bought is Ericsson.  The company has a good thing going right now, reducing its exposure to commoditizing hardware and focusing more on professional services.  The open-source pressure in the network space is likely to help Ericsson, since most operators see themselves either buying integrated packages from open source vendors like Red Hat or Wind River, or integrating with a partner.  I think Ericsson may stand back on acquiring something in the near term, though.  Their primary assets are in the OSS/BSS space and other than picking up some software players to add technology value in orchestration or other network-related areas, I think they’ll ride the fence.

The “well…maybe” players start (alphabetically) with Alcatel-Lucent.  I think Alcatel-Lucent has a strong product portfolio, but they have a pretty high level of expense and they are also a bit too monolithic and glacial to contend with a fast-moving market.  I don’t think that they are at imminent risk, but they would be vulnerable to a major shift in technology like SDN or NFV if they couldn’t harness it to their benefit.  Their positioning is particularly vapid, and that’s been an ongoing problem for them.  It’s simply too early to say whether they can track trends or look exciting.  If they’re looking for M&A I’d suggest that management/orchestration might be the place to focus.  That would give them more opportunity per invested dollar I think.

Next in the “maybe” group is NSN.  They have a good but narrow product portfolio, something that can create some very significant risks.  Mobile infrastructure has been a kind of stay-the-course fools’ paradise, because margins there have circled the drain closer to the rim than in the rest of networking.  That doesn’t end the downward slurp, though.  Not only that, Huawei clearly sees mobile as its own big priority, and there’s nobody you want less in a competitive situation than them.  NSN’s question is whether it takes a risk by holding to current product boundaries or takes a risk in expanding them.  As long as they’re on the fence there, they won’t acquire anything big, I think.  If they make a big move, watch to see if it steps outside the mobile box.  If it doesn’t, then NSN may be looking to be adopted instead of having a single parent.

In the “could-be-acquired-or-worse” category we’ll again go in alphabetical order and start with Brocade.  While the company had a significant blip in strategic traction last year because of the Vyatta deal and some semi-good-if-perhaps-accidental NFV positioning, they lost all of it by the fall survey because they just couldn’t seem to follow up with a cohesive story.  The problem Brocade has is that they are really a data-center player without much of a cloud or NFV strategy and those are what will drive data center networking.  Their spring 2013 success showed that being stridently different will get you attention, so they need to do that again, but also follow up by doing something stridently useful.

Next on the list is obvious: Juniper.  The company just announced staff cuts as their new CEO tries to make friends with activist investors.  The problem is that, as a US company, you can’t sustain yourself in a commoditizing market by trying to fight Huawei on price, and if you cut costs you can’t ramp up R&D or M&A like you need to.  The problem Juniper has is its price/sales ratio and P/E.  The former is about 2.8 and the latter about 31 as of yesterday; Brocade’s are about 2.14 and 16.  That would suggest to most that Juniper’s stock is pricey, discouraging M&A.  And if you buy back stock and cut costs, your near-term ratios are probably going to move even higher.  The worst problem is that while the company has many strong things it could do, it’s too preoccupied with cost management to do them.

For anyone who’s a potential acquisition target, the big question is who would take the buyer role.  I don’t think there would be much value in any of the network vendors buying another network vendor.  The computer vendors are the obvious play, and here we have Dell, HP, IBM, Microsoft, and Oracle.  But I don’t think any of these companies would move to acquire one of the network players.  HP and Dell already have some networking gear.  IBM is selling off x86 server business because the margins stink, and networking would truly suck for them.  OEM is better.  Oracle is I think smart enough to see that they don’t want to be in the commodity hardware business.  So…I think all of the possible acquisition targets are likely stuck in 2014, which means they’d better be buffing themselves up for either a rosier life as an independent or a more attractive tidbit for a bigger player to acquire.


Cisco’s OpFlex: We Have Sound AND Fury

Cisco has never been shy about taking a different (and often frankly opportunistic) path with respect to “revolutions” like the cloud, SDN, and NFV.  I’d be the last guy to say that Cisco was all for an open-happy-band-of-brothers approach to competition but I’d also be last to expect that they would be.  We’re all in business to make money, and if Cisco takes a position in a key market like SDN that seems to favor…well…doing nothing much different, you have to assume they have good reason to believe that their approach will resonate with buyers.  Even if their story is confusing.  So it is with OpFlex.

Classical OpenFlow SDN uses a central controller to manage the routes in a network.  This controller uses OpenFlow to communicate forwarding rules to the network devices, and this process can be supported either in “reactive” or “proactive” mode.  In the reactive model, a switch tries to find a rule for something and if it fails, kicks the “something” back to the controller to get a rule.  In the proactive mode the controller is expected to pre-load the devices with complete and consistent forwarding tables.
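
The two modes can be sketched in a few lines, assuming a trivial flow table keyed by a flow identifier.  Real OpenFlow rules match on header fields and travel over a protocol session; this compresses the controller exchange into function calls to show the control flow.

```python
# Toy flow table held by the "switch": flow key -> forwarding rule.
flow_table = {}

def controller_compute_rule(flow_key):
    """Stand-in for the controller's route computation (deterministic toy rule)."""
    return {"out_port": sum(map(ord, flow_key)) % 4}

def switch_handle(flow_key):
    """Reactive mode: a table miss is kicked back to the controller
    (the OpenFlow packet-in), which supplies a rule on demand."""
    if flow_key not in flow_table:
        flow_table[flow_key] = controller_compute_rule(flow_key)
    return flow_table[flow_key]

def proactive_preload(flow_keys):
    """Proactive mode: the controller pre-loads a complete, consistent
    forwarding table before any traffic arrives."""
    for key in flow_keys:
        flow_table[key] = controller_compute_rule(key)

proactive_preload(["10.0.0.1->10.0.0.2"])   # no miss will occur for this flow
print(switch_handle("10.0.0.1->10.0.0.2"))  # served from the pre-loaded table
```

The operational difference matters: in reactive mode the first packet of every new flow pays a round-trip to the controller, which is why (as argued later in this post) most production SDN users would want precomputed routes.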

So how about OpFlex?  OpFlex isn’t “an alternative” to OpenFlow as some have suggested; it has nothing to do with the forwarding tables in a direct sense.  It’s my view that you could still use OpenFlow inside a network that was controlled by OpFlex at a high level.  You could also use traditional devices.  Like other Cisco initiatives, OpFlex appears to be aiming higher, working to translate application needs into policies and then communicate those policies to the places in the network where traffic control is applied.

Policies, and policy exchanges, are what OpFlex is about.  It’s convenient to visualize OpFlex as creating a kind of “policy network” that exists in parallel with the real network.  This network has three tiers—a Policy Controller/Repository, Policy Elements, and Managed Objects, which are “objects” more in the abstract or software sense.  The goal of OpFlex is to create a tree of policy distribution that ends with Policy Elements that can “resolve” a policy.  It’s the resolution of policies that control the devices.  Policy Elements are linked to one or more MOs, and these MOs are representations of (abstractions of) real/virtual network elements.

Where this connects with user reality is in the notion of “Endpoints”.  Endpoints are devices, virtual or real, and when they “connect” they are registered and assigned a policy.  It’s that policy and the handling it represents that is distributed using OpFlex.  It appears that you can also define “roles” or application structures within an Endpoint, giving them separate policies.  All that adds up to a way of doing application-based handling in an open and distributed way, presuming that everyone implements OpFlex consistently.
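
Reading the three tiers literally, a toy model of the policy tree might look like the following.  The class shapes, the “web-tier” policy, and the names are my own reading of the draft, not Cisco’s schema.

```python
class PolicyController:
    """Top tier: the Policy Controller/Repository holding named policies."""
    def __init__(self):
        self.repository = {
            "web-tier": {"qos": "gold"},
            "default":  {"qos": "best-effort"},
        }

    def fetch(self, name):
        # Unknown policy names fall back to a default policy.
        return self.repository.get(name, self.repository["default"])

class ManagedObject:
    """Bottom tier: abstraction of a real or virtual network element."""
    def __init__(self, name):
        self.name = name
        self.state = {}          # endpoint -> applied policy

    def apply(self, endpoint, policy):
        self.state[endpoint] = policy

class PolicyElement:
    """Middle tier: resolves abstract policies onto its managed objects."""
    def __init__(self, controller, managed_objects):
        self.controller = controller
        self.managed_objects = managed_objects

    def register_endpoint(self, endpoint, policy_name):
        # When an Endpoint connects, it's registered and assigned a policy,
        # which is resolved down onto the devices via their MOs.
        policy = self.controller.fetch(policy_name)
        for mo in self.managed_objects:
            mo.apply(endpoint, policy)

ctrl = PolicyController()
pe = PolicyElement(ctrl, [ManagedObject("leaf-switch-1")])
pe.register_endpoint("vm-42", "web-tier")
print(pe.managed_objects[0].state["vm-42"])  # {'qos': 'gold'}
```

The useful property of this shape is that the devices never see “applications”, only resolved policies—which is what lets the same tree sit on top of OpenFlow devices, traditional devices, or both.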

If I’m reading the draft RFC right (Cisco, feel free to send me a correction if I’m not) then this whole process has the effect of creating a kind of policy-domain set that overlays on a normal IP or Ethernet network.  This means that the topology management and basic device forwarding stays as it is except to the extent that a Managed Object behavior is applied to the device and that MO responds to a policy.  If there were no policies, the network would presumably function as it is today.  If central policy control is lost it would appear that a default policy could be applied within the policy tree, even down to the device level.

The OpFlex concept is consistent with Cisco’s larger vision of SDN, which has been that it’s about application or software control, starting with APIs and going down to whatever you put in the data path.  That doesn’t have to be anything much different, though it can be.  Cisco seems to imply that there will be an OpFlex link to OpenDaylight; certainly they’re not trashing OpenDaylight or Insieme.  I see OpFlex as being an intermediary layer in the Cisco approach, a means of fitting an API-driven SDN vision, one that’s always been kind of about policy, into an “open” approach.  Is it open?  Yes; Cisco is going to do a reference open-source policy controller, and the protocol is being submitted as an IETF standard.

You can now see why I say this isn’t really much related to OpenFlow, though Cisco positions it as being an example of a “declarative” versus OpenFlow’s “imperative” model.  Actually Cisco is describing the “reactive” OpenFlow model I described earlier when it says “imperative”, and I’m not convinced that model is ever a good idea outside the lab.  Most SDN users would want precomputed routes and failure modes.  A better comparison would be to say that OpFlex as an architecture would allow policy-based application control with current devices.  In that regard, it’s really not that different from an OpenDaylight controller with a legacy ACI plugin on the bottom.

Part of Cisco’s approach appears to be applicable to real current problems.  MOs can have statistics and state indicators that can be observed, and it appears to me that an MO hierarchy might be able to support “derived state” accumulated up an MO chain from the individual devices (real or virtual).  It also appears that you could view OpFlex as a kind of SDN federation approach, since it seems that a Policy Element could “represent” a complete SDN/OpenFlow network, and thus OpFlex and the policy tree could mediate traffic handling across multiple SDN domains.  But the big problem here is my need to continually use qualifiers like “could” or “appears”.  By presenting such an obvious OpenFlow counterpunch as the centerpiece of OpFlex PR material, Cisco has hidden any real mission statement, and we’re left trying to dredge it out.

One thing I don’t like about this, at least as I’m interpreting it, is that what we’re doing here is establishing application networking policies that are distributed down to the lower network layers.  It appears to me that you could establish policy hierarchies where lower-level networking recognized three or four classes of service and then map application policies to transport policies, but I’m not sure why you’d want to define higher-level policies that wouldn’t be implemented granularly.  I favor the notion of an SDN network that’s layered, with the top level a pure overlay structure (like Nicira) and the bottom level a pure-policy transport-grade-of-service structure.  This I think would be a simpler model because connectivity and grade of service consumption are mapped only at one place—the boundary between the layers.

There’s nothing in the announcement that makes it clear how things like orchestration and management would work in a policy-driven world, though as I indicated you could infer some hooks in OpFlex if you look closely.  I think this is the biggest technical downside for Cisco and the biggest business downside too.  You can’t create network value with protocols, and at the end of the day that’s what OpFlex is.  Since I can easily map a concept like this into the broad “structured intelligence and derived operations” model I’ve been talking about, I have to believe Cisco could have done that too.

I see a bit of rush-job positioning here, too.  “OpFlex” is a term that’s widely used in other industries, so if you search on it you get a bunch of extraneous hits.  The OpenFlow jabs seem furiously defensive, and the state of the material is deficient even for a company like Cisco that values inspiration way more than buyer education.  I think Cisco is seeing the handwriting on the wall (and it’s Chinese), and they’re leaping to the attack here.  Maybe they should have waited a bit.

Posted in Uncategorized | Leave a comment

Playing Offense or Defense: Technology versus Consolidation

One of the inevitable results of commoditization is consolidation, and Wall Street (Oppenheimer in particular) has started predicting who among the “providers” in telecom might be acquired by somebody else.  It’s worth looking at their list of eaters and “eatees” to see what we can learn about industry direction.

Top of everyone’s list is T-Mobile, and most realize the challenges that the company faces—mostly competitive, but also challenges in marketing and churn.  The problem for mobile operators is less roaming (as some have suggested) or the need to build giant service areas than it is advertising scope.  Mobile services are promoted and sustained by brand recognition, and that’s achieved largely through TV ads.  If you buy ad space on a primo show and half or more of the audience can’t get service from you because you don’t have towers in their area, you’re wasting your time.

That makes the cost of running an aspiring mobile telco about the same as that of running an established one, and obviously the revenue is a lot less if you’re trying to fight your way up.  But I’m not sure that T-Mobile is an easy grab for somebody, for that same reason—return on investment for the buying company is going to be harder to achieve.  T-Mobile can’t likely be acquired by Verizon or AT&T, so if there’s going to be any action, look for it to come from a cable company that wants to leverage mobile services for quad-play.  I’m skeptical even there.

Level 3 is another company that gets an acquisition-target nod, and here I think the thesis is better, though still complicated.  From a profit and ROI perspective the company isn’t exactly sterling, and I don’t see that changing at this point.  What Level 3 could offer is national backbone capacity and peering with the major access ISPs.  That could be valuable either 1) for mobile backhaul, if somebody like T-Mobile really wanted to push their cells into everyone’s territory, or 2) for content delivery, if the overturning of neutrality sticks or a new rule permits inter-provider settlement, provider-pays content delivery, or QoS peering.  I don’t think that Verizon or AT&T need Level 3, though, so again we’re left looking at either offshore players or a cable MSO as the acquirer.  If the FCC and Congress cooperate on neutrality and settlement, this could happen.

DirecTV is second on Oppenheimer’s list and third overall among the broader Street players.  I think there’s value in video delivery for sure—it’s actually about the only thing that can be truly profitable in terms of consumer network services.  The problem with M&A here is less value than “valuable to whom?”  A company with a video franchise has little reason to pick up DirecTV because it already has content services, which rules out the cable companies and AT&T and Verizon.  The only thing DirecTV could add is fringe coverage, where customer density and opportunity are too thin in a geography to make any wireline delivery of content pay.  I think that’s a marginal game, so I think DirecTV is on the block only for an offshore player.

Probably the most “interesting” speculation on provider M&A is somebody buying Rackspace.  The thesis here ranges from the sublimely stupid (“Telcos have to get into the cloud”) to the semi-sensible (“Operators could wring a lot better margins out of Rackspace’s infrastructure, and quickly become players in the cloud”).  This is actually a tough call, because it is true that as former public utilities the telcos could surely deliver better results and also tolerate low ROIs better.  However, it’s not clear how much a telco would have to pay for Rackspace, and unless the telco has specific technical options to reduce costs and specific service objectives beyond IaaS, it’s hard for me to see how this works. Still, Rackspace was the impetus behind OpenStack, which is the favored telco cloud.  Cable companies might be a better bet since they typically have less invested in the cloud than the telcos do (both AT&T and Verizon already have cloud services).  They’d also need to have some specific technical strategy to raise service agility and drive down costs, though.

I think this last point is the most interesting, because in truth the M&A prospects for all the companies I’ve named would be better if we assumed that the prospective buyer had a good idea of how to raise revenue and lower cost that goes beyond consolidative economies, which are always potential drivers for M&A.  Here I’d point to AT&T and their Domain 2.0 strategy, which promises to create a new and more agile/efficient telecom.  If that could be done, then AT&T could benefit from any of these acquisitions except that of Level 3, and it’s possible they could use Level 3 CDN capability to deliver content from U-verse out of area.

The key point about consolidation driven by commoditization is that it can be both promoted by and defended against using measures that drive up margins.  If the buying player can wrestle better profits from what they acquire, they’re more likely to take the plunge.  If the target company can create better margins internally, it can perhaps sustain itself independently or make the price of acquisition unattractive.  SDN and NFV are both shifts that could boost efficiency and agility.  The cloud could improve revenues.  At the technology level we have a path toward better profitability, if we can harness these trends.

Necessity is the mother of invention.  The same forces that encourage consolidation also promote the need for operational efficiency and service agility, those two common buzzwords of our time.  As we’re considering whether the industry will commoditize and consolidate, we need to consider whether it could fight its way out of both painful consequences by simply doing a better and more efficient job.  I think that’s very possible, and referencing again AT&T’s initiatives, I think people are really trying to do that right now.


Beating Huawei’s 20% Game

It’s probably no surprise to anyone that Huawei turned in record earnings in the last quarter, and I’m sure that the other network vendors have even more to worry about now than before.  So do network operators, whose own revenue and profit pressures have been driving them to reduce costs.  Nobody in the whole of the network equipment market can possibly have missed the drive for “transformation” by operators.  You can transform by radically cutting capex, radically cutting opex, radically raising revenue, or a combination of these factors.  With three choices to work with, why is it that Huawei’s competitors have been unable to frame a challenge to the Chinese giant?

SDN and NFV have been driven by capex considerations, by the simple notion that if you want to lower your cost, spend less on equipment.  The problem is that operators know in their hearts that capex reduction is the worst of the three transformation approaches.  One of the thought-leader giants of NFV, in an open meeting, made the comment that capex wasn’t the real driver for NFV; “If we want 20% capex reduction we’ll just beat up Huawei on price.”  That’s a telling comment because it shows both the high hurdle that capex-driven transformation would have to clear, and also why Huawei is winning.

Could either SDN or NFV realize a greater-than-20% capex reduction?  Overall, meaning network-wide, I think the answer is clearly “No!”  Both technologies have strong capex benefits but in relatively specialized missions.  Service Chaining, the poster-child application for both SDN and NFV, is actually a very difficult application to justify at all, given that the profitable applications are limited to business services and could likely be supported by simple cloud-hosted multi-tenant elements because the services themselves are sold on long-term contracts.  It would be possible to redo networking completely to optimize both SDN and NFV, but to build a totally new networking model and evolve to it successfully from where we are is a very big problem.  Too big for vendors to bother with, and so they present narrow and half-hearted solutions, which Huawei can trump on price alone.

How about opex reduction?  Well, there we have a similar issue.  Right now we have operators investing in cloud computing, where the cloud community has a growing list of orchestration/DevOps tools available.  They invest in SDN, which has yet to settle on a true management model, and they’re starting to deploy NFV even though there’s no indication that its own MANO processes will fully address even local management needs (how do you represent a service chain as a MIB when the user wants to see only a virtual device and the operator needs to see all the hosted elements and connections?).  Opex reduction is, in my view, very feasible, but it’s not going to happen unless everyone accepts that you can’t gain anything from managing opex in ten percent or less of your infrastructure.  However far you think SDN or NFV or the cloud might go, it’s darn sure going to start off at less than 10% of infrastructure, so early benefits will never be significant.  That means it’s back to pressing Huawei on price, and they win.
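
To illustrate that dual-view MIB problem, here’s a toy model in which one service-chain record yields two management views.  The field names are my own, not anything from the NFV MANO specifications.

```python
# Illustrative only: one service-chain model, two management views.  The
# customer sees a single "virtual device"; the operator sees every hosted
# element and connection behind it.

chain = {
    "service": "vFirewall",
    "elements": [
        {"name": "fw-vm-1", "host": "srv-07", "status": "up"},
        {"name": "dpi-vm-1", "host": "srv-12", "status": "up"},
    ],
    "links": [("fw-vm-1", "dpi-vm-1")],
}

def customer_view(chain):
    """Collapse the whole chain into one virtual-device status."""
    ok = all(e["status"] == "up" for e in chain["elements"])
    return {"device": chain["service"], "status": "up" if ok else "down"}

def operator_view(chain):
    """Expose all hosted elements, hosts, and connections."""
    return chain

print(customer_view(chain))  # {'device': 'vFirewall', 'status': 'up'}
```

The real difficulty, of course, is keeping both views consistent and deriving the customer-facing one automatically, which is exactly what current MANO work doesn’t yet pin down.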

Increasing revenue is the last element, and that can in turn be divided into two categories—improving time-to-revenue on current services and offering new services.  “Service agility” (meaning the time from conceptualizing a service to making it available to deploy and then the time from order to deployment) is one of the operator hot buttons.  But again a “service” is more than a cloud- or NFV-hosted element or an SDN data center.  How agile are we going to be if we tie the Road Runner of SDN/NFV to one of those big rocks that keep falling on our coyote friend?

New services are also problematic.  The majority of “new services” people talk about are things like social networking, which are ad-funded.  Total ad spending worldwide is less than the revenues of one big carrier, so even if operators got all of it (which they won’t, because less than half is likely to be even addressable online) it wouldn’t do much.  What operators need is new services that people will pay for, and that means either for-fee content services (like Netflix) or business services that drive major productivity gains and so can justify a nice fee for the service itself.  But equipment vendors settle for saying “Internet traffic is exploding” or touting the “Internet of Everything” and don’t do anything to prepare operators for what for-fee new services might look like.  Huawei’s price-leader approach gives operators at least an assured path to higher profit, so Huawei wins again.

This is a big, complicated industry, but you can take comfort in statistics at the high level.  Right now, operators worldwide spend about 18 cents of every revenue dollar on capex.  We can transform them only by making that 18% number smaller, either by increasing revenue or by reducing the 18% to something like 14% (our hypothetical 20% price reduction from “beating up Huawei”).  Operators would love to do better than that on cost and increase revenue too, but they need solutions and not market platitudes.  They need, as they’ve said in my surveys for 8 years or so, for vendors to step up and support their transformation needs.
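
The arithmetic behind that benchmark is simple enough to spell out:

```python
# The "beat up Huawei" benchmark: a 20% price cut applied to the ~18 cents
# of every revenue dollar that goes to capex.

capex_ratio = 0.18      # capex as a share of each revenue dollar
price_cut = 0.20        # the hypothetical concession squeezed from Huawei

new_ratio = capex_ratio * (1 - price_cut)
print(f"{new_ratio:.3f}")  # 0.144 -- about 14 cents per revenue dollar
```

Any alternative transformation story has to beat that 4-cents-on-the-dollar improvement before it’s worth an operator’s attention.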

Vendors need that too, because our carriers’ 20% pricing push on Huawei is working.  And because Huawei is at least as likely to See the Light in terms of opex and service revenue increases as other network vendors are.  Eight years ago, when operators were signaling their distaste for their vendors’ transformation support, Huawei was nothing in software and nothing in management.  They were simple box-pushers, and now they are becoming not only competitive but dominant in things like mobile infrastructure, where there’s a big software element.  They’re jumping into OSS/BSS (their Fastwire acquisition just this year proves that).  They’re active in NFV.  These guys mean business, and business beyond being that player who gets beaten up for the extra 20% price concession.  Imagine how well they’ll do if their competitors hunker down on legacy technology and vapid positioning.  We may see, and soon.


Coming Soon: An Open Architecture for Orchestration and Management

There have been a number of commitments by network operators to new technologies like the cloud, SDN, and NFV.  Last week, Metaswitch earned its carrier stripes with a win in Europe, one of the first (of many, I’m sure) non-traditional IMS deployments.  Their stuff has been used in at least one NFV PoC. Verizon and AT&T are both committed to the cloud and operators are deploying SDN too.  But I’m sure you agree that all these deployments are islands—no operator has committed to a complete infrastructure refresh based on next-gen technology.

The benefits operators hope for largely center on “service agility” and “operations efficiency”, and yet the “island” nature of these early trials makes it impossible to realize those goals because there just hasn’t been enough of a change in infrastructure to drive agility up or opex down overall.  Truth be told, we didn’t need these revolutions to meet the agility/opex goals; we needed a revolution in management in general, and in particular in that wonderful new thing called “orchestration”.

Many of you have followed my discussions on management and orchestration models, and even engaged in a number of lively dialogs on LinkedIn on one or more of the blogs.  Some have asked whether I’ll be presenting a complete view of my management/orchestration model, something that starts at the top where services start and ends with resources.  People want something that works with the cloud, SDN, NFV and legacy network and IT technology, that does federation among technologies and operators, and that’s compatible with an open-source implementation.

Well, the answer is that I’m going to be publishing a complete orchestration model later this summer.  I’ll be releasing a complete vision based on the two key principles I’ve blogged about—Structured Intelligence and Derived Operations, and it’s based in large part on my ExperiaSphere open-source project, though it expands that scope considerably.  The presentation will be made available as a YouTube video on my channel and as a PDF on SlideShare.  The material will be free, links can be freely distributed for non-commercial attributed purposes, and all the concepts I’ll illustrate are contributed to the public domain for all to use with no royalties or fees.  I’ll be using the ExperiaSphere website to communicate on this material as it’s released, so check there for news.

I want to stress that I’m not starting a project here; I can’t contribute that kind of time.  What I’m doing is making a complete picture of a suitable orchestration-layer architecture available, in fact making it public-domain.  If standards groups want to use it, great.  If somebody wants to launch an open-source project for it, likewise great.  Vendors can implement it or pieces of it if they like, and if they actually conform to the architecture I’ll give them a logo they can use to brand their implementation with.  None of this will cost anything, other than private webinars or consulting that a company elects to do on the model.


That’s a key point.  Some people already want to do a webinar or get a briefing, and as I said I can’t donate that kind of time any longer.  I will make a complete video and slide tutorial (likely two, an hour each) available when I can get it done.  Meantime I want to get the idea exposed where it counts, with the network operators and some key standards bodies.  Therefore I’m going to start by offering service providers who are members of either the TMF or the NFV ISG the opportunity to attend a single mass webinar at no charge.  This will be scheduled in May 2014 at a specific date and time to be announced.  I’m asking that service providers who are interested in signing on contact me by email.  Say that you’re interested in the “SI/DO Model” and please note your membership in the TMF or NFV ISG, your name, company, and position.  I promise not to use this for any purpose other than to contact you for scheduling.  Slots are limited, so I can’t accept more than five people per operator for now, and even that may have to be cut back.  You’ll have to promise not to let others outside your company sit in.

At some point in June I’ll be offering for-fee private webinars to network equipment vendors (and to service providers who want a private presentation), billed as a consulting service and prepaid unless you’re already a CIMI client.  These sessions will be exclusive to the company that engages them but you’ll still have to provide email addresses of the attendees, and you may not invite people outside your company.  If you like you can host these private webinars on a company webinar site rather than mine, but if you record the sessions you must not allow access outside your company without my permission and under no circumstances can the material be used in whole or in part as a part of any commercial or public presentation or activity.  If you’re interested in participating in one of these webinars, contact me at the same email address and I’ll work out a time that’s acceptable to both parties.

At this same point, I’m offering the TMF and NFV ISG and also the ONF the opportunity to host a mass webinar for their members, at no cost.  This will be a shorter high-level introduction designed to open discussion on the application of my proposed model to the bodies’ specific problem sets.

I expect the open public video tutorials and slide decks to be available in August, and these will include feedback I’ve gotten from the operators and standards groups.  Anyone who wants to link to the material can do so as long as they 1) don’t imply any endorsement of or conformance to the model unless I’ve checked their architecture and agreed to it, and 2) don’t use it in any event or venue where a fee is paid for viewing or attendance.  I want this to be an open approach, and so as I’ve said, I’m releasing the architecture into the public domain.  I’m releasing the material with these simple restrictions.

Contact me at the email above if you’re interested, and be sure to let me know whether you’re an operator, a member of the standards/specification groups I’ve noted, or a vendor or consulting firm.  I reserve the right to not admit somebody to a given phase of the presentations/webinars if I’m not sure of where you fit, and if you’re not committed to an open orchestration model don’t bother contacting me because everything in this architecture is going public!


Finding Money in the Clouds

Well, it’s Friday and a good time to piece together some news items from the week, things that by themselves may not be a revolution but that could be combined to signal something important.  An opening point is Oracle’s Industry Connect event, which proves that Oracle does in fact see the communications industry as a vertical of growing importance.  Why?  We also saw Amazon and Google chest-butting on the price of basic IaaS services, bringing what seems likely to be a cut of a third or more in prices.  Why?  Dell suddenly gets into the data center switch and SDN business on a much more aggressive scale.  Why?

Let’s look at “cloud” as being the core of all of this.  Credit Suisse thinks that what we’re seeing with Amazon and Google price cuts is an indication of improved economy of scale, but I think they’re reading the wrong page of their economics text.  What they’re seeing is a lesson in optimized pricing.

The efficiency of a data center rises with scale because there’s more chance that you can hold utilization levels higher.  Pieces of this and that can be fit into a machine somewhere to make more of it billable.  You reach the point of adequate efficiency levels pretty quickly, so there’s really little change in “economy of scale” being represented here by either Amazon or Google.  However, to fill a slot in a data center you need both the slot and the “filling”.  Both Amazon and Google are going to cut their own revenues by cutting prices so they clearly expect to make up that loss in volume.  It’s not that we are running at better scale, folks, it’s that we can’t fill the slots as much as we want.  Otherwise lowering the price is dumb.  Amazon and Google think that there are cloud-ready applications that could be cost-justified at lower prices but not at the old levels.  They think that they’ll grab a lot of these other apps at the new price points.  They also think that they’ll make up for lower IaaS pricing with increased emphasis on platform services features that will augment basic IaaS.
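
The break-even math makes the point.  If price drops by a third, volume has to rise by half just to keep revenue flat, never mind growing it:

```python
# Revenue neutrality after a price cut: new_volume = old_revenue / new_price.
# A cut of one-third therefore demands 50% more volume to break even.

price_cut = 1 / 3
required_volume_growth = 1 / (1 - price_cut) - 1
print(f"{required_volume_growth:.0%}")  # 50%
```

Amazon and Google clearly believe that growth is out there at the new price points, which says more about untapped demand than about economy of scale.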

The cloud is getting real, not by making IaaS the kind of revolution that low-insight articles on the topic suggest it is, but by gradually tuning applications to be more cloud-friendly.  As that happens, the features of the cloud that actually facilitate the shift (the platform services) become the key elements.  IaaS is cheap milk at a convenience store, a loss leader to get you to buy a Twinkie to have with it.

This means that “the cloud” in some form is going to create a new IT reservoir, not one that replaces the current data center.  Not all of the cloud will be incremental of course—SMBs in particular will be likely to shift spending rather than use the cloud to augment productivity.  Still, we are certainly going to see more server and software and data center growth driven by providers of cloud-based services than by traditional IT practices and budgets.  One place that’s very likely to come true in spades is in the communications industry.  Carriers have traditionally low internal rates of return so they can tolerate a price war on IaaS better than current competitors, and carriers also have the possibility of using network functions virtualization to offload functionality from appliances, reducing capital costs.  If they get what they really want from NFV, which is significant improvements in service agility and operations efficiency, they could realize even more gains and justify more deployment.

The data center is where all the real money is in both networking and IT, so it follows that what everyone is chasing is the new data center, which is the easiest to penetrate.  Where is the biggest opportunity for that new data center?  The cloud at a high level, but precisely where in the cloud?  The carrier cloud is the answer, because there aren’t enough Googles and Amazons to drive a huge business by themselves.  Operators will build the majority of incremental cloud data centers, and it could be a lot of data centers being built.

Our model, based on surveys of operators, says that in the US market alone a “basic” level of deployment of carrier cloud to host both services and NFV would require approximately 8,000 incremental data centers, and if the optimum level of NFV effectiveness could be realized the number could climb to over 30,000 data centers.  The midpoint model says that we could add over 100,000 servers from all of this.  Yes, most data centers would be small and have only a few servers—they’d be located at key central offices—but there would still be a lot of new money to be gained.  That’s why the communications vertical is important, why Amazon and Google need to grab more of the TAM now, and why Dell needs to be in the game in earnest.
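
A rough sanity check on those numbers (my own arithmetic, not part of the survey model itself) shows why “small” is the right word for these facilities:

```python
# Back-of-envelope check: midpoint of the basic and optimum data-center
# counts, against the ~100,000-server midpoint estimate.

basic_dcs, optimum_dcs = 8_000, 30_000
midpoint_dcs = (basic_dcs + optimum_dcs) / 2   # 19,000 data centers
servers = 100_000

print(round(servers / midpoint_dcs, 1))  # 5.3 servers per data center, on average
```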

HP is the poster child for cloud-to-come.  They have servers, software, cloud, network equipment, and all the good stuff.  They have an NFV story that is functionally as good as or better than anything anyone else has productized at this point.  Oracle has software that could be combined with COTS to create a similarly good story, but they have to work harder on the software side and on what “platform services” are in order to win.  Even IBM has enormous assets in the software space that could be leveraged into a powerful cloud and NFV position and could address that server/data-center bonanza.  Dell has been a growing influence in servers, but it’s not been sparring one-on-one with HP in software, networking, or the cloud.  I think Dell’s announcement is a step in that direction, an attempt to catch the big boys before it’s too late.

Is Dell doing enough, though?  Are Amazon and Google doing what they need to do?  Is Oracle, even?  The fact is that NFV may be the most important thing in the cloud because it’s a source of demand that a network operator is in a unique position to own.  If operators build optimum NFV they’d have justified a boatload of data centers and an enormous resource pool they could then harness.  They might even have an operations framework uniquely capable of doing efficient provisioning and management of complex services.  Amazon and Google are not (despite Google’s fiber games) going to try to become network operators, so they can’t go to the NFV space.  Oracle, IBM, and even Dell need to have a real NFV strategy and not just a hope of selling infrastructure if they want to counter HP, and even Alcatel-Lucent and Cisco, both of whom have respectable NFV approaches they could leverage.  None of the NFV stories so far are great, and Dell’s announcement didn’t really move the functional ball for Dell in either SDN or NFV—they just announced hosting.  Do a hundred thousand servers, and all the associated platform software and network equipment, sound good to anyone out there?  If so, it’s time to get off your duff.


Service Automation and “Structured Intelligence”

Everyone knows that operators and enterprises want more service automation.  Some classes of business users say that fixing mistakes accounts for half their total network and IT TCO, in fact.  Nobody doubts that you need something to do the automating, meaning that software tools are going to have to take control of lifecycle management.  A more subtle question is to define how these tools know what to do.

Current automation practices, largely focused on software deployment but also used in network automation, are script-based.  Scripting processes duplicate what a human operator would do, and in some cases they’re actually recorded as “macros” while the operator does the work.  The problem with this approach is that it isn’t very flexible; it’s hard to go in and adapt scripts to new conditions, or even to reflect a lot of variable situations such as would arise with widespread use of resource pools and virtualization.

In the carrier world, there’s been some recognition that the “right” approach to service automation is to make it model-based, but even here we have a variation in approaches.  Some like the idea of using a modeling language to describe a network topology, for example, and then having software decode that model and deploy it.  While this may appear attractive on the surface, even this approach has led to problems because of difficulties in knowing what to model.  For example, if you want to describe an application deployment based on a set of software components that exchange information, do you model the exchanges or the network the components expect to run on?  If the former, you may not have the information you need to deploy; if the latter, you may not recognize the dependencies and flows for which SLAs have to be provided.

Another issue with model-based approaches is that there’s data associated with IT and network elements: parameters and so forth.  For service providers, there are also the ever-important operations processes, OSS/BSS.  You need databases, you need processes, and you need automated deployment that works for everything.  How?  I’ve noted in previous blogs that I believe the TM Forum, years ago, hit on the critical insight in this space with what they called the “NGOSS Contract”, which says that the processes associated with service lifecycle management are linked to events through a data model, the contract that created the service.  For those who are TMF members, you can find this in GB942.

The problem is that GB942 hasn’t been implemented much, if at all, and one reason might be that hardly anyone can understand TMF documents.  It’s also not directly applicable to all the issues of service automation, so what I want to do here is to generalize GB942 into a conceptual model that could then be used to visualize automating lifecycle processes.

The essence of GB942 is that a contract defines a service in a commercial sense, so it wouldn’t be an enormous leap of faith to say that it could define the service in the structural sense.  If the resources needed to implement a service were recorded in the contract, along with their relationships, the result would be something that could indeed steer events to the proper processes.  What would have been created could be seen as a kind of dumbed-down version of Artificial Intelligence, which I propose to call structured intelligence.  We’re not making something that can learn like a human, but rather something that represents the result of human insight.  In my SI concept, a data model or structure defines the event-to-process correlations explicitly, and it’s this explicitness that links orchestration, management, and modeling.
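
A toy illustration of that event-to-process steering.  The event and process names here are invented; the only thing the sketch asserts is the GB942-style principle that the contract’s data model, not hard-wired code, decides which process handles each event.

```python
# Illustrative NGOSS-Contract-style steering: the contract record binds
# events to lifecycle processes; the handler just follows the model.

def scale_out(service):
    return f"scaling {service}"

def open_ticket(service):
    return f"ticket opened for {service}"

contract = {
    "service": "biz-vpn-42",
    "event_map": {                 # event -> process binding lives in the model
        "congestion": scale_out,
        "element_failure": open_ticket,
    },
}

def handle(contract, event):
    """Look up the process for an event in the contract and run it."""
    process = contract["event_map"].get(event)
    return process(contract["service"]) if process else None

print(handle(contract, "congestion"))  # scaling biz-vpn-42
```

Changing lifecycle behavior then means editing the contract data, not rewriting handler logic, which is the whole appeal of the approach.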

Structured intelligence is based on the domain notion I blogged about earlier: a collection of elements that cooperate to do something creates a domain, something that has established interfaces and properties and can be viewed from the outside in those terms alone.  SI says that you build up services, applications, and experiences by creating hierarchies of these domains, represented as “objects”.  That creates a model, and when you decide to create what you’ve modeled, you orchestrate the process by decomposing the hierarchy you’ve created.  When I did my first service-layer open-source project (ExperiaSphere) five or six years ago, I called these objects “Experiams”.

At the bottom of the structure are the objects that represent actual resources, either as atomic elements (switches, routers, whatever) or as control APIs through which you can commit systems of elements, like EMS interfaces (ExperiaSphere called these “ControlTalker” Experiams).  From these building blocks you can structure larger cooperative collections until you’ve defined a complete service or experience.  ExperiaSphere showed me that it was relatively easy to build SI based on static models created using software.  I built Experiams using Java, and I called the Java application that created a service/experience a “Service Factory”: if you filled in the template the Factory created when it was instantiated, and sent the completed template back to the Factory, it built the service/experience and filled in all the XML parameters needed to manage the lifecycle of the thing it had built.
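
Here’s a much-simplified sketch of that decomposition idea.  The class and method names are illustrative only, not taken from the actual ExperiaSphere Java code: interior objects decompose into their children, and leaves commit resources through a control API.

```python
# Illustrative Experiam-style hierarchy: deploying the root decomposes the
# model downward until "ControlTalker" leaves commit actual resources.

class Experiam:
    def __init__(self, name, children=None, control=None):
        self.name = name
        self.children = children or []
        self.control = control        # leaf: callable that commits a resource

    def deploy(self):
        if self.control:              # a "ControlTalker" leaf commits directly
            return [self.control(self.name)]
        built = []                    # an interior node decomposes its children
        for child in self.children:
            built += child.deploy()
        return built

def commit(name):                     # stand-in for a real control API call
    return f"committed:{name}"

service = Experiam("vpn", children=[
    Experiam("access", control=commit),
    Experiam("core", children=[Experiam("router-ems", control=commit)]),
])
print(service.deploy())  # ['committed:access', 'committed:router-ems']
```

The hierarchy is the model and the deployment plan at the same time, which is the property that makes the SI approach attractive for orchestration.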

Static models like this aren’t all that bad, according to operators.  Most commercially offered services are in fact “designed” and “assembled” in advance, to be stamped out to order when customers want them.  However, the ExperiaSphere model of SI is software-driven, and less flexible than a data-driven model would be.  In either case there’s a common truth, though: the data/process relationship is explicitly created by orchestration, and that relationship then steers events for lifecycle management.

I think that management, orchestration, DevOps, and even workflow systems are likely to move to the SI model over time, because that model makes process/event/data relationships easy to represent: it defines them explicitly and hierarchically.  Every cooperative system (every branch point in the hierarchy) can define its own interfaces and properties to those above, deriving them from what’s below.  There are a lot of ways of doing this, and we don’t have enough experience to judge which are best overall, but I think some implementation of this approach is where we need to go, and thus likely where a competitive market will take us.
