Alcatel-Lucent’s FP3: Good Evolution but Not Revolution

Alcatel-Lucent did its own ballyhoo this week, with an announcement the company had promised would make the Internet faster.  I’m not big on ballyhoo, and I have to admit that I have mixed feelings on the Alcatel-Lucent announcement.  I want to be fair, and so I want to start with the perspective I bring to the issue of “advancing the performance of the Internet”.

I’m a strategy analyst, someone who surveys and models markets.  My goal isn’t to find out what people want, but rather to find out what’s going to happen, and what’s not.  People want gigabits for nothing, but that’s not going to happen.  We could give people faster broadband today in a technical sense but the decision would fail for financial reasons.  You could argue that anything that reduces the cost of the much-for-nothing goal is an advance, but it’s doubtful that any single development could create a cost revolution, and that is the dilemma Alcatel-Lucent’s announcement poses.  They have made an impressive technical stride that I’m not confident is a significant market stride.

What Alcatel-Lucent announced was a quite revolutionary achievement in special-purpose network semiconductor design and fabrication, its FP3 chip.  This chip, says Alcatel-Lucent, is faster (by 4x), smarter, and greener (50% less power per bit) than ever before, and the company says that it will accelerate the adoption of 100GigE from edge to core.  Certainly the chip can support multiple 100G interfaces or a future 400G interface, but the question of capacity in my view isn’t one of technical performance as much as of financial performance.

Operators we survey are watching 100G Ethernet, to be sure, waiting for the point at which it would be economically justified.  They don’t think they’re at that point.  The problem they have is that revenue per bit is already plummeting.  A chip four times faster presumably implies a network four times higher in capacity.  Our model says that consumers will pay 17% more for 4x speed; operators estimate 20% more on average.  The uptake for premium speed tiers is low, and FCC data shows that broadband users in the US cluster at the low-cost end of the service range.  So how does making the network capable of higher performance change things fundamentally?
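To put numbers on that squeeze, here’s a quick back-of-the-envelope sketch.  The 4x capacity, 17%, and 20% figures are the ones cited above; the rest is simple arithmetic:

```python
# Back-of-envelope: what happens to revenue per bit if capacity
# quadruples but consumers will pay only ~17% more (our model) or
# ~20% more (operator estimates)?

def revenue_per_bit_ratio(capacity_multiple: float, revenue_uplift: float) -> float:
    """Ratio of new revenue-per-bit to today's revenue-per-bit."""
    return (1.0 + revenue_uplift) / capacity_multiple

consumer_view = revenue_per_bit_ratio(4.0, 0.17)
operator_view = revenue_per_bit_ratio(4.0, 0.20)

print(f"Consumer model: revenue/bit falls to {consumer_view:.0%} of today")
print(f"Operator model: revenue/bit falls to {operator_view:.0%} of today")
```

Either way, revenue per bit drops to under a third of its current level, which is why “faster” alone doesn’t move the needle.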

Something for nothing, or something for next-to-nothing, may be appealing but it can’t be delivered.  The thing is, people are willing to pay for stuff; it’s just that they’re not willing to pay for Internet bandwidth.  Apple’s success with apps demonstrates that people will shell out millions for just the convenience of using an app for what they could get from a website.  If there’s a revolution in the market, it has to come from allowing the people who build the networks to participate in this higher-level part of the food chain.  They want to do that; they’ve told me that explicitly in surveys for four years now.  They want to add services to their networks, to add a service layer to their network layer.

Which brings us to “smarter”, the third claim for the FP3.  What exactly does that mean?  Alcatel-Lucent says that smarter means delivery of personalized services and content, massive IPv4/IPv6 scale for the future, and full programmability.  But unless we believe in 100G to the user, the access network will have to be able to do all of that or the user will never see the service, and the FP3 won’t be out there.  We all know that supporting most of the “smart” things is likely an edge role; the power of the Internet was to avoid awareness of individual users or flows deeper inside the network, because that awareness doesn’t scale.  And what exactly is the FP3 programmable to do?  Yes, the number of VPLS and VPRN instances is doubled, along with the number of queues and (almost) the number of routing table entries.  The question is how exactly this creates monetization, the revenue per bit that operators need to push up if they’re to punch capacity up by 400%.

Alcatel-Lucent did offer a couple of ideas on services, one on the general evolution of the service experience and one on the video distribution process.  I agree with the points in both; what I still have a problem with is what role the FP3 plays beyond moving the bits around.  I doubt that Alcatel-Lucent proposes to add customer or service-flow awareness to deeper aggregation products that have the traffic scale to justify 100G.  It doesn’t scale.  Are they proposing some intermediate “not-aware-but-sentient” role for the network?  I’d love to hear about that.

I didn’t hear it in this announcement.  It’s possible that the FP3 could play a role in binding the services of the future to the network of the future, but Alcatel-Lucent doesn’t say that.  It’s possible that Alcatel-Lucent intends to meld its Application Enablement and Open API themes downward into the network and create a multi-layer profit partnership, but they don’t say that either.  The FP3 is specifically faster, specifically greener, and un-specific about how it’s smarter.  And it’s smarts that will revolutionize the Internet; smarts that generate bucks and not just bits.  For a company that has gained router market share because of its success in the mobile and content service layers, it’s disappointing that it would omit a strong TECHNICAL service-layer tie here.  It would be more disappointing if there isn’t one.

Without monetization in a service sense, traffic can’t be profitable even at current prices, and the FP3 presumes a 400% traffic gain.  Long before we reached that, the current market model would collapse into usage-based pricing, and that would limit traffic growth and also the growth of the Internet.  We have to create a healthy ecosystem here, and the FP3 picked up three credible points about that ecosystem: speed, smarts, and power efficiency.  It validated two of the three.

So that’s my dilemma.  I think the engineering is impressive, in fact VERY impressive.  I think it could reduce the cost of high-capacity devices, and just the fact that Alcatel-Lucent announced it may suggest it’s planning to go higher on the router capacity tree.  I just don’t think Alcatel-Lucent has proved that it revolutionizes the Internet, because nothing is going to do that except something that revolutionizes the Internet business model.  The capacity play they’ve made for the FP3 is dangerously close to following Cisco down into the “bandwidth at any cost” abyss that I warned about when Cisco announced its ASR enhancements.  The FP3 shouldn’t have been about speeds and feeds, but about dollars and cents.  The “smarts” point of the FP3 launch is the one that had to be the strongest and that was instead the weakest.  Might Alcatel-Lucent plan to correct that down the line?  Perhaps, in which case I’ll take another look when they announce it.  For now, this is a strong evolution, but it’s not a revolution.


NSN and Demand-Side Networking

The fate of NSN is now even more unclear than usual as the WSJ reports that talks to sell a stake in the venture to private equity firms have failed.  Nokia and Siemens are said to be looking to restructure the deal, but Nokia has said publicly that several options for the venture remain open.  What happened here?  NSN was, and is, one of the larger players in the space.  The problem, I think, is a combination of market pace and vendor inertia, and NSN is hardly the only guilty player.

Networking used to be a supply-side business.  Operators bought gear that created services, and their plans for forward service evolution set infrastructure needs.  People did two- and three-year plans for releasing new voice switch software, for example, because the pace of service evolution was driven by the operators and was totally predictable.  The thing is, while there are plenty of workers in telcos worldwide who probably still think in glacial terms, the market is now driven by people like Apple and Google, who have planning cycles measured in months and not years.  The pace of market change has become more like an avalanche than a glacier.

About four years ago, network operators worldwide awoke to the problem and began to demand support for new initiatives to capture some of the opportunity that the agile handset and OTT players were getting.  This was at first called “transformation” and then “monetization”, and from the first the network vendors balked at the move.  Operators speculated to us that their vendors were afraid a rethinking of the operator business model would delay sales, so the vendors pressed to encourage more spending NOW rather than to build the framework that would have justified MUCH MORE spending later.

The gap between buyer and seller was greatest in the US, where market pressures from OTT and handset players were greatest, but it’s spread to most other markets.  As it did, it posed more problems for vendors.  With the exception of Huawei, the strategic influence of EVERY network equipment vendor fell as buyers became frustrated with their lack of monetization support.  NSN wasn’t the worst in strategic interest results; its wireless credentials have kept it a contender.  It actually improved in the spring survey based on wireless strength, in fact.  But NSN is a conservative company, one that doesn’t understand marketing or the changes that have come about.  It has the assets but doesn’t promote them.

For example, NSN has a “Service Delivery Framework” architecture for the new operator service layer.  It’s as complete as any vendor’s architecture we’ve seen, and it’s positionable and credible.  The problem is that the slides that describe it were, when we got a copy, marked “Confidential”.  We can’t talk about it.  They don’t talk about it.  The reason, we believe, is that NSN is trying to use the framework within a professional services context rather than as a product.  That has resulted in the company’s disengagement from four of the five content monetization deals we’ve seen in detail, simply because it doesn’t have visible assets to get itself an at-bat.

Ericsson has a similar problem, in my view.  It also has a good “SDF” diagram that it doesn’t seem to share in public.  It also has a strong professional services bent, and it wants to run monetization projects rather than sell monetization products as a result.  Thus, it is missing a key opportunity—just like NSN is.

NSN could be a valuable property.  Nokia and Siemens don’t have to sell a stake in it if they can push NSN to take the strategic and marketing steps that it’s capable of taking.  If they don’t do that, then any buffing-up-type “restructuring” isn’t going to help.  Thus, my open letter to both Nokia and Siemens is to make NSN into what it’s capable of being and reap the rewards yourself.  Your alternative isn’t going to be pretty.  With the wireless position NSN has, a service-layer story could make the company a compelling partner.  They have one, but it’s not catching on.

Google, Monetization, and Carrier Clouds

Google looks like it’s facing more anti-trust angst; the FTC is reported to be launching an investigation into Google’s advertising and search business, and former and current CEOs Schmidt and Page have decided they don’t want to appear before a Senate Committee hearing on roughly the same topic.  All of this comes as Google, the largest of the large in terms of OTTs, faces a combination of competition from Apple (bleeding through its iPhone and iPad successes) and increasing difficulties in creating sources of new revenue.

Advertising online is coming under more pressure for a number of reasons.  First, as I’ve noted in past blogs, advertisers are interested in it only to the extent that targeting could reduce their overall costs by reducing “overspray”.  It’s clear that their ideal approach would be to show ads only to those who would then buy based on them.  That has focused online advertising on “interdiction” approaches: get into the track of the buyer between the decision to buy and the execution of that decision.  Search ads can work for this, but the trouble is that companies normally first try to game natural results with optimization strategies.  In any case, it’s clear that the search ad business is mature at this point.  Banner and in-video ads have been problematic from the first; my own research has shown that buyers have almost no recall of these ads in banner or site-pre-roll form and limited recall of even in-video ads.  The latter appears to be caused by the unique ability of buyers to tune out dross while on a computer; run the same videos on a TV and they see the ads fine.  All of this pressures the industry to increase the “value” of its ads, and that can lead to even more problems.

A recent report suggests that the behavioral targeting practices of the industry, for example, are leading participants to a place where they risk becoming “adware” and generating consumer and possibly regulatory backlash.  The irony is that behavioral targeting is a refinement on targeting, which is a refinement on a process whose goal is to REDUCE spending.  Thus, the industry could be said to be eating itself on one hand while risking regulatory wrath on the other.

The simple truth is that ads can’t sponsor everything; they can’t even sponsor very much.  The total global adspend wouldn’t pay the total global capex of providers, and of course online ads won’t capture that total adspend, and providers don’t get much of online ad revenues anyway.  This is why many operators see “monetization” as important.  They have one major asset: a market where people pay for something.  Maybe they don’t pay enough or with the right model (usage versus unlimited, for example), but they pay.  Monetization strategies are strategies to present something that people are willing to pay for.  In Netwatcher in June we offered a content monetization architecture; a mobile one is coming in July.  The goal here is to demonstrate that it’s possible to build a service structure that can earn revenue.  If broadband access is a conduit for delivering paying services that the operator can offer, then it’s OK for it to have a minimal ROI.

Verizon has decided to keep its cloud offerings under the Terremark brand, which I think is a good idea because the cloud and cloud-hosted services fall outside the traditional structure of a telco.  “Operations” in telecom providers means the network, and science and technology is not typically involved in monetization projects.  While internal IT (OSS/BSS) has IT expertise, most operators for now are keeping their internal IT and cloud IT separate until they’re sure that the two can share an infrastructure without creating security or performance issues for either one.

Cloud monetization is the third pillar of operator profit-building (after content and mobile).  Operators here are envisioning a rather convoluted evolution, with service feature technology (the evolution of the stuff hosted today on SDPs), OSS/BSS, and cloud services all being relatively independent at first and then converging over time.  One likely instrument of this convergence is the “feature cloud”.  Managed security services and other services offered from a cloud platform are little different from content or mobile service features hosted on a cloud, and over time the difference will likely disappear.



Signposts to a Video Future

Sometimes I get frustrated by surveys and research because they never seem to make the distinction between things that are correlated and things that are causal.  The example today is some new research on teen mobile behavior.  It lists all the usual things: teens watch the least TV, they use mobile video the most, they use social networking the most, and they SMS instead of calling.  All of this is implied to be the new sweeping change in the market.  As these youth age, their habits become the market.  TV is toast, and so is voice.  Well, I think this is a vast oversimplification.

Babies do things differently from adults too, but just because they have different behavioral norms doesn’t mean they keep cooing and batting at mobiles or sleeping in cribs as they age.  People’s role in life, the social framework they inhabit, sets their basic patterns of behavior.  If you look at “youth-in-transition”, the people who have left school, entered a stable relationship, and so on, what you see is an almost immediate shift in focus away from a hide-from-supervision model.  Nobody is going to yell at you in your own home (well, perhaps your partner).  Yes, it’s true that many still indulge in some avoidance of supervision or surveillance, but it’s not the single driving force for most.  The best example is that I’m unable to find any statistically reliable indication that young people who start a family today are any less likely to have a TV and use traditional multi-channel TV resources than those who started one ten years ago.

The net of this is that youth is different primarily because youth isn’t adult, isn’t personally responsible.  Many of the doomsday scenarios being painted, and many of the radical behavioral shifts being postulated, are simply not going to be as dramatic as most seem to think.

The Hulu story is perhaps a poster child for this.  Hulu was hailed by many (most, in the media) as the harbinger of the new age of video, where television is relegated to homes for the aged and everyone chuckles over “I Love Lucy” reruns because currently created content has moved to a new online form.  But now it’s pretty clear that Hulu is on the block, and the question is why.

One possibility is that it’s not making money.  OTT in-video advertising rates are about 3% of TV commercial rates, and advertisers seem to be more interested in using online targeting to reduce their costs than to engage their prospects better.  The other possibility raised today is that it’s being too successful.  The theory is that the owners of the Hulu JV (Disney, Comcast, News Corp) have a major disagreement on whether the service is undermining paid channelized TV services.

Speaking of tensions of mission, Alcatel-Lucent has announced something that’s designed to take the inherent contradictions out of in-home or hospitality broadband.  Right now the practice is to deploy WiFi in these locations and tap WiFi to offload the wireless network.  Femtocells have always been the alternative; an operator would instead site femtos in the home and in cafes and airports, and these would give operators their offload without creating a need to support devices that can jump onto any hotspot and thus escape operator control.  The theory I’ve been hearing is that femtos would also let operators push back against players like Apple and Google (who are trying to get more control of the mobile market, and possibly even of mobile services) by limiting subsidies to devices that are 3/4G only.  If that’s the idea, it’s dead on arrival in my view.  There’s no way that operators can make appliance guys, particularly Apple, blink on this one.  But there is a way to make the operator’s services more valuable by creating a 3/4G conduit that works even in the home.  The trick is to figure out how to make all that happen without making the user pay for femto airtime, and so far that’s been an issue for operators.  The Alcatel-Lucent/Broadcom reference design for femtos is a good step for operators if they’re willing to accept the framework in which femtos can succeed—they’re a captive-but-free alternative to WiFi.

Some sort of offload is important for wireless video, obviously, particularly given that tablets are already according to some research consuming more video per device than PCs, on the average.  Having both options available is smart, providing the buyers are willing to take the consequences either way.



Appliance Wars May Be Restarting!

Today’s market is rife with rumors about new mobile devices: a new model of the iPhone and the first true Amazon Kindle tablet.  The aficionados will revel over details like processor and resolution, but the big point here is that the future of mobile services is getting decided by mobile devices and not by mobile operators or infrastructure.

Apple’s new iPhone is likely to have the usual upgrades in speed and resolution and a bigger and better camera, but the big question is whether it will have the SIM-less design that some rumors have identified.  That would be important because it would suggest that Apple may be working to break away from the traditional way that mobile phones are sold—by operators in a subsidy relationship that demands early exclusivity.  Given that this iPhone would be a new model of an existing device and not a brand-new product, there’s probably no risk of exclusivity pressure at this point, so a SIM-less model would likely indicate a proactive plan on Apple’s part, a wide-ranging move to become a truly independent mobile provider that might extend into becoming an MVNO.

Amazon’s Kindle tablet could be one reason why Apple would look at a new business model for its mobile devices.  Ebook players like Amazon can field low-ball-priced appliances because these gadgets can lock users to proprietary book formats and guarantee future sales to make up any loss-leader pricing on the device.  Barnes & Noble already has a Color Nook that’s a rather good and extremely cheap Android tablet, but B&N doesn’t have the marketing clout that Amazon has.  However, if Amazon does field a Kindle tablet, then B&N will certainly ramp up its own Color Nook tablet support.  The company has about 300 apps now; there are tens of thousands available and the Color Nook is still a couple of Android versions behind the market.

A battle for the subsidized reader-tablet market between Amazon and B&N would almost certainly force Apple to look at that same approach.  The problem is that while iTunes and the App Store could have been used as the revenue engine for a low-priced appliance, Apple didn’t work that way when the i-stuff was launched, and they’re probably reluctant to surrender the margins now unless they can get them back elsewhere.  That’s why I’ve been speculating that the iCloud we see is a shadow of the iCloud to come.  With some new service revenue stream, Apple could then offer lower-priced appliances and still keep the revenue line growing.

All of this demonstrates the issues that network operators face, of course.  The subsidy relationship between phones and services has been a way for them to influence the ever-more-powerful Apple and Google, and an independent subsidy model would trash that strategy.  More competition in the space, particularly competition on services and content, puts more pressure on operator service-layer plans, too.

In other news, all of the recent hacking has generated a rise in corporate tension on security, and in our spring survey of enterprises we’d already found security a hot-button issue.  Juniper has been working to promote its network-based security model in this atmosphere, something that we believe should be done aggressively given that enterprises believe that “only the network can secure the cloud”.  One partnership in the cloud space illustrates the potential for Juniper’s Pulse security client as a point-of-interaction security outpost, and I’m glad to see this kind of positioning emerging because it’s been under-played by Juniper up to this point.  Even now, I’d still like to see more aggressive positioning.


Caps, Content, and Clouds

Verizon is now rumored to be preparing to move to tiered pricing for mobile data in early July, and there’s a growing conviction among operators that everyone will be charging for usage on mobile networks by the end of this year and that everyone will be charging for wireline usage by the end of 2012.  The move to usage pricing isn’t being heralded as a victory; operators would prefer to get their profits from other sources.  The problem is that they’re becoming convinced that they cannot roll out a monetization strategy in time to sustain infrastructure investment without a bandwidth-cost kicker.

Mobile data use is growing by 50% per year worldwide, and that’s creating not only a backhaul cost but also a multiplication of towers as cells become congested with more users and more traffic per user.  The whole metro infrastructure is stressed by the mobile traffic load, and of course wireline streaming video traffic is also contributing.  The rumors that Google will be offering a YouTube-branded streaming multi-channel “cable replacement” service aren’t making operators happy either.  In fact, it’s streaming competition for TV that’s generating the conviction that they need usage pricing even more than concerns about the pace of their monetization progress.  Only usage pricing can stem the exploitation of incrementally free bits by OTT players, as operators see the situation.

The Google offering, if it’s real, will be a test of the complex relationship between content cost, content cost recovery, and content quality.  Any streaming service that purports to replace channelized TV will have to offer viewers something that’s equivalent in quality.  For Google, whose service is rumored to involve only 10 to 20 channels, the issue is even more significant because the traditional solution to things like summer reruns has been to let customers flee to cable channels.  What happens when there aren’t any, or what happens if the prime networks are never there?  You can’t just stream old episodes because they’re already available online through a variety of sources, from Amazon to Netflix and from many networks themselves, and in on-demand form to boot.

The pricing model relates to content quality.  If you have first-run movies and original series like HBO or some other pay-for channel, you can charge the consumer directly and forego advertising.  If you want to draw on network TV or older content you’ll likely need to get revenue from advertising at least as a supplement, and the challenge there is that the per-impression rate for ads in streaming video is about a thirtieth that of a TV commercial.  And whether you produce content or license it, you have to pay for it out of the total of what customers and advertisers pay you.
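It’s easy to see how brutal that arithmetic is.  Here’s a minimal sketch of the funding gap; the 1/30 per-impression ratio is the one cited above, but the $25 TV CPM and the content budget are purely illustrative assumptions of mine:

```python
# Rough illustration of the streaming-ad funding gap.
# The 1/30 per-impression ratio is from the text; the TV CPM and
# budget figures are hypothetical, for illustration only.

TV_CPM = 25.0                # assumed dollars per 1000 TV ad impressions
STREAM_CPM = TV_CPM / 30.0   # per the ~1/30 streaming ratio cited above

def impressions_needed(budget: float, cpm: float) -> float:
    """Thousands of impressions needed to cover a content budget."""
    return budget / cpm

budget = 1_000_000.0  # hypothetical content cost to recover
tv = impressions_needed(budget, TV_CPM)
stream = impressions_needed(budget, STREAM_CPM)
print(f"TV: {tv:,.0f}k impressions; streaming: {stream:,.0f}k ({stream / tv:.0f}x)")
```

Whatever CPM you assume, the ratio is the point: a streaming service needs roughly thirty times the ad impressions to fund the same content budget.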

Video in TV and movie form doesn’t always work, and these models of content consumption have been with us for decades.  We’ll have to see how a streaming model can work, which Google may demonstrate, either in a positive or negative way.  We’ll also see how this will relate to usage pricing.

Moving on into the cloud, BT has made some interesting comments that suggest that it at least isn’t necessarily seeing itself as a cloud competitor, but rather perhaps as a cloud orchestrator.  That view isn’t broadly held among the larger operators we’ve surveyed; most believe that they will be deploying their own cloud infrastructure and offering a range of services from IaaS through hosted managed security services, all living on the cloud.  What BT is suggesting is that they might have some such infrastructure and services, but that they see themselves as a kind of prime cloud contractor, integrating multiple cloud providers into a single seamless service that then links to the enterprise.  This single service would also likely include network transport and connectivity, meaning likely a VPN.

What may be behind this is the fact that most network operators have a relatively narrow geographic sweet spot.  BT is strongest in the UK, obviously, just as Verizon is strongest in the northeastern US.  While both operators have international service trunks, neither is likely to want to maintain data center facilities globally, nor are they likely to want to backhaul all their global cloud customers to their home regions.  For some low-margin services, like IaaS, they’d be better off playing the federated role BT advocates.  For higher-margin services they might actually want to federate hosting rights from a partner in various geographies.  That’s what’s going to take some time to work out, because international standards in this area have failed to keep pace with the market.



Broadband Twins Not So Alike?

Good news for providers of wireline broadband services and their equipment vendors: mobile isn’t going to destroy wireline!  Actually, nobody who takes the time to think about the reality of mobile infrastructure believed that to begin with, but the comment does offer some opportunity to look at two aspects of broadband evolution.  “Two aspects” is the relevant point because it’s become clear we have two interdependent broadband futures and not just one.

Mobile services are an RF-based extension to metro infrastructure.  You pipe bits to a tower and beam them into space, and because of fundamental physics you can’t push infinite bits over RF.  That means you need multiple towers to serve a large population of users, and that even then a large concentration of users in a geography will limit what you can deliver to them.  An LTE cell might support 100 Mbps of aggregate bandwidth, which means 20 streaming HD videos, even at fairly modest bit rates, would totally consume the capacity.  To supply more, it’s more antennas and more backhaul.  To illustrate the point, if you wanted to offer the hypothetical 100 Mbps per home capacity using wireless, you’d have to give every user an antenna and a backhaul, and the result would look suspiciously like wireline broadband with a WiFi in-home network.
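The arithmetic in that paragraph is worth making explicit.  A minimal sketch, assuming a 5 Mbps HD stream (my assumption, consistent with the 20-stream count; the 100 Mbps cell figure is from the text):

```python
# How many concurrent HD streams fit in one LTE cell, and how much
# cell capacity it takes to match a wireline-like 100 Mbps per home.

CELL_CAPACITY_MBPS = 100.0   # aggregate LTE cell throughput (from text)
HD_STREAM_MBPS = 5.0         # assumed per-stream HD bitrate

streams_per_cell = CELL_CAPACITY_MBPS // HD_STREAM_MBPS
print(f"Concurrent HD streams per cell: {streams_per_cell:.0f}")  # 20

# To guarantee every home 100 Mbps over RF, each home effectively needs
# a dedicated cell's worth of capacity -- an antenna and a backhaul per
# user, i.e., wireline broadband with a radio tail.
homes = 1000
cells_needed = homes * (100.0 / CELL_CAPACITY_MBPS)
print(f"Dedicated 100 Mbps for {homes} homes needs ~{cells_needed:.0f} cell-equivalents")
```

The design choice here is deliberate: the point isn’t precision, it’s that RF capacity is shared, so per-user guarantees scale linearly in antennas and backhaul.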

So we aren’t going to replace our streaming 3D TV wireline feeds with mobile.  But obviously we’re not going to drag fiber optics behind us as we commute or go out to dinner either.  Fortunately, most users report a sharp division in what they do with mobile broadband and what they do with wireline.  We don’t expect to watch feature films on smartphones, and we don’t think that getting directions to a store or restaurant before we leave for the place is exactly progress either.  Microsoft Streets and Trips revisited?  Anyway, communication fits into the context of living, and the sharpest contrast in how we live is the contrast between the at-home and away-from-home behavior models.  Since teens practice supervision avoidance by staying away, this contrast is also the largest reason why youth behavior online differs from adult behavior.

From an infrastructure perspective, though, you can see that delivering mobile bits and delivering wireline bits is the same up to the last mile, or at least it could be.  The migration to LTE is important for mobility because it’s a migration to an architecture that provides IP dialtone rather than one that pushes online bits into a voice channel.  True LTE migration is designed to equip the mobile user to be a user of the Internet, drawing on the Internet service range.  That, of course, is the rub in terms of revenues.  The revenue per bit on internet dialtone in the wireline world is very low.  If that same low-revenue model translates into the mobile world, operators lose much of their incentive to offer mobile broadband in the first place.  Or they are incented to apply usage pricing to gain back what they’d lost.  The service world is where operators hope to take up the slack, the place where they avoid per-bit charging because they’ve successfully become purveyors of some of those things in what I just called the “Internet service range”.

The challenge is that just what might be in that range is very difficult to predict.  It’s clear that some content, meaning video, is going to be delivered in mobile form, particularly because of the tablet revolution.  People are going to increasingly view “TV” on tablets, substituting personal viewing for collective TV-watching because TV lacks the compelling star power it once had, the power that got the family all seated in one place at one time.  Now they all want to watch different things because programming has had to diversify to stay relevant.  Tablets might well become the personal viewing platform for the home, particularly as summer reruns push viewers out of traditional TV into VoD, where finding the best thing to watch as a family may present insurmountable obstacles.   Might that in turn create more interest in tablets as a platform when a viewer is out-of-home?  Sure thing.  But there’s more to mobile than just mobile content.

We already know that mobile advertising is different from wireline advertising, even for the same websites.  People on the move who use search are more likely to be looking for an immediate purchase.  People on the move are more offended by irrelevant ads.  People sitting in a social setting but away from home might be more amenable to ads, though.  LBS is important; mobile messaging and social features are important for roving users.  We’ve only now started to see how much might be done in the way of leveraging mobility and behavior.  Telcos want to cash in, but so do Apple and Google.  It’s the next arena in the arms race.

The Reality Quadrilateral of Bridgewater, Level 3, Ciena, and RIM

We have an interesting potpourri of tech events today, and in combination they might be telling us something about the business future of tech in general and the networking space in particular.  Let’s look at Amdocs’ acquisition of Bridgewater, Level 3’s expanded content services, Ciena’s financial trends, and RIM’s disaster.

Wasn’t it only this week that Ericsson decided to buy Telcordia?  Now we have the other giant in the OSS/BSS space buying policy-management player Bridgewater.  On the surface, just like with Telcordia, this seems like one of those enormous yawns.  After all, Bridgewater makes policy stuff for mobile/IMS applications and we all know what Amdocs does.  But what makes this not only interesting but potentially earth-shaking is that OSS/BSS activities are SERVICE MANAGEMENT and Bridgewater makes components for SERVICE LOGIC.  I’ve been saying for some time that in the new network we need to combine these functions in some way, and I noted with the Ericsson/Telcordia deal that Ericsson just might have its eye on the converged service management space.  Operators are telling me that they need a single conception of the service layer that integrates logic and operations functions seamlessly.  They wanted the network vendors to provide it through a service-layer architecture.  The OSS/BSS guys may now have their eye on the prize.  Amdocs may also have its eye on becoming the next company to be picked up by a big network vendor.

Then we have Level 3.  The company has been gradually morphing itself from being an Internet backbone play to being a CDN play.  Backbone revenue per bit is almost at the vanishing point, and that was the driver for the CDN role.  Now the CDN role at Level 3 is changing into a broader role of content monetization support.  With the network operators all targeting content monetization, you might think Level 3 is getting ready to be their partner.  That would be a bad move for the operators in our view.  Content monetization is a low-margin business and you can’t afford to be sharing the wealth there.  It’s also true that content monetization and overall service personalization are converging, which means that operators would need to share more and more of their other service data with a monetization partner to keep up with the market.  No, what’s likely happening here is simply a Level 3 reaction to an onrush of operator interest in “full monetization”, which would threaten Level 3’s CDN business if the company didn’t augment its own features to match those of the operators.  Thus, we’re seeing a competitive move that validates our thesis that content monetization is not only coming, it’s rushing.

So where does this leave us with Ciena?  Well, here’s a company that any way you look at it is just a bit-pusher.  The lower layers of the network can’t be convincingly linked to personalization—they can’t afford to be made aware of users and activities or they won’t scale and contain transport costs.  That means that they’re on the road to even deeper commoditization, and that’s problematic.  It’s particularly so for Ciena because they told the Street they planned to increase margins significantly over Street estimates.  OK, they didn’t offer an aggressive or firm timeline, but they’re making a promise that they cannot possibly keep unless they plan to buy or build their way out of the optical layer.  So where would they go?  They couldn’t expect to climb up to Ethernet and IP because, first, there are a million big incumbents there, and second, that space has its own margin/features problem.

At least they’ll have company from RIM.  Here’s the classic example of how a company can stick its head in the sand and accomplish nothing other than perhaps getting infested with ants or something.  RIM had an absolute lock on business mobility because they had a lock on the handset/appliance space for businesses with Blackberry.  They dawdled and fiddled and let their edge slip, then they watched Apple taking market share, Android coming on even stronger in unit volume terms, and Microsoft and HP trying to ignite their own business appliance programs.  And RIM countered with a shortsighted, ineffective, uninteresting tablet and gave it an insipid launch.  So what do they do to recover?  Nothing.  It’s too late.



IBM at 100: Lessons from a Milestone Birthday

IBM, a company now known for computers but once in the more pedestrian business of time-clocks and scales, turns one hundred years old today.  If you consider this a moment you’ll see that makes IBM perhaps the longest-standing tech success in all of history.  Considering the tumult that it’s undergone (you don’t switch from scales to supercomputers without generating some stress) its success is even more remarkable.  In a time when many tech companies seem to be floundering, it’s worth a moment’s consideration of how IBM managed to do what it did.  I’ve been involved with IBM in some way since 1964, and I’ve seen almost half its corporate life and much more than half of its life as a computer giant, so let me offer some perspectives.  They might help others who’d like to see their own hundredth birthday.

First, IBM was never hidebound.  Early in its computer age, IBM released a mainframe based on what was at the time the state of the art.  Within a year a revolutionary new option was discovered.  So did IBM feed off the revenues of its just-released product as long as it could and then bring out the new?  No, it brought out the new immediately and wrote off all it had invested in that older model.

Second, IBM was always intensely aware of the buyer’s side of the value proposition.  At every step in their sales and marketing, they supported the decision process from building the business case to installing the technology.  This total-value-chain marketing meant that IBM had a truly unique multi-layered engagement model with the customer, a model that they’ve sustained for fifty years or more.  A model that I’d argue no one else has ever successfully copied.

Finally, IBM has always known the value of and the need for ECOSYSTEMIC technology.  Computers and networks and appliances and other tech elements are not isolated boxes, they’re components of something.  A system of devices needs something to systematize it.

Look in contrast at IBM’s competitors.  Sperry Rand, then Unisys, isn’t a computer vendor any more.  NCR isn’t either.  Digital Equipment Corporation was bought by a PC company, Compaq, who was then bought by HP, the only one of the early players still sharing the computing stage with IBM.  But everything that IBM has done well, HP has done less well.  They have far weaker strategic relationships with their customers, they have made what seems an endless series of critical blunders in tech evolution—WebOS might be the latest—and they’ve mistaken the adoption of a catchy slogan or two as the creation of an architecture and an ecosystem.

You could argue that Cisco, in the network space, is now at a crossroads, a point where it decides whether to be HP or IBM.  Both HP and Cisco have expanded their portfolios without having a strong ecosystemic tie to link their broader line with broader value.  Both HP and Cisco have hunkered down on low-margin products that are becoming lower-margin all the time, just because they’re products they’ve become known for.  Both HP and Cisco had charismatic management that everyone knew, but that built sales and company bulk without creating a longer-term model for success.  So which road will you take, Cisco?  Today would be a nice symbolic day to make that choice.



How Will Ericsson/Telcordia Change the OSS/BSS Space?

There was probably joy in finance-land when Ericsson made their move to acquire Telcordia.  The company, formerly Bell Communications Research or Bellcore, was at the same time the “labs” of the RBOCs and the foundation for the support and evolution of the classic vision of OSS/BSS.  For years it’s struggled with its conservative roots in a market where those old, classic visions were increasingly irrelevant.  It was picked up by a private equity player who clearly saw OSS/BSS as more exciting than most of us do, and these guys are probably burning incense right now to acknowledge what they could well be seeing as divine intervention in getting them out of Telcordia alive.  They should be.

So does this mean Ericsson was dumb, and that this is simply a step on the slide of OSS/BSS into strategic irrelevancy?  The answer to both these questions is “Maybe, but then again, maybe not!”

It’s true that there is little in all of networking as boring as an OSS/BSS.  It’s true that OSS/BSS gives glaciers a run for their money in terms of inertia.  It’s true that the OSS/BSS beat is where reporters go if they don’t believe in hell.  But it’s also true that OSS/BSS is the business heart of service providers worldwide.  It’s like demand deposit accounting for banks; you have to have it or you’re not a bank any more.  So while the evolution of the OSS/BSS process has been agonizing, it’s probably still going to happen.  Which is what Ericsson MIGHT see here.

Ericsson more than any other “network vendor” doesn’t want to be a network vendor, they want to be a professional services company.  The task of upgrading back office telecom processes is certainly a major professional services opportunity, and Telcordia is certainly a repository of expertise in that area, a repository with a built-in fan base among the OSS/BSS types at least.  Furthermore, it’s a lot easier to spin a profitable professional services deal for OSS/BSS migration if you have the software components of the new systems.  Any product is good because it can be sold multiple times; services are always one-off.  By blending the two Ericsson can make good deals on OSS/BSS migration for the customer and still have a lot to carry to the bank.

So if this is a good deal, then how will it impact the market dynamic?  Well, you can divide the Ericsson competitors into two classes, those who can only spell “OSS/BSS” and those who actually have a practice in that area.  In the former category you have Cisco and Juniper, and in the latter you have Alcatel-Lucent and NSN.  The move impacts these groups differently.

For Alcatel-Lucent and NSN, this spells very significant competition in back-office integration services.  Alcatel-Lucent, who has been more services-oriented in this space all along, may now find itself looking for some M&A in the OSS/BSS space, and right now they really don’t need the distraction of absorbing something.  NSN has decent OSS/BSS elements and credentials, so its risk is competition—not so much from Ericsson (who may be the only company in the space that NSN can stand against in marketing excitement) but because the new market dynamic may require positioning and excitement, something NSN has had problems with from the first.  And of course with Nokia rumored to be wanting to sell off its NSN stake, this isn’t a good time for singing.

For Cisco and Juniper the problem is that neither company wants to think about OSS/BSS at all.  Cisco has always considered it a bastion of the Evil Empire of Telephony, an interesting factoid given that Cisco more than any other company might have been the genesis of the TMF, the Church of OSS/BSS.  Juniper flirted with the evolution of management in its sponsorship of the IPsphere Forum, which was finally absorbed into the TMF, but Juniper has been steadily disengaging from all that activity and has no relevant position in the space at all.  Both companies could now find that their occasional blown kiss in the OSS/BSS direction won’t keep operators happy.