Can You Walk “Juniper’s Path to SDN?”

Juniper did an SDN virtual event yesterday whose title intrigued me: “The Path to SDN”.  Since Juniper is one of the vendors whose SDN strategy hasn’t yet been articulated, I decided to attend the event to see if I could get a clue.  In a lot of ways, the event ended up mirroring Juniper’s positioning overall—a mixture of insight and missed opportunity.

The opening was given by an industry analyst who cited survey data to set up the position that the world of virtualization and the cloud demands things legacy networks don’t offer.  I disagreed with a lot of the data (analyst surveys are like religions; they never agree) but my own data supports the same conclusion, so there’s no real foul here.  Juniper’s view of what an SDN had to include was also consistent with my own view of reality; they showed a functional progression from application-centric interfaces at the top to hardware at the bottom.  You could easily map it to my “Cloudifier”, “SDN Central” and “Topologizer” layers.  The talk even put OpenFlow in the correct context: a means of communicating between two of these layers, but far from a complete statement of SDN functionality.

So this is the part where a complete statement could—nay, should—have been made.  We can map legacy networking to the same layers as Juniper used for SDN, and then show how each layer evolves, right?  That’s not how it went.  Juniper focused on the assertion that while virtual resources in cloud computing generally meant multi-tenancy, virtual resources in the network meant AGGREGATE resources.  Virtualization that makes a bunch of network boxes look like a single box is the path to SDN.  It’s here that I have to part ways with the sense of the talk, because a statement that different from mainstream thought on virtualization demands a justification, and Juniper didn’t provide it.

In fairness, you can make a case for that statement if you have defined a final SDN state, a general strategy to evolve to it, and then addressed some of the specific issues.  You can, for example, say that if virtual multi-tenant networks are the future, then anything that abstracts network operation away from device specificity is going to facilitate mapping the network to multiple virtual tenancies.  Scale simplification alone would make that true, and I could throw out a couple of other good arguments besides.  But it’s not my job to make them; this is Juniper’s strategy and they needed (and still need) to make sense of it in the real world.  It could have been done, which is why it’s frustrating that it wasn’t.

Juniper is one of the most innovative companies in terms of hardware design.  Their gear has a good, solid reputation.  Their positioning has always been muddy and dull, and I think the primary reason is that as soon as somebody draws a chart that has a switch or router in it, the presentation is sucked into being about a box.  All of the value of the network universe somehow has to become the property set of this box, and thus the story becomes the Story of the Box and not, in this case, the Story of SDN.  Shed the box-centricity and this could have been one of the better SDN pitches.

There was an online poll taken at the end, one that offered attendees four choices representing their take-aways.  They ranged from “I think SDNs have promise but I don’t understand them” to “I think Juniper has a good grasp of SDN.”  Inexplicably, the results of the poll were shown to all, and the Juniper-favorable choice was the least picked.  That’s sad, but it illustrates that if you’re going to do a public event you have to understand the public’s views and needs…and not just the Story of the Box.

There is yet to be a major network equipment vendor with a strong believable story of SDN evolution, and that’s both an opportunity and a problem.  Competitors who need such a story, including in particular Alcatel-Lucent and Ericsson, still have a clean slate in positioning themselves and thus have an opportunity to tell the real story.  Juniper can still do that too, but every time you come to the plate and strike out it makes it harder to be confident on the next at-bat.  And, with buyers in their fall technology planning cycle, we’re rather late in the game now; not too many at-bats left for anyone.

 

Seeking Content Sanity

Despite all of the recent interest in cloud computing, the darling of network operator monetization in dollar terms is still content.  In addition, content represents the largest risk to the operators, particularly to the cable MSOs who provide more than half the Internet service in the US.  OTT video is both a target of opportunity for channelized providers (TV Everywhere and VoD over IP) and a source of competition (Amazon, Hulu, Netflix).

Delivery of content isn’t the only problem.  Nobody who watches channelized TV can help but realize that big-budget content production is becoming a thing of the past.  Networks are producing more and more “reality TV” and re-marketing their own shows rather than developing the kind of content that’s traditionally been the magnet drawing people to TV.  Absent a fresh and continuous source of new material, there is real doubt about whether content can be profitable to anyone.  But the big near-term risk is to channelized traditional TV, which sheds viewers to “retrospective OTT VoD” material like Hulu or Netflix when the ever-shortening fall and spring seasons of new material end.  With less interesting stuff even in prime-season programming, some of these viewers may never come back, and that only shrinks the pie available to produce material that might keep the remaining viewers happy.  Negative feedback is a bad thing.

Content owners are seeing their power increase rapidly.  Epix cut a non-exclusive deal with Netflix this time around, and quickly did a deal with Amazon Prime.  Most content providers are now refusing exclusivity, and that increases the total store of content available OTT and also reduces the ability of OTT players to differentiate based on what they offer.  Price is all that’s left, and even OTT players have costs they have to cover.

Regulators, meanwhile, are waffling in the voter winds.  Congress is considering how OTT video should be regulated, having already decided that some rules, like closed captioning, should apply equally.  FCC Chairman Genachowski is wringing his hands over the issue of usage pricing, which he feels could be “abused,” but he is also concerned that the OTTs may be forcing operators into a “stop-investing-in-the-network” corner.

At the root of our problem is the inability to recognize that there IS a root; two, in fact.  To make consumer content work you need delivery capacity and content to deliver.  Our current free-market model is successfully compromising both of these things, and that’s in no small part because the lack of settlement for traffic handling on the Internet creates an artificially reduced cost of OTT delivery for third parties while increasing costs for operators.  This artificial cost advantage has created too much OTT competition and too much traffic, compromising both content production and delivery.

My view is that regulators can forget any rational Internet service model unless they’re prepared to mandate a rational business model.  That means putting settlement on the Internet, to provide a mechanism for money to flow from OTT content source to consumer just as traffic does.  Might that create higher pricing or fewer OTT competitors?  Sure, but if we force ISPs to raise their prices to charge the consumer for delivery, how are we saving consumers?  And having money flow track investment need is a better way to ensure that we actually have both content and capacity where we need it.

 

Success and Successors in Networking

John Chambers has been an icon in networking for decades, but at the same time the Street (and everyone else who can count years) realizes that at some point there will have to be a new Cisco CEO.  Just who that might be and when it might happen has been, at times, a major distraction for Cisco and a major concern for the Street.  One reason is that investors have long believed that Cisco is too big in terms of product scope.  Networking has too many commoditizing sectors and the new stuff with margin potential would return more to investors were it split out of the glacial residue.  Chambers is widely believed to oppose a break-up, so any such move would have to come when he’s gone.

It’s not at all clear that’s going to happen any time soon.  Chambers suggests two to four years, which would be enough time for Cisco to solidify its position in some key emerging-technology spaces via acquisitions (Chambers mentions this) and also time to establish Cisco’s position relative to software-defined networking and the union of network and cloud.  This is something Chambers has NOT talked about, but it’s something I think is really a key step not only for Cisco but for other network vendors as well.

For a bunch of reasons, networking is changing to a kind of inside/outside model that’s best visualized as a cloud-and-agent structure.  A massive cloud, containing all of the information resources and processing power, is linked by extraordinarily high-capacity connections.  This cloud extends all the way to the metro area, and in dense geographies all the way out to the central offices.  At the edge of the cloud is a rim of hosts that can run agent processes representing users and their devices.  When you want to do something you contact your agent, who gets it done by drawing on intra-cloud facilities and returns the result to you.  That means that the cloud is served by a thin veneer of access infrastructure.
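To make the shape of that structure concrete, here’s a minimal sketch, in Python, of the cloud-and-agent pattern as described: an agent process at the cloud’s rim represents a user, fans the real work out to intra-cloud services over high-capacity interior links, and returns only a compact result across the thin access veneer.  The service names and classes are purely illustrative assumptions, not anyone’s actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical intra-cloud services the agent can draw on.
def search_service(query):
    return f"results for '{query}'"

def personalization_service(user_id):
    return f"preferences for user {user_id}"

class EdgeAgent:
    """Runs on a host at the rim of the cloud, one instance per user or device."""

    def __init__(self, user_id):
        self.user_id = user_id

    def handle(self, request):
        # The heavy lifting happens inside the cloud, in parallel, over
        # high-capacity interior links...
        with ThreadPoolExecutor() as pool:
            answer = pool.submit(search_service, request)
            prefs = pool.submit(personalization_service, self.user_id)
            # ...and only the assembled result crosses the thin access veneer.
            return {"answer": answer.result(), "tailored_by": prefs.result()}

if __name__ == "__main__":
    agent = EdgeAgent(user_id="u-42")
    print(agent.handle("nearby coffee"))
```

The point of the pattern is that the access connection carries only the request and the compact reply; all of the bandwidth-heavy work stays inside the cloud.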

In this model, you quickly convert a hierarchical IP network into a simple aggregation network, and whatever you use to build that network, the profit margins are lower because the differentiation is much lower.  I’m of the view that this shift is inevitable, created by mobile services and consumerism.  The technology manifestation of the trend will be toward what’s called “SDN”, but SDN is simply the path of adapting network technology to what’s a completely new mission.  However, because SDN is that path, a vendor’s SDN position is the most important single thing in determining whether that vendor will stand or fall in the future.  That’s why I want to look at these positions closely as they emerge, which (as I noted earlier this month) is likely to happen starting in October and extending through the 4th quarter.

All vendors are presented with a new balance of opportunity and risk.  In general, the opportunities are greatest where a vendor has service-layer assets, because making these assets compatible with our giant-cloud delivery model will be the central goal of all operators for the balance of this decade.  The risks will be greatest where the vendor has a substantial exposure to traditional routing and, to a lesser degree, switching.  Strong data center credentials will be an asset in cloud-building, and strong optical credentials an asset in building interior paths between data centers.  Access technology, especially an enlightened backhaul and IP roaming strategy, will be a strong asset too.

Among Cisco’s competitors, I think Ericsson has the best objective current model.  They’re building up their OSS/BSS business, which if played right could give them a service-layer position.  They have optical, they have access, and they have been quietly compelling in their SDN positioning.  It’s the quiet part that puts them at risk; they need to sing a whole lot louder and better or they’ll have the best strategy nobody ever heard of.  Alcatel-Lucent might be at the most risk, because they still haven’t taken an effective SDN stance and they have such a broad product family that it’s hard to win somewhere without losing elsewhere.  Juniper has lots of piece-part assets but has never been known to put this sort of thing together to create a strategy.  Huawei tends to avoid leading into new issues, so they’ll likely wait for the trends to mature.  In any event, as the industry’s price leader, they have nowhere to go but up.

 

Red Hat Must Swing for the Clouds

Red Hat announced its numbers, and as one Street report said, “the streak is broken”.  The company was light on revenue and net income, considerably below both the average of its performance over time and its prior year.  The result was that the Street knocked the stock down after hours, but the Street’s judgment may in this case be hasty and incorrect.

Known (or maybe even renowned) for its open-source-professional-services business model based on Linux, Red Hat has been a poster child for the open-source evolution of platform software, a trend that’s surely not on the wane.  Most recently, its JBoss application server has been looking like a very viable alternative to expensive platforms like IBM’s WebSphere, and that’s especially true given that IBM’s strategic marketing has been weakening, which makes it harder to push IBM’s platforms to new customers.  It may be that we have a bit of a market timing glitch that will pass.

But there’s a problem with the “it’ll all come out in the wash” view, and that problem is the cloud.  Today the cloud is the most-recognized technology driver of change in the market.  Red Hat actually has a very decent set of cloud-related components, including of course the Red Hat Cloud and OpenShift PaaS framework and the CloudForms hybrid strategy.  The problem is that its presentation of its cloud capability is confusing, and our survey base reports that it’s not even promoted well by Red Hat sales personnel.  Compared to players like HP, IBM, and Microsoft, Red Hat is a cloud wannabe.  Their strategic influence score is HALF that of these three key players, lower even than OpenStack, a technology Red Hat supports!

Red Hat’s success depends on its ability to promote its software as the framework for cloud-specific applications, which means it needs to be a leader in the conceptualization of the cloud as that universal virtual computer I’m always talking about.  Given that Red Hat is unabashedly a PaaS promoter and that it has the software tools needed to build such a cloud vision, it’s hard to understand why the company doesn’t get out there and push on it more aggressively.  What seems to be happening is that Red Hat is surrendering strategic high ground to pursue tactical sales, perhaps to prevent the very kind of slippage it showed in the latest quarter.  That never works; push a problem downhill and it gets bigger.

One thing I think would help is to escape from “open-ness”.  Open source, open standards…all this open stuff is good in a market where buyers are afraid of being locked in by a given vendor.  Who thinks they’re going to be locked in to OpenStack?  The cloud is in fact the most open thing that’s come along in IT in ages, and likely the biggest strategic success for open source of all time.  “Open” is table stakes, so to brand all your stuff with the term is to throw away the opportunity to brand your stuff with something that actually differentiates.  Think about it, Red Hat.

A JP Morgan CIO survey just released shows that IT execs believe their spending will be down in 2012 and will show a modest improvement in 2013.  Hardware is where the largest expected y/y declines are found, but application software was the runaway priority winner in the survey.  That matches our own work, which has consistently shown that it’s applications that impact worker productivity and business operations, and therefore software is a focus where improvements in those areas are mandated by senior management.  None of the stuff that Red Hat touts on its site makes the cut in terms of high-priority opportunities to drive sales.

The cloud doesn’t make the JPM cut either, and that we think reflects the fact that the market still sees “cloud computing” as public hosting and not as a new IT architecture.  For Red Hat, that blind spot is critical, not only because it offers the company a way to get itself in the strategic forefront of the world’s biggest IT trend, but because Red Hat is already trying to push things like PaaS and hybridization.  Absent a holistic cloud vision that ties public cloud to private IT change, CIO engagement is going to be a tough slog.  Since service investment is expected to be up a bit for 2013, this may be Red Hat’s big chance.

 

SDNs: A Strategy or a Good Wash?

Operators who talk with me about OpenFlow present a challenge wrapped in a paradox for those who want to promote the technology of SDNs.  The paradox is that operators have a variety of highly tactical targets of interest for SDN principles, a variety that’s hard to harmonize with a single drive forward.  The challenge is that even future harmony of tactics depends on the emergence of a strategy, and it’s not clear how that can happen.  Operators, big potential buyers of SDN and the most likely to push the technology out of the data center, are divided.

The good news is that while I’m seeing a lot of OpenFlow and SDN missions proposed and not a lot of harmony on the details of any of them, there is one common element.  Every operator who’s talking SDN to me right now is talking a metro application.  Some are looking at SDN principles for what could be called “over-stream capacity management”, meaning the control of bandwidth from source to user to ensure that streamed content needs match the capacity assigned every step of the way.  Some are looking at SDNs as a means of better managing roaming for mobile broadband, specifically in supporting mobility for more general IP/Internet applications rather than IMS applications.  Some think SDNs might help WiFi broadband for tablets and smartphones integrate with cellular services, or even help create unified services from a series of hotspots.  Most believe that there might be a better electro-optical partnership possible for backhaul, for hotspot connectivity, and for residential broadband.  Most also think that network efficiency and operations could be improved.

The bad news is that even among network operators, there is limited contextual awareness of OpenFlow and SDNs.  While every operator I talk with or survey has some specialists who are very aware that OpenFlow is a piece of a future SDN whose other pieces aren’t really being talked about, I’d have to admit that if you had to assign a position to the vast bureaucracy of any given operator, that position would be that somehow OpenFlow is going to evolve to address everything.  Maybe, but I’m not sure I see where that’s happening and I’m not sure that the recent history of standards processes demonstrates any credibility in the claim that these processes can move quickly toward such a broad goal.

To start with, SDN is at the union of two critical trends, the cloud trend and the evolution of the network to a true NGN model.  The cloud frames the demand and requirements side and the SDN frames the evolution and the hardware/operations side.  The two cannot meet in OpenFlow because it’s simply a control protocol to manage forwarding-table entries.  In the September issue of our technical journal Netwatcher, I talked about the bottom part of this partnership, and in October I’m covering the top end.  I also propose to do some hangouts on the topic of cloud/SDN symbiosis, and hope to be able to talk about some specific vendor approaches in October or November, depending obviously on the announcement timing.

The second issue is achieving a strategic SDN position without a specific goal.  We can’t define SDN in any universally accepted way, and the problem is getting worse as vendors discover that SDNwashing is even easier than cloudwashing.  Even a solid definition isn’t the basis for a strategic position because you need an architecture, something that everything from the network on the bottom to the cloud on the top can work to achieve.  We’re trying to do SDN without knowing what one looks like, confident we’d know it if we saw it.  Not so; we’ll never know it without a quick injection of order to dispel SDNwash chaos.

The cloud community has defined abstractions for current network services, and SDNs so far are aimed at an alternative mechanism for creating current network services.  We need to understand how the totality of the current network can be created with, or be symbiotic with, SDN elements.  I think that will likely emerge as we tweak the relationship between cloud and network, and that’s the best place for it to emerge.  Why invent stuff hoping the cloud will somehow morph into using it?  We have a unique opportunity with SDN—an opportunity to build network services based on demand and not supply.  Let’s not blow it.

 

Oracle is Reading “Cloud-Ready!”

Oracle is one of the most interesting companies in the industry, I think.  Not only is it a tech giant and bellwether for the space overall, it’s innovative and it’s got one foot on each side of quite a few of the critical industry divides: cloud versus data center, open-source versus proprietary, software versus hardware.  The company reported yesterday and came in pretty close to expectations, but I think there’s still a lot we can read from their results.

To start with, Europe didn’t drag Oracle down as much as some might have feared.  Yes, software and cloud numbers there were a titch weak, but the weakness was truly minimal.  What this says is that in the case of platform and application software, enterprises are still investing even with the Eurozone economy in tatters.  That is very encouraging for tech overall, but more for software companies.  My survey users suggest that software is the tech element most tied to productivity and thus the one most likely to be sustained even in bad times.  Companies will try to make do in hardware, including networking.  I think Oracle’s results prove that out.

The second thing I read out of the Oracle tea leaves is that this company really does see where computing is heading.  Who besides Oracle makes a big thing out of PaaS on an earnings call?  They talk about their platform services a lot and blow a quick kiss at IaaS, which is the opposite of the stance most vendors take.  I think that shows that Oracle realizes that the future of the cloud is PaaS, that a virtual computer with a cloud-ready set of application services will drive the evolution of IT.

Oracle also realizes that the first task this virtual computer will face is to straddle the public/private boundary to create hybrid distributability of applications and components.  They made a specific point of talking about the fact that the same stuff that creates the Oracle Cloud is available to their enterprise customers as a software platform.

I think their hardware position is also very cloud-centric, though here I must admit that I’m drawing on nuances and not on direct comments.  The database appliance model Oracle is driving is the perfect strategy for the cloud.  Stick the database servers on premises, give both local and cloud-hosted apps access to them, and you dodge the whole problem of cloud security and price/performance for data storage.  And of course you can price DBaaS competitively as a cloud service too.

Is there anything disappointing about the quarter and the call?  Sure, and if you know me you probably can guess what I don’t like.  There’s not enough marketing aggression.  Like IBM, Oracle seems to be allowing itself to become framed by its current accounts, hemmed in by tactical competition with the usual rivals.  You don’t foment revolution by sitting down with the masses at a big conference table, dressed in a nice suit and drinking Evian water, and talking in well-formed sentences.  You wrap yourself up in tattered rags, leap on the table, and orate.  “Oracles” in history foretold the future; they didn’t bury you in tactical recommendations about dry issues of technology migration.  That’s what Oracle the company has to do, and it’s not doing it yet.  Thus, it’s still vulnerable in a big way to a competitor like IBM or HP or even Cisco who might be willing to man the barricades.

 

Will We EVER See SDN Substance?

SDN buzz continues, but it’s my view that most of the stories coming out qualify for the “kiss the SDN baby” award; political theater.  What’s happening is simply that SDN is hot, that vendors are afraid of being put at a competitive disadvantage for lack of SDN support, and so are moving to take the minimum steps necessary to claim support.  That in most cases is implementing OpenFlow.

OpenFlow does not an SDN make.  By itself, OpenFlow doesn’t even make a path for traffic to follow.  All it does is let you update forwarding-table entries in a conformant device.  The purpose of a forwarding-table entry is to move traffic through a node, not a network.  You still need path computation to do the latter.  To do path computation you need topology, you need device and link state…you get the picture.  We’ve talked about this particular issue before, but this month in Netwatcher we’re looking at the details of the SDN-to-network link in our treatment of the “Topologizer” element of the SDN of the future.  That element, along with the “Cloudifier” and “SDN Central”, is essential in turning OpenFlow into an SDN.  We haven’t heard about these above-OpenFlow features from anybody, including the new OpenFlow supporters.
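To make the distinction concrete, here’s a minimal sketch, in Python, of what has to live above OpenFlow: something that holds a topology and link-state view, computes an end-to-end path, and only then emits the per-switch entries that a controller could push.  The topology, rule format, and function names here are illustrative assumptions, not any real controller’s API.

```python
import heapq

# Topology and link state: switch -> {neighbor: link cost}.  In a real SDN this
# view would come from a "Topologizer"-like function, not from OpenFlow itself.
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1, "s4": 5},
    "s3": {"s1": 4, "s2": 1, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra over the link-state view; OpenFlow plays no part in this."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def flow_entries_for_path(path, match):
    """Turn one end-to-end path into the per-switch entries a controller would push."""
    return [
        {"switch": hop, "match": match, "action": f"output toward {nxt}"}
        for hop, nxt in zip(path, path[1:])
    ]

if __name__ == "__main__":
    path = shortest_path(TOPOLOGY, "s1", "s4")  # ['s1', 's2', 's3', 's4']
    for entry in flow_entries_for_path(path, match={"dst_ip": "10.0.0.4"}):
        print(entry)
```

Everything in this sketch except the final push of per-switch entries is outside OpenFlow’s scope; the protocol carries the entries, it doesn’t produce them.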

Huawei, one of the recent OpenFlow fans, has also demonstrated a lack of SDN vision by initially pooh-poohing OpenFlow and SDN, only to be converted by the hot lights and glamour the market affords the concept.  If there was ever a vendor who should have been leading the charge on SDN, Huawei was it.  They are weakest in the areas where OpenFlow is the most destabilizing to competitors, and their prime competitor (Alcatel-Lucent) seems to be dragging its feet on OpenFlow.  Cisco and Juniper, two other competitors, have narrower product lines, and an SDN vision could have shone a spotlight on that fact by creating an integrated, multi-layer vision of traffic-handling evolution.  So far, though, Huawei is just SDN counterpunching.

Cisco’s Product Changes: Exiting or Entering?

There’s been a lot of discussion about Cisco’s ceasing development of its Application Control Engine elements for some of its most popular switches and routers, and more so given that the company has also cut investment in WAAS.  Some are taking this as Cisco’s abandonment of the spaces to more agile competitors, but I don’t think so.

Application networking is squarely in the target zone of the evolution of the new virtualization and cloud computing markets, implicated in SDN evolution, and changing from the top down in response to greater software componentization and orchestration efforts.  Further, I’d contend that an awful lot of both load balancing and WAN optimization is rooted in the old discrete-server paradigm, and so is singularly unsuited to the way the spaces are likely to develop.  So I have to wonder whether Cisco isn’t looking at the whole issue of application networks (as it should be) and working out a new approach.

And this isn’t the only place one is needed.  We’ve evolved networking in an era where network elements were physical devices.  Network elements are evolving to being virtual devices, and then to not being devices at all but rather “containers” that map to applications in one direction and to a pool of resources in the other.  It’s inevitable that software development will adapt to the new rules, and in doing so developers will create an application model that is likely to be much more dynamic.  This dynamism will undermine the value proposition for load-balancing and WAAS, both by having the application do more directly and by introducing the notion of software control (virtual networking, DevOps, SDN) of the network.  Most current load-balancing and application networking solutions are either designed to address issues with static resource assignments or use a presumption of static assignments to work.  In the cloud, everything has to be dynamic.

Could Cisco be looking to be the first vendor to actually field a “cloud network” strategy?  Recall that Cisco is known to favor a “fast follower” role in the market, to avoid blazing trails and instead seek to run others down on the trails they’ve cut.  The question is whether, in this particular case, there might not be enough value to induce Cisco to move quickly or whether Cisco perceives another player (like HP or IBM) is likely to take that lead role shortly.

What the heck is a “cloud network”, then?  It seems certain that at the minimum a cloud network is highly virtualized and highly controlled by software.  It seems certain that the network is a more general resource pool, a pool from which all sorts of capabilities are drawn on demand.  That would tend to favor the notion of “hostable network features” that vendors (including Cisco) have been pushing for some time.  Every device has to serve multiple feature missions because nobody could tell what role that device might have to play at a given moment.  This vision is the antithesis of the fixed-mission-and-position devices that make up most load-balancing and WAN optimization product lines.  It seems at odds with Cisco’s ACE, the latest casualty, though.  That suggests that perhaps Cisco believes a major change in what “application control” is and what kind of “engine” might be needed to support it is an onrushing industry reality.

Whatever happens and whatever the motivation, we can expect that Juniper is going to have to respond to this.  Juniper is balancing between trying to be a real enterprise and data center network player (with things like QFabric) and being a security player trying to carefully extend its penetration.  Because they’re a high-inertia hardware-driven company, they can’t afford to let a competitor set the market tone and gain an advantage—it could take four years to re-siliconize to meet the threat.  So Juniper more than anyone needs a cloud architecture, something that lines up hardware to meet the virtual-network needs of the cloud and then dresses the tree with ornaments that provide integrated security, application awareness, etc.  Which is just what Cisco MIGHT be doing.

This adds up to what might be a very interesting fall for enterprise networking, and even some profound changes in how we conceptualize applications, virtual resources, and the cloud.

 

Amazon’s Maps Show Android’s Directionlessness

If you were having a problem accepting my view that Amazon was determined not to be a Google supporter even though Android is the basis for the Kindle Fire line, listen up.  Amazon is releasing its own mapping tools and APIs, following in Apple’s footsteps, and for exactly the same reasons.  Mobile is about local, local is about shopping, and shopping is about where to shop.  This obviously raises some questions about Google’s Android direction.

The fundamental problem Android has is that it’s not like Windows in a critical way; you can’t update system software in a generic way because of customization done by device vendors.  That means that Google can’t push new features to older devices and it can’t ensure that every Android device is immediately able to utilize the stuff that Google makes available.  That, in turn, tends to turn the “Android community” into the “Android Balkans.”

But as bad as that is, there are other issues that could be worse.  If you can’t upgrade Android in place, then why base your plans as a device vendor on Google’s new versions?  Why not take the Open Source version (which is still hanging around 3.x) and build your own top-end software above it?  Now you have differentiation.  The risk is that you may not be able to run all newer-version-dependent apps, but if you weren’t going to upgrade to the newest version then you’ve lost nothing.  This is essentially the path Amazon has taken, and the risk has been limited by the very Balkanization of Android anyway; developers tend to make their stuff backward-compatible where humanly possible because there are so many “backward Android versions” to be compatible with.

And more every day.  Jelly Bean, the new Android version, isn’t yet committed for the great majority of Android devices out there.  In fact about as many devices are definitively NOT going to be upgraded as are committed to be so.  I noted when Jelly Bean came out that it was possibly THE critical upgrade for Android in terms of experience, and Google should have moved heaven and earth to get it on everything, including subsidizing vendors to upgrade.  That’s still what they should do.

Even Mozilla’s Firefox OS concept plays on the limitations of Android.  Like Kindle Fire, it uses a core Android from the open source version, but it builds a more web-customizable GUI on top.  If something like it takes off, then it really impacts not only Android but also Apple, because you could build a browser that could run Firefox OS apps on pretty much any platform.  So in short, Google has an Android problem at a time when it needs one the least.

 

In the Land of the Network, Apple is King

The iPhone 5 may not be revolutionary (well, let’s be fair—it’s not) but there’s never been much doubt it would be successful, and in fact the early allotment sold out in no time and much faster than earlier models.  The question for the industry now is how to deal with this phenomenon.  It’s not the traffic, it’s the fact that Apple and other appliance vendors are continuing to advance their role in driving the direction of mobile.  Wireline is already about pushing bits at low margins.  What are operators to do, and how about equipment vendors?  Some seem to have reasonable ideas, and some less than reasonable.

DT is planning a major rollout of vectored DSL to reduce the cost of broadband deployment versus FTTH, and this is one of those deals that I believe might be less than reasonable.  Operators overall tell me that advanced loop technology is profitable if you can deliver multi-channel video over it, and DSL is video-compatible only with the fairly complex IPTV overlay that AT&T’s U-verse models.  You can deliver video with fiber, as well as broadband essentially without limit.  Vectored DSL, which reduces crosstalk on bundled pairs that plagues short-haul ultra-fast stuff like VDSL, seems to me to be a rather limited and optimistic response to market needs.  In any event, you still need FTTC because the loops have to be short to deliver the 100 Mbps or more that cable can deliver.

In contrast, Telefonica is continuing to push the service-layer envelope with an adventure in augmented reality—not the Google Glasses stuff but interactive multimedia advertising specifically targeted at the mobile user.  Their deal with Aurasma, a leading global player in the space, is both an indication of Telefonica’s determination to get their own mobile ad position into the market and an indictment of network vendor service-layer strategies.  I’ve said this before, but how many times do we need to see carriers jumping into bed with specialized service platform vendors to realize that the enormous opportunities for a service-layer architecture as a boost to network equipment vendor fortunes are largely lost now?  This is a particular problem for vendors like Alcatel-Lucent, Ericsson, and NSN, who need to have as much monetization-oriented story as they can get to differentiate versus Huawei and to build opportunities for professional services.

Alcatel-Lucent won a big deal to transform Telefonica’s NMS, which is an accomplishment by any measure, but it’s still a kind of retrospective success in that it builds on what the network has always been and not what it has to become.  The iPhone is creating the future of networking, and managing networks is just a small step above pushing bits.  I’d like to see Alcatel-Lucent recognize that if you’re going to be talking about more efficient network operations you need not only transformed management but also transformed practices, and that leads to SDN.

Software defined networking is on the move, though like most hype-driven trends it’s sometimes hard to say whether all the moving is in a constructive direction.  One recent focus has been on the application of OpenFlow to optical switching, something that could be jury-rigged with the current level of specification but could also benefit from some more explicit standards help.  The problem is that the move, in my view, is highlighting some fundamental differences in how OpenFlow is interpreted.  Some see it as a protocol to communicate forwarding rules—I count myself in this group.  Others seem to see it as a specification for a low-cost switch, meaning that they expect the OpenFlow parameters to map right to data-path handling in silicon.  That’s not going to happen now, but I don’t think it should ever have been expected.  We should be expecting OpenFlow devices to “compile” rules in a way that’s optimized to their specific implementation of forwarding-plane behavior.  There will still have to be control-plane processes in OpenFlow switches, no matter what, and we may as well give them some useful missions now to avoid confusion later on.
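To illustrate the “compile” point, here’s a hedged, hypothetical sketch of how the same abstract forwarding rule might be translated by control-plane software in two very different devices: a packet switch that can honor the match fields directly, and an optical switch that can only realize the intent as a port cross-connect.  Nothing here is drawn from the OpenFlow specification or any vendor implementation; the rule format and functions are assumptions for illustration.

```python
# One abstract forwarding intent, expressed independently of any device type.
ABSTRACT_RULE = {
    "select": {"dst_ip_prefix": "10.1.0.0/16"},
    "forward_to": "port7",
    "priority": 100,
}

def compile_for_packet_switch(rule):
    # A packet device can honor the match fields directly in its hardware tables.
    return {
        "table": 0,
        "priority": rule["priority"],
        "match": rule["select"],
        "instructions": [f"output:{rule['forward_to']}"],
    }

def compile_for_optical_switch(rule, ingress_port="port3"):
    # An optical device can't inspect IP headers at all; the best it can do is
    # realize the intent as a port-to-port cross-connect and leave header-level
    # selection to the electrical layer feeding it.
    return {
        "cross_connect": (ingress_port, rule["forward_to"]),
        "note": f"selection {rule['select']} must be enforced upstream",
    }

if __name__ == "__main__":
    print(compile_for_packet_switch(ABSTRACT_RULE))
    print(compile_for_optical_switch(ABSTRACT_RULE))
```

Either way, a control-plane process in the device is doing real work; the forwarding plane just executes whatever the rule was compiled down to.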

Speaking of confusion, I’m still frustrated by the fact that everything in the media’s vision of the cloud world seems to revolve around IaaS and Amazon competition.  Virtual hosting is a two-dimensional economy game: first, it depends on how much cost your virtual framework can replace, and second, on how much better your economy of scale is than an enterprise’s.  It doesn’t take a supercomputer to figure out that displacing only the hardware platform isn’t the right answer to optimal opportunity.  In fact, the real cloud opportunity will always lie in creating what I’ll call a “cloud PaaS”, a model of a virtual network operating system that’s inherently distributable and hostable.  Amazon, I think, is moving in this direction with its storage options—these are services that exist in and for the cloud.  So, obviously, are some other cloud platforms like OpenStack.  None of them so far are really talking about the future of the cloud.  If there IS any future, then it has to lie in the definition of a platform that explicitly captures the cloud’s benefits by making them available to applications as OS services.