Yammer and Political Yammering

It’s one of those days with a buffet of items that missed the cut during the week.  There’s a theme, though, which is the evolution of broadband and the Internet.

Microsoft announced it’s going to buy Yammer, a company that specializes in creating a kind of social workplace, a collaborative framework based not on evolved voice notions but on social networking.  The move is almost surely a follow-on to the Skype acquisition, because a social framework for collaboration still requires something that can integrate actual communications among workers into the collaborative model.  You can see the picture here: teams are hosted on a Yammer structure and exchange views through posts, and then jump off into collaboration pairwise or in real-time groups, via Skype.  SharePoint apps can integrate with this picture to create a direct linkage to applications and work processes.

I think this is probably worth a shot for Microsoft, but I also think it may be just the tip of the iceberg, meaning that Microsoft likely has things in mind a bit broader than the pure Yammer model.  Any human activity can be socialized.  The social-network model says “Integrate your life into this online process”, and the business/collaborative model says “use this model to support cooperative activity”.  That’s only a small step from “build this cooperative framework into your online activity”.  Microsoft could easily build social contexts around key online activities like music listening, TV, and so forth.  That might be better in the long run than letting Facebook live your life.

On the consumer side of broadband and online service, we have a lot of news about the government’s influence.  Government, of course, cares about elections more than about us, and thus is likely to take the position that has the most glitz and gets the most votes rather than the one that’s logical.  Still, it can do stuff, sometimes with unintended consequences, and there are a couple of initiatives that have that maybe-good-maybe-not characteristic.

The “Ignite” program creates a kind of industry partnership to promote 1 Gbps broadband, and it’s backed by operators and vendors, but in my view none of this backing is really meaningful.  The ostensible motivation here is to “help the US catch up” with countries like Korea.  The problem is that, absent redrawing US geography, that’s not very likely.  Korea does well with broadband because its population and geography combine to make it highly efficient to serve people.  Where density of opportunity is low, as in the US, the cost of deployment is higher than the market will bear.

Where do we see FTTH in the US?  Answer: in Verizon’s territory.  Why?  Because Verizon has the highest demand density of any US operator, a demand density that approaches that of some of those high-broadband-speed countries.  So here’s my contribution to your effort, Igniters: have everyone move to the northeast.  Otherwise, you’re politicking.

Another government issue is the DoJ review of Comcast’s practices with streaming.  OTT players are now complaining about TV Everywhere because it gives an advantage to the guy who built the network that delivers the content.  Well, if the Internet is open, it should be open to all the options for payment and settlement, but the VC community doesn’t fund carriers, it funds OTTs, so guess who ex-VC FCC Chairman Genachowski supports here?  And if providers can’t pay for QoS they probably can’t pay for traffic, which means that streaming video of any form will end up being paid for through the imposition of usage pricing.  Furthermore, all this exploitation of capacity by OTTs is a proximate cause of underperformance of infrastructure investment, which is why global operator capex is under pressure.  That flies in the face of goals to pursue 1G Internet.  Government sure gets technology!

 

Cisco’s SDN: Real or Cynical?

Never one to shun conflict or controversy, Cisco has probably created both with the announcement of its SDN strategy at its “Live” event yesterday.  It would be fair to say that Cisco didn’t even blow a kiss at OpenFlow; it only promised to blow a kiss at some point in the future.  Needless to say, this has polarized the response.  The Cisco Open Networking Environment will be touted by some as the holy grail of SDN.  Others will declare it to be the work of the devil.  It’s a realistic view of SDN benefits and potential, or it’s a cynical attempt to destabilize an emerging and competitive trend.  You get to take your pick.

If you strip judgmentalism out of the picture, Cisco’s ONE says that application control of the network is a lot more complicated than just OpenFlow.  That’s very true, and I’ve said it here myself.  Cisco says that the basic goals of software control of networking can be met in other ways, and that’s true too—and also something I’ve said.  Cisco says that OpenFlow is suitable for experimentation and not really much else, and that’s not really true.  What is true is that OpenFlow addresses a very small piece of the big SDN pie.  For what it does, OpenFlow is arguably a major step forward, but it wasn’t designed to solve a whole business problem and that’s what the market ultimately needs.

OpenFlow lets you APPLY application controls to forwarding, but it doesn’t determine what those controls should do.  It tells you how to make a single device create a forwarding table entry, but not how to thread a route or pick the best one.  These aren’t “failures” of OpenFlow, they’re simply a reflection of the fact that the protocol was designed to allow various systems of software control to be exercised on the network.  It doesn’t dictate what those systems are, but in the real world the value of any SDN strategy will be determined not by how you diddle forwarding tables but by how you know what to diddle.
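To make that concrete, here’s a minimal sketch in plain Python rather than any real controller library.  The names (FlowEntry, Switch, add_flow) are invented for illustration; the point is that OpenFlow standardizes the bottom half of this picture, installing an entry in one device, not the decision about which entries to install.

```python
# A minimal sketch (not tied to any real controller library) of what an
# OpenFlow-style flow entry amounts to: a match plus actions, pushed to
# ONE switch.  The names here are illustrative, not from the spec.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlowEntry:
    match: Dict[str, str]          # header fields to match (e.g. dst IP)
    actions: List[str]             # what to do with matching packets
    priority: int = 100

@dataclass
class Switch:
    name: str
    table: List[FlowEntry] = field(default_factory=list)

    def add_flow(self, entry: FlowEntry) -> None:
        # This is roughly all the protocol standardizes: installing an
        # entry in one device's forwarding table.
        self.table.append(entry)

# What OpenFlow does NOT tell you: which switches to program, or which
# path is "best".  That decision has to come from software above the
# protocol -- the piece argued here to be the real value of SDN.
edge = Switch("edge-1")
edge.add_flow(FlowEntry(match={"ipv4_dst": "10.1.1.0/24"},
                        actions=["output:port-3"]))
print(edge.table)
```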

What this adds up to is that ONE may be a kind of OpenFlow-less vision of SDN but it’s not necessarily cynical.  I qualify with “necessarily” because you can’t judge motives here.  Yes, Cisco is offering alternatives to OpenFlow and clearly prefers those alternatives.  But yes, there are reasons for that position that most enterprises would likely agree with.  So while I don’t doubt that Cisco is “countering” OpenFlow, it’s also supporting market reality pretty well, and that’s more than can be said for some OpenFlow strategies.

If you want to characterize ONE in a way that will likely delight no one in particular, think of it as “What Junos Space Should/Could Have Been and Isn’t”.  It’s a developer framework that offers access to APIs that provide control over network behavior, and as such it is in a position to implement SDN principles on the network.  It also includes applications that support the cloud by integrating with popular hypervisors, orchestration of services, etc.  In short, it’s really, deep down, a service layer, but one that crosses the service-provider-to-enterprise boundary and so offers general utility.  ONE is a lot more than SDN.  It’s not OpenFlow, though, so in that regard it’s (for the moment at least) less.  There’s a layer missing in the OpenFlow concept: the layer that creates the holistic connectivity vision that OpenFlow can apply.  SDN recognizes this need, and so does Cisco.  But Cisco will fix SDN for Cisco.  If we want a broader fix, we have to look to the ONF process.

For Cisco’s competitors this is just more bad news.  Properly positioned, ONE makes Junos Space irrelevant.  Properly positioned, it also makes Alcatel-Lucent’s OneAPI and perhaps even High-Leverage Networking irrelevant.  Of course, ONE hasn’t been positioned to do either—yet—so there’s still a window of hope for these guys.  For Ericsson and NSN, neither of whom really has a position here yet, ONE sets a pretty high bar, but at least a visible one.  The trick will be to get out there with something that is well thought out and articulated while there’s still time.

I’ve said this before but I have to say it again.  “The cloud” is the future model of network/IT integration.  Networking can either contribute a lot to this, or little.  In the former case, it adds value and margins to network operators and network vendors.  In the latter case it subtracts from margins.  In Europe, according to some of today’s Street research, we can expect the growth of smartphone use to drive up capacity needs.  This at a time when operators can ill afford to be pushing more capex programs.  The expected winners in the Eurocapex shift include Huawei, and that shouldn’t be a surprise.  Huawei is the commodity-market winner.  Absent value that drives margins up for operators, equipment commoditizes too.  I hope that Cisco pushes hard on ONE, and maybe pushes the industry into a little service-layer-vision sanity.  I also hope that the ONF realizes that threading a nut onto a bolt is a necessary condition for building a car, but not a sufficient condition.

Cisco’s “Architectures” and Networking’s Future

Cisco’s annual event kicked off, and while all these events aren’t necessarily a window into the strategic future of Cisco, this one might well be.  Cisco came out more strongly in favor of the notion of strategy and architecture than I’ve ever heard from them.  It’s too early to say that this change is going to be fundamental, meaning that it will drive the future of Cisco (and maybe that of networking), but there’s reason for hope.

Our view of the service provider space is that the operators are stuck in bit neutral and looking for someone to save them.  That salvation will mean adopting a wide-ranging set of changes that add up to what was once called “transformation”.  Those changes will have to conform to an overall vision, which of course is what an architecture represents.  The question has always been where that vision would come from.

Vendors, both network and IT, have seemed to favor a future where the operator drove the strategic architecture bus.  On the surface that seems contradictory to the point of crazy; when has a vendor wanted a buyer to say what they need?  This time, though, the problem has been twofold.  First, vendors truthfully don’t seem to have had a strategic vision.  Second, any transformation to the future risks incumbencies and sales of the present.  Those barriers have stood for almost five years according to our surveys.  Are they falling now?

Maybe.  In the cloud space, for example, Cisco has introduced a series of products that provide what’s arguably the first cloud-specific vision of the network.  By allowing router instances (hosted on servers) to be effectively part of each cloud tenancy, the Cisco cloud approach completes the picture of the public cloud as a virtual extension of the enterprise.

There are other ways to do this sort of thing, but they’re not integrated and they aren’t from the market leader, the vendor most likely to be providing real hardware routers to both the providers and the enterprise cloud customers.  Enterprises and service providers are historically reluctant to jump into new things when they’re expected to assemble all the pieces themselves, and so Cisco’s architecture drive is important for the cloud.  One example does not a market shift make (or even demonstrate), but I’m heartened by this in no small part because there’s no vendor in a better position to make architecture matter than Cisco, and it needs to matter.

For Cisco, one key benefit of all of this could be a seat closer to the head of the strategic table, where favors in the sense of higher margins are handed out.  For network vendors overall, the major problem isn’t that nobody will buy gear but that the competitive value proposition for network equipment is harder to establish and so price competition results.  You don’t want to be moving into lower margins; the Street doesn’t like it.

Competitor Juniper, who had their analyst day on the same day as the start of Cisco’s event, confirmed that Verizon was a customer for their PTX and also said they’d be offering a smaller version of their data center fabric.  Both could have been strong-ish stories, but in neither case did Juniper draw any compelling Cisco-like architectural pictures.  The PTX, like all fiber-core strategies, is inherently dilutive for routing.  Small fabrics are an oxymoron because it’s not likely that small applications generate enough layers of switching to need fabric connectivity.  Top-of-rack is a good market if it’s your servers in your rack.  Both PTX and QFabric needed to be sung in the chorus of the cloud, and they weren’t—not at launch, not since then, not now.

Regulators in Europe are calling “consolidation” of the carrier space the hope for the future of the network operators.  Most, of course, would consider being consolidated kind of like contracting a fatal disease.  So would equipment vendors.  And they should, because consolidation is the symptom of a market with no possible differentiation other than price.  With all that’s happening in networking in general and the cloud in particular, it’s truly pathetic to be giving up at this point.

 

A Tale of Two “i’s”

There may be nothing that so starkly presents the two very different visions of the future of networking than the “two i’s”.  By this I refer to the contrast between Apple’s i-software-focused WWDC and the investment discussions of two key network vendors—Alcatel-Lucent and Juniper.

Apple’s WWDC was a disappointment if you believe that every one of these events has to launch a new device.  There was no Apple TV (I’ll come back to that) and there were no new models of iPhones or iPads.  What Apple seemed to be doing was tidying up its software picture and making a few changes to iCloud.  Does that mean Apple isn’t going to move and shake?  Ha!

I think Apple still means what Jobs suggested a year ago: iCloud would make PCs and Macs just another kind of device.  The WWDC announcements on Mac OS (Mountain Lion) tightened the integration between iCloud and the Mac line, which allows Apple to do more of the kind of integration of the cloud and the PC that we’ve seen from Google and Microsoft with their “drive” products, for example.  Both Google and Microsoft have pushed the notion that their cloud extends to devices as well as desktops/laptops.  Google even bought QuickOffice to improve its inside/outside assault on Microsoft’s Office.  So is Apple going to do that too?  Yes, of course.

The cloud has become the name given to a transformation of the Internet to a world of online services rather than a world of content.  That transformation is going to remake the Internet in a more radical way than, say, the IPv6 transformation we just experienced (as, so far, a non-event despite decades of hype).  It’s the cloud transformation that will be the proximate driver of the network transformation, which brings us to the other side of the picture with Alcatel-Lucent and Juniper.

Alcatel-Lucent had a shareholder meeting and got blasted for their lack of stock appreciation.  Juniper is having their analyst day today, talking about their own business model.  In the last year, Cisco’s stock has gone up just a bit, while Juniper’s is down over 40% and Alcatel-Lucent’s is down about 70%.  You can see why investors might be antsy.  So what does the investment “i” have to do with the Apple “i”?  The answer is that the cloud transformation I noted here has been around for just as long in the network space as in the appliance space.  Why then has one player grasped it and these others not?

One old friend I have at Alcatel-Lucent suggested that their problem was management-by-objectives gone wild, little-i objectives not adding up to a big vision.  They have all kinds of objectives, but they don’t add up to a strategy.  At Juniper you kind of have the “big-i” problem, meaning that the company itself seems possessed of a persona that can’t look beyond its own property line.  Juniper made the first public presentation of the cloud that I ever saw, but despite the passage of three years and multiple new products, they still don’t have any clear vision of why the cloud is different—or at least they haven’t demonstrated that understanding.

SDN and OpenFlow are catalysts and symptoms of the network view of the cloud.  They’re catalysts because they articulate some of the fundamental changes that the cloud brings, the most obvious and significant of which is the notion of full application control over network behavior.  They’re symptoms because both concepts have been driven by attempts to relate the network to the cloud and not to evolve the cloud from the network.  For the service providers and their vendors, it’s that cloud-out-of-network theme that’s critical.  Who among them is articulating it?

Here’s the truth behind the “i”.  Apple knows that network services are bought from the outside in.  We know the network by what it does for us, and so if it’s doing nothing but pushing bits around it’s not very interesting or valuable.  Yes, you need it if you want the iPhone to work or iCloud to sync your stuff, but your goal is the sync and not the bit.  Apple has an outside-in vision, a vision that aligns with the money flow in any service.  The Alcatel-Lucents and Junipers of the world are still stuck on the inside, for no reason other than process.  Fix it, guys, while there’s still time, and know there’s precious little of it left.  Apple’s not waiting for you.

Some Random Tech Shots

It’s potpourri Monday, folks, and I’ve saved for treatment some items that didn’t make the topical cut last week.  Don’t look for too much of a theme today, other than that I’m looking for “news” that’s not being treated comprehensively.

If you think you’ve heard everything, try this one out: the UN’s ITU is proposing a tax on “export bandwidth” from US Internet companies!  Sounds crazy, but if you dig down a bit you see the view of many developing countries: that US sites are flooding the networks there with traffic (at the customer’s request, but surely promoted here) and the result is congestion and pressure to spend more on network capacity.  Dig even deeper and you run into “bill-and-keep”.

In pretty much every kind of public networking except the Internet, we have always shared the payment for service among the providers who participated.  For example, we had (and still have) a notion of “reciprocal compensation” for calls terminating on one carrier and originating on another.  The principle is that the terminating carrier is getting socked by traffic and not collecting revenue to compensate.  Sound familiar?  The problem is that bill-and-keep has become a near-sacred principle for the Internet, enshrined also by Genachowski at the FCC, who wants ALL services to move in this direction.

There are benefits to bill-and-keep; the process of settlement among providers is a non-trivial technical effort with non-trivial costs.  There are also problems, the most long-standing of which is QoS.  Nobody will offer QoS on traffic passing through or terminating on their network when another provider has collected the payment for it.  But the big point here is that we in the US tend to regard our own system as the baseline, the standard against which all else is judged.  It’s an unreasonable attitude, and we need to participate in these debates on the “internationalization” of the Internet in a more enlightened way or we risk losing what we’re trying to protect.

Comverse also made the news, at least the financial news, showing a loss in the OSS/BSS space when many expected the company would gain.  There may be a deeper problem in play here, I think.  If you look at the “big” application in OSS/BSS it’s billing, and if you look at the issues driving billing you can argue that the big one is the new higher-layer services.  In fact, many companies (including Comverse) are seeing changes in services as their big opportunity.  The problem is that if you look at OSS/BSS costs relative to the cost for billing OTT customers for their services, you see that the old telco way here is just too heavyweight.  That’s true in fact for OSS/BSS in general.

The future of Internet and cloud services isn’t created by evolving traditional voice practices, simply because the technology that creates those services isn’t rooted in the PSTN.  There are many who believe that we can simply restructure OSS/BSS, or networking in general, to be able to produce OTT-like services and the telcos automatically win.  Not so; all that would buy them is an opportunity to compete, and that won’t turn into a win unless they can offer more for less.  What about the old practices, especially in the OSS/BSS space, isn’t really about less for more when you get down to it?  We can do voice free, and do, and yet people still want to charge for it.  Get a grip.  OSS/BSS has to transform.  Comverse isn’t transforming it, but neither is anyone else.

Another news item is from the SDN space, where we have a combination of Google’s report that its use of OpenFlow is improving trunk utilization to nearly 100% and one from the ONF itself saying they might let the old earth take a couple of whirls before they update the OpenFlow spec again.  Is this illogical?  Maybe.

OpenFlow’s big problem right now isn’t so much the state of the standard as it is the context of its application.  We live in an under-educated (and maybe uneducable) market, and absent insight from vendors and the media, buyers are left to blunder into the truth rather than seek it out.  In the SDN and OpenFlow world, that’s true in spades (as it is for the cloud), and so in some ways it might be nice to stop moving the duck while shooters get into position.  On the other hand, I’d like to see the ONF become more articulate about those other contextual issues, even if they go beyond the scope of the standards work they contemplate.  Should the ONF look explicitly at some of the issues Google exposed with its application?  Most buyers don’t know what those issues are or how to address them, and yet it’s likely most would hit them at some point in a trial.

 

What Oracle’s Teaching Us About the Cloud

The Oracle vision of the cloud is late, for sure.  Ellison waffled on the value of the cloud, even though he doesn’t want to admit it.  Despite all this, though, the Oracle Public Cloud is important for the cloud market, for three basic reasons.

Reason number one is that it demonstrates yet again that the cloud is a new application architecture and not a hosting strategy.  This isn’t about public or private; it’s about hybrid in one sense, but only in one sense.  Truth be told, the cloud message is about resource transparency.  Ultimately, the cloud is sort of compute Marxism: “from each according to its capabilities, to each according to its requirements”.  Stuff runs where it runs best and it doesn’t matter where that is.

Microsoft’s Azure has always been such a model, and this is the model that HP has announced too, but it’s not the dominant model of public cloud vendors because they ARE public cloud vendors, not software providers.  Oracle’s joining of the fray is a pretty good guarantee that all the public cloud guys will be struggling to create private architectures, but unless they grasp the essential reality of resource transparency they’ll miss the mark.

The second truth is that Oracle is demonstrating that IaaS isn’t the answer.  Users consume SaaS whatever the cloud platform is, so everything is going to be judged by its ability to present application services to consumers and workers.  IaaS is too far down the food chain here; you can’t create a unified model of resource transparency and couple it with a notion of application empowerment if you start with virtual bare iron.  This is a PaaS game, in no small part because a new set of platform ingredients is needed to create the pervasive resource-transparency vision and equip it to host apps of any sort, for any mission.

Oracle’s big benefit here is that it can incorporate Java and RDBMS services into its platform, and both these things are critical for the way the cloud has to evolve.  Remember the application-centricity?  It’s making the apps presentable, composable, distributable that matters in that mission, and Java is a big element in building RESTful interfaces that present application functionality to GUI composers.

The third truth is that all of the sellable experiences of the network of the future are really applications running in the cloud, just like the business stuff is.  There is no separate architecture for service provider IT and enterprise IT, any more than there’s a different architecture for banks versus utilities.  Software is software, and all of the aspects of application deployment and service deployment are faces of the same coin.  You’ll get your network features of the future via RESTful interfaces too.

This is the thing that’s been hurting the network guys.  The Street yesterday gave Juniper another downgrade and told Alcatel-Lucent that its margins were at risk of being unsustainable, making a default by the company a risk.  OK, I agree with both those points, but what’s interesting is that both companies had an opportunity to do the right thing in the emerging cloud-driven future, and both failed to do so.  This isn’t about making network equipment “better”, it’s about creating a totally new context for network equipment—maybe “cloudwork equipment” tells the tale best.

I think we, as an industry, have gotten ourselves into a pickle by letting ourselves be focused by vendors and not by markets.  You look at the media today and you see very tactical stories; little or nothing about grand movements of technology trends.  That’s because nobody wants to sell a grand movement.  BYOD is a cloud issue.  SDN is a cloud issue.  What Light Reading (inelegantly) calls “SPIT”, or Service Provider Information Technology, is a cloud issue.  The Internet’s future is the cloud, and so is the future of the data center.  But it’s not just a matter of hosting on Amazon or replacing Cisco or Juniper routers with hypothetical OpenFlow switches.  It’s a matter of rethinking the role of IT.  It’s never been about doing the same thing with successively cheaper stuff, it’s been about doing a LOT more with a LITTLE more in cost.  We can’t get there tactically, just as we can’t get to the cloud tactically.  Sometimes you have to lift your eyes from the dust of the trail and look over the next hill.

 

SDNs Find Their Niche?

Big Switch, one of the most active and interesting of the OpenFlow players, has announced a number of new things that may help the market better understand the value of OpenFlow.  In particular, they help us understand how OpenFlow might be used in conjunction with “normal” network protocols.

To start with, it’s important to understand that OpenFlow forwarding is kind of protocol-neutral, meaning that since the forwarding tables are explicit there’s no control-plane context that things have to fit into.  You can use this feature of OpenFlow to create new switching/forwarding models that escape the whole adaptive-discovery thing, but you can also use it to augment normal networking, or even to “squash” OSI networking down to a single layer.
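As a toy illustration of that protocol-neutrality, here’s a flow table sketched in Python.  The field names are loosely modeled on OpenFlow match fields but are invented for this example, not taken from the spec: one entry forwards on a MAC address alone, while the next matches what would traditionally be Ethernet, IP, and TCP fields in a single “flattened” rule.

```python
# Illustrative sketch of why explicit forwarding tables are "protocol-
# neutral": one entry can match fields that would normally live at
# different OSI layers, so a single control function can steer them all.
# This is a toy classifier, not a real switch.

def matches(entry_match: dict, packet: dict) -> bool:
    return all(packet.get(k) == v for k, v in entry_match.items())

flow_table = [
    # L2-only rule: forward by MAC, the way a learning switch would
    {"match": {"eth_dst": "aa:bb:cc:00:00:01"}, "action": "output:1"},
    # A rule mixing L2, L3 and L4 fields in one "squashed" layer
    {"match": {"eth_type": "ipv4", "ipv4_dst": "10.0.0.5",
               "tcp_dst": 443}, "action": "output:2"},
]

packet = {"eth_type": "ipv4", "ipv4_dst": "10.0.0.5", "tcp_dst": 443}
for entry in flow_table:
    if matches(entry["match"], packet):
        print("apply", entry["action"])
        break
```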

The Big Switch announcement is about improved virtualization, and here the goal is to use OpenFlow in a kind of “roundhouse” role to link the physical resources of a virtualized data center or cloud to the logical address space that’s expected by users.  If this is properly done, it elegantly solves the problem of managing the reachability of cloud elements.  Similarly, it could be employed inside a CDN to address content.

To make this work, you need to be able to interwork between OpenFlow forwarding and traditional networking.  Think of every “logical resource” as a central point in our roundhouse, with each position of the turntable linking that logical resource to a physical one.  Or vice versa; one beauty of OpenFlow abstraction is that you can look at most relationships symmetrically.  But anyway, Version 1.3 of OpenFlow makes the process of interworking more efficient by handling tunnel encapsulation better.

The Big Switch announcement also illustrates the most overlooked and important point in OpenFlow, which is that the standard “Controller” function is really a stub that connects a source of connectivity requirements with a forwarding device.  Something higher up has to understand the applications’ needs for connectivity and translate that into a form that can drive creation of paths through successive forwarding table entries.  Big Switch is developing these applications, and it’s here that the company’s unique value will have to be created (because the lower level is all open source).
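Here’s a hedged sketch of what that “something higher up” might look like: a hypothetical application that knows the topology, picks a path, and only then uses the controller stub to touch each device along the way.  The topology, the plain BFS path choice, and program_flow are all invented for illustration; nothing here is Big Switch’s actual software.

```python
# Sketch, under stated assumptions, of the layer ABOVE the controller
# stub: it decides which route to thread, then issues one forwarding
# change per device along that route.

from collections import deque

topology = {            # adjacency list: switch -> neighbors
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(src: str, dst: str) -> list:
    # Plain BFS; a real system would weigh load, policy, QoS, etc.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

def program_flow(switch: str, match: dict, next_hop: str) -> None:
    # Stand-in for the controller "stub": one flow change per device.
    print(f"{switch}: if {match} forward toward {next_hop}")

# Thread a route for one application flow across successive switches.
path = shortest_path("s1", "s4")
for hop, nxt in zip(path, path[1:]):
    program_flow(hop, {"ipv4_dst": "10.2.0.7"}, nxt)
```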

The “squash OSI flat” model is something that Big Switch is looking at/working on but for which there’s no specific announcement.  Here the notion is that if “forwarding” is really decoupled from “control” and “applications”, then it should be possible to take all of the decoupled non-forwarding elements and squish them into a single functional entity.  Instead of having multiple layers of protocol, which in many cases are all trying to provide common things like fail-over in an independent way, you now have a super-control function that sees the Glorious All of the network and that then exercises coordinated forwarding changes that happen to be in what the OSI world would have called “different layers”.  Except there’s no real difference now.

The new vision Big Switch is articulating is less a fork-lift acceptance of OpenFlow than a kind of camel’s-nose creeping commitment.  Create missions where it clearly shines, enough to justify using it as an overlay.  Over time, as these missions grow and new ones are added, expect the size of the OpenFlow overlay to increase, and eventually for the boundaries to touch and the network to become almost totally OpenFlow.

An interesting angle on SDNs yet to be fully explored is their relationship with CDNs, meaning content delivery.  There has been a wave of announcements on the content and mobile content space, ranging from Netflix’s roll-your-own approach to Citrix’s buy of Bytemobile, but I wonder if all of this might not be taking second seat while the first chair is still open.  If SDN principles really enhance traffic engineering then mobile EPC and CDN applications should be perfect for the application of those principles.

Appliances in the Driver’s Seat: To Where?

Every day we seem to get more signals that the complex market we call “the Internet” is getting more complex.  All markets are ecosystems, all ecosystems are symbiotic relationships among elements, and what’s happening is that the number of elements in the ecosystem of the Internet is exploding.  Not every organism survives a sudden broadening of ecological space, and that’s probably going to be true here too.

A good place to start is with the whole notion of visual experiences.  Up to now we’ve had a simple taxonomy; video is real-time or canned, delivered on demand or linear channelized, and using RF or IP.  Microsoft and Sony seem to be working to change this, and it’s possible that these changes will fundamentally change video entertainment.

The Microsoft strategy (“SmartGlass”) is on the surface a mechanism for linking Windows 8 devices and Xbox consoles to create a more interactive video experience.  With the new system, application developers can orchestrate how video components are interlinked to create something, and users can do things like use a tablet to see other films that an actor appearing in their current content has been in.  They can even (in theory) index to the scenes of these additional films and view them.

The Sony strategy, Wonderbook, is embodied in the notion that a video camera can be used to inject a virtual reader into a story, and also to read gestures that would then determine the story line itself.  Sony talks less about developers and the platform, and more about the current thrust, which seems aimed at the Harry Potter type of reader.

Underneath the early positioning, both these approaches seem to be aiming at a common target, which is an orchestrated version of augmented reality.  The Microsoft approach is described in more detail so it’s a more convenient example.  The idea is a timeline that divides content into scenes and is associated with metadata.  A collection of content so structured can be “played” to allow the user to hop around at will.  You could add a gaming element to this, or live video, and orchestrate it along with canned content.  In all, you could compose experiences and include a high level of interactivity.
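A purely hypothetical illustration of the kind of structure that description implies, in Python rather than anything Microsoft has published: a timeline of scenes, each carrying metadata, that a companion app could query and hop around in.

```python
# Hypothetical sketch of a scene-indexed timeline with metadata, the
# sort of structure a "second screen" app could play against.  Names
# and fields are invented; this does not reflect any real SmartGlass API.

from dataclasses import dataclass
from typing import List

@dataclass
class Scene:
    start_sec: int
    end_sec: int
    title: str
    cast: List[str]

timeline = [
    Scene(0,    420,  "Opening",       ["Actor A", "Actor B"]),
    Scene(420,  1310, "The Heist",     ["Actor A", "Actor C"]),
    Scene(1310, 2400, "Confrontation", ["Actor B", "Actor C"]),
]

def scenes_with(actor: str) -> List[Scene]:
    # The companion-app query: find every scene a given actor is in,
    # so the viewer can jump straight to it.
    return [s for s in timeline if actor in s.cast]

for s in scenes_with("Actor C"):
    print(f"jump to {s.start_sec}s: {s.title}")
```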

I think both Microsoft and Sony are also seeing this increasingly as a cloud process.  Whether the elements of a composed experience are entirely in the living room (owned assets), delivered (rented) or hosted in the cloud, and whether the elements are devices or content elements, they’re still a distributed collection, and that’s what a cloud is.  I also think that if this sort of model of orchestration of experiences catches on, it could be a real threat to channelized viewing.

So far, despite media hype, the fact is that streaming video has been more about making video available where channelized TV isn’t than about replacing the latter with the former.  However, if users become accustomed to a model of viewing that relies on high interactivity, then either channelized viewing has to adapt to support it, or there’s a good chance that in the channelized-viewing bastion of the living room, changes may really start to occur, and of course it’s new appliances that are the proximate driver of all of this.

Appliances are already impacting productivity, and Google’s buy of QuickOffice may illustrate this.  Tablets, as we know, are overwhelmingly used with WiFi only, and so they aren’t always connected.  They have become a poster child for the limits of the cloud, in fact, because when they can’t connect they turn into a brick from a business productivity standpoint.  So Google looks at this, looks at how its strategy for cloud productivity might wreck on the tablet reef, and decides to grab a product that would allow the tablet to work on documents locally when no connection is available.

The challenge this poses for “the Internet” and all its players is that we have a usage revolution that’s decoupled from the business model of some of the players.  There is a definite shift toward data-driven appliances, and yet the mobile carrier revenue model is still voice-dominated.  No wonder AT&T says that in two years it’s likely operators will be offering data-only plans for appliances and relying on OTT forms of voice to support any voice needs of the owner.  If everyone is shifting toward texting or video, and if everyone expects voice to be a composed media in a complex experience, then you can’t be messing with PSTN stuff.  Including, of course, IMS.  I’ve said for years that IMS needs to focus on what it’s really needed for, because trying to compete with Web 2.0 in data apps is never going to work, and will only create a lot of bath water to toss the baby (registration and mobility management) out with.

Consumer Changes May Drive Networks in New Directions

Google’s augmented reality glasses have captivated people, but there are other aspects of blending potentially live material with gaming and other experiences that may be realized faster.  There are other changes in the consumer space that appear to have some potential to create broad impact, and that’s what I propose to talk about today.

Microsoft has released its SmartGlass Windows 8 app to link mobile devices to Xbox and to TVs, and it looks like Microsoft plans to go beyond simply facilitating streaming video.  It’s possible that SmartGlass might augment the experience of gaming, for example, and that would be smart given that Xbox and Kinect are probably Microsoft’s strongest and smartest innovations of late.

SmartGlass is really about creating an orchestrable relationship among media elements, which Microsoft calls “activities”.  The seemingly normal model might be to be watching something (a movie) and using a tablet or phone (via SmartGlass) to access incremental information like cast data, related movies, etc.  The system has the capability to index by metadata so you can jump to a specific scene.  Since you can blend the stuff freely, you can create a blend of real-time material, games, and video, and thus create augmented reality.

Sony’s Wonderbook E3 technology also includes some augmented reality features.  The Sony mechanism lets you “insert” yourself into a book via a camera and kick off sequences based on gestures, something that’s also possible with SmartGlass but wasn’t really demonstrated.  Unlike SmartGlass, which is clearly a platform on which Microsoft hopes a whole ecosystem will build, Wonderbook seems more a storybook-for-kids kind of technology.  Clearly Sony hopes to develop it into more than this, but the profit pressure on the company may have led them to focus more on releasing something for immediate sale.

The thing about augmented reality in these forms is that it involves orchestration and convergence of multiple media types.  I think that what’s really being built here is a cloud application where the game console is the user’s agent and the media resource set is essentially cloud content.  That model suggests that eventually elements of alternate reality, including video feeds, might originate remotely.  You could create an image where you appeared beside the Queen reviewing the Jubilee flotilla on the Thames, for example.  All that could promote the cloud and also drive up network traffic.

The big question that these mechanisms for orchestrable experiences generate is whether operators could play a role.  At one level that’s easy to answer; you can play any role you can compete for in a competitive market.  At another level the question is whether the guys who are driving the change (Microsoft, Sony, Apple, Google) have such an advantage in being able to pre-position their technology assets that anyone else is playing second fiddle.  If that’s true, the operators would always end up shooting behind the duck.

Another interesting development in the consumer space is AT&T’s recognition that in a couple years we’d likely have data-only plans for smartphones.  The driver here is that operators really want two things; ARPU growth and cost management.  If users can be made to pay for data usage (which they can be) and if data overages run up their bill more than voice service elimination would cut it, it could be a major win for operators.  Operator-provided VoIP, particularly in P2P form, or even simply ceding voice to Google or Skype, would make more sense than capitalizing expensive voice infrastructure that inevitably would make the offerings uncompetitive in a VoIP-dominated future.

Finally we have social networking.  Anyone who’s been reading this blog knows that I’ve never been a fan of the Facebook IPO or business model.  While the after-offering loss Facebook has suffered and the collateral hit taken by other social network companies (like Groupon) seem to validate my view, it’s really still too early to know whether the space is just suffering from the general economic malaise or has a fundamental problem.  I think it’s the latter, of course, and the fact that Facebook is trying to figure out how to get kids onto the site illustrates that they believe there’s a problem too.

There is, and the problem is in no small part the fact that the ever-expanding drive for profits, when applied through the filter of social networking and media, is going to lead us to some very bad places or lead to inevitable under-performance of the companies.  How many times do we need to be reminded that advertising is a zero-sum game?  Yes, you can argue that avid Facebook users might look for product guidance on the service (I’ve offered some, though I’ve not asked for any).  However, what percentage of the social babble is linked to purchasing?  Precious little.  Search, in contrast, is pretty widely linked to purchase research and even execution.  Thus, search should outperform social, and we already have a mature search engine space.  Moral: VCs need to be thinking about some other hype wave to ride because the social wave may be over.

More Network Tea Leaves to Read

Guess what?  Optical companies are hot again.  Ciena reported better-than-expected earnings, Alcatel-Lucent says that people are pushing it for delivery of its new 400G stuff…it seems like the days of fiber transport slumps are gone.  They may be, and for two kind-of-orthogonal reasons.

If you think about the comments I got from EU network operators, related in yesterday’s blog, then it won’t surprise you to hear that Wall Street is getting more excited about vendors who offer fiber optic transport.  In a future where the value of the network is really a value of the cloud, the role of the network is to deliver stuff cheaply.  This favors a network where more is spent on creating capacity than on managing connectivity.  Thus, there’s a demand-side motive for valuing fiber capacity.

Related to this is that the Street isn’t valuing vendor defenses against things like OTN.  Recent research notes have suggested that products that tie optics to routing to compete with a pure optical core are likely to under-perform.  Some of their data comes from network operators, who say that they want a future IP network core that is just a big optical mesh.  It’s this desired melding of optical core and agile feature-aware edge that seems to be driving Ericsson’s vision of the router, which I’ve noted is a vision operators are liking.

The other driver for optics is the whole OpenFlow/SDN thing.  The problem with optics in a traditional OSI-modeled network is that the layers are supposed to be independent of each other, each offering fixed service to the layer above and consuming service from the layer below.  This puts optics in the position of PVC pipe.  The SDN model, which allows a higher control process to push forwarding rules down onto devices, could support a more integrated vision of optical and electrical layer cooperation.

Speaking of SDNs, Cisco made a presentation to the financial industry on its SDN strategy.  Cisco’s own SDN definition (according to the presentation) “complements” the standard SDN definition which Cisco says is primarily about decoupling the control and data planes of the network.  Actually, SDNs are about centralizing the “network intelligence and state”, as Cisco’s own citation of the standard SDN definition shows.  Cisco’s complementary definition is “a customizable framework to harness the entire value of the intelligent network offering openness, programmability and abstraction across multiple layers in an evolutionary manner. It offers a choice of protocols, industry standards, use-case based deployment models and integration experiences while laying the foundation for a dynamic feedback loop of user, session or application analytics through policy programming.”

Forgive me, Cisco, but I’m having a hard time pulling an SDN definition out of this, or understanding how this complements one.  But Cisco does have some valid points (if not cogent definitions) here if you dig a bit.  They talk about “evolution” of the SDN model, and that raises the question of how one might actually evolve given the large installed base of network devices.  We could displace gear if there were a significant benefit to be achieved to offset the cost, but just the availability of cheaper technology doesn’t justify displacing that already purchased.  Further, it’s not clear just how far SDN can go in networks.

My view is that the SDN model can be visualized as a pair of pyramids joined at the apex.  The top one is the collection of information resources on application connectivity needs, concentrating toward a point where this totality of knowledge can be centralized.  Absent a central collection of connectivity policies you can’t have centralized “network intelligence and state” as the definition demands.  From that point, you move to the lower pyramid, which represents the mechanisms for issuing forwarding instructions based on these central policies.  OpenFlow is one protocol for doing that, but there are others that could be used, ranging from “provisioning” paths with something like MPLS (or one of its derivatives) via the path computation element, to simply policy-managing the session and admission-control aspects of something like IMS.
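Here’s a rough sketch of that two-pyramid shape, with invented class names: a single central store of connectivity policies at the apex, and interchangeable southbound mechanisms hanging below it, one OpenFlow-style, one PCE/MPLS-style.  Only the shape is the point; none of these interfaces exist as written.

```python
# Sketch, under stated assumptions, of centralized connectivity policies
# (the top pyramid) feeding more than one mechanism for issuing
# forwarding instructions (the bottom pyramid).

from abc import ABC, abstractmethod

central_policies = [
    {"app": "crm",    "endpoints": ("siteA", "dc1"), "latency_ms": 50},
    {"app": "backup", "endpoints": ("dc1", "dc2"),   "latency_ms": 500},
]

class SouthboundDriver(ABC):
    @abstractmethod
    def realize(self, policy: dict) -> None: ...

class OpenFlowDriver(SouthboundDriver):
    def realize(self, policy: dict) -> None:
        print(f"OpenFlow: install per-switch flows for {policy['app']}")

class PcePathDriver(SouthboundDriver):
    def realize(self, policy: dict) -> None:
        print(f"PCE/MPLS: provision a path for {policy['app']}")

# The same centralized "intelligence and state" can drive either lower
# pyramid; OpenFlow is one option, not the definition of SDN.
driver: SouthboundDriver = PcePathDriver()
for p in central_policies:
    driver.realize(p)
```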

Cisco may have that kind of flexibility in mind.  The key slide in the presentation shows three circles, labeled (from top left, clockwise) “Policy”, “Analytics”, and “Network” (bottom).  A “Programmability” arrow goes from the network to “Policy”, an “Intelligence” arrow from the network to “Analytics”, and an “Orchestration” arrow from “Analytics” to “Policy”.  The insight here is that SDNs do need to know the state of the network to effectively control forwarding, and operators do want to mine value from their network investment.  The problem is that the slide doesn’t reflect any input from the application.  You need to know what application connection needs are to create a forwarding map; the only other option is to learn that dynamically, which isn’t very different from routing.  But if “Programmability” means imposing forwarding rules and if we had “Policy” input from the applications in some way, the picture would be pretty nice.

One thing seems very clear, not only from Cisco’s presentation but from Google’s SDN example and from other work I’ve heard about (largely research); the real trick of SDN is getting a central source of intelligence and state.  Controlling hardware from that point is not rocket science.  And yet we don’t hear much about this in the OpenFlow or SDN dialog so far.  We have ways to accommodate gathering the data (Big Switch offers vertical APIs, for example) but we’re still grappling with where the data comes from in the first place.  Google uses processed forwarding information captured at the IP edge to SDN-manage routes.  I’ve proposed that DevOps templates/containers could provide the information.  Both these are needed, and more, and that’s what I’d love to see Cisco address—or somebody else address.  Preferably both.