Finding the Golden Link

This week’s activity seems to me to be pointing to the future course of network equipment.  On the one hand, we have Huawei reporting $32 billion in sales, good growth year over year, and demonstrating that networking from their perspective is a value market.  On the other hand, we have Cisco making moves (including its acquisition of ClearAccess, a home-network-gateway-management company) that suggest that they believe there’s still a place for features.  How that particular dynamic plays out will decide whether the US has any future as the kingpin in network equipment.

I think Cisco’s been making smart moves for the last year.  The first thing they did when their credibility numbers started to slip in our survey was to beef up their strategic sales engagement, and it’s been working exceptionally well.  They then started in on programs to add value to the higher layers of their offering—the NDS buy and the recent ClearAccess deal.  This would boost their ability to engage network operators to help them with profit growth, which is what every equipment seller needs to be doing.  But does Cisco have this figured out?  Not yet.

The networking budgets of the future are going to consist of layers of inexpensive stuff that pushes bits around, topped by a veneer of service intelligence that creates something profitable out of the bits.  Cisco knows, more than any player in the market except rival Huawei, that it is not possible to be even a decent-margin competitor in those low layers over time.  Thus, if you want to sell the bit-pushing stuff that creates the revenue but not the profit, you need to think VERTICAL INTEGRATION of your service veneer and your bit-pushing.  And that, friends, is what’s been hard—for Cisco and for everyone else.

Talk to a network guy about service-layer integration and it’s like pulling the string of a ventriloquist’s dummy—the words “QoS” come out automatically.  The best thing that could happen to network vendors today would be for the Good Market Fairy to wave her wand and make their tongues cleave to their palates when they tried to utter the words.  Vertical integration between network and service layers has to be via something that’s directly merchandisable, or it’s not going to do any vendor any good—because it won’t help the buyer make money.

Curiously, none of the vendors seem to get this.  Everyone in the marketplace sees this (except the vendors) and so everyone is waiting for some big stumble and fall, looking for the Lilliputians who will fell Gulliver.  That’s what’s behind the current notion that OpenFlow switches are going to rise and smite the mighty.  Wrong.  OpenFlow is just a concession to commoditization, and there’s no smiting in commoditizing markets, it’s more like death from erosion.  Where the real risk of a sudden loss exists is in that critical binding of network to service.  A success there, by ANYBODY, could create a real market wave, because we’re entering a future that redefines the relationship between consumer and network service, and the magnitude of that relationship’s impact is simply incalculable.  There are trillions of dollars in service revenues at stake, and the total infrastructure impact could run into hundreds of billions of dollars, enough to make anyone a real contender.

Everything we talk about in networking, whether it’s online advertising or streaming video or even QoS, is aimed implicitly at a cost-driven evolution.  That’s not going to build the network of the future; it’s going to undercut the network of the present.  The solution to industry stagnation and infrastructure commoditization is the same no matter what the industry is: RAISE THE TOP LINE.  That’s what somebody is going to do; it’s just a question of “who?” and “when?”

Two Market Lessons from Comcast

Comcast is joining with the telcos in offering a hosted PBX and UC product, and the move is significant both for competition in the UC space and for the evolution of voice and UC overall.  Cisco is expanding its Jabber UC, and also linking it better with telepresence.  Sprint is partnering with Cisco in cloud UC too.  But the big guns have yet to make their move in this space, and when and if they do it could be a major market-changer.

Cisco’s Jabber update and its integration with telepresence is no slouch when it comes to changing the dynamic, either.  It’s a strategy for the cloud, for enterprises and service providers alike (as Sprint is showing).  It may also be a goad for Google, because Sprint is the only operator to have actually integrated its service (wireless, of course) with Google Voice, a product with great potential that has been stalled in its current shape for a long time.  The telepresence integration is important to Cisco, too, because of the way collaboration works in the real world.

Our research on this particular topic is extensive, and what I’ve seen is that most intense collaboration (the kind you use telepresence for) evolves out of simpler attempts to get something done.  You try the phone, you move to sharing a document view, and then you move to video.  This pattern means it’s really hard to get to the finish line if you keep trying to bypass the starting gate.  Chambers, who doesn’t understand why people just don’t buy routers till they bleed from the ears, kind of skipped that part of the collaborative lesson.

They may be enough of a threat to move the other guys, of course.  I already noted that the Sprint deal might give Google some pause, and some of the features of the Jabber UC platform seem aimed at where Microsoft is likely to take Skype.  Here the problem for Cisco is the free platform side of Skype.  Cisco doesn’t want to become a provider of free communications itself, nor do Cisco’s carrier customers.  The latter are not averse to having a low-cost UC platform, but as the AT&T, Verizon, and now Comcast moves show, they’re far more comfortable with a digital-voice PBX hosting mission than with a full-blown UC mission.

Which gets us to Skype and Microsoft.  Everyone expects that Microsoft will create a kind of super-Skype that will build on the basic UC capabilities of the free platform and avoid driving away the current massive user base.  At the same time, Microsoft has a potential conflict with its own customers on tap because Skype is a revenue-drainer for the telcos and Microsoft still wants telco partnerships for cloud services.  They need free Skype, they need Skype integrated with SharePoint, and they need super-Skype as a carrier platform.  That’s a lot to get done.  Google could move faster; Cisco could have.  Will it take Microsoft, hardly the market’s leading innovator these days, to follow along?  If so, there may be a lag here because I think Microsoft wants to have super-Skype linked to Windows 8 and its phone and tablet strategies.  There will be a tendency to keep adding bows and buttons and laces and pretty soon you don’t have a dress, you have a blob.

Comcast is in the news for a reason beyond UC, and it’s neutrality (again).  Comcast’s traffic engineering practices and its appeal of an FCC ruling were what got us neutrality regulations in the first place.  Now, even as those regulations are being appealed, Comcast is pushing the envelope again.  Its Xbox streaming service uses the carrier’s “off-net” IP bandwidth, the same thing that Comcast uses for voice and that AT&T uses for video.  The purpose for now is to keep the traffic from counting against the bandwidth caps, but it could also provide enhanced QoS.  This isn’t specifically prohibited by the neutrality rules, but it’s something the FCC said it would watch because it could become a threat.  Some are now saying that time of threat has arrived.  AT&T has also indicated that app developers would be able to pay for the traffic their apps use, so that it isn’t charged against the customer’s usage.

Operators are pretty confident that the FCC can’t stop “bypass-Net” here for premium services, which means that things like cloud services are also likely to jump off the Internet bus even at the access level.  Given that they already do that inside the cloud (see my prior blogs) and within CDNs, we’re seeing a pretty quick trend toward a different model of services, one that won’t be based on an open and incrementally free connectivity fabric.  That threatens the Internet model, but the model is already being threatened by the fact that profit on incremental capacity is impossible to earn in the current service model.  We may be facing a choice between two poles of solution, neither of which people are going to like.

 

OpenFlow: To the Cloud?

If demand-side issues are driving changes in the industry, then it’s fair to ask where the industry is going.  We talked yesterday about the major drivers, and today I want to talk about the major technology shifts a bit more.  In particular, I want to make the connection between the changes in opportunity and the changes in technology.

The Internet is a cheap bandwidth fabric, and as such it’s not the sort of thing to attract a lot of interest from those with ROI targets.  However, the Internet is here to stay in no small part because there’d be an international revolution if you tried to do away with it.  Thus, we’re seeing the changes in demand that we’ve talked about create changes on top of the Internet.

We can visualize all of the demand-side forces acting to create a broader notion of the Internet as a cloud—broader because the new notion involves the information and processing resources needed to support decisions.  In a way, we’re drawing the hosts into the network: apps and Siri-like personal agents act as intermediaries, hiding the sources and resources because those agents do all the interacting with them.  Google does a lot of this today, and so do CDNs.  So as we cache and disguise things, what happens?

With relatively few points of cloud hosting to worry about, most cloud providers would likely create fat optical pipes between data centers and thus create what would be effectively a flat data center MAN/WAN.  The technology a lot of people favor for this is OpenFlow, and I think that it’s logical to assume that OpenFlow could play a major role.  In OpenFlow, the forwarding rules are created by a central software controller, hence the term “Software-Defined Network” or SDN.  The strength and weakness of OpenFlow is this explicit path control.  It doesn’t scale to Internet size, but in the cloud model it doesn’t need to, because the internal connections aren’t between users, they’re between agents and resources, and among the resources themselves.  The structure isn’t “open” like the Internet any more than a CDN is in its internal paths, so it doesn’t need Internet flexibility and scale.  The Internet model is a liability inside the cloud, in fact.
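
Just to make that concrete, here’s a minimal sketch of what “forwarding rules created by a central software controller” amounts to in practice.  It’s plain Python with made-up class names, not a real OpenFlow controller or the OpenFlow wire protocol, but it shows the division of labor: switches only apply the rules they’re given, and the controller is the one place where paths get decided.

```python
# Illustrative sketch only: a toy "SDN" where the controller owns all paths.
# Class and field names are hypothetical; a real deployment would use an
# actual OpenFlow controller and protocol, not this abstraction.

from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    in_port: int
    dst_ip: str          # e.g. a data-center or cache prefix

@dataclass(frozen=True)
class Action:
    out_port: int        # where matching packets should be forwarded

class Switch:
    """A 'dumb' forwarding element: it only applies rules handed to it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # Match -> Action

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, in_port, dst_ip):
        action = self.flow_table.get(Match(in_port, dst_ip))
        return action.out_port if action else None   # None = no authorized path

class Controller:
    """The central software controller: the only place paths are decided."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def create_path(self, hops, dst_ip):
        # hops: (switch_name, in_port, out_port) along an explicit path,
        # e.g. a fat optical pipe between two data centers.
        for name, in_port, out_port in hops:
            self.switches[name].install_rule(Match(in_port, dst_ip),
                                             Action(out_port))

if __name__ == "__main__":
    s1, s2 = Switch("dc-east"), Switch("dc-west")
    ctl = Controller([s1, s2])
    ctl.create_path([("dc-east", 1, 7), ("dc-west", 7, 2)], "10.0.0.0/8")
    print(s1.forward(1, "10.0.0.0/8"))   # 7 -> follows the controller's path
    print(s1.forward(3, "10.0.0.0/8"))   # None -> no rule, no forwarding
```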

The key point here is that we should not look for something like OpenFlow to become the architecture of the Internet.  It’s the architecture of the cloud, and most specifically of the two-layer inside-outside model of the cloud.  I think that model will prevail, and so I think OpenFlow will prevail—eventually.

Remember my rule, though.  Only new revenue can drive revolution in infrastructure.  In order for the OpenFlow revolution to happen the cloud revolution has to happen, and today’s cloud computing isn’t focused on making it happen at all, it’s focused on the cost-based optimization of tech resource consumption by enterprise IT.  I’m covering this whole issue in greater depth in the next (April) Netwatcher, but for now let me say that the total revenue available to the cloud model I’ve been discussing is nearly an order of magnitude more than the revenue available from displacement of enterprise IT from data center to cloud.  That’s enough revenue to pay for a tech revolution, but we won’t get it all at once.

A year or two from now you’ll probably be consuming some OpenFlow-based services, but you won’t know it.  In fact, if my model is correct there will never be a time when the nature of “the Internet” appears technically different at the client level.  The IPv4 to IPv6 address transition would create far more visible change (and some will make their cloud transformation under the covers).  We’ll need the big Internet address space, the open model, for access to services and for a declining population of legacy websites and services.  We’ll have it, forever, even when a transition to the cloud model of the Internet is complete.  Which won’t be for five to ten years.

I’ve been getting some very optimistic predictions about how OpenFlow is going to push everyone out of the market; how it will kill major vendors.  Not likely.  Not only is the cloud-transition process going to take a long time, it’s going to be evolutionary and thus will tend to favor trusted incumbents.  That’s particularly true given that recent studies have suggested that open-source software libraries contain some serious security vulnerabilities.  I think it’s likely that more and more open-source success will come from having commercial providers take responsibility for sanitizing the code and integrating the elements.

 

What’s Driving the Market Bus?

We’re hearing a lot about the changes in networking, in IT, in pretty much everything technological.  Likely there ARE going to be tumultuous changes, but if we want to understand them, to get the planning-for-the-future process right, then we need to understand what’s driving them.  We tend, in the IT and networking world, to focus on tactical issues rather than the big picture, so let’s broaden out a bit and see what’s creating our future for us.

Broadband mobility is our primary driver right now.  The fact that we can link people to high-powered information and processing resources via a simple portable device is a profound change in the notion of “empowerment”.  What broadband mobility has done is to shift our focus from the instruments that satisfy our needs to the instruments through which we receive our answers.  The limitations of the appliances, whether created by technology or simply by form factor, are shaping our future.

Smartphones are the next in our list of drivers.  A smartphone is the ultimate information appliance because it’s handy and easy to manipulate, but it’s also the most limiting.  We can’t do a lot of browsing and gathering on a smartphone; the form factor and our desire to use the device while at least semi-engaged in other activities both work against it.  Smartphones gave us apps because we needed a way to conveniently get not RESOURCES but ANSWERS.  And because more and more of our regular interactions are via smartphones, more of our expectations are set to “answer-me” mode.  Smartphones, combined with ubiquitous mobile broadband, create the “information-coupled society”.

Tablets are revolutionary in no small part because they’re a slight shift in the trade-off point that smartphones created.  They’re big enough for an immersive experience but portable enough to be considered a personal accessory.  But the most significant thing about tablets is that they’re arguably the first legitimate child of the web.  They’re a browser in a box, impossible before the Internet age.  They do most of what the average person needs an information appliance for (as opposed to a superphone, which does “calling-plus”) and so they relegate personal computers to being productivity tools.  They’ll never replace the PC but they will clearly cut off the “Internet model” driver from PC buying, which means that they’ll likely displace nearly half the total PC demand over time.  Ultrabooks?  Ultra-silly.  If you want a PC, buy a cheaper laptop.  If you want portability, buy a cheaper tablet.

The smartphones and tablets are both “related drivers” to mobile broadband.  Our next driver is a bit more orthogonal; it’s the ebb and flow of broadband pricing policy.  Broadband capacity per unit cost, in technology-market terms, is like lemmings.  When there’s a lemming boom, predators explode because the ecosystem is stacked with prey.  Now I’ve been privileged to see more than my share of lemmings, and they don’t march to the sea as claimed, but prey booms followed by predator booms inevitably lead to prey busts.  And you know where that leaves predators.  Our challenge in networking is to ensure that we have a kind of soft landing here, which is challenging because nobody really wants to admit to the problem of low return on capacity—except the operators.

We’re heading into a transformative period, and there’s little question that the explosion of innovation we’ve seen has been the result of the rich growth medium of the zero-marginal-cost-per-bit Internet.  All of our other drivers depend on not having a collapse of capacity or a radical increase in capacity cost, which is why I think it’s so important to consider what could be done about the problem.  The answer is either to make capacity a lot cheaper to produce or to subsidize its production with other revenues.

 

Cut Cords, Cost-Based Clouds? Not.

Friday’s usually a bit of a slow news day, and so it’s often a good day to recap some things that were pushed to the rear of the interest queue by other events.  There are a couple that fit this category that I’d like to explore a bit today, and the first two come from the Internet.

I’ve never been a supporter of the popular notion of cord-cutting, meaning abandoning cable/satellite or telco TV in favor of a pure streaming OTT TV model, and the latest data shows very clearly that cord-cutting is just not happening.  TV viewing is up, and subscribers for TV services rose even in 2011 as Netflix gained its greatest ground.  What I think is clear is that mobile/portable devices and streaming are combining to make it possible for some people to watch things at times they couldn’t or wouldn’t watch TV, or to jump off onto an alternate program when they didn’t like the consensus choice for the home TV.

What this says is that (again, despite the hype to the contrary) the TV Everywhere model is probably well-grounded on the viewer-behavior side.  If people are supplementing their channelized viewing, then the best way to support their needs is to offer them alternate delivery of what’s essentially the same material.  Rights to network programming, then, are conveyed with subscription to the broadcast of that material.  Once you’ve got the rights, you can then stream both roughly current and past episodes of your programs.  This is the force that I think is transforming the video space, and also the equipment space.  In the latter, the change is coming about because TV Everywhere viewing is a proactive monetization strategy, and it shifts the focus of some early CDN interest away from pure traffic management.  I think that players like Akamai, who have been public CDN providers and are now dabbling in licensing their stuff to the carriers, are doing so because TV Everywhere monetization could make carrier CDNs a reality on a larger scale (but a longer timeline, says my data; the projects take longer to get approved because they have more integration requirements than simple CDN projects).

A related point to this is advertising of course.  According to the analysis of 2011 numbers, online ad growth slowed and in some areas reversed.  Paid search, for example, declined not only in Q4 but for the year overall.  Network TV ad spending rose by almost 8% in Q4, but it was also off a bit for the year.  This suggests that there isn’t a persistent flight from TV commercials to online advertising (and so it may explain why Google is honing its privacy policy to target better).  Interesting how objective data often flies in the face of “classical wisdom”.

Classical wisdom is a problem in the cloud too, if you look objectively at the numbers.  I just saw a survey that said that the main reason why people want to move to the cloud is not flexibility (something the cloud actually delivers) but savings, which the cloud cannot deliver for the average IT app.  So what this says is that the expectations of businesses for the cloud migration are not realistic, and that a cost-driven cloud would be a tragic failure.  Here’s the thing.  You cannot, now or ever, fund a technology revolution with anything but NEW BENEFITS.  The cheapest way to do something is almost always how you’re doing it already.  You have sunk costs (unamortized capital cost), commitments to support, and most of all operating procedures and performance benchmarks that you’ve built a business around, and you’re then going to toss this for something hosted on the Internet?  Believe in cost-based cloud adoption and you’ll see your shadow next time you go out; there will be no clouds to block the sun.

I don’t believe in that model, of course, nor should you.  IBM had part of the truth in their recently published survey—they said that business process optimization would drive cloud adoption.  We analyzed this report in our March Netwatcher.  I believe in business-process-based cloud adoption.  IBM’s only problem was that they said that it was simple cost elasticity and scaling that would be the “revolution” and that’s nonsense again.  We are going to redevelop our model of IT, and our model of the Internet, based on the cloud and the changes will touch every aspect of every business and every consumer’s life.  But it’s not instant gratification.  This is a long process, which is why nobody bothers to hype it.  For those vendors who can stay the course, there’s an opportunity here the like of which we have never seen before.  There are hundreds of billions of annual dollars on the table, more new revenue than there has ever been, and new benefits fund revolutions, remember?

 

Seeking Direction: HP and MSOs Both Struggle

HP has been having its problems, obviously, and they’re interesting from a whole-industry perspective because HP is both a broad-based player and a potential major contender for leadership in the future network/IT fusion.  The problem right now is that they’re locked up in a restructuring mess and there’s a real risk they’re not going to emerge—ever—as a healthy player.

When HP said it would dump its PC business, everyone got all up in arms.  It’s a third of their revenue after all.  The problem is that PCs are at best a thin-margins-till-the-end-of-time game and at worst something everyone is going to have to cede to cheap offshore manufacturers in any case.  And that’s if you believe they will survive at all in a post-tablet world (which I do, by the way).  The trouble is that when Whitman reversed the decision, it left unaddressed the really big problem, which was that HP really didn’t have a firm strategy to lead them into the future.  Even Whitman couldn’t believe PCs were that strategy, so what was it, and where is it?

Like every vendor in the IT or networking space, HP has to either believe in a cloud vision or not.  If there’s not going to be much of a cloud revolution, then there will likely always be a fundamental division between information appliances that are designed for heads-down production and those that are more for looking and doing simple, low-input tasks.  PCs, especially laptops, are in the former group and tablets and smartphones in the latter.  That means the no-cloud future guarantees PC survival but makes it clear that the value proposition isn’t likely to expand.  How many producers are there out there compared with the number of dabblers?  HP doesn’t have much of a win here.  So how about if there IS a cloud vision?  That’s also a forked road.

If cloud success comes from displacing internal IT, then the cloud will commoditize and marginalize servers and PCs alike, because that’s what current IT is run on.  You can’t displace internal IT with hosted IT at higher costs, and you can’t assume that hosted IT will consume more servers than internal IT did if costs aren’t higher.  Thus, HP loses servers in a cloud future, UNLESS that cloud future somehow builds from more than just cost-based shifting of apps from data centers to hosting.  HP’s success would depend on a non-displacement success for the cloud.  OK, that seems simple to me, but not to HP, apparently, because they’re not doing anything that suggests they have a view of where the new, non-displacement, cloud apps are coming from.  I’d sure like to know what HP’s cloud vision is, because I think that their reorg focus so far has been mostly on keeping the Street in the dark about how badly components like their PC business are doing.

We’re possibly on the verge of another revolutionary change too, this time in the cable space.  Comcast and other cable operators are looking seriously at how to transition to something that’s more IP-oriented in the delivery of their video programming.  Here, as in the cloud, I think we’re seeing an example of someone testing the waters without knowing whether they want to swim or fish.  How good is Comcast’s convergence platform RFI going to be when it’s far from clear how much IP video can be made profitable—if any?

Linear RF is the best way to deliver broadcast, if broadcast is all you’re going to do.  The problem is that for most operators, you also have to contend with things like VoD and broadband Internet.  These compete for cable spectrum, and the more you have to provide to these non-broadcast apps the fewer channels you can fit into the remaining space.  At some point you have to ask yourself whether you’d be better off pushing everything out in IPTV form from a single pipe, particularly if you need to be delivering TV Everywhere content over IP to other devices anyway.

The problem here is that we have an RFI going out for a market that has no convincing business, technical, or regulatory foundation.  Add this to the fact that over 80% of all carrier RFIs never deliver any equipment and you have a picture of the cable industry in the throes of the same confusion that HP is in.  They’re just confused about something different.

 

Did Oracle’s Quarter Have Clouds or Need Them?

Oracle is one of the more interesting tech companies, if you’re looking for an indicator of where markets might be heading overall.  They have broad exposure across hardware and software and also a nice combination of “offensive” and “defensive” products, meaning those that do well when confidence is high and those that sell when buyers are conserving.  This quarter, though, it’s a little hard to read the tea leaves.

At the company level, the numbers were good news; Oracle beat Street estimates and its stock gained in after-hours trading.  It was a contrast to a disappointing report this time last year, but at the same time there were still some common elements with that sad period—hardware.  Despite good order growth for their hardware appliances (database and analytics) the company’s hardware sales were again below expectations.  At this point, I’d say it was clear that Oracle is not going to be able to sustain an independent server business; its hope lies only in the appliances.

To some of the media pundits, Oracle’s dependence on things like analytics seems to fly in the face of the “cloud revolution”, but that’s because these people don’t understand the difference between cloud commitment and cloud stories.  Truth be told, any analytics application that’s so data-intensive as to demand fast-memory processing of the results is clearly not going to run in the cloud, where users would have to contend with data latency or storage costs.  What is then responsible for the hardware dilemma?  It is the cloud in a way, but not the way people think.

What vertical bought more Sun servers than any other?  Hosting and carrier companies.  In the hosting space, the x86 platform has been stronger for cost reasons.  The cloud drive, particularly IaaS, makes it impossible for a company to rely on a non-x86 server because the machine images produced for non-x86 machines are different.  The telcos, who have traditionally used the Sun platform for all manner of good stuff, have been under-investing in OSS/BSS because it’s not a profit center and in service-layer features because they lack an architecture.  Oracle needed to push a vision of the service layer if they wanted to rely on their traditional carrier vertical, and they still have not done that.  Oracle needed to push a vision of PaaS if they wanted a cloud model that didn’t put SPARC platforms at an inherent market disadvantage, and they didn’t do that either.

A few people have said that the problem is that Oracle isn’t in networking and rivals like Cisco and HP are.  I don’t buy that either.  Networking isn’t creating too many heroes on Wall Street right now.  However, it is true that with a combination of servers, middleware, and network hardware you could present a pretty darn compelling data center and cloud story.  The question is whether Oracle’s lack of networking is worse than, say, Cisco’s lack of software.  I don’t think so.  The cloud ultimately is a platform, and it’s actually pretty easy to build a good cloud-service story with middleware.  Easier than building it on networking (as Juniper is proving).

So it’s not the success of the cloud that puts Oracle’s vision at risk, it’s the failure of Oracle to control a darn good cloud opportunity.  There will be a transformation of IT to a cloud/services model.  It will look something like PaaS today, something like SOA (today and for the last five years), and a LOT like the stuff Oracle provides.  Why then do we hear so little insight from Oracle?  Could it be that they understand sales too well and strategy too little?  That sounds like the “Tech Company Disease”; it’s that prevalent in the space today.  Well, Oracle needs a cure or the current quarter could be an aberration.  You can avoid waking a sleeping giant, but a whole population of them puts the odds against a pure defensive strategy.  There are too many players who have the desire and the capability to answer the cloud-service challenge.

 

A Zero for NetZero, a Hope for Microsoft

The media is having a bit of a problem coming to terms with the NetZero proposal for “free” wireless broadband, and what’s sad is that their writers’ block is for the wrong reason.  They want to say “everything is free” because that will get a lot of reader attention, but they’re skeptical that NetZero is really free at all.  One year, 200 megs per month max, and then you’re shut off.  Some point out that this is based on Clearwire WiMAX, so it’s not going to work with most devices (you’ll need to buy a gadget/dongle).  All fluff.

First, this is clearly a marketing ploy, and it’s one based on the fact that WiMAX isn’t exactly setting the world on fire.  WiMAX, the “other 4G” as it could fairly be called, was a decent strategy for the five-year-ago period when it was being conceptualized.  Then, having a hundred meg or so to divide up in a large mobile cell seemed reasonable; the average wireline broadband user had a couple of megabits of bandwidth to play with.  You could plot large cells with more people, and you’d have a less expensive service that would be a good alternative to something like a WiFi hotspot.  But WiFi got faster and faster, people wanted to watch HD movies, and the rest is history.

So, of course, is NetZero with this plan.  The insurance industry has something they call “adverse selection”.  The notion of insurance is one of pooling risk; a bunch of people buy a policy to insure against something that’s not very likely to happen within the group, but is catastrophic for those to whom it does happen.  The collective premiums fund the losses of the unlucky few and a profit for the insurance company.  But suppose everyone could figure out whether they’d be one of the unlucky?  Only those people would buy the insurance, the premiums wouldn’t begin to pay the losses, and you’d be history as an insurance company.  That’s the issue here.  Free wireless broadband is really appealing to people with little money to spend on broadband.  So a bunch of people will jump on this, collectively consuming a lot of capacity, and few of them will ever convert to a paid plan.  If they were willing to pay, after all, they could get a competitive plan from “real” wireless providers already.

So is this a way to get somebody to buy a WiMAX dongle, hoping that it will then rope that person into WiMAX forever?  Good luck with that one, NetZero!

In the cloud world, it’s interesting to see that Microsoft is gradually exposing some sense in its marketing of Azure, even as from a pricing and positioning perspective it still seems locked in the notion that Azure is an EC2 competitor.  This is a classic example of letting yourself get positioned by counterpunching.  PaaS, which is what Azure is, is a far more logical cloud architecture for the mainstream of cloud demand.  Because more software is included in the cloud, the unit cost and the support burden for the user are lower.  Because the cloud platform can easily incorporate cloud-friendly development features, you can build cloud-specific apps more easily than you could with IaaS platforms that look like naked iron.  Where does this come across in Microsoft’s material?

Their latest is a rather nice note in MSDN Magazine on Node.js, a joint JavaScript effort with Joyent, a startup that recently got a nice round of financing.  Node.js is a server-side JavaScript runtime optimized for event-driven I/O handling, and it’s easily incorporated into modern web applications to extend a basic web server with back-end application power.  Yes, you can install Node.js on an IaaS machine, but if you can get it as part of a platform, why not cede the software support/maintenance to your PaaS provider?  The forum Microsoft picked here is the problem; yes, you need developers to develop applications, but you need senior management support to pay the developers while they’re doing it.  Microsoft needed to position Azure better for the decision-maker.  The good news for them is that it’s not too late, though the big pressure point for cloud change is not too far off.

What’s that pressure point?  A new EC2 or a better IaaS price, or what?  None of the above.  The big pressure for cloud evolution is tablets and smartphones.  Your app is an icon that’s a gateway to a cloud service.  Siri is a voice-based search engine that, instead of giving you a list of possible results, gives you a result that’s probably contextually correct.  As you move to mobile devices, you change the use of the web from research assistant to personal valet.  That’s a cloud mission, not a website mission.  Can you build “contextual clouds” using IaaS?  Sure, same as you could with bare metal.  It would be much easier to do that with a platform that had actual features to facilitate those kinds of apps, which is PaaS.  Azure might still be Microsoft’s secret weapon.

 

An Example of an App-to-Cloud-to-Flow Ecosystem

I mentioned in a blog last week that there was some important progress being made in the fusion of cloud development and deployment—what the industry calls “DevOps”.  There are also important developments in the area of cloud networking, another topic I’ve blogged about recently.  One indication of a unified approach to these critical problems was announced today by Big Switch at 11 AM EST, too late for my normal blog.  We’re going to talk about the Big Switch Open SDN announcement here, but first I need to summarize why I think it’s important.

The cloud has preoccupied nearly everyone, but not much attention has been focused on how the cloud changes the model of network services.  In the past, we obtained services by linking OVER the Internet to a URL that represented the capability or information we wanted.  On the surface, the cloud model doesn’t seem too different.  We have stuff hosted “in the cloud” but the stuff is still accessed via a URL.  Sure there are issues that are associated with the way that a dynamic resource is mapped to that URL, but hey it’s not rocket science.  Look deeper, and you see more difference, perhaps enough to create a revolution.

In a cloud future, users’ needs are more dynamic too.  Imagine a Siri-like process front-ending a dynamic resource pool and you get a glimpse of what’s coming.  The user makes a request of a friendly agent in the cloud and the agent marshals all sorts of processing power and information to fulfill it.  That information isn’t delivered directly to the user, but through the agent, and the information paths are internal to the cloud and not external to the user.  That’s cloud networking; the separation of cloud-flow from user-flow.  Content delivery has already taken to a similar model; a CDN is a set of caches (pushed increasingly forward toward the user) and an interior network that delivers data to those caches.  Users connect not with distant content hosts but to local cache points.  It’s a service-network and service-access dichotomy; like the cloud.  Inside the CDN are a limited number of (you got it!) flows.

And enter another flow, OpenFlow.  OpenFlow is an explicit-connection model of networking where flows are authorized, not automatic.  For the whole universe of the Internet it doesn’t scale, but for the flows inside a cloud it’s perfect.  Even VPNs likely fit well in the OpenFlow model, and data center networks darn sure do.  The cloud validates OpenFlow, provided that you can get an OpenFlow cloud model built in the real world.

Architecturally it’s not hard to see how to do that, and to create a utopian model of linking applications to explicit network flows.  A switch controller simply creates forwarding rules; that’s the OpenFlow model.  In practice, though, you obviously need to worry about things like how you manage persistent flows, how you create VPNs or VPLSs, how your applications actually drive policies—you get the picture.  The point is that there is a lot of stuff that has to be added to basic standards to create a flow-based future network, and the process has to start with a conceptualization of the problem at an ecosystemic level, from apps to flows.
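
To make that a little less abstract, here’s a hedged sketch of the kind of layer I mean.  Every name in it is hypothetical (this is not Big Switch’s Open SDN API or anything from OpenStack); the point is only that an application asks for connectivity, policy decides whether a flow exists at all, and only then does a controller install explicit forwarding rules.

```python
# Illustrative sketch only: "flows are authorized, not automatic", seen one
# layer up from the switches.  All names are hypothetical, not any vendor's API.

POLICIES = {
    # (tenant, service) -> allowed flow spec between named resource pools
    ("video-cdn", "cache-fill"): {"src": "origin-pool", "dst": "edge-caches",
                                  "max_gbps": 40},
    ("ent-42",    "vpn"):        {"src": "branch-gw",   "dst": "dc-vpn-gw",
                                  "max_gbps": 1},
}

class FlowService:
    """The front end applications talk to; it never exposes raw switch rules."""
    def __init__(self, controller):
        self.controller = controller      # e.g. the controller sketched earlier

    def request_flow(self, tenant, service):
        spec = POLICIES.get((tenant, service))
        if spec is None:
            return "denied: no policy for this tenant/service"
        # Translate the authorized, abstract flow into explicit paths; the
        # controller (not the application) picks the actual switches and ports.
        self.controller.provision(spec["src"], spec["dst"], spec["max_gbps"])
        return f"flow up: {spec['src']} -> {spec['dst']} ({spec['max_gbps']} Gbps)"

class DummyController:
    def provision(self, src, dst, gbps):
        print(f"installing forwarding rules for {src} -> {dst} at {gbps} Gbps")

if __name__ == "__main__":
    svc = FlowService(DummyController())
    print(svc.request_flow("video-cdn", "cache-fill"))   # authorized, flow set up
    print(svc.request_flow("guest", "bulk-transfer"))    # no policy, no connectivity
```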

Some of that could in theory be provided by a model of cloud networking, because the cloud problem is that resources have addresses but applications don’t, at least not until they’re assigned to resources.  There’s a virtualization layer needed here that players in the OpenStack area, for example, have recognized and are attempting to address through work like Melange and Donabe.  Here we have policies linked to applications and provisioning, but we need to link that to network flows.

Sound like two faces of a common problem, separated by a logical inch or so?  Sound like something we need to get solved?  If you’ve followed my blogs, you know that’s what I think, which is why today’s announcement is important.

Big Switch is a startup player in OpenFlow, one of the early ones in fact.  We wrote about them in our Netwatcher OpenFlow piece in October 2011, when they were a controller play.  What they’re now doing is defining a broader ecosystem, an open structure (called, not surprisingly, “Open SDN”) that is based on open standards, open APIs, and open source.  Their own business model, like that of other open-source players, is to provide professional services and a hardened version of some software for commercial application.

The Open SDN model is a flow from application to switch, focusing on how you build a practical flow network and sustain its operation.  It handles things like multi-tenancy, essential for the cloud, on-demand or policy-based flows, and best of all it handles integration with things like OpenStack.  While Big Switch isn’t asserting direct compatibility with all of the various OpenStack network-related projects, it does have Quantum project involvement and a submission there.  Quantum is an open network-service offshoot of OpenStack’s inherent vision that the network is also a resource in the cloud, and it could be linked to Melange and Donabe for a more cohesive DevOps strategy.  The point is that this makes Big Switch arguably the first player to link all the way from the cloud-resource vision down to an OpenFlow switch.

My view is that all of this is really just the tip of a cloud-network-and-NaaS iceberg.  If you can do cloud networking, then you can do everything that’s part of the cloud, and since the cloud is the abstraction of computing and network-delivered services for the future, you can do what the future needs.  It would be easy to get all excited over the pieces of the network of the future, but we can’t build it by thinking at the pieces level, which is why there’s a real need for a top-down model that links apps to clouds to flows.  At least we now have one such model.

One model doesn’t make a market, usually.  We’re likely to have a lot more action in this space.  As I said in an earlier blog today, Cisco is now rumored to have an OpenFlow spin-in on the drawing board, but I think that’s likely to be a hardware play.  The stuff that acts as the bridge between the application, the cloud, the resource control, addressing, and information flows could be really critical as a competitive point for vendors in the networking space, and even for OSS/BSS developments.  Thus, Big Switch may be like a tiny magician who has pulled a 900-pound gorilla out of a hat instead of a fuzzy bunny.  Can they control their own fate here?  We’ll see.

Cisco Spin-In and Ciena Praise Equals Cloud Network

There are more indications this week of a sea change in networking that goes beyond the simple question of whether you switch or route or whose boxes you use.  One data point comes from a Credit Suisse story and the other from a Cisco rumor.

Credit Suisse is saying good things about Ciena, a company that’s been stuck in low gear for as long as I can remember.  They’re “bucking the trend”, says the analyst, but that’s probably the one statement that’s out of whack in the whole report.  It’s not that Ciena is bucking the trend, which in this case is the trend toward lower capex; it’s that Ciena is on the cusp of the new trend.  Hold that thought till I get through the next data point, though.

Point number two is that Cisco is rumored to be launching another incubated startup, this one in the SDN space, and, according to some insiders, with some of the same players that were involved in one of the prior ones.  In this context, SDN means OpenFlow.  Cisco has been fairly articulate in support of the new explicit-switching notion, going further than the reluctant-acquiescence response of many other switch/router players.  SDN is a new notion of data movement by permission, a contrast to the open Internet connection model.

What’s common here?  It’s not technology as much as drivers for the changes.  What we’re seeing is a remaking of the Internet model, created by the sum of the modern forces of iPhones and iPads and Facebook and Twitter and Google and Netflix and LTE and more.  The old Internet relied on creating a community that was reachable only with universal connectivity.  The new Internet recognizes that however much freedom you offer in connective choice, the user is going to spend most of their time on a few sites doing a few things, and that the thing that will have the most traffic impact is content.

The shift to CDN and cloud is inevitable given the direction of services online, and CDNs and clouds mean a service network with a sharp boundary, an “interior” where you have a few high-powered valuable connections, and an “exterior” where you have conventional Internet addressing.  Like it or not, vendors, this is how the network of the future will be built.  The architecture means you don’t need big routers, just big on-ramps (Ericsson’s SSR comes to mind) and fat pipes inside to coordinate service traffic among a small number of supersites.  It’s a perfect model for an optical network and SDN.  You can see why Ciena (optical pipe player) and a Cisco SDN incubator (holds a place for Cisco without making it look like routing is throwing in the towel) are important.

Ciena has a potential advantage here.  Raw optics isn’t the solution either, but Ciena would be moving from the space with the lowest margins into a higher-margin space, no matter where it moves.  It’s Ciena who should be doing OpenFlow, and not just in partnership with universities and science projects!  Listen up there, guys.  What’s a low-margin space for the router vendors is the high ground for you, so don’t give it up.