Social, Content, and Ad: Threat or Opportunity?

Facebook, whose revenues are said to be ramping up sharply (toward, no doubt, what its backers hope will be a truly cosmic IPO), is apparently planning to confront the problem of limited ad opportunity I talked about yesterday with a simple solution: eat all of it.  The company is rumored to be planning to make itself into a complete media portal, serving all kinds of content in a truly social context and even providing for social communications.  In fact, I’ve heard that they’re quietly planning to scrap their partnership with Skype (announced when Google+ burst on the scene) and develop their own framework for social and collaborative communication.

The problem with this model is that it presumes that social behavior underpins all commercial activity.  Remember, it’s commercial activity that ads target; advertisers don’t give a darn about your social life except insofar as it is exploitable to manipulate your buying.  Everyone’s research seems to agree that socially-driven buying is concentrated in the youth segment.

The point here is that Facebook is heading after a market space that is even more restricted than the overall ad-sponsorship market.  Google has it right in that search ads are still the most likely things to impact retail activity.  I just bought something online, and it was a purchase that online search impacted because it changed not only my preferred vendor but switched me to online fulfillment over retail instant gratification.  Did I ask my Facebook (or Google+) friends about it?  Didn’t even occur to me.  And I probably spend more than your average 25-year-old.

For video, we’re seeing a bit more focus on the kind of features more likely to be associated with monetized video than with passive streaming.  Vendors are jumping onto this, not with what we’d like to see (a true, architected service layer) but at least with siloed video offerings.  I noted that Alcatel-Lucent had announced a multi-screen application, and NSN did the same this week.  The NSN offering is surprisingly glitzy for a company that’s not exactly a household word in marketing sensationalism; it demonstrates screen-switching and social video, for example.  And both NSN and Alcatel-Lucent may in fact be moving toward a unified service-layer approach.  Whether for sales-focus reasons, because they’re hoping to sell professional services, or just because they’re not there yet, the current material doesn’t talk about service-layer integration and orchestration.

It’s fair to ask what this is all going to mean, and I think one thing that’s certain is that the network of the future is going to revolve around the CDN.  Content is the majority of traffic growth.  Content is the majority of monetization opportunity.  Content that has any monetization is served by a CDN to manage QoE.  Content that has none is served by a CDN to control bandwidth utilization.  Getting the picture here?  But the CDN of the future isn’t the old Akamai-model peering-point connection.  It’s deeply distributed, it’s highly policy-managed with respect to where caches go and what goes into them, and it’s highly componentized so that it can be composed into flexible media offerings specific to the operators’ local needs and rules.  You can see this model emerging from both Alcatel-Lucent and NSN, and it’s also being expressed at least by Cisco and Juniper.  You can even see it from CDN startups like Verivue.  The fact is that every operator is going to need both bandwidth optimization and content monetization.  That these missions are very different means that CDNs and the logic built around them have to be very flexible.  We’re working to find out just how flexible all these options really are, and we hope to cover some of that this month in Netwatcher, and more in future issues.
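To make the two CDN missions concrete, here’s a toy sketch of how a policy-managed cache point might apply different admission rules to monetized versus unmonetized content.  The policy names and the popularity threshold are my own invention for illustration, not any vendor’s design.

```python
# Toy illustration of the two CDN missions: monetized content is cached
# to protect QoE, unmonetized content only when caching it pays back in
# saved upstream bandwidth. Names and thresholds are invented.

def should_cache(content: dict, mission: str) -> bool:
    if mission == "monetization":
        # Monetized titles are cached aggressively to manage QoE.
        return content["monetized"]
    if mission == "bandwidth":
        # Unmonetized traffic earns a cache slot only if it's popular
        # enough to reduce transit utilization meaningfully.
        return content["requests_per_hour"] >= 100
    raise ValueError(f"unknown mission: {mission}")

movie = {"monetized": True, "requests_per_hour": 40}
viral_clip = {"monetized": False, "requests_per_hour": 5000}

print(should_cache(movie, "monetization"))    # True
print(should_cache(viral_clip, "bandwidth"))  # True
print(should_cache(movie, "bandwidth"))       # False
```

The point of the sketch is that the same cache infrastructure serves both missions; only the policy differs, which is why composability matters so much.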

The whole video thing is going to be impacted by the fall season of technology product launches, among which are supposed to be Apple’s new iPhone and Amazon’s tablet.  We are seeing iPhones and Android smartphones gaining traction even as gaming platforms, and obviously we’re seeing tablets increasingly as personal media portals.  There will be people who will use smartphones more (youth), people who use tablets more (everyone else), and all of these people will be both stressing networks and generating opportunity.  I don’t think that the OTT video market will really threaten channelized TV because I doubt that the delivery of that much material can be made affordable to the consumer and still return anything reasonable on investment for the operator.  I do think that the revenue kicker it can add to commercials embedded in standard content could be very significant, and so I think you’re going to see more from vendors to address streaming and monetization of video.

Bye Bye Bartz

Well, Carol Bartz is done at Yahoo, and that’s a truth that has mixed implications for the market.  Yes, it’s true that Yahoo has continued on its downward slide since Bartz took control.  But there’s a bigger question here, which is whether there is/was anything that could be done to stop that.  That question has implications for the broader web marketplace.

Yahoo was once a darling of the web (but then so was AOL).  It was one of the literati of the Silicon Valley culture, a firm that prided itself on being one of the new model of American business.  Like most such companies, it sprang out of venture funding and grew in a glamour period when having an “I” anywhere in your mission made you a hit.  The company is widely seen as having lost to Google, which from the perspective of search was surely true, but Yahoo had an enormous number of loyal fans who made it one of (and often THE) top portals on the web.  What happened?

Part of the problem was the VC genesis.  Companies that start their lives as venture-funded startups are evolving in a fool’s paradise.  There’s no accountability in a classic business sense; the goal is to create buzz that will result in somebody buying you.  Worst-case, you puff yourself up somehow and do an IPO.  I’ve said a million times that the whole VC process is a glorified pyramid swindle, and I stand by that.  You don’t learn to be a real business by learning to be what the VC community calls a “burger”: born to be “flipped” or sold.

Another factor was the whole Silicon-Valley-culture thing.  There’s a general lack of understanding of business reality in the Valley.  Part of it is from the VC genesis of most companies, but another part is from a kind of cultural superiority.  We are the new age, you are the old.  Prepare to die off and have us take over!  Tell that to IBM, the most successfully venerable of all tech companies.

Starting about five years ago, I saw this mindset contaminate Yahoo’s appreciation of what might have been the greatest opportunity of all time.  In that period, the big telcos and even cable companies were getting very interested in things like advertising and OTT-like services.  They had no real way of getting into that space quickly, and they were eager to partner with somebody.  Google was quick to say “No!” to that; they were at the time embroiled in a Vint-Cerf-sponsored war with the telecom establishment.  Yahoo could have said “Yes” and become the poster child for synergy between the network operators and the OTTs.  Instead, they dismissed the notion with at least as much dripping disdain as Google did.  And with that they threw away the keys to the kingdom.

Yes, ads are important; for one thing, they fund content.  But they can’t fund everything.  We’re seeing players like Groupon, one of the latest wrinkles in what’s still essentially an ad space, first employ creative accounting to justify an IPO, then get into controversy over statements made during what was supposed to be a quiet period, and finally pull IPO plans, ostensibly because of the economic conditions.  Too many consumers at any level of the food chain tend to kill off the prey, and eventually each other.  Yahoo could have been the apex predator.  Now it’s probably prey too.


Portents of the Fall Planning Cycle

We’re heading into the fall now, and with the change in season will come a new period of technology-strategy planning for both enterprises and the service providers.  I’ve tracked the former group with a formal fall survey since 1982 and the latter since 1991, and the results of the surveys are always interesting.  This year instead of publishing a special report in November to cover the results, I’m integrating them into our December Annual Technology Forecast issue of our technology journal, Netwatcher.

For the enterprises, the challenge with project spending has been identifying projects that provided a net benefit.  Over the last ten years the focus of enterprise projects has shifted from providing some enhancement to the top line to one of defending the bottom line.  That means shifting from a productivity-driven thesis for projects to a cost-management thesis.  The problem is that cost management vanishes to a point; you can’t continually build IT spending on a static set of benefits and at the same time demand “improvements” in ROI unless you take spending levels toward zero.  There is still a credible “cost” on the table, associated with the management of application performance, but it’s not been addressed in an organized way by the vendor community in general and by networking vendors in particular.  Neither group has been able to come up with productivity-based benefits to drive spending UP, either.  This fall we may see whether that will change.

For network operators, the big problem is obvious: monetization.  Right now I’m seeing operators pretty pessimistic about wireline investment except in emerging economies.  The Internet is the only wireline driver for traffic growth, and it’s a driver whose growth is currently non-monetizable under neutrality rules and the unlimited-usage paradigm.  Operators have identified three priority areas for monetization (content, mobile/behavioral, and cloud) but only the last is getting much near-term capital support, because the first two rely almost totally on the emergence of a service-layer paradigm, an NGN Advanced Intelligent Network architecture.  That’s not been happening, at least not in an open sense, and so I’m seeing an accelerating shift of capex to mobile networking, where dollars buy cell sites and backhaul and switches and not routers.  That shift works against the network vendors overall, but in particular against those who don’t have much of an RF/cellular stance.  Which is why Cisco did their agreement with NEC, of course.

Operators haven’t abandoned monetization of content or mobile services; Telefonica just restructured to create a division that’s explicitly charged with that task, for example, and our survey in May showed that most operators had board-level projects underway to identify a variety of monetization goals.  The current problem is that about half these projects have near-term milestones the operators say they can’t meet for lack of conformant implementation tools.  I’m always amazed at these complaints because they show that even when vendors are confronted by buyers with well-articulated requirements they’re finding it impossible to simply address them.  Instead they want to talk about taking a “first step” that, absent any credible longer-term vision for the project, might be the only step they can support.  That’s what’s stalling progress.

One vendor who has recently taken some of this to heart is Alcatel-Lucent, who has quietly beefed up its Application Enablement story with some real insight into how services are created.  For example, they say that the service layer develops “Platform APIs” that can then be exposed to developers, and an example of such an API is multi-screen video!  I just finished an open-source app note on this same service opportunity, and I’d love to be able to compare it to the details of the Alcatel-Lucent approach, but so far the inner workings of these Platform APIs and the way they get developed (including by whom) isn’t on the website.  I’ve also noticed some recent positioning by NSN in this space, and even by Ericsson (who has been gaining some traction in the multi-screen video space over the summer).  An interesting contrast to Cisco, who despite having what might be the clearest technical picture of a service layer seems to be stalled by reorganizations in the exploitation of their assets.


Cable, Cellular, and the Tablet

One of the more interesting wrinkles in the ongoing tablet wars is a decision by more cable companies to back away from any commitments (on their own or as MVNOs) for wireless capabilities.  There was a time when everyone thought that the quad play was going to be a major requirement, so how did this happen?  Apple, in a word.

First of all, the iPhone created an appliance magnetism that broke many customers away from having cellular services from their home carriers.  It disproved the notion that you could create loyalty with non-functional bundles alone, and that in itself was a major factor in limiting interest in quad-play economics.

Second, it’s proved more complicated to create FUNCTIONAL bundles, active symbiosis between wireless and wireline, than was previously considered.  Yes, it’s possible to create apps to let you do something on or with your TV, but for the key youth market those tools are less interesting because they’re not home anyway.  And service-layer technology, an architecture or framework that would let operators (including MSOs) build sophisticated componentized services from features, has been hard to come by.

Third, tablets are proving that if consumers have a larger form factor and a place to sit, they will consume “TV Everywhere”.  On one hand this might appear to promote a cable company’s entry into cellular, but it doesn’t, for two reasons: usage costs and hospitality hot spots.  You don’t have to stream many videos to your tablet to run into extra-cost territory, and in any event why pay for mobility when you need to sit down to watch?

Since tablet vendors offer WiFi tablets at a much lower cost than cellular-equipped models, more and more consumers are jumping on that approach, and TV Everywhere doesn’t have to include that many places that don’t offer WiFi.  I think we’re going to see WiFi exploding at the same pace that tablets have exploded, and I think we’re going to see less focus on “wireless” and more on WiFi.  One more reason why the DoJ should have let AT&T and T-Mobile merge!


DoJ to AT&T: No! Street to Network Vendors: No!

The Justice Department has filed an anti-trust suit to block the AT&T acquisition of T-Mobile, a move that is raising all manner of comment on both sides of the issue.  With all deals like this, the question is whether the consolidation is bad for the consumer to the point that justifies blocking it.  Is this one of those deals?  I’m not sure.

The telecom industry is suffering from low ROI, and has been suffering for years now.  Consolidation can definitely help with that because competitive overbuilding of infrastructure raises the capex and opex for everyone involved.  Would four operators have a lower total network cost than five?  Sure would.  Similarly they’d have lower marketing costs, and three would present lower costs than four, and so forth.  The limiting case here is a regulated monopoly, which we had at one time in the Bell System.  While the issue here is hardly whether we return to the Bell System, it’s useful to look at the limiting case to frame the problem.

If we collapsed the industry into a single player there would be no consumer choice and no mechanism to control pricing other than regulation.  That would work in theory despite what many have argued; it worked for a century in fact.  The difficulty with regulation comes not from its inability to protect consumers but from its inability to manage innovation.  How do you get a regulated monopoly to invest in NGN?  You’d have to presume government bureaucrats knew when and how to do that, and all you need to do in order to disprove that presumption is look at the recent debt ceiling fight.  National communications can’t be a slave to partisanship.  A monopoly won’t work any more, but approving the AT&T/T-Mobile deal doesn’t leave us with one.

I don’t think there’s any convincing proof of consumer harm here, and in fact I think that forcing the industry to try to sustain more players than the market could naturally support has the effect of raising base costs for everyone.  It’s more likely that this is a political move.  The average voter knows little enough about basic, but highly important, issues.  They know nothing about telecom, but they are easily swayed by the vision of Their Government On the Rampage, riding out against the forces of anti-competitive evil to assure them lower wireless prices.  That’s an easy image to sell, where reasoned debate on the merits of the deal would be hard to capitalize on.  It’s an election year.  We have the Party of the People and the Party of Business.  Guess who’s running DoJ!  Wrong decision, in my view.

The same issues of ROI pressure are also hitting the equipment vendors, many of whom will be happy if the merger doesn’t go through.  The big Wall Street research houses are split on 2H11 capex, but the general view is that it will be better than the first half but not up to par.  As a result, vendors in the space are likely to remain under pressure.  The Street seems to think that the carrier Ethernet space will be the most pressured, and thus has been preferencing players with limited exposure there.  My own view is that it’s not helpful to look at the prospects of vendors based on OSI layer.  Spending today is focused on revenue and competition and tends to be more “vertical”.  Everyone knows mobile is hotter than wireline (which is why Cisco did its NEC partnership).  The issue is that outside mobile it’s hard to identify easy vertical categories because there’s no convincing picture of what’s on top of the heap.  The players with mobile dominance are likely to do well, but not only in the RAN.  Juniper, who was dropped by UBS to its “least-preferred vendor” list, has no mobile/RAN position and so is particularly vulnerable.  That’s odd to me, or at least unnecessary, because Juniper has very strong service-layer assets and has simply not been able to exploit them fully.  Errors like that are easily corrected, or at least more easily than fundamental product-line omissions.

UBS also downgraded F5, and that seems to establish a broader negative positioning for switching, which I happen to agree with.  But it’s not because switching is less valuable; it is in fact at the heart of the virtualization and cloud revolution.  The difficulty is that the vendors have not been able to make buyers understand that their products have any distinctive value in supporting those revolutions, and thus are facing “feature shock”, something that happens when you deluge a prospect with features that have no business context to validate them.  Again, this is a problem that should be easily corrected, but somehow we’ve become incapable of dealing with our own benefit case in the industry!  IT giants like HP, IBM, and even Dell are moving data center network iron because it’s increasingly seen as something as undifferentiated as raised flooring.


Tech News Flood

Dell has used a couple of software conferences as bully pulpits for some of its own cloud announcements.  The company is making a major cloud move, one that they obviously hope will elevate them to the status of a “real” computer company (they rank number three in our surveys of which vendors users consider “real” players in the space, after IBM and HP, but it’s a distant third).  In their effort they’ll partner with both VMware (one conference pulpit) and Salesforce (the other) to offer Dell-branded cloud technology, but they also intend to host open-source cloud offerings (a la Hadoop, perhaps) and even Microsoft Azure.

Dell’s greatest strength has been in the SMB space, and that is also perhaps the best target for cloud services in the near term.  Enterprises secure good economies of capital and support scale in their normal data center build-outs, and it’s hard for public cloud services to compete.  For the SMB, neither capital nor support economies are easily established, and the latter in particular is problematic because SMBs often can’t attract skilled IT technicians.  Remember that Dell also has a professional services arm now, which means its own support skills likely carry a lower marginal cost, all of which could make Dell’s price more attractive.

VMware, meanwhile, is advancing its own cloud position with a Data Director designed to create an enterprise DBaaS model that would also in my view facilitate cloud models where the application or its components ran in the cloud and the data stayed in the enterprise’s own repositories.  This would help considerably in building a larger cloud TAM because it dodges the thorny problem of cloud data pricing and security.

In another initiative, VMware has joined with Arista, Broadcom, Cisco, and Emulex to create what they call the “Virtual Extensible LAN”, or VXLAN.  This is a strategy to add a header carrying a 24-bit segment ID to an Ethernet frame and then encapsulate the whole thing in UDP/IP.  It would allow the creation of far more virtual LANs, each with more members, and do so using scalable IP rather than Ethernet.  VMware will be adding VXLAN support to its hypervisor, and the result would be a more scalable data center and cloud LAN architecture.  The four obviously hope this will become a new model for addressing distributed cloud resources.
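As a rough sketch of the mechanism, here’s the 8-byte VXLAN header being built in Python.  The field layout (a flags byte marking the segment ID as valid, reserved bits, and the 24-bit ID itself) follows the published proposal; the helper function is mine.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte (0x08 marks the VNI
    as valid), 24 reserved bits, the 24-bit VNI, and 8 reserved bits.
    The full encapsulation wraps this header plus the original Ethernet
    frame inside a UDP/IP packet."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

# A 24-bit segment ID yields about 16.7M virtual networks, versus the
# 4094 usable IDs in a conventional 802.1Q VLAN tag.
print(len(vxlan_header(5000)), 2**24)  # 8 16777216
```

The scalability point is the one that matters here: the 24-bit ID space is what lets a multi-tenant cloud outgrow the 12-bit VLAN limit.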

The initiative is more important for its goal than its methodology.  We’re seeing network technology adapting to the cloud.  That shouldn’t be surprising, nor should it be happening only now.  The network creates the cloud; it’s the binding force that makes not only the resource pool possible but also makes its access possible.  The network is the business case, the network is the business.  But the network has been silent on the topic of the cloud.  Maybe this is a sign that the silence is finally over.

Cisco has also (finally) taken a step to get traction in the service layer, not with Videoscape or another broad-based initiative but in the mobile space.  They’ve established a partnership with NEC to sell LTE systems that will include the Cisco/Starent ASR 5000 and the NEC base stations.  NEC isn’t a household word in RAN, but that suits Cisco fine; they want to be the kingpin of the deals in the outside-Asia markets anyway.  Mobile credentials have been the strongest reason for Alcatel-Lucent’s gain in market share in the router/switching space.  Now Cisco hopes to counter the move.  The deal puts the most near-term pressure on Juniper, who must now leverage its NSN position better and/or establish the very broad-based service layer strategy that Cisco seems determined to avoid.

That’s a bad choice in my view.  NEC doesn’t have the juju in LTE to pull Cisco to the front of the line on mobile deals.  What Cisco needs to do is combine LTE presence with some savvy mobile content monetization; in short, link it to Videoscape.  If Cisco can really do that (which it could through the Cisco Conductor XMPP bus and some glue) it would have a truly durable and maybe even compelling mobile position.

Juniper announced an enhancement to its Virtual Gateway (vGW) to provide security for virtualization, and cloud, environments.  Juniper has always had a strong security portfolio but it’s only been recently that it’s promoted the tools directly into the cloud data center space.  There are security features in the new QFabric architecture that will be fully available late this year, and the new vGW and Junos Pulse strategies both play well with those enhancements and create what is arguably a complete cloud security solution, the most complete on the market.  But the elements are coming together slowly, and UBS lowered estimates and its target price for Juniper based in part on macro-economic concerns.

Brocade is also experimenting with a new business model, the “router-as-a-subscription”.  The customer gets a router for nothing but pays on a per-port-per-month basis for how it’s configured and used.  The Street is already viewing the model in a polarized way; some are saying it’s a nervy innovation and others that it’s a sure sign of an industry in its death throes.  When you cave that much to cost pressures, the nay-sayers believe, you admit that your pricing power is gone forever.

There’s a decent notion behind this, say supporters, even though it’s still cost/defensive in nature.  The idea is that by making the router a subscription service you transfer it from the capital budget to the monthly expense budget, which may be attractive from a cash flow perspective (you can deduct 100% of an expense in the year you incur it, but typically only a fraction of a capital cost each year through depreciation).  It’s an argument that’s being made in cloud computing, after all.  I agree that it’s a clever play, but it’s still an illustration that the enterprise router market is so abysmally price-pressured that you have to play accounting gimmicks to make a sale.
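A quick worked example of the cash-flow argument, with made-up numbers: compare the first-year deduction on a capitalized router purchase (straight-line depreciation over four years) against the same spend taken as a fully expensed subscription.

```python
# Hypothetical numbers: a $100,000 router either capitalized and
# depreciated straight-line over 4 years, or paid as a subscription
# and expensed as incurred.
capex = 100_000
depreciation_years = 4

first_year_deduction_capitalized = capex / depreciation_years
first_year_deduction_expensed = capex  # 100% deductible in year one

print(first_year_deduction_capitalized)  # 25000.0
print(first_year_deduction_expensed)     # 100000
```

The total deduction is the same over the asset’s life; the subscription model just pulls it forward, which is exactly the cash-flow appeal.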

Another interesting development is an announcement by the two big independent DNS players (Google and OpenDNS) that they’ll support geographic (or at least address-hierarchical) extensions to the DNS lookup process to help ensure that users get linked to the content caches closest to their particular location.  This is a pretty significant step for a number of reasons, not the least of which is that these big DNS players are potentially removing a differentiator used by CDN providers.  The downside of the idea is that it’s not granular enough to optimize delivery within a metro area, in my view.  It’s a way to make “normal” CDN access work better, but not the leading-edge distributed-cache metro-optimizing versions of CDNs.  There, operators will have to come up with their own solutions (or rather find vendors who solve the problem for them).
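A toy sketch of the idea: the resolver forwards a hint about the client’s subnet, and the lookup steers the client to the cache covering that subnet.  The cache map, names, and subnet granularity here are invented for illustration; the real extension and its deployment details may differ.

```python
import ipaddress

# Hypothetical mapping of client subnets to cache nodes.
CACHES = {
    ipaddress.ip_network("203.0.113.0/24"): "cache-east.example.net",
    ipaddress.ip_network("198.51.100.0/24"): "cache-west.example.net",
}

def pick_cache(client_subnet: str, default: str = "origin.example.net") -> str:
    """Return the cache whose coverage includes the client's announced
    subnet, falling back to the origin when nothing matches."""
    net = ipaddress.ip_network(client_subnet)
    for prefix, cache in CACHES.items():
        if prefix.supernet_of(net):
            return cache
    return default

print(pick_cache("203.0.113.0/25"))  # cache-east.example.net
print(pick_cache("192.0.2.0/24"))    # origin.example.net
```

Notice the granularity problem the post describes: a /24-level hint can pick the right region, but it can’t distinguish cache locations within a single metro.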

Rounding out the story, Ericsson has introduced some enhancements to its router line, proving out what some of our survey carriers told us about a renewed initiative for Ericsson at the IP layer.  The company wants to play on the theme of “premium service” routing but here as with its competitors there’s a fairly limited notion of what a “premium service” really is.  For vendors, it’s about transport and connection; for operators it’s about content and mobile/behavioral and (increasingly) the cloud.  In our surveys, Ericsson still falls into the noise level for IP-layer competitiveness but there are signs that the company is getting better recognition there.


Tablets, Clouds, and Spending

The ever-changing focus of the consumer electronics folks is now shifting from Google/MMI to the upcoming Amazon tablet, which could be introduced in as little as a month.  The details of the technology aren’t known, but they’re probably less important than the price point.  If Amazon brings out an iPad-sized unit for under $300 they could create a whole new dynamic for the tablet space.

I noted in an earlier blog that the big asset that Amazon has in the tablet world is a follow-on revenue source that could be used to partially subsidize the devices.  Not only would that let Amazon field tablets at a lower price, it would put pressure on Apple by forcing them to either lower their own price with cross-subsidies from iTunes and/or App Store or let themselves be trapped as a high-end lower-volume play.  Neither would be a fun choice for the new CEO to make.

The other thing that an Amazon launch could bring is more impetus for both Apple and Google to look hard at MVNO status.  Amazon already has a 3G version of Kindle, and while that sort of thing won’t be automatically transferred to a tablet (which would presumably have a lot more online utility) it does seem likely that Amazon would want to link its own tablet up with its own streaming service.  They could rely purely on WiFi, or they could push into 3G/4G, and that might even make Amazon an MVNO candidate.

For now, it’s pretty likely that the big winner in the tablet space will be hospitality WiFi and the supporting equipment.  Even if tablets are equipped with cellular services, heavy entertainment use will surely drive users’ bills up unless they can unload the traffic.  Sitting in a comfortable coffee shop while viewing is safer than trying to drive or walk during the process in any event.  It’s the tablet’s linkage to entertainment video that’s also sparking the renewed debate over just what network licenses to cable and satellite companies really include in terms of non-broadcast use.  TV Everywhere is a model for making the right to view more independent of the delivery mechanism, and on one hand that can raise ad revenues and help profitability in an age of constricting profits for broadband.  On the other hand it aggravates the networks who believe they should have additional revenue from these new sources.

Tablets are also a clear path to increased reliance on network-hosted functionality, which is at least somewhat a definition of “the cloud”.  In the enterprise world, the VMware conference has become a convenient launch point for a lot of stories and not much news.  I think the biggest thing to come out of it is the fact that VMware is explicitly committing to the notion of the hybrid cloud, a move that some pundits are criticizing as being too conservative.  The problem is that this isn’t about promoting technology, it’s about promoting technology benefits.  I’ve said from the very first that our surveys consistently show a maximum of about 24% of enterprise IT spending migrating to the cloud, and that maximum can be met only if the public cloud and the data center don’t end up as IT-resource-ships in the night.  You can’t disintermediate your critical information tidbits from each other or from the workers who need them to drive productivity gains.

We are seeing things that enhance cloud hybridization, ranging from VMware’s stuff to new offerings for hybridization management from IBM.  All of them help administer a hybrid environment but they won’t make the business case for users who are still having a problem getting their arms around the whole paradigm.  My research shows that over the last decade we’ve lost hundreds of billions of dollars in global IT spending simply because we’ve been unable to connect spending to benefits in the convincing way we did in the past.  The people who get that particular problem figured out will be the people who lead the next technology charge.


Practical Points in Content Monetization

Over the last three weeks I’ve been developing an application note on multi-screen video based on the open-source Java framework I launched three years ago.  One of the steps along the way was to send the note to operators for comment, and the results that I’ve gotten from that process have been very interesting.  What I think they do is to lay out the way operators today are looking at content monetization and the service layer.

The hottest issue in the service layer, based on the number of comments on the topic, was the issue of FEDERATION AND SERVICE EXCHANGES.  “Federation” is the term that’s most often used to describe formal asset-sharing agreements that are functionally supersets of traditional peering or interconnect.  I say “functionally supersets” because the main focus of Federation is the sharing of service logic components or other things not directly linked to connection.  The most common example is the sharing of caching/CDN assets in mobile services.  Operators believe that as they build higher on the “experience stack” of network services they’ll need to cooperate across provider boundaries there just as they do today at the lower layers.

Service exchanges are a little more complicated.  Some operators, and some other players in the market, are interested in creating what might be called “feature repositories” where operators go to access stuff they need for their services, most often when an operator’s customer has roamed in some way into another operator’s service geography.  The concept of an exchange is more flexible than that of formal bilateral federation because it involves an operator making what might be considered an “open offer” for cooperation that can be taken up as needed.  This might be a formal service or it might be simply a convenient publishing- or meeting-point for available offers, a kind of “registry”.
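To make the distinction concrete, here is a minimal sketch of what a service exchange’s “open offer” registry might look like.  This is purely illustrative; the class, method, and operator names are hypothetical, not any real exchange’s API.

```python
# Hypothetical sketch of a service-exchange "registry": operators publish
# open offers for service features, and a visiting operator's service logic
# discovers what is available in a given geography on demand.

class ServiceExchange:
    def __init__(self):
        self._offers = []  # published "open offers" for cooperation

    def publish(self, operator, feature, geography, terms):
        # An operator advertises a feature (e.g. CDN caching) it is
        # willing to share, without a formal bilateral agreement.
        self._offers.append({
            "operator": operator,
            "feature": feature,
            "geography": geography,
            "terms": terms,
        })

    def find(self, feature, geography):
        # A roaming operator's service logic takes up an offer as needed.
        return [o for o in self._offers
                if o["feature"] == feature and o["geography"] == geography]

# Usage: one operator publishes, another discovers.
exchange = ServiceExchange()
exchange.publish("OperatorA", "cdn-cache", "EU-West", terms="per-GB")
matches = exchange.find("cdn-cache", "EU-West")
```

The point of the sketch is the asymmetry: publication is unilateral, and cooperation happens only when someone takes up the offer, which is exactly what makes an exchange more flexible than bilateral federation.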

Another interesting point was the fact that operators do see the CDN as the foundation of content monetization even though the majority of them don’t see selling CDN services in competition with current CDN giants as their primary monetization track.  Most operators have run through the functional maps of a content layer, and have concluded that CDNs fill in a lot of the boxes.  I think this is interesting because the commercial CDN space isn’t exactly flying high, and there’s been some consolidation among startup CDN players over the last couple of years.  The comments indicate, I think, that the CDN, played correctly, is a hot property for a vendor.

CDNs are also perhaps the most critical test bed for “asset exposure”, or the abstracting of features into something that can be composed dynamically into services.  When you look at how a flexible service can be built for content delivery, you realize that some of the elements are common to other services, and so how CDN assets are made more generally available may be one of the first exercises of flexible asset management for operators.
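The idea of abstracting features for dynamic composition can be sketched in a few lines.  This is an illustrative model only, with hypothetical names; it simply shows how a concrete CDN asset, once wrapped in an abstract interface, can be composed into more than one service.

```python
# Illustrative sketch of "asset exposure": a concrete asset is wrapped in
# an abstract feature interface so service logic can compose against it
# without knowing the implementation behind it.

class ExposedAsset:
    """Abstract feature that composed services invoke."""
    def invoke(self, request):
        raise NotImplementedError

class CdnCache(ExposedAsset):
    # The same cache asset could back a content-delivery service or a
    # software-distribution service, because callers see only the
    # abstract interface, not the cache itself.
    def __init__(self):
        self._store = {}
    def invoke(self, request):
        # Serve from cache, fetching (simulated here) on a miss.
        return self._store.setdefault(request, f"fetched:{request}")

def compose_service(assets, request):
    # A "service" here is just an ordered composition of exposed assets.
    result = request
    for asset in assets:
        result = asset.invoke(result)
    return result

# Usage: a one-asset video service built from the exposed cache.
video_service = [CdnCache()]
delivered = compose_service(video_service, "movie-123")
```

The design point is that `compose_service` never mentions the CDN: swap in a different asset list and the same composition logic builds a different service, which is the essence of flexible asset management.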

I was also very interested in noting that while the operators did universally want to understand how to link network transport/connection behavior to content delivery and did want to manage service assets via OSS/BSS, neither of these two issues was a major focus for comments.  I think operators realize that you first have to produce a conception of a content service that will sell and be profitable, and then tune it to leverage current assets.  Neither network handling nor OSS/BSS integration is a feature of OTT video, so neither is, by definition, a mandatory element in the service.  They may be DIFFERENTIABLE elements, but you can differentiate your solution only if you have one to differentiate!


A New Dimension to Verizon’s Cloud

Verizon has taken what may be a very important and evocative step toward maturing its enterprise cloud strategy with the purchase of privately held CloudSwitch.  The significance of the move is hard to appreciate without an understanding of just what the heck CloudSwitch is, so I propose to start with that.

The classic vision of cloud computing is virtual something-hosting, where the “something” is anywhere from an entire application to the bare-bones machine image (SaaS down to IaaS, respectively).  This model is useful as a way of looking at the cloud in isolation, but for most enterprises the cloud in isolation isn’t very interesting.  Since they don’t expect to migrate more than a max of a quarter of their IT spending to a public cloud, the key question for them is how you hybridize.

Microsoft is the only one of the currently popular cloud leaders that has taken hybridization to heart from the first.  The Azure cloud is a PaaS cloud that can be extended via the Azure Platform Appliance, a partner-delivered combination of Microsoft software and server and network hardware.  With APA, a user can build an “Azure cloud” that seamlessly extends between an enterprise data center and a public cloud provider (Microsoft, of course, but in theory other cloud providers who adopted the Azure architecture).

CloudSwitch can be visualized as a more generalized model of the same hybridization notion.  With this approach, the user deploys a series of CloudSwitch Instances in the cloud and a CloudSwitch Appliance (which is a software component, not a gadget) in the data center.  The Appliance links to all of the Instances in as many clouds as there are, and it essentially synchronizes each Instance as a host for one or more virtual machines that are managed to be functionally identical to the applications’ resources in the enterprise.  What you end up with is a kind of “envelope” that everything runs in and that can be made to extend to any number of clouds that can host a virtual machine.  A secure Internet tunnel links the components of this architecture.
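A toy model may help make the “envelope” concrete.  To be clear, this is my own conceptual sketch, not CloudSwitch’s actual API; all names here are invented for illustration.

```python
# Conceptual model of the "envelope" described above: a data-center
# Appliance tracks Instances deployed in any number of clouds and keeps
# each Instance's VM inventory synchronized with the enterprise's, so
# every attached cloud looks like an elastic extension of the data center.

class CloudInstance:
    def __init__(self, cloud_name):
        self.cloud_name = cloud_name
        self.vms = {}          # VMs this cloud-side Instance is hosting

class Appliance:
    def __init__(self):
        self.local_vms = {}    # the enterprise "source of truth"
        self.instances = []    # one Instance per cloud in the envelope

    def attach(self, instance):
        # Bring a new cloud into the envelope and sync it immediately.
        self.instances.append(instance)
        self._sync(instance)

    def deploy_vm(self, name, image):
        # Deploying locally propagates to every attached cloud, so the
        # same workload can "burst" to any of them under overload.
        self.local_vms[name] = image
        for inst in self.instances:
            self._sync(inst)

    def _sync(self, instance):
        # In the real product this synchronization runs over a secure
        # Internet tunnel; here it is just a dictionary copy.
        instance.vms = dict(self.local_vms)

# Usage: one cloud in the envelope, one workload deployed.
appliance = Appliance()
appliance.attach(CloudInstance("terremark"))
appliance.deploy_vm("web-frontend", image="ubuntu-lts")
```

The sketch also makes the limitation visible: everything flows through that one tunnel-mediated sync path, which is exactly the performance constraint noted below.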

There’s a lot to be said for this approach, but it’s not a panacea for cloud issues.  What CloudSwitch does is make public cloud resources (Terremark’s resources, in this case) appear to be elastic extensions of local VM hosts.  The way this is done (once it’s set up) is largely transparent to users so the cloud can really appear as an elastic extension of the data center.  For “cloudbursting” applications where the cloud takes up the slack when applications overload the in-house computing resources, it’s an easy way to build the framework without becoming a specialist.  Compared to Azure, which is PaaS and thus imposes some application and middleware constraints, it’s more flexible.

One place the concept falls short is in the area of “equivalence”.  Yes, CloudSwitch can create a virtual data center that spans the real one and the cloud, but the stuff that goes into the cloud still has to conform to the price paradigms of the cloud and is still constrained by the tunnel connection in terms of performance.  For Verizon, these limitations probably won’t be critical because I think the company is likely to be targeting SMBs with the offering and because enterprises could be made to understand the limitations of the Cloud Isolation Technology and exploit the capabilities readily.

The most significant thing about the deal, I think, is that this is a NETWORK OPERATOR buying a software company.  If you think about this, that’s a major twist because in the past operators would have expected their vendors to offer them a package, or would have “introduced” a startup to a big vendor in an arranged acquisition.  Here’s a Tier One buying their own service-layer technology.  If you need proof that the network equipment vendors have fallen asleep at the switch, this should be it.

 

Apple’s Next Big Thing

Steve Jobs has finally decided that his health won’t permit him to head Apple, and has passed control to Tim Cook, the Apple COO who has been the administrative head since Jobs took a leave early this year.  I met Steve twice in my career, once very early in Apple’s rise and again after he’d brought the company back from the brink.  There was no mistaking his innovative flair, then or now.  While I’m sure that Apple management can run the company, I’m far less certain that they can run the market.  Steve could, and did.

The move comes at a very critical time for Apple.  While the company has been the almost-single-handed driver of the mobile revolution, the product cycles in that space are getting shorter and it’s harder to say what the next generation of devices might be.  A smartphone is a logical extension of a standard phone, and one that exploits the broadband mobile connectivity that was already in place.  A tablet is in many ways an extension of a smartphone.  What extends the tablet?  What is the Next Big Thing?  The answer is the cloud, the mobile/behavioral ecosystem that will create the electronic virtual world we’ll all live in, in parallel with the real world.  For Apple, it’s the iCloud, a course Steve Jobs has already charted.

Google knows that, of course, and they see a similar vision.  One could argue that they see it even more clearly than Apple, in fact, because Apple’s culture has always been just a tad elitist and thus egocentric.  Android and the MMI deal are Google’s appliance play, and ChromeOS is for now carrying the flag of the cloud, in the form of hosting the thinnest of all possible clients.  ChromeOS, in my view, is just a placeholder for an eventual shift toward a more Android-centric future, but one that focuses on exploiting Android as a cloud conduit just as Apple wants iOS to be.

The thing is, the secret sauce of the future is the mobile/behavioral stuff, and that is something in which neither Apple nor Google has any particular incumbency.  Nobody does, in fact.  My work with operators suggests that they understand that there’s a lot to be done, and a lot of money to be made, in the mobile/behavioral symbiosis, but the problem they have is that this particular area of service innovation is even more vague than content monetization, and they can’t get anyone on the vendor side to talk effectively about content.  What hope do they have for mobile?  If you’re a vendor and you want to own the market of the future, this is the problem you need to solve for your customers.

Interestingly, Alcatel-Lucent has just issued a press release calling for more thoughtful use of mobile assets in customer care, and when you read into the details you see some of the elements of a mobile/behavioral solution at a more general level.  The Alcatel-Lucent mantra is “contact me, connect me, know me” and that is pretty much what I believe to be the key to mobile/behavioral opportunity.  You have to be able to reach the customer proactively with social/behavioral changes to their virtual world, to connect them to the other partners (human or cloud-machine) in that world, and you have to know a lot about their interests, desires, and prohibitions to make inferences about what’s best for them at that moment in time.  I’d like to see Alcatel-Lucent take this story more into the general consumer market.  I’d also like to see some competitors push the story even further.