Rumors: Cisco, Juniper, and Apple

Cisco is said to be announcing a new white-label managed service offering that’s designed to be resold by network operators (presumably Cisco customers) under their own brands to the SMB space.  Cisco would provide the actual remote management resources.  The move is yet another interesting slant on how Cisco thinks it can help operators, make more money for itself, and perhaps pull through more equipment sales.  The question is whether it will work, and as you’ll see below we may have to wait for the precise details in order to tell.

It’s not that managed services are a bad idea, or that having vendors provide a white-label service or take a cut of transactional services is untried.  Alcatel-Lucent, for example, has promoted its Open API program as a means of supporting developers across provider boundaries without special per-developer-per-provider relationships being needed.  It takes a cut of the fees associated with the APIs the developers use.  And managed services are on the rise, particularly for SMBs, who can’t possibly retain a skilled staff when there’s so much competition for network experts.  The problem is whether the operators see this as helping or competing.

Most operators have their own plans for managed services for SMBs, and many have already offered them.  They may well see Cisco’s offer as simply a way for Cisco to grab a piece of the revenue stream, a revenue stream the operator is fronting in retail terms.  Cisco is likely to say that the deal will reduce operator costs and improve time to market, but the real question will be whether the reduction in costs justifies the reduction in revenue, and whether the selling issues with the service outweigh the time-to-market benefits.

The challenge here is that everyone wants to make more money in a market where both services and products are commoditizing.  Services are a way to do that, but the buck starts (to play with words) with a single buyer.  Users don’t build networks to consume products or services, but to fulfill their needs.  Network vendors have been much worse than IT vendors at figuring out how to support the users’ value propositions in a way that’s also profitable for them.

Another network vendor news item today is that some financial analysts (Stifel, for example) are predicting that Juniper will have a soft quarter, and that part of the problem is a set of glitches with Junos.  Juniper, they say, has always touted the stability and singular version control of Junos as assets relative to Cisco’s IOS, and there are now reports of stability problems after a recent release and of the need to supply a custom version to some operators.

The Big Problem for Juniper, in my view, isn’t the rumored version-discipline problem with Junos but the fact that it doesn’t have any solid mobile story.  In the operator survey I just completed, the vendors with a strong mobile-services slant in their products and experience gained strategic influence uniformly.  Those who did not (including Juniper) lost.  What I think is happening this quarter is that operators are finally thinking about the future.  You can see that in Verizon’s public announcements on content and the cloud.  As they do, they’re less receptive to the usual “How many boxes can I put you down for?” sales positioning.  They may be reacting faster to changes in strategic influence instead.  Typically our rating in that area has been a leading indicator of change—a tell on next year’s behavior.  Might Juniper’s dip in influence be hitting already?  Cisco had an even larger credibility drop in the operator influence ratings, and its SEC filings suggest it’s having softness problems too.

If network vendors are asleep in the telco world, Apple’s happy to be wide awake.  The latest public rumors on next year’s iPhone model are that it will be multimode, working on CDMA and GSM as well as LTE, and that it will be “SIM-less” and can be soft-configured to work with any operator network.  That seems a pretty clear indication that Apple is looking to break the traditional link between handset providers and operators, a link created to use service-plan subsidies to reduce phone prices and increase sales.  But Apple knows darn well that it can’t sell unsubsidized iPhones at higher prices, and it doesn’t want to commoditize them either, so what is it up to?  I heard from a Silicon Valley contact that Apple was planning to introduce an iTunes-subsidized iPhone, or rather an iCloud-subsidized one.  The idea is that Apple would offer a term membership in a subscription cloud music and video service that would include a reduced-price iPhone.

This is a major shift in market dynamic, if true.  The operators have had a lock on subsidies before, and if they’re losing that to a giant market machine like Apple, then they have good reason to be worried about vendor support for their transformation plans!

 

Cloud Services, Service Clouds

We’re continuing to see more developments in the cloud space that go beyond the obvious (the hype) and address some of the important issues.  One in particular also demonstrates some important facts about the cloud and cloud services: the “Database.com” offering from Salesforce.

Cloud databases have been an issue of increasing importance because they’re essential for the cloudsourcing of any team or company application and because they represent a new dimension in security risk for enterprises.  Amazon’s EBS was the proximate cause of that company’s recent cloud outage, so cloud databases also demonstrate the new dimension in vulnerability that this sort of distributed technology can bring.  Enterprises need a way of harmonizing cloud use with data security or they’re not going to the cloud with anything that’s important, and that would relegate the cloud to hosting websites or testing/piloting applications.

Database.com is first an attempt to integrate strong security into a cloud DBMS (an RDBMS, to be specific).  It includes strong authentication at the API call level, meaning that every access attempt is verified, and row-level security rights within the DBMS tables.  All of this is good stuff, and for many enterprise applications it will help relieve security fears.  But it’s not enough by itself.
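
Here’s a minimal sketch of those two controls working together, purely illustrative Python rather than Database.com’s actual API: every call carries a token that’s checked, and the rows a caller can see are filtered by per-row rights before the query runs.

    RECORDS = [
        {"id": 1, "owner": "alice", "data": "east-region forecast"},
        {"id": 2, "owner": "bob", "data": "west-region pipeline"},
    ]
    TOKENS = {"token-alice": "alice", "token-bob": "bob"}   # issued out-of-band

    def query(token, predicate):
        user = TOKENS.get(token)
        if user is None:                        # every API call is authenticated
            raise PermissionError("invalid or missing token")
        visible = [r for r in RECORDS if r["owner"] == user]   # row-level rights
        return [r for r in visible if predicate(r)]

    print(query("token-alice", lambda r: "forecast" in r["data"]))   # only alice's rows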

No matter what any vendor says, mission-critical enterprise data isn’t likely to go into the cloud; the career risks for anyone making that decision are profound according to the results of our spring survey.  None of the enterprises we asked said they believed they would cloudsource a mission-critical DBMS.  So it would appear we’re at an impasse, right?

Not so fast.  Database.com is also an example of a database model: DBMS-as-a-service.  It’s always been possible to visualize “data” in multiple ways: as disks, as file systems, as DBMSs.  That multiplicity of vision translates into a multiplicity of access models.  You can address stored data with disk commands, with file-system commands, or with DBMS queries—SQL, for example.  When you send the low-level commands, you drag disk I/O over a connection to the cloud if you cloudsource the data; when you send high-level commands, you receive the results of a query and not the ten million records you might have spun through to get those results.
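
A hypothetical sketch of that difference, using Python’s built-in sqlite3 purely as a stand-in for a cloud-hosted DBMS (the point is the access model, not the engine):

    import sqlite3

    db = sqlite3.connect(":memory:")           # stand-in for a cloud DBMS service
    db.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount INTEGER)")
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(i, "east" if i % 2 else "west", i * 3) for i in range(10000)])

    # Low-level model: drag every record over the connection and sift locally.
    all_rows = db.execute("SELECT * FROM orders").fetchall()
    east_total = sum(amount for _, region, amount in all_rows if region == "east")

    # DBMS-as-a-service model: send the query, get back only the answer.
    (east_total_q,) = db.execute(
        "SELECT SUM(amount) FROM orders WHERE region = 'east'").fetchone()

    assert east_total == east_total_q          # same answer, a tiny fraction of the I/O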

OK, fine, but this is still DBMS-in-the-cloud.  It is, unless you turn the tables.  Instead of looking at the DBMSaaS as something the cloud offers, how about if the enterprise offers it?  Suppose that in a hybrid cloud, the DBMSaaS queries were made by the cloud applications back into your data center?  That model is readily supported by modern back-end repository strategies and DBMS appliances.  With tabular joining in an RDBMS, it would be possible to create a database that was partly stored in the cloud but whose sensitive elements were back in the enterprise.
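
Here’s one way that split could look, again as an assumption-laden sketch: an attached in-memory sqlite database stands in for the cloud store, the main database stands in for the on-premises store holding the sensitive table, and a join reassembles the two.

    import sqlite3

    db = sqlite3.connect(":memory:")                     # "enterprise" side
    db.execute("ATTACH DATABASE ':memory:' AS cloud")    # stand-in for the cloud side

    db.execute("CREATE TABLE customers (cust_id INTEGER, name TEXT)")        # sensitive, stays home
    db.execute("CREATE TABLE cloud.activity (cust_id INTEGER, page TEXT)")   # bulk data, cloud-hosted
    db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
    db.executemany("INSERT INTO cloud.activity VALUES (?, ?)",
                   [(1, "/pricing"), (1, "/checkout"), (2, "/docs")])

    # The join reassembles the picture; the sensitive table never leaves the enterprise.
    rows = db.execute("""SELECT c.name, COUNT(*) FROM customers c
                         JOIN cloud.activity a ON a.cust_id = c.cust_id
                         GROUP BY c.name ORDER BY c.name""").fetchall()
    print(rows)   # [('Acme', 2), ('Globex', 1)]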

Even this may be a pointer to a deeper and more important issue.  Enterprises say that basic platforms (IaaS) in the cloud are, at reasonable levels of utilization and with reasonable availability enhancements, about 75% more costly than internal servers.  That says that the basic business model for IaaS can’t be successful in securing wide penetration of cloud computing into mission-critical apps even if you solve security and availability concerns.  But services can be offered from cloud infrastructure, and efficiency in both the resources needed for the service and the way the service can be linked to enterprise computing and business activity can be more compelling.  Outsourcing firm Virtela, which has already created an interesting umbrella VPN service as a kind of VNO across multiple operators, is also launching a cloud service set based on the same framework.  The idea is to take applications like mobile security or application acceleration and make them into “services” of the cloud.  These are more easily introduced than competing architectures for mission-critical apps, and enterprises in our survey seem to think that sort of thing is the right way to go.

So do carriers, of course.  Verizon is clearly looking at this same model, and so is BT.  KT, while making some waves by promising IaaS services that are more cost-effective than Amazon’s, is also planning higher-level cloud-based services, and we’re told that it believes there’s more money in the services space than in basic IaaS.

All of this, of course, gets us back to the notion of “SOA clouds” and the need to think of applications as being cloud-optimized.  The SOA architecture facilitates the consumption of application components in service form, delivered either through RESTful interfaces or more rigorous SOAP connections (which is how Database.com works, by the way).  Microsoft and IBM have both been working with their customers to move thinking in this direction, and the results are becoming clear from the number of enterprises who now think more in SOA terms than in virtualization terms for their clouds.
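
As a flavor of what consuming an application component in service form looks like from the application side, here’s a tiny RESTful sketch; the endpoint, resource names, and JSON shape are hypothetical, not any vendor’s actual API.

    import json
    from urllib import request

    def get_component(base_url, resource):
        """Fetch one application component (a resource) over a RESTful interface."""
        with request.urlopen(f"{base_url}/{resource}") as resp:
            return json.loads(resp.read())

    # A cloud-optimized application composes such calls instead of linking in code:
    # customer = get_component("https://soa.example.com/api", "customers/42")
    # orders   = get_component("https://soa.example.com/api", "customers/42/orders")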

 

Succession Lessons from TMF and Cisco/Microsoft

TMF’s Management World conference continues to show itself as a kind of cross-section of market and technology issues for the NGN.  This particular body, unlike most standards bodies, has long been almost a business, a marketing powerhouse that’s jealously guarded and effectively promoted its prerogatives.  The question is whether it can now overcome some of its other long-standing characteristics to step out and lead the service wars.

Operator OSS/BSS processes have always been visualized as “service creation” because, in the past, services were organized behaviors of networked devices.  Management was effectively a coercion of cooperation to create something, and once that “something” was created the processes could be considered to have shifted into “maintenance” mode.  There was a distinctive service provisioning process, with distinct phases, and OSS/BSS tracked it and billed it and planned it…you get the picture.

The TMF, the archetypal management body, has never accepted a model of management other than “provisioning”, and much of the work they’ve done with eTOM and SID and the other standards they’ve pushed is actually obsolete in the current world.  Today, you need to visualize a service as a web page, a kind of script that joins functional elements (RESTful URLs).  They actually have that notion in the relatively new NGOSS Contract work, but its goal isn’t to create a web-page-like service script; it’s to create a management script.  What they’re missing, I think, is the fact that this scripting process has to organize both.  There is no way to create web-modeled services for the NGN without scripting functionality and management in parallel.
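
To illustrate the “organize both” point, here’s a hypothetical sketch of a service expressed as a script whose elements each carry a functional binding and a management binding, driven in the same pass.  The names and URLs are invented and this is not NGOSS Contract syntax; it just shows the shape of the idea.

    SERVICE_SCRIPT = [
        {"element": "content-cache",
         "functional": "https://svc.example.net/cache/start",
         "management": "https://oss.example.net/cache/monitor"},
        {"element": "stream-encoder",
         "functional": "https://svc.example.net/encode/start",
         "management": "https://oss.example.net/encode/monitor"},
    ]

    def instantiate(script, invoke):
        for step in script:
            invoke(step["functional"])     # create the user-visible behavior...
            invoke(step["management"])     # ...and wire up its management in the same pass

    instantiate(SERVICE_SCRIPT, print)     # a real implementation would POST to each URL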

Operators “know” this to a degree, but not to a consistent degree.  What I’ve found is that if you make a pitch on the value of service logic and service management integration to a high-level technical architect with a focus on content monetization or mobile behavior support, that person gets it.  Make the same pitch to the OSS/BSS types and you’ve made an enemy for life.  The whole of the OSS/BSS community is locked in the past on this particular issue.  I’ve had some recent online dialogs on “NGN”, and there are some involved who are clearly pining away for the days of TDM voice even though there is no question we are NEVER going back there.  Change is always resisted, and that’s the issue now with the service layer.  The TMF is more a barrier to the future now than a vehicle to secure it, and yet for operators who still pray for relief from service-layer confusion through standards, it’s pretty much the only hope.  This TMW session is showing that some in the TMF realize they have to step up, but others are still donning their green eye shades, climbing up on high stools, and doing provisioning ledgers while calling it “NGOSS”.

Cisco and Microsoft are also demonstrating inertia, and its risks.  It’s clear that investors and activists are going to put pressure on both companies, because it appears that neither is making the transition from their traditional days as a market leader to a new day when they can also lead—but with a different core competency set.  Only IBM, of all the tech companies I know, has ever proved it could undergo a number of market-driven technology transformations and still be a major player.  Can Ballmer and Chambers prove their companies, under their leadership, can?

If Ballmer steps down, or if Chambers does, the result changes only the band on the hat on the head of the executive processes for both companies.  There are thousands of people in each of these firms, and for the most part they’re blundering along on a course that’s traditional for them—just like an OSS/BSS guy.  Imagine an inspirational flag-waver in a group of ten people—the group will follow the leader.  Imagine now that same flag-waver in a crowd of ten thousand.  They’re not following the leader any more; they’re ebbing and flowing on their own inertia.  The truth is that if investors want Microsoft or Cisco to change, the fastest way to make that happen is to get the current flag-waver, who’s being followed by at least those close at hand, to move the mountain in another direction.  That truth is probably recognized by both CEOs, but the fact is that they don’t know what to do.

Cisco is having a confidence crisis; its latest SEC filings suggest it won’t be showing much growth at all for at least one and probably several more quarters.  Microsoft is staring down the muzzle of the tablet cannon, seemingly mesmerized by the risk it poses but helpless to move aside.  Both could, by inaction, bring about the very kind of radical market change that would hurt an incumbent the most.  They each have a flagship role in setting the perception for the industry they dominate.  While others can take market share from both these giants, they can’t take the flag.  Nobody will ever lead PCs again as Microsoft did, nor will there ever be another Cisco.  The markets overall will be poorer for their loss, if we lose them.

 

Under the Ebook Covers

Amazon and Barnes & Noble are obviously engaging in a war over the ebook market, and there are new dimensions to the battle emerging every week.  In the latest move, B&N announced a new gray-scale Nook that’s conceptually between the older dual-screen eInk Nook and the color model that’s grabbed attention as a poor man’s Android tablet.  Amazon countered with an ad-subsidized model of its 3G Kindle.  I’ve already noted the rumors that Amazon will launch a line of 7- and 10-inch color tablet/readers in the fall.

One dimension of this is easy to understand; Amazon’s own sales data shows that ebooks now outsell printed books.  For Amazon that’s a clear indication that you want to be in the ebook space, but it’s even more a wake-up call for B&N.  They have a chain of bookstores, after all, and they not only have to transition to an ebook format in a reader world, they’ve got to figure out how to utilize their retail presence.  They’re ramping up their in-store specials, adding hands-on groupie meetings among new Nook users, etc.  But for both companies the key is not just having the “best” reader but having the most readershare.  Your reader customers are yours alone to exploit; PC and tablet reader apps can be installed on any device and so don’t lock your customers into your own format.

For both B&N and Amazon, the Android devices are a risk because these can be “rooted”, or pulled out of their native restricted mode, to run a general version of Android.  That, for example, lets you install Amazon’s Android reader on a B&N Color Nook.  But both companies think this is a minor risk; the big problem is just getting their little devices into everyone’s hands.  The lower-end Nook, which has a full touch screen and not just a touch ribbon at the bottom, is a formidable challenge for Amazon; Kindle navigation was harder even before this, in my view, and the new Nook makes even a low-end reader very “tablet-like”.

Microsoft’s Skype deal has pushed the issue of service-enablement via an appliance to the forefront in areas outside the ebook market.  Microsoft just said that the Mango version of Phone 7 will have Skype calling, and while it’s not yet clear if that capability will be left in place by partner operators, those I’ve talked with say it will.  We’re moving away from an ARPU model driven by voice anyway, they say.  If tiered data pricing generates the same ARPU, so what if voice becomes a part of the data picture?  It only cuts down on what you have to capitalize.

The network is what enables this, of course, but that’s not enough to make “the network” the value leader in the new ecosystem.  The average user of an ebook reader has no idea what network service gets the books onto the reader, and that’s even in today’s relatively early market phase where nerd count per hundred users is higher.  In the future, people really will see a kind of “service aether” that pervades their world and somehow links their desires to fulfillment.  That’s the market everyone is fighting for, and Amazon, Apple, B&N, and Microsoft are only showing us little chunks of it.

That the lower-layer delivery process isn’t where the action is has been demonstrated by Cox’s announcement that it’s abandoning its notion of being a wireless carrier on its own and opting into a relationship with Sprint.  Don’t be surprised if players like Amazon and B&N end up being MVNOs themselves, and for sure don’t be surprised if Apple makes that move; the rumor that it’s looking into branding a service of its own is already circulating.  At some point, that kind of move may be necessary to ensure Apple can control the whole value chain, and of course that would also be true for Google and the rest.

Another place where the ecosystem may be changing is in the OTT-TV space.  The Netflix success is showing everyone that OTT video can be sold, and that implies that TV Everywhere can both undermine Netflix and others with “free-ness” and present a potential incremental revenue opportunity.  If the right to multi-screen content is tied to having TV delivery of that same content, then advertisers are less worried about the credibility of online ads as a substitute for commercials.  This kind of deal also lets the network operators who deliver the content ensure they get some profit from their infrastructure investment.  Thus, it may be more important to watch TV Everywhere than Netflix.

That’s particularly true in the appliance sense.  Imagine if Apple were a “TV Everywhere” broker for its i-Stuff?  The mind boggles at the changes this could bring.

 

Public Policy and Broadband

The FCC just announced that US broadband is failing to meet the requirements set in the Telecom Act, which isn’t exactly a surprise given that’s what it’s been saying all along.  What’s infuriating about the release is the blatant manipulation that’s inside.  For example, they headline that over 20 million Americans are “denied access to jobs”.  Yeah?  Well, it’s not that simple.

There are over 26 million people in un-served areas, or about 9.2 million households.  These are typically deeply rural populations.  But this data, which the FCC headlines, is based on census-tract information; a county-level analysis of the data cuts this value in half.  So which is right?  Obviously the one with the most dramatic results.  Also, how many of these 26 million are even of working age?  The FCC doesn’t attempt to figure that out.

Then there’s the question of whether lack of broadband, or the Internet, denies one access to jobs.  I’ve tried diligently, correlating FCC data on broadband availability with data on economic activity, to uncover a correlation between employment and the Internet, and the only one I’ve found is that where people are unemployed in large numbers they’re less likely to pay for broadband.  Surprised?  In cases where broadband programs have empowered areas not previously empowered, I’ve been unable to find any sign of an increase in employment or economic activity.

The FCC’s data also shows that while about a third of households don’t have broadband, only about 9% don’t have broadband because they can’t get it.  The remainder have elected not to take it.  Further, the data shows that the population is generally clustered around the low-end options in terms of price.  That correlates with reports that where superspeed broadband is offered by cable or telco providers, the uptake on the service is minimal.

We need to face reality here.  There is a strong public policy drive to say that the Internet is a fundamental right.  OK, that’s fine with me if you arrive at the decision based on rational and truthful processes, but we’re not doing that.  A third of all traffic is Netflix.  Most of the time spent on the Internet is spent on social networks.  This isn’t the picture of an Internet being used to pull people out of marginal employment or to raise standards of healthcare.  It’s a picture of one that’s keeping the kids occupied, keeping the parents entertained.  The cost of providing rural broadband can be ten or more times that of providing broadband to urban areas.  Some of the rural users have moved into the wild by choice; do they get subsidies to give them urban comforts in their rural setting?  How about giving some trees or wildlife to urban dwellers, then?

Carrier Clouds, Amazon Tablets

Verizon has taken yet another “leadership” step in defining how operators see their revenue futures.  The company has indicated it will likely be acquiring small software companies to create SaaS offerings hosted on the Verizon cloud.  I don’t think the significance of this move is being appreciated, so I want to open this week by explaining it.

Everyone has been infatuated with the notion of cloud computing as anointing the small and destroying the strong—it’s been a kind of populist theme that’s evolved in parallel with the whole Internet revolution.  The problem is that it’s not a practical vision of the market.  The big money in cloud computing comes from two sources: PaaS-based offloading of SOA app components from enterprise data centers for backup and overflow work, and SaaS opportunities for SMBs and even some enterprises.  The big money’s still out there, and Verizon wants it.

AT&T is moving in this area too, and one interesting development there is that the company is working on the issue of asset creation/exploitation and not just the issue of APIs or cloud architecture.  They’re looking at how to take legacy assets in the OSS/BSS space and make them available as APIs for integration into higher-level services (by developers and, we’re told, by internal service architects).  They also want to integrate their smart appliances into their content services, not only as elements in a multi-screen strategy but as controlling tools to manage media and the experience.  Finally, they hope to formulate a general-purpose HTML5-based architecture for their proto-smart-device GUI so that applications will run across the full range of stuff that’s rolling out.
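
As an abstract sketch of the “legacy asset as an API” idea, here’s a thin HTTP wrapper in front of an existing back-office lookup so that higher-level services or developers can call it without touching the legacy system directly.  Everything here (the lookup, the path, the data) is a placeholder, not AT&T’s implementation.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def legacy_billing_lookup(account_id):
        # placeholder for a call into an existing OSS/BSS system
        return {"account": account_id, "balance": 42.50}

    class AssetAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            account_id = self.path.rstrip("/").split("/")[-1]
            body = json.dumps(legacy_billing_lookup(account_id)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    # HTTPServer(("", 8080), AssetAPI).serve_forever()   # e.g. GET /billing/12345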

Amazon, meanwhile, is apparently getting into the tablet business, or so say PC Magazine and some other sources.  My own view is that Amazon is going to compete with the Barnes & Noble Color Nook, a product that I’ve gotten myself and find enormously interesting, powerful, and helpful.  The issue here isn’t becoming a tablet player; it’s defending the ebook space against a competitor who’s using a tablet feature set to enhance e-reader value.  Every Nook that’s sold is a B&N camel’s nose under the ebook opportunity tent.  Amazon can’t sell Kindle books to that market, and of course B&N profits from the lock.  So Amazon has to become a player in what I’ll call the “t-reader” space, a space that’s almost a tablet but lacks the ability to host competing e-reader software and so still locks the consumer in as a traditional e-reader would.

Amazon needs to make sure it doesn’t lose customers to the Color Nook, because it can’t afford to let B&N create a legion of book-hungry semi-tablet enthusiasts who can’t get Kindle books without rooting their Nooks.  The question is whether Amazon can do something at this point, when the B&N device is out there and competing effectively, without giving away too much and hurting its profits even if it wins the t-reader race.

Taking the Pulse of Tablets

Some data from Nielsen suggests that tablet users are perhaps more focused on social media than on streaming video.  The data shows that while e-readers outnumber tablets by an enormous margin, people are relatively unlikely to be e-reading while watching TV, but are rather likely (presuming they have a tablet) to be using a tablet.  It doesn’t take rocket science to figure out that if reading a book is difficult while the TV is on, reading an e-reader is likewise.  However, it’s probably even more difficult to watch a video on a tablet while watching TV, which means that all these tablet-TV crossovers are really doing Facebook or Twitter.  The larger form factor makes social network access easier.

This doesn’t mean that video streaming to tablets is without adherents.  Verizon is going to provide free hotspot services to offload traffic from its 3G/4G network, and that trend is accelerating worldwide.  WiFi is a great strategy for pulling cellular traffic out of expensive 3G/4G facilities in locations where users are likely to settle for a while.  I’ve been calling the tablet user a “migratory” rather than mobile user because most tablet use will come in sites where users can sit and focus—home, work, or hospitality.  Some providers and some tablet players believe that there’s a strong tablet opportunity in WiFi alone, in fact.

Truth be told, we don’t know what the consumer will do with tablets—exactly—because the consumer doesn’t know.  That’s the big challenge of the mobile broadband revolution.  We’re building what I’ve previously called a “Life Fabric” that links us to services through appliances and ubiquitous broadband.  It’s like building an interstate highway system at a time when interstate travel was difficult or impossible.  What would it be used for?  We probably would have guessed that hauling goods would be the big application, but in fact it was personal mobility.  We’ll probably get some surprises out of the evolving mobile broadband space too.

On the enterprise side, Alcatel-Lucent released a study that says that 74% of workers believed their productivity could be improved via UC/UCC tools, but that two-thirds of this group don’t have the tools they need.  I’m a bit skeptical of this kind of study for a couple of reasons.  First, my own thirty-year history in market research suggests strongly that people aren’t very good at conceptualizing the value of something they don’t have.  Everybody thinks something is holding them back from grasping the productivity brass ring.  Second, both my own research and other dispassionate university studies I’ve looked at show that most “collaboration” that takes place in business is pairwise (two-party) and there’s been no evidence that video or much of anything else really facilitates this sort of collaboration better than what we already have.

There is one exception to this.  Most companies do have a problem with collaboration created by the fact that the parties involved aren’t in a stable location, and also are not equipped with a consistent set of tools and access to data.  That’s where tablets could come in, but tablet empowerment is independent of UC/UCC in that the application of collaboration doesn’t change, only the appliance you collaborate through.  Like consumer use of tablets, though, business use is a work in progress and we’ll probably have to see how the space matures.

The Network Core: Opto-Electrical Wars

The optical networking conference this week is opening some interesting issues about the future of “the core”, and probably even more interesting issues regarding networking overall.  While the focus of media coverage has been 100G Ethernet, the real question is how networks are valued, or made valuable.

We might call the current situation in the network core a contrast in speeds.  Optical fiber is used for virtually all important transport applications, and the state of the optical art sets a per-fiber capacity.  To get the most bang for your fiber buck, you’d like to use that capacity, obviously.  At the same time, the electrical interfaces in networking have their own capacity level.  You’d want to use that too, but the big question in networking is the ratio between these speeds.  Right now, for example, the realizable capacity of a fiber strand using DWDM is many times the electrical interface speed, such as that of 100GE.  That means that to use a fiber effectively you need to optically multiplex multiple electro-optical paths onto a fiber.  The greater that multiplier, the more optical work you do and the more dollars are transferred from routing to optics.
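
A quick back-of-the-envelope version of that contrast in speeds, with assumed numbers (80 DWDM wavelengths at 100G each; real systems vary widely):

    wavelengths_per_fiber = 80          # assumed C-band DWDM channel count
    electrical_interface_gbps = 100     # one 100GE electro-optical path

    fiber_capacity_gbps = wavelengths_per_fiber * electrical_interface_gbps
    multiplier = fiber_capacity_gbps / electrical_interface_gbps

    print(f"per-fiber capacity: {fiber_capacity_gbps / 1000:.0f} Tbps")  # 8 Tbps
    print(f"electro-optical paths to fill one fiber: {multiplier:.0f}")  # 80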

Service providers have been pushing to create a more opto-centric core network for the simple reason that it’s less costly.  Optical multiplexing of wavelengths is, by some carrier measures, a fifth or less the cost of accomplishing the same thing through routing.  But it’s also less flexible and less agile in the face of an outage.  Nevertheless, pressure from operators (like Verizon, which has had an optical core RFI floating for almost a decade) has forced vendors like Cisco and Juniper to come up with tighter coupling between their routing layer and the optical core.  Yes, everyone agrees that this will reduce core cost and thus core vendor profit, but since somebody is sure to do what the carriers want, both vendors knew they had to go along eventually.  It’s a demonstration that the core network is nothing but a bit pump, something that has always been difficult to differentiate and that will eventually become virtually impossible to differentiate.  Huawei, of course, is counting on that and looking to introduce a suite of low-cost products in the deep-core electro-optical space at the right moment.

It used to be that the unit cost of a fiber bit was high enough that efficient utilization of fiber was the only issue, but now that cost is declining as fiber technology improves.  The value of aggregating traffic in electrical devices to efficiently fill transport pipes is declining with it.  Under-utilizing fiber might well be the cheaper option in some metro areas even today (as it already is in the core), and that’s likely to be true in about 75 of the US’s 250-odd standard metro statistical areas (the old LATAs) within two years.

If you can’t justify electrical aggregation in a low transport-cost-per-bit world, then what else is there?  One answer is “features”, but everybody has long realized that you can’t get customer- or service-aware in the core, or in fact in any aggregated stream.  You need to touch at the edge, where you don’t have to sort out traffic to do that.  Another answer is to assert that the management processes are somehow more effective in an electrical network, but operators resoundingly reject that—it’s more expensive to manage higher-layer devices.

The net-net here is that it is not possible to defend core routing.  But where core routing loses, edge routing wins.  Services are what people buy, and so creating them is what creates the network’s value.  Services touch networks where you can afford customer touch, which is at the edge.  While we always draw networks as layered structures, even though the old OSI model is long obsolete, the factual map of the network would show the “service layer” and “network layer” touching in the edge device, the edge switch or router.

Cloud services in any form, and in fact the service layer in any form, explicitly undermine the core, because you can’t make the core anything other than a bit conduit in a service network.  Nobody will be able to defend a contrary position even two years from now, I believe.  BT’s discussion of vertical-market clouds is a proof point here; if you’re going to focus not only on cloud computing but on cloud computing with an industry-segment slant, you’re clearly moving to a level of service differentiation that’s beyond what the network can realize.  But because a vertical is a company characteristic, a connection to the company is an implicit connection to that vertical, and you can differentiate by industry at the edge.  Moral:  The future is all about services and their coupling to edge routers.  That’s what you need to watch to understand the fortunes of the big network vendors.

 

Intel, Dell, and Netflix: The Meaning

Intel has embarked on what might be the biggest battle of its corporate life, the battle to become relevant in the embedded system and appliance space.  While Intel has a license to produce ARM chips, it realizes that exercising it isn’t the answer to getting into the smartphone/tablet space.  Not only would it suffer in terms of profits after the license fees, it would be perpetuating someone else’s processor architecture in the hottest space in the market.  But wanting relevance isn’t getting it.

The big barrier for Intel to cross is getting big-name appliance OSs, which I’ve been calling “Embedded Control OSs” or ECOSs, ported to their architecture.  One reason why Intel got so into the MeeGo Linux model was that they could easily support the porting of that OS to their architecture.  They can do the same with Android (and in fact are doing just that) but it’s harder to get iOS moved over; Apple is in sole control there.  However, even getting the OS ported isn’t going to solve the problem because there are hundreds of smartphone and tablet models out there already, and more arriving every day.  Given that Intel won’t be ready with even a minimal offering until 2012 and won’t be competitive in performance until likely 2013 or even 2014, things could get tough for them.

The reason Intel cares is shown by another thread of discussion in its recent conference.  The company was very defensive about the future of the PC, saying it wasn’t going to become an irrelevant dinosaur in a world of tablet mammals.  Intel made the PC market, and still commands it (AMD’s efforts notwithstanding).  If that market takes a hit because consumers start buying tablets (which HP’s results say is already happening, but clearly there haven’t been enough tablets shipped to have had the effect), then the loss to Intel in PC chips has to be made up.  That means not just matching the volume of CPUs lost, either, because appliance CPUs have much lower prices and profits.  They have to command the appliance space.

The only thing Intel has going for it there is the fact that both the key appliances—smartphones and tablets—are going to enter a kind of “window of susceptibility” in late 2012.  In the smartphone space, the combination of 4G rollout and normal product cycles will put a large number of users in the market for a new phone.  In the tablet space, the Apple iPad onrush will have generated effective Android response, which means that mass-market rollout of tablets will be starting.  If Intel can be ready for that two-barreled market shift, they can be a player.  The question is how to do that.

What they need to avoid at all costs is linking up with Microsoft and Phone 7 on this point, something that we hear is being promoted by Microsoft/Nokia to Intel even now.  As tempting as re-launching the “Wintel” alliance might seem, Phone 7 isn’t the star Intel wants to hitch their wagon to.  Similarly, they need to abandon MeeGo in favor of Android simply because they can’t promote another OS at this stage; there are already too many out there and developers won’t latch on.

In a related matter, Dell reported its numbers and showed a sharp gain in profit, in contrast to HP’s dismal numbers and outlook.  The difference, of course, is that Dell has much lower consumer exposure than HP and a narrower product line, with fewer management cycles spent trying to organize all the profit pieces.  There are a number of interesting lessons to be learned here.  While the cloud is shifting compute focus back to a central data center model, it’s not driving the PC out of businesses.  Also, professional services aren’t the cure-all that many hoped they would be; a product company has to be a product company, or get out of that business and become Accenture.

Moving on to another topic, Netflix has been named the number one source of downstream traffic in the US, accounting for just under a third of all bandwidth consumed.  Obviously that means that video is the overwhelming majority of downstream traffic, since there are many other sources of video than Netflix.  This only further highlights the problem that operators face.  Not only are they being asked to capitalize increased traffic that their current all-you-can-eat pricing model doesn’t monetize, they’re subsidizing the cannibalization of their own TV revenue opportunity.  That’s particularly true for the cable MSOs, whose primary revenue stream has always been TV.  The markets are getting close to breaking here.

 

 

A Tale of Two Companies

HP has lowered its forecast for the year, and the threat of that move, which broke yesterday, caused tech stocks to shudder.  It raises a serious question about just what’s going on in tech, a question that doesn’t have any easy answers.  That means the future of tech as we know it may be…well…uneasy.  To understand the issue, we need to tell a Tale of Two Companies.

When companies like HP and IBM were founded, they were profitable based on the sale of business technology.  There was no personal computer, no tablet or smartphone.  In 1980, IBM and HP were duking it out for a growing business computer market, with Digital Equipment Corporation sitting between the two giants as the number-two player.  Then, in 1981, IBM launched the PC and the market (and world) changed.  Within two decades, PC competitor Compaq had bought DEC, whose market position was compromised because its CEO couldn’t read the handwriting on the wall.  HP bought Compaq.  IBM shed its PC business—the business that started it all—to Lenovo and became a pure business computing play.  HP tried PDAs, and bought Palm to take a run at smartphones.  IBM shed its networking group, and HP bought networking giant 3Com.  It sure seems like IBM and HP have gone in opposite directions, and certainly their current financial positions look very different.

Should HP never have gotten into personal computers?  IBM bailed, and won on that bet.  Same with networking.  But remember that IBM launched the PC and rode the PC wave convincingly for a time, and IBM networking (SNA) was the bastion of enterprise networking during the formative time of distributed computing.  IBM didn’t avoid new things, but they avoided things that had seen their best days.

For everything, there is a season.  The cost of consumer technology has fallen steadily, and that’s the most critical trend in the market.  With the price of gadgets falling from a time when they cost a worker an average of six months’ income to the point where they cost half-a-week’s income, people jump in and out of trends with alarming speed.  There’s no financial inertia to overcome.  IBM saw that, I think, and decided that market wasn’t going to sustain margins and was going to require making an increasingly large number of risky strategy bets (buying Palm comes to mind).  So they pulled back, betting on the more stable business market that they’d never walked away from.  HP, during that same period, had re-focused on the consumer, and it paid off for a while.  Not now, nor likely any time soon.

Consumers are now, so the classical wisdom goes, “abandoning PCs”.  Not so; they’re just doing what they’ve always done, which is to spend to self-gratify.  Every year PCs get more powerful, and yet every year the range of things you do on one diminishes rather than expands.  PCs used to be the gaming system of choice, but they’ve been displaced by low-cost game consoles and portable devices.  They used to be powerful productivity tools, and we’re now dumbing them down to thin clients because we can’t afford to support their complexity.  If power doesn’t matter in PCs any more, what does?  Cheapness.  IBM didn’t want to be in that kind of market, and HP doubled down on its bet there.

Would IBM be where it is today without the PC?  No way.  Would it be where it is today had it focused on the PC as HP did?  No.  IBM would also have wasted a zillion dollars and management cycles trying to defend a position in networking.  So what IBM did right wasn’t to avoid consumerism, or to avoid new things, but to jump on the bus when it was going IBM’s way, and to get off when it took a strategic detour.  HP has, in contrast and all too often, gotten on too late and gotten off way past its stop.

What could HP do at this point?  There’s no easy answer if one defines “easy” as being facile to execute.  There’s an easy one in terms of ease of discovery, though.  They need a strategy, a vision, that unites their purchases.  IBM, whether in mainframes or PCs, in networking or out, always had that vision and still does.  HP never had a unifying mission for its elements, only a unifying theme of making money from them.  They couldn’t be symbiotic because there was no ecosystem to cooperate within.  That made the whole less financially valuable, and more risky, than the sum of its parts.  And it’s going to be darn hard to fix this problem quickly.