Three Times Three Equals the Future of IT

Most of you probably aren’t old enough to remember the song “Three Little Words,” and I won’t bore anyone who isn’t interested in the source of my blog topic by recounting the lyrics.  They don’t apply in any sense to my topic today, other than that I’m proposing three different three-word triads and weaving them into a tale about the future of computing.

My first triad is “IBM, HP, and Dell”.  The real debate on the future of computing is whether it will be based on commodity x86 servers in some form, and IBM seems to be coming down on the “Yes” side of the question by selling off its COTS business to Lenovo.  This puts IBM in the position of either being dependent on proprietary hardware architectures like its old mainframe line, or being committed to evolving to a software/service company.  It also raises the question of what HP and Dell might have to do next, whether IBM is right or not.

Lenovo is likely to be more aggressive in pricing its servers, and thus will put price pressure on its competitors.  While the fact that Lenovo is a Chinese company might also scare some buyers off, Lenovo is already a trusted source of PC products like the venerable ThinkPad brand that was launched and popularized by IBM.  I don’t think that many will turn away from “IBM servers” in a Lenovo skin.

IBM’s challenge is clear, I think.  Without x86 servers, IBM is an enterprise-only hardware company, period.  There’s nothing it can really hope to sell to the smaller businesses, but in some ways that’s not going to create a new problem for IBM because the skid in IBM marketing practices has disconnected it from most buyers too small to have direct IBM sales representation anyway.  So is IBM going to kiss off everyone else, or are they going to rely on “software” to address down-market opportunity?  That’s where the real issues for its competitors will arise, and for that I want to augment my Three Little Words with the second triad:  Linux, cloud, and mobility.

Thirty years ago, we saw the decimation of the minicomputer market because only the leading firm (IBM) was large enough to provide a software platform that had enough buyers to attract third-party developers.  Today, in a world of Windows and Linux, it’s pretty obvious that IBM can’t hope to be that kind of player.  But IBM can’t abandon x86 without still having to compete with it.

One possible approach would be for IBM to endorse Linux, especially at the desktop.  That would hurt rival Microsoft, it would let IBM leverage the Linux software universe, and it would open opportunities for other platform rivals (like ARM).  To avoid becoming a new Red Hat, IBM could develop commercial for-pay software to run under Linux, making it perhaps the first big player to build a major commercial software portfolio for Linux.

If IBM is out of COTS then competitors like Dell and HP have a free shot at that space, though IBM’s sale to Lenovo will put them under margin pressure.  The problem might arise if IBM were to decide to push Linux.  Dell seems to be cozying up more to Red Hat, so such a move by IBM would be a direct assault on Dell’s plans.  HP’s cloud strategy is totally dependent on Linux too.  At the least, a Linux strategy based on quality commercial software could give IBM an entrée into competitive Linux accounts.  If IBM could make itself the Great Father of Linux (which is possible) it could even hope to force Dell and HP to depend more on Microsoft, which would set the future up as a Linux-versus-Microsoft war with IBM on the Linux (and for the data center at least, the winning) side.  That could well be IBM’s only good shot at a strategic rebirth at this point.

HP’s optimal, or even possible, responses to this situation are far from clear.  HP can’t run to the Linux side too quickly because they need Microsoft for the desktop, period.  That makes it hard for HP to jump too far into Linux to preempt an IBM strike.  And HP is very vulnerable in the cloud area because it sees its cloud position (which it characterized as the leading private cloud platform on its recent earnings call) as a major differentiator.  Yes, but HP’s cloud approach is really an OpenStack retread.  Were IBM to focus its commercial-Linux software push on cloud-specific application architectures in particular, it could almost put HP away.

Dell would face the same problems as HP, but with Dell the question is whether its “private” status would help or hurt its response.  Dell has the latitude to do something strategic now that it’s not facing quarterly scrutiny from the Street, but only if it doesn’t plan a quick re-IPO.  If it does something that hurts profits in the long term (or if something is done to Dell that has that effect) then it could end up in a downward spiral with no easy exit.

For Dell, though, the last of my latest triad is the key point.  Mobility is the only driver that can truly break the x86 market model.  The more reliant consumers and workers are on mobile devices, the more they shift away from Windows and the fixed “Wintel” architecture.  Such a shift would be a strong benefit to IBM, which is why it would be logical for IBM to work hard to drive what I’ve been calling “point-of-activity empowerment” as a trend.  Dell and HP would see their PC business revenues fall, but would also see the market’s focus shift to highly systemic software built around cooperative components.  This is a far cry from the simple cloud/OpenStack model that both HP and Dell want to embrace.  The question is whether IBM could push this trend, and whether Dell and/or HP could push back.  Which brings me to my last triad: song, services, and operationalization.

Singing, meaning marketing, is critical.  IBM’s greatest barrier to advancing (and thus Dell’s and HP’s strongest defense) is IBM’s lamentable lack of marketing insight over the last four or five years.  By letting itself shift to a sales-driven mindset, IBM shed the COTS opportunity long before it left the business.  You can’t be a player in x86 if you can’t sell to SMBs, and you can’t do that without a retail brand.  IBM got quiet, and saw its brand take on a kind of enterprise-elitist image.

Technically, the shift toward a mobility-driven future is a shift toward a new conception of “services” and “SOA”, not one based simply on SOAP and WS standards but one framed in the notion of agile, flexible, distributable, ad hoc, cooperative relationships among application elements.  IBM is probably the best player in all of the industry in making something like this work, but will it?

The barrier will likely be operational effectiveness.  The fact is that we have constrained our flexibility in networking and IT because the most flexible systems are too complex to manage at tolerable costs.  For most of the time I’ve been in networking, network management has been an almost career-killing assignment (and in most companies it still is).  It should be, must be, in the forefront of future service evolution simply because complexity rises too fast in agile systems—so fast it inevitably swamps results unless you completely rethink management.
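To put a rough number on that, here’s a back-of-the-envelope sketch (my own illustration, with assumed component counts): if a service is composed of n cooperating elements, the pairwise relationships a management system might have to watch grow roughly as n(n-1)/2, which is why flexibility outruns manageability so quickly.

```python
# Illustrative only: one common way to see why management complexity outruns
# agility.  With n cooperating components, the potential pairwise
# relationships to watch grow roughly as n*(n-1)/2.

for n in (5, 10, 20, 50, 100):
    pairs = n * (n - 1) // 2
    print(f"{n:3d} components -> {pairs:5d} potential relationships to manage")
```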

Can IBM address these three points, or will Dell or HP do a better job?  The answer to that one will likely determine which of my first three words are on the top of the tallest building in tech five years from now.

What SDN and NFV SHOULD Do Next

We’ve completed a couple of critical meetings in the NFV space (Team Action Week for the TMF and the regular quarterly ISG meeting) and we’re now facing an ONF event in the SDN space.  The NFV ISG has begun discussions on the “what next?” topic since its formal charter was to complete its specification work by January 2015.  Some of the comments that have been made on that question apply, I think, to the general situation in the world of telecom standards and technology evolution.

I’ve been a participant in carrier standards processes for decades, and one thing that’s been true from the first is that the processes presume a kind of supply-side mindset.  The standards activities presume that their goal is to develop the market, which implies that the market will wait for them.  In telecom in the Internet age, the problem is that we have a demand-driven market and, at the same time, the universal dialtone of the Internet over which at least some of that demand can be satisfied.  As a result, we see a clear separation of activities—things that relate to “opportunity” have tended to become disconnected from traditional standards practices while things that don’t still fit nicely.  Carriers, who still think in terms of standards, have thus hurt themselves by embracing progress in a form that’s not related to opportunity.

This mindset thing is important, I think, because even if we could identify specific things that carriers and carrier-oriented standards activities need to do to become more relevant, we can’t make the players adopt them if they have a belief set that’s fundamentally out of tune.  And that’s what I think faces both the NFV and SDN worlds right now.

What has made the Internet succeed isn’t standardization, it’s been innovation that’s driven the bus.  Innovation is inhibited by standards unless the standards frame agreed-upon foundations and leave a lot of room for evolution to address opportunities, and this rarely happens in the formal processes today.  The best search paradigm or best social network doesn’t emerge from standards, it emerges from people delivering something for use, from user experience, and finally from user selection among a reasonably large set of options.  That’s what the NFV and SDN worlds have to get to.  How to get there is the question.

My view is that the answer stares us all in the face.  What is needed to advance both SDN and NFV is implementation, meaning the proof-of-concept process.  Both the ONF and the NFV ISG encourage this sort of thing, but it’s my view that neither is driven by implementation even to the extent that the IETF is, and the IETF is the body that comes closest to getting the standardization process right.  The questions at this point are whether we can have implementation-driven activities in either area at this late stage, and if so how we could drive them.

My view on the NFV process has never wavered from the comments I made as soon as the Call for Action was released in October 2012.  It is critical that an implementation, a prototype, be developed for any useful standard to emerge, because only the evolution of such a prototype can provide real field experience with the technical tradeoffs that are inevitable in standardizing something.  The NFV process has gone a long way without that, and in the SDN world we’ve created a prototype that addresses only a small and largely unambiguous piece of the total functional picture—the “SDN Controller”—and ignored most of the knotty questions north of that famous set of northbound APIs.  As a result, we have a set of conceptions on how SDN and NFV must work that have yet to be proven optimal or even practical.  In fact, it’s my opinion that we’re only now beginning to have discussions in both the SDN and NFV spaces that should have been fundamental to the nature of both processes.

We can’t go back and undo the past, but I do have a message for both the SDN and NFV people: it’s time to focus on implementation, to the point where even the nature of the work done already becomes subordinate to what implementation can teach.  We should, in NFV for example, say that every PoC is to be aimed at disproving what’s been specified at least as much as at proving it.  We should be pushing the boundaries of all our assumptions to ensure that we’ve addressed the right problems and in the optimum way.  If we do that, we can create standards or specifications that serve the market as well as we possibly can.  If we do our best, address the opportunities in the optimum way, then we leave nothing on the table in terms of benefits and drive the most effective evolution of the network—whether SDN or NFV is driving it.  If we don’t somehow test out and either promote or reject all our assumptions, then we’re letting a bunch of theory drive a market evolution, and we already know from the OTT world that that isn’t going to work at all.

What Should We Do About Comcast/TWC?

The announcement that Comcast wants to buy TWC isn’t a surprise to those in the industry, but it’s still raising a lot of hackles because of fears that it would create a giant who would stomp on other competitors and on consumer rights.  The fact that Comcast has been a regular sparring partner with public advocacy groups over things like net neutrality doesn’t help, and what helps even less is that they bought NBC and so own a network/content property.  Now they want to mass up even more.  Should we let them?

The driver behind most M&A is the need to create higher levels of efficiency, and we know that the wireline voice, Internet, and TV markets are under a lot of pressure these days.  The question is whether this pressure is enough to really force M&A; is there another way out?  If there is no other way that cable companies like Comcast can respond to market changes, then blocking M&A could hurt both the industry and the consumers.  If there is, then the question is how regulators could force cable players like Comcast to apply some of the remedies.

The story we hear a lot about cable TV is that Netflix is eating their lunch.  You’ve seen the charts that show the number of customers for Netflix versus the cable, satellite, and telco TV providers, I’m sure.  They’re crap.  The important truth is that all the viewing data shows that people are not shifting from linear TV programming to Internet TV in any significant numbers; viewing hours for traditional TV swamp those for Internet TV.  Yes, people do watch more Internet TV, but that’s not a consolidation pressure on Comcast because those viewers still pay their cable bill.

The real problem for the cable companies is that households don’t grow on trees.  We add to subscriber counts for any form of linear TV by adding households, and that number grows slowly.  If you want to gain revenue and you can’t gain customers, what you’re left with is growing ARPU.  Cable companies added phone and Internet to do that, but the wave of additional service revenues has crested at this point.  The real truth is that it’s not Netflix that’s hurting cable, it’s mobile broadband.  People have shifted in their view of what the Internet is, from an at-home “second screen” to the connection they really need, and increasingly that connection is mobile.  As they spend more on mobile broadband they want to control spending on things like cable TV and Internet.

There is nothing preventing the cable companies from getting more into mobile broadband.  They can bid on spectrum, they can offer mobile services, and they don’t.  Even though they’ve been making noises about having a national WiFi network to create an alternative to LTE, there’s not a lot of progress on that front.  Cable companies are reluctant to get into the business of mobile services because it’s a high-capex process with (for cable companies) lower margins than cable TV.  Thus, the pressure to grow revenues becomes a pressure to earn more on TV.  That’s why Comcast bought NBC, and why it wants to merge with TWC.  They can wave a bigger stick in negotiations with networks, but they can also likely raise prices.

Comcast and other cable companies could improve their revenue future by offering new services.  They could improve their profit future by reducing their operations costs; cable companies are infants in OSS/BSS despite the fact that they have many of the same operations challenges as the telcos do.  They could accept, as telcos do, a lower level of profit and growth.  Comcast has a P/E multiple about a third higher than Verizon’s and a higher profit margin, even without the benefit of mobile services.  Are these other remedies better for the industry?

Therein lies the issue.  In my view, Comcast presents a classic example of the “good-for-shareholders” versus “good-for-customers” dilemma.  For Comcast’s shareholders all of the stuff that the company could do to make it a better business in the future would make it a worse investment in the present, and remember that we live in the “one-quarter-at-a-time” age of investment.  The fact is that basic cable is tapped out on the number of households, people aren’t buying more pay channels, and on-demand video from cable companies (or telcos) isn’t competing with OTT video.  So investors who want a return (which is all of them) are looking for Comcast to do something, and a merger with TWC is the answer.

It may sound like I’m voting against an approval of the merger, but that’s not strictly true.  I think that Comcast is simply playing the same kind of game that every other public company in the US is playing, which is to pander to the hedge funds whose trades drive their share prices.  If we don’t like the way that companies like Comcast make their business decisions—decisions to favor short-term trading profit over long-term growth—then we have to change regulations to fix it.  Not FCC regulations or anti-trust regulations either.  We’re barking up the wrong tree with the notion that the FCC should be doing this or that.  All the FCC can do in this case is fend off any egregious risk of undue market power and tune the M&A terms a bit.  They can’t guide Comcast and the cable industry in the right direction.  For that we’d need the SEC, or Congress, and it’s not going to happen because financial lobbies are too strong.

What could the FCC tune in the way of terms?  Well, one thing they can do is to get Comcast to agree to the 30% cap, which I think the company is already signaling they would accept.  Another is to get Comcast to make some progress on WiFi, perhaps by saying Comcast must agree to meet a specific hotspot coverage goal, provide for WiFi roaming, and price WiFi services independently of cable TV.  Finally, they could get Comcast to work on their Selling, General and Administrative expenses, which are twice Verizon’s on half the revenue (four times the SG&A per revenue dollar).  These would be fair concessions, I think, and ones regulators should consider.

Cisco’s Internet-of-Everything Feet of Clay

Cisco reported their numbers yesterday, and the results can best be described by paraphrasing the theme the Street took up: “they cleared a low bar.”  Cisco had reduced estimates, and as a result it met or slightly beat them.  Underneath, though, there was no indication that the company has come up with a way to address their problem with fundamentals.

In assessing a network vendor’s fortunes, I don’t care whether you believe that the cloud or SDN or NFV is the way of the future.  What I care about is whether you believe that transformation is cost-driven or revenue-driven.  In the former case, no matter which technology paves the road you take, you end up with lower everything because your buyers’ goals are focused on buying less of the stuff you make.  So no matter how many times Cisco talks about more mobile traffic or more connected devices, it doesn’t add up to anything better for Cisco unless somebody is paying more.

The number one problem Cisco has was reflected by a sentence on their earnings call:  “First, the Internet of Everything has moved from an interesting concept to a business imperative driving opportunities across every major vertical.”  Baloney.  The Internet of Everything is a media event and nothing more, and that’s the problem Cisco has.  They have focused so much on telling people that they need to buy more gear to carry more traffic to support the glamorous future, they’ve forgotten that their buyers have to hold their own earnings calls or balance their own budgets.  You can’t tell the rest of the food chain to lie down on a platter so you can facilitate your own devouring role.

To be fair, Cisco doesn’t have an easy problem to solve.  As a market leader, they can’t expect to gain much in the way of market share, which means that even if they promote a massive transformative vision of future benefits to drive future spending, the process of navigating the transition to that future is as likely to inhibit current spending as to expand it.  For a competitor like Juniper or Alcatel-Lucent, a great vision could drive a market-share shift if buyers believed that vendor’s approach better prepared them for the future.  Cisco can’t hope for that, and in our world of live-by-the-quarter-die-by-the-quarter, Cisco doesn’t even want to get a mild cold in terms of downside.  That’s why they’re increasing their dividend.

So what should Cisco do?  I think the answer is obvious.  They have to prepare for the next generation of the cloud, the generation where it focuses not on reducing the cost of what we do now, but on expanding the value of, and facilitating the transition to, what might be done down the road.  The future is a set of small application components that are dynamically composed to create support for workers and experiences for consumers.  It’s bound together with everything-as-a-service not with the Internet of Everything.  Functionally it’s a blend of the cloud, SDN, and NFV in a single grand package.

They’re not going to do it, though.  Cisco is going to fast-follower itself into a position where following is going to be difficult, and the reason for this is that it still can’t come to terms with software as the framework of the future.  That vision of a dynamic, composed future that I talked about started with “small application components”, remember?  This is a software notion, and like all the network companies Cisco is weak on the software side.  They’re particularly weak on the management vision, and if there’s any specific single thing that’s critical to creating that glorious vision of “everything-as-a-service” it’s service management.  That’s the bridge between the “now” and the “future”, because the more atomic and transitional you make a service, the more cost-effective your process of creating and sustaining it must be.  There’s not a network vendor alive who really values, really understands, network/service management and operationalization.  Even Ericsson, who bought OSS/BSS giant Telcordia and so has the greatest stake in the management game, sees that as a move to promote professional services, not as the fundamental basis for its own—and the industry’s—transformation.

Cisco mentions “the cloud” a lot, but the cloud is driven in its present form by the notion that it reduces spending on technology.  They talk about SDN too, and that has the same cost-based justification.  They never mentioned NFV on their call, and NFV alone might have the ability to bridge the world of network equipment with the world of management in a way that Cisco could exploit.  I think Alcatel-Lucent sees that possibility, which is why they want to use NFV and telco servers as the link from their current bit-pushing position to a service-pushing position.

The challenge for Cisco, and even for Alcatel-Lucent, may be that NFV isn’t enough anymore, and ironically Cisco’s call may be the best evidence that’s the case.  I get PR all the time from players who want to talk about their “NFV orchestration”, which is essentially OpenStack.  Well, here’s a flash for everyone.  Don’t send me this stuff because I don’t believe it for a moment.  OpenStack is not NFV orchestration and it never will or can be.  What OpenStack is, at the most, is a platform for deploying the components of multi-component virtual functions, a little implementation piece inside a grand model of future services that still remains undefined.  It doesn’t address management, it doesn’t support all the service models needed, it doesn’t define a single virtual vision of how resources, services, and operations all come together in the world of the future.  But OpenStack is poisoning NFV because it’s letting vendors get away with NFV claims that will never actually reap the benefits that operators want.  Cisco could jump on this point and run with it, but as a fast follower, they would have to follow NFV to where it’s being taken, and that is not the place Cisco needs to get to.
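To make that distinction concrete, here’s a deliberately simplified sketch in Python.  It is conceptual only, uses no real OpenStack APIs, and every name in it is invented for illustration; the point is the gap between “host a component,” which an OpenStack-class platform covers, and the service-level modeling, chaining, and management binding an NFV orchestrator still has to add around it.

```python
# A conceptual contrast only, with invented names and no real OpenStack APIs:
# what a cloud stack covers (place and start a component) versus what an NFV
# orchestrator still has to wrap around it.

def cloud_deploy(vm_image, flavor):
    """Roughly the OpenStack-class job: host one component somewhere."""
    return {"instance": vm_image, "flavor": flavor, "state": "running"}

def nfv_orchestrate(service_model):
    """The larger loop an NFV orchestrator needs; the commented steps are
    the parts a pure cloud stack does not address."""
    deployed = [cloud_deploy(vnf["image"], vnf["flavor"])
                for vnf in service_model["vnfs"]]
    # connect_service_chain(service_model)       # data-path stitching
    # bind_management(service_model)             # management/operations view
    # integrate_legacy_elements(service_model)   # the non-NFV parts of the service
    return deployed

model = {"vnfs": [{"image": "vfw:1.2", "flavor": "small"},
                  {"image": "vdns:3.0", "flavor": "small"}]}
print(nfv_orchestrate(model))
```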

Cisco’s most obvious technical theme on their earnings call was “architectures”.  They “believe our focus on architectures is really paying off. As the pace and complexity of IT increases, Cisco’s ability to bring together technologies, servers and solutions across silos should continue to drive differentiation, preference and, over time, gross margins.”  Nice sentiment, John, but not true.  What drives gross margins is the ability to generate benefits, not the ability to help your buyers spend less on what you sell.  You are running out of time to figure that out.

New Announcements, New NFV Directions

I’ve been talking all this week about the evolution of NFV as a concept, leading up to the ISG meeting in Malaga next week.  One topic that’s been covered is the slippery distinction between “cloud hosting”, “NFV hosting”, or just plain hosting of something.  Some of the ISG use cases, such as IMS, EPC, and even CDN, are really inherently multi-tenant service applications that are instantiated and then simply sustained.  That’s much more like a cloud application, in my view.  So a logical question to ask is “What about other use cases that aren’t multi-tenant?”  Turns out they can pose some interesting issues, and we had an announcement yesterday from NetSocket that illustrates that “NFV” can mean a lot of symbiotic things.

The best examples of single-tenant use cases are found in “service chaining”, where features like firewall, DNS, DHCP, load balancing, encryption, and application acceleration are offered as service elements not only per user but also per access point.  These services are typically aimed at businesses, where the price of the service and the cost of the appliance are higher, and service chaining aims to replace premises boxes with hosted software features linked into a chain of connectivity via NFV.  Presumably, as is the case with many early PoCs and trials of this, the virtual functions are hosted in the cloud.
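For concreteness, here’s a minimal sketch of what a per-site chain description might look like; the classes and field names are my own illustration, not an ETSI data model or any vendor’s format.

```python
# A minimal sketch of a per-site service chain description.  The classes and
# field names are illustrative only, not an ETSI data model or vendor format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualFunction:
    name: str           # e.g. "firewall", "dns", "application-acceleration"
    image: str          # the software package that replaces the appliance
    vcpus: int = 1

@dataclass
class ServiceChain:
    site_id: str                                   # one chain per access point
    functions: List[VirtualFunction] = field(default_factory=list)

    def deploy_order(self) -> List[str]:
        """Traffic traverses the chained functions in list order."""
        return [vf.name for vf in self.functions]

chain = ServiceChain(
    site_id="branch-042",
    functions=[
        VirtualFunction("firewall", "vfw:1.2"),
        VirtualFunction("dns", "vdns:3.0"),
        VirtualFunction("application-acceleration", "vacc:2.1"),
    ],
)
print(chain.deploy_order())
# In the early PoCs and trials mentioned above, a chain like this would be
# instantiated on cloud data-center resources.
```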

But should they be?  NetSocket doesn’t think so, at least not in all cases.  They’ve announced the “Virtual Edge”, what the company calls a “MicroCloud Server” that replaces traditional service termination devices with an x86 platform that can host virtual functions just as well as a data center could.  The MicroCloud concept aims to provide the same kind of agility in service features, immunity from box change-outs, and COTS hosting that NFV aims to provide overall, but with what might be two important differences—customer specificity and SDN integration.

Edge hosting is a two-edged sword in terms of NFV issues.  On the one hand, things hosted in an edge device on the customer premises aren’t using shared resources and so can be managed as discrete devices by both customers and providers.  If you have shared infrastructure you have to use policy proxy management (though that’s not yet accepted, it’s true nevertheless).  But on the other hand, if virtual functions are spread out more end-to-end, it’s hard to avoid the conclusion that NFV will have to connect and deploy functions across legacy infrastructure and SDN, which extends the management scope to legacy technology.  That’s not in scope for the ISG.

NetSocket proposes to address the connectivity issues in an innovative way that mixes tunnel VPNs and vSwitch technology to configure local connectivity and also manage WAN connections.  This makes the entire network connectivity offering into a managed service, and, as I’ve noted, because you’re located on the customer premises you have end-to-end surveillance, so “managed service” can mean just what it says.

The notion of edge-hosting NFV functionality isn’t new.  RAD announced what it calls “Distributed NFV” earlier, and Cisco and Juniper both have the ability to host service functions on their edge routers (there are ETSI PoCs involving all three of these vendors).  The major difference is the nature of the edge device; with NetSocket you’re creating both SDN and NFV in at least proto-form with a software overlay that’s centrally managed.  You can see that this could result in lower real-estate costs at the edge and also potentially offer a range of edge platform combinations (servers and local switching) to fit the requirements of all the sites.  How all this will price out—whether MicroCloud will be cheaper than edge boxes with Linux boards—isn’t clear at this early stage, but it seems likely that a server solution would generally offer lower price points.  That means that the overall service management efficiency will likely set operator prices and profit levels, getting us back to my point that it’s management practices that make or break NFV overall.

Some sort of hosted managed-service strategy is critical for the SMB space because high labor costs and lost time due to errors and maintenance boost SMB network TCOs to about double those of large enterprises.  SMBs are often eager to unload network support and features from local devices to a managed service, and according to my surveys are also the class of business most likely to be looking to add new features, beyond even what they currently host in a traditional way.

Edge-driven agility poses some interesting questions for NFV.  An edge device (a server like NetSocket offers or a Linux board inside a traditional network device) could easily host agile service elements and sustain current management practices.  If the vendor provides a good central tool for deployment of the agile features, it meets all the basic goals of NFV without requiring a resource pool.  One of the strongest value propositions for the edge-driven model of “NFV”, for example, is the fact that you don’t have to build that resource pool to deploy virtual functions.  Every customer gets a MicroCloud server, so costs expand at the same rate that revenue expands.  With central hosting you build a pool and hope you can fill it with revenue dollars quickly.
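The first-cost contrast is easy to see with a toy calculation; the figures below are assumptions I picked for illustration, not operator or NetSocket data.

```python
# Illustrative arithmetic only; the figures are assumptions, not operator or
# NetSocket data.  Edge hosting pays as it grows; pooled hosting carries a
# first cost that has to be filled with revenue.

edge_server_cost = 1200        # assumed per-site edge server
pool_first_cost = 60000        # assumed up-front shared resource pool build
revenue_per_customer = 900     # assumed annual service revenue per customer

for customers in range(0, 101, 20):
    edge_cost = customers * edge_server_cost
    revenue = customers * revenue_per_customer
    print(f"{customers:3d} customers: edge cost {edge_cost:6d}, "
          f"pool cost {pool_first_cost:6d}, revenue {revenue:6d}")
```

At low customer counts the pay-as-you-grow edge model strands far less capital; the pool only looks better once it’s well filled.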

You could base the edge-driven model on NFV itself, of course.  In fact, you could use edge-driven NFV to launch services, in order to manage first costs, and migrate gradually to a shared resource pool as customer volume or service demands dictate.  You could also use the edge-driven model to host virtual functions in areas where customer density is too low to support a resource pool at all.  This approach would raise a question, though: do all edge-driven NFV hosting points have to look like virtual data centers or clouds, run on x86 and Linux, or what?  Agility at the edge seems smart, but to exploit it you have to have an open framework for coordinating service feature deployment to both central cloud resources and to what might be diverse edge facilities.

We may have to address these questions pretty quickly, too.  The tradition of the Internet—bill-and-keep rather than settlement for transport—has essentially killed any hope of end-to-end QoS because you can’t distribute payments for it.  If neutrality is dead, then QoS settlement might come along, which would then allow overlay VPNs to provide similar service to the more expensive provisioned VPNs of today.  That would favor an edge-driven model of managed services, and would also tend to distribute agile NFV-like functions more to the edge, changing the whole notion of function hosting.  Exciting times are ahead, I think.

Can Mobile Extricate Itself from Nonsense?

With Mobile World Congress coming along there’s plenty of buzz in the mobile space, and more than a few announcements are likely at the show.  Some will probably mix SDN and NFV with mobile, maybe with each other, with the cloud, and possibly with the Internet of Whatever’s Next.  Maybe it’s time to do a little level-setting in advance of the show.

Mobile is hot because it’s profitable; ARPU is still growing for many operators in an industry where growth seems a thing of the past.  Nevertheless, mobile isn’t immune to the issues that have plagued operators overall, it’s just not as far along the curve of decline.  The mobile operators in my survey indicated that they believed major steps would have to be taken to protect mobile profits by next year, which puts mobile’s inflection point about two to two-and-a-half years behind wireline’s.

One of the hot buttons with mobile is lowering the cost of IMS and EPC, the overlay technologies needed for advanced mobile services.  However, the canny planners in my survey indicated that the real challenge with mobile wasn’t on the cost side but on the services side.  They point out that mobile social advertising, for example, could have been theirs to address had they been more agile.  True, of course.  The question at this point, and the source of any linkage between mobile and the other hot technologies, is how this gets done.

It’s fairly unlikely in my view that operators will be able to leverage what could be called “IMS applications” meaning those with a specific linkage to the IMS environment.  RCS and other such applications have simply not been able to compete with OTT services, and the problem is less one of agility than one of uniqueness and utility.  Why pay for something from an operator when you can get something like Google Apps/Docs for nothing, and have it work on any of your devices?

I remember back in the ISDN days, when CIMI had signed the NIST Cooperative Research and Development Agreement and I was involved in carrier ISDN service planning.  Operators would come running into a meeting, excited, and shout “I’ve discovered a new ISDN application!  It’s called…file transfer!”  Well, I tried to explain that was already being done, without much success.  The point is that you can’t, as an operator, jump into “new” mobile services that are just versions of stuff that OTTs already create.  IMS could be a factor in such services but so far narrow thinking on everyone’s part has prevented any useful vision of how that could happen.

So we have a lot of focus on costs instead, and this aligns with the SDN and NFV stuff.  There is little question that SDN could be used to create a different model of EPC, and little question that elements of both IMS and EPC could be hosted by NFV.  We already have claims in both these areas, most of which are pretty much unadulterated hype.  The big issue is that if you replicate every element of EPC using hosted technology or simply substitute one kind of tunnel for another using SDN or something else, you’re not moving the ball much.  You really need a complete metro makeover, something that unites all the services into a common structure, including CDN, mobile, wireline, cloud…you get the picture.  Need it, won’t get it.

We have the same kind of lowball mindset in network evolution that we have in mobile service evolution.  It’s easier to plant an NFV or SDN flag on a product that isn’t rightfully either of the two than to actually do something constructive.  That’s why I don’t bother to go to trade shows anymore; they’re just mindless attractive billboards.

Virtualization should bring about a whole new model of networking—from the service level down to transport, from logic to management.  In fact, it has to do that or it won’t change things enough to bring about the kind of flood of new benefits, new revenues, and new spending that everyone seems to want.  You don’t define the next generation of the Internet by arguing that more things will be on it; you define it by defining a framework that makes it so much more valuable for what it does, not for what it carries, that people can’t help but invest in it.

A study just reported that Euro telcos will have to drive massive cost cuts to remain sustainable businesses.  That’s not true; what they need is to drive massive changes on the revenue/services side.  Cost management helps hedge fund investors but not the telcos or their customers, unless it can be paired with reasonable service upsides.  A business that survives by continual cost-cutting vanishes to a point.

What could be done about this?  Well, I have a suggestion that’s not been tried before.  Right now, telcos in the US and Europe both face a common problem, which is that they can’t collaborate on technology without risking anti-trust action.  That forces them to take initiatives designed to promote their own collective health out into a standards activity that will end up being run by vendors because there are more of them than there are telcos.  Suppose regulators were to allow pure carrier standards activities, something like a “CableLabs”, that would drive collective technology changes to facilitate growth in revenues?

We have a broken industry, and it’s because we have a broken process.  We are incapable of funding innovation either from the buyers’ or sellers’ side, and largely because we’ve created business structures and regulatory policies that have anchored us in practices of the past while the consumer and OTT competitors have moved on.  Fixing the problem won’t be easy, but it will darn sure be easier to do now than it will be five years from now.

We, the “networking public”, can do something too.  Stop reading garbage and believing junk claims.  Demand better, because if that’s not done then it’s impossible to sustain the industry momentum needed to make our revolutions real.

NFV: Competitive and Cloud Dynamics

I wrote on Friday about the demand-side evolution of network function virtualization and the importance of framing an NFV story within a broader revamping of service management and operations.  Today we have two more NFV-related data points that again demonstrate that moving to a software-driven vision of network features is complicated—and likely to be competitive.

Alcatel-Lucent, Fujitsu, and NTT announced a research partnership aimed at developing the ideal telecom-industry server, and said that this relationship would expand “with the goal of creating advanced technology, increasing its level of sophistication, and achieving early feasibility checking. The project aims to establish new server architecture that will enable the three partners to develop service applications at an early stage….The desire of the involved companies is to see this technology spread and become a global standard.”

It’s pretty obvious that NFV is a “service application”, and Alcatel-Lucent’s CloudBand is the only full-scale NFV implementation that is public and that has specific credibility with operators in my surveys.  Of course, all of NFV and the cloud runs on servers, and Alcatel-Lucent doesn’t make servers.  The fact that this venture would produce servers and server standards suggests to me that Alcatel-Lucent has determined it needs to be more than just a supplier of software for NFV and carrier cloud; it has to be a full-service solution provider.  That could speak volumes for how a telecom giant sees the NFV space.

NFV is a standards-based initiative, right?  Well, if we assumed that NFV were driven by completely open standards and processes you could argue that having a complete NFV solution from servers to software wouldn’t be necessary.  In fact it might be an expensive diversion of resources.  But if NFV is in fact going to be rolled out by major vendors in generally proprietary ways, then you either have a complete solution or you’re a market non sequitur.  Alcatel-Lucent’s partnership here would be a good defense against that kind of marginalization.

Why might they believe in at least semi-proprietary NFV, though?  I think there are two reasons: first, the exploding scope of benefits operators are looking for NFV to produce, and second, a specific competitive dynamic in the market.

NFV started with the notion that hosting functions on COTS would be cheaper than middle-box technology from proprietary sources.  That’s true to at least some extent, but the key point is that it’s a benefit that can be realized almost per-middle-box.  I’ve pointed out that further discussion with operators, including those actually driving NFV, shows that they now see improvements in opex and service velocity as the key benefits.  These benefits are achieved only if you redo operations practices overall, not just host some functions.  That means a lot of scope and change are needed, and most of it is technically out of the scope set for itself by the ETSI NFV ISG.  That’s why I suggested a TMF partnership would be in order.

But so would a full-service NFV strategy, because for now at least, we have no standards to define the holistic kind of revolutionary operations future that operator goals now dictate.  Operators tell me they are more than eager to generate lower opex and faster service velocity.  That, plus no standards, equals lots of specialized vendor opportunity.  Given its position as a leader in telco infrastructure and integration, it would be amazing if Alcatel-Lucent didn’t see dollar signs here.

Even more amazing if they didn’t see competition, and guess who I think Alcatel-Lucent is almost surely focusing on?  Cisco, the arch-rival in the routing layer.  Cisco, according to my surveys, has been showing operators a proprietary approach to NFV and of course Cisco’s UCS means they have servers to include in their offering.  Cisco also has the ability to marry SDN and NFV for end-to-end services, so they can take a step outside strict ETSI NFV to enhance benefits.  Obviously, Alcatel-Lucent’s SDN strategy also supports that kind of extension, making the two obvious rivals for the Big Pie.

For Alcatel-Lucent to fight Cisco with their server hands tied behind their back is a big risk.  Of course, that means that for server players like Dell and HP to fight for NFV share without a specific and highly differentiable software strategy is similarly risky.  We may be seeing the first steps in the bulking up of NFV into complete solutions.  If so, the question of just how wide the benefit scope of these “complete” solutions will be looms large.

Against this backdrop, we have the second development—Metaswitch’s announcement of Clearwater Core, a subscription-based, professionally supported version of its open-source Project Clearwater IMS.  I’ve noted a number of times that Metaswitch had the only example of a cloud-and-virtual-function-ready network service I could find, and with Clearwater Core they’re stepping out of being a validation framework into being something you can deploy.  Again the question is “why now?” and I think the answer is a combination of the evolution of NFV and regulatory policy.

You could adapt Project Clearwater to a standard management and orchestration framework, if there were one.  If NFV is indeed going to be deployed using more proprietary tools to address the operations and service agility goals, then the Project Clearwater code will likely have to be customized for the specific platforms of the major NFV providers—likely the major network equipment vendors.  Clearwater Core provides a professional services framework in which that can be done, which is smart.

Regulatory changes are creating a demand-side driver too. The evolution from TDM to IP voice for wireline voice services raises the question of how you provide next-gen telco-grade voice.  If you look at wireline voice over IP, you find it’s a kind of immobile subset of mobile LTE IP voice.  You can build a wonderful telco-quality voice service on a combination of WebRTC, SBCs for session data path assurance, and IMS for subscriber management.  You don’t need mobility management for wireline so you don’t need full IMS.  Clearwater Core could be just the ticket for semi-OTT voice services, not as laissez-faire as traditional Skype-like voice services and likely more easily harmonized with the FCC’s and other regulators’ goals of maintaining high-reliability voice services with lawful intercept and E911.

Clearwater Core may be demonstrating how close “the cloud” and “NFV” really are.  Some network applications that are very static, like IMS, may well be more cloud-like.  Some cloud applications like CDN or transaction processing may be more NFV-like.  Certainly an implementation of NFV that supersets virtual functions with high-level service composition could be extremely powerful in the cloud.  So the cloud could funnel away some applications of NFV, and could empower a larger mission at the same time.  I bet vendors are looking at both possibilities.

Can the TMF and the NFV ISG Unite in Purpose and not Just Chronology?

The next two weeks are going to be very important for network functions virtualization (NFV), and also for the evolution of operations practices overall.  Next week, the TMF launches its Team Action Week activity, which will include the presentation of a number of “Catalyst” demonstrations on NFV management.  The following week is the quarterly meeting of the ETSI NFV ISG, and there are sure to be many discussions around the growing field of proof-of-concept demonstrations of elements of NFV implementation.

The reason this is all so important for NFV is that the concept is evolving in a mission sense even as it’s developing in a specification sense.  Operators, as I’ve noted, are no longer seeing reductions in capex as being the primary drivers behind NFV deployment.  Instead they see reductions in opex and improvements in service agility as the benefits they’ll try to reap, and the challenge is that both of these are likely at least in part, if not largely, out of scope for the ISG’s work.

It’s not that the ISG doesn’t want to manage services or create them quickly.  The ISG’s focus is on how to deploy services by instantiating virtual functions on servers instead of connecting physical devices.  This necessarily includes how you would define services based on network functions and how you’d organize and deploy the software components that make up the virtual functions in the first place.  It also includes how virtual network functions are managed, and how existing network features become “virtual network functions” to be deployed.  All this is in the original ISG mission statement: replace purpose-built appliances with software running on commercial-off-the-shelf (COTS) servers.

The challenge is that most services, even in the long term, will include many legacy components, and the management and deployment of the service as a whole are part of this “out-of-scope” stuff.  The smaller the contribution of virtual functions to a given service, the less impact ISG work can have on overall service operations—both cost and agility.  You can make the NFV part of a service as cheap to operate and as agile as you like, but if you can’t deal with the majority of service elements then they anchor your practices and costs and you don’t reap much overall benefit.  In the critical period of transition to NFV, when virtual-function-based service components are minimal, the orchestration and management tools designed to facilitate their deployment would have minimal impact on operations overall.  That could mean little or no early benefit, making it harder to roll out NFV.
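A quick back-of-the-envelope illustration of that anchoring effect (the percentages are assumptions, not operator figures):

```python
# A back-of-the-envelope illustration of the anchoring effect; the numbers
# are assumptions, not survey data.

total_opex = 100.0         # index the service's operations cost at 100
vnf_share = 0.10           # assume virtual functions are 10% of the service
vnf_opex_saving = 0.50     # assume NFV halves opex on that slice

overall_saving = total_opex * vnf_share * vnf_opex_saving
print(f"Overall opex reduction: {overall_saving:.1f}%")   # 5.0%
```

Even doubling the virtual-function share only gets you to about 10 percent, which is exactly the transition-period problem.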

What makes this month so interesting is the juxtaposition of TMF and NFV ISG activity, a coincidence in timing that seems to cry out for a unity of purpose.  The TMF is normally the standards-setter for management, and among the Catalyst proposals for the June TMF meeting in Nice are some that meld NFV management with TMF principles.  Unlike the ISG, the TMF has the broad responsibility for service operations and agility, so everything that represents the current goals of NFV is within the TMF’s charter.  However, the TMF’s turf is rooted in current OSS/BSS systems, and the practices here have traditionally moved at a pace that makes “glacial” seem downright speedy.  So what we have going on at the end of February is the intersection of two activities that have a common goal but different missions and different practices.

There is a liaison between the two bodies, but it’s not clear how the traditional liaison process could drive the kind of almost-interlocking cooperation needed.  The operationalization of networking depends, in my view, on the creation of a series of functional-system models that might then decompose into lower-level systems.  For example, the function “firewall” should be modeled and managed uniformly regardless of implementation, but when you look at the function closely, it could decompose into the provisioning of a device or into the deployment of some number of NFV VNFs.  This is the process I’ve always called derived operations, and the model I’m describing is an evolution of the TMF’s GB942 contract-mediated operations.  TMF, then, has defined the high-level future of operations, and the ISG is working on the application of (unspecified) operations principles to virtualization, the latest problem to be faced.
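To make the decomposition idea concrete, here’s a minimal sketch; the class and method names are my own illustration, not TMF GB942 artifacts or ETSI ISG constructs, and a real model would of course be data-driven rather than hard-coded.

```python
# A minimal sketch of the "derived operations" idea: one functional model
# ("firewall") that decomposes, at deployment time, into either a legacy
# device or a set of virtual functions, while management stays uniform at
# the functional level.  Names are illustrative, not GB942 or ISG artifacts.

class FunctionalSystem:
    def __init__(self, name):
        self.name = name

    def decompose(self, implementation):
        """Return the deployment steps implied by the chosen implementation."""
        if implementation == "appliance":
            return [("provision-device", "edge-firewall-box")]
        if implementation == "nfv":
            return [("deploy-vnf", "vfw-packet-filter"),
                    ("deploy-vnf", "vfw-policy-manager"),
                    ("connect", "service-chain")]
        raise ValueError("unknown implementation")

    def status(self, element_states):
        """The function is 'up' only if every element it decomposed into is up,
        however it happened to be implemented."""
        return "up" if all(s == "up" for s in element_states) else "degraded"

firewall = FunctionalSystem("firewall")
print(firewall.decompose("nfv"))
print(firewall.status(["up", "up", "degraded"]))
```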

The cooperative challenge, I think, derives from this TMF-top-down, ISG-bottom-up decoupling.  The logical place for the two to meet (in the middle, hint! hint!) is somewhere neither is assured of getting to any time soon, if at all.  I think that it would have been pretty easy to set TMF-evolved frameworks for the higher-layer functional-system modeling and then let the ISG define the dissection of the NFV side of the implementations.  It might not be as easy to harmonize to that vision at this point, either technically or politically.

Operators for sure think that the ISG is more theirs to control than the TMF, so they may well see the ISG work as the preferred path.  However, the whole contains all of the parts and not the other way around, so the fact that the TMF does have the scope to present a total solution could be the deciding factor.  That’s particularly true if the ISG doesn’t shed its tendency to narrow its focus to accelerate its process.  The media likely doesn’t care much as long as they can say somebody is losing big, so you’ll likely read about that point as winter turns to spring.

I think the best answer would be to somehow combine the activities in a constructive way.  Orchestration principles, management principles, could be applied to the NFV work in particular, but to networks in general too.  Everyone who’s read my blog here knows that I architected such a model of derived operations, basing it on GB922/GB942 principles and the ETSI ISG E2E architecture.  As the homepage of the CloudNFV website declares, “Tom Nolle, the original Chief Architect, contributed the entire design for open publication and use without restrictions.”  There are likely other models that would also serve at least as well, but just the fact that there is one proves that technical unity is possible here.  I think the evolving NFV goals of operators, the opex improvement and service agility, say that NFV/TMF unity of purpose is essential.

Is Our Market Insight Based on the “New Math?”

Analysts all have something in common, which is prognostication.  One of my missions is to predict, to analyze, and there are a lot of people out there who think this sort of thing borders on just “making things up”.  Personally, I welcome a chance to look at real objective data and see how it compares with what I’ve been saying.  I just wish we had some, and you can see why in some data on Twitter and the hybrid cloud.

Twitter reported their numbers, and their customer growth failed to impress the Street, who took their stock down as a result.  Twitter and some of the media have blamed this slowing of growth on the difficulty in using the service.  Gosh, correct me if I’m wrong but haven’t we been using Twitter for ages here?  Why is it that the learning curve that everyone went through is suddenly too steep?  Could it be that there’s another explanation?

Here’s my off-the-wall speculation.  Twitter’s growth is slowing because most of the people who want to be bothered with it are already engaged.  Social media is a form of entertainment, too, and at some point people have actual lives that they have to get on with—work, keeping up their homes or apartments, eating and sleeping…you get the picture.  Some people don’t want to know what their friends’ dogs and cats and kids are doing at every moment.  Facebook has experienced similar challenges for what I think is the same reason.  We don’t all have the time to Tweet back and forth, or we simply don’t want to.  So if Twitter changes its UI or process to make it easier to use, they’ll likely then have to come up with a different reason for lackluster subscriber growth a couple quarters down the line.

The Twitter situation also demonstrates the issues associated with online advertising as the revenue driver for everything.  There are two factors that influence the revenue yield for advertising—one is the effectiveness of the media in reaching the audience desired and the other is the total ad budget, which is related to the retail value of the market being addressed.  We’re not going to grow GDP in response to changes in Twitter’s UI, and the time available to be online is a bit of a zero-sum game, so what Twitter gains Facebook loses.  At some point, don’t we have to actually grow revenue somewhere to create an enduring market?  Maybe I missed that part of economics.

The cloud has its own issues of realization.  While we’ve had reports of extravagant growth in public cloud services and private cloud adoption, the problem is that nobody much breaks out audited financials on public cloud revenue and statistics on private cloud adoption are, well, cloudy by nature.  Fortunately we now have a survey, from Jefferies, that offers a bit of statistical rigor, but only a bit.  This one focuses on the hybrid cloud, and it’s a survey of VARs.

The survey says that hybrid clouds are mainstream now, that almost a third of the workloads of the VARs’ customers were on private clouds and a bit less than a tenth on public clouds.  It says that new cloud applications outrank migrated apps by about 2:1, that most usage of the public cloud is for development and the least for mission-critical apps.  There’s a lot of interesting data here, in fact, but this is one of those cases where I have some issues with the findings.

I’ve done end-user surveys of IT and networking for over 30 years now, and I’ve learned a couple things along the way.  One is that people don’t want to say “I don’t know what X is” or “I don’t have any plans to use X” when X is some hot thing like cloud computing.  Years ago I worked with a big tech publication in analyzing the results of a survey they did on networking, and a third of all those who responded said they used a technology that was not even commercially available at the time.  The point is that if you ask a question you’re likely to get a very large number of positive responses clustered around what people think the smart or cool people would say, not around the truth.

Another problem with surveys is that they can be self-biased, through what insurance companies call “adverse selection”.  If you survey VARs you don’t get a picture of the average user because the average IT dollar isn’t spent through VARs.  VARs tend to serve the SMB space more than the enterprise space, for example.  SMBs are not the kinds of companies that typically attract and hold highly technical IT gurus.  Now, here we have a survey that says that VAR customers are hosting around 40% of their workloads in the cloud (public and private combined) when logically we’d know that the average SMB has zero chance of having a private cloud data center or even knowing what a hybrid cloud is.  And how many SMBs do you know who do their own software development?

There’s a common point to be made here, which is that in the end everyone ends up facing reality.  All hot trends look hot at the beginning because they’re growing from zero to something, which any algebra fan will tell you generates an infinite growth rate.  All trends in the end follow the classic “S-curve” of adoption that plateaus at a point of market saturation.  That point is set by the size of the benefit case, the money on the table to drive changes in practices or behavior.  There’s never going to be a company who can drive continual explosive growth through a single market paradigm.  Not Facebook or Twitter or IBM or Cisco.  There’s never going to be a technology that does that either.  Phones exploded as an alternative to letters, mobile exploded versus landline, chat and social media exploded versus even mobile calls.  At some point people will get tired of stupid pet tricks and what their friends are eating, and we’ll move on to something else hot and different.  When that happens, Tweet me.  It will help out all those Twitter investors.

Could a Hydrogen Daylight Illuminate Microsoft and Apple?

We, and in particular the media, would love technology to move in grand majestic sweeps.  Instead, the market moves in little lurches and hiccups, and so sometimes you have to read through the obvious to find useful signs of a trend.  Today we have three news items to ponder: Apple’s rumored CDN, Microsoft’s new CEO, and the Hydrogen release of OpenDaylight.  Could Hydrogen fuel some revolutions for the others?  We’ll see.

The Apple story is based on a report by a Wall Street analyst but it has some credibility in a technical sense.  It’s clear that Google is getting more and more into video, and Google has built what’s essentially a private Internet core that links its servers to the important access ISPs.  They even have their own cache points inside (in some cases) the ISP edges.  That offers them a better video path to customers.  Amazon, another competitor of Apple in at least the tablet space, also has CDN capability, though not as “deep” as Google’s, and their cloud capabilities are being used to augment their delivery of even web content.

The problem with the story is that building something competitive to Google’s network would tend to drag down Apple financials and not necessarily raise their revenues.  Networking is expensive, and Apple already gets CDN services from Akamai and Level 3, so what they stand to gain is really just the difference between the cost of services and the cost of CDN infrastructure.  Unless, that is, something changes.  One possible change could be settlement on the Internet, created by the end of the neutrality order.  That might force the CDN providers to pay for connectivity, and Apple might believe they have (through their i-stuff) more leverage with the operators.  Another is the cloud.  Apple might in fact be looking to a more point-of-activity experience management future, which would demand lower latency.  We’ll have to see what happens here, but I think CDN is too simplistic a driver.  Apple, I think, needs to be looking more at Siri/”Her” and transformational experiences.

Then there’s Microsoft’s new CEO Satya Nadella, a cloud guy who’s supposed to be the kind of techie that Microsoft insiders respect.  The buzz is that his cloud experience is going to drag Microsoft into the New Age of the Cloud.  Well, maybe.  The problem here is that it’s about two years too late for any of the obvious moves.

Microsoft had a dazzling cloud opportunity with Azure at first, because “basic” cloud computing is all about offsetting costs.  IaaS offsets only hardware cost, so it’s inherently harder to justify and less profitable.  SaaS potentially offsets a lot of cost, but you need to offer it for every software application or the buyer still needs computers and software and you’ve not made much of a business case.  PaaS is a wonderful in-between, but it could work only if you have a platform that’s almost universal—like the Windows Server stuff.  Microsoft could have hosted everything in that platform, built from that into hosting Office and other tools, and driven the cloud revolution.  But now the cloud is moving on to the platform services space where you add cloud value by adding web-service features.  Microsoft is in fact worse off in that kind of market unless it elects to target any operating system and middleware with its own platform services.  That would be interpreted as a repudiation of Windows.
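One way to see why the three models differ so much in justification power is a toy cost-offset calculation; the budget shares and offset factors below are assumptions for illustration, not market data.

```python
# A toy cost-offset comparison; the budget shares and offset factors are
# assumptions for illustration, not market data.

it_budget = 100.0
shares = {"hardware": 0.25, "software": 0.30, "operations": 0.45}

iaas_offset = it_budget * shares["hardware"]                      # hardware only
paas_offset = it_budget * (shares["hardware"]
                           + 0.5 * shares["software"])            # platform too
saas_offset = it_budget * (shares["hardware"] + shares["software"]
                           + 0.5 * shares["operations"])          # most of the stack

print(f"IaaS addressable offset: {iaas_offset:.0f}% of the IT budget")
print(f"PaaS addressable offset: {paas_offset:.0f}% of the IT budget")
print(f"SaaS addressable offset: {saas_offset:.0f}% of the IT budget")
```

The bigger the slice of the buyer’s budget a model can displace, the easier the business case, which is why a near-universal PaaS would have been such a strong position.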

Nadella has little choice here, cloud-wise.  All he can really do at this point is to try to add platform services to Azure, but do so in a way that makes them accessible to Linux or whatever.  If he does that, he’s immediately eroding the differentiation of Windows and taking a step toward making all operating systems and middleware into commodity elements.  But if he doesn’t do that, then he’s disconnecting Windows decisively from the cloud, and that will ultimately be fatal as platform services develop.  Microsoft, in short, is heading for some very troubled times and no new CEO ever lived that could pull them out quickly.

Finally, we come to OpenDaylight.  SDN is overall a vast ball of market hype surrounding a kernel of utility.  The coverage of everything SDN tends to focus on obvious stuff, and there’s nothing as obvious as a good cat fight among vendors.  So we’ve accused Cisco of trying to use OpenDaylight to control SDN and of pushing Big Switch out.  We also accused Big Switch of running out in a fit of pique because their own controller wasn’t picked as the basis of OpenDaylight, something that would have given them a great exit strategy.

I have always believed that OpenDaylight is the superior controller.  I think that anyone who believes that OpenFlow alone will let buyers shift into SDN mode is delusional.  What you need for SDN to work is two things.  One, the ability to control the widest range of devices possible through the widest range of behaviors possible.  OpenDaylight was designed to do that, and uniquely so, which is why I think it’s the best approach.  But the other thing is the ability to define a full set of “behaviors”, meaning cooperative models of network forwarding, and impose them on that wonderful range of devices.  There, I fear, nobody in the SDN space is doing much that’s useful.
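Here’s a deliberately simple sketch of what I mean by a “behavior”; it is conceptual only, uses no real OpenDaylight or OpenFlow APIs, and every name in it is invented for illustration.

```python
# A conceptual sketch of a "behavior": a cooperative forwarding model defined
# once at the service level and then imposed on whatever devices sit
# underneath.  No real OpenDaylight or OpenFlow APIs are used; every name is
# invented for illustration.

def line_behavior(endpoints, path_devices):
    """Decompose a point-to-point 'line' behavior into per-device rules."""
    a, b = endpoints
    rules = []
    for dev in path_devices:                     # devices listed along the path
        rules.append({"device": dev, "match": {"dst": b}, "action": "forward-toward-b"})
        rules.append({"device": dev, "match": {"dst": a}, "action": "forward-toward-a"})
    return rules

# The same behavior could be realized by an OpenFlow switch, a policy-routed
# legacy router, or an overlay tunnel; the behavior, not the control protocol,
# is what the service actually consumes.
for rule in line_behavior(("siteA", "siteB"), ["edge1", "core1", "edge2"]):
    print(rule)
```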

We started in the basement with SDN, with control of devices and no sense of what we wanted a controlled collection of devices to do that would differentiate it from collections built on other technologies.  We need to reinvent networking to be sure, for the era of the cloud and hacking and content and so forth, but to do that we have to start by conceptualizing the service goals and from them harness devices in the best way.  Otherwise, low-level evolution may not add up to the best way to support high-level services, which reduces the value of those services and all you’ve done to support them.  In fact, low-level evolution might not add up to any logical services at all.  OpenFlow was a bad place to start, and it’s already IMHO proved to be the wrong way to do device control across the full range of equipment.

Network Functions Virtualization has also started at the bottom, but it may still be able to climb out of the furrows to see what the crop is supposed to be.  The TMF is actually, and decisively, holding the high ground at the moment because service management is half-service even semantically.  The question is whether these two initiatives can grow together or whether one at least will embrace the full notion of services.  If they did, they could create the network foundation for SDN, for cloud and Microsoft, and even for Apple—whether their goal is as pedestrian as CDN or as futuristic as “Her”.