Unraveling What Might Be Verizon’s Plan for Improved Profits

It should be clear by now that network operators are facing a profit-margin problem on basic connection services.  Everyone has been happy to offer advice; vendors think operators should either accept declining margins or address them in some way that doesn’t impact capex.  The media thinks the operators should elevate their revenues and practices and become more like OTTs.  Verizon’s recent M&A may have given us a hint of what the operators themselves think, and how they believe they can change the shape of that revenue/cost-per-bit curve.  But it’s complicated…real complicated.

In a past blog, I made the point that there was one rule that had to govern any operator shift of focus to higher service layers—the network itself must remain profitable.  You can’t plan to earn new revenue that will, at least in part, subsidize a network loss, because competitors with no network and no losses to cover will then have a pricing advantage you can’t make up.  What higher-layer services could do is reduce the cost-reduction pressure on networks, allowing the focus to become “break-even” rather than “profitable” because profits are increasing elsewhere.  I believe this is true today.

Verizon’s quarter wasn’t impressive.  In fact, the Street is questioning whether Yahoo can add much to Verizon’s bottom line, or whether Verizon can really invest in making Yahoo great.  Wireless, which is where Verizon’s OTT game is focused, is off in revenue and operating income even more than wireline.  Revenue declines exacerbate the collapse in profit-per-bit because they’re rarely accompanied by a corresponding cost reduction.

Even if Yahoo does build OTT revenue eventually, there’s still the problem of declining profit in the network itself.  Another recently reported Verizon acquisition might offer a hint of how Verizon could bridge its network-to-OTT gap.  Fleetmatics, the company being acquired, has a strong position in the vehicle tracking/GPS market, primarily for managing fleets of trucks or company vehicles.  Verizon got into this space with its acquisition of Telogis this year, and of Hughes Telematics four years ago.  The fact that these companies all link wireless to vehicles is a strong story on its own, but they can also play into Verizon’s IoT plans.

Like many other operators, Verizon got obsessed with IoT’s potential to build up a new set of wireless customers.  If we’re exhausting the market potential of traditional wireless because we’re running out of humans to empower, shifting focus to the “thing market” is at least financially attractive, if perhaps a bit speculative.  Obviously there’s no limit on the number of things we could network, but as a practical matter the value of cellular wireless for thing-networking is greatest when you’re talking about things that are inherently mobile.  Vehicles obviously fit that model.

Fleet wireless could build the number of wireless customers and thus increase network revenue.  To that end, the real value of all these fleet deals is that they establish the value of an application set that users pay for, and that application set can include a wireless subscription in its fee.  That works, as a number of offerings going back to OnStar proved, even in the personal vehicle space.  But OnStar’s attempt to offer a third-party add-in was eventually dropped, which suggests that a service provider may struggle to make the market work.  Verizon has to overcome the limitations on vehicle telematics somehow.

The long-term model Verizon is said to favor in vehicle telematics is one that exploits the mobile-device WiFi trend.  By adding WiFi to the mix, the value of the offering to the consumer—even incremental to their personal wireless plans—can be enough to make a vehicle telematics package attractive again.  In both the short and long term, though, true fleet applications offer a nice revenue stream that can in the end also spawn new applications to enhance the consumer vehicle value proposition.  They can also jump off into container telematics, rail telematics, and even (in theory) aircraft.

The question is whether Verizon thinks that this is all there is to IoT: the cellular connection and managing it.  If they do, they’ll be eaten alive by players like GE Digital with Predix, or by Google, which has its own much more sophisticated IoT vision in the works.  IoT is about big data and analytics, which Verizon still doesn’t seem to be addressing.  These fleet deals could validate applications, but will they be so specialized that they don’t frame a useful IoT architecture?  That would be like a bunch of little NFV services that didn’t add up to a general NFV model.

Not complicated enough for you yet?  Well, another development that came out on the Verizon investor call was the possibility that a hybrid of fiber (FiOS) and 5G could significantly reduce the cost of serving high-bandwidth video and Internet to the home.  Verizon’s McAdam, responding to a question on the cost impact of the hybrid, said: “From a pure cost perspective, again I think it’s a little too early to tell. But what I’ll tell you is about half of our cost to deploy FiOS is in the home today.”  That could have a significant impact in two ways.
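
To make McAdam’s point concrete, here’s a back-of-the-envelope sketch.  The dollar figures and the savings percentage are purely hypothetical illustrations, not Verizon numbers; the only sourced assumption is that roughly half of the per-home deployment cost sits inside the home.

```python
# Hypothetical illustration of the FiOS/5G cost point; none of these
# numbers come from Verizon.
pass_and_drop_cost = 600.0   # hypothetical outside-plant cost per home
in_home_cost = 600.0         # hypothetical in-home cost ("about half" of the total)

total_today = pass_and_drop_cost + in_home_cost
# Suppose (again hypothetically) a 5G-based drop cuts the in-home piece by 70%.
total_hybrid = pass_and_drop_cost + in_home_cost * 0.3

print(f"Cost per home today: ${total_today:,.0f}")    # $1,200
print(f"Cost with 5G hybrid: ${total_hybrid:,.0f}")   # $780
print(f"Savings: {100 * (1 - total_hybrid / total_today):.0f}%")  # 35%
```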

Lower cost for FiOS could help counter a problem Verizon is already facing.  Today about 40% of customers elect the “Custom” TV offering to lower their cost by dropping channels.  That’s putting a lot of pressure on FiOS revenues, and the pressure will get worse as customers come off contract and find that their classic bundles are a lot more expensive.  If Verizon used the cost reduction to lower TV pricing overall, it could perhaps manage the size of those price increases and keep more people on a large bundle.  History shows that the more channels you have, the less chance there is that you’ll go to a streaming video service for something to watch.

Another benefit of the cost reduction is that it could allow Verizon to extend the FiOS footprint, something the company had previously said it would not do.  This is separate from the issue of wireline/copper viability; it might even be possible for Verizon to enter the TV/5G space in markets it had previously abandoned as a provider of wireline telephony.

Apart from these direct opportunities, what Verizon could draw from the FiOS/5G symbiosis is the potential to create a large number of microcells that could then be used for other 5G missions, even for providing 5G coverage in metro applications.  The nice thing about these microcells is that they’d tend to get deployed where residential density was high, which is just where you’d like them to be.

These values could tie into the telematics and IoT stuff too.  Obviously a bunch of 5G microcells would provide fertile connectivity ground for vehicles, and if Verizon were to link its residential telematics (home security and control) with the initiative, it could leverage some of the same applications and capabilities both in the home and in vehicles.  Most users want vehicle WiFi to be secured against third-party intrusion, but in a way other than sharing WiFi passwords.  This isn’t unlike the process of inventorying IoT elements and authorizing/authenticating communications.
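
To illustrate the kind of inventory-and-authorize flow I mean, here’s a minimal sketch.  It’s my own toy illustration, not a Verizon or OnStar mechanism: devices are enrolled once into an inventory, then admitted by proving possession of their own credential rather than by sharing a common WiFi password.

```python
# Toy sketch of enroll-then-authorize device admission (hypothetical, my own).
import hashlib
import hmac
import secrets


class DeviceInventory:
    def __init__(self) -> None:
        self._keys = {}  # device_id -> per-device secret key

    def enroll(self, device_id: str) -> bytes:
        """One-time enrollment; returns the secret issued to the device."""
        key = secrets.token_bytes(32)
        self._keys[device_id] = key
        return key

    def authorize(self, device_id: str, challenge: bytes, response: bytes) -> bool:
        """Admit a device only if it proves possession of its own key."""
        key = self._keys.get(device_id)
        if key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


inventory = DeviceInventory()
device_key = inventory.enroll("tablet-42")         # done once, at setup

challenge = secrets.token_bytes(16)                # issued at connection time
response = hmac.new(device_key, challenge, hashlib.sha256).digest()
print(inventory.authorize("tablet-42", challenge, response))   # True
print(inventory.authorize("unknown-99", challenge, response))  # False
```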

Symbiosis seems to be at the heart of Verizon’s plans.  I think Verizon is working to leverage its networks with OTT applications, including vehicle telematics and even IoT.  It’s also looking to use these applications to justify extending the network, or at least to fund part of the expansion of fiber and 5G.  The interesting thing is that this seems to commit Verizon to wireline TV delivery at a time when rival AT&T seems to be moving to a satellite TV plan.  It’s a big bet.

The bet seems to be specifically on how to control the home, not in an IoT sense but in a market sense.  TV has been the profit cow essential to making wireline, or “fiberline,” work.  The Verizon comment that 40% of its TV subscribers were clustering on the lowest-end plan strongly suggests that Verizon stands to lose TV customers if those customers continue to shop on price.  Satellite TV is usually the cheapest TV.  But Verizon says fast Internet is catching on, and fiber can deliver that better.  Satellite TV could siphon revenue away from fiber deployment that could otherwise improve Internet broadband and potentially expand a 5G footprint.

Which just might be the key issue here, because if TV viewing is racing to the bottom, it’s intersecting there with the streaming alternative, which is the very traffic that’s generating the profit-per-bit compression in the first place.  Less traditional TV means more streaming traffic.  The media may love the notion of streaming for everything, but for operators the challenge is to somehow monetize traffic that’s carried on a zero-marginal-cost basis.  Advertising seems the only hope.  So might Verizon be looking at Yahoo primarily to adopt a variant on the strategy I suggested at the beginning of this article?  Use OTT and advertising to cover the cost of that incrementally free streaming?  Yeah, it might.

Then there’s the Fundamental Truth of Television Today.  Which is “TV sucks.”  The cause of viewers flocking to minimal FiOS offerings may well be that there’s nothing worthwhile on any channel.  Everyone I know, including myself, feels that television programming is getting worse.  Even cheap satellite channels won’t wow somebody who can’t find anything, anywhere.  Streaming at least offers something interesting.  Verizon’s model might immunize the company against a failure of the live TV model.

Gaming all of this out is clearly a major challenge, and executing on it will be even more of one.  Verizon is going to need to execute on this possible symbiosis, and execute well.  Any little slip in hanging all the pieces together and the whole story could fall apart, and Verizon is no different from the other operators in that respect.  It’ll be interesting to watch the industry dance here.

Following Google’s Lead Could Launch the “Real Cloud” and NFV Too

I think most people would agree that there is a fusion between cloud computing, virtual networking, and NFV.  I think it’s deeper than that.  The future of both network infrastructure and computing depends on our harmonizing all that stuff into a single trend.  Even Google, which is probably as far along as anyone on this point, would admit we’re not there yet, but Google seems to know the way and is heading in that direction.  I had an opportunity to dig deep into Google’s approach late last week.  We all need to be looking carefully at their ideas.

The cloud is exploding, taking over the world, or so you’d think from reading the stories.  Wrong.  Right now, we’re in the bearskins-and-stone-knives phase of cloud computing, and we’re still knuckle-dragging in the NFV space.  The reason, arguably, is that we’ve conceptualized both of these concepts from a logical but suboptimal starting point.

Cloud computing for businesses today is mostly about hosted server consolidation, meaning it focuses on taking an application and running it in a virtual machine.  Yes, that’s a positive step.  Yes, anything that’s going to make use of pooled server resources is going to eventually decompose to deploying something on a VM or into a container.  But what makes an ecosystem is the network relationship of the elements, and the classic cloud vision doesn’t really stress that.  We already see issues arising as we look to deploy systems of components that cooperate to create an application.  We can expect worse when we start to think about having applications actually exploit the unique capabilities of the cloud.

NFV today is about virtualizing network functions.  Since the presumption is that the functions run on some form of commercial off-the-shelf server, virtualizing the functions so they can be hosted that way is an essential step.  But if functions shed the boundaries of devices to become virtual, shouldn’t we think about the structure of services in a deeper way, rather than assuming we succeed by just substituting networks of instances for networks of boxes?

Virtualization is about creating an abstract asset to represent a system of resources.  An application sees a virtual server rather than a distributed, connected pool of servers.  The application also sees an abstract super-box instead of a whole set of nodes and trunks and protocol layers.  What’s inside the box, and how the mapping occurs, is where the magic comes in.  My point is that if we constrain virtualization to describing the same stuff we had without it, have we really made progress?
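
As a toy illustration of that point (mine, with hypothetical names), the sketch below shows an application talking to a single abstract “virtual server” while the mapping to the underlying pool of real hosts stays hidden inside the abstraction.

```python
# Hypothetical sketch: the application sees one virtual server; the
# mapping to real resources is the hidden "inside of the box."
import random
from typing import List


class VirtualServer:
    """What the application sees: a single server."""

    def __init__(self, real_hosts: List[str]) -> None:
        self._real_hosts = real_hosts  # the hidden system of resources

    def run(self, task: str) -> str:
        # How the mapping occurs is inside the box; here it's a random pick.
        host = random.choice(self._real_hosts)
        return f"{task} executed on {host}"


app_view = VirtualServer(["rack1-srv03", "rack2-srv11", "rack7-srv02"])
print(app_view.run("render-report"))
```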

Paraphrasing what a Google pundit said at an ONF conference: network virtualization enables cloud benefits.  You can’t build the network the old way if you want the cloud to be a new paradigm.  Whatever is inside the compute or virtual-network abstraction, networking connects it and makes an ecosystem out of the piece parts.  In short, we have to translate our cloud goals into cloud reality from a network-first perspective.

Well, what are the goals?  Google’s experience has shown that the keys to the cloud, in a technical sense, are to reduce latency to a minimum and, relatedly, to recognize that it’s easier to move applications to data than the other way around.  Service value is diluted if latency is high, of course.  One way to reduce it is to improve network connectivity within the cloud.  Another is to send processes to where they’re needed, with the assurance that you can run them there and connect them into the application ecosystem they support.  Agile, dynamic composition.  I think that’s also a fair statement of NFV’s technical requirements.
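
Here’s a minimal sketch of the “send the process to the data” idea.  It’s my own illustration rather than anything Google has published; the site names and latency figures are hypothetical.

```python
# Hypothetical sketch: place a process at the hosting point closest to
# the data instead of hauling the data to a central site.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EdgeSite:
    name: str
    latency_ms_to_data: float  # how "far" this site is from the data set


def place_process(sites: List[EdgeSite],
                  process: Callable[[str], str],
                  data_location: str) -> str:
    best = min(sites, key=lambda s: s.latency_ms_to_data)  # lowest latency wins
    return f"{process(data_location)} @ {best.name}"       # "run" it there


sites = [EdgeSite("central-dc", 42.0),
         EdgeSite("metro-edge-7", 3.5),
         EdgeSite("metro-edge-9", 5.1)]

print(place_process(sites, lambda d: f"analytics over {d}", "sensor-feed-123"))
# analytics over sensor-feed-123 @ metro-edge-7
```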

If this is the cloud mantra, then network-centricity isn’t just a logical approach, it’s the inevitable consequence.  Google is increasingly framing its cloud around Andromeda, its network architecture.  Even OpenStack, which arguably started by building networks to connect compute instances, seems to be gradually moving toward building networks and adding VMs to them.  If you sum all the developments, they point to getting to network virtualization using five corollary principles or requirements:

  1. You have to have a lot of places near the edge to host processes, rather than hosting in a small number (one, in the limiting case) of centralized complexes. Google’s map showing its process hosting sites looks like a map showing the major global population centers.
  2. You have to build applications explicitly to support this sort of process-to-need migration. It’s surprising how many application architectures today harken back to the days of the mainframe computer and even (gasp!) punched cards.  Google has been evolving software development to create more inherent application/component agility.
  3. Process centers have to be networked so well that latency among them is minimal. The real service of the network of the future is process hosting, and it will look a lot more like data center interconnect (DCI) than what we think of today.  This is handled with an SDN core in Google’s network, with hosted-BGP technology that looks a lot like a virtual router control plane at the edge of the core.
  4. The “service network” has to be entirely virtual, and entirely buffered from the physical network. You don’t partition address spaces as much as provide truly independent networks that can use whatever address space they like.  But some process elements have to be members of multiple address spaces, and address-to-process assignment has to be intrinsically capable of load-balancing.  This is what Google does with Andromeda.
  5. If service or “network” functions are to be optimal, they need to be built using a “design pattern” or set of development rules and APIs so that they’re consistent in their structure and relate to the service ecosystem in a common way. Andromeda defines this kind of structure too, harnessing not only hosted functions but in-line packet processors with function agility.  (A toy sketch of this principle and the previous one follows this list.)
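
The sketch below is a toy illustration (mine, not Andromeda’s API) of principles 4 and 5: service functions share a common structural pattern, register into independent virtual networks with their own address spaces, and addresses resolve to function instances through a simple load balancer.  All class, function, and address names are hypothetical.

```python
# Hypothetical sketch of a common function pattern plus independent,
# load-balanced virtual service networks.
from dataclasses import dataclass, field
from itertools import cycle
from typing import Callable, Dict, List


@dataclass
class ServiceFunction:
    """Common shape every hosted function follows (principle 5)."""
    name: str
    handler: Callable[[bytes], bytes]  # message in, result out

    def invoke(self, payload: bytes) -> bytes:
        return self.handler(payload)


@dataclass
class VirtualServiceNetwork:
    """An independent address space, decoupled from the physical network (principle 4)."""
    name: str
    bindings: Dict[str, List[ServiceFunction]] = field(default_factory=dict)
    _rr: Dict[str, cycle] = field(default_factory=dict)

    def bind(self, address: str, instance: ServiceFunction) -> None:
        self.bindings.setdefault(address, []).append(instance)
        self._rr[address] = cycle(self.bindings[address])

    def send(self, address: str, payload: bytes) -> bytes:
        # Address-to-process assignment is load-balanced (round robin here).
        return next(self._rr[address]).invoke(payload)


# The same function instance can be a member of more than one virtual network,
# and the same address can exist independently in each.
echo = ServiceFunction("echo", lambda p: p)
net_a = VirtualServiceNetwork("tenant-a")
net_b = VirtualServiceNetwork("tenant-b")
net_a.bind("10.0.0.1", echo)
net_a.bind("10.0.0.1", ServiceFunction("echo-2", lambda p: p))
net_b.bind("10.0.0.1", echo)  # same address, different network

print(net_a.send("10.0.0.1", b"hello"))  # handled by one of tenant-a's instances
```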

A cloud built using these principles would really correspond more to a PaaS than to simple IaaS, and the reason is that the goal is to create a new kind of application and not to run old ones.  What I think sets Google apart from other cloud proponents is their dedication to that vision, a dedication that’s led them to structure their own business around their own cloud architecture.  Amazon may lead in cloud services to others, but Google leads in the mass of cloud resources deployed and the service consumption their cloud supports.

So let’s say that this is the “ideal cloud.”  You can see that the biggest challenge that a prospective cloud (or NFV) provider would face is the need to pre-load the investment to secure the structure that could support this cloud model.  You can’t evolve into this model from a single data center because you violate most of the rules with your first deployment and you don’t even validate the business or technology goals you’ve set.

You can probably see now why I’ve said that the network operators could be natural winners in the cloud race.  They have edge real estate to exploit, real estate others would have to acquire at significant cost.  They also have a number of applications that could help jump-start the ROI from a large-scale edge deployment—most notably the mobile/5G stuff, but also CDN.  Remember, CDN is one of the biggest pieces in Google’s service infrastructure.

Some of the operators know the truth here.  I had a conversation perhaps three years ago with a big US Tier One, and asked where they believed they would deploy NFV data centers.  The response was “Every place we have real estate!”  Interestingly, that same person, still at the same Tier One, is doubtful today that they could realize even 20% of that early goal.

Some vendors, reading this, will shout with delight and say that I’m only now coming around to their view.  Operators, they say, have resisted doing the right thing with NFV and they need to toss cultural constraints aside and spend like sailors.  Of course, incumbents in current infrastructure sectors have been saying operators needed to toss constraints and reinvest in legacy infrastructure to carry the growing traffic load, profit notwithstanding.  I’m not saying any of this.  What I’m saying is that the value of the cloud was realized by an OTT (Google) by framing a holistic model of services and infrastructure, validating it, and deploying it.  The same thing would have to happen for anyone trying to succeed in the cloud, or in a cloud-centric application like NFV.

The reason we’re not doing what’s needed is often said to be delays in the standards, but that’s not true in my view.  The problem is that we don’t have the same goal Google had, that service/infrastructure ecosystem.  We’re focused on IaaS as a business cloud service, and for NFV on virtual versions of the same old boxes connected in the same old way.  As I said, piece work in either area won’t build the cloud of the future, carrier cloud or any cloud.

AT&T and Orange, which have announced a partnership on NFV to speed things up, are probably going to end up defining their own approach—AT&T has already released its ECOMP plan.  That approach is either going to converge on the Google model, or it will put AT&T and those who follow ECOMP in a box.  So why not just learn from Google?  You can get all the detail you’d need just by viewing YouTube presentations from Google at ONF events.

This same structure would be the ideal model for IoT, as I’ve said before.  Sending processes to data seems like the prescription for IoT handling, in fact.  I believe in an IoT based on the correct model, which is one built on big-data collection, analytics, and agile processes.  Google clearly has all those things now, and if operators don’t want to be relegated to being the plumbers of the IoT skyscraper, they’ll need to start thinking about replicating the approach.