It’s always fascinating to get a Wall Street view of networking, so I was happy to review the William Blair tech outlook. While I don’t always agree with the Street, they certainly have capabilities in analyzing financial trends that I don’t. Even where we disagree, there’s value in contrasting the Street view with the view of someone who does fundamentals modeling (me).
The report paints a picture of the industry that I’d have to agree with. There are pockets of opportunity (WiFi is one, created by the onrush of smartphones and tablets whose data demands simply can’t be satisfied through traditional mobile RANs) and some specific areas of systemic technical risk (SDN obviously, but NFV more so, and I think Blair overplays the former and underplays the latter). Things like storage and some chips, technologies that are low on the food chain and directly impacted by demand factors, are better plays than things higher up. That’s true in the OSI sense too; optical transport is a better play than the higher layers.
In the cloud, I’m happy to say that the report aligns with the survey of enterprises I just completed. That survey shows that enterprises are using IaaS but are targeting SaaS and PaaS for future cloud commitment. The simple truth is that everyone who’s run the numbers for the cloud recognizes that the problem with IaaS is that it doesn’t impact enough cost. If you have low utilization for any reason, IaaS alone is enough to build a business case for the cloud. If you have more typical business IT needs, then you need to be able to target more cost. Maintaining software on a third-party-hosted VM isn’t different enough, in cost terms, from maintaining it on your own servers.
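To make that arithmetic concrete, here’s a minimal sketch in Python; the budget shares and utilization figure are purely illustrative numbers of my own, not Blair’s and not from my survey:

```python
# Illustrative-only numbers: rough shares of an enterprise IT budget.
# IaaS mainly displaces the hardware/facilities slice; software
# maintenance and operations stay with you on a third-party-hosted VM.
budget_share = {
    "hardware_and_facilities": 0.20,   # the slice IaaS can attack
    "software_maintenance":    0.35,   # unchanged on a hosted VM
    "operations_and_support":  0.45,   # unchanged on a hosted VM
}

utilization = 0.30                       # hypothetical average server utilization
consolidation_gain = 1.0 - utilization   # fraction of that slice you could recover

max_savings = budget_share["hardware_and_facilities"] * consolidation_gain
print(f"Best-case IaaS savings: {max_savings:.0%} of total IT cost")
# ~14% when utilization is low; far less when servers are already busy,
# which is why the typical case has to target the other cost categories,
# and that's where SaaS and PaaS come in.
```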
Another area where I think the Blair comments make sense is the monitoring space, but there I’m not sure they go far enough. The fact is that as networks come under revenue-per-bit pressure there’s a need to optimize transport, which tends to drive up utilization and create greater risks to QoS and availability. The normal response to this is better management/monitoring, but the problem I see is that getting information on network issues isn’t the same as addressing them. “Monitoring” is an invitation to an opex explosion if you can’t link it to an automated response.
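To illustrate what I mean by linking the two, here’s a minimal sketch; the event conditions and remediation functions are hypothetical names of my own, not anyone’s product and not anything from the Blair report:

```python
# Sketch of a closed loop: known conditions map to automated responses,
# and only the unknown residue reaches a human (and drives opex).
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # which element or probe reported the condition
    condition: str     # e.g. "link_utilization_high"

def reroute_traffic(event: Event) -> str:
    return f"rerouted around {event.source}"

def open_ticket(event: Event) -> str:
    return f"ticket opened for {event.source} ({event.condition})"

REMEDIATIONS = {
    "link_utilization_high": reroute_traffic,
}

def handle(event: Event) -> str:
    # Monitoring that terminates on a console is the open_ticket path;
    # monitoring linked to control is the REMEDIATIONS path.
    return REMEDIATIONS.get(event.condition, open_ticket)(event)

print(handle(Event("core-link-7", "link_utilization_high")))
print(handle(Event("edge-router-3", "fan_failure")))
```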
Service automation is something dear to my heart because I’ve worked on it for a decade now (IPsphere, the TMF SDF, ExperiaSphere, NFV, CloudNFV) and come to understand how critical it is. The foundation of lower opex is service automation. The foundation of service automation is a service model that can be “read” and used as a baseline against which network behavior can be tested and to which that behavior can be made to conform. We’re still kicking the tires on what the best way to model a service might be. We’re even earlier than that in the critical area of linking a model to telemetry in one dimension and to control in another. That’s an area we’ve been working on in CloudNFV, and one where I think we might be able to make our largest contribution. Blair, for now, seems too focused on the telemetry part.
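None of this dictates what the model should look like, but here’s a minimal sketch of the role I’m describing, with a made-up model structure (the field names are mine, not CloudNFV’s or any standard’s): the model is the baseline, telemetry is compared against it, and control actions push the network back toward it.

```python
# Toy "service model" used as a baseline for automation.  Illustrative only.
desired = {
    "vpn-site-A": {"state": "up", "bandwidth_mbps": 100},
    "vpn-site-B": {"state": "up", "bandwidth_mbps": 50},
}

# Telemetry dimension: what the network reports is actually happening.
observed = {
    "vpn-site-A": {"state": "up", "bandwidth_mbps": 100},
    "vpn-site-B": {"state": "down", "bandwidth_mbps": 0},
}

def reconcile(desired, observed):
    """Control dimension: emit actions that move observed toward desired."""
    actions = []
    for element, goal in desired.items():
        actual = observed.get(element, {})
        if actual.get("state") != goal["state"]:
            actions.append(f"restore {element} to state={goal['state']}")
        elif actual.get("bandwidth_mbps", 0) < goal["bandwidth_mbps"]:
            actions.append(f"resize {element} to {goal['bandwidth_mbps']} Mbps")
    return actions

print(reconcile(desired, observed))   # -> ['restore vpn-site-B to state=up']
```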
In terms of network infrastructure, I think the Blair theme is that there are things that directly drive traffic and thus would encourage deployment of raw capacity. I agree, and I’d add that as bandwidth becomes cheaper at the optical level, the value of aggregation to secure bandwidth efficiency at the electrical level declines. That’s particularly true when, as I’ve already noted, the higher layers tend to generate a lot of opex just keeping things organized and running. SDN is a theme, IMHO, because of this factor. If you can simplify the way we translate transport (optical) into connectivity (IP), then you can reduce both capex and opex. The question that’s yet to be answered is whether SDN processes as they’re currently defined can actually accomplish that, because it’s not clear how much simplification they’d bring.
Network infrastructure is where NFV comes in, or should. The Blair view is that NFV addresses the “sprawling number of proprietary hardware appliances”, which is certainly one impact. In that sense, NFV is an attack on an avenue equipment vendors hoped to exploit for additional profit: as services move up the stack, vendors move from switches/routers to appliances, or so the hope would be. But I think that NFV is really more than that. First, it’s a kind of true cloud DevOps tool, something that can automate not only deployment but also that pesky management task, the service automation that’s the key to opex reduction.
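As a sketch of what I mean by “DevOps-like”, here’s a toy service descriptor that carries deployment and management instructions together; the structure and names are illustrative, not drawn from the ETSI NFV work or any product:

```python
# Illustrative descriptor: deployment and lifecycle policy travel together,
# so instantiating the service also wires up its management.
descriptor = {
    "name": "vCPE-chain",
    "vnfs": ["firewall", "nat", "dpi"],            # functions to instantiate on COTS
    "management": {
        "monitor": ["cpu", "session_count"],
        "on_overload": "scale_out",
        "on_failure": "redeploy",
    },
}

def deploy(desc):
    """Return the steps a deployment/automation tool would carry out."""
    steps = [f"instantiate {vnf} on a COTS host" for vnf in desc["vnfs"]]
    steps += [f"attach monitor: {metric}" for metric in desc["management"]["monitor"]]
    steps.append(f"register policy: overload -> {desc['management']['on_overload']}")
    steps.append(f"register policy: failure -> {desc['management']['on_failure']}")
    return steps

for step in deploy(descriptor):
    print(step)
```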
I’ve blogged before that opex savings are what have to justify NFV, and operators are now starting to agree with that point. The challenge is that while COTS platforms are cheaper than custom appliances in a capex sense, the early indications are that they might well be more expensive in an opex sense, and unless the capex savings are huge and the opex cost differential is small, the result would be a net savings (at best) too small to justify much change. I think that the success of NFV may well depend on how easily it can be made to accommodate (or, better yet, drive) a new management model. The success of that will depend on whether we can define that new model in a way that accommodates where we are and where we need to be at the same time.
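To put hypothetical numbers of my own on that trade-off (these are not operator data and not Blair figures):

```python
# Purely hypothetical per-service-instance costs, to show the shape of the math.
appliance_capex, appliance_opex_per_year = 100.0, 40.0   # custom appliance
cots_capex, cots_opex_per_year           =  60.0, 55.0   # COTS host + VNFs, harder to manage

years = 5
appliance_tco = appliance_capex + appliance_opex_per_year * years   # 300
cots_tco      = cots_capex + cots_opex_per_year * years             # 335

print(f"Appliance TCO: {appliance_tco},  COTS/NFV TCO: {cots_tco}")
# A 40% capex saving is wiped out by a modest opex penalty over the service
# life, which is why the management model, not cheaper boxes, has to carry
# the NFV business case.
```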
Blair’s picks for tech investment are largely smaller players, and that fits the theme I opened with. Networking is in the throes of a major systemic change that will most challenge those who are most broadly committed to the space. If you’re as wide as a barn, some of the pellets of a double-barreled broadside of change are bound to hit you somewhere. But even narrow-niche players have their issues. Strategic engagement with the buyer seems, in both carrier and enterprise networking, to be very hard to sustain with a narrow portfolio. So the fact is that while all big players are challenged, all little players are narrow bets made in an industry whose directions and values are still very uncertain. For sure, we’re in for an interesting 2014.