Are Cisco’s “Six Pillars of IoT” a Strategy or a Placeholder?

I guess that like many out there, I regard Cisco announcements with a mixture of interest and cynicism.  I remember well the times when Cisco would announce a “five-phase strategy” that was (whatever the technology focus) always something that they were already in Phase Two of and never something that was actually delivered in “Phase Five” at the end.  In effect, it was a placeholder for a real Cisco push, one that would develop when Cisco was sure the market was ready.  Cisco is a “fast follower”, remember?

Cisco seems to have outgrown that five-phase approach, but I admit that when I saw their latest IoT announcement had six pillars, I was taken back to the good old days in a different wrapper.  The obvious question is whether that cynicism is warranted, and the answer in my opinion is “Yes”, at least at some level.  That means we have to look a bit deeper.  For those who want a specific reference to the Cisco announcement to follow my points, their press release is HERE.

The six pillars Cisco identifies are network connectivity, fog computing, security, data analytics, management and automation, and an application enablement platform.  At a very high level this is a fair statement of IoT needs, but hardly an insightful one.  I’ve had some discussions with network operators, and they have some fairly specific views of how IoT has to be done.  You can fit the operator visions into Cisco’s model as long as you stay vague.

Operators start with the notion that IoT has to partition sensor, control, and application elements in private networks.  The stability, security, privacy, and governance risks of “unbridled IoT” are so profound that nobody in their right mind would accept them.  If you start IoT with the presumption that things that talk to you about conditions (sensors), things that command actions or processes (control elements), and things that can convert one into the other are partitioned and exchange information only in a very controlled way, you’re covering the basic risks.

Their second point is that to applications, IoT is not a network of devices at all, but a hosted repository of “big data”.  You cannot make IoT work if you assume that applications just cast about in the world for stuff to talk to (or listen to).  You have to structure information so that applications can find and make use of stuff without threatening the underlying elements in any way (deliberate or by error).
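
To make the “IoT as a repository” point concrete, here’s a minimal sketch of what an application-side query might look like under that model.  The repository endpoint, the query fields, and the governance behavior are all my own illustrative assumptions, not any real product’s API; the point is simply that the application talks to a curated data layer, never to a sensor.

```python
# Hypothetical sketch: an application consuming IoT data from a curated
# repository rather than addressing sensors directly.  The endpoint and
# query fields are illustrative assumptions, not a real product API.
import json
import urllib.request

REPO_URL = "https://iot-repository.example.com/query"  # hypothetical endpoint

def recent_readings(sensor_class, area, limit=100):
    """Ask the repository for recent, policy-filtered sensor readings."""
    query = {"class": sensor_class, "area": area, "limit": limit}
    req = urllib.request.Request(
        REPO_URL,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The application never addresses a sensor; it sees only what the
# repository's governance rules allow it to see.
readings = recent_readings("traffic-counter", "downtown")
```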

Their third point is that IoT is most likely to be useful when it’s seen as a tool in exploiting mobility-generated opportunities.  Sensors and controllers out there in the world make sense if you’re out there too, not sitting at a desk somewhere.  That doesn’t mean IoT and mobile services have to be treated as a single problem; LTE connection of a sensor or controller is just a question of finding the most convenient and cost-effective approach, not a question of creating an enforced unity of implementation.

The final point is that even if IoT is pervasive and important, it can’t be a service silo.  Whatever architecture defines support for IoT has to be generalized enough to be applied elsewhere, and tools from elsewhere have to make up as much of the IoT framework as possible.

You can see even at a glance that it’s possible to map Cisco’s approach to these four points.  The challenge is that the mapping isn’t convincing because Cisco hasn’t supplied a lot of detail to go with the basic notion of pillars.  This is where I think Cisco is falling back to “followership” in their approach, even though they’ve made the notion of IoT (as Internet of Everything) into a marketing slogan.

At a high level, it looks like IoT to Cisco is a combination of some new-age networking concepts and “fog computing”, which is Cisco’s name for deployment of hosted service elements at or close to the edge.  That seems to me to be a given, so it’s not telling me much.  The big question is how the applications (in the “fog”) and the sensors and controllers (in some sort of network) are linked.  Increasingly, operators I’ve talked with make the point that indiscriminate deployment of IoT elements raises way too many regulatory issues, and they’re rapidly settling on an IoT-as-a-database model.  Cisco’s “data analytics” isn’t defined that way in their announcement, but it could be positioned like that.

Cisco’s 15 IoT product announcements don’t provide a lot of clarity either.  Most of them are related to what I’d call simple issues—connecting stuff and managing stuff.  The notion of “fog data services” again skirts the edge of IoT reality, but it’s tied to a “data-in-motion” model that to me seems to suggest that IoT is a set of data flows and not a database.

What I’m wondering is whether “six pillars” is a Cisco solution to something they see as having firm demand but vague deployment models, where the old “five phases” addressed something vague in a demand sense.  I personally think that large-scale rollout of IoT is more likely to happen through network operators than through any other source, but Cisco may think there are other early models that might make up in timing what they lack in convincing scale.  It’s in a way similar to the situation with NFV, which exposes “can it work” issues before it exposes “can it pay off” issues.

I think the risk to Cisco here is that they’ve grabbed apples that are a bit too low, IoT-wise.  My experience with operators shows that there are real early opportunities for a realistic IoT model based on the points I described.  I’m looking at a couple now, in fact, that map to those points almost exactly.  And Cisco’s not the only player talking in this space.  HP did an IoT announcement at MWC and followed up with more detail at their Discover event in June, and the HP approach seems to map to the operator points pretty directly.  As a major cloud and NFV player, HP is certainly in a position to get their story out there, and their story included using their IoT architecture as a framework for other services, including virtually any form of contextual service to mobile users.

Another risk is created by Cisco’s dance around NFV support.  Most operators think that applications using IoT data would be deployed and sustained through NFV, but that’s not part of Cisco’s six pillars.  They talk about management, but in the general sense that’s appropriate to the “I-don’t-know-the-buyer” position I suggested they might have.  While an NFV tie here would make their IoT story more palatable to operators, they may feel that the specificity would turn off other possible early adopters.

Which raises an Oracle risk.  Oracle was interviewed in a New IP story on NFV that opened with the comment that NFV hadn’t progressed enough to make the business case for its own deployment, and that more operations centricity was the right path to address that.  It’s been clear for a while that Oracle’s own NFV approach is OSS/BSS-driven and analytics-driven.  It’s a small step from analytics that deliver management data to analytics that deliver IoT data.

It’s smart for Cisco to spread its marketing wings wide in a world that’s changing as rapidly as networking and IT are likely to change.  Their fast-follower approach has also worked pretty well for them over time.  The risk, though, is that any kind of follower can wait too long to start, and watch others cross the finish line well ahead.  Whether that will happen here depends on the pace of the market, which means the collective pace of Cisco’s competitors.  That’s a real risk to Cisco, no question about it.

Some Early M&A Signals on the Impact of Virtualization in Networking

SDN and NFV, which together mean “network virtualization”, are obviously going to have a significant impact on networking overall, even on parts of networking that might not seem to be obvious targets.  We’ve had some announcements and M&A that illustrate this, and that offer us a chance to think about just how profoundly network virtualization could change things.

One of the most interesting M&A moves was Cisco’s announcement that they’re acquiring OpenDNS.  Many of us are familiar with OpenDNS as an alternative provider of DNS services.  While it’s not widely known, many of the “Internet problems” users experience are caused not by their ISP’s network but by their ISP’s DNS.  The default behavior for most Internet clients is to obtain a DNS address from the provider, and that will almost always be the provider’s own DNS.  If it’s overloaded or down, you’re in trouble.  OpenDNS and Google DNS are alternatives that will nearly always work better for you.

That’s not why Cisco bought them, of course.  While most people know OpenDNS for…well, obviously, DNS services, they got into security services starting three or four years ago, and it’s security that Cisco is most interested in.  Given that Cisco has a pretty thriving security business you might wonder why, and I think that SDN and NFV are a part of the mix.

The big problem with Cisco’s security strategy, and almost everyone else’s, is that it depends on devices or functions that become a part of the network.  In an age of virtualization, it’s harder for this approach to work, not because you can’t put functions into a virtual network but because you can put anyone’s functions there.  The security function/appliance space is going to get very crowded, competitive, and commoditized.

OpenDNS is almost an analytic view of security, derived from understanding Internet addressing and activity. It’s holistic, it’s outside the traditional “network” of a user, and it’s an asset that would be much harder for a competitor to commoditize.  It also works under nearly all of the foreseeable virtualized network models, even models that use SDN to segment networks into application or service-specific pieces (it’s not as useful in that case, IMHO, but it could still add value).

Perhaps the most interesting thing about OpenDNS’ approach is that it would in theory be possible to link the data that OpenDNS provides (via convenient APIs) with remediation software that might involve controlling legacy Cisco gear or even an SDN controller.  If OpenDNS tools detected a DDoS attack, they could quench it, at least at a point close to the site being attacked.  If the capability to quench were offered by operators as a service, it’s possible you could quench close to the source.
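
To illustrate that detect-then-quench loop, here’s a minimal sketch: poll a security-analytics feed and, when a DDoS indicator appears, push a drop rule to an SDN controller.  Both endpoints and the payload formats are placeholder assumptions of mine, not OpenDNS or Cisco APIs.

```python
# Hypothetical detect-then-quench loop.  The feed URL, controller URL,
# and rule format are assumptions for illustration only.
import json
import urllib.request

THREAT_FEED = "https://security-analytics.example.com/indicators"   # hypothetical
SDN_CONTROLLER = "https://sdn-controller.example.com/flows"         # hypothetical

def fetch_indicators():
    """Pull the current list of threat indicators from the analytics feed."""
    with urllib.request.urlopen(THREAT_FEED) as resp:
        return json.loads(resp.read())

def quench(source_prefix):
    """Install a drop rule near the attacked site for the offending prefix."""
    rule = {"match": {"ipv4_src": source_prefix}, "action": "drop"}
    req = urllib.request.Request(
        SDN_CONTROLLER,
        data=json.dumps(rule).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

for indicator in fetch_indicators():
    if indicator.get("type") == "ddos":
        quench(indicator["source_prefix"])
```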

It’s also possible to use DNS tools to back-check IP addresses that are contacted by malware or to check source IP addresses of intruders.  It’s not a normal DNS function, but if you assume that an access device has the ability to validate “new” incoming IP addresses or ones emerging from apps, it could reduce intrusions and keep Trojans from calling home.
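
A rough sketch of that back-checking idea follows, assuming a simple local blocklist rather than a real threat-intelligence feed; a production access device would obviously pull its indicators from a live source and apply a stricter policy.

```python
# Sketch: an access device checks a "new" destination against known-bad
# indicators and its reverse-DNS record before allowing a connection.
# The blocklist contents are illustrative placeholder data.
import socket

KNOWN_BAD = {"203.0.113.45", "198.51.100.7"}  # placeholder indicator set

def allow_outbound(dest_ip):
    """Return True if the access device should allow a connection to dest_ip."""
    if dest_ip in KNOWN_BAD:
        return False  # likely a Trojan calling home or a known intruder
    try:
        hostname, _, _ = socket.gethostbyaddr(dest_ip)
    except socket.herror:
        hostname = None
    # Policy choice: allow unknown-but-not-bad destinations, but flag
    # addresses with no reverse record for later analysis.
    if hostname is None:
        print(f"suspicious destination with no reverse DNS: {dest_ip}")
    return True

print(allow_outbound("198.51.100.7"))  # False
```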

You also have to wonder whether Cisco might have its eye on other DNS-based services that would be impacted by network virtualization.  Load balancing is essential in NFV if you’re going to have failover or scaling of VNFs, and we know from the Metaswitch Project Clearwater example that you can do the job with a modified DNS.
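
As a sketch of the general idea (not Clearwater’s actual implementation), DNS-based load balancing can be as simple as letting a service name resolve to all the live instances and spreading requests across them; scaling or failover then shows up as a change in the DNS records.  The service name below is a placeholder.

```python
# Minimal client-side, DNS-driven load balancing across scaled-out
# instances.  "sip.example.com" is a placeholder service name.
import random
import socket

def pick_instance(service_name, port):
    """Resolve the service name and pick one of its addresses at random."""
    infos = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    return random.choice(addresses)  # scaling/failover appears as record changes

print(pick_instance("sip.example.com", 5060))
```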

Of course, all of this might be idle speculation.  Cisco has bought a lot of companies that could have presented great strategic stories but nothing came of them.  We’ll have to track the developments, and in particular how Cisco positions the security APIs, to get an idea.

The other interesting announcement is, in comparative industry terms, “deeper” because it involves network monitoring.  NetScout is “combining” its monitoring business with Danaher’s communications business, which includes Tektronix Communications, Arbor Networks, and Fluke Networks.  Tektronix Communications has a broad portfolio of carrier-oriented stuff, including some monitoring.  Fluke Networks has monitoring products, and Arbor Networks is primarily a security company.  The combination of these companies would create the biggest monitoring player by far and bring in related network technologies too.

Network monitoring hasn’t been exactly a hot sector, in no small part because most traditional monitoring tasks are simply too difficult for users to undertake even without the complication of virtualization.  The advent of things like the cloud, SDN, and NFV has generally caught the monitoring community unawares.  The question I used to get about monitoring in the virtual age was “what do the protocols look like?”, which indicated that people thought all they had to do was understand the format of a new protocol or two.

SDN and NFV have a profound impact on monitoring.  You almost have to think of the future in terms of “virtual probes” because everything in SDN and NFV moves around, and you don’t want to hairpin through physical probe points.  I proposed the notion of “Monitoring as a Service” in the CloudNFV work in 2013, but nothing came of the effort.

MaaS was based on the idea that if you have virtualization in place on a large scale, you can deploy monitoring virtually and avoid the hairpinning.  You could also establish specific probe points where you’d equipped your network with taps or your infrastructure with high-performance hosting, so that introducing DPI-based monitoring would have limited impact.  You could also link in edge elements that had knowledge of packet associations with services or applications, and of course tie in the service-to-resource bindings.
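
Here’s a hedged sketch of how probe placement might work under a MaaS model, assuming hypothetical service-to-resource binding records and a “DPI-capable” flag on hosting points; the structures and field names are mine, not anything from the CloudNFV work itself.

```python
# Sketch: pick where to instantiate a virtual probe from the
# service-to-resource bindings, instead of hairpinning traffic to a
# fixed physical probe.  Data structures are illustrative assumptions.
def place_probe(service_id, bindings, hosts):
    """Return the least-loaded DPI-capable host co-located with the service."""
    sites = {r["site"] for r in bindings.get(service_id, [])}
    candidates = [h for h in hosts if h["dpi_capable"] and h["site"] in sites]
    if not candidates:
        return None  # fall back to an edge element or a tap point
    return min(candidates, key=lambda h: h["load"])

bindings = {"svc-42": [{"site": "edge-1"}, {"site": "edge-2"}]}
hosts = [
    {"name": "edge-1-pool", "site": "edge-1", "dpi_capable": True,  "load": 0.4},
    {"name": "edge-2-pool", "site": "edge-2", "dpi_capable": False, "load": 0.1},
]
print(place_probe("svc-42", bindings, hosts))  # picks edge-1-pool
```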

IMHO, there is no way to make traditional monitoring into a viable business for the same reason that security can’t keep on in the same old way.  SDN and NFV change the game too much, and without a strategy to incorporate those changes into products, the NetScout/Danaher combination is simply consolidation.

We’ve not seen the end of this.  There are going to be massive changes down the line, starting as early as next year, if SDN and NFV build as much momentum as they could.  These two industry events show that players big and small, technically “shallow” and “deep” alike, are going to have to face a virtual future unless both our revolutions stall for lack of support.  Defensive vendors may hope for that outcome, but opportunity is its own reward, and some vendors will surely take the aggressive track, leading the industry with them.