Shrinking Big Data and the Internet of Things

If you like hype, you love the cloud, and SDN, and now NFV.  You also love big data and the Internet of things.  I'm not saying that any of these things are total frauds, or even that some of them aren't true revolutions, but the problem is that we've so papered over the value propositions with media-driven nonsense and oversimplification that it's unlikely we'll ever get to the heart of the matter.  And in some cases, "the heart of the matter" demands convergence and mutual support among these very technologies.

The "Internet of things" is a good example of a semi-truth hiding some real opportunity.  Most people envision the Internet of things as a network where cars, traffic sensors, automatic doors, environmental controls, toll gates, smart watches, smart glasses, and maybe eventually smart shoes (what better to navigate with?) will reside and be exploited.  How long do you suppose it would take for hackers to destroy a city if we really opened up all the traffic control on the Internet?

The truth is that the Internet of things is really about creating larger local subnetworks where devices that cooperate with each other but are largely insulated from the rest of the world would live.  Think of your Bluetooth devices, linked to your phone or tablet.  The "machines" in M2M (machine-to-machine) might use wireless and even cellular, but it's very unlikely that they would be independently accessible, and in most cases they won't have a major impact on network traffic or usage.

These "local subnetworks" are the key issue for M2M.  Nearly all homes use the private address block 192.168.x.x (the RFC 1918 Class C range), which offers over sixty-five thousand addresses in total but only 256 per network.  There are surely cities that would need more, but even staying in IPv4 the next private range is the Class B block (172.16.x.x through 172.31.x.x) with over a million addresses, and there's a single Class A block (10.x.x.x) with over 16 million addresses.  Even though these addresses would be duplicated in adjacent networks, there's no collision because the device networks would be designed to contain all of the related devices, or would use a controller connected to a normal IP address range to link the subnets of devices.
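
Python's standard ipaddress module makes the arithmetic easy to check.  This snippet prints the sizes of the three RFC 1918 private ranges and shows how many 256-device (/24) subnets the Class A block alone could yield:

```python
import ipaddress
from itertools import islice

# The three RFC 1918 private ranges (the classful Class C, B, and A blocks)
ranges = {
    "Class C (192.168.0.0/16)": ipaddress.ip_network("192.168.0.0/16"),
    "Class B (172.16.0.0/12)":  ipaddress.ip_network("172.16.0.0/12"),
    "Class A (10.0.0.0/8)":     ipaddress.ip_network("10.0.0.0/8"),
}
for name, net in ranges.items():
    print(f"{name}: {net.num_addresses:,} addresses")

# Carving the Class A block into /24 device subnets (256 addresses each)
class_a = ranges["Class A (10.0.0.0/8)"]
print(f"/24 device subnets available in 10.0.0.0/8: {2 ** (24 - 8):,}")
print("first few:", [str(s) for s in islice(class_a.subnets(new_prefix=24), 3)])
```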

What this is arguing for is a control architecture, and that's the real issue with M2M or the Internet of things.  If we have local devices like wearable tech, the logical step would be to have these devices use a local technology like WiFi or Bluetooth to contact a controlling device (a phone or tablet).  The role of this controlling device is clear in a personal-space M2M configuration; it links subordinate devices to the primary device.  In sensor applications of M2M, this controller would provide the central mediation and access control: it allows secure access to the network and coordinates control across a series of sensor subnets.
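
Here's a minimal sketch of that controller role in Python.  The class and method names are my own illustration, not any standard's API; the point is only that devices are reached through a mediating controller rather than being independently addressable:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    subnet: str                              # local subnet, e.g. "10.1.2.0/24"
    capabilities: set = field(default_factory=set)

class M2MController:
    """Mediates between insulated device subnets and the outside world."""
    def __init__(self):
        self._devices = {}                   # device_id -> Device
        self._allowed = set()                # (requester, device_id) pairs

    def register(self, device: Device):
        self._devices[device.device_id] = device

    def grant(self, requester: str, device_id: str):
        self._allowed.add((requester, device_id))

    def command(self, requester: str, device_id: str, action: str):
        # Access control: nothing reaches a device except through the
        # controller, so devices are never directly exposed to the Internet.
        if (requester, device_id) not in self._allowed:
            raise PermissionError(f"{requester} may not control {device_id}")
        device = self._devices[device_id]
        print(f"relaying '{action}' to {device_id} on {device.subnet}")

controller = M2MController()
controller.register(Device("door-7", "10.1.2.0/24", {"open", "close"}))
controller.grant("facilities-app", "door-7")
controller.command("facilities-app", "door-7", "open")
```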

To me, what this is really calling for is something I've already seen as a requirement in carrier and even enterprise networks—"monitoring as a service".  The fact that you could monitor every sensor in a city from a central point doesn't mean that you have to or even want to do it all at the same time.  In a network, every trunk and port is carrying traffic and potentially generating telemetry.  You could even think of every such trunk/port as the attachment point for a virtual tap to provide packet stream inspection and display.  But you couldn't make any money on a network that carried all that back to a monitoring center 24×7, or even generated it all the time.  What you'd want to do is to establish a bunch of virtual tap points (inside an SDN switch would be a good place) that could be enabled on command, then carry the flow from an enabled tap to a monitor.  Moreover, if you were looking for something special, you'd want to carry the flow to a local DPI element where it could be filtered to either look for what you want to see or at least strip out the chaff that would otherwise clutter the network with traffic and swamp NOCs with inspection missions.
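
A toy model makes the economics concrete: taps exist everywhere but stay dormant until enabled on command, and an enabled tap applies a local DPI-style filter before anything crosses the network to the monitor.  Everything here (the names, the filter signature) is an illustrative assumption, not a real tap API:

```python
from typing import Callable, Optional

class VirtualTap:
    def __init__(self, point: str):
        self.point = point                   # e.g. a port on an SDN switch
        self.enabled = False
        self.filter: Optional[Callable[[bytes], bool]] = None

    def enable(self, dpi_filter: Optional[Callable[[bytes], bool]] = None):
        self.enabled = True
        self.filter = dpi_filter

    def disable(self):
        self.enabled = False
        self.filter = None

    def observe(self, packet: bytes):
        """Called for every packet at this point; almost always a no-op."""
        if not self.enabled:
            return None                      # dormant: no telemetry traffic at all
        if self.filter and not self.filter(packet):
            return None                      # filtered locally: chaff never leaves
        return packet                        # only this reaches the monitor

tap = VirtualTap("sdn-switch-3/port-12")
tap.enable(dpi_filter=lambda pkt: pkt.startswith(b"\x45"))   # e.g. IPv4 only
print(tap.observe(b"\x45...header..."))      # passes the filter, goes to monitor
print(tap.observe(b"\x60...ipv6..."))        # dropped locally, prints None
```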

This to me is a good example of what we should be thinking about with virtual networking.  If networking is virtual, then network services should be defined flexibly.  Who says that IP or Ethernet forwarding is the only kind of "service"?  Why not "monitoring as a service" or even "find red car in traffic"?  NFV, in particular, defines network functions as hosted elements.  Cloud components and all manner of traffic or sensor-related functionality are hosted elements too, so why not envision composite services that offer traffic handling and device control (as most networks do) but also functional services like monitoring-as-a-service or "find-red-car"?
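
To make "composite services" concrete, here's a hypothetical descriptor, my own illustration rather than NFV's actual descriptor format, in which connection services and hosted functional services are composed into a single offering:

```python
# Hypothetical composite service: connectivity plus hosted functions,
# described and instantiated together rather than in separate silos.
composite_service = {
    "name": "city-traffic-monitoring",
    "connection_services": [
        {"type": "ethernet-vpn", "endpoints": ["sensor-subnet-A", "noc"]},
    ],
    "functional_services": [
        {"type": "monitoring-as-a-service", "taps": ["sdn-switch-3/port-12"]},
        {"type": "find-red-car", "hosted_on": "cloud-pool-east",
         "inputs": ["camera-subnet-B"]},
    ],
}

def instantiate(service: dict):
    # In a real deployment this would drive SDN and NFV orchestration;
    # here it only shows both kinds of element as first-class parts of
    # one service.
    for conn in service["connection_services"]:
        print(f"provision {conn['type']} between {conn['endpoints']}")
    for func in service["functional_services"]:
        print(f"deploy hosted function '{func['type']}'")

instantiate(composite_service)
```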

In at least the monitoring applications of M2M, “big data” may be more an artifact of our lack of imagination in implementation than an actual requirement.  OK, some people might be disappointed at that conclusion, but let me remind everyone that the more complicated we make the Internet of things, the more expensive it is and the less likely it is to ever evolve because nobody will pay for it.  We have to rein in our desire to make everything into an enormous tech spending windfall because there is absolutely no appetite for enormous tech spending.

SDN and NFV cooperate through network services, too.  Simplistic NFV design might use SDN only to create component subnetworks where virtual functions live.  But why stop there?  Why not think about all the services that completely flexible packet forwarding could create?  And then why not combine SDN connection control with NFV function control to produce a whole new set of services, services truly "new" and not just hype?  Could we perhaps find a truth more exciting than the exaggeration?  Well, stranger things have happened.
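
As a closing sketch of that combination, assume a hypothetical SDN controller interface and NFV manager interface (both invented placeholders for whatever controller APIs you actually have); an orchestrator that deploys a hosted function and then steers traffic through it is exactly the kind of "new service" the paragraph above is asking for:

```python
class SDNController:
    def create_path(self, src: str, dst: str, via: tuple[str, ...] = ()) -> str:
        # Connection control: install a forwarding path through the network.
        hops = " -> ".join([src, *via, dst])
        print(f"forwarding path installed: {hops}")
        return hops

class NFVManager:
    def deploy_function(self, image: str, host_pool: str) -> str:
        # Function control: host a virtual function on available resources.
        instance = f"{image}@{host_pool}"
        print(f"virtual function deployed: {instance}")
        return instance

def build_service(sdn: SDNController, nfv: NFVManager):
    # Deploy the hosted function first, then chain traffic through it:
    # connection control plus function control yielding one composite service.
    monitor = nfv.deploy_function("dpi-monitor", "edge-pool-1")
    sdn.create_path("sensor-subnet-A", "noc", via=(monitor,))

build_service(SDNController(), NFVManager())
```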
