A Cynic’s View of CES

The buzz at the Consumer Electronics Show is that everything gets connected, and that may or may not be a good thing.  It’s not surprising that we’d see a drive to connect; Cisco, for example, has been pushing it on the theory that anything that creates more traffic sells more routers.  The question is whether M2M, or connect-everything, pays off more than it costs.

There are things that we need to have connected, things that can be useful, and things that are probably silly.  The problem is whether “silly” is a standard we can agree on.  Let’s take watches for example (Apple is said to be looking at a connected watch).  If you presume that the only purpose of a watch is to tell time, you could definitely put this in the “silly” category.  If you assume that perhaps you could make it vibrate to tell you that you have a call or text, then some would say it’s silly because your phone does that, and others might find it helpful because they can’t hear their phone or feel it vibrate if it’s in a bag.  If it’s reading your vital signs to alert somebody to a problem, maybe it’s well up in the “useful” category.

The next challenge is what it costs, not so much for the device per se but for what it would take to make it useful, or as useful as the intrinsic application allows.  Let’s take our watch-watching-vitals as an example.  This is “useful” if there’s some way of reporting the result.  Presumably it gets near-fielded or Bluetoothed to your phone so it can generate an alert on your behalf.  Is it secure?  Will the battery last long enough for the user to actually gain from the device, or will it end up failing and perhaps hurting those who rely on it?

Let’s take another notion, which is the connected car.  First, we have connected cars now, with OnStar for example, and we also have the potential for putting apps on a phone that could serve many of the purposes of connecting the car; what you’re missing is telemetry from the car itself.  So perhaps what we should do is to use one of the “sync” notions we already have to link cars to phones to obtain that information.

Then there’s the question of whether connecting the car is in our interest.  Clearly having the car subject to any form of outside control invites hacking the car and sending people careening into who-knows-what.  But even real-time entertainment or video calling in the car is a distraction to drivers.  I’ve noticed that we have a fair number of people walking into objects and other people because they’re distracted even as pedestrians!  Get these people behind a wheel at a similar level of attention and we start reducing the population in a hurry.

I’m not against gadgets, or gadgets online, but I’m against hype.  How many articles will be written about networked watches, or networked cars, or networked refrigerators, microwaves, chairs and tables…?  We’ll darn sure have people writing about how the cloud will facilitate our networked watches.  Maybe they can be linked to SDNs too?  The sky’s the limit here.

Then again, maybe I’m missing the point here.  Maybe this is all theater, with no presumptive link to reality, much less a presumptive goal of utility.  A show like CES is just an outing, a chance to blow off steam, eat some nice meals on the tab, and tell stories.  I guess we’ll see what real stuff comes out of the show, and decide on that basis.  So far, I have to say that I’m glad I’m not there.

We do have one maybe-bright-spot, the so-called “phablet”, a phone that’s bigger than an iPhone and smaller than a 7-inch tablet.  I’m not yet convinced that we have figured out what the best size for a mobile device like a tablet is, and I’m darn sure that cell-connected tablets of any size should be able to pair with a Bluetooth headset and be used to make calls.  Does it make sense to shrink a tablet to a vertical 16:9 form factor that’s maybe six or seven inches long and thin, then hold it up to our ear/mouth?  Maybe.

It makes more sense than talking into your watch.  Sorry, Dick Tracy.

Yes, There IS Another SDN Dimension!

All the talk about “cloud networking” and “virtual networks” and “software defined networks” seems to be diverging as often as converging, and I’ve talked about the “network model” issues that might be behind a good part of that.  There’s another factor worthy of mention, too, and that’s the perspective issue.  If you’re the cloud looking down at the network, you see things one way.  If you’re the network looking up at the cloud, you have a different vision.  Clearly the two need to harmonize, and understanding how they differ may be a good place to start.

From the cloud perspective, everything pretty much has to be about virtual networking, unless you’re talking about cloud-hosted network functionality rather than cloud applications.  The cloud sees a network as a connection topology and associated QoS.  We know what the latter means, but the former means the relationship between users as defined by their traffic.  There are three broadly recognized connection topologies—Line (point to point), LAN (multipoint) and Tree (multicast).  Each of these topologies connects a community of endpoints—users if you like.
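
To make that cloud-side abstraction concrete, here’s a minimal sketch of how it might be modeled; the class and field names are illustrative assumptions of mine, not drawn from any standard or product.

```python
# A minimal, illustrative model of the cloud-side view of a network:
# a connection topology, the community of endpoints it connects, and a
# QoS requirement.  All names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set


class Topology(Enum):
    LINE = "line"   # point-to-point
    LAN = "lan"     # multipoint
    TREE = "tree"   # multicast


@dataclass
class QoS:
    max_latency_ms: float
    min_bandwidth_mbps: float


@dataclass
class VirtualNetwork:
    topology: Topology
    endpoints: Set[str] = field(default_factory=set)  # the connected community
    qos: Optional[QoS] = None                          # what the cloud requires


# Example: a multicast Tree connecting a source and two receiving sites.
vn = VirtualNetwork(Topology.TREE, {"videoSource", "siteA", "siteB"},
                    QoS(max_latency_ms=100, min_bandwidth_mbps=4))
```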

From the network’s perspective, we’re almost certainly going to have a widely connective native infrastructure that supports everyone who might be part of a virtual network, and so one of the mapping properties between cloud-viewed networks and network-viewed clouds is this subsetting.  VPNs and VLANs have this property; they subset a larger connection community.  Tunnels are “Lines” that connect two endpoints, and so we can see from both these examples that there are mappings we can already define to unify the viewpoint of the cloud and the network.

The deeper challenge comes with the process of creating QoS, meaning creating a virtual network whose services can appear to be traffic-engineered to suit application needs.  The cloud sees QoS as a requirement; the network has to make it a property of a virtual network service.  That’s where we start to see issues with our mapping, because many of the examples of virtualization (LANs created with overlay technology like Nicira’s, or tunnels used for point-to-point, for example) are overlays that can’t control the underlying service.  Thus, they can’t traffic-engineer themselves; they’d have to be given some specific service by the network.  But if they’re just traffic to the network—as overlays are—then how does the network know one virtual network from another?

I think this is where the models of software-defined networking may end up differentiating themselves, or at least we can use this to define a new differentiation of SDN.  One SDN model is the EXPLICITLY INTEGRATED model, where the SDN creates the virtual network within the physical network.  Here, because the physical network is aware of the virtual network (it’s a part of it, like a VPN or VLAN), the SDN can offer both segmentation of the connectivity space and QoS.  In the other model, the COUPLED model, the virtual network has to be coupled to the network, meaning that there has to be a central or management process that “knows” the virtual network and also knows the real network.
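
To illustrate the difference, here’s a rough sketch in code of the same “create a virtual network” request handled under each model; the class and method names are my own invention, not any vendor’s or standard’s API.

```python
# Illustrative only: the two SDN models described above, side by side.
# All classes, methods, and the stubbed dependencies are hypothetical.

class IntegratedSDN:
    """Explicitly integrated: the physical network builds the virtual one."""

    def __init__(self, physical_network):
        self.network = physical_network

    def create_virtual_network(self, endpoints, qos):
        # The network is aware of the virtual network (VPN/VLAN-style),
        # so segmentation and traffic engineering happen in one step.
        return self.network.provision_segment(endpoints, qos)


class CoupledSDN:
    """Coupled: an overlay builds connectivity, a coupling process adds QoS."""

    def __init__(self, overlay_controller, network_manager):
        self.overlay = overlay_controller   # builds tunnels/overlays
        self.network = network_manager      # sees the overlay only as traffic

    def create_virtual_network(self, endpoints, qos):
        vn = self.overlay.build(endpoints)        # connectivity, no QoS control
        # Something that "knows" both layers must tell the real network how
        # to treat this overlay's traffic, or QoS never happens.
        self.network.apply_traffic_policy(vn, qos)
        return vn
```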

Where this may be important is in understanding how a player like Cisco is likely to approach its SDN play.  If Cisco doesn’t want to depend on OpenFlow, whose per-device forwarding control lets you build unlimited virtual networks and engineer them any way you like, then it has to either base its SDN on something like VLAN/VxLAN to segment the network down where the devices can see traffic and supply QoS, or it has to couple the overlay virtual network models down to the network handling of traffic.  The only way to do that and retain the current protocols and devices is to use policy management (PCC), which is what Cisco seems to be doing.  So is there method to their madness?  Maybe.  And maybe others will, of necessity, inherit it.

And Now, “Why Don’t We…?”

Yesterday, I asked the question “Why is it…?” regarding a number of news items.  Today, I’d like to ask another question, again framing it in the context of recent events.  The question is “Why don’t we…?”

Let’s start with an obvious one.  Network operators have been promoting the notion of “network functions virtualization” or NFV.  The idea is to remove service logic from network devices and host it instead on generic servers, probably in “the cloud”.  Got it?  Well, here’s the question:  Why don’t we hear anything about the network devices that coexist with this hosted virtual functionality?  We move services off switches and routers, and put them in servers.  So what happens to the switches and routers?  We don’t hear about it, and no matter how important software in the network becomes, you’ll still have to push bits with network boxes.

You probably guessed the reason.  It’s not that next-gen hardware isn’t important, but that it IS threatening.  No matter how much vendors line up and kiss NFV babies (which they have to for the same reason politicians kiss real babies), they aren’t excited about the notion of dumbing down devices.  Flight of service features would mean flight of differentiation, flight of business value, flight of profit margins, flight of big executive bonuses…you get the picture.  So despite the fact that vendors offer support for NFV in an abstract sense, they don’t offer it in the sense of talking about what an NFV-compliant network box would look like.

What would it look like?  The answer is probably a pair of logical engines, a “policy table” and a “condition/action host” in combination.  Policies, consisting of condition and action components, would be compiled somewhere and loaded into tables in a device.  There, the device would not only forward packets but also perform other network service functions, kicking off to a hosted functional component where needed.  There’s plenty of room for innovation here—in silicon to handle the policy table, in “languages” to write the conditions/actions in, and in the centralized software that manages this.  But it’s not your mother’s network any more, and vendors aren’t anxious to jump out there.
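
Just for illustration, here’s a skeleton of what that policy-table-plus-host pairing might look like; everything here, from the names to the structure, is an assumption of mine rather than anything a standards group has defined.

```python
# A skeletal, purely illustrative policy engine: a table of compiled
# condition/action pairs, with an escape hatch to a hosted (NFV) function.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Policy:
    condition: Callable[[Dict], bool]          # compiled match on packet/flow metadata
    action: Optional[Callable[[Dict], None]]   # local action: forward, mark, drop...
    offload: bool = False                      # hand off to a hosted function instead?


class PolicyEngine:
    def __init__(self, hosted_function: Callable[[Dict], None]):
        self.table: List[Policy] = []    # loaded from a central policy compiler
        self.hosted = hosted_function    # the service logic now living on a server

    def load(self, policies: List[Policy]) -> None:
        self.table = list(policies)

    def handle(self, packet_meta: Dict) -> None:
        for p in self.table:
            if p.condition(packet_meta):
                if p.offload:
                    self.hosted(packet_meta)   # kick off to the hosted component
                elif p.action:
                    p.action(packet_meta)      # simple forwarding stays on the box
                return


# Usage sketch: forward web traffic locally, send everything else to the host.
engine = PolicyEngine(hosted_function=lambda m: print("hosted:", m))
engine.load([
    Policy(condition=lambda m: m.get("port") == 80,
           action=lambda m: print("forward:", m)),
    Policy(condition=lambda m: True, action=None, offload=True),
])
engine.handle({"port": 80})
engine.handle({"port": 5060})
```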

Another question.  Why don’t we see enterprises rushing out to deploy virtual network technology in their clouds?  We’ve heard that virtual networking like Nicira’s brand is essential for the cloud.  We’ve heard that it’s even the foundation of SDN.  But why aren’t enterprises rushing out to deploy it?

Because they don’t need it, in the main.  Virtual networking is largely valuable in scaling up the segmentation technology built into Level 2 (Ethernet) networks to separate tenant networks in multi-tenant data centers like those used by cloud providers.  In enterprise networks, even private clouds, there’s a good chance that separating applications into ships-in-the-night networks isn’t the goal.  You can in fact connect the applications at Level 3, but if you’re doing a lot of horizontal communication, that’s putting a lot of traffic through a gateway.  And who knows when you’ll be doing horizontal communications, even if you’re not now?  Data integration is a big part of SOA and of worker empowerment.

Here’s a good one.  We know that Cisco can benefit from cross-product symbiosis so much that they can buy companies to broaden their reach, and the Street loves them for it.  Why don’t we see the Street and the pundits praising Alcatel-Lucent for their product breadth, even asking them to add MORE rather than sell off?

Enterprises tend to value solutions.  They don’t want to do (and pay for) a ton of network integration, so they tend to buy from vendors who can sell them a complete network architecture, or at least the complete product inventory for the technology area they’re currently trying to address.  So if you’re a Cisco selling to enterprises, you offer business-driven solutions that integrate products.  Given that you’re doing the heavy lifting in selling the customer on the deal, it makes sense to keep the products in house so you can keep all the money.

But if you’re Alcatel-Lucent, you have a buyer (the network operator) who first of all tends to be a “zone purchaser” of technology, and second a self-integrator.  The first point means that network operators buy products by the area of the network they’re installed in: edge, core, etc.  That means that having a product that’s NOT in that area doesn’t offer much symbiotic value in the deal.  The second point means that the operator has their own vision of how things should go together, so they’re less likely to value the vendor’s package.  Best of breed, they say.

The net of the “Why is it” and “Why don’t we” questions is that there’s a lot of movement and opportunity under the surface.  We’re an industry, and probably a global economy, that’s stuck skimming when we should be diving.  That would make a good New Year’s Resolution.

Why Is It…?

Some of the news bits I saw this week raised an interesting point (to me, at least): each of them reflected an underlying issue that was being overlooked.  So I want to look at them today, and ask “Why is it…?”

The top item is that big Netflix outage over the holidays.  Everyone knows that Amazon’s cloud hosts Netflix, and everyone knows that the cloud is supposed to be the ultimate step forward in reliability/availability.  Why is it that we have such radical cloud failures if that’s the case?  Here again we have two answers.

Top of the hit parade, answer-wise, is that the cloud isn’t intrinsically more reliable.  The general truth in “reliability” is that anything is as reliable as its least-reliable component.  Cloud technology that’s going to provide automatic augmentation of resources when loads peak depends on its resource-allocation process.  Is that a device or a computer?  Is it “redundant”?  How can you make the thing that’s supposed to recognize load issues and respond immune to loading or failure itself?  It’s possible, I guess, but we really don’t pay enough attention to the architectural elements of a cloud computing deployment, so we truthfully don’t have any notion of what its real availability model would look like.

The second point is that performance of any sort costs.  Again, we have a misconception about the cloud, which is that the public cloud is always cheaper.  Not true.  The public cloud is cheaper just as hosting web servers is cheaper—when resources are under-utilized.  Operating at a large scale, enterprises or media companies can deliver similar economies of scale, and that means that companies like Netflix are having to push the economics of their hosting relationship to be profitable and keep Wall Street happy.  So do they buy the most reliable stuff?  Do you, in your own business, protect yourself absolutely against failures, or do you roll the dice just a bit?  The latter is what nearly every enterprise does, and likely what Netflix does.  You get what you pay for.

Then we have an SDN question.  When we read about SDN we almost always read in terms of “it’s OpenFlow” or “it’s proprietary” as the polar alternatives.  Why is it that we have framed a concept as general as “software defined networking” into such an explicit implementation framework as OpenFlow, particularly given that OpenFlow isn’t sufficient to create an SDN to start with?  Guess what: two answers.

First, you can’t overestimate the role of the media here.  “News” means “novelty”; it doesn’t mean “truth” (for you cynics, “Pravda” means “truth”).  The press is an industry with its own priorities, generating clicks on URLs to serve ads that pay to keep the lights on.  There are a limited number of reporter-hours available to push material out, and if you want the most clicks you push the most “readable” material, not necessarily the most relevant or useful.  In the SDN space, the promise of a complete network revolution is more “readable” than the truth, which is that we have to evolve to SDN unless everyone is prepared to toss their gear and start over.  But that’s a network evolution story, and evolution takes millions of years in biology.  It might as well take that long in technology, from a press perspective, because nobody is going to read through the dry details.

The second point is that nobody understands what’s inside an SDN to start with.  I pointed out yesterday that buyers wouldn’t be able to draw a complete diagram of SDN that had more than two boxes: “software” and “network”, which is hardly dazzling insight.  So if we don’t understand the functional elements of an SDN, how can we understand what those elements map to in terms of standards or protocols or even products?  There is every chance that what will emerge as an “SDN” architecture will be a combination of OpenFlow, the Policy and Charging Control (PCC) framework of mobile/3GPP, and Network Functions Virtualization.  Until we understand how those things relate to individual functional boxes, though, we can’t make much progress.  And absent reality, a good fable is enough to drive the media coverage.

We’re at the root of the problem here, though.  There is no reason why all of the points I’ve noted here couldn’t be fixed in a heartbeat.  All that’s needed is to face reality. Shall we give that a try for the new year?

Is There an SDN in Your 2013 Future?

Where are we with respect to SDN?  That’s a question a lot of people are asking, and that a lot of vendors are watching.  The answer, at least according to my survey this fall, is “We don’t know”, and that’s an important point to consider if you want to understand what might happen in the SDN space in 2013.

I measure “buyer literacy” for a given technology based on their ability to articulate the benefit case for the technology sufficiently to justify a project, and to draw the deployment architecture well enough to be able to engage vendors in purchase dialogs.  Based on that measurement, we’re at about 8% buyer literacy in SDN.  In the past, a successful market has demanded about a 33% buyer literacy, which says that with SDN we have a long way to go.

Why is this such a mess?  One reason is that there are three different models of SDN value being tossed around wrapped in common technology nomenclature.  We call “virtual networking” a la Nicira SDN.  We call OpenFlow networks SDN.  We call vendor-sanctioned software-driven network behavior exercised through traditional protocols (Cisco’s ONE) SDN too.  Users, meaning enterprises, rarely understand the distinction much less the “value focus” for each of these approaches.  How then could you justify a project?  Because of this, SDN for enterprises is very much a defensive issue; if my vendors don’t have an SDN strategy I’m risking stranding assets if SDN takes hold.  Blow a kiss at the SDN baby and most buyers are happy (which is why network vendors’ lips are pursed into permanent puckers).

But the big problem in the survey wasn’t the issue of the value proposition; buyers think that either SDN is a cost play or it’s linked to cloud deployment, and both of those things are substantially true even if buyers don’t understand why that is.  What the buyer doesn’t understand is how to draw an SDN, how to create a simple functional diagram.  If you ask them, the model that emerges most of the time is two blocks, with the top labeled “software” and the bottom labeled “network”.  OK, functionally perhaps that’s true, but if you then try to go out and buy your SDN tools, how far do you get by saying “I need software” and “I need network?”

Google’s SDN deployment offers us a vision of the issues of SDN.  What Google has is an SDN enclave (a core, in their case) surrounded by “gateway” points.  These gateway points talk IP-control-plane to the rest of the network, and take the topology/forwarding information they obtain back to a central point where it’s converted into route policies, meaning preferred paths between points.  These aren’t user flows, they’re trunk routes, and they are then converted by an OpenFlow controller into commands that drive forwarding plane changes on devices Google custom-built.  How much of this do you suppose a typical enterprise understands?  Where are the commercial products that could support these various functional missions?
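
To give a feel for how many distinct functional pieces are involved, here’s a rough sketch of that pipeline with stubbed-in data; all of the names are invented for illustration and don’t reflect Google’s actual implementation.

```python
# Illustrative only: the functional stages of the pipeline described above.
# Gateways learn routes via the IP control plane, a central process turns
# that topology into preferred trunk paths, and a controller pushes the
# resulting forwarding rules to devices.  Names and data are stand-ins.

def collect_topology(gateways):
    """Gather what each gateway learned from its IP control-plane peers."""
    links = {}
    for gw in gateways:
        links.update(gw.get("learned_paths", {}))
    return links


def compute_trunk_routes(links, site_pairs):
    """Central step: pick a preferred path per site pair (trunk routes, not user flows)."""
    return {pair: links[pair] for pair in site_pairs if pair in links}


def program_devices(push_rule, trunk_routes):
    """Controller step: turn preferred paths into per-device forwarding entries."""
    for (_src, dst), path in trunk_routes.items():
        for device in path:
            push_rule(device, match={"dst_site": dst}, action=f"forward-toward-{dst}")


# Wire it together with stand-in data:
gateways = [{"learned_paths": {("siteA", "siteB"): ["edge1", "core1", "edge2"]}}]
routes = compute_trunk_routes(collect_topology(gateways), [("siteA", "siteB")])
program_devices(lambda dev, **kw: print(dev, kw), routes)
```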

I’m not arguing against OpenFlow or SDN; you all know by now that I’m a believer.  But I am arguing for an end to the totally vacuous discussion we’re having on the subject.  Nobody in this day and age is going to make a technology investment for no reason other than to consume a new technology.  Nobody is going to buy into an architecture they can’t even block-diagram.  We’re creating a fog around SDN, or at least allowing a fog to descend on it, and it’s not just blocking the details of SDN, it’s threatening the opportunity.  The vendor who sings this particular technology song very well may have an immense opportunity…if they can get the song heard in the real world, by real buyers.