We all know the phrase “You can’t get there from here.” It’s a joke used to illustrate that some logical destinations have no realistic migration path. Some wonder whether a new network model of any kind, whether it’s white-box-and-OpenFlow, hosted routing, or NFV, is practical because of the migration difficulties. Operators themselves are mixed on the point, but they do offer some useful insights. Another useful data point is a move chip giant Intel is making. Can we “get there”, and just where are we trying to get?
One of the biggest issues with network transformation is the value and stability of the transition period. Since nobody is going to fork-lift everything, what you end up doing in transformation is introducing new elements that, particularly at first, are a few coins in a gravel pit. It’s hard for these early small changes to make any difference overall, so it’s hard for the transition phases to prove anything about the value of transitioning in the first place.
Let’s start with the financial side. Say you have a billion dollars’ worth of infrastructure. You have a five-year write-down, so 20% of it is available for replacement in a given year. Say you also expect to add 10% more every year, so you have, net, about 30% of that billion available. Now you get a project to “modernize” your network. Logically, you’d expect to be able to replace old with new to the tune of 30% of the infrastructure, right? According to operators, that’s not the case.
Normal practice for a “modernization” would be to do a series of trials culminating in a controlled field trial. Operators say that such a trial, on our hypothetical billion-dollar infrastructure, would likely involve no more than two percent of the total, regardless of the financial displacement theories. That controlled field trial would be tasked with proving the technology and the business case, and that could be hard.
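To put some rough numbers on that gap, here’s a minimal sketch using only the hypothetical figures above (a billion-dollar installed base, five-year write-down, 10% annual growth, and a roughly 2% field trial); the dollar amounts are illustrative, not operator data.

```python
# Rough sizing of how much infrastructure is "in play" in a given year,
# versus how much a controlled field trial would actually touch.
# All figures are the hypothetical ones from the text, not real data.

installed_base = 1_000_000_000      # total infrastructure value ($)
write_down_years = 5                # depreciation period
annual_growth = 0.10                # expected expansion per year
field_trial_share = 0.02            # typical controlled-field-trial scope

replacement_pool = installed_base / write_down_years   # 20% = $200M
expansion_pool = installed_base * annual_growth        # 10% = $100M
in_play = replacement_pool + expansion_pool            # ~30% = $300M

field_trial = installed_base * field_trial_share       # ~$20M

print(f"Theoretically displaceable this year: ${in_play:,.0f}")
print(f"Likely controlled-field-trial scope:  ${field_trial:,.0f}")
```

The point of the arithmetic is the mismatch: roughly $300 million is theoretically in play, but only about $20 million of it would ever see the new technology during the trial phase.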
What is the business case, meaning the value proposition? If it’s a simple matter of “Box Y costs 30% less than Box X and is fully equivalent”, then you could in fact prove out the substitution with a field trial of that size. You pick some displaceable, representative boxes, do the substitution, and see whether the stated full equivalence is real. If that happy simplicity is complicated by things like “Box Y isn’t fully equivalent” or “part of the savings relates to operations or agility improvements”, then the scale of the trial may not be sufficient to prove things out. It’s this focus on the relationship between features/capabilities, benefits, and proof points that brings technology into things.
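Here’s an equally hypothetical sketch of why trial scale matters. The unit costs and opex claims below are invented for illustration; the point is only that capex substitution can be proved box-by-box, while operations and agility claims largely can’t be demonstrated at 2% scale.

```python
# Hypothetical split of a "Box Y saves money" business case into the
# part a small field trial can prove and the part it can't.
# All numbers are illustrative assumptions.

box_x_cost = 100_000        # assumed unit cost of incumbent Box X ($)
capex_discount = 0.30       # the "Box Y costs 30% less" claim
units_in_trial = 20         # displaceable boxes picked for the trial

# Capex substitution is provable per box: buy Box Y, install, compare.
provable_per_unit = box_x_cost * capex_discount

# Operations and agility savings only emerge at network scale, so a
# 2%-of-infrastructure trial demonstrates little of them.
opex_claim_per_unit = 10_000    # claimed annual ops saving per box

provable = provable_per_unit * units_in_trial
claimed = (provable_per_unit + opex_claim_per_unit) * units_in_trial

print(f"Provable in trial: ${provable:,.0f} of ${claimed:,.0f} claimed")
```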
Justifying transformation gets easier if you can take a portion of a network that was somewhat autonomous and implement it fully using Box Y. In other words, if part of your 10% expansion strategy could be separated from the rest and used as the basis for the field trial, you might be able to treat it as a little network of its own and get a much better idea of how it could make the benefit case.
In some cases, the 10% expansion targets might include a mission or two that lends itself to this approach. 5G is a good example. If operators plan 5G trials or deployment in 2019 and the plans include using white-box cell-site switching, it’s very possible you could define the specific requirements for interfacing those devices so that a single cell or a set of nearby cells might be able to form a sandbox for testing new technology.
Operators tell me that this is the way they’d like to prove in new technology. You don’t fork-lift because you don’t want to write down too many assets. You don’t just replace aging gear with transformed boxes either, because the replacements would be spread out through the network and couldn’t work symbiotically with each other. The need to fit them into all kinds of different places also raises risks. So what you want to do is find a place where something new can be done a new way.
This is one reason why 5G and similar initiatives that have at least some budgeting support could be critical for new technologies, and it probably explains why operators like AT&T are so interested in applying open-networking principles at the 5G edge. Edges are always a convenient place to test things out, because new elements are easier to integrate there. There are also more devices at the edge, which means that edge locations may be better for validating the operations savings and practices associated with a new technology.
IP networks also have a natural structure of Autonomous Systems, or domains. You’d use an interior gateway protocol (like OSPF) within a domain and an exterior gateway protocol (like BGP) at domain boundaries. The reason this is important is that domains are opaque except for what the exterior gateway protocol makes visible, which means you could almost view them as intent models. “I serve routes,” says the BGP intent model, but how it does that isn’t revealed or relevant. Google uses this in its SDN network implementation, surrounding an SDN route collection with a ring of “BGP emulators” that advertise connectivity the way real routers would, inside the black box of the BGP intent-model-like abstraction. That means you could implement a domain (hopefully a small one) as a pilot sandbox, and it could be used with the rest of your IP network just as if it had been built with traditional routers (assuming, of course, that the new technology worked!).
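Here’s a toy sketch of that intent-model view, assuming nothing about any real router or controller API; the class and function names are invented. The point is only that two very different interiors can present an identical advertised surface to their neighbors.

```python
# Toy illustration of a domain as an "intent model": the outside world
# sees only the routes the domain advertises (the BGP-like surface);
# whether the inside is traditional routers or an SDN controller is
# hidden. Names here are invented for illustration only.

class LegacyRouterDomain:
    """Domain built from traditional IP routers."""
    def advertised_routes(self):
        # Routes learned via the IGP, exported at the BGP boundary.
        return ["10.1.0.0/16", "10.2.0.0/16"]

class SdnPilotDomain:
    """Same abstraction, but routes come from an SDN controller that
    programs white-box switches and speaks BGP at the domain edge."""
    def advertised_routes(self):
        # Identical external behavior, entirely different interior.
        return ["10.1.0.0/16", "10.2.0.0/16"]

def peer_with(domain):
    # A neighboring domain consumes only the advertised surface; it
    # cannot tell (and doesn't care) how the routes are produced.
    for route in domain.advertised_routes():
        print(f"install route {route} via peer domain")

peer_with(LegacyRouterDomain())   # today's network
peer_with(SdnPilotDomain())       # pilot sandbox, externally the same
```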
Cloud computing, hosting of features using NFV, content delivery, and a bunch of other applications we already know about would be plausible sources for new domains, and would thus generate new candidates for the pilot testing of new technologies. Best of all, these new service elements tend to be focused in the metro network, which is also where operators are planning the most fiber deployment to upgrade transport capacity. The network of the future is very likely to be almost the opposite of the network of the present, with most of its capacity in metro enclaves at the edge and a smaller-capacity fabric linking the enclaves. That’s the logical model for edge computing, event processing and IoT, and content delivery.
IoT is the big potential driver, more important to next-gen infrastructure than even 5G. While 5G claims a lot of architectural shifts, most are inside 5G Core, the part that’s on the tail of the standards-progress train and that has the most problematic business case. IoT is a compute-intensive application that relies on short control loops and thus promotes edge computing over centralized cloud data centers. It boosts metro capacity needs and also creates the very kind of domain-enclaves that lend themselves to testing new technology options.
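A back-of-the-envelope sketch makes the control-loop point; the latency and budget figures below are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope control-loop budget for an IoT event:
# sensor -> host -> actuator. All numbers are illustrative assumptions.

def control_loop_ms(network_one_way_ms, processing_ms):
    # Two network traversals (sensor to host, host to actuator)
    # plus the processing time at the host.
    return 2 * network_one_way_ms + processing_ms

LOOP_BUDGET_MS = 30   # assumed tolerance for the control loop

edge = control_loop_ms(network_one_way_ms=2, processing_ms=10)      # metro edge host
central = control_loop_ms(network_one_way_ms=40, processing_ms=10)  # distant cloud region

print(f"Edge hosting:    {edge} ms  (within budget: {edge <= LOOP_BUDGET_MS})")
print(f"Central hosting: {central} ms  (within budget: {central <= LOOP_BUDGET_MS})")
```

The same processing takes the same time in both cases; it’s the network legs that blow the budget when the hosting point is far from the event source, which is why short control loops pull compute toward the metro edge.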
Interestingly, Intel just announced it was spinning off Wind River to have the latter focus on IoT, edge computing, and intelligent devices. This is the kind of thing that could accelerate IoT adoption by addressing the real issues, almost all of which relate to edge processing rather than inventorying and managing sensors. However, Wind River has been focusing on operating systems and middleware in general terms, rather than on assembling any specific enabling technologies. Would they create a model of functional computing to rival Amazon’s Lambda and Step Functions? They could, but whether they’ll take that bold step remains to be seen.
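For readers who haven’t looked at the functional model, here is a minimal, purely illustrative sketch of the stateless, event-per-invocation style that Lambda popularized; the event shape and handler name are invented and imply nothing about what Wind River might build.

```python
# Minimal illustration of the "functional computing" style: a stateless
# handler invoked once per event. The event fields and handler are
# hypothetical examples, not any real platform's API.

def handle_sensor_event(event):
    """Stateless function run per IoT event, ideally at the edge."""
    if event.get("temperature_c", 0) > 80:
        return {"action": "throttle", "device": event.get("device_id")}
    return {"action": "none", "device": event.get("device_id")}

# A step-function-like workflow would chain handlers such as this one;
# here we simply invoke it directly.
print(handle_sensor_event({"device_id": "pump-7", "temperature_c": 92}))
```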
Operators are slightly more optimistic about IoT than about 5G as a driver of next-gen technology and transformation, but only slightly. This, despite the fact that the service-demand-side credibility of IoT is much higher than that of 5G, which is more likely to make current cellular networks faster and a bit more efficient than to launch myriads of new services. I think a big part of the reason lies in the classic devil-you-know problem; what an IoT service infrastructure looks like is tough to visualize. For Wind River, and for other vendors who need transformation to succeed, making IoT easier to visualize may be their best mission statement.