Has AT&T Won a Race to the Wrong Finish?

AT&T has arguably been the champion of transformed infrastructure, a major contributor to open networking projects and an adopter of open technology from the cellular edge to the core, as well as of open operations automation and orchestration. The company announced it would virtualize its network, and has recently said it’s met the majority of those goals. That, coupled with the recent executive changes, has led some to speculate that AT&T may ease off the gas on open-model networks. I think the truth is a lot more complicated, because the reasons why AT&T might consider easing off are more complicated.

AT&T stands out among US Tier Ones, and perhaps even global Tier Ones, for its challenges. Its demand density (roughly, revenue opportunity per square mile) is low compared with almost all the other Tier Ones, and it seems to have fumbled its M&A a bit. In the recent spectrum auctions for C-band 5G, it had to bid high, creating some Wall Street concern that its debt was becoming an issue.

All of this is actually a pretty good reason to want things like open-model networks and service lifecycle automation, so on the surface it would seem that AT&T’s business challenges might push them further into their open-model commitment rather than cause a reversal. That’s where the complications come in, though, because AT&T has made some mistakes in its early transformation and virtualization projects, and while they may not back away from open-model networks, they might have to reconsider some of what they’ve been doing.

The heart of AT&T’s transformative vision lies in the traditions of networking. We build networks from “nodes” and “trunks”, the latter representing point-to-point connection paths and the former devices that can shuffle packets from one path to another. That process creates “routes” through a network, and it’s also what gave us the name “router” for the nodes. AT&T planners, like most network planners, are conditioned to think in terms of nodes and trunks, and so their view of a transformed network was really a vision of transformed nodes and trunks. Since a trunk is a rather featureless bit pipe, the focus is on the nodes.

What is a transformed node? Three basic models have emerged. The first is the “white box”, a hardware device built to be a generic switch/router when equipped with the proper software. Because hardware and software are separated in this model, presumably letting you play mix-and-match with the combination, it’s also called “disaggregation”. The second model is the virtual model, where the software functionality of a node is hosted on a resource pool, meaning a server or virtual machine.

Both these basic models emerged from a presumption that networks had to be built a box, meaning node, at a time, so the transformed nodes had to look like traditional ones (routers) from the outside, and function as one if they were introduced into a random point in the network. Our third model, the SDN model, took a different view, taking “disaggregation” further to separate the control and data/forwarding planes. The control plane, rather than being distributed as it is traditionally to every router, was centralized so that knowledge of routes and topology was concentrated in one place. The forwarding plane was simplified to be a pure shuffler of packets, with no intelligence to decide what a route was.
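
To make that separation concrete, here’s a rough sketch in Python (purely illustrative, not tied to any real SDN controller or its APIs) of what the third model implies: the controller owns topology and route computation, and each forwarding element does nothing but look up a next hop in a table it was handed from above.

```python
from collections import deque

class ForwardingElement:
    """Pure packet shuffler: no route intelligence, just an installed table."""
    def __init__(self, name):
        self.name = name
        self.table = {}                     # destination -> next-hop node name

    def install(self, table):
        self.table = dict(table)            # the controller hands this down

    def forward(self, destination):
        return self.table.get(destination)  # next hop, or None (drop)

class CentralController:
    """Centralized control plane: knows topology, computes routes, pushes tables."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:                  # bidirectional links
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def _next_hops(self, src):
        """BFS shortest paths from src; returns destination -> first hop."""
        first_hop, visited, frontier = {}, {src}, deque([(src, None)])
        while frontier:
            node, via = frontier.popleft()
            for nbr in self.adj.get(node, ()):
                if nbr not in visited:
                    visited.add(nbr)
                    hop = nbr if via is None else via
                    first_hop[nbr] = hop
                    frontier.append((nbr, hop))
        return first_hop

    def program(self, elements):
        for name, element in elements.items():
            element.install(self._next_hops(name))

# Tiny three-node example: A - B - C
elements = {n: ForwardingElement(n) for n in "ABC"}
CentralController([("A", "B"), ("B", "C")]).program(elements)
print(elements["A"].forward("C"))           # -> "B": A knows only its next hop
```

The point of the sketch is that a forwarding element never learns a route; all the “knowing” lives in one place and is distributed as tables.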

Before you wonder what this has to do with AT&T (if you’re not already wondering that!), let me close the loop. AT&T took a white-box view, the first of our three models. Mobile networks, especially 5G, have taken up the virtual model, where functional elements are hosted. Data centers have tended to adopt the third, SDN, model. This is the first of AT&T’s problems: the future of networking is surely a combination of the thing that’s budgeted (5G and the virtual view) and the thing that’s the credible revenue future, the data center where higher-level features can be hosted. They’re nailing their conception of the future network to the node-and-trunk rigidity of the past, in the face of indications that things are going a different way.

Part of the reason behind the rigidity is that the virtualization track, our second model, was also bound up with the old-network vision rather than facing forward to the ultimate virtual destination, the cloud. Network Functions Virtualization (NFV) was a box maven’s notion of a virtual network, discarding modern visions of cloud-native design in favor of node-and-trunk-think elements like “service chaining”. NFV-think was written into 5G, and it was also a theme in the way AT&T thought about “orchestration”, something we now see as a cloud-and-container concept. We’ll get to that shortly.
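
As a rough illustration of why “service chaining” is node-and-trunk thinking in software clothing, consider this hypothetical Python sketch (the functions and addresses are invented): the chain is just the old string of appliances, now hosted, with every packet still threaded through each function in a fixed order rather than being handled by independently scalable cloud-native components.

```python
# Hypothetical illustration only: a "service chain" as NFV conceived it is an
# ordered sequence of hosted functions every packet must traverse, a direct
# software copy of the physical appliance string it replaces.

def firewall(packet):
    if packet.get("port") == 23:                  # drop telnet, for example
        return None
    return packet

def nat(packet):
    return dict(packet, src="203.0.113.10")       # rewrite the source address

def deep_packet_inspection(packet):
    return dict(packet, inspected=True)           # mark the packet as inspected

SERVICE_CHAIN = [firewall, nat, deep_packet_inspection]   # fixed, ordered, box-like

def traverse(packet):
    """Thread one packet through the whole chain, in order, or drop it."""
    for vnf in SERVICE_CHAIN:
        packet = vnf(packet)
        if packet is None:
            return None                           # dropped mid-chain
    return packet

print(traverse({"src": "10.0.0.5", "port": 443}))
print(traverse({"src": "10.0.0.5", "port": 23}))  # dropped by the firewall
```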

The second problem AT&T has goes back to those days of old, the nodes and trunks. Operations was at one time a purely manual process. Even thirty years ago, “provisioning” meant sending a human to diddle with wires and connections. Just as early IT focused on post-processing records of manual tasks that had already been done, so early operations visions had a batch focus. AT&T operations people had this mindset when they set out on the adventure that became ONAP.

The Open Network Automation Platform, ONAP, is the Linux Foundation project that succeeded AT&T’s own ECOMP (Enhanced Control, Orchestration, Management, and Policy). It has absorbed some of the NFV work, and a lot of the same thinking.

The big problem with ONAP is that it’s really a batch system with some real-time tweaks, largely in queuing. It lacks any real connection with current cloud-think, to the point where it’s hard to point to any accommodation to cloud-native design. It’s not based on a service model or template, though there seem to be attempts to retrofit one in, and it doesn’t use the old TMF NGOSS Contract mechanism of steering events to processes through state/event tables in the data model. That means it’s not inherently stateless, and so not inherently scalable or available either.
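
To show what that missing mechanism looks like, here’s a purely illustrative Python sketch of NGOSS-Contract-style event steering (the states, events, and handlers are invented; this is not ONAP’s design): the service’s data model carries the state, a state/event table maps each (state, event) pair to a handler, and because the handlers keep no state of their own, any instance of them, anywhere, can process the next event.

```python
# Illustrative only: state/event steering driven by the service data model,
# in the spirit of the TMF NGOSS Contract idea. Nothing here is ONAP code.

def activate(model, event):
    print(f"deploying {model['service_id']}")
    return "ACTIVATING"

def confirm_active(model, event):
    print(f"{model['service_id']} is in service")
    return "ACTIVE"

def repair(model, event):
    print(f"redeploying a failed component of {model['service_id']}")
    return "ACTIVATING"

# The state/event table belongs to the service model, not to any process.
STATE_EVENT_TABLE = {
    ("ORDERED",    "ACTIVATE"): activate,
    ("ACTIVATING", "DEPLOYED"): confirm_active,
    ("ACTIVE",     "FAULT"):    repair,
}

def handle_event(model, event):
    """Stateless worker: all context comes from the model, so any instance
    of this function, anywhere, can process the event."""
    handler = STATE_EVENT_TABLE.get((model["state"], event))
    if handler is None:
        return model                         # event not meaningful in this state
    return dict(model, state=handler(model, event))

service = {"service_id": "vpn-001", "state": "ORDERED"}
for ev in ["ACTIVATE", "DEPLOYED", "FAULT", "DEPLOYED"]:
    service = handle_event(service, ev)
print(service["state"])                      # -> ACTIVE, after repair and redeploy
```

Because the only thing that persists between events is the model itself, the processing components can be scaled or replaced at will, which is exactly the scalability and availability a batch-plus-queues design doesn’t get for free.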

Done right, ONAP could take on the role of the “Near-Real-Time RAN Intelligent Controller”, which AT&T suggests it’s thinking about, but ONAP is not done right. Let me see, we’ve got NFV Management and Orchestration (MANO), we’ve got 5G RICs (two of them, in fact) that orchestrate, and we have ONAP. The mere fact that we seem to have multiple levels of orchestration/orchestrators developing is a pretty clear sign that people aren’t thinking about infrastructure as much as pieces of infrastructure, and they’re not summing the parts to see if they add up to the whole.

Just because today’s IP networks are made up of pieces doesn’t mean that you can address transformation piecemeal. There was a vision behind the IP network of today, and there will have to be a vision behind any transformed equivalent, or the result will be, at best, incapable of realizing the full benefits of transformation. At worst, it simply won’t work, won’t scale.

The smart thing to do, whether you’re AT&T or another big operator, is to think of everything in terms of a separate, cloud-hosted control plane, not only for IP but for 5G, CDNs, and other complex services. You then couple it to a white-box data plane built like SDN’s, designed to work with forwarding tables delivered by the control plane from the Great Above. That’s a melding of the three models I’ve described for open networking, with a respectable dose of cloud awareness thrown in. But that’s not what they’re doing, at least so far.

I think it’s what they’ll end up doing, though, under what’s likely to be considerable pressure. The problem with AT&T’s open initiatives isn’t executives or realized goals, but the lack of realization that the goals were meaningless because their “progress” was in the wrong direction. NFV is virtualization, but it would be a terrible idea to apply it throughout a network. ONAP is similarly a terrible way to do orchestration. They’ve gotten out in front by neglecting critical whole-network requirements assessment, and they’ve taken a leadership position, but mostly one that will be known for going in the wrong direction.

AT&T’s recent comments on the C-band auction and its 5G direction have been cautious in comparison with remarks made by other bidders, according to another Light Reading story. It may be less that AT&T is cautious than that others have played up the marketing dimension of 5G more than they should have. But caution here also raises the question of what AT&T intends to do to realize some revenue and improve profit-per-bit in its 5G deployments. If the classic 5G drivers like IoT aren’t real, then either AT&T needs something that is, or they need to accept that their spectrum bids may in fact have hurt their financial picture, and the Wall Street skeptics might then be right.

If AT&T is changing its open-source commitment to reflect the fact that they’ve got to rethink their model, that’s a bitter pill but one worth swallowing. If they are indeed thinking they’re nearly done with their virtualization, or if the key executive support for open-model networking has departed and gutted those efforts, that’s a very bad thing.

Every promise AT&T has made, every deployment, every vendor commitment, every contribution to open source or open hardware, needs to be reviewed right now, because it’s almost too late. I hope that the AT&T exec quoted in the article I cited above is wrong, and that they’re stopping their current initiatives and thinking hard about where they’re headed, because it’s not where they need to be.

That’s something for the vendor community to think about, too. It should be clear from the 5G Open RAN activity that operators are determined to adopt open-model networking. Vendors need to figure out how they’ll support that goal, and one smart starting point would be to get themselves positioned at the point where open-model evolution will inevitably take AT&T and others. Right now, that’s wide-open space, free for anyone to lay claim to, but it’s not going to stay that way. Inevitably, somebody gets smart, even if it’s a happy accident.