Ciena just posted the first billion-dollar quarter in its history, and that’s surely big (and good) news for the company and investors. What sort of news is it for the networking market overall? I think it’s a sign of some major shifts in thinking and planning, things subtle enough to be missed even by some network operator planners, but important enough to impact how we build networks overall. And, of course, the vendors who supply the gear.
If we look at packet networking’s history, we can trace things back to a RAND study in the mid-1960s, which observed that overall utilization improved when data was broken into “packets” that could be intermingled to fill a link with traffic from multiple sources. For most of the history of data networking, we were focused on traffic aggregation, both to achieve transport economy of scale and to provide uniform connectivity without having to mesh every user with every source. That’s not going away…at least, not completely.
Fiber transport has gotten cheaper every year, and as that has happened, the value of concentrating traffic to achieve economies of scale has lessened. Operators started telling me years ago that it was getting cheaper to oversupply bandwidth than to try to optimize how it was managed. Since then, electrical packet handling has served less and less to economize on transport and more and more to provide full connectivity without physical meshing.
The problem with packet networking is that it introduces complexity, and complexity means that operations costs tend to rise and operations errors rise with them. Network configuration problems, misbehavior of the adaptive protocols that manage traffic and routing, and simple scale have combined to make opex a big problem. Historically, in fact, network-related opex has accounted for more cents of every revenue dollar than capex, and operators have worked hard to drive opex out of their costs in order to sustain a reasonable return on infrastructure.
Fiber transport is perhaps the single most essential element of the network. There are a lot of ways we can push packets, but they all rely on fiber to transport data in bulk, and in many cases (like FTTH) even to individual users. Ciena’s quarter may be telling us that network trends are combining to make fiber more important, even relative to the packet infrastructure that overlays it. The best way to understand what that would mean for networking overall is to look at what’s driving a change in how we use fiber itself.
One obvious change in fiber is in the access network. Whether we’re talking about fiber-to-the-home (FTTH), fiber-to-the-node (FTTN), or fiber to the tower (backhaul), we’re still talking about fiber. As per-user capacity demands increase, it’s more and more difficult to fulfill them without taking fiber closer to, if not right up to, each user. Copper loop, the legacy of the public switched telephone network (PSTN), has proved unable to reliably deliver commercially credible broadband, and fiber is exploding in access as a result.
Another change that’s less obvious is the dominance of experience delivery as the primary source of network traffic. Video is a bandwidth hog, and for years regular reports from companies like Cisco have demonstrated that video traffic growth is by far the dominant factor in Internet traffic growth. With lockdowns and WFH, even enterprises have been seeing video traffic (in the form of Teams and Zoom calls, for example) expand.
Cloud computing is the other factor in enterprise bandwidth growth. To improve sales and customer support, businesses worldwide have been expanding their use of the Internet as a customer portal and pushing the customized experience-creation process into the cloud. The Internet then gathers traffic and delivers it to the data center for traditional transaction processing. Cloud data centers, once nearly one per provider, are now regional, and we’re seeing a movement toward metro-level hosting.
Edge computing is part of that movement. There are multiple classes of applications that could justify edge computing, all of which relate to real-time real-world-to-virtual experiences. While we use the term “edge” to describe this, the practical reality is that edge computing is really going to be metro computing, because metro-area hosting is about as far outward toward the user as we can expect to deploy resource pools with any hope of being profitable for the supplier and affordable for the service user. That truth aligns with our final fiber driver, too.
That driver is metro concentration of service-specific features. We already cache video, and other web content, in metro areas to reduce the cost and latency of moving it across the Internet backbone. Access networks increasingly terminate at the metro level, whether we’re talking wireless or wireline. In the metro area, we can isolate user traffic for personalization while still concentrating resources efficiently. It’s the sweet spot of networking, the place where “hosting” and “aggregation” combine, meaning the place where network and data center meet.
All of this so far represents first-generation, evolutionary changes to the role of fiber in the network, but once we concentrate traffic in the metro and reduce latency from user to metro, we’ve also changed the fundamental meshing-versus-packet-aggregation picture. If service traffic concentrates in metro areas, then global connectivity could be achieved by meshing the metros themselves. It’s that second-generation change that could be profound.
There are somewhere between three hundred and fifteen hundred metro areas in the US, depending on where you set the population threshold and how far into the suburbs you allow the “metro” definition to extend. My modeling says that there are perhaps 800 “real” metro areas, areas large enough for the metro and fiber trends I’ve talked about here to be operative. This is important for fiber reasons because while we couldn’t possibly afford to fiber-mesh every user, and fully meshing all 800 metro areas would take 800 × 799 ÷ 2, or 319,600, fiber routes, we could apply optical switching that adds a single regional hop and get that number down dramatically. Define 8 regions, and you need just 3,200 simplex fiber spans to dual-home your 800 metro areas to them, and fewer than 60 more to fully mesh the regions themselves.
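For those who want to check the arithmetic, here’s a minimal sketch in Python. The 800-metro and 8-region figures come from the modeling above; the “simplex” convention (counting one span per direction) matches the way the 3,200 figure is quoted, and the grouping of metros into regions is purely illustrative.

```python
# Sanity-checking the span counts in the paragraph above.
from math import comb

METROS = 800
REGIONS = 8

# Full mesh of all metros: one route per metro pair.
full_mesh_routes = comb(METROS, 2)          # 800 * 799 / 2 = 319,600
full_mesh_simplex = 2 * full_mesh_routes    # 639,200 simplex spans

# Regional alternative: every metro dual-homed to regional optical
# switches, plus a full mesh among the regions themselves.
dual_home_simplex = 2 * (2 * METROS)        # 3,200 simplex spans
region_mesh_simplex = 2 * comb(REGIONS, 2)  # 56 simplex spans

print(f"Full metro mesh: {full_mesh_routes:,} routes, "
      f"{full_mesh_simplex:,} simplex spans")
print(f"Regional design: {dual_home_simplex:,} metro spans plus "
      f"{region_mesh_simplex} inter-region spans")
```

Two orders of magnitude fewer spans is, in a nutshell, the argument for the regional optical layer.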
What I’m describing here would create a network that has no real “backbone” at the packet level, but rather focuses all traditional packet processing in the metro centers (regional networking would be via optical switching only). We’d see higher capacity requirements in the metro area, and there we’d likely see a combination of optical and electrical aggregation applied to create capacity without introducing too much handling delay.
This structure is what would transform networking, because it could make applications like IoT and metaverse hosting practical on at least a national scale, and likely on a global scale. It would demand a lot more optical switching, a focus on avoiding congestion and other latency sources, and likely a new relationship between hosting and networking to avoid a lot of handling latency. But it would make low-latency applications freely distributable, and that could usher in not only a new age of networking, but a new age of computing and the cloud as well.
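To put a rough number on “low latency,” here’s a back-of-envelope sketch. The figure of roughly 5 microseconds per kilometer of fiber propagation is standard (light in glass travels at about two-thirds of c); every distance and per-hop handling figure below is an illustrative assumption, not a measurement.

```python
# A back-of-envelope latency budget for the metro-plus-optical-region
# model. All distances and per-hop delays are illustrative assumptions.

FIBER_US_PER_KM = 5.0  # ~5 microseconds per km of fiber propagation

def path_latency_us(segments_km, packet_hops, per_hop_us):
    """Propagation delay plus electrical packet-handling delay."""
    propagation = sum(segments_km) * FIBER_US_PER_KM
    handling = packet_hops * per_hop_us
    return propagation + handling

# User to metro (40 km), metro to metro via optical regional switching
# (1,000 km), metro to user (40 km). Packet processing happens only at
# the two metro points; the regional hop is optical, so no queuing there.
metro_model = path_latency_us([40, 1000, 40], packet_hops=2, per_hop_us=50)

# Same distance through a congested packet backbone: say 8 router hops
# at ~1 ms of queuing and handling each (an assumed figure, but queuing
# under load can easily add milliseconds per hop).
backbone_model = path_latency_us([40, 1000, 40], packet_hops=8,
                                 per_hop_us=1000)

print(f"Metro/optical model: {metro_model / 1000:.1f} ms one way")
print(f"Congested backbone:  {backbone_model / 1000:.1f} ms one way")
```

The takeaway is that once regional transit is optical, path latency approaches the propagation floor of the fiber itself; the electrical hops, and especially their queues under congestion, are what the backbone-free design removes.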
We’re not there yet, obviously, but it does appear that a metro focus, combined with regional optical concentration, could transform global networks into something with much lower latency and much greater cost efficiency. I don’t think that hurts current packet-network vendors, because whatever they might lose in packet-core opportunity, they’d gain in metro opportunity. But it does say that a metro focus is essential in preparing for the shift I’ve outlined; otherwise, good quarters for fiber players like Ciena could come at the expense of the packet-equipment players.