According to Light Reading, 6G is getting messy. Well, 5G is messy too, and I think it’s time to accept a revolutionary truth: we’ve passed the point where traditional international standards processes like the 3GPP’s work. We need to think differently, totally differently, about how to evolve networks and services.
Let’s start with an obvious point. Wireless generations, the “Gs” we always read about and that include both 5G and 6G, are the wrong way to go about things. 5G standards have taken over a decade, and optimists hope that 6G could follow in another. This, in the era of the Internet? Nothing we know about tech at the start of a given decade has much value by the end of it. Mobile and other services should advance by gradual evolution, not by successive, obsolete-before-they-take-hold revolutions.
The next question, also obvious, is how the gradual evolution should progress. We all know the answer to that one: software. The consumption side of network services is totally software-dependent, and we all know that software has a short development cycle compared to hardware. Applications can often be deployed in less than a year, and changes to applications in a matter of days. We have a whole science, sometimes called “rapid development” and sometimes “CI/CD” for “continuous integration/continuous deployment,” that aims for a short turnaround between business needs and the software to support them.
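To make that loop concrete, here’s a toy sketch of a CI/CD pipeline in Python. Every name in it is an invented placeholder rather than any real tool’s API, but the shape (test, build, and deploy on every change) is the whole idea.

```python
# A toy CI/CD loop. All names are invented placeholders, not a real
# tool's API; the point is the automated test-build-deploy cycle.

def run_tests(commit: str) -> bool:
    """Stand-in for running the full test suite against a commit."""
    print(f"testing {commit}")
    return True  # pretend the suite passed

def build_artifact(commit: str) -> str:
    """Stand-in for building a deployable image from the commit."""
    return f"image:{commit[:7]}"

def deploy(artifact: str) -> None:
    """Stand-in for rolling the artifact into production."""
    print(f"deploying {artifact}")

def on_commit(commit: str) -> None:
    """The whole pipeline: fail fast on red, ship automatically on green."""
    if run_tests(commit):
        deploy(build_artifact(commit))

on_commit("3fa9c2e81b04")  # hypothetical commit hash; every merge triggers this
```

Real pipelines add staging, canaries, and rollback, but the turnaround logic is exactly this short.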
Software is also what’s responsible for virtualization and cloud computing, the second concept being an application of the first. Virtualization and the cloud defined an ongoing evolution of how we build and deploy software applications for maximum efficiency and availability. There is probably no space in all of technology that’s progressed as far, and as fast, as these two areas.
The 5G evolution we’re in at the moment, defined (as all mobile evolutions are) by the 3GPP, recognized that software and virtualization were the keys, and so 5G was the first of the 3GPP standards to really be based on function virtualization. It was a noble thought, a noble effort, and largely a failure, and we can see some of the effects of that failure today.
First, virtualization in 5G is linked to ETSI Network Functions Virtualization (NFV). NFV was a 2013 project that’s still churning on, but what it produced was a box-centric vision of virtualization and software-centric network features. There was no real knowledge of the public cloud involved in the NFV specifications; the “functional diagram” translated into a set of flows between monolithic elements, with no provision for incorporating even 2013-era cloud awareness. Since 2013, of course, the cloud and virtualization have continued to evolve quickly, and so NFV is now hopelessly behind.
You can see that in the Open RAN (or O-RAN) initiative. O-RAN took a key element of the 5G RAN that the 3GPP had defined as a typical virtual box, and turned it into a cloud-ready, perhaps even cloud-native, software application. Had the 5G standards truly exploited virtualization and the cloud, the O-RAN work wouldn’t have been necessary.
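The gap between a “virtual box” and a cloud-native function is easy to show in miniature. The following is my own hypothetical Python sketch, not actual RAN code or any 3GPP/O-RAN interface: the box traps state inside a single instance, while the cloud-native version externalizes state so any replica can serve any request.

```python
# Hypothetical sketch of the architectural difference; not RAN code.

# "Virtual box" style: state is trapped inside one instance, so there
# can be only one of it. Hosting it in a VM changes nothing important.
class MonolithicBox:
    def __init__(self) -> None:
        self.sessions: dict[str, list[str]] = {}  # state lives in the box

    def handle(self, session: str, msg: str) -> None:
        self.sessions.setdefault(session, []).append(msg)

# Cloud-native style: workers are stateless and state lives in a shared
# store (a real system would use a database or distributed cache), so
# the platform is free to scale, move, or replace replicas at will.
SHARED_STORE: dict[str, list[str]] = {}  # stands in for an external store

def handle_stateless(session: str, msg: str) -> None:
    SHARED_STORE.setdefault(session, []).append(msg)
```

The second form is what lets a cloud scheduler treat the function as disposable and horizontally scalable, which is the property NFV’s monolithic flows never had.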
Then there’s the recent story about the security flaw in 5G network slicing and function virtualization. The article quotes the people who identified the problem as saying that network bodies that adopt software and cloud technology have to change their mindset. Darn straight, but the truth is that such a change isn’t possible within the decade-at-a-time generational evolution that’s common in telecom. Cloud people wouldn’t have done 5G this way, wouldn’t have left holes for other projects to fill, and wouldn’t have created the security problems the 3GPP left behind.
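To be clear about the class of problem, here’s my own hypothetical illustration, not the researchers’ finding or any 3GPP interface: a valid credential alone shouldn’t be enough, because the caller has to be authorized for the specific slice it’s asking about.

```python
# Hypothetical slice-authorization check; all names and the table are
# invented for illustration, and no real 5G API is modeled here.

AUTHORIZED_PAIRS = {
    ("nf-analytics-1", "slice-iot"),
    ("nf-billing-2", "slice-broadband"),
}

def serve_slice_data(caller: str, slice_id: str) -> str:
    # Being an authenticated neighbor isn't enough; bind the caller to
    # the particular slice before returning anything.
    if (caller, slice_id) not in AUTHORIZED_PAIRS:
        raise PermissionError(f"{caller} is not authorized for {slice_id}")
    return f"data for {slice_id}"
```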
The truth is often unpleasant, and it’s going to be unpleasant in the case of 5G/6G, but here it is. We have to throw out the 3GPP process completely, and in fact throw out most telecom standards processes completely. Everything these days is a software and cloud project, period, and it’s time to accept what that means: software and cloud project practices have to be the foundation of everything. The old stuff is obsolete.
You’re probably thinking at this point that I’m going to say that open-source software project structures are the answer, and you’re half-correct. Open-source software is the answer, but we have had many bad open-source software projects in the networking space. The NFV management and orchestration (MANO) stuff is proof that open, collaborative development doesn’t overcome poor design. Same with ONAP. We have actually tried open-source a number of times in the telco world, and almost all the attempts have failed.
How do things go wrong so consistently? Knowing that is probably key to making them go right, so let’s start there. In both NFV and ONAP, the problem IMHO was a series of truths that led to a bad place.
Truth one. Telco people are not cloud people. That’s particularly true of people involved in telco standards. They don’t automatically think in cloud terms. Networks, through most of their careers, were built by connecting boxes, so they’re box people. When box people do what they think is “cloud”, they do collections of boxes (virtual or real) that don’t move the architectural ball at all.
Truth two. Requirements insidiously morph into specifications. Technical people of all sorts are problem-solvers. Ask them to set out requirements, and the temptation to define those requirements in terms of a presumed implementation is irresistible. A list of functional requirements becomes a functional flow chart. A flow chart becomes a software implementation model.
Truth three. You have to design for the cloud to create cloud software. I was heavily involved in NFV from the first US meeting in the spring of 2013, and I fought from that meeting forward to bring the cloud into NFV. In retrospect, it was hopeless by July, because box momentum was by then irreversible. It could have worked, though, had any major vendor or major operator gotten decisively behind the effort.
That brings up truth four. Network vendors dominate network evolution, and they’re protecting their own short-term interests. Network operators tend to look to the vendors, because they’re accustomed to building networks from commercially available technology, not promoting technology from concept to commercial availability. The network vendors, having incumbent positions and quarterly sales to project, aren’t in a hurry to transform anything.
That lays out my vision. What 5G needs, what transformation needs, what we need to stop the “Gs” and move network software into the present (much less the future), is competition among vendor visions. What the telco world should do is mandate that its critical software be open-source, then invite cloud software vendors to propose an open-model architecture for its review. The vendors make their pitches, and the winner is the player who leads the implementation project. The project is open-source; other vendors and interested parties are invited to contribute, or to stand aside and leave the project in the control of the winning designer, which of course would be a competitive disaster for them.
A good, complete 6G cloud architecture would take a team of three to five architects about six months to create. Add in three months for review of competing approaches. Within another fifteen months we’d have code, and that two-year period wouldn’t have been long enough for a standards body to accomplish anything other than draw diagrams that would unreasonably constrain the very future they were trying to promote.
The reason this would go so fast is that cloud architects would see telecom projects as problems to be solved in the context of current cloud tools and techniques. A major part of the work is already done. It’s just a matter of recognizing that, and recognizing what does it. We even have cloud software vendors with special initiatives for service providers, but all too often these initiatives are handicapped because the operators expect them to include many of the NFV and ONAP concepts that were wrong from the first.
I’ve said before that cloud providers are a threat to network operators, and some readers posted their disagreements on LinkedIn. I hold to that position. No, cloud providers won’t become access competitors, but they’ll become service competitors who threaten to tap off any high-margin opportunity. If operators, and network vendors, don’t want that to happen, they’ll need to get their own houses in order and present a vision of the future that actually solves the problems, not one that just blows kisses in the appropriate directions.