Can Openness be Merchandised, Even in Networks?

Everyone loves open technology, except of course the vendors who have to compete with it. Still, even vendors seem to embrace it, or at least rely on it, in some areas, and there’s growing interest in having open technologies drive us into areas where innovation seems to have stalled out. With all of these positives, though, we have our share of negatives. One is that “a mule is a horse designed by committee,” a second is that generalized tools can be more expensive than specialized ones, and a third is that you can’t monetize something you give away. Can we overcome these barriers, and are there more waiting for us?

There’s an old adage that says “the IQ of any group of people is equal to the IQ of the dumbest member, divided by the number in the group.” Putting this in a more politically correct way: groups require cooperative decision-making, which demands compromises to accommodate everyone in the group, and that gets harder as the group grows. Anyone who’s been involved in open-source projects or standards development has seen this, but we seem powerless to eradicate it.

Some have suggested to me that the solution is to have a single person or company launch something and then “open” it, meaning the broad membership inherits an approach set by a more controlled group of people. I’ve seen that work and also seen it fail, so I don’t think that’s the solution. The real problem, from my own experience, is that projects of any sort that get off to a bad start are very difficult to turn around. A large group, having necessarily committed a large effort, doesn’t want to invalidate its collective work. You’ve got to start right to go right.

How, though? My best answer is that an open project should begin by having a single insightful architect frame the approach. Of course, identifying who that might be is a challenge in itself. An alternative is to create a small number of sub-groups (no more than four) and have each contribute a high-level approach model, which for software would mean an “architecture”. The full group would then discuss the advantages and disadvantages of each and pick the most promising model. Next, the full group takes that idea to the next level, and if it’s still going well at that point, a commitment to the approach is formalized. If not, one of the other models is picked and perhaps refined based on the lessons learned.

What this seems to do is eliminate a problem that’s dogged the footsteps of virtually every network-related project I’ve worked on: approaches biased by old-think. When there’s a model for something already in place, as there is with networks, there’s a tendency to think of the future in terms of the present, the familiar. I’ve seen three highly resourced new-network-model projects toss away much of their potential value through that fault; ONAP and NFV were two of them, by the way. None ever recovered, which is why it’s critical not to let the problem harden into a barrier at the start.

The second issue could be called “the curse of generalization”. NFV had a bit of this from the first, with the original goal being to transform networks by hosting virtual functions on general-purpose servers. Well, general-purpose servers are not the right platform for the most demanding of network functions, and perhaps not for very many of those that live in the data plane. White boxes with specialized chips are better, and recently it’s been reported that the cost of general-purpose CPU chips is so much higher than the cost of a specialized and even proprietary CPU that it prices “open” devices out of the market.
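As a purely back-of-the-envelope illustration of why that matters, the short Python sketch below computes capital cost per Gbps of forwarding capacity for a general-purpose server versus an ASIC-based white box. Every figure in it is a made-up placeholder, not a real price or benchmark; the structure of the comparison is the point, and readers should substitute their own numbers.

```python
# Back-of-the-envelope comparison of cost per Gbps of data-plane forwarding.
# All inputs are hypothetical placeholders, NOT real prices or measured
# throughput; plug in your own figures to redo the comparison.

def cost_per_gbps(device_cost_usd: float, forwarding_gbps: float) -> float:
    """Capital cost in dollars per Gbps of sustained forwarding capacity."""
    return device_cost_usd / forwarding_gbps

# Hypothetical placeholder inputs:
server_cost_per_gbps = cost_per_gbps(device_cost_usd=8_000, forwarding_gbps=100)
whitebox_cost_per_gbps = cost_per_gbps(device_cost_usd=6_000, forwarding_gbps=3_200)

print(f"general-purpose server: ${server_cost_per_gbps:,.2f} per Gbps")
print(f"ASIC-based white box:   ${whitebox_cost_per_gbps:,.2f} per Gbps")
print(f"ratio: roughly {server_cost_per_gbps / whitebox_cost_per_gbps:.0f}x in favor of the white box")
```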

This problem is more insidious than the chip example, though. Software design is a delicate balance between generalization, which widens the scope of what the software can do, and specialization, which supports the initial target mission most efficiently. We see in today’s market a tendency to reach for “cloud-native” and “microservice” models because they’re more versatile and flexible, but in many cases they’re also alarmingly inefficient and costly. I’ve seen examples where response times for a general solution increased by a factor of 25 and costs quintupled. Not a good look.
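To make the trade-off concrete, here’s a minimal sketch in Python, assuming nothing about any particular product: it compares calling a trivial function in-process with invoking the same logic over a local HTTP hop, the way a decomposed microservice would be called. The port, handler, and workload are all hypothetical, and the absolute timings depend entirely on the machine; the point is that every service boundary adds connection and serialization overhead that an in-process call never pays, and that overhead multiplies as a request crosses more services.

```python
# Illustrative sketch only: in-process call vs. the same logic behind a
# local HTTP hop (a stand-in for one microservice boundary). Hypothetical
# port and workload; absolute numbers will vary widely by machine.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def work(x: int) -> int:
    return x * 2  # trivial stand-in for "business logic"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = str(work(int(self.path.lstrip("/")))).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the benchmark output clean

# Start the "microservice" on a hypothetical local port.
server = HTTPServer(("127.0.0.1", 8808), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 500

start = time.perf_counter()
for i in range(N):
    work(i)
in_process = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    urllib.request.urlopen(f"http://127.0.0.1:8808/{i}").read()
over_http = time.perf_counter() - start
server.shutdown()

print(f"in-process: {in_process * 1e6 / N:9.1f} µs per call")
print(f"local HTTP: {over_http * 1e6 / N:9.1f} µs per call "
      f"(~{over_http / in_process:.0f}x, before any real network is involved)")
```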

These are both major concerns for open-model work of any sort, but the last of the three may be the most difficult to address. Something is “open” if it’s not “proprietary”, so open technology isn’t locked to a specific supplier; it’s free to be exploited by many. Given that, how does anyone make money with it? In the old days of open-source, companies took source code and built and supported their own applications. Even this approach posed challenges regarding how participants could achieve a return for their efforts, without which many key contributors might not sign on. Add in the growing interest in open-source tools among less technically qualified users, and you quickly get support problems that free resources can’t be expected to resolve.

We seem to have defined a workable model to address this problem in the server/application space: the “Red Hat” model of making open-source a business by selling the support. However, the model fails if the total addressable market for a given open element isn’t large enough to make it profitable for the provider. Still, it’s worked for Nokia in O-RAN; their quarter disappointed Wall Street, but they beat rival Ericsson, which is less known for open components.

The big question that even this hopeful truth leaves on the table is whether a broad-based network change could be fostered by open network technology. O-RAN hasn’t been exactly the hare in the classic tortoise-vs-hare race, and the broader networking market has a lot of moving parts. But Red Hat surely supports a hosting ecosystem that’s even broader, so are we just waiting for a hero to emerge in the open network space? I know both Red Hat and VMware would like to be just that, and maybe they can be. If VMware were to be acquired successfully by Broadcom, the combination might jump-start the whole idea.