Fixing the Internet: Nibbles, Bites, Layers, and Parallels

The recent Facebook outage, which took down all the company’s services and much of its internal IT structure, certainly frustrated users, pressured the company’s network operations staff, and alarmed Internet-watchers. The details of the problem are still sketchy, but Cloudflare has published a good account of how it evolved. Facebook said that human error was at the bottom of the outage, but the root cause may lie far deeper, in how we’ve built the Internet.

Most networking people understand that the Internet evolved from a government research and university project. The core protocols, like TCP and IP, came from there, and if you know (or read) the history, you’ll find that many aspects of those early core protocols have proven useless or worse in today’s Internet. Some have been replaced outright; others have merely evolved.

If something proves to be wrong, making it right is the obvious choice, but it’s complicated when the “something” is already widely used. When the World Wide Web came along in the 1990s, it created the first credible consumer data service and quickly built a presence in the lives of both ordinary people and the companies and resources they interact with. That success made it difficult to make massive changes to the widely used elements of those core protocols. We face the consequences every day.

Most of the Internet experts I talk with would say that if we were developing protocols and technologies for the Internet today, from scratch, almost all of them would be radically changed. The inertia created by adoption makes this nearly impossible. Technology and Internet business relationships are interwoven with our dependence on the Internet, and to liken the Internet to a glacier understates the reality. It’s more like an ice age.

BGP appears to be at the root of the Facebook problem, and most Internet routing professionals know that BGP is complicated, (according to many) almost impossibly so. The Domain Name System (DNS), which translates human-readable domain names into IP addresses, also played a part. BGP is the protocol that advertises routes between Internet Autonomous Systems (ASes), but it’s taken on a lot of other roles (including roles in MPLS) over time. It’s the protocol that many Internet experts say could benefit from a complete redesign, though they admit that something that radical might be totally impossible.
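To make that dependency concrete, here’s a toy Python sketch of the relationship (my own illustration, using made-up documentation addresses, not anything from the outage reports). The point is simply that a DNS server is only useful if BGP still advertises a route to it; withdraw the routes and name resolution fails along with everything it names:

```python
# Toy illustration (not real BGP or DNS code) of how a route withdrawal
# can take DNS down with it. All prefixes and names here are invented.
import ipaddress

# Prefixes this router has learned via BGP advertisements.
routing_table = {
    ipaddress.ip_network("192.0.2.0/24"),     # hypothetical prefix hosting the DNS server
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical prefix hosting the service
}

dns_server = ipaddress.ip_address("192.0.2.53")
dns_zone = {"service.example": ipaddress.ip_address("198.51.100.7")}

def reachable(addr):
    """True if some advertised prefix still covers this address."""
    return any(addr in prefix for prefix in routing_table)

def resolve(name):
    # We need a route to the DNS server before we can resolve anything.
    if not reachable(dns_server):
        raise RuntimeError("no BGP route to the DNS server")
    return dns_zone[name]

print(resolve("service.example"))  # works while routes are advertised
routing_table.clear()              # the withdrawal: routes disappear
try:
    resolve("service.example")
except RuntimeError as err:
    print(err)                     # DNS fails too, and the service is unfindable
```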

It’s demonstrably not “totally impossible”, but it may be extraordinarily complicated. SDN, in its “true” ONF OpenFlow form, was designed to eliminate adaptive routing and centralize route control. Google has used this principle to create what appears from the outside to be an Autonomous System of routers but is actually an SDN network. The problem is that to get there, Google had to surround the SDN core with a BGP proxy layer so that its network would interoperate with the rest of the Internet. Could another layer of SDN replace that proxy, letting inter-AS communication over some secure channel replace BGP? Maybe.
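To show the shape of that centralized model, here’s a minimal Python sketch. The topology is invented and bgp_proxy_announce is a stand-in stub, not Google’s actual design or any real BGP API; the idea is just that one controller computes every internal path, and only the edge layer speaks BGP to the outside world:

```python
# Minimal sketch of the centralized-control idea behind OpenFlow-style SDN:
# one controller computes every path; switches just follow instructions.
import heapq

# Internal topology as weighted links (hypothetical).
links = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 5}
graph = {}
for (u, v), w in links.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def shortest_paths(src):
    """Controller-side Dijkstra: the single, central source of routing truth."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist

def bgp_proxy_announce(prefix, edge_node):
    # Stand-in for the proxy layer: only here does the SDN domain
    # present itself to the rest of the Internet in BGP terms.
    print(f"BGP UPDATE: {prefix} reachable via {edge_node}")

print(shortest_paths("a"))                     # {'a': 0, 'b': 1, 'c': 3}
bgp_proxy_announce("203.0.113.0/24", "c")      # the edge speaks BGP outward
```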

Then there’s DARP, not the defense agency DARPA (or its earlier form, ARPA), but the Distributed Autonomous Routing Protocol. DARP was created by the startup Syntropy, which has a whole suite of solutions to current Internet issues, including some relating to fundamental security, fostering what’s called “Web3”. DARP uses Syntropy technology to build a picture of Internet connectivity and performance. It’s built as a kind of parallel Internet that looks down on the current Internet and provides insights into what’s happening. However, it can also step in to route traffic when it has superior routes available. This means the current Internet could be made to evolve to a new state, or that it could use DARP/Syntropy information to drive route decisions in legacy nodes.
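Here’s a minimal sketch of that relay idea in Python, with invented node names and latencies. It’s my illustration of overlay routing in general, not Syntropy’s actual algorithm: measure the direct path, measure paths through intermediate nodes, and divert traffic only when a relay wins:

```python
# Hedged sketch of overlay relay routing: ride the existing Internet,
# but detour through an intermediate node when measurement says it's faster.
# The latency figures and node names below are made up for illustration.
latency = {
    ("ny", "sg"): 240,   # congested direct path, in ms
    ("ny", "fra"): 80,
    ("fra", "sg"): 110,
}

def best_route(src, dst, nodes):
    """Return the cheaper of the direct path and any one-relay path."""
    best = (latency.get((src, dst), float("inf")), [src, dst])
    for relay in nodes:
        if relay in (src, dst):
            continue
        via = (latency.get((src, relay), float("inf"))
               + latency.get((relay, dst), float("inf")))
        if via < best[0]:
            best = (via, [src, relay, dst])
    return best

cost, path = best_route("ny", "sg", {"ny", "fra", "sg"})
print(f"{' -> '.join(path)} at {cost} ms")   # ny -> fra -> sg at 190 ms
```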

The Internet’s security issues go beyond protocol problems like BGP’s, of course. Many feel that we need to rethink the Internet in light of current applications and its broad role in our lives and businesses. The Web3 initiative is one result of that. It’s explained HERE and hosted HERE, and it has the potential to revolutionize the Internet. Web3 has a lot of smarts going for it, but working against it is the almost-religious dedication many in the Internet community have to the protocol status quo. The media also tends to treat anything related to changing the Internet as reactionary at best and sinister on average.

The scope of Web3 is considerable: “Verifiable, Trustless, Self-governing, Permissionless, Distributed and robust, Stateful, and Native built-in payments,” to quote my first link. There’s a strong and broad reliance on token exchanges, including blockchain and even a nod to cryptocurrency. The first of my two references above offers a good explanation, and I don’t propose to do a tutorial on it here, just to comment on its mission and impact.

There is little question that Web3 would fix a lot of the issues the Internet has today. I think it’s very likely that it would also create some new ones, simply because some players in something as enormous and important as the Internet are going to respond to change as a threat, and will try to game the new as much as they have the old. The fact that Web3 has virtually no visibility among Internet users, and only modest visibility within the more technical Internet/IP community, suggests that the concept’s biggest risk is simply becoming irrelevant. People will try band-aids before they consider emergency care.

That’s particularly true when the band-aid producers are often driving the PR. Network security is now a major industry, and the Internet is creating a major part of the risk and contributing relatively little to eliminating it. We layer security on top of things, and that process creates an endless opportunity for new layers, new products, new revenues. This has worked to vendors’ benefit for a decade or more, and they’re in no hurry to push an alternative that might make things secure in a single stroke. In any event, any major change in security practices would tend to erode the value of being an incumbent in the space. Since most network vendors who’d have to build to Web3 are security product incumbents, you can guess their level of enthusiasm.

They have reason to be cautious. Web3 is so transformative it’s almost guaranteed to offend every Internet constituency in some way. The benefits of something radically new are often unclear, but the risks are usually very evident. That’s the case with Web3. I’ve been involved in initiatives to transform some pieces of the Internet and its business model for decades, and even little pieces of change have usually failed. A big one either sells based on its sheer power, or makes so many enemies that friends won’t matter.

Doing nothing hasn’t helped Internet security, stability, or availability, but doing something at random won’t necessarily help either, and in fact could make things worse. I see two problems with Web3 that the group will have to navigate.

The first problem is whether parallel evolution of Internet fundamentals can deliver more than layered evolution. When does the Internet have too many layers for users/buyers to tolerate? When does a cake have too many layers? When you can’t get your mouth open wide enough to eat it. The obvious problem is that a single big layer is as hard to bite into as a bunch of smaller ones. Things like the Facebook problem should be convincing us that our jaws are in increased layer jeopardy, and it may be time to rethink things. The trick may be to make sure the parallel-Internet concepts of Web3 actually pay off for enough stakeholders, quickly enough, to catch on, rather than die on the vine.

The second problem is the classic problem with parallelism: how much can it deliver early on, particularly to users who are still dependent on the traditional Internet? It seems to me that Web3 could deliver value without converting a global market’s worth of user systems, but more value once most browsers embrace it. Is the limited value enough to sustain momentum, to advance Web3 to the point where popular tools would support it? I don’t know, and I wonder if that point has been fully addressed.

My view here is that Web3 is a good model, but the thing that keeps it from being a great model is that it bites off so much that chewing isn’t just problematic, it’s likely impossible. What I’d like to see is something designed to add value to security and availability, rather than something that tries to solve every possible Internet problem. The idea here is good, but the execution seems to me just too extreme.