To Control Complexity, Abstract It!

What is the biggest challenge in tech?  It’s a question that unites service providers and enterprises, the network, the data center, and the cloud.  According to the people I’ve chatted with over the last quarter of 2020, the answer is complexity.  Both groups say that their need to address complexity grows out of existing and emerging technologies alike, and both believe that the emerging technologies are increasing complexity rather than reducing it.

Complexity issues manifest themselves in a number of ways, some obvious and some less so.  The obvious, and number-one, impact of complexity is increased operations costs.  Companies say that both their staff size and the qualifications their workers need have grown, and both drive up operations costs.  Ranking second is the transition to a new technology, the process of adopting and operationalizing something different.  A less-obvious problem is assessing new technologies properly in the first place.  Companies say they often stay with existing stuff simply because they can’t gauge the value, or the impact, of something new.

If the challenges of complexity are fairly easy to identify, the path that leads to it and the remedies for it aren’t.  A full third of enterprise network and IT professionals think that vendors deliberately introduce complexity to limit buyer mobility.  My view is that this is less a factor than many believe, and that we have to understand how things got this complex if we want to get out of the box complexity creates.

Modern network and IT complexity arises from the fundamental goal of optimizing resources.  By “resources” here, I mean capital equipment.  Both enterprises and service providers spend more on operations than on the technology elements themselves, but neither group has paid nearly the attention to operations costs that they’ve paid to capex.  CIOs, for example, have consistently told me that their capital budgets get more scrutiny than their operating budgets, and that the justification for a capital project focuses on the capital side, treating operations impacts as a secondary issue.

In networking, the capacity of optical trunks is managed by the packet layer, which also provides connectivity by threading routes through nodal points that can cross-connect traffic.  Today, more is spent here optimizing fiber capacity than is spent creating it.  In computing, virtualization allows more effective use of server capacity, but adds layers of software features that have to be managed.  We’re not at the point where server management costs more than servers in a capital sense, but many enterprises say that platform software and operations costs related to it are already approaching server costs.

Why does this happen?  There are three reasons, according to both service providers and enterprises.  The biggest is that new capabilities are introduced by layering on a new technology rather than by enhancing an existing one.  Often, a new layer means new devices (security is a good example), and everyone knows that just having more things involved makes everything more complex.  Next in line is that functionality drives adoption, which induces vendors to push things that open new business cases.  That means less attention is given to how functionality and capabilities evolve, and less opportunity to optimize that evolution.  The third reason is that vendors and buyers both gravitate to projects with limited scope, to limit cost and business impact.  That creates the classic tunnel-vision problem and reduces the incentives and pressures to think about cost and complexity overall.

The facile answer to the “How do we fix this?” question is to do better architectural planning.  If careful design creates a solution that can evolve as problems and technology evolve, that solution can then optimize the way technology elements are harnessed for business purposes.  Unfortunately, neither buyer nor seller has been able to adopt this approach, even though it’s shown up as a theoretical path to success for three or four decades.  With software and hardware and practices in place to support decades of growing disorder, we can’t expect to toss everything and start over.  Whether it’s the best approach or not, we have to fall back on something that can be adopted now and have some short-term positive impact.

That “something”, I think, is a form of virtualization, a form of abstraction that’s commonly called “intent modeling”.  A virtual machine is an abstraction that behaves like a bare-metal server, but is really implemented as a tenant on a real server.  A virtual network looks like a private network but is a tenant on a public network.  Virtualization has been a major source of complexity, because it introduces new elements to be acquired, integrated, and managed.  Properly applied, it could be an element in the solution to its own problem.  How that happens is through intent modeling.

An intent model is a functional abstraction that represents an opaque, unspecified implementation of a set of capabilities.  Think of it as a robot, something with human form and behavior but implemented from hidden technical pieces.  An autonomous vacuum, for example, cleans the floor, which is a simple external property.  The average user would have no clue how it actually does that, but they don’t have to, because the “how” is hidden inside the abstraction.
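To make that concrete, here’s a minimal sketch, in Python, of what an intent model looks like to whatever consumes it.  The names (FloorCleaner, RobotVacuum, clean) are purely illustrative, not any vendor’s or standard’s API; the point is that the consumer binds to the intent, and everything behind it is opaque and replaceable.

```python
# Illustrative sketch of the intent-model idea; all names are hypothetical.
from abc import ABC, abstractmethod


class FloorCleaner(ABC):
    """The intent model: what the service does, not how it does it."""

    @abstractmethod
    def clean(self, room: str) -> bool:
        """Clean the named room; return True if the intent was met."""


class RobotVacuum(FloorCleaner):
    """One opaque implementation; everything in here is hidden detail."""

    def clean(self, room: str) -> bool:
        self._map_room(room)        # sensors, navigation, firmware...
        self._run_cleaning_pass()   # motors, suction, battery management...
        return self._verify_coverage()

    # Internal complexity the consumer never sees or manages.
    def _map_room(self, room: str) -> None: ...
    def _run_cleaning_pass(self) -> None: ...
    def _verify_coverage(self) -> bool:
        return True


def tidy_up(cleaner: FloorCleaner) -> None:
    # Operations logic is written against the intent, so any compliant
    # implementation (robot, cleaning service, ...) can sit behind it.
    if not cleaner.clean("kitchen"):
        print("intent not met -- remediation belongs inside the model")


tidy_up(RobotVacuum())
```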

I’ve blogged often about how intent models can be applied to operations automation, but they have a direct impact on all the aspects of complexity, and to understand why, we need only consider those vacuuming robots.

How many robot vacuums could we expect to sell if we offered only directions on how to build one, or even a kit from which such a thing could be built?  What people want is a clean floor, not an education in applied electrical engineering.  The first secret in managing complexity is to subduct it into intent models, where it’s safely separated from functionality.  We see enhanced robots (like the dancing ones shown on a popular Internet video) as added features, not as additional complexity, because most people don’t consider or even care about how the features are delivered.  A tomato is something you slice for your salad, and the complexities of growing it aren’t in your wheelhouse.

The next obvious question is why, if intent models and virtualization are such obvious winners, they’ve not exploded into prominence in the market.  There are several answers to this, but the most relevant are the economics of presentation and the granularity of adoption.

If you’re a vendor, you’re part of a two-sided ecosystem of buyer and seller.  You’ve adapted your product strategy, from design through sales and support, to the mindset of the prospective buyer.  Here, the three reasons for complexity growth that I introduced earlier apply.  Simply put, you’ve sold your stuff based on what your buyer wants to hear, and your buyer is used to complexity.  They might even owe their jobs to it.  Selling “intent-modeled Kubernetes” to a certified Kubernetes expert is going to be harder than selling it to the CIO, but the Kubernetes expert is the one writing the RFP.

In any event, the value of intent modeling and virtualization as a means of reducing complexity is only as broad as the scope over which you apply the techniques.  It would be practical (if not easy) to adopt intent modeling with a new project, but most new projects have limited scope, and so the benefits would be limited too.  Having one assembled robotic vacuum among a host of kits from which the others must be built wouldn’t make the combination particularly easy to sell, or reduce the skills needed to assemble all the others.  But “scope creep” is a great way to lose project approvals, as anyone who’s tried to push a project through the approval cycle well knows.

The fact is that buyers aren’t likely to be the drivers behind intent-model/virtualization adoption; it has to be the sellers.  It would be unlikely, at this stage in the evolution of intent models, to have a general solution applicable to all possible technology elements.  It would be possible for a vendor to introduce an intent-modeling approach linked to a specific area (like networking), to a specific technology (like AI/ML), or to both.  It would be optimal if a leading vendor in a given space were to “intent-ify” its own solutions.

The big virtualization vendors, meaning VMware, Red Hat, and HPE, could well field an offering in this space, either broadly across one of their existing technologies or in a narrow area (like 5G Open RAN).  Network vendors could do the same for the data center, for 5G, or broadly.  Juniper, which recently acquired Apstra, a data-center intent-based-networking player, might be a promising source of something here.

A final possibility is that a standard like OASIS TOSCA could be the basis for a solution.  TOSCA can be used to define and architect intent models, but it would be a fair amount of work to produce even one such model, and it would likely require extending TOSCA itself.  Thus, I think this avenue of progress will probably have to wait for a vendor-driven approach to build interest in intent modeling.
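For a sense of the shape such a model would take, here’s a rough sketch of the external contract a TOSCA-style intent model might capture, written as plain Python data structures rather than TOSCA’s actual YAML grammar.  The node-type name, properties, and lifecycle artifacts are hypothetical; what matters is that the consumer sees only exposed properties and promised capabilities, while the lifecycle implementations behind them remain opaque and swappable.

```python
# Rough sketch of a TOSCA-style intent model's external contract,
# expressed as plain Python rather than TOSCA YAML; all names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class IntentModelType:
    name: str                                                  # e.g. "example.nodes.SiteToSiteVPN"
    properties: dict[str, str] = field(default_factory=dict)   # what the buyer asks for
    capabilities: list[str] = field(default_factory=list)      # what the model promises
    lifecycle: dict[str, str] = field(default_factory=dict)    # the hidden "how"


vpn_intent = IntentModelType(
    name="example.nodes.SiteToSiteVPN",
    properties={"bandwidth": "100 Mbps", "sites": "3"},
    capabilities=["connectivity", "latency-sla"],
    lifecycle={
        # Implementation references are opaque to the consumer; swapping
        # them out doesn't change the model's external face.
        "create": "artifacts/deploy_vpn.yml",
        "delete": "artifacts/teardown_vpn.yml",
    },
)

print(vpn_intent.name, "promises", vpn_intent.capabilities)
```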

Something is eventually going to be done here, and I think intent modeling and virtualization will be a piece of it.  I’d like to predict that it will happen in 2021, and I think there’s a good chance that the first vendor positioning will indeed come this year.  Where one vendor steps, others will likely want to mark territory themselves, and that might finally move us toward addressing complexity.