The dominant compute model of today is based on the IBM PC, a system whose base configuration when announced didn’t even include floppy disk drives. It would seem that all the changes in computing and networking since then would drive a different approach, right? Well, about eight years ago, HPE (then HP Labs) proposed what it called “The Machine”, a new computer architecture built around a development that makes non-volatile memory (NVM) both fast and inexpensive. Combine this with multi-core CPUs and optical coupling of elements and you have a kind of “computer for all time”.
Today, while we have solid-state disks, that NVM is far slower than traditional memory, which means you still have to deal with a two-tier storage model (memory and disk). Under the new paradigm, NVM would be fast enough to support traditional memory missions and would of course be a lot faster than flash or rotating media for storage missions. It’s fair to ask what the implications could be for networking, but getting the answer will require an exploration of the scope of changes The Machine might generate for IT.
One point that should be raised is that there aren’t necessarily any profound changes at all. Right now we have three memory/storage options out there—rotating media, flash, and standard DRAM-style volatile memory. If we assume that the new memory technology is as fast as traditional volatile memory (which HPE’s material suggests is the case), then how it gets applied would likely be cost-driven, meaning it would depend on its price relative to DRAM, flash, and rotating media.
Let’s take a best-case scenario—the new technology is as cheap as rotating media on a per-terabit basis. If that were the case, then the likely result would be that rotating media, flash, and DRAM would all be displaced. That would surely be a compute-architecture revolution. As the price rises relative to the rotating/flash/DRAM trio, we’d see a price/speed-driven transition of some of the three media types to the new model. At the other extreme, if the new model were really expensive (significantly more than DRAM), it would likely be used only where the benefits of NVM that works at DRAM speed are quite significant. Right now we don’t know what the price of the stuff will be, so to assess its impact I’ll take the best-case assumption.
If memory and storage are one, then it makes sense to assume that operating systems, middleware, and software development would all change with respect to how they use both. Instead of the explicit separation we see today (which is often extended with flash NVM into storage tiers), we’d be able to look at memory/storage as seamless, perhaps addressed by a single petabyte address space. File systems and “records” would then look like templates and variables, or vice versa, or both would be supported by a new enveloping model.
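To make that a little more concrete, here’s a minimal sketch of the idea approximated with today’s tools: a memory-mapped file treated as a flat, persistent address space in which “records” are accessed like in-memory variables rather than through file I/O calls. The record layout and file name are illustrative assumptions of mine, not anything HPE has specified for The Machine.

```python
# Approximating "memory equals storage" with a memory-mapped file: persistent
# data is addressed directly as bytes in a flat address space instead of being
# read and written through explicit file I/O.
import mmap
import struct

RECORD_FMT = "i32s"                        # one int key plus a 32-byte value
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 36 bytes per record
NUM_RECORDS = 1024

with open("records.dat", "a+b"):
    pass                                   # make sure the backing file exists

with open("records.dat", "r+b") as f:
    f.truncate(NUM_RECORDS * RECORD_SIZE)  # size the "persistent address space"
    space = mmap.mmap(f.fileno(), NUM_RECORDS * RECORD_SIZE)

    def put(index, key, value):
        """Write a record by computing its address, as if it were a variable."""
        offset = index * RECORD_SIZE
        space[offset:offset + RECORD_SIZE] = struct.pack(RECORD_FMT, key, value)

    def get(index):
        """Read a record straight out of the address space; no read() call."""
        offset = index * RECORD_SIZE
        return struct.unpack(RECORD_FMT, space[offset:offset + RECORD_SIZE])

    put(0, 42, b"hello persistent memory")
    print(get(0))
    space.flush()   # on true fast NVM this persistence step would be implicit
    space.close()
```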
One obvious benefit of this in cloud computing and NFV is that the time it takes to load a function/component would be shorter. That means you could spin up a VNF or component faster and be more responsive to dynamic changes. Of course, “dynamic changes” also means you’d be able to spin up a new instance of a component faster when one is needed.
The new-instance point has interesting consequences in software development and cloud/NFV deployments. What happens today when you want to instantiate some component or VNF? You read a new copy from disk into memory. If memory and disk are the same thing, in effect, you could still do that and it would be faster than rotating media or flash, but wouldn’t it make sense just to use the same copy?
Not possible, you think? Well, back in the ‘60s and ‘70s, when IBM introduced the first true mainframe (the System/360) and programming tools for it, they recognized that a software element could have three modes—refreshable, serially reusable, and reentrant. Software that is refreshable needs a new copy to create a new instance. If a component is serially reusable, it can be restarted with new data without being refreshed, provided it’s done executing the first request. If it’s reentrant, it can be running several requests at the same time. If we had memory/storage equivalence, it could push the industry to focus increasingly on developing reentrant components. That concept still exists in modern programming languages, by the way.
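Here’s a small sketch of the distinction, in Python purely for illustration (the function and variable names are my own). The non-reentrant version keeps working state inside the component, so concurrent requests would collide and a new instance needs its own copy; the reentrant version keeps no internal state, so one copy can serve many requests.

```python
# Non-reentrant: working state lives inside the component itself, so a new
# "instance" needs its own fresh copy and concurrent requests would collide.
_buffer = []

def handle_non_reentrant(item):
    _buffer.append(item)          # state survives between calls
    return list(_buffer)          # result depends on who called earlier

# Reentrant: all state is passed in and handed back; nothing is kept inside
# the component, so one copy can serve any number of requests at once.
def handle_reentrant(buffer, item):
    return buffer + [item]        # the caller owns the state

# Two independent "requests" safely share the single reentrant copy.
print(handle_reentrant([], "request-a"))   # ['request-a']
print(handle_reentrant([], "request-b"))   # ['request-b']
```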
There are always definitional disputes in technology, but let me risk one by saying that, in general, a reentrant component is a stateless component, and statelessness is a requirement for RESTful interfaces in good programming practice. That means that nothing used as data by the component is contained in the component itself; the variable or data space is passed to the component. Good software practice in creating microservices, a hot trend in the industry, would tend to generate RESTful interfaces and thus require reentrant code. Thus, we could say that The Machine, with seamless storage/memory equivalence, could promote microservice-based componentization of applications and VNFs.
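To illustrate the statelessness point, here’s a minimal RESTful-style microservice handler using only Python’s standard library; the endpoint and payload shape are assumptions of mine, not a reference design. Everything the handler needs arrives in the request itself, so a single reentrant copy could serve any number of callers.

```python
# A stateless handler: the component holds no data of its own; all inputs
# arrive in the request and all outputs go back in the response.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatelessHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Everything needed to produce the answer is in the request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"total": sum(payload.get("values", []))}

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Example: POST {"values": [1, 2, 3]} to http://localhost:8080/
    HTTPServer(("localhost", 8080), StatelessHandler).serve_forever()
```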
Another interesting impact is in the distribution of “storage” when memory and storage are seamless. We have distributed databases now, clusters of stuff, DBaaS, and cloud storage and database technologies. Obviously all of that could be made to work as it does today with a seamless memory/storage architecture, but the extension/distribution process would break the seamlessness property. Memory has low access latency, so if you network-distribute some of the “new memory” technology you’d have to know it was distributed and not use it where “real” memory performance was expected.
One way to mitigate this problem is to couple the distributed elements better. HPE says The Machine will include new optical component coupling. Could that coupling be extended via DCI? Yes, the speed of light introduces latency that can’t be engineered away unless you don’t believe Einstein, but you could surely make things better with fast DCI, and widespread adoption of the seamless memory/storage architecture would thus promote fast DCI.
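To show why distance matters here, a quick back-of-the-envelope calculation helps. The numbers below (fiber propagation at roughly 200,000 km/s and a local DRAM access of around 100 ns) are common ballpark figures I’m assuming for illustration, not anything from HPE’s material.

```python
# Ballpark figures assumed for illustration: light in fiber travels at about
# 200,000 km/s, and a local DRAM access takes on the order of 100 ns.
FIBER_KM_PER_SEC = 200_000
DRAM_ACCESS_NS = 100

def round_trip_ns(distance_km):
    """Propagation-only round-trip delay over a fiber DCI link, in nanoseconds."""
    return 2 * (distance_km / FIBER_KM_PER_SEC) * 1e9

for km in (1, 10, 100):
    rtt = round_trip_ns(km)
    print(f"{km:>4} km DCI: {rtt:>10,.0f} ns round trip "
          f"(~{rtt / DRAM_ACCESS_NS:,.0f}x a local DRAM access)")

# Even a 1 km link costs roughly 100x a DRAM access in propagation alone,
# which is why distributed "memory" can't be treated as if it were local.
```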
The DCI implications of this could be profound for networking, of course, and in particular for cloud computing and NFV. Also potentially profound is the need to support a different programming paradigm to facilitate either reentrancy/statelessness or REST/microservice development. Most programming languages will support this, but many current applications/components aren’t reentrant/RESTful, and for virtual network functions it’s difficult to know whether software translated from a physical device could easily be adapted to this. And if management of distributed seamless memory/storage is added as a requirement, virtually all software would have to be altered.
On the plus side, an architecture like this could be a windfall for many distributed applications and for something like IoT. Properly framed, The Machine could be so powerful an IoT platform that deals like the one HPE made with GE Digital (Predix, see my blog yesterday) might be very smart for HPE, smart enough that the company might not want to step on partnership deals by fielding its own offering.
The cloud providers could also benefit mightily from this model. Platforms with seamless memory/storage would be, at least at first, highly differentiating. Cloud services to facilitate the use of distributed seamless memory/storage arrays would also be highly differentiating (and profitable). Finally, what I’ve been calling “platform services”, meaning features that extend a basic IaaS model into PaaS or expand PaaS platform capabilities, could use this model to improve performance. These services would then become a new revenue source for cloud providers.
If we presumed software would be designed for this distributed memory/storage unity, then we’d need to completely rethink issues like component placement, workflow, and load balancing. If the model makes microservices practical, it might even create a new programming model based on function assembly rather than writing code. It would certainly pressure developers to think more in functional terms, which could accelerate a shift in programming practices we already see, called “functional programming”. An attribute of functional programming, by the way, is the elimination of the “side effects” that would limit RESTful/reentrant development.
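Here’s a tiny sketch of what “function assembly” might look like in practice, in Python, with step names that are purely my own illustration: the application is a pipeline composed from small pure functions, each free of side effects.

```python
from functools import reduce

def compose(*funcs):
    """Chain functions left to right: compose(f, g)(x) == g(f(x))."""
    return lambda value: reduce(lambda acc, fn: fn(acc), funcs, value)

# Each step is a pure function: its output depends only on its input, and it
# neither reads nor modifies anything outside itself.
normalize = lambda rec: {**rec, "name": rec["name"].strip().lower()}
enrich    = lambda rec: {**rec, "domain": rec["email"].split("@")[-1]}
redact    = lambda rec: {k: v for k, v in rec.items() if k != "email"}

# "Writing the application" becomes mostly assembling steps into a pipeline.
pipeline = compose(normalize, enrich, redact)

print(pipeline({"name": "  Alice ", "email": "alice@example.com"}))
# {'name': 'alice', 'domain': 'example.com'}
```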
Some of the stuff needed for the new architecture is being made available by HPE as development tools, but they seem to want to make as much of the process open-source-driven as they can. That’s logical, provided that HPE ensures the open-source community focuses on the key set of issues and does so in a logical way. Otherwise it will be difficult to develop early utility for The Machine. There will also be a sensitivity to price trends over time; if pricing changes the way the new memory model is used, those changes could then ripple into programming practices.
A final interesting possibility raised by the new technology is taking a leaf from the Cisco/IBM IoT deal. Suppose you were to load up routers and switches with this kind of memory/storage and build a vast, distributed, coupled framework. Add in some multi-core processors and you’d have a completely different model of a cloud or vCPE, a fully distributed storage/compute web. Might something like that be possible? Like the other advances I’ve noted here, it’s going to depend on the price/performance of the new technology, and we’ll just have to see how that evolves.