IBM and Red Hat Take on the Edge

The carrier cloud and edge computing stories just never seem to end.  Just yesterday, I noted a trend in edge computing that promised a bit of action at the edge, relating to 5G and carrier cloud.  It looked like public cloud players might be seizing the 5G edge.  Now, IBM and Red Hat have thrown their hats into the same ring.  The picture they paint is very similar to the one Microsoft painted with its own Azure Edge Zone story, and the issues they face are similar to those I cited in yesterday’s blog.  The difference is that IBM and Red Hat have jumped all the way to the generalized edge model I talked about yesterday, and that might create a very interesting market dynamic.

The key line in the announcement is that IBM and Red Hat “announced new services and solutions backed by a broad ecosystem of partners to help enterprises and telecommunications companies speed their transition to edge computing in the 5G era.”  Microsoft seems to have taken its shot at the edge by example, by demonstrating the application of Azure to operators’ 5G virtual infrastructure mission.  IBM/Red Hat are shooting for an architecture, an edge computing model that enterprises can adopt too, and they’re viewing 5G as a driver of edge computing broadly, not just of telco infrastructure.

The key technical point in the announcement is that containers are the vehicles that will take applications to the edge.  IBM links their story to Red Hat OpenShift, which is a container and Kubernetes ecosystem.  That effectively makes edge computing a resource in a container world, and unifies the edge, the cloud, and the data center.
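To make that unification concrete, here’s a minimal sketch, my own illustration rather than anything from the IBM/Red Hat announcement, of how a container platform like OpenShift can treat edge machines as just another pool of schedulable capacity: edge nodes join the same cluster, carry a hypothetical “edge” role label, and a standard Deployment manifest pins workloads to them.

    # Minimal sketch: treating edge machines as ordinary Kubernetes capacity.
    # Assumes edge nodes have already joined the cluster and been labeled with
    # a hypothetical "node-role.kubernetes.io/edge" label; nothing here comes
    # from IBM's or Red Hat's actual tooling.
    import yaml  # PyYAML

    def edge_deployment(name, image, replicas=2):
        """Build a plain Deployment manifest pinned to edge-labeled nodes."""
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {
                        # The only "edge-specific" part: schedule onto nodes
                        # carrying the (hypothetical) edge role label.
                        "nodeSelector": {"node-role.kubernetes.io/edge": "true"},
                        "containers": [{"name": name, "image": image}],
                    },
                },
            },
        }

    if __name__ == "__main__":
        manifest = edge_deployment("video-analytics", "registry.example.com/video-analytics:1.4")
        print(yaml.safe_dump(manifest))  # apply with kubectl/oc as usual

The point isn’t the specific label; it’s that the same manifest format, scheduler, and operations tooling cover the data center, the cloud, and the edge.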

One reason this is important is that it solidifies the view held by many (including me) that containers are going to displace virtual machines in nearly all applications.  You can still run VMs with OpenShift, of course, but enterprise IT is going to containerize so completely that those who don’t use containers and Kubernetes will be at a growing disadvantage, both strategically and in the platform software tools available to them.  If you’re not on Kubernetes, you’d better get there.

The other reason this is important is that it shows IBM believes that edge computing will not be dominated by specialized appliances, but will instead be an extension of the cloud.  The specialized edge model, including the use of real-time operating systems like the RTOS approach Microsoft favors, would be expected to run a static application set, not one requiring container/Kubernetes dynamism.

This is an enterprise perspective.  For the telecommunications companies, specialized open devices seem to be favored.  However, while you can host specialized static applications on generalized cloud resources, you can’t run generalized applications on specialized appliances.  IBM is betting the edge market will converge, not fragment into mission-specific approaches.

This doesn’t mean IBM is abandoning the telco space, of course.  In fact, they announced the “IBM Telco Network Cloud Manager – a new solution offered by IBM that runs on Red Hat OpenShift to deliver intelligent automation capabilities to orchestrate virtual and container network functions in minutes.”  This seems to blow politically motivated kisses at the telco virtualization specifications while delivering virtual function hosting in what’s a general cloud-and-container model.

IBM is also addressing the IoT space, something that the telecommunications companies may eventually get to.  Their IBM Edge Application Manager is designed to support the management of up to ten thousand edge nodes by a single administrator (for basic pattern and policy-based commands).  This seems to be a refactoring of the basic OpenShift operations tools, aimed at offering businesses with large-scale IoT plans the confidence that operations complexity won’t bury their staff.
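The announcement doesn’t describe the product’s interfaces, but the scaling argument is easy to illustrate.  The sketch below is a toy version of pattern/policy-based placement: an administrator states constraints once, and software matches them against thousands of self-registered nodes.  None of the names here come from IBM Edge Application Manager itself.

    # Illustrative only: a toy of policy-based placement, where one administrator
    # expresses constraints once and software matches them against thousands of
    # self-registered edge nodes. Not IBM Edge Application Manager's API.
    from dataclasses import dataclass, field

    @dataclass
    class EdgeNode:
        name: str
        properties: dict = field(default_factory=dict)  # e.g. {"gpu": True, "region": "us-east"}

    def matching_nodes(policy, nodes):
        """Return nodes whose advertised properties satisfy every policy constraint."""
        return [n for n in nodes if all(n.properties.get(k) == v for k, v in policy.items())]

    if __name__ == "__main__":
        fleet = [EdgeNode(f"store-{i:05d}", {"gpu": i % 3 == 0, "region": "us-east"})
                 for i in range(10_000)]
        # "Run the visual-inspection container wherever a GPU exists in us-east."
        policy = {"gpu": True, "region": "us-east"}
        targets = matching_nodes(policy, fleet)
        print(f"{len(targets)} of {len(fleet)} nodes match the policy")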

As far as applications go, IBM has two layers of support.  Basic edge platform tools, including IBM Visual Insights, IBM Production Optimization, IBM Connected Manufacturing, IBM Asset Optimization, IBM Maximo Worker Insights and IBM Visual Inspector, are available to augment the container deployment framework.  In addition, they announced the IBM Edge Ecosystem, which is a partner program to create vertical and horizontal solution frameworks and promote them to buyers.

There are some semantic points here too; even though they seem purely terminological, they’re quite important from a market perspective.  The first is that the announced products and programs are IBM-branded rather than branded or co-branded by Red Hat.  Some might see this as simply a reflection of the fact that IBM bought Red Hat, but I think it’s more fundamental.  Edge computing is a big-player opportunity, and IBM has special strategic credibility and influence in the big-player space.

IBM is the only vendor to have ever achieved dominance in strategic influence over enterprise IT spending plans.  They don’t have the same level of influence today as they had at their peak, but they’re still a powerhouse among the larger enterprises, utilities, and financial players.  These spaces have strong potential for edge computing, so having IBM take the lead makes sense.

The next point is that the announcement names Vodafone Business, Samsung, and Equinix as early adopters, and provides a brief summary of the edge offering(s) each has begun to deploy.  The applications demonstrate the edge and hybrid cloud, 5G exploitation, and remote work.  None of them is a host-my-virtual-functions-for-5G play, which again demonstrates that what IBM is doing is defining an edge architecture, not building a one-off edge package.

This may be the smartest thing IBM could have done, overall, even if IBM in the long run wants to take a big bite out of the 100,000 data centers’ worth of work that carrier cloud represents.  5G virtualization infrastructure is just one driver of carrier cloud, and probably not one worth chasing if hosting virtual functions is the only business opportunity.  IBM isn’t even betting that 5G will have broad systemic impact in things like IoT.  What they’re betting on is a form of edge computing where the edge is just the scouting force of hosting resources pushed out toward the IT frontier.

They may be reflecting the historically dominant trend in IT, the thing that’s advanced IT and IT spending since the 1950s.  Getting processing closer to the worker, providing IT at the point of activity, is the mission of any IT giant who wants to ignite a new wave of productivity-driven server and software spending.  If IBM could gain an upper hand in edge computing as a means of delivering point-of-activity worker empowerment, they could see a major uptick in revenue, enough to free IT spending from the more-for-less pressure of the last two decades.

And keep this in mind: none of IBM’s example applications were 5G infrastructure.  We have to wonder whether this is simply backlash against Microsoft’s Azure Edge Zones announcement, or whether IBM thinks Microsoft has made a rare strategic blunder, one that could give IBM a chance not only with edge software but for IBM’s own cloud.  I don’t think IBM is writing off network operator 5G deployments, but I don’t think they’re depending on them.  That might be a very wise move.

The question now, clearly, is whether IBM’s and Red Hat’s competitors will make similar moves.  VMware, Dell, and HPE have similar aspirations, and Amazon and Google could take a different stance on edge computing than either IBM or Microsoft.  There’s even more reason than usual to think that the edge space could benefit from some competitive jockeying and creative alternative business and technology models.  If that happens, we could see a lot of dynamism in a space that so far has been dominated by hype.

New Battle Lines Emerge at the Edge

We may be seeing some clarity in the edge space, arising perhaps surprisingly from some increased confusion.  There are more players emerging, more theories being promulgated, and more value propositions being tested, to be sure.  What’s interesting is that some trends are emerging.

The biggest question in edge computing is just who has the edge, so to speak.  That’s a two-dimensional question, too.  The first dimension, taking the “has the edge” to mean “has the lead”, reflects the fact that there’s likely to be a dominating set of players, as there is in any market.  The second, taking the “has the edge” to mean “owns the edge elements”, reflects the fact that edge dominance requires having something out there to do the dominating.

Let’s start with potential players/winners.  In the past, many (myself included) believed that network operators were going to “have” the edge in both senses of the term.  Operators have control over edge offices where access networks terminate, they have a mission that could site gear there in 5G, and they have a set of standards and plans (including 5G) that involve function hosting.  Most of the hundred-thousand new data centers that I’ve said carrier cloud could deploy were associated with edge hosting.

Another logical set of winners are the public cloud providers.  The biggest challenge with edge computing is what operators call “first cost”, the portion of the cash-flow curve after a deployment starts where investment far outstrips revenue and net cash flow dips negative.  First-cost constraints suggest that many edge applications would try to get off to a slow start, but that’s a problem because the clearest edge value is lower latency, which demands proximity; you can thus only start slow by starting in a very narrow geography.  But cloud providers are hosting resources for the masses, and they might be able to deploy first simply by offering edge-as-a-service to all the other constituencies.
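A toy cash-flow model makes the first-cost problem visible.  The numbers below are purely illustrative assumptions, not anyone’s actual deployment economics; the point is the shape of the curve, which dips deeply negative before revenue ramps enough to recover.

    # A toy first-cost model: up-front deployment spending precedes revenue, so
    # cumulative cash flow dips negative before it recovers. Figures are
    # illustrative only.
    def cumulative_cash_flow(months, capex, opex_per_month, revenue_ramp_per_month):
        """Cumulative net cash by month, with revenue ramping linearly for 24 months."""
        cash, series = -capex, []
        for m in range(1, months + 1):
            revenue = min(revenue_ramp_per_month * m, revenue_ramp_per_month * 24)  # cap the ramp
            cash += revenue - opex_per_month
            series.append(cash)
        return series

    if __name__ == "__main__":
        series = cumulative_cash_flow(months=48, capex=10_000_000,
                                      opex_per_month=200_000, revenue_ramp_per_month=25_000)
        breakeven = next((m + 1 for m, c in enumerate(series) if c >= 0), None)
        print(f"Deepest dip: ${min(series):,.0f}; breakeven month: {breakeven}")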

There’s even a case to be made for server/device vendors.  Many proposed edge applications relate to IoT, which in turn relates to some form of industrial automation.  The companies that have the industrial processes to automate are logically the owners of the edge themselves, and they could in theory deploy technology between IoT elements and their computing resources to provide edge processing.  Vendors would have to not only support but promote this, creating a framework that raises early benefits for users while lowering their risks.

What do any aspiring edge-winners need?  Any form of edge computing needs a mission to serve, and mission development isn’t the strong suit of anyone these days.  Market education is a tedious process, and over the last decade the notion of an educational sell has become increasingly oxymoronic.  One salesperson told me, in 2016, that “if you have to educate, you’ll never make the sale at all.”  People don’t earn commissions by not selling.

This gave the early mover advantage, potentially, to the network operators.  They had six potential driver applications for carrier cloud, and there were some early-acting ones that could have created an internal mission for deployment.  Feature or function hosting is one example.  The operators did commit to these missions with things like NFV, but the commitment never developed into significant deployment.  In fact, it’s recently become clear that operators would love to outsource their carrier cloud opportunities to the public cloud providers.

Which brings us, of course, to our second potential winner group.  All of the public cloud providers would love to provide edge computing as a service, and all have shown interest.  Microsoft and Google seem to be the preferred partners in the operators’ view, largely because the operators fear Amazon’s might in the cloud would defeat their own (delayed, half-hearted) efforts to shift to self-deployed cloud assets down the line.  The public cloud gang first succeeded in tapping business cloud opportunities from operators who wanted to provide enterprises with cloud services but didn’t want to build out to get them.  What disappointed the cloud providers in these early deals was that the operators were largely interested in erecting big, attractive cloud-promoting billboards and seeing what rolled in.  Not much did, because (as Verizon learned early on) operators aren’t very effective as cloud computing sales organizations.

The cloud providers also realized they had their own first-cost issue.  Remember, few public cloud providers own real estate in every major community the way that the operators do.  They can hardly run out and buy some, fill it with servers, and hope operators (who had their own facilities all along) would fill the cloud provider edge instead of their own.  In fact, cloud providers had edge opportunities independent of whatever missions the telcos might have sent them, and had them several years back.  The cloud provider strategy for the edge has focused on cloud-allied appliances.

Instead of buying real estate, or perhaps even equipment, why not supply users with a software stack and perhaps a hardware specification, and let them roll their own edge, one integrated with the public cloud provider’s own facilities and software?  Things like Amazon’s Greengrass came along as a result of this sort of thinking.

Microsoft has perhaps advanced this to the state of the art in its Azure edge stuff, which promotes the “intelligent edge” as a piece of the “intelligent cloud”.  The basic concept is to make edge a subset of the cloud, one that can be a subset or zone of cloud resources, or an independent device.  Things like Azure RTOS and Azure IoT Edge create that private-edge-device-to-Azure-cloud linkage, and there’s a software suite that’s more application-centric (IoT, of course) that facilitates implementation, and thus commitment.

Microsoft has also jumped into the telco-cloud mission with a set of “Edge Zone” offerings.  These range from 5G services for operators who want to outsource some hosted 5G function pieces to Private Edge Zones for enterprise private-5G deployment.  The current 5G-centric stuff includes a bunch of application partners who combine to create a complete solution.  This is almost certainly intended to be a prototype for edge-computing strategies in the future, not only for possible carrier cloud mission-stealing but also for general edge computing, especially IoT.

The big advantage of this approach is that it addresses that educational-sell problem.  Selling a solution to somebody’s problem is easy.  Selling a technology and then educating the buyer to solve their own problem is a lot harder.  That suggests that if vendors want to sell edge equipment to users, competing with as-a-service edge models, they’ll need to build application alliances too.

Which, for vendors, is more difficult.  The problem is that while users welcome as-a-service strategies because they’re inherently immune to stranding capital, the strategies based on purchased hardware and software are considered a lock-in risk.  Even when the user buys a device to serve as a public-cloud edge partner, they see the device as being linked to their relationship to the cloud provider, which they don’t see as a lock-in (even when it sometimes is).

The other problem for vendors is the fact that there’s not a lot of expensive gear involved in an edge commitment.  That makes a sale less productive for both salesperson and company, making an educational sell nearly impossible, and it also means there’s a low-commitment relationship established with users, which discourages partners’ participation.

What this adds up to is that public cloud providers will likely end up owning the edge in the sense of control, and that users will likely own the devices, or at least lease them for installation on their premises.  Once this trend gets started, the notion of a true edge-as-a-service, meaning an edge hosted in some facility data center, will be difficult to promote.

Maximizing the Shift to Public Cloud

Cloud computing is going to grow because of the pandemic and lockdown.  There’s no question of that, but how much it might grow and what it might grow into are harder to assess.  The issues that impact those questions range from security/compliance to simple cost, and enterprise planners are grappling with how to come to terms with them, and with the future cloud that issue resolution will create.

Ten years ago, I did some modeling on cost points, and came to the conclusion that “moving to the cloud” would be economically feasible for only about 24% of applications.  The problem is that hosting economies, meaning resource pool efficiencies, don’t grow linearly or accelerate as the pool grows.  Instead, you reach a point where further server density doesn’t improve your ability to support new applications or your operations efficiency.  My calculations showed that larger enterprises would reach a high enough level of resource efficiency that a public cloud provider’s cost, plus their profit margin, would be greater than the enterprises’ own resource costs.
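The shape of that argument can be sketched with a toy model.  In the snippet below, unit cost falls with scale but flattens toward a floor; a very large provider sits near the floor but has to add a margin, so at some enterprise scale the comparison flips.  All of the constants are illustrative assumptions, not the parameters of my original model.

    # Toy crossover model: pool efficiency improves with scale but saturates, so
    # beyond some size an enterprise's own cost per workload can undercut a cloud
    # provider's cost plus margin. All numbers are illustrative.
    import math

    def cost_per_workload(workloads, base_cost=100.0, floor=35.0, scale=2_000.0):
        """Unit cost falls with scale but flattens toward an asymptotic floor."""
        return floor + (base_cost - floor) * math.exp(-workloads / scale)

    def cloud_price(provider_scale=200_000, margin=0.30):
        """What a very large provider must charge: its (near-floor) cost plus margin."""
        return cost_per_workload(provider_scale) * (1.0 + margin)

    if __name__ == "__main__":
        for n in (200, 2_000, 10_000, 50_000):
            own, cloud = cost_per_workload(n), cloud_price()
            winner = "cloud wins" if cloud < own else "own data center wins"
            print(f"{n:>6} workloads: own ${own:6.2f} vs cloud ${cloud:6.2f} -> {winner}")

Small shops come out ahead in the cloud; very large resource pools don’t, which is the crossover the 24% figure reflects.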

The other complication is security and governance.  Companies are very reluctant to expose their critical information to a public cloud provider.  I won an online debate on this topic a decade ago, and little has changed since.  It’s possible to reduce the security and compliance risks by things like on-disk on-the-fly encryption, but the cost and performance impact is still considered by planners to be too high.

You might wonder how the cloud is growing at all, given this, and how it could be considered an effective response to the pandemic.  The answer is that for about a year, enterprises have recognized that using public cloud resources for the front-end GUI or “presentation” interface is smart.  Ceding as much of the user experience as possible to the public cloud improves QoE and overall application performance without adding data center resources.  This is because the user interface has a lot of think-time associated with it, and some non-critical editing and even database work can be offloaded to further improve front-end response.
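As an illustration of that split (my own minimal sketch, with hypothetical URLs and field names), a stateless presentation service running in the public cloud can absorb the think-time work, validation, formatting, error handling, and forward only completed transactions across the boundary to the data center.

    # Minimal sketch of the front-end/back-end split: a stateless "presentation"
    # service that could run in a public cloud, validating near the user and
    # forwarding only completed transactions to the data center. The URL and
    # field names are hypothetical.
    from flask import Flask, jsonify, request
    import requests

    app = Flask(__name__)
    BACKEND = "https://apps.example-enterprise.internal/orders"  # hypothetical on-premises core system

    @app.route("/order", methods=["POST"])
    def submit_order():
        order = request.get_json(force=True)
        # Think-time work handled in the cloud: basic edits before anything
        # touches the data center.
        if not order.get("sku") or int(order.get("quantity", 0)) <= 0:
            return jsonify({"error": "sku and a positive quantity are required"}), 400
        # Only the validated transaction crosses the cloud/data-center boundary.
        resp = requests.post(BACKEND, json=order, timeout=5)
        return jsonify(resp.json()), resp.status_code

    if __name__ == "__main__":
        app.run(port=8080)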

Planners have been considering the question of how to do more.  About 22% of enterprises (multi-site businesses with at least 20 locations) believe they could cede more applications to the cloud if they could deal with executive objections to the security/compliance issues.  Since most of these deal with database access during transaction processing, the tentative conclusion of planners is that if the cloud applications could dip into data center storage for access and updates, more of the applications could be made cloud-resident.  Executives, it turns out, have issues with this approach too.

One of the issues is spurious, IMHO.  They’re concerned that because the database access has to be exercised across a network boundary with the cloud provider, they’re losing performance relative to doing it locally.  The reason I think this is spurious is that transactions already have to cross that same boundary to reach the core back-end portion of the applications.  There might be a slightly larger data payload moving across if you pushed the main logic of an application into the cloud and then hit the database across the cloud/data-center boundary, but not necessarily a huge increase.

The second issue is that cloud providers usually charge for transiting the cloud-network boundary, meaning ingress and egress traffic is chargeable.  This is also a spurious issue, but not as much as the performance issue I just noted.  The larger payloads might not make a huge difference in QoE, but they could easily run up costs.  If cloud providers want to maximize the number of applications or application components that are transferred to the cloud, they’ll have to look at their traffic charge policies.
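The egress-cost concern is easy to quantify with back-of-the-envelope arithmetic.  The figures below, transaction volume, payload sizes, and the per-gigabyte boundary charge, are illustrative assumptions only.

    # Back-of-the-envelope egress cost, with illustrative assumptions.
    transactions_per_day = 10_000_000
    payload_kb_front_end_only = 5     # response returned when only the GUI is cloud-hosted
    payload_kb_logic_in_cloud = 500   # rows pulled per transaction if main logic queries the data center
    egress_price_per_gb = 0.09        # assumed per-GB boundary charge

    def monthly_egress_cost(payload_kb):
        gb_per_month = transactions_per_day * 30 * payload_kb / (1024 * 1024)
        return gb_per_month * egress_price_per_gb

    print(f"Front-end only: ${monthly_egress_cost(payload_kb_front_end_only):,.0f}/month")
    print(f"Logic in cloud: ${monthly_egress_cost(payload_kb_logic_in_cloud):,.0f}/month")

The absolute dollars depend entirely on the assumptions, but the ratio, roughly the ratio of the payload sizes, is what makes planners nervous.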

The third issue, which is still spurious, is that management has been spooked by reporting of failures in public clouds.  They realize that it’s possible in theory to back up one cloud provider with another, but that increases costs and compromises further cloud migration.  What management forgets is that they’re already depending on public cloud availability with any cloud front-end. There’s no further risk, or at least no significant risk, in depending more on the public cloud.  This problem, then, is largely one of public relations.

The final issue is the only one that’s not a red herring.  Core applications simply aren’t designed to be distributed in a public cloud that way.  Rewriting them to make them cloud-distributable with databases at home in the data center is considered a significant burden, and there are multiple reasons for that, too.

The first reason is that third-party software can’t be rewritten by enterprises, and most software companies are telling enterprises that there are no plans to make that sort of change in the near term.  This impacts just short of a fifth of the enterprise applications, according to the planners.

The second reason, which some might say is related, is that software licenses can either hinder optimizing scaling and redeployment, or downright prevent it.  Software is often licensed based on the number of instances being run, and so scaling components could introduce additional charges—if the software even permits scaling.  This impacts about ten percent of applications.

The third reason relates to recent stories on state unemployment systems.  Many enterprises are running applications written in obsolete programming languages.  In some cases, the languages themselves may introduce barriers to modern software design, and in other cases there are simply too few developers familiar with the languages.  This impacts twelve percent of applications.

Which leaves the biggest reason, the issue that impacts over half of applications.  The time and cost required to make the changes, and the need to freeze changes while the applications are being redone, are prohibitive.  This is the main reason why the universal rush to the cloud that some pundits predict isn’t going to happen.

Is there no solution?  There are two, in fact, but one won’t be palatable for many.  In time applications will be redone and redesigned to meet cloud-ready qualifications.  Yes, it will take years to happen, and some applications might require a decade or two, but inevitably the new hybrid-cloud model will succeed.  Many of those who promote the cloud may have retired by that time, but hey, nature is a force.

The more palatable option is middleware.  Applications are written to access resources, whether hardware, database, or platform, through middleware APIs.  If the middleware is changed to a form that’s more cloud-friendly, that transformation could reduce or eliminate many of the issues associated with a broader public cloud mission.  But it has to be done right.

Database is a good example.  You can view a database access as “logical” or “physical”.  A logical access means something like an RDBMS query.  Whether that goes to a local database or emerges from the cloud as a query to an on-premises database, it’s still the same query.  If it’s possible to intercept logical DBMS access, it would be possible to move a relational database (or any structured database with a high-level access semantic) away from the application so it could reside in a data center while the accessing application components move to the cloud.
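Here’s a minimal sketch of what such an interception layer might look like; the class and endpoint names are hypothetical, my own illustration rather than any specific middleware product.  The application codes to a logical OrderStore interface, and deployment decides whether that interface is satisfied locally or by a query service fronting a database that stays in the data center.

    # Minimal sketch of "intercepting logical access": application code talks to
    # an OrderStore interface, and deployment decides whether it's backed by a
    # local SQLite file or a (hypothetical) HTTPS data service fronting a
    # database left in the data center. The application code is identical.
    import sqlite3
    from abc import ABC, abstractmethod

    import requests

    class OrderStore(ABC):
        @abstractmethod
        def orders_for_customer(self, customer_id): ...

    class LocalOrderStore(OrderStore):
        def __init__(self, path="orders.db"):
            self.conn = sqlite3.connect(path)  # assumes an existing "orders" table

        def orders_for_customer(self, customer_id):
            cur = self.conn.execute(
                "SELECT id, sku, quantity FROM orders WHERE customer_id = ?", (customer_id,))
            return cur.fetchall()

    class RemoteOrderStore(OrderStore):
        """Same logical query, answered by a data-center service across the boundary."""
        def __init__(self, endpoint="https://data.example-enterprise.internal/orders"):
            self.endpoint = endpoint  # hypothetical on-premises query service

        def orders_for_customer(self, customer_id):
            resp = requests.get(self.endpoint, params={"customer_id": customer_id}, timeout=5)
            resp.raise_for_status()
            return resp.json()

    def order_count(store: OrderStore, customer_id):
        # Application logic is unaware of where the data physically lives.
        return len(store.orders_for_customer(customer_id))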

A physical access, in contrast, means device-level I/O is what’s being done via the application’s API to the data world.  If that’s the case, there’s nothing that can make it efficient or cost-effective if the database remains in the data center while some or all of the application moves to the cloud.  There’s no easy way to get a handle on how many applications could hit this wall, but planners estimate the number to be between 15 and 20 percent.

We still need to look at the way the cloud prices data in- and out-flows.  We still need to look at how we could optimize cloud benefits for applications not easily rewritten.  The rewards could be great, though.  The current trend toward enhanced cloud front-ends would roughly double the cloud’s potential revenue, and we could triple it if we could offload even half the potential components of mission-critical applications that enterprises would be willing to cede to the cloud in a post-pandemic world.  For cloud providers, this is the real light at the end of the tunnel.

How Tech Planners View Possible Recovery Scenarios

Suppose the pandemic is transformational?  That’s a question that more and more planners/strategists are asking, among service providers, vendors, and enterprises.  There are several dimensions of transformation, obviously, but the one that I propose to look at is the technology dimension.  Will the pandemic really change how technology is used, and therefore how it’s purchased?  I’ve tried to get some broad input on this, and the first thing I learned is that most think it’s still too early to tell.

Almost from the first, planners saw the potential for pandemic impact on tech as something related mostly to the lockdown, which has the effect of suspending a lot of retail consumer behavior.  They also saw it as dividing into three phases depending on how long the lockdown went on.  The “hiccup” phase, meaning a lockdown of a couple weeks, was widely seen as exactly what the name suggested.  This would have impacted retail sales and consumer services, but not much else.  A few planner-eggheads would ponder the implications of a future event of that type, but otherwise it would be business as usual.

The next phase, the “it-hurts” phase, meant a lockdown up to about 2 months, perhaps tapering off with fewer restrictions to the 3-month point.  The immediate impact is to create massive job losses in the retail and services sectors, kicking up unemployment.  Now, there’s a growing population of cash-strapped households, and even those with income are limited in what they can buy because so much is closed.

This level of retail behavior change would put most companies in a position of having to lay off employees even outside the retail sector, as the impact rolled through the supply chain.  Uncertainties about their own business future would make more businesses cautious about spending, spreading things further.  This is the phase we’re in, and we’re about 4-6 weeks along.

In theory, a combination of stimulus and unemployment enhancements can reduce the impact of this phase, but again the issue is that even those who are made financially whole, or nearly so, can’t exercise their normal spending behaviors.  The longer that’s the case, the bigger and broader the impact.

The final phase, the “this-really-sucks” phase, means a lockdown of more than 2-3 months.  At this point, there are likely problems with business stability and survivability for some companies, and there’s a fear that the protracted pandemic will depress consumer behavior through the whole of the summer and into the fall.  The risk of a second outbreak in the fall creates an additional level of concern, with the business risk then extending well into 2021.

The greater the impact of the current phase of the pandemic, the more the fear of a follow-on will impact buyer behavior.  Keep the lockdowns going into June and there’s enough fear to cause consumers to change behavior.  Keep them going through the summer and we’re talking major problems, unless there’s convincing evidence that there’s a treatment that works and a vaccine that’s imminent.  We need clarity, or everyone in the market will get very defensive, and that alone will create something to be defensive about.

As I said, we’re in that it-hurts period, and in this period the planners say they’re confronting two basic questions.  First, will tech budgets for the year be permanently impacted, or will buyers just push spending back toward Q4?  Second, will the basic way that business is done change?

Most planners think that if the lockdowns don’t lift by the middle of June, we’ll likely be in the third and ugliest of the phases.  Two and a half months of the quarter would then have been in lockdown, and that would surely have a major impact on retail sales.  In that case, they think that tech spending will decline by at least 5-8% in 2020, and that if there is another outbreak of any significance in the fall, they’d expect 2021 budgets to be set at least 10% below the 2020 levels.  In short, this would hurt a lot.  However, the overwhelming majority of planners still think that things will open by the end of May.  That would mean that at least some of the spending held off during the lockdown could rebound in Q2, which would reduce the need for companies to take financial measures to lessen the shareholder shock of a truly bad second quarter.

So, let’s assume they’re right.  The immediate impact would mostly be on the supply chain for retail sales, so the finished-goods areas would tend to recover first.  B2B and manufacturing supply chains would recover two to three months later, and it’s possible that some of these secondary players would not recover until well through Q3, which might mean that their 2020 tech spending would be hit; there’d be little time to recover and push projects along before end-of-year.  Likely we’d see a decent consumer recovery by Q4, but businesses could still lag a bit.

That deals with our first question.  For the second, a lot depends on the nature of the work done by each company, and in particular on whether enough of the company’s workforce could work from home that creating a better WFH model would be broadly helpful.

Tech journalism talks to tech people, and so our view of WFH is biased toward organizations that have a lot of office people.  With most tech manufacturing offshore, the work done in offices relates to sales/marketing, administration, software development, management, and planning.  All of these activities could be moved out of the office if the people could be made productive at home.  For companies that actually have to build or handle goods, a big piece of their workforce may be ineligible for WFH, and if most of a company is shut down by a lockdown, there may be less value in trying to keep the rest working from home.

That doesn’t mean that tech changes couldn’t happen.  Almost two-thirds of planners think that the pandemic, even ending at the end of May, will likely increase the share of their budgets they spend in the cloud.  Most of that is in support of more web-based retail and customer service, part relates to WFH, and part is because there’s more interest in shifting some classic data-center pieces of applications to the cloud to avoid issues with facilities access and management.  Nobody was ready to say they’d move critical apps to the cloud and close the data center, but part of the capital-project deferrals will likely involve moving to cloud deployment instead.  If that happens, the workloads won’t come back for a long time, if ever.

As far as remote work or WFH, planners are vague, which suggests that while it’s a topic of interest, there’s still not much backing for change.  This is almost surely due to the fact that as long as the lockdowns remain in force, there’s little appetite for taking steps to optimize the future.  How much optimization might be needed is as uncertain as the future is.

The current bias is to simply expand the use of the tools already in place, but to improve dynamic credibility.  SD-WAN is the one certain winner in this picture.  Not only does it promise to reduce costs at a time when cost reduction looks pretty good, it also can provide a lot of connectivity benefits related to WFH, and it can also improve cloud connectivity.  You can see that’s true by following the trends in advertising in the space.

There are likely to be some disappointments when it comes to picking a service or product, though.  As I’ve been saying for two years now, SD-WAN is really a subset of the broader virtual-network mission.  That’s not how the overwhelming majority of vendors/service providers see it, though.  If you consider both WFH and enhanced cloud connectivity, in fact, there’s only one vendor that measures up.  If you’re interested in my view, see the report HERE.

The changes in the SD-WAN dynamic that the pandemic is driving are a symptom of a broader vision to come, and eventually I think we’ll see that recognized.  I also think that at least half the current vendors will never achieve it; there’s too much of a difference between the limited SD-WAN mission of the past and the virtual network mission of the future.  It will be interesting to see how everyone adapts, but buyers in this space should be very careful about their selections.

A tech transformation in remote work remains a possibility, but only that.  Enterprise planners tend to think in terms of adopting what they can buy, rather than postulating what might work best for them in hope someone will sell it.  Vendor planners tend to focus on what people are asking for, especially now, at a time when VCs are guarding startup cash closely.

What gets us out of this?  We need a convincing positive in the pandemic fight.  Gradual reopening, even if successful, won’t erase the fear of another wave in the fall, winter, or next spring.  What nearly all planners say is that were a vaccine available with good effectiveness, they’d presume an end to lockdowns and associated recurrence risks.  That would almost certainly create the pent-up-demand boom that’s been discussed.

Another positive step, with less radical impact, would be learning that a very large number of people (at least 20% of the population) had already had the virus with little or no symptoms.  As I write, US statistics show about 1.1 million confirmed cases and 65 thousand deaths.  That equates to a death rate of about 6%, which would make COVID-19 roughly 60 times as deadly as seasonal flu.  If current estimates are correct and the actual infection rate is about 22 times higher, the infected count would be about 24 million and the fatality rate about 0.27%, roughly three times that of the flu.  If 20% of the population had been infected instead of just the 1.1 million confirmed positives, the fatality rate would be roughly in line with seasonal flu.  Less deadly, less risk.  Note, of course, that we don’t know what the real death rate is, because we’ve not been able to test enough of the broad population to establish how many have already had the disease, asymptomatically, and recovered.
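For transparency, here’s the arithmetic behind those figures, with the assumptions made explicit: a US population of roughly 330 million and a seasonal-flu fatality rate of roughly 0.1 percent.

    # The arithmetic behind the paragraph above, with assumptions made explicit.
    US_POPULATION = 330_000_000      # assumed 2020 US population
    FLU_FATALITY_RATE = 0.001        # assumed seasonal-flu fatality rate (~0.1%)

    deaths = 65_000
    confirmed_cases = 1_100_000

    scenarios = {
        "confirmed cases only": confirmed_cases,
        "22x the confirmed count": 22 * confirmed_cases,
        "20% of the population": int(0.20 * US_POPULATION),
    }

    for label, infected in scenarios.items():
        rate = deaths / infected
        print(f"{label:>25}: fatality rate {rate:.2%} ({rate / FLU_FATALITY_RATE:.1f}x seasonal flu)")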

Two important points to close with here.  First, the planners I’ve heard from are generally not predicting what’s going to happen, only indicating what they believe their company’s reaction to developments would be.  Except where I label a view as a prediction, keep that in mind.  Second, if the pandemic does go beyond the summer, or recurs strongly in the fall, then most of these views would no longer be relevant.  The difference between this and 1929 is that in 1929 the crash caused widespread financial failures.  It wasn’t a lockdown, but a true crash.  If this goes on long enough, it will become one too.

Even where we may now, optimistically, be heading, I think we will see transformation.  Businesses that relied on storefront sales are clearly at risk from a recurrence, and from future pandemics.  I think we’re sure to see more companies providing for online sales, and preparing to shift from in-store to delivery as needed.  I also think we’ll see some optimization of remote-compatible work practices, though less of this.  The big network impact will be decentralization of connectivity, promoting SD-WAN and virtual networking.  How far we go on these changes will depend on the balance between the drivers created by risk perception and the inhibitions created by financial stress.  We’ll have to wait to see how these play out.