Feedburner Alternative

Over half of the people who read my blog in a given month read it via email syndication. Up to June 2021, I’d been using Google’s Feedburner for that purpose, despite the fact that it’s not the most feature-rich option out there. Google announced a couple of months ago that it was dropping email syndication support in Feedburner, which led me to look for another option. I found follow.it. This post isn’t a follow.it advert and it’s not compensated in any way, but I know I was blindsided by Google’s move, and I want to help other smaller sites that need to react before the July loss of Feedburner’s email syndication.

There are a variety of plans associated with follow.it, including a basic plan that supports any number of followers for a limited number of feeds. Their website navigation can be a bit murky once you get beyond selecting a plan, but if you’re systematic there’s help for anyone who’s interested in having site content changes syndicated to various channels, including much of social media. There’s specific guidance for Feedburner users who need another syndication strategy.

The general process is simple. First, you download your subscriber list from Feedburner. Second, you either enter the list into an online form associated with your plan, or you have the follow.it help desk process the Feedburner list and import it for you. When that’s been done, you can delete the Feedburner feed so people don’t get two notifications for each post.
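If you’d rather tidy the export before handing it over, a few lines of scripting will do it. The sketch below is illustrative only: it assumes the Feedburner export is a CSV with columns along the lines of “Email Address” and “Status”, which may not match your file exactly, so check the actual headers before running it.

```python
# Minimal sketch: pull the active addresses out of a Feedburner subscriber
# export before importing them into follow.it. The column names used here
# ("Email Address", "Status") are assumptions; verify them in your export.
import csv

def active_subscribers(export_path):
    """Return the email addresses whose status looks active in the export."""
    emails = []
    with open(export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            email = (row.get("Email Address") or "").strip()
            status = (row.get("Status") or "").strip().lower()
            if email and status == "active":
                emails.append(email)
    return emails

if __name__ == "__main__":
    subs = active_subscribers("feedburner_export.csv")
    print(f"{len(subs)} active subscribers found")
    # One address per line is easy to paste into an import form or send to
    # the follow.it help desk.
    with open("subscribers.txt", "w", encoding="utf-8") as out:
        out.write("\n".join(subs) + "\n")
```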

Now it’s time to change the site itself. My blog runs on WordPress, and there’s a follow.it plugin for WordPress that will, when activated, run you through all the steps needed to make everything work. One of the steps asks you to install another plugin, for social-media icons, and there are setup parameters for it as well. There are free and premium versions of that plugin, so it makes sense to check out the features before you install, and upgrade only if you need the premium capabilities.

Read the steps and documentation carefully and go through everything in order. When everything has been set up, add an email to your subscriber list and do a test posting to make sure your list is actually getting the emails as before. You should also view your site in both desktop/laptop and mobile form to make sure you like where the syndication icons are placed.

I’ve not had any issues with follow.it, and in fact I think it’s working better than Feedburner. I’ve noticed that the syndication emails go out much more quickly, which makes syndication more attractive to those who want to read my blog soon after it’s posted. It also supports some condensed-delivery “newspaper” options for those who want to review a series of posts and decide what to read or forward. For anyone stuck with the loss of Feedburner’s email syndication, it seems to be a great option.

Do You Save Money with Cloud Computing?

Cloud computing doesn’t always save money. That’s contrary to the popular view, certainly contrary to the publicized view, and controversial to boot, but readers will recognize that it’s a point I’ve made often. A recent article in VentureBeat makes the same point, but has been its own source of controversy, and frankly I’m not in agreement with many of the themes of the piece. So I went back over my own enterprise data with the hope of balancing the scales for the cloud, and I’ll share what I learned.

The first and most important point was that any discussion about “cloud” savings or benefits is an exercise in futility, because the topic is too broad. There are three major classes of cloud project, and the economics of each is unique. Second, most assessments of cloud costs versus data center costs fail because they don’t level the playing field in terms of assumptions. Third, most cloud projects are badly done, and a bad project creates bad results except by happy accident. I’ll talk about these points and present what enterprises have told me over the last three years, and you can draw your own conclusions.

Cloud projects can be grouped into three classes based on just what’s running. The first class represents applications actually moved to the cloud, things that were previously run on-premises and have been transported to cloud hosting with little or no modification. These represent about 25% of enterprise cloud applications. The second class represents application front-end additions hosted in the cloud, new GUI and presentation logic added to legacy applications. These represent about 65% of enterprise cloud applications, the largest class by far. The third class is cloud-native independent applications, written for the cloud and not designed as front-ends to legacy applications. These represent only 10% of enterprise cloud applications.

Applications moved to the cloud are limited in the extent to which they can exploit cloud benefits, and users report that the primary motivation for moving them in the first place was a form of server consolidation. You can’t scale these applications because they weren’t designed that way, and users say the cloud doesn’t do much to add resiliency to them either. Only a fifth of these applications “clearly met the business case” for the cloud, another 40% were “marginally” justified, and the remaining 40% “failed to meet the business case”. About a third of these were migrated back in-house. The experience here shows that cloud hosting is not inherently more cost-effective if the cloud’s benefits are constrained by application design.

The situation is very different for the second group, the application front-end usage that dominates enterprise cloud use. Users say that nearly 70% of these applications met the business case, another 25% did so marginally, and only 5% failed to meet the business case. Interestingly, only ten percent of the failures were either repatriated to the premises or under active consideration for it. Users are happy with the cloud in these missions, period.

The third group is a bit paradoxical. According to users, just shy of 40% of these applications were clearly justified, another ten percent marginally justified, and the remaining half failed to meet the business case. Over half of that failed group were already being (or had been) repatriated or rewritten, with the latter somewhat more likely than the former.

Why cloud-native, of all categories, isn’t meeting goals comes down to the second and third of my three opening points. Users were comparing in-house hosting costs to cloud costs, when the cloud was actually providing more benefits than the premises hosting option could. Cloud-native applications are, when properly designed, scalable and resilient in a way that’s nearly impossible to achieve outside the cloud. The “when properly designed” qualifier, of course, is a key point.

Cloud-native development skills are in high demand, and most enterprises will admit that they have a difficult time acquiring and retaining people with that skill set. Many will admit that part of the problem is that their development managers and leads are themselves lacking in the skills, making it hard for them to identify others who actually have the right background. Without proper skills, the cloud-native applications often don’t exploit the cloud, and there are then fewer (if any) benefits to offset what’s inevitably a higher hosting cost.

If we turn back to the VentureBeat article, we can see that their story is about “cloud” economics, and it’s biased in a sense because the majority of public cloud use comes from tech companies, social media and the like, and not from enterprises. The Dropbox example the article cites illustrates this point, but it also illustrates why it’s dangerous to use cloud applications from startups to judge cloud economics overall.

Startups are capital-hungry, so the last thing they want to do is rush out and buy data centers and servers to host their expected customer load, then pay for them while they try to develop the business. Most start in the public cloud, and most who do eventually end up with two things—highly efficient cloud-native applications, and a workload that could justify a private data center with ample economy of scale. As I’ve pointed out a number of times, cloud economy of scale doesn’t increase linearly as the size of the resource pool increases; there’s an Erlang plateau. Any reasonably successful dot-com company could expect to nearly match public cloud economies, and if they don’t have to pay the cloud providers’ profit margins, they will of course save money.
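To see why there’s a plateau rather than endless gains, you can run the Erlang-B arithmetic yourself. The sketch below is illustrative only; the 1% blocking target and the pool sizes are my own assumptions. It shows achievable utilization rising steeply for small pools and then flattening, which is why a sufficiently large private pool gets within shouting distance of public-cloud economies.

```python
# Illustrative sketch of the "Erlang plateau": at a fixed blocking (overflow)
# target, the utilization you can run a resource pool at rises steeply for
# small pools and then flattens out. The 1% target and the pool sizes below
# are assumptions chosen purely to show the shape of the curve.

def erlang_b(offered_load, servers):
    """Erlang-B blocking probability via the standard recurrence."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

def max_utilization(servers, blocking_target=0.01):
    """Highest utilization achievable while keeping blocking under the target."""
    lo, hi = 0.0, 2.0 * servers
    for _ in range(60):  # bisection on offered load
        mid = (lo + hi) / 2
        if erlang_b(mid, servers) <= blocking_target:
            lo = mid
        else:
            hi = mid
    carried = lo * (1 - erlang_b(lo, servers))
    return carried / servers

for n in (10, 50, 100, 500, 1000, 5000):
    print(f"{n:5d} servers: ~{max_utilization(n):.0%} utilization at 1% blocking")
```

Run it and the utilization figures climb quickly through the first few hundred servers, then merely creep upward after that; the creep is the plateau.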

Enterprises aren’t in that position. Most of their core applications are going to stay in the data center, and be augmented with cloud front-ends to improve quality of experience and resiliency. They could not replicate cloud capabilities in scalability and availability without spending more than they’d spend on the cloud, and their satisfaction with that class of applications demonstrates that they realize that. They also realize that without the special cloud benefits, the cloud for them will be more expensive. Where they can realize those benefits, the cloud is great. Where they cannot, the cloud is less great, and maybe not even good.

Front-end application development isn’t the same as core application development, because the pricing, security, and compliance implications of the cloud don’t fit with transplanting stuff from the data center, as the experience of users in the first class of applications shows. The concepts of “cloud-native” development, from high-level application design to the APIs and details of the microservices, are not well understood, and almost everything written about them is superficial and useless at best, and wrong at worst. That’s why our third class of applications isn’t as successful as the second, front-end, class; there’s more to deal with when you do a whole application in the cloud rather than just a front-end.

There’s a lesson for the operators here, of course, and for 5G and O-RAN. Cloud-native hosting, or any hosting, of network functions and features is not a job for amateurs. Every single enterprise I’ve talked with about cloud projects told me that they underestimated the complexity of the transition, the level of knowledge required, and the extent to which the cloud changes basic software development assumptions. That’s true for anyone, not just enterprises, and the network operators need to keep that in mind.

Can We Overcome 5G Disillusionment?

We might be justified in calling the next four months the “Summer of 5G Disillusionment”. On the one hand, the 5G providers seem to be facing off in a big win-customers campaign. On the other hand, some recent research says that users are disappointed with the lack of new 5G applications. A good part of the problem arises from over-promotion of 5G, something that almost every player shares the blame for. Another part is that “new applications” really don’t have a lot to do with 5G per se, and the final blame point is lack of real progress in the thing that would likely generate new applications, which is carrier cloud.

There’s not much need to amplify the over-promotion point; what technology isn’t over-hyped these days? What makes 5G perhaps more problematic than most technologies is that it was promoted to cellular users and not to deep-in-the-organization technical geeks. Ordinary consumers like a good tale, but they also turn on the teller if it turns out that the story isn’t true.

The reason for the face-off among 5G operators is a good example of hype, in fact. The truth is that operators aren’t looking for 5G customers as much as for customers, period. 5G is just another competitive point that savvy marketers in the mobile operator space can leverage to try to gain market share. The notion that there’s a race to exploit 5G opportunity surely reinforces the consumer belief that there’s some special 5G opportunity to be exploited, and that feeds consumer expectations that new applications will burst onto the scene.

New applications for network service “burst on the scene” when a network service change bursts through a bottleneck that had previously constrained them. Most of the applications that consumers say they’re waiting for (in vain, so far) are applications that either already exist or could exist under most developed 4G LTE services. 5G offers some improvements in speed, some reductions in latency, but in the applications the article cites, it’s hard to point to specific 5G capabilities that burst through barriers. If we don’t have those applications, it’s less because of network constraints than because of application constraints.

Let’s look at the example of gaming. 5G’s capability to deliver a low-latency connection is cited as a specific reason why 5G gaming would be better than 4G gaming, but does that stand up to examination? Yes, 5G could reduce latency, but one big contributor to gaming latency is the message path between the various players and a “compositor” application that builds the visual experience for everyone whose avatar is in the same local space. Some argue that 5G drives edge computing, which then reduces the latency of that path, but that raises two questions: does 5G actually drive edge computing, and where is “the edge” when players are scattered all over the world?
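A back-of-the-envelope latency budget makes the geography point clearer. The access-delay figures for 4G and 5G below, and the simple fiber propagation model, are illustrative assumptions rather than measurements; they’re here only to show that once the compositor is thousands of kilometers away, propagation swamps whatever the air interface saves.

```python
# Back-of-the-envelope latency budget for one leg of a multiplayer exchange:
# player -> compositor. The access-delay numbers (4G vs 5G) and the fiber
# propagation model are illustrative assumptions, not measurements.

PROPAGATION_MS_PER_KM = 0.005  # ~5 microseconds per km in fiber, one way

def one_way_latency_ms(access_ms, distance_km, processing_ms=5.0):
    """Access hop + long-haul propagation + a nominal compositor delay."""
    return access_ms + distance_km * PROPAGATION_MS_PER_KM + processing_ms

scenarios = {
    "metro edge (50 km)":            50,
    "national (2,000 km)":           2_000,
    "intercontinental (10,000 km)":  10_000,
}

for label, km in scenarios.items():
    lte = one_way_latency_ms(access_ms=30.0, distance_km=km)  # assumed 4G access delay
    nr = one_way_latency_ms(access_ms=10.0, distance_km=km)   # assumed 5G access delay
    print(f"{label:30s} 4G ~{lte:5.1f} ms   5G ~{nr:5.1f} ms")
```

5G wins every row, but in the intercontinental case the saving is a modest fraction of the total delay, which is the point about scattered players.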

What users really want when they say “new applications” is “new experiences”, and experiences aren’t created by the network, only delivered by it. Hosting creates experiences, which means that the connection between gaming and edge computing isn’t specious, only improperly developed.

In order for 5G to promote “hosted experiences”, we have to presume edge hosting resources would deploy and would be exploited for higher-level services. That requires two specific things: an architecture to define how edge services would be presented to applications, and regulatory and business accommodations to encourage investment. Right now, we have neither.

I’ve noted in the past that one problem with 5G as a driver of edge computing is that 5G hews to the NFV model, which is at odds with how cloud applications overall are evolving. If we expect 5G to create broadly useful edge facilities, we have to target 5G implementations at the features that would be broadly useful, which we are not doing.

The regulatory side is even more frustrating. Every major market area has its own network regulators and its own vision of what constitutes good public policy. The current state of all of this is a swirling mess, and in some areas (including the US) it changes as the party in power changes. The general position is that telcos, as former regulated monopolies, can’t exploit the fruits of that protected relationship by expanding the service set of the core telecom entity. Instead, they have to form a separate subsidiary. The problem is that it’s the core entity that owns the real estate in which edge hosting would have to be installed. Forming a separate subsidiary would typically mean telcos would have to share those resources not only with their own subsidiary but with its competitors as well. Good luck making a business case for that.

Right now, I see the “cloud-providers-host-5G” story line as sitting on the fault line of these two points. On the one hand, if public cloud providers did the 5G hosting deployment, whatever they did would be instantly available for broader exploitation and the creation of new experiences. On the other hand, as I pointed out in an earlier blog, the Bell-heads and Cloud-heads don’t seem to be converging on an architecture that would fulfill both 5G and experience-hosting missions. There’s also the fact that the telcos, in the long run, would have to cede a lot of infrastructure ownership to cloud providers, and pay their profit margins.

The source of 5G disillusionment isn’t 5G as much as “carrier cloud”. Users are implying that they want 5G experiences rather than just 5G connectivity. Somebody has to build them, but if a third party is going to try to build an edge computing strategy, why link it to 5G? If the telcos themselves are to build that strategy, the “carrier cloud” concept, then how do they resolve the technical and business/regulatory problems I just noted?

The problem with hype waves in general is superficiality. You don’t create a hype wave with two-thousand-word articles, but you also can’t raise and resolve complex technical, business, and regulatory issues in 300-word sound bites. Companies are focusing more and more on simple promotion, which means simple hype, rather than on addressing complicated problems. Solving those problems is essential if we’re to avoid disillusionment in 5G or any other technology.

How might this all shake out? It’s hard to decide even what the “ideal” outcome would be, but if we assume that user satisfaction has to replace disillusionment, then it would appear that having cloud providers host 5G is the best path to the optimum state. Cloud providers may struggle to get their telecom-cloud-native act together, but they already have the technology needed to host the experiences that would create the applications 5G users say they crave. The question is whether cloud providers could present a compelling story to telcos and induce them to commit to the approach.

Over-promising is an all-too-common marketing strategy, but it can lead to long-term problems, and both cloud providers and network operators are at risk if they can’t come up with a strategy to deliver on 5G expectations.

Navigating the Increasingly Complex Content Space

Remember the old saying “Content is king”? Maybe we’re getting a lesson that it’s still true, and if it is, the truth it represents might reshape online services, service providers, and even vendors. First, AT&T announced a deal with Discovery to merge its Time Warner assets with Discovery’s to create a new company that will be a streaming giant, potentially a threat to players like Netflix, Hulu, and Disney. We then heard that Amazon acquired MGM, and the two moves in combination could be, or could indicate, a revolution, despite the financial analyst commentary that it’s a defensive move.

There’s never been much doubt that broadband traffic is dominated by video, and for decades it was linear television that justified a lot of the wireline connections, including things like cable and Verizon’s FiOS service. The combination of these two truths has always been a bit of a contradiction; if streaming video is dominating broadband, what’s the fate of linear TV? There seem to be two forces driving change here, one on the demand side and one on the supply side.

Streaming video gained popularity with the widespread use of cellphones and the availability of 4G or better broadband. Mobile users couldn’t use linear video, and they had a harder time watching things that were “on” at a specific time. We saw years of streaming growth even before 2020.

Demand for streaming video has boomed with the pandemic, in part because people were at home more and thus needed a wider range of viewing options, and in part because the lockdown stalled production of many shows that would ordinarily have been part of “live TV”. Netflix, among others, has shown that the growth in streaming may not continue now that lockdowns are easing, but a lot of people got used to watching what they liked rather than what was on, and some at least will likely continue to frequent streaming providers.

On the supply side, we’ve had things like Google’s FTTH, which was pure IP delivery not suited to linear TV, for some time. The real change is 5G, both mm-wave technology and even traditional cellular/mobile 5G, as a replacement for fiber or copper delivery media to the home. 5G is already staking a claim as a player, and potentially a price leader, in the broadband space for areas where direct fiber isn’t economical because of low demand density. And guess what? 5G would require streaming TV.

Streaming uses broadband connectivity, and the majority of streaming video providers aren’t broadband providers at all; they simply ride on whatever broadband is available. There’s no infrastructure to deploy, and there’s no “preferred region” as there is with most wireline/fiber broadband. So we have users who have gotten more into streaming, and a 5G broadband model that may well become ubiquitous and will support any streaming video service. What does that say?

Obviously, AT&T thinks it says that they need to separate the “telco” side of their business from the content side in general, and the streaming content side in particular. The new company would be able to push consumer video anywhere that suitable broadband was available. This is particularly important for AT&T because Verizon’s territory has much higher demand density, and Verizon can therefore be profitable with FTTH to way more customers. Why not, instead of fighting Verizon for FTTH supremacy when demand density ties AT&T’s hands, sell AT&T streaming video to Verizon customers? Yes, they could have done that before, but the new company will have a lot more mass, a lot more content to play, and won’t confuse users who might think AT&T content was available only to AT&T broadband customers.

Amazon’s position also seems clear. They don’t get most of MGM’s video library under the deal because of co-production constraints, but if streaming is to become universal, then is it not logical to assume that differentiation for streaming providers will be critical? What better way to differentiate than to be able to produce more content? Amazon already produces live shows, but adding MGM to the story would create even more opportunity for streaming content.

We might even be seeing the end of “live” TV for more and more original material. Obviously, news and sports have to be offered live (even if the latter is often viewed time-shifted anyway). Amazon produces multi-show series, but instead of dribbling them out in a specific weekly timeslot, they release the whole season at once, to be viewed any way the consumers like.

The overall situation is chaotic, and it’s likely to become more chaotic before we shake out a new video viewing market model. It’s certainly not a simple matter of companies like AT&T fleeing content ownership, because all the indications are that content is about the only durable asset in the whole swirling mess. It does suggest very strongly that linear TV may fall by the wayside, even to the point that eventually some spectrum used for over-the-air broadcasting may be salvaged for broadband service delivery.

One clear set of winners in all of this is the equipment vendors who provide broadband access technology. A lot of people will see the IP-video shift as benefiting “the Internet”, but in traffic terms the great majority of video is actually delivered from content caches in the viewers’ metro areas. Metro networking, which tends to be a bit more like switching than like routing, is the sure winner. We can expect to see more aggregation built to link viewers to cache points, but the Internet backbone isn’t going to grow as fast as the edge under streaming video pressure. Forget the core network; the sweet spot for equipment vendors may be metro aggregation.

A not-clear-but-increasingly-likely winner is edge computing. 5G will play a big role in driving change in metro infrastructure, and 5G relies more on feature hosting than any other mainstream network technology. Caching video, and CDN hosting in general, are other reasonable edge applications. Thus, the video shift might increase the value of edge hosting more quickly than other glitzy edge applications like IoT, or even gaming. This raises (again) the question of whether a 5G deployment model could be cloud-friendly enough to serve as a broader edge strategy.

The hardest thing to predict, perhaps surprisingly, is the impact on streaming services and those who offer them. Consolidation at the media level could mean that some content owners refuse to participate in streaming package deals like those offered for streaming live TV (Hulu, YouTube TV, etc.). Consumers might get the a la carte programming they’d like, but end up paying much more because they’d have to buy almost everything that way. Today, a good streaming TV service costs perhaps $64 per month, which is equivalent to roughly eight a la carte deals with individual networks, assuming the networks don’t raise their own prices, and you get far more channels with today’s packaged streaming services.

Change is inevitable, and let’s face it, it’s been clear for decades that there was going to be a major change in the whole TV-and-video space at some point. It looks like that point has arrived.

Important Notice for RSS or E-Mail Subscribers!

In July, Google is eliminating Feedburner support for email subscriptions, so to continue to support subscriptions to my blog, we will be transitioning to another subscription model. We will be migrating the current subscriber data to the new service, but anyone who subscribes during the transition period may find their subscription doesn’t transfer. If you suddenly stop receiving emails of blog posts, please just re-subscribe when you see the new subscription icon on the blog page.

I captured the Feedburner subscriber list as of June 3rd for conversion. If you subscribed after that date, I’m sorry, but you’ll have to subscribe again with the new setup! The process of shifting to follow.it is now underway, and I’ll update this message when it’s been completed. For now, this will be pinned to the top of the blog page.

The subscriber list has been added to follow.it, I’ve set up the preliminary feed redirection, and I’m running tests starting June 5th, 2021, to ensure that things work.

The follow.it icon has now been added to the page, and it accepts email subscriptions to the blog.

I have run a posting test and the post appears to syndicate via email, so this new framework should be considered live on Monday, June 7th, 2021, and anyone who wants to subscribe can do so then.

Please note that the timing of the posts you receive may differ from before, and you now have more options for what you receive and when. You can change your preferences as needed!

Why Cisco Has More to Think About than Chips

Cisco announced a pretty good quarter, but its guidance suggested its profit margins would be impacted by a decision to absorb increased chip prices rather than pass them along. All that is true, but there’s still a lot to be learned from looking at their earnings call transcript, and also exploring their recent M&A decisions and other announcements. In summary, there’s more at stake here than chips.

The first thing we can pick out from the call’s transcript is that Cisco expects that the network market, and their own portfolio, will undergo radical changes. “Cisco’s end-to-end portfolio will serve as the foundation for next-generation infrastructure solutions as well as cloud enabled delivery models and innovation, allowing our customers to move with even greater speed and agility. This will require a significant investment cycle and reinforces the strength of our strategy while driving greater opportunity to create a world that is more connected, inclusive and secure.”

Routing-as-usual, then, is not a viable long-term strategy. That shouldn’t be a surprise given that both service providers and enterprises have been slow-rolling capital programs, largely because the benefits can’t justify greater spending. When that happens, vendors like Cisco have two broad options. One is to tweak their sales model to accommodate the buyers’ inertia, and the other is to transform their business. Obviously the former is easier, so let’s look at it first.

Cisco hopes to address buyer reluctance by “simplifying the adoption of our offerings with network-wide automation, analytics and flexible-as-a-service consumption models, all aimed at improving our customers’ network performance capabilities and security which we believe will drive tremendous long-term opportunities for us.” Network automation is aimed at reducing opex, which is good for equipment sales because, loosely speaking, ROI equals benefits divided by the sum of capex and opex. Lower opex and you can raise capex without impacting the buyer’s business case.
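The arithmetic is worth spelling out, with invented numbers purely for illustration: if operations automation takes a point of opex out of the denominator, a vendor can put a point of capex back in without moving the buyer’s ROI at all.

```python
# Illustrative numbers only: if ROI is roughly benefits / (capex + opex),
# then cutting opex lets capex rise without hurting the buyer's ROI.

def roi(benefits, capex, opex):
    return benefits / (capex + opex)

benefits = 10.0   # annual benefit, arbitrary units
baseline = roi(benefits, capex=4.0, opex=4.0)    # today's project
automated = roi(benefits, capex=5.0, opex=3.0)   # automation cuts opex, capex rises

print(f"baseline ROI:  {baseline:.2f}")   # 1.25
print(f"automated ROI: {automated:.2f}")  # 1.25 -- same business case, more equipment sold
```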

Another near-term, low-apple, approach is to discourage buyers from hunkering down on a past purchase long after it would normally be depreciated and replaced. The subscription software model is an example; new features that address things like performance are much easier to introduce in software, but if users won’t buy new hardware/software bundles they may simply sit on old features. By separating software as a subscription, Cisco can encourage buyers to keep paying for new stuff without having to fork-lift hardware.

Security is a kind of transition between the low- and higher-apple items on Cisco’s list. Once the easy paths are worn out, the next things to look at would be increasing total addressable market (TAM) by encroaching on someone else’s space, or finding new benefits that would justify additional spending. Security is an obvious target of opportunity, largely because buyers have persisted in gluing on layers of new stuff rather than addressing their security model in a holistic way. That may not prove true forever, but for now, Cisco has an opportunity.

What’s lacking in Cisco’s story is a real and overall sense of innovation, and that shouldn’t be a surprise given that Cisco has always tried to be a “fast follower”, letting others take the risk and blaze the trail, then swooping down to do an interdiction. Even those swoops are often more marketing fluff than significant technology changes; enterprises I talk with rate Cisco below competitors like Juniper in terms of technical innovation, and they also see Cisco’s acquisitions as being made more for sales-tactical reasons than for strategic ones.

The advantage of Cisco’s approach is that it magnifies incumbency. If you promote a change in technology, you tend to throw buyers open to new choices and give competitors a new shot at your base. You also challenge the technologists who, because they’ve certified with your existing products, may find their careers threatened by any major shifts. Over the years, I’ve found that any vendor’s certified technical cadre is more conservative than non-certified people, but it’s always been more true for Cisco than for other vendors.

The big question for Cisco, and one the company isn’t addressing on earnings calls or in any other public forum, is whether they can hold onto this approach in an industry that’s facing major changes and challenges. There are two things I think Cisco is going to need to do, and likely do quickly, and I’m not sure they’ll do either of them.

The first thing is to address the problem service providers have with profit per bit, which involves two specific steps. First, to provide near-term relief, Cisco needs to have an absolutely impeccable operations automation story, because they can’t sustain higher pricing without giving something back besides capex. Opex is the obvious answer. Second, to provide long-term relief, they need to be addressing the hosting piece of the service story to offer operators a path upward on the food chain.

Cisco’s position in operations automation has been a bit bicameral in the past. On the one hand, they generally reject standard approaches as pathways for competitors to mess up Cisco’s dominance, and operators tend to like standards. On the other hand, they’ve tacitly accepted standard strategies like ONAP even when they demonstrably aren’t working for operators, in terms of broad opex reduction.

Their higher-layer reluctance is harder to fathom, given that Cisco has sold servers for some time. Look a bit deeper, though, and you see that Cisco has never really tried to compete broadly with the “real” server vendors like HPE, and they’re not aggressive in developing their own software suites to support feature/function hosting. It’s likely they fear any sort of server hosting for its possible impact on router sales.

Demand pressure could derail Cisco’s past resistance to effectively addressing either of these areas, but competitive pressure is another growing risk for them. 5G mandates hosting, and if 5G hosting supports carrier cloud, then Cisco is on the outside looking in for what could be a massive spending wave. Nokia and Ericsson have effective 5G strategies, and as they expand their inventory of 5G platform tools, they raise the bar for Cisco to do anything at all.

Then there’s Juniper. Of all the network equipment vendors out there, Juniper has made the most strategic moves in M&A, especially relating to the very same two goals that Cisco drags its feet on. They could revolutionize security with 128 Technology’s zero-trust capability. They could revolutionize higher-layer services through a combination of session-aware handling of traffic and the Apstra data center networking strategy, and their Mist and Netrounds acquisitions could form the heart of a powerful story on service lifecycle automation. If Juniper had positioned these assets optimally, Cisco could have been confronted with major competitive problems, but Juniper doesn’t even come close to matching Cisco’s marketing savvy, and it has let Cisco off the hook.

Cisco has just hired a former VMware engineer and Cloud-Native Computing Foundation veteran to head up their open-source stuff, which suggests that Cisco finally realizes that they need foundation/platform software and they don’t have the right skills in-house. However, we’ve seen cloud providers do limited hiring to gain some traction in 5G hosting, and the initiatives don’t accomplish as much as they could if there were a deeper team and a specific awareness of the needs of the “telco cloud”. Might they build such a team? Sure, but not immediately, and all Cisco’s competitors have an opportunity to make things harder for Cisco now.

Cisco has also recently made some UCS data center platform announcements that focus on hybrid and multi-cloud, but again it’s not clear whether they represent a real initiative in the space or are simply positioning initiatives. They seem, in particular, to be taking aim at some of the Juniper cloud marketing and product announcements, and you can’t stay ahead of your competitors by counterpunching.

We may see signals of a Cisco strategic shift in their next earnings call, but I’d be surprised if we had any more than a hint before the end of this year. Their competitors may make a move sooner.

Making the Cloud-Heads and Bell-Heads See Eye to Eye

It’s pretty clear that cloud providers think they have a shot at the gold (forget brass) ring in terms of telco services. I hear the signals of interest from both cloud provider and telecom contacts, but I wonder whether either side understands just how wide the gulf between them really is. Over the last two weeks, I’ve gotten a lot of commentary from cloud provider telecom insiders and from the telcos themselves. What I’ve found sure looks like a combination of one group who knows the right approach but not what to do, and another group with the opposite problem.

The cloud people know how to do cloud-native. They understand how to build modern applications for the cloud, and how to deploy and manage them. The telcos know how to do network services. They understand the challenges of keeping a vast cooperative device system working, and profitable. Making the twain meet is more than a philosophical challenge, in no small part because both parties are locked in their own private worlds, and neither of them understands that while they speak different languages and thus risk misunderstandings, there’s a more fundamental problem. They’re both wrong.

When I talk to cloud people about telecom and cloud-native design for services, they draw on their experience with social media and say that they have the answer. Social media is huge. It scales. It works. It makes money. All of that is true, but it’s not really relevant to telecom providers, because what the cloud and social media people assume as a kind of unstructured, undefined, taken-for-granted piece, is what the telcos are trying to sell.

Social media is huge, and that’s true. We have billions of users on it, and the systems and software scale to support them. Players like Twitter and Netflix and Google have pioneered network software models that work at this level, and while there have been glitches in all the social media platforms from time to time, users aren’t rushing to the exits, so the software works.

The problem is that social media expects underlying connectivity provided by somebody else. They don’t call the space “over-the-top” or OTT for nothing. The software that cloud and social media types provide is application software, not network software. The principles of operation for social media are different. All of the events associated with social media are generated by people, and for the most part those people’s connectivity clusters within small groups. Twitter doesn’t have to organize the whole Twitter universe to manage a Tweet, only the followers of the person doing the Tweeting.

In networks, events are more likely to be generated by problems, or at least abnormal conditions. A problem could crop up anywhere, and remediation has to be organized across a large set of cooperative devices. Not only that, there’s likely a strict time constraint on operations to consider. If you post a Tweet, how long would you expect to wait for a response? Your followers might be doing any number of things (including ignoring you), so a delay could be considered normal. With network features and services, a delay is often the indication of a lost response or a failed element, and you can’t wait minutes to see what’s happening and set about getting it fixed.

Social media scales, of course. You have those billions in the Twitterverse or on Facebook, and they all do stuff. The thing is, a Tweet or post on Facebook is atomic. The processing isn’t complicated; you look at the list of followers and you send the post/Tweet to them, replicating the message as needed. Where this processing happens is pretty open; you could in theory have the originating device do a lot of the heavy lifting, and surely you could have a resource pool from which you could grab capacity to do the replication and routing. With networks, the events have potentially universal scope, and there’s a big question about where you process them, and how.
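To make the contrast concrete, here’s a deliberately simplified sketch of the social-media side of that processing. It isn’t anyone’s production code, just the shape of the problem: an atomic, stateless fan-out to a known follower list.

```python
# Deliberately simplified sketch of why social-media event handling scales so
# easily: a post is an atomic, stateless fan-out to a known follower list.
# The users and data here are invented for illustration.

from typing import Dict, List

followers: Dict[str, List[str]] = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
}
timelines: Dict[str, List[str]] = {}

def publish(author: str, text: str) -> None:
    """Replicate one post to each follower's timeline; no shared state to arbitrate."""
    for follower in followers.get(author, []):
        timelines.setdefault(follower, []).append(f"{author}: {text}")

publish("alice", "hello world")
print(timelines)   # {'bob': ['alice: hello world'], 'carol': ['alice: hello world']}
```

Each post touches only the poster’s own follower list; there’s no shared resource to arbitrate and no hard deadline to miss, which is precisely what a network fault can’t promise.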

A network problem often generates a whole chain of events, which is why we have fault correlation. The events may relate to a common cause (fault correlation is all about that), but there’s also the problem of coincidence, meaning events whose responses are likely to involve some of the same elements. How do you mediate remediation of multiple faults when some of the things that are involved are shared among them? How do you prevent resources from being allocated twice, or more?

Operators have understood the difference, and have fallen back on the model we might call “virtual humans”. We have people in a network operations center (NOC), and before the heady days of operations automation, problems got reported to this pool of humans and humans took care of them. If there were multiple problems, the humans took them in order, or by priority. If there was a risk of colliding remedies, the humans coordinated. No wonder we have monolithic software models in projects like NFV and ONAP!

The cloud community has created cloud-native, microservice-based applications fairly easily, in no small part because their problem is easy. You could draw out the logic of a post-to-followers social-media framework on a napkin. It’s a lot harder to do that for network lifecycle automation, which is what things like 5G O-RAN are really about. How do you organize all the pieces? You could layer on run-time workflow orchestration, service meshes, and so forth, but how do you deal with collisions in remediation, or with prioritization of problems?

I don’t believe for a moment that you couldn’t make telecom services into cloud-native applications, but I’ve become less certain that either cloud providers or network operators have a clue as to how to proceed. Certainly the telcos have demonstrated that they can build applications only by assuming that there’s a single thread of processing fed by a queue of events. Just like humans. Will the cloud providers see the telco world as Tweets, just as the telcos see it as NOC-resident humans? If so, their approach won’t work any better.

There are a lot of suggested solutions to the two-different-worlds dilemma we face: graph theory, state/event tables, you name it. Most of them would probably work, but do the cloud providers or the telcos understand that they’re needed, and would either group know how to apply them?
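For a sense of what the state/event-table option might look like at its simplest, here’s a sketch of a per-element lifecycle table with a shared reservation check, so two concurrent remediations can’t claim the same spare capacity. The states, events, and resource names are all invented for illustration; a real implementation would need persistence, timeouts, and much more.

```python
# Minimal sketch of a state/event table for one service element, plus a shared
# reservation set so concurrent remediations can't grab the same spare resource.
# States, events, and resource names here are invented for illustration.

reserved = set()   # spare resources already claimed by some remediation

def try_reserve(resource):
    """Claim a spare resource unless another remediation already holds it."""
    if resource in reserved:
        return False
    reserved.add(resource)
    return True

# (current state, event) -> (next state, action)
TABLE = {
    ("active",    "fault"):      ("repairing", "reserve_spare"),
    ("repairing", "spare_ok"):   ("active",    "release_old"),
    ("repairing", "spare_busy"): ("degraded",  "queue_for_retry"),
    ("degraded",  "fault"):      ("degraded",  "queue_for_retry"),
}

def handle(state, event, spare="spare-1"):
    """Drive one element through the table; collisions are resolved here, not by a human in the NOC."""
    next_state, action = TABLE[(state, event)]
    if action == "reserve_spare":
        follow_up = "spare_ok" if try_reserve(spare) else "spare_busy"
        return handle(next_state, follow_up, spare)
    return next_state, action

print(handle("active", "fault"))   # ('active', 'release_old') -- spare reserved and used
print(handle("active", "fault"))   # ('degraded', 'queue_for_retry') -- spare already taken
```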

We’re rushing to a decision on whether to host the future of networking, stuff like 5G, in the cloud, without knowing whether anyone really understands the problem or would recognize the solution. Among my telco contacts, most still think NFV or ONAP, with their monolithic “anthropic” approach, are perfectly suitable. Among my cloud contacts, most think that an event is an event, whether it’s a Tweet or a network fault.

You can hire people who understand the telco market. You can hire managers who can run telco projects, but this works better for selling something than for implementing it. Microsoft so far seems to have the best approach: buy companies that do telco stuff and run them in your cloud. Even that, though, depends on being able to spot good implementations.

We may, as all the coverage suggests, have a land rush starting here, a rush for cloud providers to gain critical traction and positioning with telcos as the biggest single application of cloud computing—hosting service features—evolves. Do those who are running know where they’re heading? That’s a whole different question.