An Example of a Cluster-Computing Tool for Carrier Cloud

In my last blog, I talked about applying cluster technology to carrier cloud.  Today I want to use an example of cluster-based infrastructure to illustrate just what might be done, and to explain the general application case better.  My selected example is Univa, which has two products that combine to create the essential cluster-carrier-cloud framework.  You’ll see the potential value of cluster technology for carrier cloud and, in particular, NFV.  You’ll also see why we’re not realizing it.

Univa classes its offering as a “software-defined computing infrastructure” package, something that provides scheduling and orchestration for diverse resource pools.  They are aiming at the “big compute” market, so far primarily in the enterprise space.  This illustrates my point that there’s a lot going on in virtualization that’s not being targeted at network operators, but nevertheless may be exceptionally useful or even critical.  I said there were two products/packages in play here, and they are the Univa Grid Engine and NavOps Launch.  We’ll look at each of them, and at how they might combine.

Let me start by saying that there are multiple pieces to each of the two main elements in the Univa structure, and none of them are really all that well explained in their collateral.  Many features seem to be provided in several places, and many logical missions for the Univa suite would likely stitch several pieces together, though exactly why and how isn’t always clear.  The company could do with better high-level documentation, in short, but I’ve tried to dig through it, since I think the concept is strong.

Grid Engine is a package that creates one or more resource clusters, and these clusters can be employed in fairly traditional cloud missions, as parallel grid tools, and as high-availability and low-latency compute resources.  It’s a work scheduler and more: a means of allocating cluster resources to applications/elements, and it can be applied to bare metal, VMs, containers, public cloud, and private cloud.  Some of its features require add-ons, including license management and charge-back tools.

It’s always difficult (and sometimes dangerous) to try to classify what a package like Grid Engine does under the covers, but I’ll have to in order to explain it.  In effect, Grid Engine creates clusters at multiple levels.  You have “clustering” specific to a particular public cloud, to a virtualization technology (data center, containers), and at the overall level.  The clusters can be visualized as hierarchical, so elements of a high-level cluster like containers can be drawn from lower-level clusters like cloud providers.  Policies determine how resource contributions flow through this process.
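
To make the hierarchy idea concrete, here’s a minimal sketch of how policy-gated resource contribution might look, using a structure I’m inventing purely for illustration; none of the names or interfaces below come from Univa’s actual products.

    # Hypothetical sketch: clusters at multiple levels, with a policy deciding how much
    # each lower-level cluster may contribute upward.  Not Univa's API.
    class Cluster:
        def __init__(self, name, capacity=0, children=None, policy=None):
            self.name = name
            self.capacity = capacity                 # resources owned directly (say, cores)
            self.children = children or []           # lower-level clusters (cloud, VM, container)
            self.policy = policy or (lambda child: child.capacity)

        def available(self):
            # Usable capacity is local capacity plus policy-approved contributions from below.
            return self.capacity + sum(self.policy(c) for c in self.children)

    cloud = Cluster("public-cloud", capacity=500)
    containers = Cluster("container-pool", capacity=200)
    overall = Cluster("overall", children=[cloud, containers],
                      policy=lambda c: min(c.capacity, 100) if c.name == "public-cloud" else c.capacity)
    print(overall.available())   # 300: all of the container pool, but only 100 cores of public cloud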

Work scheduling comes in as the description of how applications are assigned to compute resources.  Policies determine how resources are selected and how they’re lifecycle-managed once committed.  Since applications are assigned requirements that these policies operate on, the result is that deployment can be viewed as “fire and forget”, where the application is committed and Grid Engine keeps it running and its workflow elements organized.  However, the basic model is to deploy and sustain, not to adapt to changes.  We’ll get to how that would be done later on.
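
To show what “fire and forget” might look like in practice, here’s another hypothetical sketch: the application declares its requirements, a policy selects a resource that satisfies them, and the commitment is then left to be sustained.  This isn’t Grid Engine’s real interface, just an illustration of the pattern.

    # Hypothetical policy-driven scheduling: match declared requirements to a resource.
    resources = [
        {"name": "host-a", "cores_free": 8,  "gpu": False},
        {"name": "host-b", "cores_free": 32, "gpu": True},
    ]

    def schedule(job, pool):
        # Keep only resources that meet the job's requirements, then prefer the least loaded.
        candidates = [r for r in pool
                      if r["cores_free"] >= job["cores"] and (r["gpu"] or not job["needs_gpu"])]
        if not candidates:
            return None                          # nothing fits; the job would be queued
        chosen = max(candidates, key=lambda r: r["cores_free"])
        chosen["cores_free"] -= job["cores"]     # commit the resources
        return chosen["name"]

    print(schedule({"cores": 16, "needs_gpu": True}, resources))   # host-b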

A nice feature of this approach is that you can run pretty much any kind of application.  Some strategies for feature/component hosting would demand that the applications or services be organized in a specific way, like event-driven or transactional or parallel.  Grid Engine doesn’t limit you in the kinds of applications that could be run, or the mix thereof.  For an application like NFV, you could support functions that provided low-latency IoT event processing, parallel computing for data analysis, and transactional high-availability stuff in any mix, providing of course that the applications/components themselves were designed right.  Some of this is supported best by adding NavOps elements, as we’ll see.  Similarly, you might want to add the Universal Resource Broker (URB, now listed as a NavOps element) if you have a lot of your own infrastructure to manage, since it provides Apache Mesos support.

Since license and usage management are available with the package, some of the specific issues that have already come up in NFV are addressed with Grid Engine.  This again illustrates that it would be better to exploit the tools already available than to invent a new model of virtual function deployment that has to be integrated (with some to considerable difficulty) to work with current software elements that do pieces of the NFV job.

NavOps is a set of tools more aimed at cloud computing than “big computing”, but obviously the lines between those two are fuzzy and some applications would use one or the other, and others both, whichever space you’re in.  NavOps Command and URB are aimed at improving the workload management for cloud deployments, including bursting and failover, and also integrating with current popular cloud/cluster frameworks like Kubernetes and Mesos.  NavOps Launch is based on open-source Tortuga, a cluster/cloud management tool.

Functionally, NavOps extends the scale and efficiency of cloud computing and cluster/resource management.  It would probably be, with all tool pieces considered, a strong basis for carrier cloud and NFV.  A better basis, in my view, would be a melding of the two toolkits, which would provide something similar to Mesos (and, via URB, could include Mesos), but extended into the virtual-machine-and-IaaS-cloud world.

Univa’s model, like Apache Mesos and DC/OS, is still somewhat dependent on the ability to identify abnormal events in the application/service being hosted, and on initiating a trigger to signal remediation.  It’s also not attempting to resolve resource-level problems in a specific sense, only issues that impact what is being hosted on the resources.  Neither of these is a crippling defect, since any carrier cloud or NFV infrastructure solution would face the same issues.  However, NFV defines a management model (not an effective one in my view, but a defined one nevertheless) that could theoretically be accountable for the first of these.

The NFV ISG didn’t look at this kind of tool as the basis for hosting, and Univa doesn’t claim to support NFV or carrier cloud; their focus has been on the enterprise.  Operators might chase after Univa-like solutions on their own, of course, but most operators are conditioned to responding to availability of tools, not to running around looking for them.  Might the company take a position in the service provider space?  Money talks, but marketing to service providers is something most enterprise-focused companies undertake rarely, and often almost accidentally.  As carrier cloud opportunities develop, they might think more about that space.  That they’ve not done that so far is an indicator that you can’t entirely blame operators for not seeing the benefits of current cluster and virtualization technologies in their carrier cloud plans.  The providers of the tools aren’t making it easy for them.

Whether Univa thinks about the carrier cloud applications of its capabilities or not, it’s something that the service provider space needs to be thinking about.  If you want to have facile, easily integrated, operationally efficient cloud and NFV deployment, you need proven, mature, feature-rich tools as your starting point.  Univa is an example of a widely used, proven, large-scale solution to the problem that NFV and carrier cloud proponents are just now realizing they need to face.  That need will become a critical problem if it’s not faced quickly.

In a story yesterday, Light Reading asked if a new Orange deputy CEO was the champion NFV needs there.  Perhaps senior executive sponsorship is helpful for NFV or any other new technology, but good executives won’t sponsor strategies that can’t prove benefits, and any who try won’t be successful (or likely be executives for long).  Where leadership would be helpful now is in recognizing that the current NFV evolution will get to the right approach late, if ever.  If Orange or any other operator wants NFV to work, they have to stop expecting the standards process to produce implementations, and look to where the critical pieces of those implementations are already emerging.

A Retrospective on MWC From the Wall Street Perspective

All that glitters is not MWC, to paraphrase a popular saying, but MWC probably glitters more than most technology shows.  The question that you have to ask after an MWC event, or any other trade show for that matter, is whether there’s anything behind the bling.  I thought it might be interesting to look at Wall Street’s take on the event and add in my own assessment where I think the Street has it wrong.

Remember that Wall Street and the media have one thing in common; they love a revolutionary story.  Static markets don’t produce stock appreciation.  Thus, it would be surprising if the Street dissed 5G overall, and it doesn’t.  The lead of most research on the show is that “5G is happening” or “happening faster than expected”.  The Street cites the AT&T, T-Mobile, and Verizon commitments to services available in “late 2018 or early 2019” as the proof point.

As I’ve noted in earlier blogs, it’s certainly true that there will be 5G service offered by multiple US providers in that period.  However, these services will be little beyond radio-network advances added to legacy mobile infrastructure.  Don’t expect to see all the fancy features of 5G, including network slicing and even controlled latency.  Some of the services will promote attachment of IoT devices via special service models, but the people I talk with say that there won’t be an explosion of IoT adoption.

Even with 5G New Radio (NR), the problem operators face with 5G is that to exploit it fully, you need new spectrum, new phones and devices, and new infrastructure.  Any of those things presents a challenge in investment and ROI, a demand for credibility among both buyers and sellers, and a risk.  How many people would change phones to get 5G?  How many when they didn’t even have the new 5G spectrum available in their area?  How many operators would commit to wide deployment if there were few mobile devices that could support the services and pay back on their commitment?

Another view of the show is that it’s really not so much about “near term” “happening-now” stuff, but about what the technology signals about the future, likely meaning beyond 2019.  This view can be summarized by citing those signals: 5G NR, IoT, SDN/NFV, mobile-edge computing, and 5G digital services like AI, robotics, etc.  You can see a sort-of-technological progression here, one that does reflect at least the order of issues that the 5G market faces.

The first step in the progression is the 5G NR stuff, which is first because there’s zero chance anyone would do a full-bore 5G NR/Core rollout all at once, and even less chance they’d do 5G Core without NR in place.  Operators can use some current spectrum with 5G NR until the new stuff is available, but compatible phones will be an issue.  That suggests that early 5G NR might be easier to pull through using something like IoT, which isn’t typically used with 4G and so represents a new revenue opportunity for the mobile operators.

The problem with IoT, of course, is that it’s at least as speculative as 5G is.  Yeah, I know you’ve heard about the ten or twenty or fifty billion IoT devices already out there.  The fact is that they aren’t using mobile cellular services to connect.  Most of them don’t even use IP directly; they’re based on a home/local facility protocol like Insteon, X10, or Zigbee.  For sure, a huge uptick in 5G-connected IoT devices would make a great start to a 5G business case for operators, but who makes the business case for the IoT part?  We’re still waiting.

SDN and NFV, the next issue on the Street list, won’t help with jump-starting 5G either.  In fact, there is little in the early 5G deployment opportunity set that really empowers either technology.  What’s happening with 5G and SDN or (in particular) NFV is that vendors who had banked on these new network technologies boosting their fortunes and exit strategies are now realizing that proactive adoption of SDN or NFV on a large scale isn’t going to happen for years, if ever.  Unless 5G pulls it through.  If you’re dying of thirst, every mirage is a waterhole in disguise.

SDN and NFV in 5G deployments depend on serious commitment to the 5G Core; NR won’t cut it.  Serious 5G Core depends on significant new services, services that won’t fit the model of current 4G/IMS/EPC networking.  Can we identify the services?  Plenty of folks at MWC told the Street they could, then named off things like augmented reality, robotics, universal AI, self-driving cars, flying cars, and so forth.  Earth to MWC: enumerating things isn’t creating them.  I can name a bunch of stars, but I can’t get you to any of them (not even the closest).

Some Street reports on MWC also mentioned a vendor not particularly linked with 5G, Juniper.  The company is said to be undergoing a “sales overhaul”, because of significant market headwinds arising in part from a slowdown in cloud switching.  Why is this tidbit relevant to my 5G theme?  Because Juniper had been benefitting from a “positioning deployment”, meaning the process of taking a technology (cloud, in this case) from nothing to something.  Every new service technology has a positioning deployment, and when it’s over the service providers sit and monitor their cash registers to see if things are paying off as expected.  That’s the problem with those new SDN-and-NFV-facilitating services.  We may see early interest in exceptionally good business cases, but will we see it on a scale needed to create a massive infrastructure shift for mobile services?  I doubt it, at least not in the next three or four years.

If you read the details of the Street research from the show, you get a better if less optimistic picture.  The technical advances, particularly in phone and camera technology, needed for those new 5G services can be expected to be available in about 2020.  When they are, there will be an opportunity to test them with the proto-5G NR applications that have already deployed.  That testing could result in significant service deployments by 2022.  Not “near term” but respectable.

Could we do better?  Operators tell me that the most decisive steps toward full 5G would come not from any of these new AI-and-artificial-stuff services, but from the hybridization of 5G and fiber to the node for home broadband and video delivery.  None of this stuff generates the heart-throbbing attention that things like virtual reality and robotics do, but the hybrid has the advantage of creating 5G opportunity through applications with proven revenue potential.  We will, I’m sure, get to the sci-fi stuff down the road, but not unless we can pass through the initial deployment period successfully.

Nokia told the Street that “To take full advantage of all these features [referencing network slicing, low latency, and high-speed data] requires complete redesign of network architecture.”  Well, that sure sounds like a vendor telling the Street that a forklift upgrade of all infrastructure is inevitable for 5G deployment, or every feature can’t be provided.  I’m telling you that if every feature can’t be justified, no forklifting will be provided.

Are 5G vendors, in an attempt to boost the Street’s vision of their own financial future in 5G, setting the bar so high for deployment that the buyers, the network operators, won’t be able to justify 5G at all?  That could happen.  On the other hand, how much of the future is determined by an interest in getting to it?  I think that the deciding truth with 5G is that a lot will happen with it, if we can just get it started, but most of the happening is out four or five years into the future.  As usual, the market exaggerates pace of adoption more than it does what we’re going to be able to do once adoption takes place.

Cluster Computing as a Pathway to Carrier Cloud

Real virtualization is based on clusters.  Virtualization assigns tasks to resources, and it doesn’t make sense to go through the assignment process to provide a resource pool that consists of one host.  Virtualization really implies a remapping of hosting across a pool of available servers, not a 1:1 assignment.  In cloud and container computing, a pool of resources is known as a cluster, and so the term cluster is probably a reasonable general term to describe a resource pool.  Is a cluster also a good baseline for things like carrier cloud?  We’ll see.

Let’s start with a simple question: What the heck is a “cluster” exactly?  It’s a collection of “worker resources” under the control of a “master resource”, cooperating to perform some mission.  That’s also a pretty good definition of a resource pool, and of many collective computing resources we find these days, as long as you don’t get too literal.  Not every resource pool has an explicit master resource, and most today have a fair variety of missions, but the cluster concept is important because we have so much open-source software targeting clusters of some type.  Might some of that be pressed into service to support virtualization and the carrier cloud?  Clustering as a natural pathway to virtualization is important by itself, but it could be critical if clustering technology is directly applicable to carrier missions.

There are actually many different ways of characterizing clusters, including by technology and by mission.  For example, clusters could be homogeneous, based on a single operating system or container environment; they could be multifaceted in terms of their technologies; they could be designed for parallel computing, or for high availability.  We’re going to look at some of the cluster types in more detail here, but just be aware that it’s possible to specialize clusters to a mission.  One of the questions we have to ask when we evaluate cluster technology is whether that kind of specialization is useful or harmful overall.  That’s particularly true when you talk about clustering as a part of a public service, including carrier cloud.

To navigate through all this confusion, it’s probably best to start by saying that everything in cluster computing is based on the same general thing underneath; it’s all a bunch of resources.  Assigning a structure or definition to a cluster is really a matter of understanding the specific thing we expect those worker resources to be cooperating to do, not so much in terms of a specific vertical mission as in terms of the software structure that’s expected to run on the cluster.

Everybody is most familiar with cloud computing, where a pool of resources creates a series of virtual hosting points that can be used by applications as though they were real.  Most public cloud services use this kind of cluster, and in most cases the pool is made up of identical or very similar systems in terms of hardware and platform software.  A task gets a “virtual server” that is always less than (a VM among many, or a container) or equal to (bare metal) a single host in terms of power.  The tasks that run in a cloud don’t have any explicit relationship with each other, and public clouds presume the tasks are deliberately separated.  Some describe the cloud as supporting a service relationship with users.

Grid computing is another special form of cluster computing, but this time the goal is to support an application that needs more resources than an entire server would provide.  A grid application gets a lot of resources, so its virtual host is bigger than a physical host rather than the other way around.  Unlike the cloud, which is designed for traditional programming tools and techniques, grid computing requires that applications be developed especially for that model of cluster use.  There are commercial users of grid computing, but most of it is familiar to us from scientific applications.  Grid applications are also specialized for the grid cluster hosting environment, and so it would be fairly difficult to base broad cloud services, or even function hosting, on the grid model of clusters.

One mission-related subdivision of the cluster concept is high availability, and this is the cluster model that applies most directly to things like function hosting for services.  The chances of a single computer failing are finite.  Add additional computers, and the probability of all of them failing at the same time diminishes exponentially.  Some applications of clusters exploit this effect, and if virtualization means remapping of needs to resources in a dynamic way, then availability can be influenced strongly by the correct cluster design.  That design starts by creating a large enough pool of resources that there’s always something available to step in if other resources fail.  That, in turn, means you need some homogeneity in the pool.  If every application/component has unique requirements that demand individualized hosting, you don’t have a pool of assignable resources.
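
A little back-of-the-envelope arithmetic shows the effect.  Assume, purely for illustration, that each host is down one percent of the time and that failures are independent; the chance that every host in the pool is down at once shrinks geometrically as the pool grows.

    # Illustrative only: assumed per-host failure probability, independent failures.
    per_server_failure = 0.01          # each host unavailable 1% of the time

    for pool_size in (1, 2, 3, 5):
        all_down = per_server_failure ** pool_size   # probability the entire pool is down at once
        print(pool_size, f"{all_down:.1e}")
    # 1 -> 1.0e-02, 2 -> 1.0e-04, 3 -> 1.0e-06, 5 -> 1.0e-10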

That doesn’t mean that all cluster hosts have to be totally homogeneous.  In many cases, applications will fall into groups according to the resources they need, and if there is enough demand for each of these resource groups, you can still create a statistically efficient resource pool from each of them.  However, it is always going to be easier and more efficient to build clusters from masses of equivalent resources because you’ll reach statistical efficiency with a smaller number of resources overall.

To make a cluster of resources into an efficient resource pool does demand something beyond just numbers.  For example, “resource equivalence” is the fundamental notion within a pool of resources.  You have to be able to make free substitution within the pool, which means not only that the resources themselves have to be fairly uniform, but that the connectivity among them not create sharp differences in application/component QoE depending on where you put things.  The more you distribute the cluster’s resources, the harder it is to connect them without impacting QoE, because of propagation delay, serialization delay, and handling delay in the network.
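
Here’s a rough latency-budget sketch of why distribution erodes resource equivalence.  The figures are assumptions chosen for the example, not measurements of any real network.

    # Rough one-way delay model: propagation + serialization + per-hop handling.
    def one_way_delay_ms(distance_km, payload_bytes, link_mbps, hops, per_hop_ms=0.05):
        propagation   = distance_km / 200.0                         # ~200 km per millisecond in fiber
        serialization = (payload_bytes * 8) / (link_mbps * 1000.0)  # time to clock the packet onto the wire
        handling      = hops * per_hop_ms                           # queuing/forwarding at each hop
        return propagation + serialization + handling

    print(one_way_delay_ms(10, 1500, 10_000, 3))      # nearby site: roughly 0.2 ms
    print(one_way_delay_ms(1500, 1500, 10_000, 12))   # distant regional site: roughly 8 ms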

You also need to be able to schedule work, meaning make assignments of resources to missions, based on whatever criteria the application might require and the pool can support.  Once you’ve scheduled work, you have to deploy, meaning make the assignment between virtual and actual resources, and then you have to provide for lifecycle management that will detect abnormal conditions and take automatic steps to remedy them.  These three capabilities have to be overlaid on basic clustering for the concept to be useful, and how well they work together would likely determine the range of services and applications that clusters could enable.
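
Here’s a skeletal view of how those three layers might stack on top of a basic cluster.  The functions are hypothetical simplifications, not any particular product’s API, but the division of labor is the point.

    # Three hypothetical layers over a cluster: scheduling, deployment, lifecycle management.
    def schedule(job, cluster):
        # Work scheduling: pick resources that satisfy the job's criteria.
        return [r for r in cluster if r["free"] >= job["size"]][:job["replicas"]]

    def deploy(job, resources):
        # Deployment: bind the virtual element to actual hosts.
        return [{"job": job["name"], "host": r["name"], "state": "running"} for r in resources]

    def lifecycle(instances, cluster):
        # Lifecycle management: detect abnormal conditions and remediate automatically.
        for inst in instances:
            if inst["state"] != "running":
                job = {"name": inst["job"], "size": 1, "replicas": 1}
                replacement = schedule(job, cluster)
                if replacement:
                    inst.update(deploy(job, replacement)[0])   # redeploy on a healthy host
        return instances

    cluster = [{"name": "host-a", "free": 4}, {"name": "host-b", "free": 8}]
    running = deploy({"name": "vFirewall"}, schedule({"name": "vFirewall", "size": 2, "replicas": 1}, cluster))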

If you’ve followed this (which I hope you have) you recognize that a lot of what clusters need is also what something like Network Functions Virtualization (NFV) needs.  In fact, NFV has never been anything but a specific application of virtualization and clustering principles.  Or perhaps I should say “NFV should never have been”, because obviously the ISG didn’t follow the path of basing its approach on cluster technology.  That could have been done; unlike container-based solutions such as DC/OS, cluster implementations were already available in open-source form when the ISG launched.

It’s not too late for NFV clusters, though.  Most cluster strategies operate at a lower level than containers, working instead with bare metal or virtual machines.  That might make it easier to adopt clusters explicitly in NFV, because bare metal and VMs offer better tenant isolation than containers, and both are more flexible with respect to networking options.  Finally, there are a lot of tools available for scheduling and deployment on clusters, which means there would be more choices for operators who wanted to try it out.

All cluster-based strategies would pose the same difficulty as OpenStack in mapping to NFV, though.  Scheduling, deployment, and lifecycle management in cluster applications are typically designed to be uniform in implementation and efficient in execution.  There are many things the NFV ISG has suggested as requirements for scheduling resources for VNFs or for management that don’t map directly to cluster solutions to the same problems.  The number of issues, and the difficulty in addressing them, will relate to the flexibility of the three software layers mentioned above that control cluster behavior.

When you rise to the level of lifecycle management in that structure, things get truly complicated, for the simple reason that lifecycle behavior is very service/application dependent.  Scaling of components under load, for example, is something that some applications (including NFV) mandate, but others don’t use because of stateful behavior complexities.  Updating things is an example of stateful behavior, and obviously you can’t spin up unlimited things to update a common database unless you can manage collisions.  Cluster tools won’t generally address this level of issue, and in fact even specifications for NFV don’t do a particularly good job there.
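
A tiny sketch makes the point.  A stateless component can simply be replicated when load rises; a stateful one needs coordination that generic cluster tooling doesn’t supply on its own.  The component names and thresholds here are purely illustrative.

    # Illustrative scale-out decision with a statefulness guard.
    def scale_decision(component, load, capacity_per_instance):
        needed = -(-load // capacity_per_instance)       # ceiling division: instances the load requires
        if needed <= component["instances"]:
            return component["instances"]                # current footprint is enough
        if not component["stateful"]:
            return needed                                # stateless: safe to just add copies
        # Stateful: scaling needs application-level coordination (sharding, locking,
        # leader election) that the cluster layer can't infer by itself.
        raise RuntimeError(component["name"] + ": scale-out needs application-level coordination")

    print(scale_decision({"name": "packet-filter", "instances": 2, "stateful": False}, 900, 300))   # 3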

Clustering, as I said at the opening of this blog, is a critical element in virtualization of any kind.  In later blogs, I’ll take a look at some clustering strategies and we’ll use them both to explain the issues and requirements, and to see what features apply best to network/cloud applications.

CDNs Could Be a Step Along the Road to Edge Computing

The idea of combining content delivery networks and edge computing is logical on its face, given that CDNs function at the edge.  Now, a demonstration of CDN and machine learning hosted on Ericsson’s Unified Delivery Network (UDN) suggests the company might be approaching fulfillment of both that combination of features and an implicit promise that came with the original UDN announcements.  “Unified” has to mean something, after all.  There are three specific reasons for the marriage of the two concepts that could be driving Ericsson’s attention at this specific point in time.

The first reason the Ericsson move might be smart is that, historically, net neutrality has exempted content delivery networks from regulation.  For the operators, this means that anything they did in close conjunction with CDNs could arguably expect the same exemption.  Operators typically favor technology revolutions that don’t introduce additional regulatory threats, and clearly this would be an example of one.

The reason for special treatment of CDNs is that they make an enormous contribution to Internet quality of experience.  How they do that is where the edge-computing relationship starts.  CDNs have always been a cache-close-by concept, where instead of pushing thousands or millions of copies of content to users from some central point, you cache popular content in areas where it’s likely to be consumed.  That improves quality of experience and also reduces the drain on network transport resources.  Over time, caching points have moved from the “provider edge” or inside edge of the access network, out into the access/metro network.  This is a response to the combination of increased video content traffic and the impact of the shift to mobile broadband.
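
The cache-close-by behavior is simple to sketch: serve from the nearest cache that holds the content, and go back to the origin only on a miss.  The sites and distances below are invented for the illustration.

    # Simplified cache selection: nearest holder wins, origin is the fallback.
    caches = [
        {"site": "metro-edge-1",  "distance_km": 15,  "contents": {"movie-a", "movie-b"}},
        {"site": "provider-edge", "distance_km": 120, "contents": {"movie-a", "movie-c"}},
    ]

    def resolve(content_id, origin_km=2500):
        holders = [c for c in caches if content_id in c["contents"]]
        if holders:
            best = min(holders, key=lambda c: c["distance_km"])
            return best["site"], best["distance_km"]
        return "origin", origin_km    # cache miss: fetch from the central content source

    print(resolve("movie-c"))   # ('provider-edge', 120)
    print(resolve("movie-z"))   # ('origin', 2500)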

Over time, CDN technology has evolved too.  It’s shifted from an appliance implementation to something that either is, or at least resembles, a server with data storage and software.  This shift has obviously moved CDNs toward a convergence with computer technology, especially at the technical deployment level.  Given the trends toward cloud computing, carrier cloud, network functions virtualization, and similar hosting-driven, software-defined technologies, it’s certain that CDN implementation will eventually be based on cloud computing.  From there, it would be surprising if it didn’t extend itself to compute-caching activities as well, which gets us back to the regulatory exemption you might get from piggybacking on CDNs.

The extent to which regulatory considerations could actually drive edge computing policy is hard to predict because of the policy disorder we’re seeing these days.  Every major market region has its own regulatory regime, and its own neutrality rules.  In the US, we have seen neutrality policy shift back and forth over time, and at the moment we’re even seeing some states take a position on the topic.  Given that this variability at the policy level makes policy-driven technology planning difficult, it may be that operators would discount or at least devalue regulatory influence on edge deployment, for now.  Ignore it?  Probably not at the market planning level.

That opens the second reason for CDN and edge computing convergence.  Carrier cloud has many possible drivers, but my modeling has always shown that the most credible of these drivers relate to monetizing video traffic and to the delivery of ads in general.  Since most video traffic and advertising is already delivered out of CDNs, it’s logical to assume that linking compute activities related to ad targeting or video optimization with CDNs would be easy and beneficial.

I mentioned that advertising and video are the most credible near-term carrier cloud drivers, and that actually understates the case.  My model shows that through 2020, carrier cloud deployment would have to be driven by video and advertising applications; no other driver will emerge in that period.  From 2020 through 2023, mobile feature opportunities grow, but even in 2023 video and advertising is the largest single source of carrier cloud opportunity.  While this doesn’t mean that all these near-term applications have to relate directly to CDN, it’s likely that many would and that some tight integration between carrier cloud resources and the CDN mission would be helpful.  It’s also likely that this integration would involve the delivery of some customer information to the edge element, to facilitate ad selection.

We come then to the third factor promoting the convergence of CDNs and edge computing.  The same QoE factors that encourage a migration of CDN cache points outward toward the user would promote edge computing.  Placement is a major issue in the cloud, because putting computing resources in proximity to the user facilitates the control of time-dependent functions.  Anyone who watches ad-sponsored video understands that ads are intrusive enough by their nature, without adding in the delay associated with picking ads, serving them in conjunction with the video, and then transitioning back to the content when the ad is completed.

The challenge with proximate placement of compute resources is having a place to put them.  Edge computing is a technical problem, but it’s also a real estate problem.  One massive complex supports a centralized cloud.  You could serve every US Standard Metropolitan Statistical Area (SMSA) with about 250 data centers, but that’s hardly the edge.  CDN caches are deployed in a couple thousand locations today, and there are about 12,000 “edge offices” for operators in the US where facilities to host augmented CDNs could be sited.  Obviously, if you already have space to install content delivery elements in the couple-thousand current sites, and if those content delivery elements include compute resources, you certainly have the space to augment those resources to provide at least some edge computing capability.  If CDNs continue to migrate outward, those edge offices are the next stop.  Ride along to these locations, and edge computing might finally reach the real, logical edge.

There are also some risks, and perhaps serious risks, to linking edge computing and content delivery.  The most significant of these is the risk of creating a content-specific deployment model, making it more difficult later to incorporate non-content applications like Network Functions Virtualization.  While it’s true that compute is compute, it’s also true that software organization, network connectivity, and the balance between storage, memory, and compute resources would vary across the range of possible carrier cloud applications.

None of these barriers are insurmountable, and Ericsson’s original UDN mission statement suggests that they have a long-term commitment to making the “U” in “UDN” a reality.  They say they want to add content and applications traffic, and while the latter could mean something narrow or broad, it surely means more than simply cache support.  However, Ericsson has not made specific announcements of a broader edge-compute or carrier-cloud model for UDN.  That goes back to the question of software, resource, and connection specialization, and whether the model of information flow that relates to content delivery would be extended or even extensible beyond that.

It’s pretty clear the network operators are not looking for a single-mission solution for content delivery, for edge computing, or for carrier cloud.  That’s really going to be the challenge for Ericsson and similar players in this space.  Whatever you believe the drivers for carrier cloud are, those drivers won’t develop in a homogeneous way, won’t emerge at the same time, and won’t drive the market to the same extent.  It’s inevitable that operators and vendors alike will focus on the drivers that have the most effect the soonest, because justifying early deployment is always the most problematic step.  It’s also inevitable that, once they get through the early deployment, they’ll think about leveraging their investment as broadly as possible to improve their return.  The only way to harmonize these goals is to plan for the long term and apply in the short term.

It would be nice to believe that the solution to harmonizing short-term and long-term objectives for carrier cloud would somehow emerge in the market naturally.  Unfortunately, at least as far as MWC is concerned, it appears as though “market forces” are seeking visibility rather than substance—not that that’s an unusual step. We have plenty of announcements about how Vendor X or Vendor Y are moving closer to the edge, but not very many are specific about what they plan on doing there or how they plan on justifying their deployment. Answers to these questions are essential because we’re not going to see carrier cloud emerge from some kind of massive large-scale experimental initiative taken by the operators. Mega-deployments are not a science project. Never will be.