What Tech Companies and Tech Alliances Tell Us About Tech’s Future

In the last couple of weeks, we’ve heard from three tech companies that could define how IT is done.  Ericsson represents the service-centric view of transformation, though in the network space.  HPE represents the agile side of the IT incumbent picture, and IBM the entrenched (some would say, ossified) side.  It’s worth looking at the picture they paint as a group, particularly in light of recent news that Berkshire Partners was buying MSP Masergy and that AT&T was partnering with both IBM’s SoftLayer and Amazon’s AWS.

If you look at the recent discourse from all of these companies, and in fact from most players in the cloud, IT, and networking space, you’ll see a story that leans heavily on professional services.  This is due in no small part to the fact that buyers of technology generally feel less equipped than ever to make decisions, and to implement them.

In 1982, when we started to survey enterprises, about 89% of IT spending was controlled by organizations who relied on internal skills for their planning and deployment.  Nearly all (98%) of this group said that they were prepared for the demands of their IT evolution, and their top source of information was the experience of a trusted peer followed by key trade press publications.  Vendor influence was third, and third-party professional services were fourth.

In 2016, the numbers were very different.  This year only 61% of IT spending is controlled by companies that rely on their own staff, and of that group only 66% said they were prepared for the tasks before them.  Their top information resource was still the experience of a trusted peer, but the media had dropped to fourth; vendors are now second and third-party services third.

You can see from this picture that enterprises seem to be depending on others much more, which should be a boon to professional services and those who offer them.  In fact, professional services revenues in IT are up about 18% over the last decade.  Why, then, would Ericsson seem to be missing on its services bet?  Yes, part of it is that the network operators are less committed to professional services than enterprises, but is that all?

No.  The big factor is that technology consumers are generally reluctant to buy professional services from third parties.  They are happy to pay vendors for them, or to contract for software development or integration from a software company, but they are much less likely to consume anything from an independent third party.  The desire for an impartial player is strong, but stronger yet is the feeling that your service vendor will be better for you if they’re also a primary provider of hardware or software.

IBM and HPE are both primary providers of hardware and software, and both have big expectations for professional services, but the industry-average growth rate per year would hardly thrill the CFO or the Street.  What the enterprise surveys I’ve done show is that the movement of vendors from number three in influence to number two is driven by a change in the nature of IT spending.

IT spending has always included two components: one the budget to sustain current operations, and the other the budget to fund projects that transform some of those operations.  The transformation can in theory be cost- or benefit-driven.  For the period from 1982 to 2004, spending was split between these two almost equally, certainly never more than a 10% shift either way.  In 2005, though, we saw two changes.  First, sustaining-budget spending grew to account for about 64% of total spending and gradually increased from there to 69% in 2015.  Second, the target of the “project spending”, which until 2002 had been largely improved productivity, had shifted to controlling costs.  “I want to do what I do now, but cheaper,” was the mantra.

It’s only logical that if that much focus is on just doing what you do today at lower cost, your need for professional services is subordinate to your need for cheaper components, and you probably favor the idea of vendors making their own analysis of your situation and recommending an alternative in the form of a bid.

The same logic drives users to adopt virtualization technology to reduce hardware costs, to look to automate their own design and application deployment processes, and to transfer to the cloud any work that’s not efficiently run in-house.  In some cases, where internal IT is expensive or seen as unresponsive, that could mean almost everything.  Since server/hardware spending is a big target, this creates systemic headwinds for all the IT players who sell it, including both HPE and IBM.

The challenge for businesses that embark on this kind of transformation, as opposed to simply shopping for cheaper gear and software, is the lack of strong internal skills to support the process itself.  Based on my survey data, all forms of virtualization are likely to succeed to the extent that they are no-brainers to adopt.  That can be because they’re a high-level service, or because the service provider or vendor does the heavy lifting.  That is why “the cloud”, in its current form, is problematic as an opportunity source.

What users want is essentially SaaS, or at least something evolved well away from bare-metal-like IaaS.  HPE’s approach to fulfilling that need seems to be focused on enabling partners to develop wide-ranging cloud solutions that would then be deployed on HPE hardware.  IBM seems to want to control the software architecture for hybrid clouds, by extending them out from the data centers they already control.  Both these approaches have risks that have already impacted vendors’ success.

HPE’s problem is the classic problem of partner-driven approaches: getting your partners to line up with your own business needs.  That’s particularly challenging if you look at the enterprise space as a vast pool of diffuse application demand.  What do you need to focus on?  Traditionally, the senior partner in these tech partnerships drives the strategy through marketing, but HPE doesn’t have inspiring positioning for its strategic offerings.  It needs some, because, as we’ve seen, buyers are now looking more to vendors for exactly this kind of guidance.

IBM’s problem is that the cloud isn’t really dominated by IT stakeholders but by line-of-business stakeholders.  Those people aren’t on IBM’s account-team call list, and IBM no longer markets effectively, so it can’t reach cloud influencers that way either.  Yet again, we see a company that is not reacting effectively to a major shift in buyer behavior.  Not only that, IBM has to establish relationships with stakeholders, and with company sizes, that it doesn’t cover with an account team at all.  They need marketing more than anyone.

How does this relate to Masergy?  Well, what is an MSP anyway?  It’s a kind of retail-value-add communications service provider who acquires some or all of the basic low-level bit-pushing service from a set of third parties, and adds in management features and functional enhancements that make the service consumable by the enterprise at a lower net cost.  Like SaaS, which can save more because it does more, managed network services can save users in areas related to the basic connection service, enough to be profitable for the seller and cheaper for the buyer overall.

What better way to deal with the declining technical confidence of buyers?  Masergy isn’t boiling the ocean, service-target-wise.  They pick one service, one that appeals to one class of buyer (enterprises) and is typically consumed in quantities large enough to be interesting on an ARPU basis.  In framing that one service, they then address the very internal technical-skill challenges that make strategic advances in tech hard for the buyer.  That’s an approach that makes sense, at least to the point where you’ve penetrated your total addressable market.  Enterprises don’t grow on trees.

The AT&T deals with Amazon and IBM could be more interesting, if more challenging.  One similarity with Masergy is obvious: the market target is largely the same.  Yes, you can have SMBs on either IBM’s cloud or Amazon’s, but the real value of the deals, to both the cloud guys and to AT&T, is to leverage the enterprise customer, who already consumes a lot of AT&T network services and who could consume a lot of cloud too.  AT&T could even add security and other features using NFV technology.  In many respects, these deals would add another higher-layer service set, not unlike the MSP approach.

Not in all respects, of course.  In most cases AT&T seems to be the out-front retail player in the deals, and AT&T’s NetBond cloud-integration strategy covers pretty much the whole cloud waterfront, which means that neither Amazon nor IBM has an exclusive benefit and that AT&T faces the potential for classic channel conflict: who does it partner with on a preferred basis for things like IoT, a focus of the deals?

The big difference between the Masergy model and the AT&T model is that point about easing the burden of meeting technical challenges with your own staff.  Masergy simplifies the problem through bundling, but it’s difficult to bundle business services with networks, because business services are delivered through applications that vary almost as much as the businesses do.  The cloud is not a solution; it’s a place to host one, which makes AT&T’s NetBond partnership paths harder to tread.  We’ll have to see if they can make it work.