We are not yet through the first earnings season of the year, the first since the new tax law passed, but we are far enough into it to see the outlines of technology trends in 2018. Things could be a lot worse, the summary might go, but they could also still get worse. In all, I think network and IT execs are feeling less pressure but not yet more optimism.
Juniper’s stock slid significantly pre-market after its earnings report, on weak guidance for the first quarter of this year. Ericsson also reported another loss, and announced job cuts and what some have told me is an “agonizing” re-strategizing program. Nokia’s numbers were better, but mostly in comparison with past years, and they still showed margin pressure. On the provider side, Verizon and AT&T both reported a loss of wireline customers, with each saying that online properties (Yahoo and the like for Verizon, DirecTV Now for AT&T) helped offset the decline.
In the enterprise space, nine of every ten CIOs tell me that their primary mission in budgeting is not to further improve productivity or deliver new applications, but to do more with less. Technology improvements like virtualization, containers, and orchestration that can drive down costs are welcome; anything else need not apply. In thirty years, the balance of capital spending between sustaining current infrastructure and advancing new projects has never tipped as heavily toward the former.
In the world of startups, I’m hearing from VCs that there’s going to be a shake-out in 2018. The particular focus of the pressure is the “product” startups, those with hardware or software products to sell, and in particular those in the networking space. VCs say they think there are too many such startups out there, and that the market has already started to select among the players. In short, the good exits are running out, so it’s time to start holding down costs on the companies that aren’t already paying off for you.
Something fundamental is happening here, obviously, and it’s something the industry at large would surely prefer to avoid. So how can we do that?
Every new strategy has to contend with what could be called the “low apple syndrome”. It’s not only human nature to do the easy stuff first, it’s also good business, because the easy stuff probably represents the best ROI. The challenge the syndrome creates is that the overall ROI of a new strategy is a blend of the low and high apples, and if you pluck the low ones first, the residual ROI on the rest can drop precipitously. The only solution to that problem is to ensure that the approach you take to those low apples can be applied across the whole tree. We have to learn to architect for the long term and exploit tactically, in short.
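To make the low-apple arithmetic concrete, here’s a back-of-the-envelope sketch in Python. The cost and benefit figures are invented purely for illustration, not drawn from any survey.

```python
# Illustrative only: hypothetical numbers showing how harvesting the
# "low apples" first can gut the business case for everything that remains.

low_cost, low_benefit = 10.0, 30.0      # easy targets: cheap, big payoff
high_cost, high_benefit = 50.0, 60.0    # hard targets: costly, modest payoff

def roi(cost, benefit):
    """Simple ROI: net benefit divided by cost."""
    return (benefit - cost) / cost

blended = roi(low_cost + high_cost, low_benefit + high_benefit)
residual = roi(high_cost, high_benefit)

print(f"Blended ROI for the whole tree: {blended:.0%}")              # 50%
print(f"Residual ROI once the low apples are gone: {residual:.0%}")  # 20%
```

The blended case justifies the whole tree; the residual case, standing on its own, probably never gets funded.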
There are two forces that limit our ability to do that. One is that vendors in the spaces our new strategies and technologies target want to sustain their revenue streams and incumbent positions. They tend to push a slight modification of current practice, something that doesn’t rock the boat excessively, while positioning it as a total revolution, and that combination discourages real revolutionary thinking at the architecture level.
The other force is publicity. We used to have subscription publications in IT, but nearly everything today is ad-sponsored, and sponsor interests are likely to prevail. Many market reports, some say most, are paid for by vendors and thus likely to favor vendor interests. Even where that’s not directly true, one producer of market forecast reports once told me that “There’s no market for a report that shows there’s no market for something. Report buyers want to justify a decision to build a product or enter a market.” I used to get RFPs from firms looking to outsource analyst reports, and the RFPs would start with something like “Develop a report validating the hundred-billion-dollar market for xyz.” Guess what the report will end up showing. How do you get real information to buyers under these conditions?
OK, if we have two forces that limit us, we need two counterbalancing forces. The essential one is a buyer-driven model in specification, standardization, and open source. I’ve seen vendor/buyer tension in every single standards activity I’ve been involved in, and the tension is often impossible to resolve effectively because of regulatory constraints. Many geographies, for example, don’t allow network operators to “collude” by working jointly on things, and valuable initiatives have been hampered or actually shut down because of that.
The operators may now have learned a way of getting past this issue. AT&T’s ECOMP was developed as an internal project, then released to open source and combined with OPEN-O’s NFV orchestration to create ONAP. Because so much of the work went into ECOMP while it was still inside AT&T, vendors have a harder time dominating the body, even though the open-source activity would likely face the same regulatory constraints as a standards group; much of the work was already done before they could weigh in. AT&T is now following the same approach with a white-box switch OS, and that’s a good thing.
The second solution path is establishing software-centric thinking. Everything that’s happening in tech these days is centered on software, and yet tech processes at the standards and project level are still “standards-centric”, looking at things the way they’d have been looked at thirty years ago. Only one initiative I’m aware of really broke that mold: the IPsphere Forum, or IPSF, as it was first conceived a decade ago. That body introduced what we now call “intent models”, visualizing services as a collection of cooperative but independent elements, and it even proposed the notion of orchestration. However, it fell victim to operator concerns about antitrust regulation, since operators were driving the initiative.
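To show what intent modeling means in software terms, here’s a minimal, hypothetical sketch. The element names and intent attributes are mine, chosen for illustration; they aren’t anything the IPSF actually specified.

```python
from dataclasses import dataclass, field

# A toy rendering of the intent-model idea: a service is a collection of
# cooperative but independent elements, each described by the outcome it
# commits to rather than by how that outcome is implemented.

@dataclass
class IntentElement:
    name: str
    intent: dict  # e.g. {"connectivity": "L3VPN", "sla_latency_ms": 30}

    def realize(self):
        # How the intent is met stays hidden behind the element boundary.
        print(f"{self.name}: realizing intent {self.intent}")

@dataclass
class Service:
    name: str
    elements: list = field(default_factory=list)

    def orchestrate(self):
        # Orchestration composes independent elements; it never reaches
        # inside them to manage their implementations.
        for element in self.elements:
            element.realize()

vpn = Service("enterprise-vpn", [
    IntentElement("access", {"connectivity": "ethernet", "bandwidth_mbps": 100}),
    IntentElement("core", {"connectivity": "L3VPN", "sla_latency_ms": 30}),
])
vpn.orchestrate()
```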
Clearly, it’s this second path that’s going to be hard to follow. There’s a lot of software skill out there, but not a lot of strong software architects, and the architecture of any new technology is the most critical thing. If you start a software project with a framework that presumes monolithic, integrated components linked by interfaces (which is what a traditional box solution would look like), that’s what you end up with.
The NFV ISG is a good example of the problem. The body has originated a lot of really critical stuff, and it was the second place (after the IPSF) where “orchestration” was applied to telecom. However, it described the operation of NFV as the interplay of functional blocks, something easy to visualize but risky in implementation. Instead of framing NFV as an event-driven process, it framed it as a set of static elements linked by interfaces; boxes, in short. Now the body is working to fit this model to the growing recognition of the value of, and even need for, event-driven thinking, and it’s not easy.
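The difference between the two framings is easier to see in code than in functional-block diagrams. What follows is a deliberately simplified sketch with hypothetical component and event names; it contrasts statically wired interfaces with handlers bound to events.

```python
from collections import defaultdict

# --- The "box" framing: static elements linked by interfaces ---
class VNFManager:
    def deploy(self, vnf):
        print(f"VNFM deploying {vnf}")

class Orchestrator:
    def __init__(self, vnfm):
        self.vnfm = vnfm          # hard-wired interface to a specific block

    def instantiate_service(self, vnf):
        self.vnfm.deploy(vnf)     # the sequence of steps is fixed at design time

Orchestrator(VNFManager()).instantiate_service("firewall")

# --- The event-driven framing: handlers bound to events ---
handlers = defaultdict(list)

def on(event):
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def emit(event, **data):
    for fn in handlers[event]:
        fn(**data)

@on("service-ordered")
def start_deployment(vnf):
    print(f"deploying {vnf} in response to an order event")
    emit("vnf-deployed", vnf=vnf)

@on("vnf-deployed")
def record_inventory(vnf):
    print(f"recording {vnf} in inventory")

emit("service-ordered", vnf="firewall")
```

In the first half the control flow is baked into the components; in the second it emerges from whatever events actually occur, which is why the event-driven framing adapts more gracefully to conditions nobody diagrammed in advance.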
I think blogs are the answer to the problem of communicating relevant points, whether you’re a buyer or a seller. However, a blog that mouths the same idle junk that goes into press releases isn’t going to accomplish anything at all. You need to blog about relevant market issues, and introduce either your solution or a proposed approach in the context of those issues. You also need to blog often enough to make people want to come back and see what you’re saying. Daily is best; twice per week is the minimum.
A technical pathway that offers some hope of breaking the logjam on software-centric thinking is the open-source community. I think ONAP has potential, but there’s another initiative that might have even more. Mesosphere’s DC/OS combines Apache Mesos with the Marathon orchestrator, and ties it all into a model of deploying functional elements on highly distributed resource pools. Marathon has an event bus, which might make it the most critical piece of the puzzle for software-defined futures. Could it be that Red Hat, which recently acquired CoreOS for its container capabilities, might extend its thinking into event handling, or that a competitor might jump in, pick up the whole DC/OS stack, and run with it? That could bring a lot of the software foundation for a rational event-driven automation future into play.
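To give a sense of what building on that event bus might look like, here’s a minimal sketch that listens to Marathon’s server-sent-events stream. The host, port, and endpoint are assumptions based on a default local Marathon install; check the API documentation for the version you actually run.

```python
import json
import requests

# Minimal listener for Marathon's event bus, exposed as a server-sent-events
# stream. Endpoint and host are assumptions (a local Marathon on its default
# port); adjust for your deployment.
MARATHON_EVENTS = "http://localhost:8080/v2/events"

with requests.get(MARATHON_EVENTS,
                  headers={"Accept": "text/event-stream"},
                  stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "event: ..." and "data: ..." lines.
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print(event.get("eventType"), event)
```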
Don’t be lulled into thinking that open source fixes everything automatically. Software-centric thinking has to be top-down thinking, even though it’s true that not everyone designs software that way. That’s a challenge for both open-source and standards groups, because they often want to fit an evolutionary model, one that ties early work to the stuff that’s already deployed and understood. It shouldn’t be impossible to think about the “right” or “best” approach to a problem in light of future needs and trends, while at the same time keeping that future from disconnecting from present realities. “Shouldn’t” apparently isn’t the same as “doesn’t”, though. In fairness, we launched a lot of our current initiatives before the real issues were fully explored. We have time, in both IoT and zero-touch automation, to get things right, but it’s too soon to know whether the initiatives in either area will strike the right balance between an optimal future and preservation of the present.
The critical truth here is that we live in an age defined by software, but we still don’t know how to define the software. Our progress is inhibited less by a lack of innovation than by a lack of articulation. There are many places where all the right skills and knowledge exist at the technical level. You can see Amazon, Microsoft, and Google all providing the level of platform innovation needed for an event-driven future, a better foundation for things like SDN and NFV than the formal processes have created. All of this is published and available at the technical level, but it’s not framed at the management level in a way suited to influencing future planning in either the enterprise or the service provider space. We have to find a way to make architectures interesting.
Complex solutions are hard to adopt widely, but complexity in a solution is a direct result of the complexity of the problem it addresses. It’s easy to say that “AI” will fix everything by making our software understand us, rather than asking us to understand the software. The real solution is, you guessed it, more complicated. We have to organize our thinking, assemble our assets, and promote ecosystemic solutions, because what we’re looking for is technology change that revolutionizes a very big part of our lives and our work.