Feeds on this Blog Will Be Disabled

Starting roughly August 1st 2023, we will be suspending the RSS and email subscription options for this blog; no further signups will be accepted and no further notifications will be sent. If you have used this feature to get Tom Nolle’s latest blogs, particularly for redistribution through your company, you will need to sign up for the Andover Intel feeds instead. We have attempted (hopefully successfully) to move old blogs to the Andover Intel site as well, but links within a blog to another blog will likely not be translated, and so won’t function once the original blogs on this site are removed. We plan to keep these CIMI blogs on this site through the end of 2023, but we cannot promise that.

This Blog Has Moved!

CIMI Corporation has shut down its operations, and Tom is continuing his work under Andover Intel. We ask that you visit that site (click the link) for access to Tom’s future blogs, and for contact information regarding our services. CIMI Corporation’s blog and website will continue to be available through at least early 2024 but will not be updated.

If you registered via Follow.it to receive RSS or email syndication of blogs for CIMI Corporation, you’ll need to register again for Andover Intel. Click the green icon on any post and follow the instructions.

Starting a New Adventure!

Starting July 1st 2023 I’ll be launching a new company, Andover Intel, and doing all my blogging, writing, and consulting through that entity. As a part of this transformation, I’ll be suspending my blogging on this blog site and instead using Andover Intel at the link provided above. If you’ve registered to receive my current blog via RSS or email, you’ll need to register again under Andover Intel. If you don’t, you can still access my blog there directly, but you won’t receive an automatic feed.

Andover Intel was created to take a different slant on the “analyst” role we’ve had in the network/tech space for decades. Most analyst activity is seller-driven, meaning that vendors and providers make announcements and that generates coverage, pretty much the same as it does for tech media. Those who have followed my blog here know I’ve never been a fan of that approach, and so I’m taking a different path now. Andover Intel is user-driven. We have a special email address (see the Contact Us page) to allow users of technology (end users like enterprises and SMBs, and service and cloud provider personnel who are consumers of technology rather than sellers) to contact me with comments, questions, and insights of their own. It’s this material that I’ll be using to drive my work and blogs.

The material on the CIMI Corporation website will remain at least through 2024, and the emails will be operative through the end of 2023. I suggest that you start using the contact emails on Andover Intel’s website, though.

For those who have been getting consulting services and tech writing from me, the same services will be generally available through Andover Intel, but I’m deemphasizing on-site consulting in favor of Teams/Zoom work. If you have questions about what services are available, please contact me via Andover Intel.

Wall Street is Selling Tech Short, and It’s Hurting Us All

Ciena delivered a great quarter by any measure, but it saw a major hit on its stock. The situation is a lesson for tech companies overall, because on the one hand companies are legally responsible to serve shareholder interests, and on the other hand Wall Street has been manipulating markets. We’re in a period where tech overall may be a slave to hedge fund greed, but it may work out in the end.

The problem Ciena had was that while it had a good quarter, it lowered full-year guidance to reflect the fact that buyers are uncertain about macro-economic issues. The old guidance predicted a 20-22% increase in sales; the new guidance is 18-22%. Obviously, all Ciena has done is admit to a potential hit on the low-end side; otherwise the new guidance range overlaps the old. An article in TelecomTV asked “Is that enough to justify an 11% share price correction?” Good question, particularly given that the market consensus on the company, as reported by Yahoo Finance, is positive in the short, mid, and long term.

OK, perhaps an admission that sales growth might dip a bit in the second half is a negative, but if there’s such an appetite for Internet capacity that telcos who can’t raise broadband prices are themselves facing a profit-per-bit crisis that might require subsidies, how can a company who provides the tools for increasing capacity be viewed in a negative light? Perhaps they aren’t.

Hedge funds, unlike ordinary investors, can not only bet on the decline of a stock to make a profit, they can actually stimulate the decline. There are various tools to do that, all collectively referred to as “short selling”. One approach is to borrow shares of a stock from a big broker who holds a lot, sell the shares, and expect to buy them back at a lower price when the stock goes down. Used in a particular way, short-selling can exaggerate even minor market movements. In Ciena’s case, there were probably a few investors who saw the risk of a decline in sales growth as a trigger to take some profit. Short-sellers probably jumped on the sales those investors triggered, and since stock prices are set by the balance of buyers and sellers, that dumped the stock.
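To make the mechanics concrete, here’s a minimal sketch of short-sale economics in Python; the share counts, prices, and borrow fee are hypothetical illustrations, not Ciena’s actual figures.

```python
# A toy model of a short sale: borrow shares, sell them, buy them back
# ("cover") later, and pay the lender a fee while the position is open.
# All numbers are hypothetical.

def short_sale_pnl(shares, sell_price, cover_price, annual_fee_rate, days_held):
    proceeds = shares * sell_price            # cash from selling borrowed shares
    cost_to_cover = shares * cover_price      # cash to buy them back
    borrow_fee = proceeds * annual_fee_rate * (days_held / 365)
    return proceeds - cost_to_cover - borrow_fee

# Short 1,000 shares at $53, cover at $47 after an 11%-style dip,
# paying a 1% annualized borrow fee for 30 days: roughly a $5,956 profit.
print(short_sale_pnl(1000, 53.00, 47.00, 0.01, 30))

# If the stock recovers to $58 before the short covers, the same trade
# loses roughly $5,044; the buy-back needed to cover is also the buying
# pressure that can push shares back up.
print(short_sale_pnl(1000, 53.00, 58.00, 0.01, 30))
```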

You may not see this as something to worry about if you’re not investing, but tech companies were already sweating this short-selling thing before Ciena, and they’ll surely sweat it more now. Not only that, the problem isn’t limited to tech vendors; short-selling is IMHO a major reason why the markets have performed so badly over the last year. Tech buyers, those who are public companies, are also facing this problem.

When a company’s stock price goes down, the management team has a legal obligation to shareholders to try to sustain and grow it. Growing revenues is a great approach, but that requires a willingness of buyers to increase their spending, and if the buyers are also being pressured by short-selling, they may also be looking to cut costs. Cutting cost can equal spending less on products and services, so many vendors/providers will elect to cut costs themselves. Why tech layoffs? Maybe you should blame Wall Street.

The impact on tech here could be widespread. There are generally two pieces to the budget for tech products and services, one aimed at sustaining what’s already been justified and deployed, and one representing incremental improvements and extensions. Both these pieces can be impacted by “slow-rolling”, delaying a budgeted expense by staying with current infrastructure. New projects can also be dropped completely, scaled back, or changed to lower costs. Because the risk of this happening is real, vendors have to be cautious with their guidance, and cautious guidance can then provoke more short-selling and stock dips…and so forth. In theory, Wall Street short-selling could drive the whole economy into a recession.

There’s some good news, though. The US Securities and Exchange Commission (SEC) is investigating short-sellers, and even the threat of investigation creates a risk to the practice that’s likely to curb the most egregious behavior. I think some of that is already visible in the market, and I wouldn’t be surprised to see Ciena recover a lot of its share price dip fairly quickly. The other good news is that, to paraphrase a song, “They say that all bad things must end some day.” If you borrow stock to sell it, you have to pay a fee while you’re holding it and you have to give it back eventually. That’s called “covering your short”, and to do that, a short seller has to buy the stock at the market price (there are other complex options involving derivatives, but it all comes down to buying shares at some point). That covering can then send the shares up, so investors who are smart enough not to follow the alarming comments in the financial media will likely not only get back to where they were, they might even earn a profit from the whole deal.

The problem is that a lot of bad things can happen while we’re waiting for Wall Street to either get back to the old notion that you make money investing when stocks go up, or for the SEC to fine or jail some people who are working against the market at large. The general view of network users, for example, is that the second half of 2023 will see a relaxation of macro-economic negatives. On a fundamentals level, that’s almost surely true, but could short-selling keep driving markets down even if macro conditions improve? Darn straight, and if it does, the caution created among both buyers and sellers would be enough to derail the positive turn we could otherwise expect.

Anything that distorts the tech market hurts all of us in it. Let’s hope that either the market itself, through performance, stops this behavior or that regulators step in. Ultimately it may be up to Congress, and there is no question in my mind that the current regulations that allow things like short sales let experts make money at the expense of the average investor.

What Can We Really Say About Generative AI?

If you’ve been in tech for more than a few years, you’re surely aware of the fact that vendors and the tech press will jump on anything that gains any user interest or traction. They’ll lay claim to the hot concepts even if it means stretching facts to (and some would say “beyond”) the limits of plausibility. Would it surprise you to hear that this is being done with generative AI? It shouldn’t, and that means we really need to look at the technology to see what could reasonably be expected.

Anything new in tech has to pass two basic tests. First, is there a value proposition that would promote its use? Second, could providers make money from it? If the answer to either of these questions is “No!” then we can assume that we’re riding a hype wave that will eventually break. How we qualify a “Yes!” answer would determine how fast and how far the technology could go. Let’s apply this test to generative AI.

Generative AI is a combination of two technical elements. One is the ability to parse a plain-language query and determine what’s being asked, and the second is the ability to gather information from a knowledge base to provide an accurate answer. I’ve tried generative AI in two specific forms (ChatGPT and Google’s Bard) and derived my own view of how well we’re doing at each of these two things.

It’s possible to frame a query that generative AI seems to understand without too much difficulty as long as the query is fairly simple. By that I mean that the query includes a minimal number of levels, the sort of thing we’d represent in a logical expression by putting IF/THEN statements in parentheses so that one is evaluated based on the results of another at a higher level. As the complexity of the query grows, the chance that the result will be interpreted to match the intent of the user drops quickly. Many of the most egregious generative AI failures I’ve seen result from this.
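As a concrete (and entirely hypothetical) illustration of what “levels” means here, consider the difference between a flat condition and a nested one, sketched in Python-style boolean logic:

```python
# Hypothetical record and conditions, just to illustrate query "levels".
record = {"vendor": "Cisco", "year": 2021, "type": "router"}

# One level: a simple conjunction of independent tests.
flat = (record["type"] == "router") and (record["vendor"] == "Cisco")

# Two levels: the parenthesized inner test only matters given the outer
# one, which is what nested IF/THEN structure expresses. Each added level
# is another chance for the parser, human or AI, to misattach a condition.
nested = (record["vendor"] == "Cisco") and (
    (record["year"] > 2020) or (record["type"] == "switch")
)
print(flat, nested)  # True True
```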

The second area, the ability to analyze a knowledge base to get an answer, is much more difficult to address. I’ve done queries that have produced in seconds what would have taken me up to an hour to research using traditional search engine technology. I’ve also done queries that produced totally wrong results, so wrong that they defied logic. Over the last month, many end users of AI have emailed me with their own stories, which largely match my own, and we’ve seen articles describing the problem too. There is no question that generative AI can make major mistakes in analysis.

What makes things worse, given both of the generative AI limitations I’ve described, is that it’s essentially impossible to check the results because you don’t know how they were derived. Decades ago, I designed an information query system for a publishing company that accepted parenthesized IF/THEN statements, converted them to reverse Polish format for processing, and then executed them on a knowledge base. During execution, I created a log that showed what the derived reverse Polish expression was and how it was evaluated, with each step showing how many items passed the query at that point. If you selected everything or nothing, or just more or less than expected, you knew where you messed up.
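Here’s a minimal sketch of that kind of logged evaluation (not the original publishing-company system; the grammar, field names, and records are all hypothetical):

```python
# Convert an infix boolean query to reverse Polish (postfix) form with a
# shunting-yard pass, then evaluate it against a small record set, logging
# how many items pass at each step. All data and names are hypothetical.

RECORDS = [
    {"topic": "networks", "year": 2022},
    {"topic": "networks", "year": 2020},
    {"topic": "ai",       "year": 2023},
]

def to_postfix(tokens):
    prec = {"AND": 2, "OR": 1}
    out, ops = [], []
    for t in tokens:
        if t == "(":
            ops.append(t)
        elif t == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()                      # discard the "("
        elif t in prec:
            while ops and ops[-1] != "(" and prec[ops[-1]] >= prec[t]:
                out.append(ops.pop())
            ops.append(t)
        else:
            out.append(t)                  # an operand like "topic=networks"
    while ops:
        out.append(ops.pop())
    return out

def match(rec, test):
    field, sep, value = test.partition("=")
    if sep:
        return str(rec.get(field)) == value
    field, sep, value = test.partition(">")
    return float(rec.get(field)) > float(value)

def evaluate(postfix):
    stack = []
    for t in postfix:
        if t in ("AND", "OR"):
            b, a = stack.pop(), stack.pop()
            stack.append(a & b if t == "AND" else a | b)
        else:
            stack.append({i for i, r in enumerate(RECORDS) if match(r, t)})
        # The audit log: how many items survive after each step.
        print(f"after {t!r}: {len(stack[-1])} item(s) pass")
    return stack.pop()

query = ["(", "topic=networks", "AND", "year>2021", ")", "OR", "topic=ai"]
postfix = to_postfix(query)
print("postfix:", postfix)   # ['topic=networks', 'year>2021', 'AND', 'topic=ai', 'OR']
print("result:", evaluate(postfix))   # indexes of matching records: {0, 2}
```

If a step passes everything or nothing, the log shows exactly where the query went off the rails, which is precisely the accountability today’s generative tools don’t provide.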

You don’t get that with popular generative AI tools, so unless you have a feel for what the results should be, you can’t spot even a major error. Even if you have a rough idea, you can still get a result that barely passes your sniff test and pay a price. That’s one of the biggest problems that users reported with generative AI, regardless of the mission. I saw it when trying to get the number of leased lines in service in the US; the results were totally outlandish and I didn’t know how the package came by them.

These problems are almost surely a result of immature technology. We expect a lot from generative AI, more than we should realistically, given the amount of experience the market has with the technology. Pretty much all AI is based on a rule set that describes how information is to be examined and how results are correlated, analogous to my reverse Polish parsing. Get the rules wrong, even if what you do is miss some important relationship issue, and it’s garbage in, garbage out. We’re getting more refined with generative AI every day, and the results are getting better every day, but right now any package I’ve looked at or had reported to me will throw a major flier occasionally, and that makes it hard to trust it. Packages that will log their rule application are the best ones because you can at least try to see whether the package did logical things, but of course if you have to research everything you ask a package in traditional ways, why use it?

OK, where do we stand on this point? I think the basic technology concept behind generative AI (or any other form of AI, in fact) is sound. What’s required is a bit of maturing, and a mechanism for defending or explaining results. Cite sources!

The business model side is even more complicated. In order for someone to make money on generative AI, meaning profit from their investment in it at any level, vendor or user, somebody has to pay something. As my Latin teacher might have said or a prosecutor might ask, cui bono? Who benefits? The answer to that depends less on generative AI as a technology and more on the mission we set for it.

Most people experience generative AI as an alternative to traditional Internet searching. That mission, and any mission that’s related to much of what ordinary people do on the Internet, is ad-sponsored. The problem is that generative AI doesn’t really offer much of an opportunity for ad insertion compared to traditional search. Getting right to the answer is great for the one asking the question, but not so much for those trying to profit from answering it.

The easy solution is to say that the questioner would pay explicitly, but that flies in the face of Internet tradition and a company who promoted the approach wouldn’t likely be offered much credibility in the financial markets. This is why generative AI isn’t likely to kill search unless somebody figures out how to monetize it.

The next issue is the knowledge base. Search engines crawl the web to index pages, and since most people who publish web content want it found, few opt to limit the crawling. Still, you can do that. Does a generative AI package do its own crawl, or take advantage of the crawling a search engine does? Does the fact that you have a website, or you’re a blogger as I am, mean that you’re surrendering your content to be used to develop a knowledge base? We’ve already had legal complaints on that point.
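For what it’s worth, limiting crawling is done with the same robots.txt mechanism search engines honor. As a sketch (OpenAI documents “GPTBot” as the user-agent token for its crawler; other collectors use their own tokens, and compliance is voluntary), a site could opt out of one AI collector while leaving ordinary search indexing alone:

```
# robots.txt at the site root. GPTBot is OpenAI's documented crawler
# token; honoring robots.txt is voluntary on the crawler's part.
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```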

Some of these issues can be resolved if we assume a company uses generative AI on its own data, and that the package it uses provides the logging of query processing needed to validate the methodology. However, these applications are only now evolving, and user experience with them is both limited and mixed. What I’m hearing most often, though, is that the technology isn’t “revolutionary”, meaning that more standardized analytics tools or traditional AI/ML tools work just as well with less risk.

Enterprises seem to think that’s the case too. Companies who reported extensive analytics tool usage, without AI augmentation, expressed only a quarter of the interest in generative AI that those with no significant analytics commitment did. That reinforces what I think is the summation of the issues I’ve cited here. Yes, generative AI could be a valuable tool. No, it’s not yet proved itself to enterprises, and it’s hard to say how long that proof might take.

The Latest on Telco Subsidies: An Alternative Approach Needs Work

It’s starting to look like the meat of the issue of telco subsidies by OTTs is emerging. A recent piece on the topic makes the comment that the big problem with the EU proposal on subsidies is that the EU “essentially attempts to regulate the internet like the telephone network.” The question that raises is both simple and profound, and it’s “What is the Internet?”

There are two basic pieces that make up our Internet experience. One, the most important to Internet users, is the “over-the-top” or OTT piece, which is made up of websites that host information, content, and services. The other, which is fundamental to accessing this desired piece, is the collection of Internet service providers or ISPs who provide connectivity for both users and OTT providers. In the earliest days of the Internet as we know it, this second piece was, for Internet users, highly dependent on the telephone network because it leveraged existing infrastructure. OTT access typically comes via ISPs too, but also through a set of content delivery networks (CDNs) that cache popular content closer to points of user access to improve quality of experience.

Early Internet infrastructure consisted of private OTT resources, telephony-centric user access resources provided largely by the telcos, and interconnect facilities to link everything. The latter was really the only true Internet-network piece of the puzzle; everything else was a pure commercial venture. Today, most of the interconnect requirements are met by “private peering” or connection between ISPs themselves.

The reason all this is important to the question of regulation is that the two pieces we call “the Internet” are now and have always been separate, and that regulatory practices have codified that separation. In the US, we have “telecommunications services” and “information services” that correspond to the two pieces, and a similar separation is defined in other major markets. It’s also true that the two pieces have separate business models. Virtually all the ISP services are paid for by the user, and the majority of the OTT services are ad-sponsored, though there are also some (like cloud computing and streaming video) where users pay at least some part of the service fees.

The argument that the Internet is being regulated like a telephone network is correct at one level, because the access piece of it is deployed and paid for the way telephone services have been. Yes, it’s regulated that way. The rest of the Internet is largely unregulated. So if someone suggests that it’s bad to regulate the Internet like a phone network, they run afoul of the basic truth that part of it is a phone network (the evolution of one, at least) and the other part was never regulated that way. But let’s get past that semi-semantic point and address the question of how it should be regulated.

I think that a completely unregulated view of both pieces of the Internet is hard to defend. First, we’d have to transition the access piece to an unregulated model, which is at the minimum a major regulatory shift. We’d have to address the question of whether an ISP, facing ever-declining revenue per bit, would stay in the access market at all, or would instead try to shift to a different business. In the EU, many of the telcos established subsidiary operations in other countries to attempt to sustain revenue/profit growth. Would more of that shift be a result of deregulation, and if so would we face the risk of losing investment in access?

In the US and EU, there’s also a contrary trend, which is to add regulations to the OTT piece. There are major concerns about digital privacy and the use of online social media and other sites to spread propaganda, lies, hate, and so forth. But would imposing liability, for example, on OTTs for what’s posted on their sites raise their cost of operation by raising the cost of policing content? If that happened, would the profit potential of these sites decline, meaning that ad sponsorship might no longer be possible and consumers would have to pay for these services?

All of this has to be considered in addressing the subsidy issue, because you could argue that the implicit principles on which the Internet was founded collide with the natural business model of networks. Before the Internet, operators settled among themselves for services that spanned more than one operator’s infrastructure. With the Internet we got “bill and keep”, where every ISP gets paid by its customers and keeps all the money regardless of whether there’s traffic exchange with others. Some peering agreements might require a measure of traffic balance or even payment, but in the main we don’t have settlement. That’s one driver behind the move to subsidize one piece of the Internet from the other.
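A toy comparison (all numbers hypothetical) shows how the two models treat the same traffic imbalance differently:

```python
# Two ISPs exchange asymmetric traffic. Under "bill and keep" each keeps
# its own retail revenue; under a settlement model the ISP terminating
# more traffic is compensated for the imbalance. Numbers are hypothetical.

retail_revenue = {"isp_a": 100.0, "isp_b": 100.0}  # paid by each ISP's own users
terminated_tb = {"isp_a": 30.0, "isp_b": 70.0}     # TB delivered INTO each ISP
rate_per_tb = 0.50                                 # hypothetical settlement rate

# Bill and keep: traffic exchange changes nothing.
bill_and_keep = dict(retail_revenue)

# Settlement: isp_b terminates 40 TB more, so it's paid for the difference.
imbalance = terminated_tb["isp_b"] - terminated_tb["isp_a"]
settled = dict(retail_revenue)
settled["isp_a"] -= imbalance * rate_per_tb
settled["isp_b"] += imbalance * rate_per_tb

print(bill_and_keep)  # {'isp_a': 100.0, 'isp_b': 100.0}
print(settled)        # {'isp_a': 80.0, 'isp_b': 120.0}
```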

The other is national politics. Most OTTs are US companies, while the ISPs typically serve a specific national market (or, in the case of the EU, a continental one). As new uses for the Internet, new OTT services, grow, the demand for access capacity grows, and ISPs find themselves facing more traffic without (because of the lack of settlement) more revenue. Given that the OTTs driving the traffic are from another country, it’s no surprise that national-centric telcos are asking for relief, and may well get it.

Do they need it, though? That’s the real question, and the one that’s hardest to answer because it’s one of those thorny two-part things. The first part is whether the current profit-per-bit problems of the telcos are enduring problems that actually demand a new solution, and the second is whether that solution should be subsidies or something else.

My analysis of telco/ISP costs suggests that the problem with profit per bit is largely due to the need to deploy new access mechanisms to accommodate broadband Internet needs. A study Omdia did earlier this year showed that access infrastructure accounted for roughly a third of costs, the great majority of which came from the deployment of new media, including fiber, CATV, satellite, and 5G. In most major markets, that media shift is well along, which means that the contribution it’s made to cost (and the negative impact it’s had on profits) is likely to decline over time.

Wireless may be an exception here; 5G has driven a major investment in new infrastructure. However, can we blame OTT traffic for that? Operators have promoted a vision of 5G that’s beyond what simple ISP missions would demand, and I think most 5G costs can be attributed to that vision and not to Internet requirements. I think 6G risks being even more directed at non-Internet missions, or at least at missions not currently supported on the Internet.

But suppose that there is a continued profit-per-bit shortfall? Well, for one thing, it’s not totally clear that profit per bit constitutes a reasonable measure of return on infrastructure. You can clock an interface at a high bit rate if the media will support it, but does that mean your cost has risen and so your ROI has fallen? See my first point on how access costs work. For another thing, could operators take other steps to reduce costs or raise revenues? Finally, is this, in the end, really more about boosting stock prices for operators?
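A toy calculation (hypothetical numbers) shows why the metric can mislead: profit per bit falls as traffic grows even when absolute profit never changes.

```python
# Flat revenue and flat cost per subscriber, but delivered traffic keeps
# growing; profit per GB falls 9x while absolute profit is unchanged.
# All numbers are hypothetical.

revenue, cost = 100.0, 70.0          # monthly, per subscriber
for traffic_gb in (100, 300, 900):
    profit_per_gb = (revenue - cost) / traffic_gb
    print(f"{traffic_gb:>4} GB -> profit/GB = {profit_per_gb:.3f}")
# 100 GB -> 0.300, 300 GB -> 0.100, 900 GB -> 0.033
```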

I think that much of operator profit pressure from access modernization will tail off, so the problem is likely not acute in a pure financial sense. I also think that since operators are no longer given a guaranteed rate of return and no longer operate as an arm of the government, they’re simply private companies. You have to expect these private companies to act to support shareholder interest, which means that they have to be able to show the stock price has growth potential, there’s a good dividend to be paid, or both. Thus, I think there is a reason to want operators’ profits to be at least stable, and hopefully a bit better year over year. Can that be done?

Not decisively or quickly. I think cost management strategies could be helpful, but what can really be saved there at this point is problematic. The big thing is new revenue, and for that to happen we have to accept subsidies, replace bill-and-keep with some form of settlement, or encourage the operators to get into other areas. To avoid having those other areas erode investment in infrastructure, we need them to exploit infrastructure much as OTTs do.

The AT&T concept of “facilitating services” seems the best hope to add revenues without threatening infrastructure investment, but it probably will require some regulatory attention in some markets, and it surely demands having a good idea of what services need facilitating. That’s the big hole in the story, I think, and since operators were demonstrably inept in promoting specialized 5G applications (and even in knowing what ones might be credible) it’s hard to see how they can do better here, quickly.

My analysis suggests that it may be premature to jump into subsidies at this point, or even to try to change the Internet settlement model. However, it seems very likely that governments will have to take some action to prevent investment in access infrastructure from stalling, and so if subsidies are to be avoided they should start thinking about how to develop that facilitating services ecosystem with at least some facilitation on the part of regulators.

Are Telcos Failing the Enterprise?

An EY report on telcos has been drawing a lot of comment in the networking community. The main proposition is that the telcos have “failed to articulate a compelling value proposition” and that this is why 5G take-up by enterprises is low. The report, titled “EY Reimagining Industry Futures Study 2023”, covers enterprise issues with both 5G and IoT, and a lot more ground than the stories on it would suggest. I have my own enterprise data and telco views, and I’ll cover both aspects of the report here.

The first important point is that the report is really about enterprise views of technology (5G and AI in particular) and technology sources, a much broader topic. It suggests enterprises aren’t really that committed to either 5G or AI. Neither technology is reported to be adopted by even a quarter of enterprises, and 5G barely makes the 50% level for current or 1-year-out investments. My own data says that only 11% of enterprises are even considering “5G” if we take that to mean either private 5G or features of public cellular services that are unique to 5G, like network slicing. For AI, the number is 9%, but AI actually has more credible “considering” interest than 5G does if we go out beyond the one-year mark, the period within which actual budgeting is done. Since 1990, enterprises have tended to overstate their adoption of what they see as leading-edge technologies, and the difference between my numbers and the report’s may be due to this.

There’s another specific problem with the 5G data presented, and that’s the fact that the report mixes the adoption of private 5G with the adoption of public 5G and of specialized 5G features like network slicing. My data says that there are relatively few verticals where private 5G makes sense, and the report does show the greatest 5G interest in the only one of those verticals (Energy) that it covers. For the rest, the fact is that “5G” is likely to mean nothing more than the adoption of public 5G services, which happens when those services are supported both by the local telco and by smartphones. Thus, consumers are the hottest 5G market of the moment, and I submit that if that definition is used, the findings on enterprise use of 5G are not highly useful; enterprises use what operators and devices are set up for.

AI is even more problematic, given that the technology is highly visible and thus seen by enterprise planners as a kind of test of their being up-to-date. To expand on what I’ve noted above, over the last three decades, roughly a third of enterprises have consistently reported having adopted or being committed to adopting technologies that weren’t even commercially available. Then there’s the fact that vendors have been AI-washing their products and technologies to get media attention, and in many cases this has blurred the notion of what AI actually is. Given that there are actually at least three or four different models of AI (including generative AI like ChatGPT, and AI/ML), you can see that a bit of careful analysis is needed here, and it’s not provided.

My data says that if we adopt the broad notion of AI, well over 60% of enterprises have at least some of it now. It also says that 100% of enterprises believe they will be using more AI within 3 years, and two-thirds say they’ll use a lot more. However, they’re at a loss as to what specific type of AI they’d be using, who they’d get it from, or even what they’d be doing with it. In fact, only about half of enterprises can say exactly what AI they have in play, and they’re as likely to say they have none when they use it somewhere as to say the opposite.

My conclusion here is that there is little we can draw from the data on either 5G or AI. I’ve found that enterprise planning for technology doesn’t work the way that vendors or the media would like. Enterprises don’t spend much time or money exploring technologies until they have a specific mission, and then they tend to look at technologies that are offered in some form by vendors who are currently supporting other missions for them. Abstract technology research, like “what would I do with 5G or AI”, just doesn’t make sense to most enterprises. They’re mission-centric, in short, not tech-centric, in their planning, and that makes technology surveys challenging.

What about the points in the articles on telcos? It turns out that there’s a single section in the report that talks about the types of ICT (information technology, in US terms) providers and how they rank in terms of ability to supply enterprises with solutions. What’s interesting is that while the stories on the EY report focus on the negative views enterprises have of the telcos, telcos actually rank third or fourth out of nine categories. The top two provider types in all categories are IT services providers (professional services firms) and application and platform vendors. Furthermore, telcos rank at the top in terms of trust for IoT, but low (third from the bottom) for trust in digital transformation.

One broad truth you can probably see here is that this is consistent with the way that enterprises actually review technology options, as I described above. If you need a mission to start a review, it follows that you’d be most likely to focus on mission specialists. Telcos, and in fact network vendors/providers overall, have never been seen as mission specialists. They have, however, been consistently more trusted. In fact, since 1989, my surveys have put telcos as the second-most-trusted source of technology knowledge, second only to the experiences of a known peer in the same industry.

The question is whether this trust position could or should be leveraged by telcos to establish a better ranking in digital transformation. The stories on this report come out pretty strongly on the side of “they should”. I’m not so sure that’s true, and even if it is I’m far less sure that it’s possible.

There’s a progression associated with technology support for business missions. You always start with applications. You then move to what the applications run on/with, meaning the hardware, operating systems, and middleware. You then move to connectivity, and from that to the question of a public service or private facilities. I have never talked with an enterprise who believed that the telcos or network vendors were playing a role in those early steps. Thus, the question is whether telco aspirations in 5G would justify telcos’ taking or attempting an earlier engagement in this natural flow.

The report’s “Next steps for 5G service providers” jumps to the point that they need to do just that. It offers four key actions, and IMHO none of them really provide the pathway to success. What is that pathway? Support for the mission that’s the kicking-off point for any successful technology project. How does 5G support the mission? Obviously that question has to be answered, but how do telcos do the answering? Do they build professional services organizations? That’s the conclusion that the report’s data reaches; it’s the applications that drive the bus, and you can’t talk applications without talking about what they’re intended to do for the enterprise.

What I think the report demonstrates is that telcos need to be ready to identify the mission connections for things like 5G in a credible way. The problem with that is that such a path would surely result in having little of interest to say to the media, and to investors. “I promote 5G as a faster and potentially more ubiquitous form of cellular service” isn’t going to get good ink or look good to Wall Street. The prevailing theory for telcos, I think, is to say what you need to say to make the most of early media/market interest in an emerging technology, and then presume that if there really is any substance to the stuff you’ll figure out how to use it.

Cynical as that sounds (even from me), it’s probably the right approach. Telcos will never be able to support the early pieces of that technology-to-mission connection flow. What they need to do, and have been doing, is building buzz that could encourage the participation of those who could do what’s necessary. One could argue that things like developer programs could then help to build the flow of project progression. However, that would be true only if a technology (like 5G) were promoted for a reason that actually had a real mission connection down the line. I don’t think that telcos ever established that to be the case, not because they were inept but because it wasn’t true.

As one section of the EY report says, these days everything in tech is about building a cooperative ecosystem of players. In order for 5G or AI to become what people want to believe they will become, there has to be a real revolution created by them. We still can’t say what that revolution is for either 5G or AI, and I don’t think that’s the fault of the telcos. It’s the fault of the environment within which we develop and validate new technologies. You don’t get realism unless you value it, and valuing realism is surely not common in the tech world today.

Startups, Exits, and Bubbles

There is no question that the 2022-2023 economic bender has impacted startups. The mere fact that interest rates have been ballooning is enough to change the economics of venture capital, and the failure of Silicon Valley Bank didn’t help either. Now, we’re starting to see questions about the thing that VCs adore above all else, the exit strategy.

Every startup is funded on the assumption that it will make a profit for its investors, the “angels” and VCs that provide the initial capital. Even those who elect to start a company or participate in a new one are usually lured by the hope of making a killing on “the exit”. The exit, of course, means sale of the company to a major player or a successful initial public offering (IPO). The problem is that exit values have been plummeting, with 2022 looking bad, and 2023 looking really bad. Everyone involved wants to see a reversal of this trend, but that’s going to be difficult without an understanding of what’s causing it, and even then it may be that the causal factors really can’t be reversed. We might well have been in a venture bubble.

I’ve worked with tech startups for decades, and over that period there are things that have remained fairly constant for the space and things that have changed, sometimes radically. From the very first, the startup and VC space has been focused on generating buzz, or hype. A startup is a quick way of entering a market area, so if there’s potential for a new product or service, the VCs will try to plant their flag there and exploit it. However, it doesn’t take rocket science to understand that a big company could easily commit the same resource level (or more) to a tech area and build their own stuff there, given the time. Thus, a big part of startup-think is to create the perception that there is no time, that the new area is hot right now and those who hesitate (and try to do it themselves) are lost.

Shortly after the VC space really got going, I saw the growth of a related concept, the idea that buzz was furthered by competition. If you’re a widget startup, you need to promote yourself for sure, but you also need to promote widgets. A “new” hot product area has to be validated or the whole startup thing is a waste of time because nobody will buy in. Having multiple startups in a space spreads the cost of validating the space among the group, and the competition among those startups gets the media excited. There have been stories of back-room VC deals where a group of VCs agree to pool ideas, with each having an opportunity to be the winner in a space but requiring them to take a secondary position in promoting other spaces with an entry, knowing other VCs would win there. Needless to say, the amount of hype increased here, and has increased ever since.

By the end of the decade of the 2000s, it was clear that VC returns were less rosy and there was more and more interest in creating companies on a shoestring. Social media provided the opportunity. You don’t build a product, you launch a website. Amazon’s AWS was the favored platform for these social-media startups because they expensed the software and hosting rather than having to buy capital equipment and build data centers. By 2010 it was harder to fund “product” startups, and today we obviously still see things favoring the social-and-Internet startup types.

There’s an underlying problem behind all of this, though. Any “market” has a saturation point. There is a limit to the amount that will be spent on routers, servers, toothpaste, and deodorant. There’s also a limited amount that will be spent on startups. In the social media area, one thing that defined the last two years was the realization that social media and Internet giants like Alphabet/Google, Facebook/Meta, and Twitter were showing signs of hitting the wall. Servers and network gear also seemed under pressure, and from where we sit in 2023 all that seems self-evident. That’s what is creating the startup pressure, because it threatens the exit strategies.

So what now? If we go back to my market-saturation point, one thing we have to accept is that it’s not possible to somehow radically increase the amount of toothpaste or routers people will buy. If you want to make more money, it’s best to open a market area that’s not already at or approaching saturation. Ad-sponsored OTT services are already there. The only thing you could hope to do is to steal market share, and that’s a very complex, time-consuming, and expensive proposition that frankly isn’t all that appealing to VCs.

The problem is that it’s likely that any new market area will be more expensive, more time-consuming, and more complex than VCs have become accustomed to dealing with. If we were to explore the product startups in the tech space, we’d see that the majority of those founded over the last five years have favored spending additional funding rounds on sales/marketing rather than on product development. While the VCs have always favored startups with “laser focus” to contain costs and risks, that’s become an obsession today. They want a concept that’s largely complete and they want their startup to play it, not improve or advance it. Money to market and sell can pour in, but time is also passing. The problem is that if success takes long enough, the underlying product concept can run out of gas. Valuations, the implied value of a company created when a new funding round is closed, can rise and rise until there’s no exit possible because the concept isn’t valuable enough.
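Here’s a toy illustration (all figures hypothetical) of how a funding round implies a valuation, and how successive rounds can price a startup past any plausible acquirer:

```python
# Post-money valuation: if an investor pays $20M for 10% of the company,
# the whole company is implicitly "worth" $200M. Figures are hypothetical.

def post_money(amount_raised, equity_fraction_sold):
    return amount_raised / equity_fraction_sold

rounds = [(5e6, 0.25), (20e6, 0.10), (60e6, 0.06)]
for amount, fraction in rounds:
    print(f"raise ${amount/1e6:.0f}M for {fraction:.0%} -> "
          f"post-money ${post_money(amount, fraction)/1e6:.0f}M")
# $20M -> $200M -> $1,000M: an exit now has to beat the last post-money
# for late investors to profit, and the buyer list at $1B+ is short.
```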

That’s what creates the real risk to “unicorn” startups, the ones that have been successful in raising a boatload of money and generating a high implied valuation. Prior to COVID we were in a period of some market exuberance. Tech, during COVID, was a bright spot because work-from-home and stay-at-home generated tech-centric needs. Now we’re past that, and so exuberance has faded, and startups and their concepts will have to deliver in the real world. That’s the bad news; returning to reality is always a shock.

The good news is that there’s reality to return to. As I’ve pointed out, between the consumer space and the business space, we have nearly a trillion dollars in addressable benefits to justify additional spending. The problem is defining ways of addressing those benefits and getting the money they represent. That this can’t be done (easily or perhaps at all) using current paradigms should be obvious; we would have done it already.

My personal view is that the largest single opportunity set associated with that trillion dollars is that of “digital twin” metaverses, but there are a lot of experienced and talented software architects out there who may have another, better, idea. My main point is that the best place to direct new projects to get at that money would be the place that gave you the most money from a single concept.

AI is an example on both the positive and negative side. Anyone who doesn’t believe AI is important is deluding themselves, but so is anyone who believes it’s not already overhyped. The danger in hype, for startups and for the market, is that it easily diverts efforts and funding from spaces that could be highly valuable to those that are simply eye candy. I think we can assume that there will be dozens of AI startups. I’m confident that most of them will fail, and it’s likely IMHO that the AI bubble will only make exit strategies harder in the long run.

For companies who have already launched, and who may right now be contemplating what to do about an exit strategy, my recommendation is to look for credible technology extensions that would open new opportunity areas. In a financially constrained market of any sort, you want to be an “evolutionary revolutionary”, someone who extends a credible and visible concept in an arresting new way. There are opportunities to do that out there too.

Doing nothing at this point is not an option. “Waiting for recovery” as a choice presupposes not only that there will be one (there will be) but also that the recovery will return us to the familiar and comfortable status quo, and it will not. The outlines of the future are visible now, but the future isn’t the present, or the past, and a startup/unicorn who believes that is exposed to a terrible risk.