The Uneasy Relationship Between ChatGPT and Search

I’ve already blogged about my own experience with the wildly popular ChatGPT, but that analysis raises a question also posed recently online: could ChatGPT be a threat to search in general, and to Google/Alphabet’s revenue model in particular? The answer relates to some of the points I made in that earlier blog, but it also depends on broader considerations.

The key point of the article I reference is “However, with ChatGPT’s introduction, Google could quickly be pushed into irrelevancy as users throng for more simplistic answers than indexed pages.” A quest for more simplistic answers is the root of the issue. Large-language-model AI engines like ChatGPT rely on creating composite views by analyzing articulations. That means that they can easily digest a vast amount of material and frame a free-text summary of what they’ve found. You can see how that works in the examples I provided in my blog. The result is what you could call “guided consensus”, meaning text that’s produced by constraining all the relevant articulations based on the query. Queries like “5G’s potential” and “5G’s potential based on CIMI Corporation views” likely produce their results by collecting articulations under different constraints.

It’s this concept of consensus and articulation that’s important. I characterized ChatGPT’s output as being shallow, lacking insight. I heard from two long-time friends, one a Wall Street type and one a former editor-in-chief of a tech publication, and got surprisingly similar comments. The former noted that the ChatGPT results would have been considered good analysis on Wall Street, and the latter that they’d have been happy had reporters created the story that ChatGPT did. What this says is that “shallowness” isn’t just an attribute of the kind of stuff students might want to generate, it’s perfectly acceptable in many other spaces because mass market consumption doesn’t depend on insight quality.

I could have done a search on the same terms I sent to ChatGPT, and had I done that I’d have gotten (as the article suggests) a list of indexed pages that hit the key terms. From those, I could have produced a similarly shallow piece, but with more work needed to check each link, develop a viewpoint, and then turn that into text. But I could also have done the same search, looked deeper, and created insight. What my two friends demonstrated was that even in fields we might think are technical, insight takes second place to speed of producing pablum. That means that all those who want facile shallowness would in fact be likely to flee from search to ChatGPT. So search and Google are doomed? Not so fast.

First, nobody is going to build a ChatGPT-like tool without any hope of financial gain. They’re even less likely to deploy it at a scale that would threaten search, because the cost of running all the queries would be daunting. The future of public, open chatbots is the present of search, meaning ad sponsorship. I can see Google, Microsoft, and others tuning their own chatbots to serve ads. But even that wouldn’t be likely to replace search, not because the results would be shallow (that’s fine with most), but because they’d necessarily be tainted.

You can stick ad-sponsored links in search results easily (and everyone does that, despite the fact that most people ignore them). You could also stick them in a chatbot text response through a YouTube-like pre-roll before the user sees the response. How many ads would a user be willing to watch, though? I think one might be a stretch, but surely not more than that. How many are offered in search results? A quick test on my part yielded an average of six on the first page and five on the second. Eleven to one? Unless advertisers paid more for ChatGPT-type ads, I don’t think there’s enough revenue there to build out a mass-market infrastructure.
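The eleven-to-one gap above can be made concrete with a rough back-of-envelope sketch. The ad counts come from the informal test in the text; the per-impression rate below is a purely hypothetical placeholder, not a real market figure:

```python
# Back-of-envelope comparison of ad inventory per query:
# search shows many sponsored links; a chatbot pre-roll shows one ad.
# Ad counts come from the informal test in the text; the rate is a
# hypothetical placeholder, not a real market figure.

search_ads_per_query = 11      # ~6 on the first results page + ~5 on the second
chatbot_ads_per_query = 1      # a single pre-roll before the answer

def revenue_per_query(ads_shown: int, rate_per_ad: float) -> float:
    """Revenue from one query at a flat per-impression rate."""
    return ads_shown * rate_per_ad

hypothetical_rate = 0.01       # $ per ad impression (placeholder)
search_rev = revenue_per_query(search_ads_per_query, hypothetical_rate)
chatbot_rev = revenue_per_query(chatbot_ads_per_query, hypothetical_rate)

# How much more a single chatbot ad would have to fetch to match search:
required_premium = search_rev / chatbot_rev
print(f"Chatbot ads would need to earn {required_premium:.0f}x per impression")
```

At equal rates, a single pre-roll would have to command roughly eleven times the price of a search ad just to break even on inventory, before even counting the higher compute cost per chatbot query.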

So the real pressure of ad sponsorship would require that the results be biased in favor of an advertiser. Ask ChatGPT who makes the best routers, and it will reply with a qualifier that the question is complex, followed by a list that’s not exactly complete or useful. Might a router vendor buy a mention in that sort of result, maybe even to be featured? It’s hard to see how that wouldn’t be a natural result, because without that capability the value to advertisers would be minimal and the revenue to the chatbot provider wouldn’t cover costs, much less create profits.

Suppose that advertisers could buy their way onto that list. Ask “who makes the best IP router” and it responds with a list whose order depends on the advertisers’ contribution. Suppose somebody buys the top-listed router and it turns out to be junk, even a fraud? At the very least, this is going to create a barrage of bad publicity for the chatbot, and it’s not beyond the realm of possibility that it would generate lawsuits. There is a difference between doing a search for “best IP router” and getting a list of mentions of the phrase, and getting a text answer to my chatbot question. The former is research, and the latter looks for all the world like an opinion. That difference is what makes ChatGPT valuable, but also what makes it a potential risk.

A bigger question, though, is whether consumers would trust something like ChatGPT if its generated text was determined in part by payments from advertisers. I asked an enterprise CIO who contacted me on my original ChatGPT blog, and he said “Why would I? I can get the vendors to lie to me face to face if it’s lies I want.” OK, that’s cynical in a sense, but it’s also true. Would you pay for a report whose contents were determined by the highest bidder?

The biggest question, though, is whether anyone would advertise via chatbot given all of this. Students writing papers or taking exams aren’t very good prospects for sales, after all. Would Street analysts publish research knowing that some of the material was determined by the companies involved and not objective? Would a tech publication use a story about a product announcement that was biased because the company involved paid for prime handling? Any doubts in these areas could contaminate the ad sponsorship model, which would then force chatbots to charge for their results. We know how that would go; everything on the Internet (according to user-think) is supposed to be free.

I think that the question of how a chatbot can be profitable is the biggest one in chatbot-based AI. If ad sponsorship doesn’t work, then I think there’s little chance that chatbot use would rise to the point where it threatened search or Google. If it does, then of course Google and others in the search game would simply deploy a kind of chatbot front-end to their current search process and, within it, adopt the workable ad sponsorship model. Search already creates the mass of data with web crawls, and the constraints via the search terms. A little add-on and it could generate reports and textual answers, and still be the kind of search we’re used to. A third-party chatbot would have to pay to collect all that data, and then process the user queries and deliver responses at the volume needed to keep people engaged. How often would you use a search engine if you couldn’t get on it because it was overloaded?

So forget chatbots? No, because it may be as interesting to see how providers like OpenAI think they can monetize this sort of thing as it is to see what tests one can pass with it. Because monetization is the only test that will really matter in the end.