How big is AI Search, Rise of Claude, ChatGPT fan-out shake-up | #MonthlyLandwehr
Malte is CPO & CMO at Peec AI, one of the leading AI search analytics platforms, and one of the brightest minds in the field. So I want to take all the news, new research and interesting developments and get his perspective on them.
Want to get your brand into ChatGPT & Co.? Try out Peec AI, my favorite AI Search analytics platform
What we covered in this episode
- Ethan Smith’s “AI is much bigger than you think” study: his 20% market share figure, how it compares to other estimates, and why the industry resists the higher numbers
- Claude’s rapid rise: 3x growth Jan–March, overtaking Perplexity, the mix of strong product + OpenAI missteps + Anthropic’s PR savvy, and the professional vs. consumer usage split
- Why people still treat AI search as a “black box”: the parallel to early SEO reverse-engineering, lack of official spokespeople, and influx of non-SEO “experts” muddying the waters
- ChatGPT’s fan-out query changes (GPT 5.3 → 5.4): site search becoming more common, brand bias implications, and evidence that Google is still the primary grounding source
- Digital PR as the key lever for AI visibility: why technical SEO alone isn’t enough for LLM citations, the importance of matching content formats (listicles vs. profiles vs. how-tos) to what actually gets cited, and needing PR with an AEO/SEO mindset
Check out the full #MonthlyLandwehr episode
Check out the episode on YouTube, Spotify, or Apple Podcasts.
Auto-generated Transcript
Niklas Buschner (00:01.279)
It is time again for my favorite format in this podcast, the Masters of Search podcast. It is time for Monthly Landwehr. And for Monthly Landwehr, I have my good friend Malte Landwehr here with me, who is CMO and CPO at Peec AI, one of the leading AI search analytics platforms. And the good thing is that we have a full pad with a lot of things we want to talk about. And the even better thing is that Malte is super relaxed, super chill, because
You just came back from vacation, right?
Malte Landwehr (00:32.94)
Yeah, I just spent two weeks in the US, one week in Florida visiting my family and then one week cruising around the Caribbean.
Niklas Buschner (00:41.985)
Awesome. And could you stop thinking about AI search a little bit?
Malte Landwehr (00:47.272)
Sometimes.
Niklas Buschner (00:48.793)
Okay, because now is the time to go into the trenches again. So I have a couple of things that we brought together that happened during the last weeks. And I would like to start with an analysis that I think our mutual friend Ethan Smith from Graphite, from the great San Francisco, did, which is titled "AI is much bigger than you think,"
where he basically looked at the hypothesis of whether AI, or AI search, and organic search cannibalize each other. So whether it's the same pie or the pie is getting bigger. First question: did you also find this significant? Like, were you surprised by his findings?
Malte Landwehr (01:42.326)
Yes, yes I was. Ethan is known for his very rigorous testing, and usually when I do like a quick analysis, just doing something, then he's the one who's very critical: can you replicate it? Can you publish the data? How real is this number? And he only publishes things when he is very, very, very certain that everything is correct.
And his study is a bit of an outlier. Like if you look at other studies, SEO Clarity arrived at like 15% share, Ahrefs and Peec AI arrived at 12% share for ChatGPT, and then Ethan is at like 20% with his study. And it's in the same ballpark,
but it's a higher number, right? And then we also have the other end of the spectrum, where we have Rand Fishkin from SparkToro, who thinks it's like 4 to 5 percent. And then we have some people in between, like, I don't know, Sistrix did a study that's somewhere in between. I'm sure there are some others. And yeah, it's interesting to see that somebody who is as rigorous in his testing and numbers as Ethan came up with the highest number here.
But there must be something to it if he is doing it because he’s not a hype guy, not somebody who exaggerates. So yeah, I was surprised that he came up with 20%.
Niklas Buschner (03:16.537)
Before we dive into the data a little bit more, let’s take a quick step back for everybody that is not familiar with the study. Could you quickly explain to our listeners what Ethan analyzed? Like what was the hypothesis that he looked at?
Malte Landwehr (03:31.778)
Yeah, I mean, the question is always what percentage of the search market is actually AI search, or what percentage of the search market is actually ChatGPT. And then different studies take different approaches. Like, some look for the market share, some look at the size of ChatGPT in comparison to Google, which can also explain some of the differences in the percentages. And I believe he used Similarweb data, if I remember correctly.
And then of course you always have to make some assumptions, because for ChatGPT we don't have search volume data, right? We have the total usage, and then we can make assumptions, or use data as far as it's available, about what percentage of the prompts is probably search. Then you have the problem that in ChatGPT, one prompt can replace 20 Google searches. So how do you weight these against each other? And then of course you have the complexity that, depending on what study you trust, in Google itself
15 to 50 percent of searches already contain an AI Overview, and based on most external data it's very difficult to estimate the size of AI Mode in Google. So yeah, that is what he tried to measure here: how big is AI search?
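The assumptions Malte describes can be made explicit in a back-of-envelope calculation. The sketch below is purely illustrative: every input number is a hypothetical placeholder, not a figure from Ethan's study or any other study mentioned in the episode.

```python
# Back-of-envelope sketch of an AI-search market-share estimate.
# All input numbers are hypothetical placeholders, NOT real figures.

def ai_search_share(prompts_per_day: float,
                    search_like_fraction: float,
                    google_searches_per_day: float,
                    searches_replaced_per_prompt: float = 1.0) -> float:
    """Estimate AI search's share of the combined search market.

    prompts_per_day: total daily ChatGPT prompts (assumed)
    search_like_fraction: fraction of prompts that are search-like (assumed)
    searches_replaced_per_prompt: weighting factor, since one prompt
        can replace several classic Google searches (assumed)
    """
    ai_volume = prompts_per_day * search_like_fraction * searches_replaced_per_prompt
    return ai_volume / (ai_volume + google_searches_per_day)

# Made-up example: 2.5B prompts/day, 30% search-like, each prompt
# weighted as 3 classic searches, vs. 14B Google searches/day.
share = ai_search_share(2.5e9, 0.30, 14e9, searches_replaced_per_prompt=3.0)
print(f"{share:.1%}")  # → 13.8%
```

The point of the sketch is that the final percentage is extremely sensitive to the weighting factor, which is exactly where the published studies diverge.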
Niklas Buschner (04:47.437)
And what do you think, why hasn't this been done before? Because my impression was that the question he tried to answer is super obvious. Like, simply put, how big is AI in comparison to search? Like, how big is AI search in comparison to classic organic search? Why didn't people look at this before? Because the finding is pretty substantial for our industry, right?
Malte Landwehr (05:12.287)
Yeah, I mean, people did, right? Like, the Peec AI number I cited, that was actually me doing it a few months ago. SEO Clarity, Ahrefs, Sistrix, all tried to do it. But of course, Ethan's study is, I think, the most robust. He invested the most time there. And it's also the most recent one, because some of the numbers I cited are a couple of months old, from other studies. So I think others have tried.
But usually when anybody publishes this, people tend to dismiss it. Like, they don't see it in their referral traffic, so it can't be true. But of course, if you compare a system that delivers clicks, which is Google, with a system that delivers answers, like ChatGPT, of course you cannot compare them based on the traffic that you receive on your website. But if you look at companies that do the attribution based on asking people, where did you hear about us?
you will notice that 20% is a very common number for them, even 20% of overall traffic being AI, or being AI search, to be specific. So it really depends on whom you ask whether they believe it. On the other hand, people say, this can't be true, I get my clicks from Google, ChatGPT can only be like 1% of traffic. So I think it's been done before.
He did it a bit more robustly, a bit more completely. Maybe finally this number will become mainstream. But of course there are also people like Rand Fishkin already disagreeing with it, which is not helping with getting the truth out there.
Niklas Buschner (06:50.743)
And why do you think, for example, Rand Fishkin can disagree?
Malte Landwehr (06:55.437)
I think Rand looks a little bit differently at the data.
I have nothing but respect for Rand, but I mean, he's also a person who says things like "this email is written without AI," right? Like, he has a bit of a bias maybe, and maybe I have the same bias, just the other way, in that I'm pro AI. So I don't want to dismiss his point of view, but all the other studies were done by respected people, right? Like the Sistrix numbers from Johannes Beus, the Ahrefs numbers, the SEO Clarity numbers.
These are all respected people in the industry. They all reached multiple times higher numbers, like double the number of Rand Fishkin, three times those numbers. I just believe he has a very pessimistic view on AI, and maybe that influences how he interprets the data, or maybe he just looks at a different data set.
Niklas Buschner (07:55.138)
And would you agree that Ethan's study, and also probably the Peec AI study and other data, somehow ran contrary to the general sentiment in the market? So that the market would rather have the idea that AI is not as big as some people say it is?
Malte Landwehr (08:18.445)
I mean, of course, all established SEO players, all companies that depend on SEO traffic, all publishers, all affiliates, they don't like this narrative, right? They want clicks from traditional web search to remain there. And of course, also people working at Google, for example, probably don't want this narrative that search is happening elsewhere, because that would mean that advertising budgets will also shift if the narrative or the perception shifts.
And of course, many people are afraid of a world that is changing. Because you used to measure your work with the visibility of a website in a search engine, or with the number of clicks or leads or conversions that followed a click, which is easily attributable. Of course, with AI search everything is different. So I believe people prefer the lower numbers, the lower studies that say like 1%, 4%, 5%, because it gives them more
safety. So yeah, definitely it's a narrative violation for many people if you say it's 10 percent, it's 20 percent. They need it to be five percent to be able to sleep at night.
Niklas Buschner (09:29.017)
So you're obviously surrounded by people that also believe in the narrative that AI, no matter if we talk about AI as a tool or as a channel, is significant, is substantial, and will have huge impact. What would you say to someone that is still maybe caught in this old narrative, or that maybe correctly sees the threat for their business model and the threat for their
previous SEO motion that was dependent on clicks? What would you say to them, or how would you help them to maybe also come to the conclusion that they have to act and that they have to move in a certain direction? Because I also often talk to people that are still hesitant, skeptical, unsure.
Malte Landwehr (10:19.309)
I mean, sharing studies like the ones that I cited above can be helpful, I think. And if you look at the timeline, you will also see that, apart from the Rand Fishkin study, the numbers increased over time. Personally, I think that my original study that arrived at the 12% already explains the misconception, but apparently it didn't convince people. So maybe I did that wrong.
So I may also be the wrong person to ask this. What I find very helpful is when people look at self-reported attributions, so asking every new user, asking every new lead, hey, where did you hear about us? And then seeing where AI search ends up. Of course, this is not perfect data. And especially people coming from performance marketing don't like it. They want to attribute everything to a session.
But people with a brand marketing background are much more open to it. And I would also suggest just looking at the most successful people in your network and looking at how they work. And I'm very sure, for the majority of people, they will realize that these people use AI a lot. I don't know anybody who has started using Claude, started digging into MCPs, command line interfaces, who then went back to say, it's all trash, I don't want to do it.
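The self-reported attribution idea is simple to operationalize: map free-text "Where did you hear about us?" answers onto channels and compute each channel's share. The sketch below is a hypothetical illustration; the keyword lists are my own guesses, not a standard taxonomy.

```python
from collections import Counter

# Illustrative keyword buckets for free-text attribution answers.
# These lists are assumptions for the sketch, not an exhaustive taxonomy.
CHANNELS = {
    "ai_search": ["chatgpt", "claude", "perplexity", "gemini", "ai search"],
    "google": ["google", "search engine"],
    "social": ["linkedin", "instagram", "tiktok"],
}

def classify(answer: str) -> str:
    """Map one survey answer to the first matching channel."""
    text = answer.lower()
    for channel, keywords in CHANNELS.items():
        if any(k in text for k in keywords):
            return channel
    return "other"

def channel_shares(answers):
    """Fraction of answers attributed to each channel."""
    counts = Counter(classify(a) for a in answers)
    total = sum(counts.values())
    return {channel: count / total for channel, count in counts.items()}

answers = ["I asked ChatGPT", "Google", "a colleague", "found you via Perplexity"]
print(channel_shares(answers))
```

As Malte says, this is not perfect data, but a rough tally like this is often enough to show whether AI search is in the 1% or the 20% range for your own leads.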
The people who looked at AI and didn't adopt it usually used, like, I don't know, a free version of ChatGPT or Perplexity or something a year ago, which, yeah, was sometimes trash. But nowadays it is just part of everybody's daily business if they care about efficiency and access to information. And I just cannot see that going away. I mean, it's not just the search aspect, right? We also had feedback like,
Yeah, I put your contract into Claude and asked it to look for problems. It said I can sign this contract. So I signed it. Like, okay. So apparently it’s important and people use it for important decisions.
Niklas Buschner (12:24.794)
Speaking of Claude, perfect unscripted bridge here from your side. How surprised were you by the recent rise of Claude?
Malte Landwehr (12:36.269)
Very. I mean, it is by far the best system you can use right now. But I think a lot of the hype from, I would say, average users who are not power users comes from the debate in the US about supporting the US military, whether it's about not supporting them or complying with orders from the White House. And I did not expect that to happen.
And then also in general, OpenAI somehow dropped the ball on marketing, on PR, on how they are perceived. Right now, for the second time, there are these allegations against Sam Altman in the media, and Anthropic just managed to have constantly good PR. And I mean, it’s not organic, right? Like, for example, right now, there is the story trending that
There was a new model that did a jailbreak and got internet access and then sent an email to a researcher who was eating a sandwich. I mean, these stories do not just happen, right? Like, somebody is seeding them. And Anthropic has been incredibly good in the past at doing that and constantly having these hype stories: an LLM broke this benchmark, or tricked its way around this security thing, and…
I think it's very noticeable that a few months ago, more and more Anthropic employees became active on X, formerly Twitter, and started gaining visibility. And for some reason, OpenAI is just not doing anything, when the reality is that, as a daily driver for people who don't need MCPs or don't need engineering support with Claude Code,
OpenAI, or ChatGPT, is still a very good solution for your everyday needs. But yeah, they somehow dropped the ball there. It started, I think, around January, when the user numbers potentially started to decline. And then it really, really accelerated when there was this White House military debate, where OpenAI and Anthropic were perceived very, very differently by the public.
Niklas Buschner (14:55.406)
Hmm, maybe for everybody that is listening and that has somehow heard about Claude, and okay, Claude is somehow growing now: I quickly checked the numbers in Similarweb, and luckily we already have March numbers available. And if you look at Claude.ai, which is obviously just the website, but looking at all traffic worldwide — so this does not include the mobile app, but at least it's desktop and mobile — I can see that
the visits to Claude.ai have grown 3x from January to March, which is pretty substantial. And then going back to one of your, maybe not favorite, but one of your clearest hypotheses, that Perplexity is basically in constant decline: in January, Claude and Perplexity were still somehow head to head. There had already been a little gap, but now Perplexity is just
gradually declining, whereas Claude is rising. So would you say that for Claude it's basically just the perfect storm? So they have a superior product, and then all the rest just fell into place: on the one hand forced by them, like by PR stories, but on the other hand also just by randomness, like OpenAI pulling out of certain things, the whole Department of Defense thing happening.
Malte Landwehr (16:22.469)
I think it's a mixture of things. So their model is in a very good place right now. They built a very good ecosystem around it. And then, yeah, OpenAI started messing up. I think overtaking Perplexity was not a big challenge, just because Perplexity can do nothing. And, yeah, I mean, if you look at the other LLMs, like, Meta is not really doing anything.
Gemini is still growing, but growing less extremely than Claude, though it's still significantly bigger if you look at desktop usage. And then Grok has its own problems, with a founder that might discourage some people from using it. So, yeah, I think it's just a very, very good situation for Anthropic at the moment.
Niklas Buschner (17:14.011)
How do you look at the distribution of users? Because I had a conversation where someone asked me, but is this all professional usage, or is it also consumer usage? And I found it an interesting thought, because Claude is, I would say, mostly perceived as a professional tool, whereas ChatGPT has somehow — I don't know if it was intended or if it just happened — gotten this perception of being rather a consumer tool. So how do you look at
the rise of Claude between professional usage and consumer usage?
Malte Landwehr (17:47.786)
So based on the numbers that I am seeing, there is a much higher percentage of professional usage on Claude. Like Gemini is still about four times as big as Claude, if you look at web usage. But in terms of clicks that can be attributed with the referrer, I see professional websites in the B2B space since last month getting the same amount of clicks from Gemini and Claude.
So the professional usage on Claude must be higher. And of course, a lot of their community is very engineering driven, and then why would they use a second LLM for their other use cases? So yes, it's higher, but this increase over the last three months, I think, comes from regular users discovering it and starting to use it.
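The referrer-based attribution Malte describes can be sketched as a simple lookup from referrer hostname to AI source. The hostname list below is an assumption based on the public web domains of these products; real-world referrer policies vary, and many AI clients strip or omit the referrer entirely.

```python
from urllib.parse import urlparse

# Illustrative mapping of referrer hostnames to AI assistants.
# The hostnames are assumptions and may change; treat as a sketch.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
    "copilot.microsoft.com": "Copilot",
}

def ai_source(referrer: str):
    """Return the AI assistant name for a referrer URL, or None."""
    host = urlparse(referrer).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    return AI_REFERRERS.get(host)

print(ai_source("https://chatgpt.com/"))                 # → ChatGPT
print(ai_source("https://www.perplexity.ai/search?q=x")) # → Perplexity
print(ai_source("https://www.google.com/"))              # → None
```

In an analytics pipeline you would run this over each session's referrer and then compare click counts per AI source, which is roughly the Gemini-vs-Claude comparison described above.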
Niklas Buschner (18:41.371)
How do you look at the whole AI search aspect with Claude? Have you already had the chance to do some first tests and compare how it behaves compared to ChatGPT? Which domains it cites, etc.? If I look at, for example, the big publicly available studies around domains being cited, it's oftentimes, at least from my perception, maybe it's wrong,
looking at AI Mode, AI Overviews, now Gemini, and then still Perplexity, which is somehow legacy, and then ChatGPT. But I haven't seen a study yet on citations in Claude, for example, or more of a deep dive into the AI search analytics aspect. So how do you think about that, or do you already have some thoughts or insights?
Malte Landwehr (19:33.262)
I mean, Claude is much more difficult to get data out of because all the other systems you can go to their website without a login, you can perform a search, you can come back with a different IP, you can do it again. Claude requires a login. That is the first level of complexity. So unless you want to maintain tens of thousands of free Claude accounts,
you need to use the API, which is much, much more expensive than using a proxy server and going to chatgpt.com, for example, or Perplexity or Gemini. So that's why it's more difficult to obtain very large amounts of data. And I believe that's why nobody has done a study so far. From what I'm seeing, it's not a crazy outlier compared to other LLMs.
Usually when I look at a lot of LLMs, it's Microsoft Copilot that's the big outlier in terms of what sources it's using. And I also believe if you monitor data for three LLMs and you optimize based on that, very, very likely you will increase your visibility in all the major LLMs. So I would not obsess too much over
whether Claude is citing Wikipedia a little bit less often or more often, because, I mean, at the end of the day, it doesn't have 100% market share, right? It's at 25% of Gemini right now in terms of usage of the website on desktop. So yes, it's big. It's maybe the number three now, or it probably is the number three now, but…
it's not like it has 90% market share and you should stop optimizing for other systems. I mean, that would be a little bit like saying, I don't know, Bing has 10% market share, and let's say Bing would not rank Reddit as well as Google. You wouldn't stop optimizing on Reddit because of Bing, right? You would still do it for Google. So yeah, from what I'm seeing, the difference is not too extreme between Claude and other systems,
Malte Landwehr (21:48.531)
and I don't think people should adapt their strategy because of the rise of Claude over the last couple of months.
Niklas Buschner (21:55.012)
But obviously the citation data is what people obsess about. At least a lot of people cite these studies where Wikipedia is like a top source, or Reddit is a top source. And I recently had multiple discussions where the same argument basically came up around AI search, which was: yeah, we don't really know yet how this all works. And I thought,
we have a good idea already, right? So what do you think, for example: why do people even in the SEO industry, so people that have SEO jobs — and I mean, they don't have GEO jobs or AEO jobs yet — why do people still think about or perceive AI search as this black box where we don't really know how this all works?
Malte Landwehr (22:46.005)
I don't get it either, because with traditional SEO it was also all about reverse engineering and finding out what works and testing and doing it. And then you do 10 things, and for maybe two of them you know that they are good, and for eight of them you think that they are good. In reality, probably out of these eight things, two are a complete waste of time and one is negative, but the other five were still good. So you did seven good things in total out of ten, so you increased your visibility.
And you can use the exact same pattern with AI Search. And I think with SEO, once there was a best practice, there were 500 influencers repeating it and saying, yeah, this is the best practice, whether it was correct or not, right? Because they also often promote things that are wrong. But it gave people this reassurance, ah, OK, if 500 people say it, then it must be true. But with AI Search,
you have, I don’t know, 10, 20 people telling you how it works. And if you don’t know them, if you have not been following them for the last 10 years, you might not believe them. But I’m also very confused by how often smart SEOs say, yeah, we don’t know how this works. Nobody has figured this out. When I constantly see companies doing what I assume everybody knows what to do.
and increasing the number of leads, increasing the number of clicks, making money off of this. So I really don't know why people always say, we don't know what to do. And I mean, I also know you have clients who have done very, very well in AI search. And I just, I really don't get it. I'm very confused by it. I think one aspect can be that with Google,
you had people like John Mueller or Matt Cutts actually say, yeah, this works, or this doesn't work. And we don't have that with AI search, apart from Bing a little bit sometimes, but otherwise nothing, right? There's zero. But the thing is, these spokespeople from search engines, they were never giving the best SEO advice, right? They were basically trying to prevent large brands from doing something really stupid. But
Malte Landwehr (25:11.189)
Otherwise, it was pretty basic advice. And if you wanted to be in the top 10% of SEOs, you sometimes had to do something that they say doesn't work. Because they don't have the incentive to help people with SEO. They have the incentive to prevent large brands from spamming, and they have the incentive to discourage spammers from doing stuff that makes search results worse for users.
They don't have the incentive to give you good SEO advice or help you get the most traffic. So yeah, I'm a little bit perplexed by this whole situation. I know hundreds of people who know what to do, and do it, and are very, very successful with it. There are also now hundreds of people who actually have AEO or GEO or AI search in their job title in addition to SEO. But I think change is always tough and people are slow to adapt.
Niklas Buschner (26:09.36)
Yeah, I could also imagine, in addition to what you just said, that in the good old SEO days, so to say, all the tools, or basically all the tools, somehow came up from within the industry. So from people that were SEO practitioners and then built the tools for themselves and then made them bigger. And now with AI search, we suddenly had venture capital interest in this industry. And then if you look at other
examples of this situation — if you look at Lab Coffee, for example, a totally different business, so a coffee business, quick, affordable coffee — there was also this big outrage about VC money coming into this business and destroying everything. So I could at least imagine that people have this inherent negative bias towards AI search because they feel like there are a lot of people that
basically just want to jump on the hype train and make quick money. Whereas on the other side, as you just said, there are documented, like well-documented and very transparent case studies already. I think I discussed it with someone from your team, with Noah. I think it's just a matter of having very transparent case studies and stories that are told about: this really works, this is really substantial, you can grow a business based on that, it's not hype,
and it's not just about visibility, et cetera. It's about people going to these tools, asking questions, and then actually inquiring at your business or purchasing something. But yeah, maybe to close the topic: do you also feel like this whole VC topic could be something that some people have this inner bias against?
Malte Landwehr (27:59.094)
I think the VC part is less of the problem. It's more the people without an SEO background who pretend that they know what to do. And both on the GEO tool side and on the agency and influencer side, there are many, many people — sometimes with a lot of venture capital, but sometimes also on their own — who now say, I'm no longer a crypto expert, I'm now an AI search expert. And they produce a lot of…
questionable insights. I'm not going to name names, but there's also a very well funded tool in the space that deleted some of its research repeatedly. Like, they published research, everybody cited it, and then they deleted it from their website. I don't know why they would do that if everything was correct. And of course we have people at conferences who have no SEO background now telling people how AI search works, where I often think they
over-rely on an oversimplified narrative, like, just do these three things, or, I ran this quick statistic and this is how it works. And then you have the people who think that mass-producing content is the solution and promote that, where everybody who has been in SEO for a while already knows it's going to go up and then it's going to go down. So yeah, there's a lot of bad advice in the space. And maybe that is the reason why people who don't have an easy time filtering between the good and the bad advice
find it very difficult to trust anything that they see in this space. So that could be one of the reasons why many people have this mindset, nobody knows what to do, and ignore the hundreds or maybe thousands of people by now who know very well what to do and have tremendous success with it.
Niklas Buschner (29:44.925)
This is why we give you direction in this format, here in this podcast. So if you're unsure about what to do, just always come back to this podcast and you will get the best of the best advice in the industry. I have a final question around Claude, because I found it interesting — maybe to the extent you can share it — because you described the complexity of tracking Claude with the required login. How do you try to solve it?
Because there has been a lot of discussion in the past about ChatGPT monitoring, for example, where some tools were still using the API and other tools, like for example Peec AI, have a way more sophisticated mechanism with proxy servers, et cetera. Also something that differentiates Peec AI from other competitors that just add "in Germany" to the prompt to track something in Germany. But leaving that aside,
so maybe, as I said, to the extent you can share: how do you think about solving this complexity with Claude?
Malte Landwehr (30:46.605)
I don't believe in running an army of accounts to do scraping. You can maybe do that for one data study, but we are building an analytics product here, with a promise that people get their data every single day, and some of them even multiple times per day. And for that you need redundancy and you need safety. So for us, for Claude, the only option right now is the API.
Niklas Buschner (31:16.423)
Fair answer. I think it's also something that creates trust if we have transparency about how very important players in the industry, as for example an analytics platform like Peec AI, handle stuff. Very interesting. Let's quickly shift over to this somehow old player, ChatGPT. Like, Claude is now the new kid on the block, the cool kid, and then ChatGPT is somehow the boomer. No, just kidding.
There has been a lot of rumour, or I would say discussion, and also some profound research around ChatGPT changing its way of handling the fan-out queries with, I think, either GPT 5.3 or 5.4. Can you give us a quick rundown of what happened there, or what supposedly happened?
Malte Landwehr (32:11.757)
Yeah, so two things happened. One is that GPT 5.3, which is the default for free users and non-logged-in users, is hiding the fan-out queries. In 5.1 and 5.2 we could see them; in 5.3 — in one version of the model, actually, only — they are gone. And then in 5.4, they are there again. And the 5.4 fan-out queries are different from the ones we've seen before.
They use site searches a lot more often. That was a very rare thing in the past, and now it's almost like the standard. And there are often much, much more fan-out queries than before, at least for paying logged-in users. I have actually not analyzed it too much for logged-in non-paying users. And yeah.
That's the change. And of course, the change with the site search brings in potential biases, because if I ask for the best TV, and — I will now give a random example — I don't know, Walmart is in there with a site search, they have much more influence than everybody else, right? Or I think it was even Samsung, who has obviously a very clear bias in the TV space
on who has the best TV. So if there's a samsung.com site search, even though Samsung wasn't mentioned in the original prompt, that is an issue. And to a small degree, this brand bias has happened in the past. So it sometimes happened that you had a prompt like best sneakers, and one of the fan-out queries would be best sneakers Adidas. That's of course a crazy advantage for Adidas, right? Because all of the results that now come in would be about Adidas.
It's a problem for Nike and anybody else. And with this, it was usually old established brands like Adidas and Nike having an advantage over On Running or Skechers, for example, because they were in this fan-out query — which suggests that the fan-out queries were generated by an LLM that did not perform a web search but just quickly generated them based on model knowledge.
Malte Landwehr (34:32.671)
so older established brands had an advantage. One thing I find very interesting about the fan-out query topic is that with site search, you know there are always people who say, chat GPT is not based on Google, don’t say it. They use Exa and Tavili. And maybe they use them for some things, but Exa doesn’t support site search as far as I know.
And I didn’t check it for Tavili how that thing works, but like this is another strong indication that Google is still being used. And I actually validated this also again yesterday when I trigger a lot of prompts. And now you have the hourly view in Google Search Console, right? You immediately see the spike in impressions if you target a specific domain with the prompts.
It's another validation that Google is the major source for grounding in ChatGPT. Yeah, so for me that is actually the most interesting part coming out of these fan-out queries.
Niklas Buschner (35:42.053)
And now, as always, you perfectly described what is happening. What do you think? Why is that? Why is there this brand bias, and why is the site search now suddenly being used more and more?
Malte Landwehr (35:54.592)
So I think the site search makes it faster, because I've had the theory for a very long time that ChatGPT often only uses the search snippet for grounding. They don't always retrieve the actual URL. I can't prove it, of course, but I believe something similar is happening here. For many queries, they get all the information they need from the search snippet in Google.
And with site search, they can focus this on specific domains where they believe the information they need lives. And for many prompts, you can make a reasonable guess that this is true, right? If you want to know about the USB ports of TVs, you could do a site search for Samsung, Hisense, Sony, LG, and a few others,
and try to guess the answer based on just the snippet attached to each URL. So I think it's about saving time, saving money, and working around people who block LLMs, because major news publishers are very actively blocking LLMs. Depending on which statistic you believe and which country, something like 25 to 50 percent of news publishers are blocking ChatGPT.
So getting around that by just doing a site search on their domain is a pretty good approach, I think.
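The mechanism described here, expanding one user prompt into a set of queries and restricting some of them to specific domains, can be sketched roughly as follows. This is a minimal illustration under assumptions: the `fan_out` helper and the domain list are hypothetical, not ChatGPT's actual pipeline.

```python
# Hypothetical sketch of a fan-out step that expands a user prompt into
# general and site-restricted search queries. The domain list and the
# query shapes are illustrative assumptions, not ChatGPT internals.

def fan_out(prompt: str, trusted_domains: list[str]) -> list[str]:
    """Expand one prompt into the original query plus site-restricted variants."""
    queries = [prompt]  # keep the original prompt as a general query
    for domain in trusted_domains:
        # A site: restriction focuses retrieval on domains believed to
        # hold the answer, and sidesteps pages that block LLM crawlers.
        queries.append(f"{prompt} site:{domain}")
    return queries

queries = fan_out("best TV USB ports", ["samsung.com", "lg.com", "sony.com"])
# → ["best TV USB ports", "best TV USB ports site:samsung.com", ...]
```

The brand-bias point falls out of this shape directly: whichever domains make it into `trusted_domains` get their results over-represented in the grounding set.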
Niklas Buschner (37:25.021)
So just to get this right: your hypothesis was that when ChatGPT performs the grounding process, it just looks at the search snippet, meaning the meta title and the meta description, or maybe other rich snippets added to it, or…
Malte Landwehr (37:40.238)
So I think they look at the whole rich snippet, which is the reason why certain structured data is good for LLMs. My hypothesis is not that they always do this, by the way; obviously they retrieve content. But my hypothesis is that, especially for domains that block them, they sometimes still cite that domain based just on the snippet. So yes, it's primarily the title and, if Google displays the meta description, the meta description. But in many cases, especially for the kind of fan-out queries ChatGPT is running,
it will not be the meta description, it will be an extract from the search result.
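This snippet-only grounding hypothesis can be illustrated with a toy sketch. The `SearchResult` shape and the character budget are assumptions made for this example; it mirrors the hypothesis being discussed, not a confirmed ChatGPT behavior.

```python
# Toy illustration of "snippet-only grounding": building answer context
# from search-result snippets (title plus displayed extract) without
# ever fetching the underlying pages. All field names are assumptions.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    title: str
    snippet: str  # the displayed extract, not always the meta description

def ground_from_snippets(results: list[SearchResult], max_chars: int = 500) -> str:
    """Concatenate snippets into a compact grounding context under a size budget."""
    context = ""
    for r in results:
        line = f"[{r.url}] {r.title}: {r.snippet}\n"
        if len(context) + len(line) > max_chars:
            break  # saving time and tokens is the whole point
        context += line
    return context

ctx = ground_from_snippets([
    SearchResult("samsung.com/tv", "Samsung Neo QLED", "4 USB ports, HDMI 2.1, ..."),
    SearchResult("lg.com/tv", "LG OLED evo", "3 USB ports, 4x HDMI 2.1, ..."),
])
```

Note that the cited URL survives in the context even though the page body was never retrieved, which is how a blocked domain could still end up cited.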
Niklas Buschner (38:16.519)
And what do you think about how ChatGPT basically comes up with its brand ranking, or the brand relevance, so to say? If it wants to look at Walmart or whatever, where does that come from? Does it come from foundational model knowledge, or is it also somehow connected to search and relevance in search? Because I can imagine a lot of people asking themselves, okay, so…
what do I take from it? Like, how can I somehow be the brand that is then part of ChatGPT's brand bias?
Malte Landwehr (38:54.893)
Yeah, I think we also need to differentiate between brand and website here. With site search, it's websites, right? Not brands specifically. Honestly, I don't know. And I also think there is limited value in overly focusing on this question, because it would only be interesting if it gave you a very clear prioritization of what to do.
Niklas Buschner (38:58.555)
Mhm. Of course.
Malte Landwehr (39:22.817)
But the reality is that whatever is happening is probably influenced by who is the most dominant brand, who is the best-perceived brand, who is the most authoritative brand, or which website is the most important one for typical searches. What exactly they are doing here, I have no idea. I have many hypotheses. They could have some machine learning model trained for it. They could use a very simple LLM that uses
the model knowledge, but they could also use clickstream data from search engines like Google. They could have learned that, for fashion, these three domains are the most clicked ones after a Google search, so we should look at them with a site search. There are so many ideas for how they could build this. And this is what's happening right now; we have no idea what will happen in GPT 5.5.
There was a time last year, in November, when fan-out queries became much, much longer. They looked more like a list of keywords you would use for vector retrieval. Then around Christmas, they became a little bit shorter again and have stayed stable. They went basically from six words to 16 words, down to 12. And now we have something new with much more site search, but things can change again. And
I would not encourage people to now massively change their strategy because of this. I would encourage people to think about why are they doing this? They want to quickly retrieve information. They want to cheaply retrieve information. They want trusted sources. So just provide these three things and you will be good with GPT-5, GPT-6, whatever. I have never been a fan of over-optimizing for the status quo.
because that means after every change you will run behind it. The only exception is if you are hyper-focused on aggressive SEO, probably with temporary results, probably you are a spammer, then it can make sense to try to exactly understand how it works right now to get the absolute maximum out of it. But for something that could change again in three months, I would not obsess over it.
Niklas Buschner (41:47.738)
Quick note from me: don't be a spammer. But a final question around the site search: do you have any data on how frequent it is, like what the share of site search is within the fan-out query set? Because I just want to prevent people from thinking that basically every fan-out query is now a site search. So how relevant is it in the final query set?
Malte Landwehr (42:19.031)
I don’t have reliable data that I trust on that one.
Niklas Buschner (42:23.869)
But you would say that probably it’s not every single fan-out query.
Malte Landwehr (42:29.805)
I know that it's not every single fan-out query. I have a couple of prompts running, like: what does this website say on this URL, or on their homepage, about this topic? Or what does this author on that website say? Or what does the article published on a certain date say? Even with those, there is not always a site search. And I mean, it would be obvious to run a site search for these prompts. So even if you try to force it, there's not always a site search.
Niklas Buschner (42:59.835)
Okay, now I saw a post from you. Maybe you wrote it on vacation, because there is a picture of you standing in front of a sunny, river-like waterside. Maybe it's Florida, I don't know. But can you quickly tell us where the picture is from?
Malte Landwehr (43:22.143)
Yeah, you mean the picture where it says "I love PR". It's in Puerto Rico, and the PR refers to Puerto Rico. I thought it was obvious, but okay, apparently it was not obvious for everyone.
Niklas Buschner (43:25.049)
Exactly right. Ahhhh. Awesome.
Niklas Buschner (43:34.269)
Yeah, maybe if I had put in 30 more seconds of thought… Honestly, I even questioned whether it might be an AI-generated picture that you just created for your post. But yeah, this is the level we have reached with AI. And honestly, I wouldn't have any negative feelings if it had been AI-generated. But back to the point. Your post starts with:
"I used to think technical SEO was the future. I no longer believe that. For LLM visibility, digital PR matters more than most marketers realize." Now, digital PR has been the talk of the town for quite some time, I would say. And I still see a lot of people struggling with: okay, where can I start? What even is the digital aspect of PR? Why not just PR? What is the differentiation? So I'd like to ask you:
What advice would you give people who see the relevance and understand that they have to do something about digital PR, but just have a hard time figuring out where to start?
Malte Landwehr (44:42.401)
Yeah, I mean, for your own website, I still believe tech SEO is the more important thing for many companies. But for AI visibility, you are very dependent on third-party sources, and technical SEO just doesn't get you onto third-party sources, right? You need to get there somehow, and digital PR is one of the best tools to do that. And it has to be digital PR because you want to get into the
digital version of the New York Times. I mean, getting an article in the print New York Times is also great for brand perception and everything, but you want to be in the digital media outlets, and you want to be in the ones that actually give access to LLMs. And yeah, digital PR is just a good way to do that. An important caveat: it's not enough to just pay a random PR agency to get you coverage, because depending on what kind of prompts you want to be known for, it could be that
press releases, interviews, and articles about you are not the content format that's being cited, right? If you want to be listed, if you want to be known for prompts like "what's the fastest car", you actually have to be in articles like "10 fastest cars", right? These typical listicles. So just pouring money into PR, just doing some digital PR, will not necessarily get you in there. I think people need to look for agencies that…
have a strong PR capability in that regard, but that also have this, I call it the SEO mindset, maybe we now need to call it the AEO or GEO mindset: that look at the sources, track prompts, make a strategy specifically for that, and don't just do any PR, which can still be positive, but will not have the desired effect of increasing AI search visibility. For that, you need a…
fusion of a PR agency and an AI search agency, or a PR agency and an SEO agency, to get good results. And there are many out there, many good ones, but I'm also very, very sure many people are now just pouring money into PR without getting the AI search results they hoped for.
Niklas Buschner (46:53.374)
So basically, I would approach it in a similar way as I approach potential onsite optimizations. I have a, hopefully meaningful, prompt set running. I analyze it. I look at the sources. I get an understanding of how the LLM, or the AI chatbot, comes up with its answer. And then I identify opportunities.
For example, third-party websites like a niche blog or a certain publisher in the tech space or the marketing space, whatever. Say you were thinking about how to improve the visibility of Peec AI, for example, around "best AI search analytics platform". Then you check if there is an angle or an opportunity, and there an experienced PR professional can help you, because they are trained and experienced in finding these angles on how you can
become part of this publication. Or would you say that if there's no option to become part of the very article that is already cited, it's still worthwhile thinking about whether I can get my own article, or an article that is favorable for me, into the same domain or the same outlet?
Malte Landwehr (48:07.157)
Yes, absolutely. I mean, often it's not possible to be included in an existing article, unless it's run by the commercial content team rather than the journalistic team of the newspaper. And yeah, I think I would primarily look at two things: which domains are being cited and which content formats are being cited, and then try to get into future content pieces in those formats.
So if articles are never cited, if it's always listicles, get into future listicles. If it's company profiles with in-depth reviews, you need one of those, et cetera. So also pay attention to that. Or if it's top-of-funnel prompts, maybe it's how-to guides that get cited, and then you must somehow get mentioned in future how-to guides.
So just look at that. And these content formats can really be very, very different depending on your prompt set, the prompt intent, the topic, et cetera, et cetera.
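The analysis suggested here, bucketing cited pages by content format before deciding what to pitch, could start with a crude title heuristic like the one below. The keyword rules are illustrative assumptions; a real audit would inspect the cited pages themselves.

```python
# Rough heuristic for bucketing cited pages by content format
# (listicle, how-to, review/profile). The keyword rules are
# illustrative assumptions, not a definitive taxonomy.

import re

def classify_format(title: str) -> str:
    """Guess a cited page's content format from its title."""
    t = title.lower()
    if re.search(r"\b\d+\b", t) or t.startswith(("best ", "top ")):
        return "listicle"        # e.g. "10 fastest cars"
    if t.startswith("how to") or "guide" in t:
        return "how-to"          # common for top-of-funnel prompts
    if "review" in t or "profile" in t:
        return "review/profile"  # in-depth single-brand coverage
    return "other"
```

For instance, `classify_format("10 Fastest Cars of 2025")` returns `"listicle"`, which is the bucket a car brand would need to target for "fastest car" prompts.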
Niklas Buschner (49:03.484)
Awesome. Malte, I think we already did a good job of optimizing time-to-value, or value-in-time, for people listening to this. Is there anything left that's top of mind for you? Something we didn't talk about but should have, or that you would like listeners and viewers to think about or take away from this episode?
Malte Landwehr (49:32.109)
I think we covered everything. I can quickly reveal that the "I Love PR" picture is heavily AI-edited. I actually took it, but there were people and a cruise ship in the background, so I changed it very significantly. So yeah, it was a mixture of real and AI. But I was actually there at the "I Love PR" sign and took a picture.
Niklas Buschner (49:40.99)
Niklas Buschner (49:54.846)
Okay.
Niklas Buschner (49:59.059)
But then you were smart enough to edit it with AI, then download it, and then take a screenshot of it, so we don't have this digital AI watermark that would have shown the little credential icon in the top left corner on LinkedIn. Or did you use a tool that doesn't have this?
Malte Landwehr (50:17.453)
I'm pretty sure there is an AI watermark hidden in it. I mean, probably not visible to the human eye, but I do assume Google would know that it's AI, because I used Gemini. And screenshots don't help, as far as I know, to get rid of Google's AI image watermark, if you look at how it's done.
Niklas Buschner (50:33.662)
Because Google knows everything.
Niklas Buschner (50:42.47)
Hmm, I think that's just the case for ChatGPT; it was the case for ChatGPT at least. But honestly, I don't know. Malte, this has been super insightful. Thanks for doing this. I'm already looking forward to the next one, and all the best to you. And definitely, guys and girls, if you listen to this and want to stay on the cutting edge of everything AI search: follow Malte on LinkedIn, follow Peec AI, follow Malte's team.
It’s always very insightful, highly recommended from my side. Thanks so much.
Malte Landwehr (51:23.895)
Thanks.
Niklas Buschner (51:24.882)
See you.
Malte Landwehr (51:26.286)
Bye bye.
Ready to make organic the channel you can count on?
Run a free audit on your domain or book a 30-minute call with the Radyant team — we'll dive into your category, share what we've seen work in similar situations, and outline a plan if there's a fit.