Gemini appears tuned to handle the typical questions people type in, while more traditional searches get confabulated nonsense.
I've observed that a great many people trust the AI Overview as an oracle. IMO, it's how 'normal' people interact with AI if they aren't direct LLM users. It's not even age-gated like trusting the news - trusting AI outputs seems to cross most demographics. We love our confident-based-on-nothing computer answers as a species, I think.
I think Google is in a particularly bad situation here.
For over a decade now, that spot on the search page had the "excerpt from a page" UI, which made a lot of sense. It cut out an extra click, and if you trusted the source site, and presumably Google's "Excerpt Extraction Technology" (whatever that was), what was left not to trust? It was a very trustworthy way to locate information.
Like if I search for a quick medical question and there's an excerpt from the Mayo Clinic, I trust the Mayo Clinic, so good enough for me. Sometimes I'd copy the excerpt from Google, go to the page, and ctrl-f it.
Google used to do a decent job of picking reputable sources, and the excerpts were indeed always found on the page in an unaltered context, so it was good enough to build trust. That system has degraded over the years in how well it picks reputable sources, most likely because it was SEO-gamed.
However, it has been replaced with the AI Overview. I'm not against AI, but AI is fundamentally different from "a relevant excerpt from a source you trust, with a verifiable source, in milliseconds".
How could you think this hard and be so far off? Google is in a hyper-strong position here, and I don't even like them.
They can refine grounded results and serve up increasingly well-reasoned answers as models improve cost-effectively. That then drives better vectors for ads.
Google did it because it's better for Google, yes. They no longer have to deal with people trying to game SEO. Now you would have to figure out how to influence Google's training process to hijack that box. So it's better for Google to move to AI Overview. What's your point here?
I say Google is in a bad position morally, or in terms of "doing the right thing", not that one would really expect that from a corporation per se. There is a distinction, you know.
Google introduced the box as an "Excerpt from a search result" box. They traditionally put a lot of care into their search quality; it showed, and it built trust with their users. Over the years, search quality dropped, whether from less attention from Google or because it's a fundamentally harder problem with far more motivated attackers. Yet even with the intrusion of bullshit websites into the "Excerpt from a search result" box, you could still decide you weren't going to trust medical advice from "mikeInTheDeep.biz". It wasn't ideal that they built trust and then let it slip, but being able to see a source with a quote keeps it useful when you trust the source.
With AI Overview, you either trust it all, trust none of it, or use it for confirmation bias.
My manager, a direct LLM user, uses the latest models to confirm his assumptions. If they aren't confirmed on the first try, he rephrases the question until he gets what he wants from them.
Most folks just want confirmation. They don't want to have their views/opinions changed. LLMs are good at trying to give folks what they're looking for.
I already went through the realization a while ago that you can't just mention something to people anymore and expect them to learn about it by searching the web, like you once could, because everything is unreliable, misleading SEO spam slop.
I shudder to think how much worse this is going to be with "AI Overview". Are we entering an era of people googling "how does a printer work" and (possibly) being told that it's built by a system of pulleys and ropes and just trusting it blindly?
Because that's the magnitude of error I've seen in dozens of searches I've made in the domains I'm interested in, and I think everyone has seen the screenshots of even more outlandish - or outright dangerous - answers.