At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to "Let Google do the Googling for you." That feature, called AI Overviews, has now launched. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.
Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a "squat plug" while exercising (you can guess what that last one is referring to).
While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker team: Google suggested I specifically use Elmer's glue in my pizza. Unfortunately, if you try to search for these answers now, you're likely to see the "an AI overview is not available for this search" disclaimer instead.
Why are Google's AI Overviews like that?
This isn't the first time Google's AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.
What's causing these issues? Well, for some answers, it seems like Google's AI can't take a joke. Specifically, the AI isn't capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If you've ever spent any time on Reddit, you can see what a bad combination that makes.
After some digging, users discovered the source of the AI's "glue in pizza" advice was an 11-year-old post from a Reddit user who goes by the name "fucksmith." Similarly, the use of "squat plugs" is an old joke on Reddit's exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here).
These are just a few examples of problems with AI Overviews, and another one, the AI's tendency to cite satirical articles from The Onion as gospel (no, geologists don't actually recommend eating one small rock per day), illustrates the problem particularly well: the internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that's just what AI Overviews is doing.
Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, you'll have to take the AI's word on their accuracy, which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As you'll notice in Beth's examples, the words "some say" are doing a lot of heavy lifting in these responses, much like in a bad middle school paper.
Is Google pulling back on AI Overviews?
When AI Overviews get something wrong, they're mostly worth a laugh and nothing more. But when it comes to recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.
Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with "Generative AI is experimental" (in noticeably smaller text), although it's unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.
There are plenty more examples of AI Overviews messing up circulating around the internet, from Air Bud being mistaken for a true story to Barack Obama being described as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.
Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like "how to make banana pudding" or "name the last three U.S. presidents," things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews, which struck me as suspicious given the email Google representative Meghann Farnsworth sent to The Verge that indicated the company is "taking swift action" to remove certain offending AI answers.
Google AI Overviews is broken in Search Labs
Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Search's new web filter or appending udm=14 to the end of the search URL have become.
Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and you'll be taken to the Search Labs page, where you'll see a prominent card advertising AI Overviews (if you don't see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be switched off, but since the toggle doesn't actually affect search at large, what we care about is what's underneath it.
Here, you'll find a demo for AI Overviews with a big bright "Try an example" button that will display a few low-stakes answers that show the feature in its best light. Below that button are three more "try" buttons, except two of them no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.
If even Google itself isn't confident in its hand-picked AI Overview examples, that's probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question.
Defenders might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage, but knowledge panels are not without controversy themselves.
Is AI Feeling Lucky?
On May 14, the same day AI Overviews went live, Google Search Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews to much less fanfare. The web filter disables both AI and knowledge panels, and it's at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.
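If you're wondering what the udm=14 hack actually involves, it's nothing more than an extra query parameter tacked onto an ordinary Google search URL. Here's a minimal, purely illustrative sketch in Python; the parameter is undocumented, and Google could change or drop it at any time:

from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # Build a Google search URL with the "Web" filter (udm=14) applied,
    # which skips AI Overviews and knowledge panels in favor of plain results.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

# Prints: https://www.google.com/search?q=how+to+make+banana+pudding&udm=14
print(web_only_search_url("how to make banana pudding"))

Set that URL pattern as a custom search engine in your browser (with %s standing in for the query) and the web filter effectively becomes your default view.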
It's all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the "I'm Feeling Lucky" button. The quirky feature worked like a prototype for AI Overviews and knowledge panels, trusting so deeply that the algorithm's first search result was correct that it would simply send users right to it, rather than letting them check the results themselves.
The opportunities for a search to be co-opted by malware or misinformation were just as prevalent then, but the real factor behind I'm Feeling Lucky's death was that nobody used it. Accounting for just 1% of searches, the button wasn't worth the millions of dollars in advertising revenue it was losing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use "I'm Feeling Lucky," but only on desktop, and only if you scroll down past your autocompleted search suggestions.)
It's unlikely AI Overviews will go the way of I'm Feeling Lucky any time soon; the company has spent a lot of money on AI, and "I'm Feeling Lucky" took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Google's most forgotten feature. That users aren't responding to these AI-generated options suggests that you don't really want Google to do the Googling for you.