Google Cut Back AI Overviews in Search Even Before Its ‘Pizza Glue’ Fiasco

As anyone who so much as glanced at the internet in the past few weeks probably noticed, Google’s sweeping AI upgrade to its search engine had a rocky start. Within days of the company launching AI-generated answers to search queries called AI Overviews, the feature was widely mocked for producing wrong and sometimes bonkers answers, like recommendations to eat rocks or make pizza with glue.

New data from search engine optimization firm BrightEdge suggests that Google has significantly reduced how often it is showing people AI Overviews since the feature launched, and had in fact already substantially curbed the feature prior to the outpouring of criticism. The company has been tracking the appearance of Google’s AI answers on results for a list of tens of thousands of sample searches since the feature was first offered as a beta test last year.

When AI Overviews rolled out to logged-in US users in English after Google’s I/O conference on May 14, BrightEdge saw the AI-generated answers on just under 27 percent of queries it tracked. But their presence dropped precipitously a few days later, the week before screenshots of AI Overviews’ errors went viral online. By the end of last week, when Google published a blog post acknowledging its AI feature’s flubs, BrightEdge saw AI Overviews appearing on only 11 percent of search result pages. Their prevalence was essentially the same on Monday.

Jim Yu, BrightEdge’s founder and executive chairman, says the drop-off suggests that Google has decided to take an increasingly cautious approach to this rollout. “There’s obviously some risks they’re trying to tightly manage,” he says. But Yu adds that he’s generally optimistic about how Google is approaching AI Overviews, and sees these early problems as a “blip” rather than a sign of deeper trouble.

“We’re continuing to refine when and how we show AI Overviews so they’re as useful as possible, including a number of technical updates in the past week to improve response quality,” says Google spokesperson Ned Adriance. Google declined to share its internal statistics about how frequently AI Overviews appear in search, but Adriance says that the BrightEdge numbers don’t reflect what the company sees internally.

It’s unclear why Google may have decided to significantly reduce the appearance of AI Overviews shortly after it launched, but the company’s blog post last week acknowledged that having millions of people use the feature provided new data on its performance and errors. The company’s head of search, Liz Reid, said Google had made “more than a dozen technical improvements,” like limiting satirical content from cropping up in its results. Her post noted that these changes would trigger restrictions on when AI Overviews were offered but did not detail how exactly those restrictions would change the frequency with which AI results appeared.

BrightEdge began tracking AI Overviews using its list of sample queries after Google allowed users to opt in to a beta test of the feature late last year. The test queries spanned nine categories, including ecommerce, insurance, and education, and were designed to span common but also rarer searches. They were tested over and over, in some cases multiple times a day.

In December 2023, BrightEdge found that the summaries appeared on 84 percent of its searches but saw that figure drop over time. Google’s Adriance said the company did not trigger AI Overviews on 84 percent of searches but did not clarify how Google measures the feature’s prevalence internally. After Google opened up AI Overviews to all, BrightEdge continued tracking their appearance using a mixture of accounts that had previously enrolled in the beta test and others that had not, and saw no significant difference between what the two groups were shown.

Google declined to share exactly how much it changed how many AI Overviews it showed the general public versus people enrolled in the beta test, but Adriance said that people who had opted in to the test were shown AI Overviews on a wider range of queries.

BrightEdge’s data also sheds light on the topics where Google believes AI Overviews can be most helpful. AI answers appeared on the majority of health care keyword searches, most recently at a frequency of 63 percent. Sample queries included in BrightEdge’s data included “foot infection,” “bleeding bowel,” and “telehealth urgent care.” In comparison, queries about ecommerce return AI Overviews around 23 percent of the time, while searches about restaurants or travel very rarely trigger them.

Yu calls those results “surprising,” because health queries can be especially sensitive, and Google has made a concerted push in recent years to refine what it shows people who ask health questions.

Mark Traphagen, an executive at the search-engine-optimization platform seoClarity, has also taken special notice of how Google is handling health-care-related queries. To track how AI Overviews are rolling out, the company is monitoring the responses to a list of thousands of searches over time. For 26 popular health-related keywords, including “how to treat insomnia” and “symptoms of Lyme disease,” Google shows an AI response around 58 percent of the time.

Like Yu, Traphagen has been surprised by how often AI Overviews appear in response to this type of question. But he says the way Google’s feature sources its responses to health queries, often relying on trusted websites like the Mayo Clinic or the US Centers for Disease Control and Prevention, is encouraging. “They have really turned up the safeguards,” Traphagen says. “They’re all from well-known, credible sources.”

Google’s AI answers still sometimes misfire, though, including on health queries. Some experts say that Google’s claims to cite high-quality sources for health answers don’t stand up. “They frequently cite pages that don’t rank anywhere, including for health queries,” says search engine optimization consultant Lily Ray. Her experiments have documented how AI Overviews seem to struggle to authoritatively answer “softer” health care queries on topics like aging, building muscle, and skin care. The feature is much stronger on more straightforward medical queries, Ray says.

Last week, The New York Times reported concerns over the sources that Google’s algorithms used to answer some health queries, reporting that AI Overviews answered questions about the health benefits of chocolate by drawing on the websites of an Italian chocolate and gelato maker and a company that sells at-home “gut intelligence tests.”

When WIRED queried, “Is chocolate healthy?” on Monday morning, the AI Overview that appeared in response cited the same Italian chocolate company, as well as the website for a Minnesota-based chocolatier. But repeating the query later in the afternoon suggested Google had been making improvements: The chocolate companies had been removed from the citations list, which instead pointed to the websites of four reputable health care organizations, like Scripps Health. (The answer still notes that experts recommend eating a small amount of dark chocolate every day, which is, at best, a contestable summary of current medical advice.)

Despite AI Overviews’ rough beginnings, Yu of BrightEdge says that, long term, AI search is here to stay. “Big picture is that the AI moment in search is inevitable, and I think it’s going to get much better,” Yu says. That may be the case—but it’s an open question whether a new-and-improved AI Overviews will make a big enough leap to repair the reputational damage.

Kate Knibbs