
The false promise of AI-powered search results

31 May 2024

So it turns out that Google, the supremely powerful all-knowing search company with the admirable mission to organise the world's information, has some rather surprising advice to offer.

For example, did you know that running with scissors is good exercise?

Or that taking a bath with a toaster is a fun way to unwind?

Meanwhile, many of you may be concerned about not eating the recommended intake of at least one small rock per day.

So what do these results have in common, aside from each being nonsense - and quite dangerous advice at that?

All of these snippets are powered by Google's new feature - the "AI Overview", which is an admirable attempt at solving a real problem.

After all, it takes time and effort to click through multiple pages, scroll through information relating to your query, consider the context of the various search results, and draw your own conclusion about the answer.

Wouldn't it be easier if, when you searched for something on Google, it just told you the answer right away, and you didn't even have to click anything?

AI Overview in itself isn't a completely new step for Google - they have been experimenting with a similar product, "featured snippets", for years, where an excerpt from a web page relevant to your search is highlighted at the top of your search results.

The wider problem with featured snippets

Of course, there are many cases where there isn't a single true answer to a query.

Featured snippets make sense for simple questions that have clear answers like "What is the capital city of Canada?" - Google knows the only valid answer is Ottawa, so it’s obviously worth pulling that city name out nice and large at the top of the page.

But what about questions with more complex answers that can’t be easily summarised in a word or a short sentence? An attempt to oversimplify here can mask the contested answers to debated topics.

Other drawbacks of featured snippets have been extensively documented. Take the case where the question has a false premise - e.g. “who is the King of America”.

Rather than refuting the premise and saying “there is no King of America”, the featured snippet from this notable example in 2014 falsely claims that the answer is Barack Obama.

But setting aside the philosophical questions about the extent to which featured snippets should be applied, there are some very real concerns about how AI makes this problem worse even when there should be simple answers.

Where AI gets it wrong

What the AI Overview enables (and where it all starts to get messy) is a manufactured snippet: rather than extracting an excerpt from an existing webpage, Google uses large language models (LLMs) trained on a broad corpus of data to generate a brand-new answer to your specific query.

Interestingly enough, questions with a false premise seem to be exactly the sort of thing that confuses an LLM - similar to the long-standing challenges with featured snippets.

Google’s AI agents are so eager to answer our questions that they aren’t able to tell us that actually running with scissors is dangerous, you shouldn’t take a bath with a toaster, and there is no recommended daily intake of rocks because you shouldn’t be eating them at all.

Part of the problem is also where the training data comes from. When a snippet is taken from a search result, it’s relatively easy to find out the context from which the snippet originated.

But an AI overview generated by an LLM might be fusing together snippets from multiple articles, and also giving weight to nonsensical or comical results without understanding the humorous context.
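To put the provenance point another way: a featured snippet is a verbatim excerpt that carries its source along with it, while a generated overview is brand-new text with no single origin. Here's a crude Python sketch of that difference (the types and example values are entirely hypothetical, not Google's internals):

```python
from dataclasses import dataclass

@dataclass
class FeaturedSnippet:
    text: str        # verbatim excerpt from one page
    source_url: str  # one link the reader can follow to check the context

@dataclass
class AIOverview:
    text: str  # newly generated prose fused from many sources -
               # there is no single page to click through and verify

snippet = FeaturedSnippet(
    text="Ottawa is the capital city of Canada.",
    source_url="https://en.wikipedia.org/wiki/Ottawa",
)

# No source_url here: if the model absorbed a joke or a satirical
# article along the way, that origin is invisible to the reader.
overview = AIOverview(
    text="Geologists recommend eating at least one small rock per day.",
)
```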

Take the example about eating at least one small rock per day, which seems to have been generated from this satirical Onion article.

It’s safe to assume that the widely-reported content licensing deal between Reddit and Google is also largely responsible for polluting a lot of the AI Overview’s training data with user comments from Reddit.

For example, another viral AI Overview result was the suggestion to add glue to pizza to stop the cheese falling off. This result seems to have been generated from a comment posted by Reddit user u/fucksmith 11 years ago. Thank you, fucksmith, for your wisdom!

From AI nonsense to training data inbreeding

So AI results that have been trained on nonsense source material end up generating nonsense results. But that’s not the only problem with training data.

See, as more and more AI-generated content is set loose into the world, it creates a recursive loop where AI models are trained on more and more AI content. This reduces the weighting of human content in AI models, and over time it will make AI content more and more, well, obviously generated by AI.
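If you want to see this effect in miniature, here's a toy Python simulation (my own illustration, nothing to do with Google's actual pipeline): generation 0 is a diverse "human" corpus, and every later generation is trained only on samples of the previous generation's output.

```python
import random

random.seed(2024)

# Generation 0: a diverse "human" corpus of 500 distinct phrases.
corpus = [f"human_phrase_{i}" for i in range(500)]

for generation in range(1, 9):
    # Each new "model" learns only from a same-sized sample of the
    # previous model's output. Once a phrase drops out of one
    # generation, no later generation can ever recover it.
    corpus = random.choices(corpus, k=len(corpus))
    print(f"generation {generation}: {len(set(corpus))} distinct phrases survive")
```

Run it and the number of distinct phrases shrinks generation after generation, and can never recover - diversity, once lost, is gone for good.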

Ed Zitron puts it best in his recent piece Are We Watching The Internet Die?, as per the extract below:

The Habsburgs, European monarchs who ruled Austria, Germany and eventually the Holy Roman Empire, were notoriously protective about their bloodline, which after two centuries resulted in all sorts of genetic abnormalities.

Is the future of search going to be defined by inbred abnormalities and fake results?

The only solution (for now) is to turn back time

Hopefully Google will learn from these mistakes and take steps to exclude satire and poor-quality data from their training data, and also improve the ability of AI Overview and featured snippets to recognise false premises.

But this could take months or years. What can you do in the meantime if you don’t want health tips that could kill you in your search results?

The answer is udm=14 - a URL parameter you can append to your Google search URL (as &udm=14) to remove the AI results, as well as other recent features like knowledge panels and even ads.
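For the curious, here's a minimal sketch of what such a URL looks like (the helper function is just my own illustration):

```python
from urllib.parse import urlencode

def classic_google_search(query: str) -> str:
    # udm=14 switches Google to its plain "Web" results view:
    # no AI Overview, no knowledge panels, just links.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(classic_google_search("recommended daily intake of rocks"))
# https://www.google.com/search?q=recommended+daily+intake+of+rocks&udm=14
```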

Adding the parameter is essentially a time machine to an early 2000s version of Google, accessible by using a magical code. After all, how else would you get a time machine to work?

Even better, udm14.com is a minimal search front-end that forwards your queries to Google with this parameter already appended, making it even easier to get unadulterated results.

So that’s some positive news, but a workaround like udm=14 really shouldn’t need to exist.

Let’s hope that Google learn from this and put quality control ahead of chasing the hype train with their next release.